View Full Version : RCTIME + AD592 + uFPU problem



rtowler
11-28-2006, 02:38 AM
I'm trying to calibrate an AD592 temp sensor following the example in the Applied Sensors (v1.4) book but have run into a frustrating problem. The circuit is built according to the schematic on pg. 66. The one difference is that I am using the uFPU floating-point co-processor to do the calculations. I don't *have* to use it, but it is on board due to another requirement of the project and I would like to use it. I also have a DS1620 on board. The Stamp is a BS2p24.

Everything is working fine, but I cannot calibrate this. I have created my ice bath, and I am reporting both the raw value from RCTIME and the calculated temperature. Following the text on pages 69-70, I take the raw value and multiply it by 273 (the ice point in kelvin) to get my calibration constant. I then alter my uFPU code (the values passed to FWRITEB) to insert the constant. I reprogram the Stamp and... the raw value reported by RCTIME changes. Hmm, OK. I recalculate the calibration constant, change the uFPU code, reprogram, and the value changes back!

The raw value reported by RCTIME flips back and forth between, say, 1100 and 1120, and is definitely related to the value I plug in as the calibration constant. I know this sounds insane. I can't imagine why this would happen, but nothing is changing with regard to the setup. I have also tried reprogramming the Stamp over and over with the same cal constant, thinking that maybe the act of programming the Stamp had something to do with it. Nope. But changing the value of the calibration constant *always* changes the value returned by RCTIME. When running, the values are reasonably stable, but the problem is that I can't get a good cal of the sensor.

I understand that I could add a 2nd calibration parameter (offset) to compensate for this but the real issue here is understanding what is going on so I can trust my code and the stamp. Is there any explanation for this behavior?
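[Editor's note: for anyone following along, the calibration arithmetic from pages 69-70 boils down to two operations. A minimal Python sketch of it (the function names are mine, just for illustration):]

```python
# Applied Sensors calibration for the AD592, as described above:
# at the ice point the sensor sits at 273 K (0 deg C), so
#   constant = raw_ice * 273        (raw RCTIME count at the ice point)
#   temp_C   = constant / raw - 273 (for any subsequent reading)

def calibration_constant(raw_ice):
    """Constant such that constant / raw yields temperature in kelvin."""
    return raw_ice * 273

def temperature_c(constant, raw):
    """Convert a raw RCTIME count to degrees Celsius."""
    return constant / raw - 273

const = calibration_constant(1110)  # e.g. an ice-bath reading of 1110
print(const)                        # 303030
print(temperature_c(const, 1110))   # 0.0 -- by construction, ice reads 0 deg C
```

Note that 1110 × 273 = 303,030, the constant used in the code below, so the raw ice-bath reading behind it was presumably around 1110.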


Since I am using the uFPU, the code is a bit different from what's found in the book:

' Get the raw temp value from AD592
LOW AD592
RCTIME AD592, 0, rawTemp
SEROUT SerPin, SerMode, ["Raw AD592 Val: ", DEC rawTemp, CR, LF]

' Divide calibration constant by rawTemp value (constant = 303030)
SHIFTOUT FpuOut, FpuClk, MSBFIRST, [um_RawTemp, FWRITEB, $48, $93, $F6, $C0, FSET, LOADWORD, rawTemp.HIGHBYTE, rawTemp.LOWBYTE, FDIV]

' Subtract 273 (add -273.0) to convert from kelvin to deg C
SHIFTOUT FpuOut, FpuClk, MSBFIRST, [um_RawTemp, FWRITEB, $C3, $88, $80, $00, FADD]

' display results
SEROUT SerPin, SerMode, ["AD592 Temp (deg c): "]
SHIFTOUT FpuOut, FpuClk, MSBFIRST, [um_RawTemp]
format = 81
GOSUB Print_FloatFormat
SEROUT SerPin, SerMode, [CR, LF]
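[Editor's note: the four hex bytes in each FWRITEB list above are just the big-endian IEEE-754 single-precision encoding of the constant. If you need to regenerate them for a new calibration constant, a quick Python sketch will do it (the helper below is my own, not part of the uFPU toolchain):]

```python
import struct

def fwriteb_bytes(value):
    """Encode a float as big-endian IEEE-754 single precision,
    formatted as PBASIC hex literals for a FWRITEB byte list."""
    return ", ".join("$%02X" % b for b in struct.pack(">f", value))

print(fwriteb_bytes(303030.0))  # $48, $93, $F6, $C0  (the constant above)
print(fwriteb_bytes(-273.0))    # $C3, $88, $80, $00  (kelvin-to-Celsius offset)
```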


Many thanks for any input.

rtowler
11-28-2006, 04:21 AM
I have found a workaround but I am still a bit puzzled about the behavior of RCTIME.

If I use a different method for loading the constant into the uFPU, my problem goes away. Changing from using FWRITEB to ATOF solves the whack-a-mole RCTIME issue:

problematic method:

' Divide calibration constant by rawTemp value (constant = 303030)
SHIFTOUT FpuOut, FpuClk, MSBFIRST, [um_RawTemp, FWRITEB, $48, $93, $F6, $C0, FSET, LOADWORD, rawTemp.HIGHBYTE, rawTemp.LOWBYTE, FDIV]

working method:

' Divide calibration constant by rawTemp value (constant = 303030)
SHIFTOUT FpuOut, FpuClk, MSBFIRST, [um_RawTemp, ATOF, "303030", 0, FSET, LOADWORD, rawTemp.HIGHBYTE, rawTemp.LOWBYTE, FDIV]

Now I can change the calibration constant to whatever I desire and the raw value returned from RCTIME is stable. I'm still wondering why RCTIME behaved the way it did though...

rtowler
11-28-2006, 06:06 AM
It's me again.

Sorry to be working this out on the board, but I have determined that my workaround isn't complete. The issue seems more basic: if I make changes to my BASIC Stamp code, the value reported by RCTIME for my AD592 changes, and it can change significantly, say 10-15% or more. So while my workaround still allows me to calibrate the sensor, I have to recalibrate any time I make a change to the code.

For example, if I take the two lines that return the rawTemp value, place them in a subroutine, and replace them in my main routine with a GOSUB to that subroutine, the value returned by RCTIME changes by over 10%. If I comment out these changes, the value returned by RCTIME returns to its previous value. I would like to understand this, as I am hoping to use the AD592 to calibrate another sensor. But if calibrating the AD592 is too problematic, I can't use it.


What are other people's experiences with RCTIME? What could account for this?

Tracy Allen
11-28-2006, 07:36 AM
Try putting the LOW AD592 immediately _after_ the RCTIME:




LOW AD592 ' initialize
' ....
' main routine
RCTIME AD592, 0, rawTemp
LOW AD592 '<--- after the RCtime



That ensures the capacitor is 100% discharged by the time the program gets around to the next RCTIME. The other way, the capacitor will continue to charge up, and it will charge up more if the computation takes a longer time. When the LOW comes only 200 microseconds before the RCTIME, there is evidently not enough time to discharge the capacitor completely, and the remaining charge is weakly proportional to the length of the computation.
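[Editor's note: for a back-of-envelope check, the textbook single-resistor discharge follows exp(-t/RC). A small Python sketch — idealized, since it ignores the AD592's current source and the pin's drive characteristics, and the component values are only placeholders:]

```python
import math

def residual_fraction(t_us, r_ohms, c_uf):
    """Fraction of the initial capacitor voltage remaining after t_us
    microseconds of ideal exponential discharge through r_ohms."""
    tau_us = r_ohms * c_uf          # ohms * microfarads -> microseconds
    return math.exp(-t_us / tau_us)

# Rule of thumb: about 5 time constants discharges to under 1%.
tau = 100 * 0.22                    # e.g. 100 ohm, 0.22 uF -> 22 us
print(residual_fraction(5 * tau, 100, 0.22))   # ~0.0067, i.e. under 1% left
```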

What capacitor/resistor combination are you using?

▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Tracy Allen
www.emesystems.com (http://www.emesystems.com)

rtowler
11-29-2006, 12:54 AM
Thanks for the reply, Tracy.

Your tip makes perfect sense (I'm still pretty green). It explains why my values change when the code in my main loop changes (execution time through the main loop changes), and it could also explain the drift I have seen. Overnight it drifted another 10%.

I'm using a 100 ohm resistor and a 0.22 µF temp-stable Centralab cap.

Off to try your suggestion...