ADC auto-calibration
ManAtWork
Posts: 2,178
in Propeller 2
I'm currently experimenting with the smartpin auto-calibration feature, trying to measure the phase currents of an AC motor as accurately as possible. I have a 180MHz sysclock and have programmed the ADCs for an 8192-clock period with SINC2 filtering mode. So I should get 14 bits of resolution at ~22ksps.
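For reference, the per-pin setup boils down to something like this (a minimal sketch, not my actual code; adc_mode is the mode word shown commented out in the state-table listing below, and the WXPIN value assumes %01_1101 encodes "SINC2 filtering, 2^13 = 8192 clock period" - verify against the Smart Pins documentation):

        wrpin   adc_mode,#44        ' ADC smartpin mode %11000, reading PinA at 1x gain
        wxpin   #%01_1101,#44       ' assumed: %01 = SINC2 filtering, %1101 = 2^13-clock period
        dirh    #44                 ' enable the smartpin
        ...
        rdpin   sample,#44          ' read the latest filtered conversion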
I have three current sensors and 6 pins altogether. The strategy is to use a pin pair per sensor, with one pin measuring the actual input voltage while the other is used for self-calibration, measuring ground and VIO levels. The code looks quite ugly and is hardly readable because of the many ALTD/ALTS instructions needed to handle indexing and source/destination switching. I use a table-driven state machine to handle the switching.
DAT     ' adc calibration state machine table
        ' bit#0  -> rewind
        ' bit#1  -> Ain source odd
        ' bit#2  -> Avg source odd
        ' bit#3  -> 256 instead of 4 samples
        ' bit#4  -> do calibration calculations
        ' bit#5  -> avg = vio
        ' bits 16..15 -> mode0 VVV
        ' bits 18..17 -> mode1 VVV

adc_states
        long    %00_11<<15 + %000000    ' swap channels, switch U1 to GIO
        long    %00_11<<15 + %001100    ' sample U0, average U1 GIO
        long    %01_11<<15 + %010000    ' switch U1 to VIO, calib GIO
        long    %01_11<<15 + %001100    ' sample U0, average U1 VIO
        long    %11_11<<15 + %110000    ' switch U1 to In, calib VIO
        long    %11_00<<15 + %000010    ' swap channels, switch U0 to GIO
        long    %11_00<<15 + %001010    ' sample U1, average U0 GIO
        long    %11_01<<15 + %010010    ' switch U0 to VIO, calib GIO
        long    %11_01<<15 + %001010    ' sample U1, average U0 VIO
        long    %11_11<<15 + %110011    ' switch U0 to In, calib VIO, rewind

            ' %AAAA_BBBB_FFF_PPPVVV_OHHHLLL_TT_MMMMM_0
'adc_mode   long %0000_0000_000_100011_0000000_00_11000_0
'gio_mode   long %0000_0000_000_100000_0000000_00_11000_0
'vio_mode   long %0000_0000_000_100001_0000000_00_11000_0

{ %VVV = ADC config
  000: GIO,  1x    (~5 volt range, centred on VIO/2)
  001: VIO,  1x     "
  010: PinB, 1x     "
  011: PinA, 1x     "
  100: PinA, 3.16x (~1.58 volt range, centred on VIO/2)
  101: PinA, 10x   (~0.5 volt range, centred on VIO/2)
  110: PinA, 31.6x (~0.158 volt range, centred on VIO/2)
  111: PinA, 100x  (~0.05 volt range, centred on VIO/2)
}

Every time I change the smartpin mode (signal source) I wait 4 samples before I actually use the results. Because each pin needs two calibration values (GND and VIO) plus a gap for filter settling, I need 5 states per pin, or 10 states altogether.
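The dispatch itself is just an indexed fetch from this table. Stripped down to the idiom (a sketch with made-up register names, not the real code):

        alts    stIndex,#adc_states     ' point the next instruction's S field at the table entry
        mov     state,0-0               ' fetch the current state word
        test    state,#%000001  wc      ' bit#0: rewind?
if_c    mov     stIndex,#0
if_nc   add     stIndex,#1
        test    state,#%001000  wz      ' bit#3: 256-sample averaging phase?
        ...

The real code repeats this ALTS/ALTD pattern for the Ain/Avg source selection and the mode-word lookups, which is what makes it so hard to read.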
Ground and VIO values are averaged over 256 samples. Offset and gain correction values are calculated and stored in another table (12 entries, gain+offset per pin).
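The accumulation per calibration level is nothing special; roughly this (a sketch with made-up names; assumes the smartpin raises IN for each new result, which RDPIN acknowledges):

        mov     avgSum,#0
        mov     count,#256
.acc    testp   #45 wc                  ' C=1 when the smartpin has a new sample
if_nc   jmp     #.acc
        rdpin   s,#45                   ' read the SINC2 result
        add     avgSum,s                ' 256 x 14-bit samples fit easily in 32 bits
        djnz    count,#.acc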
Gain/offset correction is done like this (simplified code, without ALTD/S):
        mov     x,adcRaw
        sub     x,offset        ' offset correction
        fges    x,#0
        mul     x,gain          ' gain correction
        shr     x,#13
        fle     x,##$FFFF
        sub     x,##$8000       ' 16 bits unsigned -> signed
        mov     result,x
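A worked example with round (made-up) numbers: if average(vio-gio) = 15000 counts, then gain = 2^29/15000 = 35791. A raw sample sitting exactly halfway between the rails gives x = 7500 after the offset subtraction, and 7500*35791 >> 13 = 32768, which the final subtraction maps to 0. So mid-scale lands at signed zero, as intended.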
The correction values are calculated like this:
        mov     x,avgSumG
        shr     x,#8            ' average = sum/256
        mov     offset,x
        mov     x,avgSumV
        sub     x,avgSumG
        qfrac   #1<<5,x         ' 2^29/average(vio-gio)
        getqx   gain

The constant #1<<5 comes from the multiplication of all included resolutions (ADC 14 bits, result 15 bits, average sum 8 bits = 37 bits) and the 32-bit shift of QFRAC vs. QDIV.
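Spelled out: gain = (2^5 << 32)/(avgSumV-avgSumG) = 2^37/(256*average(vio-gio)) = 2^29/average(vio-gio), and the correction above then computes result = (adcRaw-offset)*2^29/average(vio-gio) >> 13 = (adcRaw-offset)*2^16/average(vio-gio). An input swinging from GIO to VIO therefore covers exactly the full unsigned 16-bit range before the $8000 re-centering.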
Comments
However, I've noticed that I don't get the same results when I short the inputs to GND or VIO with a jumper wire. The ADC readings are always around 1% higher than the internally switched GIO and VIO readings. I wonder whether there is some bias current or offset voltage in my circuit, or whether the switches internal to the P2 are leaking a bit. It could also be that my code is still buggy and suffers from "cross-talk" between internal variables (the 4-sample settling time doesn't work, or the averages are not cleared correctly...).
If I look at the corrected samples, I get fairly stable values for pin pair V0/V1, which has relatively well matched gains, and quite a lot of ripple for the other pin pairs.
Has anybody else tried this out, and does anyone know what accuracy can be expected? If there is little temperature drift, maybe it would be better to do only a static offset adjustment once at startup and drop the continuous "chopper stabilisation".
On most microprocessors ADC inputs are expensive, but not on the P2. So I decided to measure all three phase currents, at least in the prototype. I can always ignore the third one to check whether it makes a difference. If not, I could leave out the third sensor in the production version later (and save the current sensor, which IS expensive).
We can see that the input switching works as expected and the ADC is allowed 4 sample times to settle to the new source. The value stabilizes after the 3rd sample. The calculated offset and gain values also look plausible.
However, there is a small step in the result when switching over from pin U0 to U1. This level change remains even after several calibration cycles and shows up as ripple in the signal. It might be caused by a changing bias current fed into the source impedance (~2k ohm) of the sensor. I'll check whether it is reduced when I connect a source with nearly zero impedance.
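For scale: on the 1x range (~5 volt span, per the %VVV table above), a 0.1% step is roughly 5mV, and 5mV across the ~2k ohm source impedance corresponds to a bias-current change of only ~2.5µA. So even a tiny source-dependent leakage would be enough to explain it. (Back-of-the-envelope estimate only.)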
In terms of accuracy this is fully tolerable. However, a square wave of 40mA (at 20A full scale) and ~250Hz (88-sample state machine cycle) would produce an audible tone and should be avoided.
The amplitude is around 50mA peak-peak. The state machine that schedules the automatic ADC calibration changes the input pin approx. every 2ms. The active input pin samples the sensor signal while the other is used in the background to measure GND and VIO levels, so in theory offset and gain errors of the ADC circuits inside the chip should be compensated. However, it seems that the resistances of the analogue switches are not perfectly matched, or there are bias or leakage currents that depend on the selected signal source (if they were constant, they would be cancelled out by the auto-calibration).
I have 3 current sensors and 3 ADC pin pairs, which all behave nearly the same. There's a remarkably different offset and gain error in the raw data for each pin, but the ripple in the compensated data is 40..60mA peak-peak for all 3 pairs.
Which pin numbers are being used for the pairs? Are the 3 pairs contiguous, or could there be something digital in between the pairs? (I ask because we saw a small offset when doing DAC testing, due to the current drawn by the pin pad block itself.)
Have you checked the VIO voltage at a pin to see whether it also has any of that 580 Hz present?
When I did testing on each ADC channel I did see quite a different offset per pin; however, I did not see the difference in gain error that you report.
Is this the interfering square wave in your latest report, i.e. when switching between two pins?
From the debug hexdumps I can see that there is a step in the signal at exactly the moment the input pin is switched. The U sensor signal is connected to P44 to P47 (package pins 71 to 75), V to P48 to P51 and W to P52 to P55. Each sensor is connected to 4 pins altogether: I use two pins for the ADC and two pins for fast overcurrent detection (DAC compare mode, not implemented yet).
The VIO for the ADC pins is fed from an extra linear voltage regulator connected to pins 73, 79, 85 and 91 to minimize digital noise. Schematic and layout can be viewed in this thread.
The RevA/B crosstalk issue shouldn't matter in this case when both pins of a pair are connected to the same signal. I'll check if I can measure any traces of the square wave at the physical pins with a real scope.
And again, I'm not complaining about a lack of precision. We are talking about an error of 40mA at a full input scale of 40A (+/-20A). That's 0.1%. Most other microprocessors are limited to 12-bit ADC resolution and have much worse gain and offset error specs. It's just an inconvenience because it causes audible ringing as soon as the control loop is closed. So I have to "iron it out" somehow.
1) doing no GIO/VIO calibration at all
This would mean I could only implement a static offset compensation. I can measure the ADC input before enabling the power stage, so I know the motor current is zero at that point. Once the motor runs there would be no chance to measure the offset again, so this couldn't compensate temperature drift. Also, I couldn't do any gain error compensation. Matched gain errors of all 3 channels wouldn't matter at all; the velocity control loop cancels out all inaccuracies of the current control loop as long as they are below 20 or 30%. But unbalanced gain errors between winding currents cause cogging.
2) time multiplexed GIO/VIO calibration with only one pin per sensor
As the sigma-delta ADC requires 3 samples to settle after an input change, this would require significantly long time slots during which I can't sample the actual signal. I'd need 6-times oversampling and would lose at least 2 bits of resolution.
3) using external ADCs with better precision
This not only costs money and board space; I'd also like to do a performance benchmark to see what is possible with the P2.
You'll probably lose some accuracy compared with your current solution, but you get to eliminate those teeth.
PS: It is just wild guessing from this end, btw. I've not done much with the ADCs.
For the moment I'd rather try sticking to the flip-pin method and doing a static delta-offset compensation, i.e. measuring the amplitude of the "ghost" square wave once at startup and synchronously subtracting it from the signal. This has much more complexity than your simple approach, but the worst case is not much worse than the 0.1% error we have now.
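The synchronous subtraction itself would be cheap, something like this per output sample (a sketch; deltaTab is a hypothetical table holding one pre-measured step value per entry of the adc_states table):

        alts    stIndex,#deltaTab       ' same index as the calibration state machine
        mov     delta,0-0               ' pre-measured "ghost" amplitude for this state
        sub     result,delta            ' subtract it synchronously from the output

The complexity is in measuring deltaTab once at startup, not in applying it.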
Uncalibrated error is really high, something like 10% of full scale. If the drift of adjacent pins differed by only 10%, that would still mean 1% error, which is ten times the current error with calibration.
Yes I'm sure that the RevB crosstalk problem has nothing to do with this phenomenon. The waveform I posted was recorded using P44/P45 (pins 71/72).
But I remember you said that something about the ADC input multiplexers is on your to-do list for the next silicon revision. I don't remember exactly what it was, and I can't find the post at the moment. It had something to do with placing the muxes before or after some resistors to improve accuracy.
Never mind... But the question is: could the GIO/VIO modes have enough tolerance from pin to pin to explain the staircase effect I'm seeing? Or, in other words, what accuracy (gain and offset error, pin-to-pin matching) can I expect when using auto-calibration? Is a 0.1% tolerance inside the usual design margin?
I'm not sure if that applies in this case, but it would explain staircase steps that are correlated with calibrations.
I can show some hexdumps that show the actual samples of each pin and how my code uses them to calculate the calibration values for offset and gain. I could even write a demonstration program that runs on an EVAL board without my special hardware if necessary.
Again, could you please answer my question: Do you think it is possible that the 0.1% steps could be explained by some effect inside the P2 chip or do you believe my code is faulty?
https://forums.parallax.com/discussion/169602/characterizing-p2-eval-analog-performance
ManAtWork, I just wanted to check your maths on the bits vs. sampling rate: if you're at 180 MHz and 22ksps, that'd be 13 bits (a count of 8192) rather than 14 bits. Or are you claiming 14 bits because of using paired pins?
There is also an ADC Noise thread where we looked at how GIO and VIO calibration values vary with respect to temperature (self-heating induced by clock rate), pin number and board number. I compared my board with OzProp's to get some idea of the variation from chip to chip (at least for a sample size of 2). Evanh plotted the results nicely:
https://forums.parallax.com/discussion/download/124010/20-300%20Mhz%20%2859%20pins%29A.png
I don't currently have an explanation for why you'd see a ~120Hz square wave like you are, but I would suggest trying a simpler (non-SINC) approach to recording the data, to see whether the interference is still present. E.g. let P44 and P45 keep recording your SINC data acquisition as you have it now, but put a simpler 14-bit VIO - Vinput - GIO - Vinput measuring sequence on P46 and see how closely it corresponds with your state machine + SINC approach. I'd also try connecting a plain old battery to rule out "everything else" (I know you've already connected the 1v8 rail, but that could also be contributing to the square wave).
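Something along these lines for P46, reusing the commented-out mode words from your listing (a sketch, untested; settle_and_read is a placeholder that waits a few sample periods and then does an RDPIN):

        wrpin   vio_mode,#46        ' switch P46's ADC to internal VIO
        call    #settle_and_read
        wrpin   adc_mode,#46        ' back to PinA (the sensor input)
        call    #settle_and_read
        wrpin   gio_mode,#46        ' internal GIO
        call    #settle_and_read
        wrpin   adc_mode,#46
        call    #settle_and_read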
Soon we will have a lot more chips and can do some chip-to-chip comparisons across the batch
We'll find this gremlin...
I just recalled that I've already done that. Please see my post from Feb 26th. (XLS attached)
Explanation of data columns:
state = bit pattern encoding for state machine
raw U0 = raw data from ADC pin P44, 14 bit SINC2 filtering
raw U1 = raw data from ADC pin P45
avg = sum for averaging of calibration value
offsU0 = offset for P44
gainU0 = gain for P44
offsU1 = offset for P45
gainU1 = gain for P45
result = calibrated output data
Motor current is 0.0A
offset in output is caused by offset voltage of current sensor
Is each row of the spreadsheet a fixed time step, e.g. 1 ADC sample?