ADC auto-calibration - Page 2 — Parallax Forums

ADC auto-calibration


Comments

  • Exactly, each row is one loop pass of the software. On every pass each input pin is sampled and, depending on the bit pattern of the state-machine state, additional actions are invoked, such as updating the gain and offset values from the average just taken.

    The P2 runs at 180 MHz, so 14-bit SINC2 mode takes 8192 cycles. Each loop pass or row therefore takes ~45.5µs, which is roughly a 22kHz sample rate. A full cycle of the state machine takes 88 samples or ~4ms.
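
    For reference, the arithmetic as a tiny C snippet (just a restatement of the numbers above; the 180 MHz clock, the 8192-cycle SINC2 window and the 88-row state machine come from this post):

    #include <stdio.h>

    int main(void)
    {
        const double sysclk_hz      = 180e6;   /* P2 system clock                */
        const double sinc2_cycles   = 8192.0;  /* 14-bit SINC2 conversion window */
        const int    rows_per_cycle = 88;      /* length of the state machine    */

        double t_row  = sinc2_cycles / sysclk_hz;   /* ~45.5 us per loop pass    */
        double f_samp = 1.0 / t_row;                /* ~22 kHz sample rate       */
        double t_sm   = rows_per_cycle * t_row;     /* ~4 ms full cycle          */

        printf("row: %.1f us, rate: %.1f kHz, state machine: %.2f ms\n",
               t_row * 1e6, f_samp / 1e3, t_sm * 1e3);
        return 0;
    }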
  • jmg Posts: 15,140
    ManAtWork wrote: »
    ...
    However, there is a small step in the result when switching over from pin U0 to U1. This level change remains there even after some cycles of calibration and shows up as ripple in the signal.
    ..
    .... But the question is, could it be possible that the GIO/VIO modes have enough tolerance from pin to pin that it explains the staircase-effect I'm seeing? Or in other words, what accuracy (gain and offset error, pin-to-pin matching) can I expect when using auto-calibration? Is that 0.1% tolerance inside the usual design margin?
    I'm not following the setup entirely here, but I think you are comparing two pins, both self calibrated, and they match ~0.1%
    Keeping in mind that the calibration is done at GND and VIO and not at a mid-point, any banana effect in each ADC will show up as a variation, so I'd say 0.1% sounds quite good as a matching level.
  • evanh Posts: 15,126
    edited 2020-05-08 04:34
    What I said, and meant, is to do an initial span calibration of VIO/GIO for the signal pin. Then, during unbroken data sampling, track thermal change with the paired pin. That's why it is fine to use heavy low-pass filtering on the tracking. Power supply changes will also be trackable this way. High frequency interference, not so much.
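
    Roughly what I mean, as an illustrative sketch in C (not real P2 code; the filter constant and the helper names are just placeholders):

    /* The signal pin gets one full GIO/VIO span calibration at startup.
       The paired pin keeps sampling GIO/VIO; only the change (delta) of
       its readings relative to startup is applied, heavily low-pass
       filtered, to the signal pin's stored calibration points.          */

    typedef struct {
        double sig_gio0, sig_vio0;    /* signal pin: startup calibration      */
        double pair_gio0, pair_vio0;  /* paired pin: startup reference        */
        double d_gio, d_vio;          /* filtered drift seen on the pair      */
    } cal_t;

    #define ALPHA 0.001               /* heavy low-pass: slow thermal tracking */

    void cal_init(cal_t *c, double sg, double sv, double pg, double pv)
    {
        c->sig_gio0  = sg;  c->sig_vio0  = sv;
        c->pair_gio0 = pg;  c->pair_vio0 = pv;
        c->d_gio = c->d_vio = 0.0;
    }

    /* call whenever the paired pin delivers fresh GIO/VIO readings */
    void cal_track(cal_t *c, double pair_gio, double pair_vio)
    {
        c->d_gio += ALPHA * ((pair_gio - c->pair_gio0) - c->d_gio);
        c->d_vio += ALPHA * ((pair_vio - c->pair_vio0) - c->d_vio);
    }

    /* convert a raw signal-pin sample to 0..1 of full scale */
    double cal_apply(const cal_t *c, double raw)
    {
        double gio = c->sig_gio0 + c->d_gio;
        double vio = c->sig_vio0 + c->d_vio;
        return (raw - gio) / (vio - gio);
    }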

  • cgracey Posts: 14,133
    ManAtWork wrote: »
    Chip, of course, each pin is calibrated separately (see posts #1 and #2). Using the VIO/GIO samples to calibrate a different pin was a suggestion of Evanh (in this post). I don't think that's a good idea either.

    I can show some hexdumps that show the actual samples of each pin and how my code uses them to calculate the calibration values for offset and gain. I could even write a demonstration program that runs on an EVAL board without my special hardware if necessary.

    Again, could you please answer my question: Do you think it is possible that the 0.1% steps could be explained by some effect inside the P2 chip or do you believe my code is faulty?

    ManAtWork, sorry, I was travelling all day and didn't have a good chance to answer. I don't know where your 0.1% steps are coming from. I don't know what inside the ADC circuit could be causing that. My suspicion would be that it is in the software controlling things.
  • cgracey Posts: 14,133
    ManAtWork, perhaps there could be some slight linearity mismatch between two adjacent pins, to result in those steps you are seeing.
  • cgracey Posts: 14,133
    ManAtWork, what if you always used an average, or sum, of those two pins' readings? That would get rid of the steps.
  • evanh Posts: 15,126
    edited 2020-05-08 06:28
    Chip,
    He's currently alternating the pin input sampling to get a faster sample rate while also recalibrating both pins all the time. If he went for mixing the two sources, it would mean running the pin input sampling in parallel instead of alternating. That means the VIO/GIO recalibration cycles of both ADCs would also run in parallel with each other and would therefore reduce the pin sample rate - a threefold decrease, I presume.

  • cgracey wrote: »
    I don't know where your 0.1% steps are coming from. I don't know what inside the ADC circuit could be causing that. My suspicion would be that it is in the software controlling things.
    ...
    ManAtWork, perhaps there could be some slight linearity mismatch between two adjacent pins, to result in those steps you are seeing.

    Thanks, and my apologies for being impatient. I'm just a bit lost because there is no official data sheet with specs about ADC accuracy. So I don't know if this is normal behaviour or if I do something wrong. I don't want to waste time searching for errors that aren't there.
    ManAtWork, what if you always used an average, or sum, of those two pins' readings? That would get rid of the steps.
    The current sensor signal is in the feedback path of the current PID control loop. Using an average (low pass filter) there would add phase delay which is bad for loop stability. The servo has three nested control loops: current, velocity and position. The current loop is the innermost so it has to be fast.

    I currently sample at 22kHz and use the samples directly for the PID feedback.

    An alternative would be sampling considerably faster, say 6 times faster. This would allow switching the input source of one pin over to GIO or VIO, including settling time (3 samples).

    Switching pattern:

    Pin1    Pin2
    ---------------------
    control loop pass 1:
    GIO     GIO     < switch over
    GIO     GIO     settle...
    GIO     GIO     < take valid sample, store for calibration update
    Sensor  Sensor  < switch over
    Sensor  Sensor  settle...
    Sensor  Sensor  < take valid samples, calculate average

    control loop pass 2:
    VIO     VIO     < switch over
    VIO     VIO     settle...
    VIO     VIO     < take valid sample, store for calibration update
    Sensor  Sensor  < switch over
    Sensor  Sensor  settle...
    Sensor  Sensor  < take valid sample, calculate average

    Having to throw away 2 (invalid) out of 3 samples means losing some resolution, even if I now get some back by averaging the two pins.
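
    To illustrate, a minimal sketch in C of one control-loop pass with this pattern (set_adc_source() and read_adc() are hypothetical placeholders, not real smart-pin calls; each pin would run this schedule and the two sensor samples would then be averaged):

    enum source { SRC_GIO, SRC_VIO, SRC_SENSOR };

    extern void set_adc_source(enum source src);   /* placeholder */
    extern int  read_adc(void);                    /* placeholder */

    /* cal_src alternates between SRC_GIO and SRC_VIO on successive passes */
    void control_loop_pass(enum source cal_src,
                           int *cal_sample, int *sensor_sample)
    {
        int s = 0;

        set_adc_source(cal_src);
        for (int i = 0; i < 3; i++)
            s = read_adc();       /* the first two reads only settle, discard */
        *cal_sample = s;          /* store for the calibration update         */

        set_adc_source(SRC_SENSOR);
        for (int i = 0; i < 3; i++)
            s = read_adc();       /* again, two settling samples are wasted   */
        *sensor_sample = s;       /* the one sample that feeds the PID loop   */
    }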
  • jmg wrote: »
    I'm not following the setup entirely here, but I think you are comparing two pins, both self calibrated, and they match ~0.1%
    Yes.
    Keeping in mind that the calibrate is GND and VIO and not a mid-point cal, any banana effect in each ADC will be a variation, so I'd say 0.1% is sounding quite good as a matching level.
    Good point. I'll check that. I can repeat the test I did with the Excel sheet with three different input voltages. GND, mid-point and VIO. If only the midpoint data shows steps then it's an unmatched non-linearity. If I see the same square wave in all three cases it must be a pin-specific offset (unmatched bias current or something similar).
  • evanh wrote: »
    What I said, and meant, is to do an initial span calibration of VIO/GIO for the signal pin. Then, during unbroken data sampling, track thermal change with the paired pin. That's why it is fine to use heavy low-pass filtering on the tracking.
    Do I understand this correctly? You mean I should calibrate the signal pin with its own GIO/VIO once at startup. Then I only sample the input signal during normal operation with this pin. The other pin is used only to track changes to the calibration. So you suggest sampling GIO/VIO with the second pin and using that data to re-calibrate the signal pin (low-pass filtered).

    I have some doubts that this works. It's heavily based on the assumption that the temperature coefficients of the threshold voltages of the input FETs of both pins are very well matched.
    Power supply changes will also be trackable this way. High frequency interference, not so much.
    Power supply changes are much less critical. The current sensor output is proportional to the supply voltage, and so is the input scaling of the ADCs. HF interference is also no problem. Interference from capacitive coupling of PCB traces is well above the input bandwidth. Magnetic coupling from the motor current and power supply ripple from PWM switching should cancel out because the sampling frequency and the PWM frequency are identical and phase locked. All ripple currents and voltages should occur at whole-number multiples (harmonics) of the PWM frequency, so due to the averaging of the sigma-delta ADC their sum over one sampling period is zero.
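
    A quick numerical check of that cancellation argument, assuming a plain (SINC1-style) average over exactly one phase-locked PWM period (illustrative only; the harmonic count and sample count are arbitrary):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double PI = 3.14159265358979323846;
        const int N = 8192;                    /* ADC samples per PWM period */

        for (int k = 1; k <= 3; k++) {         /* first few PWM harmonics    */
            double sum = 0.0;
            for (int n = 0; n < N; n++)
                sum += sin(2.0 * PI * k * n / N);
            printf("harmonic %d: average = %.3e\n", k, sum / N);   /* ~0     */
        }
        return 0;
    }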
  • evanh Posts: 15,126
    ManAtWork wrote: »
    Do I understand this correctly? You mean I should calibrate the signal pin with its own GIO/VIO once at startup. Then I only sample the input signal during normal operation with this pin. The other pin is used only to track changes to the calibration. So you suggest sampling GIO/VIO with the second pin and using that data to re-calibrate the signal pin (low-pass filtered).
    I'd call it a delta to the initial calibration rather than a recalibration, but yes, the idea I'm imagining is to treat the temperature drift effects of immediate neighbour pins as a good match for determining the correct delta for retaining calibration of the signal pin.

  • ManAtWork Posts: 2,049
    edited 2020-05-08 13:07
    OK, I repeated the test 9 times, once for each combination of pin pair and input level (GND, mid-point = sensor bias level, and VIO). The diagram shows the input voltage on the X axis and the error (difference between odd and even pin) on the Y axis.

    The "banana" of the V signal is likely to be an artifact. My gain and offset correction code cannot handle values where the ADC signal is below the GND calibration point; such values are saturated to -32768. Without that distortion the blue curve might also be quite linear. (Correction: it's the left point of U that is saturated, not V.)

    The conclusion is that there is very little non-linearity in the ADC transfer function. If non-linearity were the main cause of the odd/even difference, the error would be very small at the calibration points (min/max) and highest in the center (near X=0). This is not the case. Instead, the calibration itself seems to have an inherent error with a fairly random distribution. Both absolute (actual vs. nominal) and relative (pin-to-pin) errors can be positive or negative. There's a slight asymmetry (all three curves have a falling slope), but as only one P2 unit has been tested this is not statistically meaningful.
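
    For reference, a minimal sketch of the kind of two-point (GIO/VIO) correction with saturation described above (the scaling to a signed 16-bit result is an assumption for illustration, not the exact code used here):

    #include <stdint.h>

    int16_t adc_correct(int32_t raw, int32_t gio, int32_t vio)
    {
        /* map the gio..vio span to 0..65535, then shift to a signed range */
        int32_t y = (int32_t)(((int64_t)(raw - gio) * 65535) / (vio - gio)) - 32768;

        if (y < -32768) y = -32768;   /* readings below the GND cal point   */
        if (y >  32767) y =  32767;   /* (or above VIO) saturate            */
        return (int16_t)y;
    }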
  • cgracey Posts: 14,133
    Ah, this calibration mismatch would be due to the 540k resistors that go to the ADC from VIO, GIO, and the pin. If I can redesign this sometime, I would use a single resistor and have the VIO, GIO, and pin switches in front of the resistor.

    By the way, when I said you may be able to use averaging, I figured that as new samples arrive from each pin, they would replace the old ones in the equation, so they overlap, not destroying bandwidth the way waiting for two fresh samples would.
  • cgracey Posts: 14,133
    If you could possibly control the voltage on your ADC pins, so that you could collectively drive them to GIO and VIO, you could get rid of these internal resistor mismatches and get better calibration. I know that is probably not practical, especially on a continuous basis.
  • cgracey wrote: »
    Ah, this calibration mismatch would be due to the 540k resistors that go to the ADC from VIO, GIO, and the pin. If I can redesign this sometime, I would use a single resistor and have the VIO, GIO, and pin switches in front of the resistor.

    You mean there are three separate input resistors, one for the actual input pin and one for GIO and VIO each?
    That would explain the mismatch. Equally sized resistors are quite well matched if they are located on the same die, I think, but not better than 0.1% if not trimmed.

    I already had the suspicion that the DC resistance of the (imperfect) input mux causes offsets that are leakage and voltage dependent. But even if that resistance were something in the 100+ ohm range it wouldn't explain the error. If a leakage current caused a voltage drop across the resistance of the mux, the error should increase noticeably with temperature.

    The good news is that this is not the case. I've heated the P2 with a heatgun and the error stayed nearly the same (increased from 169 to 173, for example).
    By the way, when I said you may be able to use averaging, I figured that as new samples arrive from each pin, they would replace the old ones in the equation, so they overlap, not destroying bandwidth the way waiting for two fresh samples would.

    Err, sorry, I don't understand. Can you please explain this in shorter sentences? :blush:
    cgracey wrote: »
    If you could possibly control the voltage on your ADC pins, so that you could collectively drive them to GIO and VIO, you could get rid of these internal resistor mismatches and get better calibration. I know that is probably not practical, especially on a continuous basis.

    Hmm, external analogue muxes are expensive, at least anything that is better than the cheap 74HC40xx series. That would probably cost nearly as much and require more board space than external ADCs.

    But I'm quite optimistic that the problem can be solved by clever software. When high currents of 5A or more flow through the motor windings, an error of 50mA is negligible. I just have to keep everything quiet at standstill. So it would be sufficient to compensate the error near the center.

    I have to sleep on it...
  • cgracey Posts: 14,133
    edited 2020-05-08 16:46
    ManAtWork, yes, there are three 540k resistors that can feed the ADC input. They are switched on the side opposite the ADC input and are summed together at the ADC input. Only one is connected at a time. They route GIO, VIO, and the pin to the ADC.

    About averaging, I will elaborate. Say you have these readings coming in, in sequence:
    Sample
    Period	Pin A	Pin B
    ---------------------
    0	GIO	pin
    1	pin	GIO
    2	VIO	pin
    3	pin	VIO
    (repeat)
    

    You can compute new calibrated samples on each sample period using each pin's most recent GIO, VIO, and pin readings. By having the two out of phase, you get double the raw sample rate. I think you could compute a final sample on each period by summing your two calibrated samples. That should cancel out that step difference between the two.
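
    Something like this, as a rough sketch in C (illustrative only; raw_a/raw_b stand for whatever the two ADCs delivered in that period):

    typedef struct { double gio, vio, sig; } pin_t;    /* most recent readings */

    static double calibrated(const pin_t *p)
    {
        return (p->sig - p->gio) / (p->vio - p->gio);  /* 0..1 of full scale   */
    }

    /* the period counter modulo 4 selects what each ADC measured this period */
    double combine(pin_t *a, pin_t *b, int period, double raw_a, double raw_b)
    {
        switch (period & 3) {
        case 0: a->gio = raw_a; b->sig = raw_b; break;
        case 1: a->sig = raw_a; b->gio = raw_b; break;
        case 2: a->vio = raw_a; b->sig = raw_b; break;
        case 3: a->sig = raw_a; b->vio = raw_b; break;
        }
        /* sum (or average) the two most recent calibrated samples; this is
           what should cancel the step between the two pins                   */
        return calibrated(a) + calibrated(b);
    }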
  • Basically, good idea. But I see two possible problems:
    1) in SINC2 mode the sequence would rather be
    Period	Pin A	Pin B
    ---------------------
    0	GIO	pin (invalid)
    1	GIO	pin (invalid)
    2	GIO	pin (valid sample)
    3	pin	GIO (invalid)
    ...
    
    because the ADC needs two dummy samples to settle after switching the input source. This could be avoided by using SINC1 mode instead, but I think even then I would have to throw away one sample after switching because of the fade-in/fade-out described at the beginning of the "ADC sampling breakthrough" thread.

    2) the oversampling requires the sampling integration time to be shorter than one PWM period. This means perfect cancellation of HF interference is not possible. Say there's a positive spike in the first half and a negative spike in the second half. They cancel each other if the PWM period and the ADC integration time match. HF interference is by definition high-pass filtered, meaning there is no DC component.

    There is heavy switching noise from the high voltage PWM power stage. Switching a MOSFET on while the opposite reverse diode is still conducting can cause reverse recovery currents as high as 100A+ for several tens of nanoseconds.
  • A different idea would be to keep the current state machine with the 250Hz calibration cycle but to calculate a correction signal out of the two samples directly before and after the pin switchover.
    Period	Pin A	Pin B
    ---------------------
    43	pin	VIO
    44	GIO	pin
    ...
    87	VIO	pin
    88	pin	GIO
    ...
    
    The delta signal can be calculated as (PinA(43)+PinA(88)-PinB(44)-PinB(87)) / 4 and low-pass filtered over multiple cycles. If we then add this delta value to all samples of B and subtract it from all samples of A, the superimposed square wave should be eliminated. The delta calculation is low-pass filtered but the full bandwidth of the original signal is preserved.

    Of course, the filter is imperfect as it's a one-out-of-many statistical sieve and there is a small chance that parts of the actual current waveform pass through (similar to a resonance effect). To protect against a worst-case scenario we could limit the delta value to something around +/-0.5% of the full scale. That should cancel out the unwanted steps at low signal levels and keep the worst-case error harmless at large signal levels.
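
    As a minimal sketch in C (illustrative only; FULL_SCALE, the filter constant and the sample naming are assumptions):

    #define FULL_SCALE  65536.0
    #define DELTA_LIMIT (0.005 * FULL_SCALE)   /* clamp to +/-0.5% full scale */
    #define ALPHA       0.01                   /* low-pass over many cycles   */

    static double delta_f = 0.0;               /* filtered pin-to-pin delta   */

    /* a43, a88: Pin A samples directly before/after the switchover,
       b44, b87: the corresponding Pin B samples (periods as in the table)   */
    void update_delta(double a43, double a88, double b44, double b87)
    {
        double d = (a43 + a88 - b44 - b87) / 4.0;

        delta_f += ALPHA * (d - delta_f);                /* low-pass filter   */
        if (delta_f >  DELTA_LIMIT) delta_f =  DELTA_LIMIT;
        if (delta_f < -DELTA_LIMIT) delta_f = -DELTA_LIMIT;
    }

    /* add the delta to every Pin B sample and subtract it from every Pin A
       sample to remove the superimposed square wave                          */
    double correct_a(double s) { return s - delta_f; }
    double correct_b(double s) { return s + delta_f; }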
  • Can you calibrate much less often? Once every 10 seconds?
    How would that help? There is already visible drift inside the 4ms window I have now. I hope that gets better once the board has thermally settled after startup.

    10s is an awfully long time. I can't interrupt the ADC sampling. Switching to a different source takes at least 6 samples. Missing 6 samples would produce an audible click noise at best and a noticeable bump in the toolpath in the worst case (heavy load).

    But now that we know the reason for the pin-to-pin errors, namely mainly unmatched input resistances for the VIO/GIO paths, it's easier to find a workaround. Now I'd even think that Evanh's suggestion of using the calibration data of pin B to compensate the drift of pin A could possibly work. (Let's call it trans-pin calibration, TPC.)

    There are mainly two effects that influence ADC accuracy:
    1) FET gate threshold voltage depends on temperature and process variations. This results in a variable offset voltage.
    2) mismatched resistor values. Absolute resistor values have a large tolerance and also depend on temperature and process variations. But R-to-R ratios on the same chip are pretty stable. They cause gain errors.

    So whether TPC works or not depends on the question if a temperature change would have the same effect on the threshold voltage of all input transistors of different pins. Finding out requires some tests with multiple P2 chips to get statistically reliable data.
  • For now, I decided to do it like this:

    I add a static (meaning once at startup) analysis of the offset between the pins of a pair and add/subtract the delta value to all samples of pin A/B (see the sketch at the end of this post). That shifts the curves of the error diagram vertically so that they all cross the zero point, minimizing the error for low currents while keeping (most of) the error at high (positive and negative) currents. So it's not perfect, but it doesn't cost much.

    In case the result is still not acceptable, I'll try out both Evanh's TPC method and my... let's call it edge-filter method. The one producing the better results will win.
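
    As a minimal sketch of the startup offset measurement (illustrative C; read_pin_a()/read_pin_b() are hypothetical placeholders and N is an arbitrary averaging length):

    #define N 1024

    extern double read_pin_a(void);   /* placeholder: calibrated sample, pin A */
    extern double read_pin_b(void);   /* placeholder: calibrated sample, pin B */

    static double delta_ab;           /* half the measured pin-to-pin offset   */

    void measure_static_offset(void)
    {
        double sum = 0.0;
        for (int i = 0; i < N; i++)
            sum += read_pin_a() - read_pin_b();  /* same input on both pins    */
        delta_ab = sum / (2.0 * N);
    }

    /* shift both pins' samples towards each other so the error curves cross
       zero near the operating point (small currents)                          */
    double adjust_a(double s) { return s - delta_ab; }
    double adjust_b(double s) { return s + delta_ab; }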
  • Yeah, looks good! :smiley:

    The first try has only cut the error down to one half. But if I wait for ~1000 samples after booting it gets down to 1/10 or less, so it's lost in the noise. There seems to be a thermal gradient across the chip when the cogs and smart pins start up.
  • cgracey Posts: 14,133
    When I wrote the test program that runs on the tester at the foundry, I went around and around trying to figure out why things were inconsistent. Eventually, I realized that a mere 100 microsecond delay between powering up the ADC and taking measurements was sufficient for consistency.
  • evanh Posts: 15,126
    I guess the startup differences are an indication of the impact of temperature.

  • What kind of current sensors are you using?

    Some Hall-effect sensors such as the TMCS1108 operate in discrete-time mode. The output is sampled at ~250kHz steps. That's fine for average current measurement, but current-mode control is a problem due to the lack of a smooth slope in the output. I don't think it's likely this is the cause of the square-wave distortion, the frequency is too different. But there could be a slight chance of an odd aliasing effect that causes some samples to be taken when the MOSFET is on and others when it is off.

    Output impedance of the current sensor?
    If it's high, then the chopping action of the ADC mux could affect the pin voltage. It's probably more of an issue for voltage dividers. Current sensor outputs tend to sit near the midpoint, like the ADCs, so it shouldn't be as much of an issue.

    Perhaps it would work to have one pin do only signal samples while the other cycles between signal, power, and ground for calibration.

  • ManAtWork Posts: 2,049
    edited 2021-05-14 09:32

    I use the CQ-3302 current sensors from AsahiKASEI (schematic available here). They are made from an InAs semiconductor substrate and NOT from silicon, and therefore have superior performance (lower noise and drift). They are completely analogue and don't use any kind of sampling or digital data transmission.

    The superimposed square wave is surely not an aliasing or subsampling artefact. It is also there when the current is constant or even zero. The steps occur exactly when the state machine switches from one input pin to another and are gone if I use only one ADC pin.

    Perhaps it would work to have one pin do only signal samples while the other cycles between signal, power, and ground for calibration.

    We have discussed this somewhere else and came to the conclusion that it doesn't completely remove the ADC errors. Gain, offset and thermal coefficient are individual parameters and are more or less randomly distributed, that is, they do not correlate from one pin to the next. So using the calibration info of one pin to correct another doesn't work.

    Chip said that auto-calibration doesn't work as perfectly as it could in the current silicon. So all we can do is implement an observer for the remaining error (static/offline, only at boot time, or dynamic/online in the background) and compensate for it somehow.

    I hopefully will have some spare time in the near future to continue the project. I found out that doing faster samples (for example 256 clock SINC3 instead of 8192 clock SINC2) and post-filtering multiple samples with a software filter reduces the noise.
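
    As an illustrative sketch of such a post-filter in C (the boxcar length and the simple moving average are assumptions, not the exact filter used):

    #include <stdint.h>

    #define TAPS 32                          /* e.g. average 32 fast samples  */

    typedef struct {                         /* zero-initialise before use,   */
        int32_t buf[TAPS];                   /* e.g.  boxcar_t f = {0};       */
        int64_t sum;
        int     idx;
    } boxcar_t;

    /* feed each fast SINC3 result in; get the filtered value back, O(1) per
       sample thanks to the running sum                                        */
    int32_t boxcar_update(boxcar_t *f, int32_t x)
    {
        f->sum        += x - f->buf[f->idx];
        f->buf[f->idx] = x;
        f->idx         = (f->idx + 1) % TAPS;
        return (int32_t)(f->sum / TAPS);
    }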
