ADC idea for smart pins
cgracey
Imagine this mode:
- At the start of each measurement period, some 16-bit (negative) constant is loaded into a 16-bit accumulator.
- On each clock, another 16-bit constant is added into the accumulator if the ADC output bit is high.
- At the end of the measurement period, the accumulator is output for reading via RDPIN.
Now, if we could automate the production of those calibration constants, we could have a smart pin ADC mode that returns proper conversions that are scaled and zeroed.
Comments
The initial 16-bit constant gets loaded into the top 16 bits of the accumulator. Bottom 16 bits are cleared.
The 16-bit adder constant adds into the 32-bit accumulator.
The upper 16 bits of the accumulator are the result.
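The preload/conditional-add/upper-16 readout behaviour described above can be sketched in a few lines of Python. This is only a model of the proposal, and the offset and adder values in the comments are hypothetical calibration results (for a period where GIO would give 20 ones and VIO 320 ones), not real silicon numbers:

```python
def smartpin_adc_sample(bits, offset, adder):
    """Model of the proposed smart pin ADC mode.

    offset: 16-bit (negative) constant loaded into the top 16 bits of a
            32-bit accumulator at the start of the measurement period
            (bottom 16 bits cleared).
    adder:  16-bit constant added into the 32-bit accumulator on each
            clock where the ADC output bit is high.
    Returns the upper 16 bits of the accumulator.
    """
    acc = (offset & 0xFFFF) << 16            # preload top half, clear bottom
    for b in bits:                            # one ADC output bit per clock
        if b:
            acc = (acc + (adder & 0xFFFF)) & 0xFFFFFFFF
    return (acc >> 16) & 0xFFFF               # upper 16 bits are the sample

# Hypothetical calibration: GIO -> 20 ones, VIO -> 320 ones per period,
# so adder = round(65536*255/300) = 55706, offset = round(-20*255/300) = -17.
# Then 320 ones reads as 255, and 20 ones reads as 0.
```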
For an 8-bit conversion, how many clocks does it take to get a difference of 255 ones between the GIO and VIO calibration modes?
Then, we have a measurement period and a (negative) ones offset. No adders are needed, just counters with preloads. Very simple. The conversion time is not constant, although we can reasonably bound the worst case.
That's the tricky bit... The 'calibration constants' vary with process and temperature and voltage....
Temperature drift seems to be one practical limiting factor for the ADC, and it is not helped by the large thermal cycling seen in P2 operation...
You would probably also want to be able to set the sampling clock.
There are limits to what the on-chip analog can do, and I think the smart pins can cooperate well with external ADC blocks, like OpAmp + D-FF + Vref, for those needing higher ADC resolutions.
On the other hand, doing the computations with raw measurements and then scaling the final output can enhance accuracy.
It would be nice to have the option.
The advantage is that scale and zero constants are loaded once at the start, after that you just read the value upon each acquisition, without further math. So that is marginally nicer, but at the expense of hardware complexity.
What you're describing has some similarities with a Goertzel implementation, but you'd need +1 and 0 multipliers instead of +1 and -1, and you'd need the ability to preload the accumulation register. Then, you'd just load the LUT with a kind of 'dithered constant': say you need a scaling value of 200.25, you'd load up the LUT with
200,201,200,200,...
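Generating that dithered LUT sequence is just error accumulation (Bresenham style). A minimal sketch, with `dithered_constants` as an illustrative name:

```python
def dithered_constants(value, n):
    """Spread a fractional scale factor (e.g. 200.25) across n integer
    LUT entries so their running sum tracks value * count."""
    out = []
    acc = 0.0       # ideal running total
    emitted = 0     # integer total emitted so far
    for _ in range(n):
        acc += value
        entry = int(acc + 0.5) - emitted   # round ideal total, take the step
        out.append(entry)
        emitted += entry
    return out

# dithered_constants(200.25, 4) gives [200, 201, 200, 200],
# matching the example above; the four entries sum to 801 = 4 * 200.25.
```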
It's good that you think about these things, Chip.
No scaling needed, as explained above. At least, no higher math than increment.
Here's how it could work:
Pick a reasonable estimate for the clocks required (N=300), given a desired sample range of 0..255, then:
(1) Measure GIO for N clocks, record number of 1's into GIO_ONES.
(2) Measure Pin for N clocks, subtract GIO_ONES, output sample.
(3) Measure VIO for N clocks, record number of ones into VIO_ONES.
(4) Measure Pin for N clocks, subtract GIO_ONES, output sample.
(5) DIFF_ONES = VIO_ONES - GIO_ONES.
(6) If DIFF_ONES > 255 then N--, else if DIFF_ONES < 255 then N++
(7) goto (1)
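Steps (1)-(7) can be simulated in software before committing to silicon. Here is a rough sketch; the `ones()` density model and the GIO/VIO densities are assumptions standing in for a real bitstream, and the sketch only models the N-tracking part, not the interleaved pin samples:

```python
def track_n(gio_density, vio_density, n0=300, max_iter=2000):
    """Adjust the clock count N until the ones-count difference between
    VIO and GIO measurements over N clocks equals 255 (tracking ADC style)."""
    def ones(n, density):
        # crude model: a delta-sigma modulator at this input density
        # emits about n * density ones in n clocks
        return round(n * density)

    n = n0
    for _ in range(max_iter):
        diff = ones(n, vio_density) - ones(n, gio_density)
        if diff > 255:
            n -= 1
        elif diff < 255:
            n += 1
        else:
            return n     # locked: full-scale span is exactly 255 counts
    return n             # gave up; caller should check tracking

# With assumed densities of 0.1 (GIO) and 0.9 (VIO), N settles near
# 255 / 0.8 ~ 319 clocks, and a pin sample is then pin_ones - GIO_ONES.
```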
So, the calibration works like a tracking ADC and a sample is taken and reported between each alternate GIO and VIO calibration, allowing half the time for actual sample acquisition.
Because OzProp has the pin streaming working, we can record some real streams and run some simulations.
There are internal calibration modes where you can connect the ADC input to GIO (GND) or VIO.
There are smart pin modes which count time until so many highs or lows are counted.
Oh, the N doesn't bounce around if the diff is 255. It need only adjust N if above or below.
That all sounds useful. It would likely need some signal to show when it is correctly tracking, because it could take many readings before the 255-ones calibration is attained.
How would you select differing numbers of resolution bits? Does that need two registers (N and resolution)?
The point is to make a type of ADC conversion that is dirt simple, at the expense of flexibility. The C flag on RDPIN could tell you if it's tracking.
The sample rate would be constant for each mode. 8/10/12-bit modes would take something like 640/2560/10240 clocks. IN would go high on each sample report. So, you could exploit this as a timebase for a sampled system.
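Those clock counts translate directly into sample rates. A quick check, assuming an illustrative 160 MHz system clock (not a figure from the proposal):

```python
def sample_rate(sysclk_hz, clocks_per_sample):
    """Samples per second for a mode taking a fixed number of clocks."""
    return sysclk_hz / clocks_per_sample

# Assumed 160 MHz sysclk: 8/10/12-bit modes at 640/2560/10240 clocks
for bits, clocks in ((8, 640), (10, 2560), (12, 10240)):
    print(f"{bits}-bit: {clocks} clocks/sample -> "
          f"{sample_rate(160e6, clocks):,.0f} samples/s")
# 8-bit -> 250,000 sps, 10-bit -> 62,500 sps, 12-bit -> 15,625 sps
```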
That does sound easy to explain and document.
And these shorthand WRPIN instructions even work on spans of pins. So, with a single long instruction, you can fire up a whole range of ADC pins that are synchronized. Then, you can read each one via RDPIN.
Phil's been complaining that the chip is too complex, so stuff like this will make it simpler.
I've been programming smart pins for a while and I see there's a huge need for this. It keeps your head clear by keeping your code clean.
Sounds a good start - those sample times could be tested on a Rev A device, (maybe with low noise VIO ?) to see if they needed tuning, before being locked into silicon.
I'm not sure much testing has been done with many ADCs running, to check for crosstalk effects, etc.
Then you get a slightly negative or over-range number. Due to noise, that would happen under normal circumstances.
There will need to be an option to clip samples to 0 or 255.
Instead of adding the new pin instructions and using up the two D/S instruction slots, can't you just change WRPIN so that prefixing it with SETQ overrides the specified config/DAC value, and then make new PINDRV/PINDAC/PINwhatever mnemonics that are really just aliases for SETQ+WRPIN $ugly_constant? The code will look the same as it would with your new shortcut instructions: each SETxxx line would just assemble to SETQ+WRPIN instead of a single native instruction.
(Sorry, this probably should have gone in the other thread.)
Users might want to measure current-sense voltages 'through ground', and there is also measuring the +ve clamp diode as a means of temperature sensing.
What are the returned values for < 0 and > 255? Users should be able to tell where they are at all times.
The 32-bit returned value could be slightly negative ($FFFFFFFx) or over 255 ($0000010x).
Thought so. Those seem easy enough for users to manage without needing HW clipping? Hardware clipping can mask problems...
The (slightly) 'outside the rails' aspect of the P2 ADC is a useful feature.
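Handling those outside-the-rails readings in user code, as suggested above, is straightforward. A minimal sketch, assuming the RDPIN result is read as a raw 32-bit value:

```python
def clip_sample(raw32):
    """Interpret a 32-bit RDPIN result that may be slightly negative
    ($FFFFFFFx) or just over full scale ($0000010x), clipping to 0..255.
    Software clipping keeps the raw value available for diagnostics."""
    # sign-extend: treat the 32-bit word as two's complement
    value = raw32 - 0x1_0000_0000 if raw32 & 0x8000_0000 else raw32
    return max(0, min(255, value))

# clip_sample(0xFFFFFFFE) -> 0   (reading of -2, clipped)
# clip_sample(0x00000102) -> 255 (reading of 258, clipped)
```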