Tubular and OzPropDev did some great testing on this a while back (on a P2D2) that inspired me to further check out the smart pin-based DACs/ADCs, now on the P2-EVAL board and while using the LDO regulator. Here’s my testing regime:
First, I looked at DAC performance in isolation. I connected the DAC output (via a BNC cable with terminations kept as short as possible, from the header pins P17 and GND) to a 24-bit ADC (on a Linear Technology LTC2448 evaluation board). Means and standard deviations of the output voltage were then logged at various DAC settings over a 30-second measurement period. P2-EVAL-DACTest.jpg shows the results. The sdevs at each of these points ranged between 50 and 70 microvolts. The limited input range of the LTC2448 (2.500 V) restricted these measurements to DAC settings <= 192. The DAC output was found to be 12.474 + 12.913*(DAC setting) mV; i.e., the DAC has a small offset voltage.
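The measured linear fit above can be expressed as a small helper (a Python sketch; the function and constant names are my own, not from the original test code):

```python
def dac_output_mv(dac_setting: int) -> float:
    """Predicted DAC output in mV from the measured linear fit:
    output = 12.474 + 12.913 * (DAC setting)."""
    OFFSET_MV = 12.474   # measured offset voltage at DAC = 0
    LSB_MV = 12.913      # measured step size per DAC count
    return OFFSET_MV + LSB_MV * dac_setting

# The largest setting measurable within the LTC2448's 2.500 V input range:
print(dac_output_mv(192))  # ~2491.8 mV, just under the 2500 mV limit
```

Note that 255 steps of ~12.913 mV span roughly 3.3 V, consistent with a VIO-referenced 8-bit DAC.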
Having established a baseline for DAC performance, I then did the following:
Measure and record V1623 (3316 mV) for use in LabVIEW calculations.
(*) Sweep a DAC output on a chosen pin (again I used pin #17) from 0-255 (this pin is connected to an adjacent ADC pin via a shorting header plug).
At each DAC setting, take n readings (here n=2000) each of GIO, PIO and VIO (in sequence) using an ADC channel (here pin #16), setting a fixed period (ADCPER) in smart pin mode 15 and storing each resulting count into HUB RAM.
After said acquisition at each DAC setting, upload the HUB data to a LabVIEW host VI. This VI plots the respective GIO, PIO and VIO traces, computes n-point (PIO-GIO), (VIO-GIO) and ADC result vectors, and displays histograms/statistics for each of these parameters. Plots like these are very useful for spotting correlations and revealing underlying issues. In addition, my VI also calculates an effective number of bits (ENOB).
I chose to define ENOB as log2(MEAN[VIO-GIO]) - log2(SDEV[Result]). This takes into account both the number of bits in the ADC span [VIO-GIO] and the number of bits of noise in the ADC result (which I computed using a 1-sigma measure; of course the ENOB results would be somewhat lower if we were instead to use 2 or 3 sigma). I'm sure the formal engineering definition of ENOB is different, but the method I've described seemed to make intuitive sense.
Store all data/stats into a text file at the current DAC setting and repeat from (*).
Then repeat the entire sequence of operations using a new ADC period (ADCPER) setting; here I used values of 2^m with m ranging from 6 to 16 (i.e., ADCPER from 64 to 65536).
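The per-setting number crunching described above can be sketched as follows (a minimal Python sketch; the function names are mine, and the ratiometric use of V1623 in counts_to_mv is my assumption about the LabVIEW calculation, not the actual VI code):

```python
import math
import statistics

def enob(vio_counts, gio_counts, result_counts):
    """ENOB as defined above: log2 of the mean ADC span (VIO-GIO)
    minus log2 of the 1-sigma noise in the result counts."""
    span = statistics.mean(v - g for v, g in zip(vio_counts, gio_counts))
    noise = statistics.stdev(result_counts)
    return math.log2(span) - math.log2(noise)

def counts_to_mv(count, gio, vio, v1623_mv=3316.0):
    """Hypothetical ratiometric conversion: map a raw count onto the
    measured GIO..VIO span, scaled by the recorded V1623 reference."""
    return v1623_mv * (count - gio) / (vio - gio)
```

For example, a 10-bit span (VIO-GIO = 1024 counts) with 1 count of 1-sigma noise in the result would give an ENOB of exactly 10 bits by this definition.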
I’ve also attached a screen capture of the VI at the end of the run, the full tab-delimited data file generated, and an ENOB plot across the full DAC sweep for each ADCPER value. Approximate achievable sampling rates in these tests (based on the above three-point sampling method and ignoring a small ADCPER acclimation period) are 1250000/(2^(m-6)) Hz.
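That rate formula tabulates as below; note that at m = 6 it matches 240 MHz / 64 / 3, i.e. the system clock divided by ADCPER and by the three readings (GIO, PIO, VIO) per sample:

```python
# Achievable sample rates from the formula above:
# rate = 1_250_000 / 2**(m - 6) Hz, for ADCPER = 2**m, m = 6..16.
for m in range(6, 17):
    adcper = 2 ** m
    rate_hz = 1_250_000 / 2 ** (m - 6)
    print(f"ADCPER = {adcper:6d}  ->  {rate_hz:>10.1f} Hz")
```

So the sweep spans 1.25 MHz down to about 1.22 kHz across the eleven ADCPER values tested.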
In this work my P2 was running at 240 MHz. There’s some interesting (and repeatable) “bowtie” structure in the ENOB traces at the lower ADCPER settings that becomes less pronounced at higher values; in those latter cases, ENOB tapers off noticeably as the DAC output approaches VIO. I’m not sure of the reasons for these subtle effects at present. Given the inherent variability in the DAC output itself, as characterized earlier, the actual ADC performance will be a little better than what is shown here. One can see that at ADCPER = 65536 we are reaching the point of diminishing returns…
Clearly the P2 offers considerable flexibility in the ADC speed/resolution trade-off, and having 60+ potential analog pins on one chip adds to the wow factor. Sure, there will be more demanding sensors requiring resolution > 13 bits, and for these one will still need dedicated high-performance ADCs - but I’ll bet the P2 on its own will cover a large proportion of use cases.