More interesting plots.
Comments: The Zero point on the LM358 is more of an outlier than on the LTC1152, as the LM358 is not that capable very close to Zero.
Perhaps add another test point at 50mV? Some uses might not need to go closer than 1% to the limits with precision.
If we take a nominal 5 ohm skew in P/N FET drive, then I make 22K as the 1/4096 point, but it is not quite that bad, as the virtual Vcc effect has a scale applied, = (Vcc-Vo)*(Vo/Vcc), and ~25%/50%/75% of FSD are useful check points.
So for 2.2k we have a Virtual Vcc offset of 9.1 LSB; at 25% FSD we scale by 3/4*1/4 to get 1.71 LSB of Resistive Skew error (likewise at 75%), and the 50% level has the peak of 2.28 LSB (1/2*1/2).
For the 100K case, taking 20nA bias on an LM358, that is (100k*20n/3.3)/(1/4096) = 2.482 LSB { (100k*45n/3.3)/(1/4096) = 5.585 LSB, per another datasheet } shift at zero, but reducing at full scale because that is the calibration reference?
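As a quick sanity check of those numbers, here is a short sketch in Python; the 3.3V supply, 12-bit scale, and the two datasheet bias-current figures are taken from the posts above.

```python
# Sanity-check the resistive-skew and bias-current error estimates above.
# Assumptions (from the thread): Vcc = 3.3 V, 12-bit scale (4096 LSB).

VCC = 3.3
LSB = 1 / 4096            # full-scale fraction per LSB

# Resistive-skew error: a 9.1 LSB virtual-Vcc offset for the 2.2k case,
# scaled by (Vcc-Vo)*(Vo/Vcc) -> as a fraction of FSD, (1-x)*x for x = Vo/Vcc.
skew_offset = 9.1         # LSB, at the virtual-Vcc point
for x in (0.25, 0.50, 0.75):
    err = skew_offset * (1 - x) * x
    print(f"{x:>4.0%} FSD: {err:.3f} LSB")   # matches the 1.71 / 2.28 LSB figures

# Bias-current shift at zero, for Rf = 100k:
for i_bias in (20e-9, 45e-9):                # two datasheet figures for the LM358
    shift = (100e3 * i_bias / VCC) / LSB
    print(f"{i_bias*1e9:.0f} nA: {shift:.3f} LSB")   # 2.482 and 5.585 LSB
```
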
LM358 Bias current offset numbers are better, so Rf could be chosen to (apx) work in offset mode.
Interesting that the 'special case' spike @ $8000_0000 never goes away, and there may be more at 25%, 75%, 12.5%, etc.
( A couple more test points, for those special cases ? )
Even with care to remove the slope and banana effects, those harmonic content variations will likely remain.
Note that whatever else the prop is doing will matter too, and a 0.8mV move in Virtual Vcc or Virtual GND is 1 LSB at 12 bits.
I wonder how much improvement a Balanced Amplifier design, driven from Duty and !Duty matching filters would give ?
Next, I tried the LTC1152 in a Sallen & Key configuration:
I'd make some small changes to that:
* Increase the R1 from 10K to 22K (or even 33K) - this halves the P/N resistance effects
* Shift the Cap from OP to GND, as the Prop Pin has a lot of RF content, and an LM358 is not much of an ideal opamp at 40MHz
( I even found a paper on stopband effects here: http://postreh.com/vmichal/papers/frequency_filters_with_high_attenuation_Radio2009VMJS.pdf )
* Increase the Rf values to 220k/110k (or 240k/120k), which (roughly) match the Opamp source values, to reduce bias current effects.
It's a known fact that some combinations give more distortion for audio signals.
You would expect the IO pins nearest the Vcc/Gnd pins to be better, and expect the QFP to be better than DIP (more bonded Pwr/Gnd), ( and QFN to be best of all ?).
It also depends on 'what else' the other pins, and cores are doing..
Yes, but for instance using core 0 will almost always give a better result for audio in my experience. Just by changing the starting order of cogs in most "audio + video applications" will give a huge impact on sound quality.
I am always using cog 0 or 1 for Audio and it works very well.
Then 'favoured die status' must ripple back to also apply to the COGS themselves. Which packages did you test ?
I asked Chip about this a while back and he said the reason Cog 0 was better is that the path on the die to the pin on the package was shortest.
Perhaps he meant the shortest path to the Pin-Mux(es)? - as any cog can drive any pin, after that Mux.
Time domain skew effects will apply along the whole path, whilst resistance skew effects will be pin-bound.
Did he say which COG was worst - to give Phil a best/worst target ?
I asked him specifically about John's WAV player. He said that COG 0 had the shortest path to the pin in the package he was using.
Doesn't quite sit right with me; there has to be something else afoot as well.
My understanding is that the audio got noisier on other COGs, and that in conjunction with COG 0, mixing LFSR noise into it helped greatly with quantization error of the samples.
'Quietest Cog' could also relate to Paths to the Vcc/Gnd pins; as those are what drive the DUTY pin(s), ripple matters, especially _differing_ frequencies in the ringing.
Suppose a P FET is on for N cycles of ringing, and the N FET is on for N+1/2 cycles of a slightly different frequency.
The P FET will average to (roughly) zero, whilst the N FET will contribute an error, aka a sync-rectification effect.
So total supply noise can matter.
Phil's plots show a definite bump in his one special case value. (expect others)
One of the things I wanted to check was monotonicity: IOW are the voltage output step sizes between equally-spaced frqa steps approximately equal and, more importantly, do they all have a positive sign? In order to automate this, I had to operate both the Prop program and the voltmeter from a Perl script on the PC. I used the LTC1152 circuit shown above in post #31.
Since there seemed from my prior experiments to be a singularity of sorts at frqa == $8000_0000, I wanted to see what effect the actual frqa step size had on the monotonicity and whether avoiding power-of-two values was helpful. So I did four experiments, incrementing frqa in 255 steps by $0100_0000, $0101_0000, $0101_0100, and $0101_0101. Here are the resulting graphs:
From this I would have to conclude that avoiding power-of-two frqa values is helpful to a point.
It could be interesting to increment more steps, and plot both Step Size, and deviation from ideal.
That might eventually catch a monotonic failure.
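Phil's automation was a Perl script driving the Prop and the voltmeter; a minimal sketch of the same monotonicity check (the `readings` list below is hypothetical stand-in data for the voltmeter samples):

```python
# Minimal monotonicity / step-size check, as a sketch.
# 'readings' stands in for voltmeter samples taken at equally spaced frqa steps.

def check_monotonic(readings):
    """Return (monotonic, steps): steps are the first differences,
    monotonic is True when every step has a positive sign."""
    steps = [b - a for a, b in zip(readings, readings[1:])]
    return all(s > 0 for s in steps), steps

# Hypothetical example: one step nearly collapses but stays positive,
# so the output is still strictly monotonic.
ok, steps = check_monotonic([0.000, 0.806, 1.612, 1.615, 2.420])
print(ok, steps)
```
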
How good is your voltmeter ? Does it read to 0.1mV or is that rounding later ?
(0.1mV is ~15.6 bits)
Those numbers indicate the useful differential linearity limit would be around
1 part in 3125 (11.6 bits) if you take the worst-case peak shown, or
1 part in 8333 (13 bits) taking the second worst, and
1 part in 12500 (13.6 bits) on a more average pick.
Or, put another way, that an 8-bit DAC is good to under 1/10 of an LSB in voltage.
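(Those "1 part in N" figures convert to bits via log2; a one-liner check:)

```python
# Convert '1 part in N' differential-linearity figures to bits.
from math import log2

for parts in (3125, 8333, 12500):
    print(f"1 part in {parts}: {log2(parts):.1f} bits")   # 11.6, 13.0, 13.6
```
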
How good is your voltmeter ? Does it read to 0.1mV or is that rounding later ?
It's this one, good to 6 1/2 digits of precision. I have it set to five digits to save sampling time.
I did some sampling with increments set to 1/4096 full-scale (12-bit precision), 256 samples straddling the "singularity" at $8000_0000, incrementing at $0010_0000 and again at $0010_0100. Here are the results:
The take-home lesson so far is this: do not just take your 12-bit value and shift it left by 20 places to set frqa! Instead, shift it left by 12, add the 12-bit value again, and shift it left another 8. Although the simpler technique is still monotonic, it's much less locally linear.
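In code, the two scalings look like this (a sketch of the rule as stated; Python stands in for the Spin expression, with `frqa` just computed as a 32-bit value):

```python
# Two ways to scale a 12-bit sample into the 32-bit frqa register.

def frqa_simple(n12):
    """Plain shift: still monotonic, but much less locally linear per the plots."""
    return (n12 << 20) & 0xFFFF_FFFF

def frqa_better(n12):
    """Replicate the 12-bit value before the final shift, avoiding clean
    power-of-two frqa values: ((n << 12) + n) << 8."""
    return (((n12 << 12) + n12) << 8) & 0xFFFF_FFFF

n = 0x800                        # mid-scale, the troublesome singularity point
print(hex(frqa_simple(n)))       # 0x80000000
print(hex(frqa_better(n)))       # 0x80080000
```
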
Did you try just adding 100H to all left shifted values ? - ie I think it may be better to keep that value small, to have the same 'repeat frame' for averaging purposes.
Clean binary values must phase-lock output pulses with counter changes/rollovers (and that has to cause ground bounce and skew changes), but the fractional-offset ones will tend to 'average out' any such occurrences. Clean values will also phase-lock any sync-rectification effects.
I think that extra fraction means the 'repeat frame' changes to be a further 4096x slower - in one case the 'repeat frame' is CLK/4096 or 80e6/4096 = 19.531kHz and the other it is 80e6/4096/4096 = 4.76Hz, which could be closer to your meter sample rate. ie it is another average step, that is useful in the measurement setup.
It may be less useful for Audio, but there are plenty of control and calibration systems that lower averages are still fine for.
(even for Audio, it may mean less noticeable distortion)
In the push for higher precision, what about setting the meter to 6 samples a second, and maybe tweaking that fractional value to
match the 'repeat frame', which I make as 80e6/(2^32/0b000000000000000101000010) = 5.9977Hz and fixed value adder is 0x0143
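The repeat-frame arithmetic in these posts can be checked directly; a sketch assuming the 80 MHz clock used throughout the thread:

```python
# Repeat-frame rates for duty-mode frqa values, at an 80 MHz clock.
CLK = 80e6

# Clean 12-bit steps repeat every 4096 clocks; the extra 12-bit fraction
# stretches that repeat frame by another factor of 4096:
print(CLK / 4096)           # 19531.25 Hz
print(CLK / 4096 / 4096)    # ~4.768 Hz

# Frame rate of a small fixed fractional adder: CLK / (2**32 / adder).
def frame_rate(adder):
    return CLK * adder / 2**32

print(frame_rate(0b101000010))   # ~5.9977 Hz, near the meter's 6 readings/s
```
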
I do like the idea of using MORE of the 32 bits, to create two frame rates : one for LSB and another for the longer walk through time.
I wonder if the differential output can help - tho it will only cure some contributions.
In the push for higher precision, what about setting the meter to 6 samples a second, ...
I doubt that my cabling (twisted-together banana-plugged test leads) is up to that level of precision. I'm frankly surprised to get 0.1 mV with my setup.
I sampled all 4096 values this time, and plotted both the step size and the absolute error. This time, in order to minimize the effects of noise, each sample is an average of five consecutive readings. Here are the results:
I could try, but that might be stretching things a bit.
You underestimate your setup - 14 bits is ~305uV, and you are within that for most of the span - impressive.
I'm wondering if it is better to keep the fractional part fixed at (eg) 0x143, which I think gives a steady sub-frame, or to add a ratio of the value, which you are doing in those plots. A steady sub frame may be better for a known end system sample rate.
ie I think you do ((N << 12)+N)<<8, so try instead (N<<20)+0x143 (at 6 readings / second, 6.5 digits)
Here are my results from the 14-bit experiment using 10 uV (6-digit) resolution this time and no averaging. Rather than going proportional with the lower bits, I just added a 1/2-bit bias in order to avoid the singularity points. Here are the results:
Although the output is still strictly monotonic, the error veers into the 1+ LSB territory. But, hey, this is all wired on a solderless breadboard with unshielded twisted leads going to the voltmeter.
Impressive - step size is on the order of +/-1 or 2 bits on a 16-bit scale,
and the absolute non-linearity seems to have changed shape? Seems to be good to around ~14 bits.
I wonder how good that HP Voltmeter is ?
Is that change the range-shift, or something else ?
A fixed 0x0004000 bias ( Is that 1/2 bit, or 1/4 bit? ) gives a frame of ~ 305Hz, so I'd still be curious if the bias of 143H I calculated, which gives a frame rate ~the same as the Voltmeter, improves anything, or makes no change.
Does this vary much by Cog / Pin ? (some have suggested Cog0 is better )
Does this have the better-stop-band filter change ? ( & the Charge Pump cap on pin 8 ?)
WOW, going to have to check over some of my duty mode dac code to make sure it's doing all this! I have tried using differential output and an op-amp configured as a differential low-pass filter. It worked, and made scaling bipolar signals a snap, but I went back to a pre-filter based circuit because I wasn't sure what the RF from the duty mode would do to the op-amp.
Excellent work, Phil. Prop based SAR ADC here we come
It will be interesting to see whether the shape of that error curve remains the same across cogs and different silicon. It may be possible to introduce an offset profile to help linearise if it is common.
Frankly, the error curve could well be the result of drift, too. Reading 16K separate data points at 6 digits of precision takes a long time with the HP/Agilent meter. I probably ought to do a sequence of readings at randomized points to rule out any time-dependence.
BTW, today I looked up the LTC1152 op amp on DigiKey. Ouch! It's expensive! I bought them years ago when I was manufacturing DACs designed to drive a shielded cable. This was one of the only ones I could find that was stable under those circumstances. It really does have some nice characteristics, and I can recommend it without reservation -- assuming the price isn't off-putting.
Yes, that would be a good idea, to help rule out time and related temperature dependence. Frankly you've already gone well beyond what I would have "guessed" would be possible, and you're still using proto board and twisted wires. It's exciting to see where it might end up.
That LTC1152 does look nice and "ideal", I guess we can't bemoan them for charging accordingly. I was recently surprised to find the dual version of an op amp cheaper than its single version, both having the same soic-8 package. The dual did have a lower bandwidth though.
It's a nice enough meter, are you gathering data over GPIB or something?
I probably ought to do a sequence of readings at randomized points to rule out any time-dependence.
Or, you could do a ramp up and ramp down test.
If they overlay, you know there was not drift; if they do not, you can judge how much is drift, and how much is reading noise.
I'm waiting eagerly to see some comparisons of different pin and cog combinations..
For serious Audio work, I like the look of this part
http://www.nuvoton.com/NuvotonMOSS/Community/ProductInfo.aspx?tp_GUID=8c8ce332-3114-4bad-b581-7bfca589c913
NAU8402 - includes a -ve rail charge pump, so works at 0V zero.
Having a natural 0V midpoint, means it should also work as a DC DAC.
Price looks good, but not easy to source yet. I see Digikey does stock a NAU8822A.
That 'better' plot is so good, perhaps it is time for another Zoom, to 14 bits precision !!?
What about Pin or Cog choice ?
That would work, too.
-Phil