A second-order modulator isn't helping the resolution of a DC level. Sinc2 or Sinc3 is all that's needed. The CSV file clearly showed them vastly outperforming the basic Sinc1.
There are sophisticated technologies that use math like the Fourier transform: MRI, for example, or CT, or ultrasonic imaging. But they are often fine-tuned to a given problem, so they are expensive and protected by patents and hidden know-how. There is no universal solution for non-trivial problems.
If we just follow textbooks, we will never understand what's going on! We have real numbers and imaginary numbers. When imaginary numbers were invented, there were two ways to interpret this:
1. There are real numbers, and now we have something that is not real, but helpful.
2. We find that there are helpful numbers that are not real, and we deduce that real numbers were always seen as useful, but are imaginary as well.
If we take the second path, we understand that numbers are simply useful: we can look for any kind of numbers that solve a problem, and hopefully gain understanding and knowledge.
The Propeller is just different! And folks using the Propeller should be open-minded enough to break barriers in other fields, too. You can understand the Fourier transform, or Fourier analysis, the moment you understand that a set of values is a vector and vector calculus can be applied. Then everything is very simple.
We will also find a way to understand how a filter works! But not now, later: after we see how we can fail to understand how a filter works.
A gap is not closed the moment you stop filling it. It just opens a way for others to fill that gap while you stand aside. This is also true of science.
Perhaps this is silly, but we saw toggles on the DC signal.
The ratio of those over time is information. Would that not be good for another bit or two, given a reasonable sample time?
Put another way, the less delta slope or change in curvature is present, the longer the time needed to obtain an improved result. Secondly, the improvement factor is lowest when the signal change is lowest.
It also seems like mixing a bit of noise with the DC, perhaps with the PRNG, would actually improve precision, again given appropriate time? (Force the toggle to happen consistently, rather than just at inflection points so the ratio is clear.)
Both these things seem consistent with the filter behavior presented so far.
I would love to hear a snippet of music run through these things myself. Ears can tell us a lot.
It also seems like mixing a bit of noise with the DC, perhaps with the PRNG, would actually improve precision, again given appropriate time?
That's exactly what some high precision SigmaDelta-ADC systems do.
You need a certain noise level ...
and then you need to average over at least 2^n samples, plus a bit for the window.
Systems I remember were doing 2^(2*n) and then throwing the lower half of the data away.
So 4x oversampling gives 1 more real bit.
... sorry, I don't have the link/reference at hand.
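As a rough illustration of that rule (a minimal sketch, not the system being recalled above), averaging a dithered 1-bit stream shows the error shrinking by about one bit per 4x samples:

```python
import random
import statistics

def measure(duty, n_samples, rng):
    """Average n_samples of a dithered 1-bit stream whose long-term mean is `duty`."""
    return statistics.fmean([1 if rng.random() < duty else 0 for _ in range(n_samples)])

rng = random.Random(42)
for n in (64, 256, 1024):
    # Mean absolute error over 200 trials; expect roughly a 2x (one bit) drop per 4x samples.
    err = statistics.fmean([abs(measure(0.3, n, rng) - 0.3) for _ in range(200)])
    print(n, round(err, 4))
```

The noise (dither) is essential here: with a noiseless constant input there would be nothing to average over.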
Noise shaping moves quantization noise up to higher frequencies, but always with a slope: the higher the frequency, the more noise. Thus filter side lobes are a big deal.
The higher the order of the noise shaping, the better it moves the noise up, but the more steeply the noise increases with frequency, so the filter's side lobes matter even more. The shaped "noise" dwarfs the desired output's LSB amplitude, and any filter that doesn't have a solid stop-band cutoff isn't going to get rid of this large amount of noise and increase ENOB effectively.
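For intuition, an ideal order-L modulator shapes its quantization noise by the noise-transfer-function magnitude |2*sin(pi*f/fs)|**L. A small sketch (assuming that textbook idealization, not the P2's actual modulator) shows both effects at once — deeper suppression in-band, steeper rise toward fs/2:

```python
import math

def ntf_db(f_ratio, order):
    """Noise-shaping gain in dB of an ideal order-`order` modulator at f_ratio = f/fs,
    using the idealized noise transfer function magnitude |2*sin(pi*f/fs)|**order."""
    return 20 * order * math.log10(2 * math.sin(math.pi * f_ratio))

# Deep in the signal band, higher order buys much more suppression...
print(ntf_db(0.01, 2), ntf_db(0.01, 3))
# ...but near fs/2 the shaped noise is correspondingly larger,
# which is why the filter's stop-band matters so much.
print(ntf_db(0.45, 2), ntf_db(0.45, 3))
```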
The time-domain corollary of this is that the output oscillates around, even for a steady DC input, i.e. there are idle cycling patterns in the bits coming out; it is not simply pulse-density modulation (far from it, in fact). The higher the order, the more amplitude in the oscillations/noise. The signal looks chaotic, especially beyond second order, where the modulator is close to the edge of stability anyway.
Thus the digital low-pass filter never sees a DC input at all.
Put the output through a simple moving average and you'll get no better than without noise shaping, but the artifacts are more noise-like and not clustered around the rational mark/space ratios (as with simple pulse-density modulation). In practice this just increases the artifacts.
Put the output through a good low-pass brick-wall filter (this means a Sinc-style one), and you remove all the noise above the cutoff frequency. This gains loads of resolution (more at lower frequencies than at higher), but a good brick-wall FIR filter needs hundreds or thousands of taps, not just 64. The good news is that you then immediately decimate by a large factor, so the filter only needs to be applied every N samples, for a suitable N.
You need all those taps, and you need a good (brick-wall) filter; otherwise you let all the shaped noise back into the output (remember there's a load more energy in the noise than before).
Think -100 dB stopband, 20+ poles/zeroes, that sort of filter specification.
This is not news to the people designing sigma-delta ADCs: the delay times are large in terms of the sample clock to get a proper brick wall.
With a 200 MHz sample clock and a 500-tap FIR, you can expect about 1.25 us of delay. For a 1 MHz output bandwidth the output sample rate might be 4 MHz or so, so the FIR is applied every 250 ns, i.e. the rate of tap summations is 500 per 250 ns.
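Those delay and workload figures follow from simple arithmetic, sketched here (assuming a linear-phase FIR, whose group delay is (N-1)/2 sample periods):

```python
def fir_group_delay_s(taps, sample_rate_hz):
    """Group delay of a linear-phase FIR: (N - 1) / 2 sample periods."""
    return (taps - 1) / 2 / sample_rate_hz

def tap_sum_rate(taps, output_rate_hz):
    """Tap summations per second when the FIR only runs at the decimated output rate."""
    return taps * output_rate_hz

print(fir_group_delay_s(500, 200e6))  # ~1.25e-06 s, the delay quoted above
print(tap_sum_rate(500, 4e6))         # 2e9: 500 taps every 250 ns
```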
Here are some plots, although this is for a PWM sigma-delta emulation, not single-bit, but you get to see the waveforms better. It's intriguing that 3rd order seems less noisy than 2nd order.
The spectra are before low-pass filtering; you can see a suitable cutoff frequency would be something around 0.04 of Fs or lower, and how the noise rises 50 dB+ within a few multiples of the cutoff...
It's very non-intuitive how the extra oscillation of the 6th order over the 5th order gives nearly 20 dB of improvement around the higher of my simulated test tones. Although the oscillations look chaotic, they are very precisely correlated to cancel extremely well at lower frequencies. These signals have discrete values (4 or 5 bits' worth), so the only way to get more freedom to cancel at low frequencies is to go to larger oscillations at the higher frequencies. The nature of the sigma-delta modulation forces this to happen (instability aside!).
Table shows SNR needed for N bits.
Table is for an input range of +/- 1 V; it would have to be adjusted a hair for a 0..3.3 V range.
Graph shows SNR versus OSR.
So, you need at least 100 samples to get 8-bit precision with a 1st order modulator under ideal circumstances...
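The table isn't reproduced here, but the usual rule behind such tables is SNR ≈ 6.02*N + 1.76 dB for an ideal N-bit converter (full-scale sine assumption):

```python
def snr_db_for_bits(n_bits):
    """Ideal quantization SNR for an n-bit converter (full-scale sine): 6.02*N + 1.76 dB."""
    return 6.02 * n_bits + 1.76

print(snr_db_for_bits(8))   # ~49.9 dB, the "50 dB = 8 bits" used later in the thread
print(snr_db_for_bits(16))  # ~98.1 dB, matching the 98 dB = 16 b figure
```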
For DC levels, we get toggling between two 6-bit-quality values.
For 85/256-duty, we get toggling between 86 and 82.
For 86/256-duty, we get mostly 86 with an occasional 82.
For 87/256-duty, we get toggling between 90 and 86.
These 4-step level changes in a 256-step scale indicate only 6-bit quality.
...Sigh...
These values are around 1/3. We know about loss of resolution at 1/4, 1/2, and 3/4. It's probably an issue with the first-order modulator, not the filter. It's a bummer that most of the filter's gains can't always be trusted, due to non-linearity.
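A quick way to see the modulator-side patterns is to simulate an ideal first-order modulator — just an accumulator with overflow, a simplification of the P2's analog modulator:

```python
def first_order_bits(num, den, n):
    """Ideal 1st-order sigma-delta for a constant input num/den: an accumulator
    that overflows modulo den, emitting a 1 on each overflow."""
    acc, out = 0, []
    for _ in range(n):
        acc += num
        if acc >= den:
            acc -= den
            out.append(1)
        else:
            out.append(0)
    return out

# Over a full period the ones count is exact...
print(sum(first_order_bits(85, 256, 256)))  # 85
# ...but the bit pattern is strictly periodic (idle tones), and it is these
# patterns, beating against a short filter window, that cost resolution
# near simple ratios like 1/4, 1/3, and 1/2.
```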
The real question is whether this same test is worse with a rectangular window.
Beyond Nyquist, which only provides an S/N improvement of 1.414 to 1, the signal-to-noise ratio basically improves with the square root of the number of samples you take.
So to achieve 11 bits of resolution from an 8-bit starting point, you must take 64 samples. The caveat is that by the time you are done with your 64th sample, the original data measured could have changed significantly. Reading the data in parallel can reduce the amount of time required per sample, thus reducing the chance of change, but you will always have some sort of compromise between required sample time and resolution.
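In code, the square-root rule works out like this (averaging assumption only, ignoring the change-during-measurement caveat above):

```python
import math

def extra_bits(n_samples):
    """Bits gained by averaging n samples: SNR grows as sqrt(N), i.e. 0.5*log2(N) bits."""
    return 0.5 * math.log2(n_samples)

print(extra_bits(64))  # 3.0 -> 8 bits + 3 bits = 11 bits, as stated above
```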
You use a slider to set a 16-bit number; the code generates a synthetic bitstream and runs it through the standard "sum over rectangular window", Sinc2 and Sinc3 (Sinc1 is left out since it exactly duplicated the sum), a triangle-weighted window, and a quadratic-weighted window. You can change the window size (though I made it a multiple of 4, since that's required for the quadratic weighting window).
You can see how almost everywhere all the filtered options are better than the simple sum, with glaring exceptions at, for example, 16384 (which is 1/4). Note, though, that even at 1/4, if the window size doesn't perfectly line up with the bitstream, the error in the rectangular sum is much worse than in the other contenders.
You can also see how the Sinc2 filter takes an extra sample to adjust to a step input, and the Sinc3 filter takes 2 extra samples to adjust.
I'm wondering if someone could tell me what's going to be needed to feed an audio signal into the Propeller, through the ADC and back out the DAC, so I could record some samples? I've got a very high-quality audio interface and can record some audio samples when I get my board Monday.
Consider the pathological 8-bit case where the input is 1/256th of full scale. There will be a single one bit in every 256-bit sample. If the shift register is only 64 bits wide, that bit may or may not be captured at any point in time. If it's not, the output will be zero. If it is, the output will equal one of the filter coefficients, depending upon its position in the shift register. In this situation, it's simply not possible to output an instantaneous value that corresponds to the desired 1/256 * Vdd. With additional filtering on the output (FIR or IIR), more past bits are "remembered" and can contribute to a higher-resolution value.
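The pathological case can be demonstrated with a plain rectangular window (a simplification: a weighted filter would output one of its coefficients instead of 1, but the either-captured-or-not effect is the same):

```python
def window_sums(bits, width):
    """Every output of a plain rectangular sliding-window sum of the given width."""
    return {sum(bits[i:i + width]) for i in range(len(bits) - width + 1)}

# A single 1 in every 256 bits, viewed through a 64-bit window:
stream = ([1] + [0] * 255) * 4
print(sorted(window_sums(stream, 64)))  # [0, 1] -- the bit is either captured or it isn't
```

No instantaneous 64-bit output can represent 1/256 of full scale; only further filtering over more history can.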
So, since we've got a simple first-order modulator, it would be standard practice to implement a Sinc2 digital filter.
The good thing about Sinc2 is that it's only two accumulators and one diff, provided the second accumulator is zeroed at the start of the sample period. Plus, it only needs 2/3 the accumulator size and diff size of Sinc3.
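A sketch of that scheme in Python (my reading of the description, not the actual hardware): the first accumulator free-runs, the second is zeroed each sample period, and one diff of successive dumps yields the triangle-weighted Sinc2 output, with DC gain osr^2:

```python
def sinc2_decimate(bits, osr):
    """Sinc2 decimator per the description above: acc1 free-runs, acc2 is zeroed
    at each sample-period start, and a single diff of successive acc2 dumps
    yields the triangle-weighted Sinc2 output (DC gain = osr**2)."""
    acc1, prev_dump, out = 0, None, []
    for start in range(0, len(bits) - osr + 1, osr):
        acc2 = 0                          # zeroed at the sample period start
        for b in bits[start:start + osr]:
            acc1 += b                     # first accumulator (could be a counter)
            acc2 += acc1                  # second accumulator
        if prev_dump is not None:
            out.append(acc2 - prev_dump)  # the single diff
        prev_dump = acc2
    return out

print(sinc2_decimate([1] * 64, 16))     # [256, 256, 256]: full scale = 16**2
print(sinc2_decimate([1, 0] * 32, 16))  # [128, 128, 128]: half scale
```

Expanding the diff shows the effective weights are the triangle 0,1,...,osr-1, osr, osr-1,...,1 over the last two periods, which is exactly Sinc2.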
I think you still need an audio amp if you want to connect headphones...
Or, do you? Might get something with direct connection...
I bet we need an RC filter there too...
That type of RC audio filter is just a low pass filter, effectively removing any high frequency "jitter" ... I'd be curious to hear what it sounds like without any additional filtering.
Thanks!
*edit
So I guess my audio interface is only 24/44.1k
To input to an ADC pin, just take your line-level signal and capacitively couple it to an I/O pin. The ADC in the I/O pin will bias its side of the cap to VIO/2.
To output from a DAC pin, use a coupling capacitor, as well, to remove the DC.
The first SincX accumulator could be a counter. Is the main intention here reducing logic, or putting the sole Sinc2 diff (with integrate-and-dump) into hardware?
I'm thinking about a scope mode replacement that resolves DC better than 6 bits. We won't be able to get a sample every clock, but we can run different sets of acc2's and diff's for more frequent samples.
I'd try SINC2 with 256 OSR.
Should give real 8-bits at 250 kHz (assuming 200 MHz clock)...
But since it incorporates 1 diff, which is from the prior sample, maybe we only need 128 OSR to get 8 bits. We could run two acc2's and diff's to get samples every 64 clocks, then. That would give us 8-bit samples at 4MHz if we were running 256MHz.
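One way to picture the interleaved-sets idea: evaluate the same Sinc2 window at phase-offset instants. The sketch below (an assumption about the intent, written as an explicit triangle-weighted sum rather than two hardware acc2/diff sets) shows the output rate doubling without shortening the effective window:

```python
def sinc2_at(bits, osr, t):
    """Sinc2 output at clock t, written as its explicit triangle-weighted sum
    over the last 2*osr bits (weights 0..osr-1, then osr..1)."""
    weights = list(range(osr)) + list(range(osr, 0, -1))
    return sum(w * b for w, b in zip(weights, bits[t - 2 * osr:t]))

# Evaluating at phase-offset instants gives an output every osr//2 clocks
# instead of every osr, without shortening the effective window:
bits = [1] * 128
print([sinc2_at(bits, 16, t) for t in range(32, 129, 8)])  # all 256 (full scale)
```

In hardware, each phase would be its own acc2 and diff, all fed from the one free-running acc1.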
The graph Rayman linked above shows around x64 for ~50 dB = 8 bits, and maybe ~x50 for 7 bits.
I'm also not sure the P2 ADC behaves as strictly first order; it's more likely somewhere between 1st and 2nd order at the higher clock speeds and the very low dV's that result, as the inverters used as a comparator have an LPF element, and so do the current-mirror switches.
Working against this is the noise floor, which you should also check on the P2. E.g. maybe 7 bits is all that is possible above 4 MHz,
i.e. you would need to test experimentally on P2 silicon to find the filter that matches the noise floor.
I notice those are best-case SNR figures: they predict 200 samples for 98 dB = 16 b, but the delivered 16-bit ADCs out there spec ~87 dB SNR at x256, though they do have DNL of less than 1 LSB, so the DNL (averaged?) seems to follow the theory more closely.
Comments
"The established practice is to design the order of the SINC filter one above the order of the sigma-delta modulator"
Signal to Noise.xlsx
https://www.dropbox.com/s/ba5t62c65vc5hku/Test_ADC_Filtering - v001.zip?dl=1
thanks,
Jonathan
-Phil
Here is how Sinc2 works:
What about this:
If a Sinc3 can produce an 8-bit sample in only 16 clocks, partially using the prior two samples, it only has a history of 48 ADC output bits.
With 48 bits of history, that would only allow for maybe 5.5 bits of DC resolution, right?
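A quick sanity check on that estimate, assuming DC resolution is bounded by log2 of the history length:

```python
import math

history_bits = 3 * 16  # Sinc3 at 16 clocks/sample reaches back about 3 windows
print(math.log2(history_bits))  # ~5.58 -> about 5.5 bits of DC resolution
```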
Thanks. That's hard to think about. And how would those four R's be ratio'd?
-Phil
That's what Chip has done: chained sync adders, while the differentiators, which are slower, are managed in software for the Sinc3 filter.
That's nifty.
Can you add a plot of the actual settling error, i.e. a zoomed input-value minus output-value difference?