Reducing noise of sigma-delta ADC
ManAtWork
Posts: 2,176
Building an ADC with only a few caps and resistors plus the counters of the propeller has impressed me from the beginning. It is not only a very cheap and simple way to read analogue signals but is also very robust against switching noise because peaks are averaged out over the sampling period.
However, I was also always wondering why there was so much random noise. No matter how long the sampling period was or how the Cs and Rs were chosen, the lowest 2 to 3 bits were always dithering.
Chip has found the solution (see P2 discussion here: http://forums.parallax.com/discussion/169298/adc-sampling-breakthrough). To sum it up, the feedback loop forms an oscillator that oscillates with a period of several clocks. The start and the end of the sampling cut out a random part of that oscillation because the phase of the feedback loop is not synchronised to the sampling period.
If the hard cut is replaced with a smooth fade-in and fade-out, the noise is reduced drastically. This could ideally be done in hardware and requires only some counters and adders. Unfortunately, we still have to wait until this feature becomes available in the P2. I couldn't wait and have tried to implement this in software on the P1.
adcLoop       waitcnt adcTime,adcPeriod
              mov     sample0,PHSA
              mov     sample1,PHSA
              mov     sample2,PHSA
              mov     sample3,PHSA
              mov     sample4,PHSA
              mov     sample5,PHSA
              mov     sample6,PHSA
              mov     sample7,PHSA
              mov     sample8,PHSA
              mov     sample9,PHSA
              mov     sampleA,PHSA
              mov     sampleB,PHSA
              mov     sampleC,PHSA
              mov     sampleD,PHSA
              mov     sampleE,PHSA
              mov     sampleF,PHSA
              mov     sum,#0
              mov     scnt,#16
              movs    getpos,#sample0
              movs    getneg,#last0
              movd    putold,#last0
getpos        mov     s,sample0
              add     sum,s
getneg        sub     sum,last0
putold        mov     last0,s
              add     getpos,#1
              add     getneg,#1
              add     putold,incdest
              djnz    scnt,#getpos
              sar     sum,#4
              wrlong  sum,adrResult
              jmp     #adcLoop

scnt          long    0
sum           long    0
s             long    0
sample0       long    0
sample1       long    0
sample2       long    0
sample3       long    0
sample4       long    0
sample5       long    0
sample6       long    0
sample7       long    0
sample8       long    0
sample9       long    0
sampleA       long    0
sampleB       long    0
sampleC       long    0
sampleD       long    0
sampleE       long    0
sampleF       long    0
last0         long    0[16]

I take 16 samples of PHSA as fast as possible in an unrolled loop. Then I calculate an average of the 16 slightly shifted sampling windows by calculating
sum:= (newsample0 - oldsample0) + (newsample1 - oldsample1) + (newsample2 - oldsample2) + ...
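As a rough illustration, here is the same window averaging sketched in Python (the names, and the pretend snapshot values, are invented for this sketch; the real implementation is the PASM loop):

```python
# Illustrative Python sketch of the overlapped-window average above.
# `new` holds 16 consecutive PHSA snapshots from this sampling period,
# `old` the 16 snapshots from the previous period.

def averaged_delta(new, old):
    # Each (new[i] - old[i]) is one full-length sampling window, shifted
    # by one snapshot spacing. Summing 16 of them and shifting right by 4
    # averages the shifted windows (the 'sar sum,#4' in the PASM loop).
    total = sum(n - o for n, o in zip(new, old))
    return total >> 4          # arithmetic shift, like PASM sar

prev = [0] * 16
curr = list(range(100, 116))          # pretend counter snapshots
result = averaged_delta(curr, prev)   # -> 107
```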
The noise reduction is probably not as good as it would be with a hardware filter because the resolution is only 4 clocks (per instruction) compared to one. But the noise is still cancelled out somehow. I have recorded some statistics. Without filtering I got 8 LSBs peak-to-peak and 1.9 LSBs RMS noise. With filtering over 16 shifted sampling windows I got 4 LSBs p-p and 0.37 RMS. With 8 samples I got 5 p-p and 0.82 RMS.
The overlapping sampling windows have the advantage of a much lower delay compared to simply increasing the sampling period by a factor of 16.
Comments
My understanding is this:
The sigma-delta noise reduction works by shifting the quantization noise in frequency. The fluctuations are not oscillations; they are apparently chaotic, with a broad spectrum, but correlated to the signal. So what you might think of as oscillations are actually processed quantization noise.
Noise shaping basically pushes the noise to higher frequencies (and creates more noise energy), but the noise at low frequencies decreases. The analog feedback paths in fact have to be carefully engineered not to oscillate, as that completely ruins things: once you have multiple orders of sigma-delta modulation in analog you are close to instability, and special measures are needed to contain this. First-order modulators are stable.
Once past the modulator feedback stages you can only low-pass filter, in any of the normal ways; the magic of noise shaping happens with feedback in the modulator. If you add a brick-wall filter after the modulator you get the lowest total noise out, but you cannot increase the order of the modulator after the sampling point.
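A toy first-order modulator makes the mechanism concrete (Python sketch, idealized discrete-time model with no analog non-idealities; function and variable names are made up here):

```python
# Toy first-order sigma-delta modulator: an integrator accumulates
# (input - feedback bit), and a 1-bit quantizer produces the output
# stream, which is fed back.

def sigma_delta_1st(samples):
    integ = 0.0
    fb = 0
    bits = []
    for v in samples:          # v expected in [0, 1]
        integ += v - fb        # integrate the error
        fb = 1 if integ >= 0.5 else 0
        bits.append(fb)
    return bits

bits = sigma_delta_1st([0.5] * 16)   # constant half-scale input
# the stream settles into 1,0,1,0,... whose average is exactly 0.5
```

For a constant DC input the bit-stream average converges to the input level; the quantization error is pushed into the high-frequency structure of the stream, which is what the low-pass/counting stage later removes.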
Some simulations I did:
In the Python simulations above, the order of modulation is 1, 2, 3, etc. going down the rows. The 1st column is the mildly low-pass filtered reconstructed output, the 2nd column is the sigma-delta modulator output spectrum, and the 3rd column is the mildly low-pass filtered spectrum. Note the increasing freedom from noise at the lower frequencies, and how higher orders increase the extent of that.
The ragged bottom-left trace is the highest quality within the frequency range of interest (remember this is mildly LPF'd so that some of the dither breaks through).
[Column 3, top row is an anomaly; it's the input signal spectrum.]
You can also see the total noise rising at the high frequencies in the signal traces and the spectra with increasing order of modulator.
If the feedback loop was ideal it should give a perfect stream of bits, and the count value should match the average voltage level independent of sample time. For example, half the supply voltage should result in a 0101010... bit pattern.
In the real world the feedback loop oscillates at a random frequency, resulting in bit patterns like 01001111001110000... The average over a long period of time is still correct, but there's some amount of noise that depends on the exact time where the counter starts and stops.
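The start/stop sensitivity is easy to reproduce: count the ones in a fixed-length window of such an irregular stream while sliding the window start, and the count wobbles by a few LSBs even though the long-term density of ones is constant (Python sketch, reusing the example pattern above):

```python
# Counting ones in a fixed-length window of an irregular bit stream:
# sliding the window start by a few bits changes the count, modelling a
# sampling period that is not synchronised to the feedback loop's phase.

pattern = "01001111001110000"            # example pattern from above
stream = [int(c) for c in pattern * 40]  # repeat it to get a long stream
window = 64
counts = [sum(stream[s:s + window]) for s in range(16)]
spread = max(counts) - min(counts)       # a few LSBs of start/stop noise
```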
The pin's input threshold serves as the quantizer. By feeding back to the capacitor you implement 1st-order modulation. You can add more components for higher order, basically more integrators and feedback resistors, though usually active circuitry is needed for useful results.
Actually that's non-ideal: you get low-frequency artifacts when you are close to a mark/space integer ratio, i.e. the quantization noise is a voltage-dependent tone close to the 1:1 ratio. Sigma-delta modulation prevents this from happening and makes the noise more Gaussian-like.
Oscillators can only oscillate at a definite frequency, not a random frequency
BTW, all this noise analysis is before any counting stage; it's fundamental to the signal. Counting is just one way to do low-pass filtering (well, filtering and decimation to be more precise). Typical strategies use counting/summing to get a first decimation to a more reasonable frequency, then an FIR or IIR filter with a more desirable response (and which compensates for the sinc(x) rolloff of the counting pass).
One issue with higher-order modulation is that the decimation steps have to be much more careful to suppress aliasing of noise back down into the ultimate passband: there is a lot of high-frequency noise, so even a tiny proportion of it aliasing down to DC will destroy your noise performance there.
My intention here was to get the best possible out of what we already have, the P1. Adding some lines of code costs nearly nothing, and it's nice to get a bit lower noise. Of course, it's still much worse than it could be with a state-of-the-art ADC design. But who cares? It's cheap, it works, and now it's a little bit better.
Mike
I'd guess the biggest variation will come from the execution time of going from resetting PHSA to tristating the output. The charge time starts when INPUT() tristates the output, but the counter starts sometime earlier.
Try adding a DIRA(PIN) = 0 before the WHILE or before the PHSA = 0. That should help at least.
Mike
The alternative is electrical variations, e.g. the 10 volts is not stable, thermal changes, or electrical interference.
This is RC decay, not sigma-delta. With RC decay the information is in the amount of time until the comparator input flips. There's only one state change, not multiple sample bits. If that single piece of information is wrong then all you can do is start over. You can't filter only one data point. If it's not stable you have to look at the hardware.
One would think this is a stable network and should be easy to measure consistently but it is not.
This is similar to how an ADC works except there is no sample and hold circuit.
Mike
I just tried it with a ceramic capacitor of 1 nF (your 1 uF is way too big).
I get values around 990, which is 10-bit resolution, and the variation is at most 4 digits; mostly it changes only between 991 and 992. If I hold my breath and don't move I can see the same value 10 times in a row.
With 1 uF you get something like 17-bit resolution, and it's a heavy load for the pin; no wonder the values vary a lot.
So the P1 hardware is good for 10..12 bits. The main problem with the sigma-delta ADC is the 80 MHz clock. If you lower the system frequency you can get quite stable results, even without windowing.
The P1 counters lack a prescaler for such things.
Andy