ADC Sampling Breakthrough

Comments

  • cgracey wrote: »
    ...
    Because the ADC duty cycle ranges from ~16% to ~84%, ~1/3 of the range is of no use within the power rails. So, you'd need to oversample by a factor of 1/(2/3) = 1.5, i.e. ~50% more. So, that's 2^12 * 1.5, or 6,144 clocks, plus 32 for the taper. That's 6,176 clocks.

    If we have smart pin filtering, you can pick up a filtered sample every 6,176 clocks.
    Are you saying the filter only adds 32 clocks to the 6144 needed to quantize the window to 12 bits ?

    How many bits do you think that improved the SFDR ?

    Maybe you need to feed this into some spectrum software to confirm your dynamic range ?
  • evanh Posts: 5,913
    edited November 20
    cgracey wrote: »
    With a second-order filter, wouldn't we need a 2nd-order integrator? I still have no idea how these things work.
    Sorry, went out for a while.

    Ah, nope. From reading, the recommended arrangement is to have the filter/accumulator at least one order higher than the modulator/integrator.

    EDIT: You'll note that diagram is labelled as a Sinc3 filter. When I first saw that stated, it was a huge relief, because before then I didn't have the faintest idea what the maths was saying in terms of implementation.

    Money is a placeholder for cooperation
  • cgracey Posts: 10,549
    edited November 20
    jmg wrote: »
    cgracey wrote: »
    ...
    Because the ADC duty cycle ranges from ~16% to ~84%, ~1/3 of the range is of no use within the power rails. So, you'd need to oversample by a factor of 1/(2/3) = 1.5, i.e. ~50% more. So, that's 2^12 * 1.5, or 6,144 clocks, plus 32 for the taper. That's 6,176 clocks.

    If we have smart pin filtering, you can pick up a filtered sample every 6,176 clocks.
    Are you saying the filter only adds 32 clocks to the 6144 needed to quantize the window to 12 bits ?

    How many bits do you think that improved the SFDR ?

    Maybe you need to feed this into some spectrum software to confirm your dynamic range ?

    It just filters away the disruption caused by the chunkiness of the <7-clock cycling in the ADC bit stream patterns. This results in a two-bit improvement across the board. After this improvement, we are totally clean until we hit the 1/f noise starting at 13 bits.

    The filter can be thought of as something that just processes the first 32 and last 32 bit samples. Everything in-between is counted normally (inc on 1).

    Nobody seems to understand how simple this filter is. It will probably grow the smart pins by only 2%.
  • evanh wrote: »
    cgracey wrote: »
    With a second-order filter, wouldn't we need a 2nd-order integrator? I still have no idea how these things work.
    Sorry, went out for a while.

    Ah, nope. From reading, the recommended arrangement is to have the filter/accumulator at least one order higher than the modulator/integrator.

    EDIT: You'll note that diagram is labelled as a Sinc3 filter. When I first saw that stated, it was a huge relief, because before then I didn't have the faintest idea what the maths was saying in terms of implementation.

    I still have no idea why it should work. I just don't get the principle, except in the vaguest sense.
  • I assume it works because it's the generic way to do it, afaik.

    It could be simulated from the same streamer data.

  • evanh wrote: »
    I assume it works because it's the generic way to do it, afaik.

    It could be simulated from the same streamer data.

    But what is the principle?
  • It's similar to what you've done, ie second-order filtering. So the result should look at least as good, probably better because it's less limited.

  • evanh wrote: »
    It's similar to what you've done, ie second-order filtering. So the result should look at least as good, probably better because it's less limited.

    Since we are dealing with a bit stream here, and not integral values, do we need that wide of an accumulator? How do we go from 1 bit to 16 bits?
  • cgracey Posts: 10,549
    edited November 20
    I think the big promise of that second-order wide filter is that we start getting a lot more heavy data samples in time. But, then, I don't see where that extra information can come from.
  • I've just thought to look at the P123 board. I note it has four suitable inputs. I'll tie a 15k resistor from ADC0 to GND.

    I'll try duplicating what you've already done, but save the bitstream data and see if comparisons can be made.

  • evanh wrote: »
    I've just thought to look at the P123 board. I note it has four suitable inputs. I'll tie a 15k resistor from ADC0 to GND.

    I'll try duplicating what you've already done, but save the bitstream data and see if comparisons can be made.

    That would be awesome. Meanwhile, I'll read something about SINC filters.
  • cgracey wrote: »
    It just filters away the disruption caused by the chunkiness of the <7-clock cycling in the ADC bit stream patterns. This results in a two-bit improvement across the board. After this improvement, we are totally clean until we hit the 1/f noise starting at 13 bits.

    The filter can be thought of as something that just processes the first 32 and last 32 bit samples. Everything in-between is counted normally (inc on 1).

    Nobody seems to understand how simple this filter is. It will probably grow the smart pins by only 2%.
    - but if you only process the first 32 and last 32 bits, that only applies to a tiny fraction of the energy ?
    32/6144 = 0.520% of the samples.
    I'm not seeing how a claimed 2-bit improvement can come from doing nothing at all to 99% of the samples ?
  • Evanh, to simulate an ADC bitstream at 8-bit quality, this is all you do:

    (1) Take your voltage value (0..255) and add it into an 8-bit accumulator
    (2) The carry output from the add is the ADC bit
    (3) goto (1)

    for $40 you will get 0001000100010001...
    for $80 you will get 0101010101010101...
    for $C0 you will get 0111011101110111...

    That synthesized ADC bitstream is all you need to feed your SINC filter. No need to hook up anything. Work on the platform of your choice.
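The three-step recipe above can be sketched directly as a first-order sigma-delta model (Python is used here just for illustration; the function name is mine, but the accumulator width and levels follow the 8-bit example in the post):

```python
def adc_bitstream(level, n, acc=0):
    """Ideal first-order sigma-delta model of the ADC:
    each clock, add the input level into an 8-bit accumulator
    and output the carry as the bitstream bit."""
    bits = []
    for _ in range(n):
        acc += level            # (1) add the voltage value into the accumulator
        bits.append(acc >> 8)   # (2) the carry out of the 8-bit add is the ADC bit
        acc &= 0xFF             # keep the accumulator 8 bits wide, then goto (1)
    return bits

# Matches the patterns quoted above:
print(adc_bitstream(0x40, 8))   # [0, 0, 0, 1, 0, 0, 0, 1]
print(adc_bitstream(0x80, 8))   # [0, 1, 0, 1, 0, 1, 0, 1]
print(adc_bitstream(0xC0, 8))   # [0, 1, 1, 1, 0, 1, 1, 1]
```

Feeding this synthesized stream into a candidate SINC filter lets the filter be checked under ideal, noise-free conditions, exactly as suggested.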
  • jmg wrote: »
    cgracey wrote: »
    It just filters away the disruption caused by the chunkiness of the <7-clock cycling in the ADC bit stream patterns. This results in a two-bit improvement across the board. After this improvement, we are totally clean until we hit the 1/f noise starting at 13 bits.

    The filter can be thought of as something that just processes the first 32 and last 32 bit samples. Everything in-between is counted normally (inc on 1).

    Nobody seems to understand how simple this filter is. It will probably grow the smart pins by only 2%.
    - but if you only process the first 32 and last 32 bits, that only applies to a tiny fraction of the energy ?
    32/6144 = 0.520% of the samples.
    I'm not seeing how a claimed 2-bit improvement can come from doing nothing at all to 99% of the samples ?

    The cyclical patterns in the ADC output bitstream present a nasty situation, where they are cycling in less than 7 bit spans. For continuous sampling, this doesn't matter, but for discrete sampling, your sample gets corrupted by what it first and last accumulates. That cruft needs to be low-passed to stop it from causing unpredictable noise in the two LSBs.
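For the ideal first-order model, the period of those cyclical patterns is easy to derive: the accumulator returns to its starting value after 256/gcd(level, 256) clocks, so many input levels repeat over very short spans, which is the "chunkiness" being described (a sketch of the ideal model only; real silicon adds noise on top, and the function name is mine):

```python
from math import gcd

def pattern_period(level, acc_bits=8):
    """Repeat period of the ideal first-order ADC bitstream
    for a given constant input level."""
    m = 1 << acc_bits
    return m // gcd(level, m)

print(pattern_period(0x80))  # 2   -> 0101...
print(pattern_period(0xC0))  # 4   -> 0111...
print(pattern_period(0x81))  # 256 (odd levels give the longest patterns)
```

A discrete sample window cuts these short cycles at arbitrary phases at both ends, which is where the edge "cruft" comes from.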
  • cgracey wrote: »
    That synthesized ADC bitstream is all you need to feed your SINC filter. No need to hook up anything. Work on the platform of your choice.

    I probably want some noise. I'll give that a try though, thanks.

  • cgracey wrote: »
    The cyclical patterns in the ADC output bitstream present a nasty situation, where they are cycling in less than 7 bit spans. For continuous sampling, this doesn't matter, but for discrete sampling, your sample gets corrupted by what it first and last accumulates. That cruft needs to be low-passed to stop it from causing unpredictable noise in the two LSBs.

    I can see a soft-close, soft-open linear weighted switch, but I remain unconvinced that has actually gained you 2 LSBs improvement in ADC performance.
    Imagine those readings are seriously bad, with 0% or 100% return values; they still only account for 0.5% of the ADC answer, so they can only move the result within 99.5%~100.5% of the total possible outcomes.
  • I'll try to make some answers, JMG.
  • cgracey Posts: 10,549
    edited November 21
    jmg wrote: »
    cgracey wrote: »
    The cyclical patterns in the ADC output bitstream present a nasty situation, where they are cycling in less than 7 bit spans. For continuous sampling, this doesn't matter, but for discrete sampling, your sample gets corrupted by what it first and last accumulates. That cruft needs to be low-passed to stop it from causing unpredictable noise in the two LSBs.

    I can see a soft-close, soft-open linear weighted switch, but I remain unconvinced that has actually gained you 2 LSBs improvement in ADC performance.
    Imagine those readings are seriously bad, with 0% or 100% return values; they still only account for 0.5% of the ADC answer, so they can only move the result within 99.5%~100.5% of the total possible outcomes.

    I look at it differently.

    Without the filtering, for a steady-voltage input, samples will always span four levels.

    With the filtering, for a steady-voltage input, the samples settle at one level. That's 4x the certainty, which counts for two bits.

    And it doesn't matter how long the sample run is, although past 6k bits, the 1/f noise becomes influential.
  • cgracey Posts: 10,549
    edited November 21
    evanh wrote: »
    cgracey wrote: »
    That synthesized ADC bitstream is all you need to feed your SINC filter. No need to hook up anything. Work on the platform of your choice.

    I probably want some noise. I'll give that a try though, thanks.

    I wouldn't worry about noise, at first. Just see if you can get something sensible going under ideal conditions.
  • cgracey wrote: »
    With the filtering, for a steady-voltage input, the samples settle at one level. That's 4x the certainty, which counts for two bits.
    - but the 'filtering' only applies to the leading and trailing 32 bits, the noise in all those other bits, passes straight through.
    cgracey wrote: »
    And it doesn't matter how long the sample run is, although past 6k bits, the 1/f noise becomes influential.

    Noise is always there, spread over all the samples.
  • cgracey Posts: 10,549
    edited November 20
    jmg wrote: »
    cgracey wrote: »
    With the filtering, for a steady-voltage input, the samples settle at one level. That's 4x the certainty, which counts for two bits.
    - but the 'filtering' only applies to the leading and trailing 32 bits, the noise in all those other bits, passes straight through.
    cgracey wrote: »
    And it doesn't matter how long the sample run is, although past 6k bits, the 1/f noise becomes influential.

    Noise is always there, spread over all the samples.

    Well, you tell me how it works, then. All I know is that it improves sample quality so that contiguous values are output all the way into 13 bits.

    8K sample without filtering of first 16 and last 16 bits: [image: 2018-11-20 01.40.13.jpg]

    8K sample WITH filtering of first 16 and last 16 bits: [image: 2018-11-20 01.39.33.jpg]

    (I'm just doing 16 bits, not 32, here.)
  • Jmg, in the pictures above, it looks like we are getting a 1-bit improvement. Maybe de-noising is a better explanation than more bits.
  • OK, going back to the code, I initially misunderstood how it was working. It doesn't store state data like a digital filter might; it's just weighting the samples as they are acquired. So, yes, the LE requirement might not be too bad.

    The technique could potentially be extended to a 'crossfade' from one sample to another.

    Part of me wonders why this filter works as effectively as it does. Perhaps we should look at the clumpiness of the low-level bitstream to understand the underlying issues better.
  • jmg Posts: 12,620
    edited November 20
    cgracey wrote: »
    Jmg, in the pictures above, it looks like we are getting a 1-bit improvement. Maybe de-noising is a better explanation than more bits.

    I'm still unconvinced about filtering, but maybe you have found some ADC artifact, e.g. an enable settling-time effect that this is fixing.
    In that case:

    If you remove the trailing filter, does it make any difference ?
    If you replace the leading filter with a simpler, blanking delay, how does that look ?

    For real ADC measurements, you need to get a serious DAC and plot ADC ideal against ADC actual.

    e.g. TI has an 18-bit, 5 µs DAC with a 50 MHz SPI interface; its eval board, the DAC9881EVM, is $77. That should have a noise floor way below anything the P2 can manage internally.

    Or an EVAL-AD5680DBZ for $55, but the specs are not as good as the DAC9881's.

    Addit:
    There are also audio DACs that seem to go down to DC, but with no specified gain/drive for DC levels, so they may need a companion 'good ADC' to calibrate.
    e.g. TI has the PCM5102A, which seems to have low-cost eval boards. This DAC claims 384 kHz sampling, up to a 50 MHz SCK, and accepts 16/24/32-bit DAC streams.
    That sounds like a useful device for the P2 to be able to drive anyway.
  • cgracey Posts: 10,549
    edited November 20
    jmg wrote: »
    If you remove the trailing filter, does it make any difference ?
    If you replace the leading filter with a simpler, blanking delay, how does that look ?

    It needs filtering at both the front and back to work. Without both, you get lots of noise.
  • Jmg, the ADC is running continuously. I am just grabbing sample sets and processing them on the fly. There's no enable or delay anywhere. We are just dealing with the raw recorded bitstream.
  • I would not call this a filter. It is in fact a window function with a trapezoid window, like those used for FFTs. This reduces the leakage effect and therefore gives more accurate results.
    The integrator is still linear, not exponential as in classical filters.
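That trapezoid-window framing can be sketched directly: ramp the first 32 bit-weights up, count the middle bits at full weight, and ramp the last 32 down. This is a minimal model of the idea only, not the smart-pin implementation; the function names and the floating-point weighting are illustrative.

```python
def trapezoid_weights(n, taper=32):
    """Trapezoid window: ramp up over the first `taper` samples,
    full weight in the middle, ramp down over the last `taper`."""
    w = [1.0] * n
    for i in range(taper):
        w[i] = (i + 1) / taper          # soft open
        w[n - 1 - i] = (i + 1) / taper  # soft close
    return w

def windowed_sample(bits, taper=32):
    """Sum one bitstream chunk under the trapezoid window."""
    w = trapezoid_weights(len(bits), taper)
    return sum(wi * b for wi, b in zip(w, bits))
```

Compared with a plain sum, the taper de-emphasizes the partial modulator cycles caught at the window edges, which is exactly the edge "cruft" being low-passed away.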
  • I was reading about "sinc" filters...
    It's just a square moving average. Turns out that this has a sinc-shaped response in the frequency domain.

    Looks like a lot of people size the window such that it cuts out 50 or 60 Hz...
    Prop Info and Apps: http://www.rayslogic.com/
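The 50/60 Hz sizing trick follows from the moving average's frequency response: an N-tap average has magnitude |sin(pi*f*N/fs) / (N*sin(pi*f/fs))|, with nulls at multiples of fs/N, so choosing N = fs/50 puts a null exactly on 50 Hz. A quick sketch with illustrative numbers (the function name is mine):

```python
import math

def ma_response(f, fs, n_taps):
    """Magnitude response of an N-tap moving average at frequency f."""
    if f % fs == 0:
        return 1.0  # DC (and aliases of DC) pass at unity gain
    return abs(math.sin(math.pi * f * n_taps / fs) /
               (n_taps * math.sin(math.pi * f / fs)))

# With fs = 3000 Hz and N = 60 taps, the first null lands on 50 Hz:
print(ma_response(50, 3000, 60))   # ~0.0 (nulled)
print(ma_response(25, 3000, 60))   # partial attenuation only
```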
  • evanh Posts: 5,913
    edited November 20
    First-order is square. Second-order is triangular.
    [attached diagram]
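The "second-order is triangular" point is what a generic second-order CIC (sinc^2) decimator computes: two integrators running at the bit rate plus two differentiators at the decimated rate yield triangular-weighted sums spanning 2R bits. A generic sketch of that structure, not the P2 smart-pin logic:

```python
def sinc2_decimate(bits, R):
    """Second-order CIC (sinc^2) decimator: integrate twice at the
    input rate, difference twice at the output rate (decimate by R).
    Steady-state output for an all-ones input is full scale, R^2."""
    i1 = i2 = 0      # integrator states
    c1 = c2 = 0      # comb (differentiator) delay states
    out = []
    for k, b in enumerate(bits):
        i1 += b
        i2 += i1
        if (k + 1) % R == 0:
            d1 = i2 - c1; c1 = i2   # first comb stage
            d2 = d1 - c2; c2 = d1   # second comb stage
            out.append(d2)
    return out

# The first output is filter settling; after that, an all-ones
# bitstream reads full scale (R^2 = 64 for R = 8):
print(sinc2_decimate([1] * 32, 8))  # [36, 64, 64, 64]
```

A half-scale bitstream (0101...) settles at R^2/2, i.e. 32 for R = 8, which is the sense in which the wide accumulator turns a 1-bit stream into multi-bit samples.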
  • I am lost when it comes to understanding analog. I can lay out a pcb to be quiet for analog, but that's it.

    Something seems amiss when you can effectively remove a group of first samples and last samples and end up with a superior result. Since you say the sampling is free-running and was running before you start, then something is upsetting the results. Otherwise all the results would be noisy, and you could just take any window of samples and get the same/similar results.

    So the question is rather, what is causing those first and last sample groups to be poor?

    What you have found is the result of a problem. Now it's time to find out the why. It's not the final silicon yet, so a workaround currently isn't the solution.

    Just my observation.
    My Prop boards: P8XBlade2, RamBlade, CpuBlade, TriBlade
    Prop OS (also see Sphinx, PropDos, PropCmd, Spinix)
    Website: www.clusos.com
    Prop Tools (Index) , Emulators (Index) , ZiCog (Z80)