Modeling the ADC may be superfluous, actually, Evanh.
What I am really curious about is how many clocks it takes to get a sample of N-bit quality, and then how many past samples influence the current sample.
The differing parameters mean each graph scales according to whatever readings are produced. I'm not rescaling anything yet. Note: Discontinuous sampling doesn't get close to the same bit depth as full-blown continuous Sinc2.
That latest discontinuous Sinc2 method is producing peak values of just over 8k. So 13-bit depth from 256 clocks per reading. Fully continuous is 16-bit depth from 256 clocks.
My chosen method for doing single discontinuous sampling is to subdivide the 256 clocks into 4 chunks of 64 clocks. Run them as 4 continuous chunks. Discard the first two results then, finally, add the remaining two together.
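To make the numbers concrete, here is a minimal C model of that scheme as I read it (my illustration, not the actual smartpin logic): a second-order Sinc (CIC) decimator run over the 256 clocks with a decimation of 64, discarding the first two readings as settling and summing the last two. Full scale works out to 2 * 64^2 = 8192, matching the "just over 8k" peak (~13 bits), versus 256^2 = 65536 (~16 bits) for one continuous 256-clock reading.

```c
// Hedged sketch (illustrative, not evanh's code) of the discontinuous
// Sinc2 reading described above.  bits[] holds the raw ADC bitstream,
// one sample per sysclock.
#include <stdint.h>

uint32_t sinc2_discontinuous(const uint8_t bits[256])
{
    uint32_t acc1 = 0, acc2 = 0;    // the two Sinc2 integrators
    uint32_t d1 = 0, d2 = 0;        // differentiator delay elements
    uint32_t sum = 0;

    for (int n = 0; n < 256; n++) {
        acc1 += bits[n];
        acc2 += acc1;
        if ((n & 63) == 63) {       // one reading per 64-clock chunk
            uint32_t t1 = acc2 - d1;  d1 = acc2;   // first difference
            uint32_t t2 = t1 - d2;    d2 = t1;     // second difference
            if (n >= 128)           // discard the first two readings,
                sum += t2;          // add the remaining two together
        }
    }
    return sum;                     // full scale 8192, i.e. ~13 bits
}
```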
By leaving the smartpin disabled, Chip is able to use a Streamer to capture every clocked bit into a buffer in hubram, packed as 32 bits per longword, and then apply different effects.
That's just what I'm asking for: having the original data. I could try to do the job on the P1, but in that case I do not know for sure if it can be transferred to the P2.
So it would be very helpful to have the data stream, so I can run experiments here. If I understand right, Chip cannot transfer data to a file on the PC.
We can do this stream, Erna. OzProp showed the ADC data streams for VIO and GIO using his logic analyzer. I think at the time the focus was on the variable staggered start, rather than the ADC data stream itself. It may take a few days...
It's OK. As soon as I see a data stream I can try to figure out whether there is a certain character to the noise. We will see; hopefully there is. And it is a very basic experiment I have never done...
The same expectation here; it sounds very interesting to have many data streams available, in order to try some thoughtful analysis of their contents.
If possible, it would be good to include at least some grouped samples, showing simultaneously captured twit (2-bit) and nibble data from two and four closely-coupled ADCs (belonging to the same GIO/VIO group).
Despite the (not so) minute differences that could exist between their internal components, and the several ways they can be affected by incident noise sources, many interesting patterns could emerge from that analysis, beyond the ones already noticed by Chip and others.
The sole concern will be the assembly order of the samples when creating any resulting files; some prior agreement between those intending to collect them could ease the post-processing needed to split them apart.
The actual ADC hardware is modulating the bitstream. This bitstream appears as a digital pin IN data bit every sysclock.
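For anyone wanting to experiment offline with a capture like the one described above (32 bitstream samples packed per longword in hubram), here is a minimal sketch of the unpacking step; the function name and the LSB-first bit order are my assumptions, not Chip's actual code:

```c
// Minimal sketch: unpack a captured buffer (32 bitstream samples per
// longword) into one byte per bit, ready for a software Sinc2 or any
// other filter.  LSB-first ordering is an assumption.
#include <stdint.h>
#include <stddef.h>

void unpack_bitstream(const uint32_t *longs, size_t nlongs, uint8_t *bits)
{
    for (size_t i = 0; i < nlongs; i++)
        for (int b = 0; b < 32; b++)
            bits[i * 32 + b] = (longs[i] >> b) & 1;
}
```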
Okay, sounds great, but when you say 13-bit or 16-bit depth from 256 samples, are you able to resolve every single possible 13-bit or 16-bit value from those 256 samples? That seems, to me, the true criterion for being able to claim we've doubled the bit depth. That's what I'm totally not clear on.
All the Tukey7/32 values are even and five of them are exact (the others are 2.14 and 29.86), therefore excellent logic-wise but probably too short to be useful.
A chi-square goodness-of-fit total was calculated: the sum of (Rounded-Exact)^2/Exact. As the largest error possible is 0.5, this sum is completely distorted by errors in very low values, which have no effect in practice, so any errors for rounded values of 0 and 1 have been ignored.
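In code form that figure is just the following; the array name is illustrative, and only the formula and the 0/1 exclusion come from the description above:

```c
// Sketch of the goodness-of-fit total: sum of (Rounded-Exact)^2/Exact
// over the window coefficients, ignoring terms that round to 0 or 1.
#include <math.h>

double chi_square_fit(const double *exact, int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; i++) {
        double r = floor(exact[i] + 0.5);   // rounded coefficient
        if (r <= 1.0)
            continue;                       // rounded 0 and 1 ignored
        double e = r - exact[i];            // error, at most 0.5
        sum += e * e / exact[i];
    }
    return sum;
}
```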
I am thinking that for general purposes, this windowing can be accomplished IN SERIES with the bitstream.
In any of these windowing schemes, the end number of possible whole-bit contributions is around length/2 for either tapered end. This could be computed on the fly for summing ADC operations, since bit order is not important. For Goertzel, you can't toy with the positions of bits. This would mean that no final right-shift would be needed. Rounding could be handled in series, too.
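As a rough illustration of the in-series idea (my sketch, assuming a simple trapezoid window with unit-step ramps rather than an actual rounded Tukey table), each incoming bit is weighted by the current window coefficient as it arrives and accumulated on the fly:

```c
// Sketch: apply the window IN SERIES with the bitstream, weighting
// each bit by the current coefficient and summing on the fly.
#include <stdint.h>

uint32_t windowed_sum(const uint8_t *bits, int len, int ramp)
{
    uint32_t sum = 0;
    for (int n = 0; n < len; n++) {
        int w;                                   // window weight this clock
        if (n < ramp)             w = n + 1;     // rising ramp
        else if (n >= len - ramp) w = len - n;   // falling ramp
        else                      w = ramp;      // flat top
        sum += (uint32_t)(bits[n] * w);
    }
    return sum;
}
```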
So no Tukey window in hardware? I don't mind, if so. If the window is sliding by one sample at a time, then only one bit has to be removed in one ramp and one added in the other. A table of the increments between adjacent ramp values, not the actual values, is what would be needed.
EDIT: On second thoughts, the last sentence is probably wrong.
Before we can know which way is best to proceed, we need to understand what Evanh is working on. It might be that we have an additional 8-bit bus from each smart pin conveying eight bits of ADC sample per clock.
These smaller windows need to be tested and quantified, to get a sense of where the best logic/gain trade-off is.
I was wondering about the simpler ones, like 7/32: can they be applied at a lower clock, if more coverage is needed?
eg a 7/32 may be compact enough to have tolerable logic cost, and /2 or /3 could tune the X coverage to what is needed, at low cost.
That discard decision (dropping the first two chunk results) can be done downstream - the hardware does not have to apply it.
eg In Chip's MUX Auto-zero-scale-fit mode, he would set a sample count of 3 for each GND/VIO, but someone may want better AC performance and not worry about DC, so they would prefer nothing discarded & they would skip the Auto stuff.
What is the actual logic cost of the Sinc2 you are testing here ? - how many wide-adders etc per smart pin ?
Should smart pins be paired for cases where someone wants highest ADC specs, to save logic ?
eg Smart Pins are paired by users now for Precise Duty Cycle & Reciprocal Counting.
Kind of a sample history shift register, with new samples being added to one end, while the older ones are being discarded at the other?
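For reference, the structure being asked about would look something like this rough sketch; the depth and types are purely illustrative:

```c
// Rough sketch of a sample-history shift register: each new sample
// enters at one end while the oldest is discarded at the other.
#include <stdint.h>
#include <string.h>

#define DEPTH 8                     // illustrative history depth

typedef struct { uint8_t s[DEPTH]; } history_t;

void history_push(history_t *h, uint8_t sample)
{
    memmove(&h->s[1], &h->s[0], DEPTH - 1);  // age every entry by one slot
    h->s[0] = sample;                        // newest sample in, oldest out
}
```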
Okay. Our ADC puts out 0000001 to 0111111, as duty over a 7-bit span. To model that, ramp the NCO from 1/7th of 100% to 6/7th of 100%.
I believe freq settings would range from 1/7 * $1_0000_0000 to 6/7 * $1_0000_0000.
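A quick check of those values (my arithmetic, assuming simple truncating division of the 32-bit NCO span):

```c
// Worked numbers for 1/7 and 6/7 of the 32-bit NCO span.
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t span = UINT64_C(1) << 32;                            // $1_0000_0000
    printf("1/7: $%08llX\n", (unsigned long long)(1 * span / 7)); // $2492_4924
    printf("6/7: $%08llX\n", (unsigned long long)(6 * span / 7)); // $DB6D_B6DB
    return 0;
}
```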
Pondering this clumping effect more, it sounds like the SysCLK outpacing the ADC engine - not surprising, as you cannot expect the ADC sense amp to be flat out to 250MHz with 4mV swings.
Ringing and bounce above 30MHz would appear as clumping.
If you cannot reduce the ADC sample clock, perhaps a simple bitstream majority voter can be applied to lower the effective sample rate, and (hopefully) scrub that HF noise effect ?
Logic cost of that is 2 small counters.
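A quick software model of that voter, as I read it (the block size N and the names are mine): group the raw bitstream into blocks of N sysclocks and emit one bit per block, high when more than half of the block's bits were high. The two small counters are the position within the block and the ones count.

```c
// Sketch of a bitstream majority voter: one output bit per N input bits.
#include <stdint.h>
#include <stddef.h>

size_t majority_vote(const uint8_t *in, size_t nbits, int N, uint8_t *out)
{
    size_t nout = 0;
    int pos = 0, ones = 0;          // the two small counters
    for (size_t i = 0; i < nbits; i++) {
        ones += in[i];
        if (++pos == N) {           // end of block: vote and reset
            out[nout++] = (uint8_t)(2 * ones > N);
            pos = ones = 0;
        }
    }
    return nout;                    // number of decimated bits produced
}
```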
Nothing complicated. The graph is just a direct plot of filtered readings. I render it to a huge bitmap just to eyeball for ripples.
It's all my code, btw.
EDIT: Here's an example of the ramping bitstream: https://forums.parallax.com/discussion/download/124347/rampbitstream.bin
Ok. I'm not at my computer, so I can't look at the file, but does the duty cycle range from about 1/7th to 6/7th?
I probably should have tried a slower ramp with a smaller NCO. Tighter spread that way.
1 bit high in 50k bits?
I really want to understand this concept of how many clocks it takes for N-bit quality, and how many past samples influence the current sample.
I missed that. Plz ignore my bitstream comment.
Tukey or trapezoid could be used.