

## Comments

Nothing complicated. The graph is just a direct plot of filtered readings. I render it to a huge bitmap just to eyeball for ripples.

It's all my code, btw.

EDIT: Here's an example of the ramping bitstream: https://forums.parallax.com/discussion/download/124347/rampbitstream.bin

Ok. I'm not at my computer, so I can't look at the file, but does the duty cycle range from about 1/7th to 6/7th?

I probably should have tried a slower ramp with a smaller NCO. Tighter spread that way.

1 bit high in 50k bits?

Okay. Our ADC puts out 0000001 to 0111111, as duty over a 7-bit span. To model that, ramp the NCO from 1/7th of 100% to 6/7th of 100%.

I believe freq settings would range from 1/7 * $1_0000_0000 to 6/7 * $1_0000_0000.
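Under that assumption, the two endpoint frequency values can be computed directly. A quick sketch in Python ($1_0000_0000 is Spin2 hex notation for 2^32, written as a shift below):

```python
# NCO frequency endpoints for ramping duty from 1/7 to 6/7 of full scale.
# Full scale for the NCO frequency value is taken to be 2^32 ($1_0000_0000).
FULL_SCALE = 1 << 32

freq_lo = FULL_SCALE // 7        # 1/7 of full scale
freq_hi = 6 * FULL_SCALE // 7    # 6/7 of full scale
```

Integer division truncates toward zero here, which is close enough for a test ramp.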

13,688What I am really curious about is how many clocks does it take to get a sample of N-bit quality, and then how many past samples influence the current sample.

I really want to understand this concept.

That latest discontinuous Sinc2 method is producing peak values of just over 8k. So 13-bit depth from 256 clocks per reading. Fully continuous is 16-bit depth from 256 clocks.

My chosen method for doing single discontinuous sampling is to subdivide the 256 clocks into 4 chunks of 64 clocks. Run them as four continuous chunks, discard the first two results, then add the remaining two together.
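The chunking scheme can be modeled with a plain double accumulator standing in for the smart pin's Sinc2 filter. This is a minimal sketch of the procedure as described, not the actual hardware filter, so the output scaling here will not match the hardware's:

```python
def sinc2_chunk(bits):
    """Double-accumulator (Sinc2-style) sum over one continuous chunk."""
    acc1 = acc2 = 0
    for b in bits:
        acc1 += b
        acc2 += acc1
    return acc2

def discontinuous_sample(bits256):
    """Split 256 bits into 4 chunks of 64, run each as a continuous
    chunk, discard the first two results, add the last two together."""
    assert len(bits256) == 256
    results = [sinc2_chunk(bits256[i:i + 64]) for i in range(0, 256, 64)]
    return results[2] + results[3]
```

With this simple model, an all-ones input peaks at 2 * (64 * 65 / 2) = 4160; the real filter's peak differs, but the discard-then-add structure is the same.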

That's just what I'm asking for, having original data. I could try to do the job on P1, but in this case I do not know for sure whether it can be transferred to the P2.

So it would be very helpful to have the data stream, so I can make experiments here. If I understand right, Chip cannot transfer data to a file on the PC.

If possible, it would be good to include at least some grouped samples, showing simultaneously captured two-bit and nibble data from two and four closely coupled ADCs (belonging to the same GIO/VIO group).

Despite some (not so) minute differences that could exist between their internal components, and the several ways they can be affected by incident noise sources, many interesting patterns could emerge from those analyses, beyond the ones already noticed by Chip and others.

The only concern is the assembly order of samples when creating the resulting files; some prior agreement among those intending to collect them would ease post-processing, in order to split them apart.

I missed that. Please ignore my bitstream comment.

Okay, sounds great, but when you say 13-bit or 16-bit depth from 256 samples, are you able to resolve every single possible 13-bit or 16-bit value from 256 samples? This seems, to me, the true criterion for being able to claim we've doubled the bit depth. That's what I'm totally not clear on.
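One way to reason about that question: in a simple Sinc2 model (my model, not necessarily the smart pin's implementation), a '1' bit arriving i clocks before the end of the window contributes weight i to the final accumulator, so the output is a subset sum of {1, 2, ..., N}. Subset sums of consecutive integers cover every integer in [0, N(N+1)/2], which suggests every output code is reachable in principle; whether the analog front end can actually produce the required bit patterns is a separate matter. A small sketch with N = 8 for speed (the argument scales to 256):

```python
N = 8  # window length, kept small for enumeration

# Reachable Sinc2 outputs are subset sums of the weights {1..N}.
reachable = {0}
for w in range(1, N + 1):
    reachable |= {s + w for s in reachable}

full_scale = N * (N + 1) // 2  # all-ones output
```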

All the Tukey7/32 values are even and five of them are exact (others are 2.14 and 29.86), therefore excellent logic-wise but probably too short to be useful.

A Chi-Square goodness of fit total was calculated, the sum of (Rounded-Exact)^2/Exact. As the largest error possible is 0.5, this sum is completely distorted by errors in very low values which have no effect in practice and any errors for rounded values of 0 and 1 have been ignored.
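That metric can be sketched in a few lines. The coefficient values below are made-up placeholders for illustration, not the actual Tukey7/32 table:

```python
def chi_square_fit(exact, rounded):
    """Sum of (Rounded - Exact)^2 / Exact, skipping rounded values of
    0 and 1, whose errors are ignored as having no practical effect."""
    return sum((r - e) ** 2 / e
               for e, r in zip(exact, rounded)
               if r > 1)

# Placeholder window coefficients, for illustration only.
exact = [2.14, 8.0, 16.0, 29.86]
rounded = [round(e) for e in exact]
fit = chi_square_fit(exact, rounded)
```

Only the two inexact values contribute, so the total stays tiny, illustrating how the 0.5 error bound keeps large coefficients from dominating.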

I am thinking that for general purposes, this windowing can be accomplished IN SERIES with the bitstream.

In any of these windowing schemes, the end number of possible whole-bit contributions is around length/2 for either tapered end. This could be computed on the fly for summing ADC operations, since bit order is not important. For Goertzel, you can't toy with the positions of bits. This would mean that no final right-shift would be needed. Rounding could be handled in series, too.

So no Tukey window in hardware? I don't mind, if so. If the window is sliding by one sample at a time, then only one bit has to be removed in one ramp and one added in the other. A table of increments between the adjacent ramp values, not the actual values, is what would be needed.

EDIT:

On second thought, the last sentence is probably wrong.

Tukey or trapezoid could be used.
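The trapezoid variant is straightforward to sketch: each bit gets a whole-number weight that ramps up over the first R samples, holds flat, and ramps down over the last R. The parameters here are hypothetical; the thread's "7/32" refers to a Tukey-style window with a 7-sample taper over 32 samples, which this trapezoid only approximates:

```python
def trapezoid_weights(n, ramp):
    """Whole-bit weights: ramp 1..ramp up, hold flat, ramp back down."""
    return [min(i + 1, ramp, n - i) for i in range(n)]

def windowed_sum(bits, ramp):
    """Apply the window in series with the bitstream: since bit order
    inside a plain sum doesn't matter (unlike Goertzel), each incoming
    bit is just multiplied by its weight and accumulated."""
    w = trapezoid_weights(len(bits), ramp)
    return sum(b * wi for b, wi in zip(bits, w))
```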

These smaller ones need to be tested and quantified, to get a sense of where the best logic/gain trade-off is. I was wondering about the simpler ones, like 7/32: can they be applied at a lower clock if more coverage is needed?

E.g. a 7/32 may be compact enough to have tolerable logic cost, and /2 or /3 could tune the X coverage to what is needed, at low cost.

E.g. in Chip's MUX auto-zero-scale-fit mode, he would set a sample count of 3 for each GND/VIO, but someone may want better AC performance and not worry about DC, so they would prefer nothing discarded and would skip the auto stuff.

What is the actual logic cost of the Sinc2 you are testing here? How many wide adders, etc., per smart pin?

Should smart pins be paired for cases where someone wants the highest ADC specs, to save logic?

E.g. smart pins are already paired by users for Precise Duty Cycle and Reciprocal Counting.

Kind of a sample history shift register, with new samples being added to one end, while the older ones are being discarded at the other?
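Exactly that idea, in sketch form: a fixed-length sample history with an incrementally maintained sum. This is a software model; hardware would use a shift register plus an add/subtract adjustment per sample:

```python
from collections import deque

class SlidingWindowSum:
    """Keep the last `size` samples; pushing a new one discards the
    oldest and adjusts the running total by the difference."""
    def __init__(self, size):
        self.buf = deque([0] * size, maxlen=size)
        self.total = 0

    def push(self, sample):
        self.total += sample - self.buf[0]  # new sample in, oldest out
        self.buf.append(sample)             # maxlen drops the oldest
        return self.total
```

This mirrors the one-removed/one-added update described above for the sliding window.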

Pondering this clumping effect more, it sounds like the SysCLK outpacing the ADC engine. Not surprising, as you cannot expect the ADC sense amp to run flat out at 250 MHz with 4 mV swings.

Ringing and bounce above 30 MHz would appear as clumping.

If you cannot reduce the ADC sample clock, perhaps a simple bitstream majority voter can be applied to lower the effective sample rate and (hopefully) scrub that HF noise effect?

Logic cost of that is 2 small counters.
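A majority voter over groups of N bits is easy to model (a sketch; the group size of 5 is arbitrary). The two small counters in hardware would be one counting bits within the group and one counting the ones:

```python
def majority_vote(bits, n):
    """Collapse each complete group of n bits to a single majority bit,
    reducing the effective sample rate by a factor of n. Ties (possible
    when n is even) resolve to 0 here."""
    out = []
    for i in range(0, len(bits) - n + 1, n):
        ones = sum(bits[i:i + n])
        out.append(1 if 2 * ones > n else 0)
    return out
```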