As the derivative taken over two samples introduces a fractional (half-sample) phase shift, I now take the derivative over three samples, which lets the original and filtered curves be overlaid simply by delaying the original signal by one sample.
That gives a smoother filtered signal and noise with sharper slopes.
What to do with that? There are different mechanisms that produce noise. We see that reading a noisy signal and then differentiating and integrating removes high-frequency noise. This is likely what a conventional sigma-delta ADC does: it measures a bitstream and applies a filter. The point with the Prop ADC is that we can freely determine the characteristics of the filter by reading the counter more or less often, and by building a multi-stage delta/integrate filter chain. We still have access to all the taps in the filter and can closely couple complex control loops to the manifold of signals we get from a single measurement.
So it's up to us to decide what we regard as information and what as noise.
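To make the delta/integrate idea concrete, here is a minimal C sketch (my own illustration, not code from this thread); the counter array, window length and function names are assumptions:

/* A minimal sketch of a two-stage delta/integrate chain on Prop-ADC counter
   snapshots, assuming c[] holds the free-running ones-counter sampled at a
   fixed interval.  The window length and names are illustrative only. */
#include <stdint.h>
#include <stddef.h>

#define WIN 8                      /* integrator window, freely chosen */

/* Stage 1: the delta of two counter reads is the raw sample
   (the number of ones seen in that interval). */
static uint32_t delta(const uint32_t *c, size_t n)
{
    return c[n] - c[n - 1];        /* unsigned subtraction handles wrap */
}

/* Stage 2: a running sum (integrator) over WIN raw samples removes
   high-frequency noise; its group delay is (WIN - 1) / 2 samples. */
static uint32_t smooth(const uint32_t *c, size_t n)
{
    uint32_t acc = 0;
    for (size_t k = 0; k < WIN; k++)
        acc += delta(c, n - k);    /* requires n >= WIN */
    return acc;                    /* output scale is WIN counts per LSB */
}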
If we measure this signal, the PWM-ed "back-EMF" of a BLDC motor:
we can extract these signals with the help of the P1!
Imagine what we will do with the P2! People say it will be a lot: fantastic progress, never seen before, it will finally make the Propeller GREAT. Scientists are still incredulously skeptical. But we will see, very soon, maybe sooner than anybody could expect, what the real greatness of the Propeller is!
That only works if the law generating the signal produces a cubic spline. In the case of the ADC there seems to be some temperature drift, there is shot noise, and it looks as if jitter has some influence.
As this data has 16-bit resolution, so that the sampling rate is clkfreq/2^16, the question is how to filter the signal's high-frequency noise while introducing minimal phase shift.
With splines, instability increases with the number of nodes, expressed as degree. 2nd-degree curves have three nodes; those are the analytical curves, conics and simpler arcs. 3rd degree and above are splines. In design it is rare to exceed 7th degree.
Detail increases with the number of node points.
Truth is, just about anything can be represented by a spline, but as the curve degree goes up, stability goes down, and it's exponential after a handful of nodes.
That means extrapolation is far less useful too.
An alternative is to limit the curve degree, say to 7th or below, and keep a rolling data set for curve extrapolation, essentially making new splines that share all but a node or two. Drop old ones; keep the last one and the current one to extrapolate with.
A lot depends on how far one needs to look ahead. If it is only a point or two, a rolling scheme can make sense: basically, low-degree curves. High-degree ones will generate noise as much as they may hit good data expectations.
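As a deliberately minimal illustration of the rolling low-degree idea (my sketch, not code from this thread): for equidistant samples, forcing the third difference to zero is the same as extrapolating one step along a 2nd-degree curve.

#include <stdint.h>

/* Extrapolate one step ahead from the last three equidistant samples by
   fitting a quadratic (third difference = 0).  Purely illustrative. */
static int32_t predict_next(int32_t y_old, int32_t y_mid, int32_t y_new)
{
    return 3 * y_new - 3 * y_mid + y_old;
}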
I only know splines from CAD systems; the reason is that with very few data points you can describe a nice-looking curved line or surface. But in this case we have equidistant data points that do not coincide with the signal, and we have to fit the data to some signal we do not know. At every single moment we have to decide whether the value 5 is correct or not. So, having no forecast, we just look to the past to see where we came from, to guess where we are. And what we know from the past is just an earlier guess. Nothing to build a stable ... I just miss the word ...
Indeed, there is no need for lookahead in principle; it is just that the value at the filter output represents a value in the past, so there is a phase lag. Now if you could predict sample values, just one or two, you could feed these values into the filter and so reduce the phase lag. From the viewpoint of the predictor, the past is our present.
Now, if your filter is a Gaussian one with its center located in the present, the predicted values will be multiplied by smaller factors: the farther away they are, the less precise they are, and the less influence they have.
But a predictor doesn't look into the future; it looks at the past, and from the past, and only from the past, the future is predicted. And now and then we get an October surprise ;-)
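A rough sketch of that idea (not from the thread): a symmetric, binomial-weighted (Gaussian-like) average centred on the present sample, where the taps on the future side are filled with predicted values. predict_ahead() is a placeholder for whatever predictor is used; the weights are illustrative.

#include <stdint.h>

#define HALF 2                                            /* taps each side */

static const int32_t w[2 * HALF + 1] = { 1, 4, 6, 4, 1 }; /* sum = 16 */

extern int32_t predict_ahead(const int32_t *newest, int steps);

/* newest points at the current sample inside a buffer that holds at least
   HALF older samples before it. */
static int32_t filter_now(const int32_t *newest)
{
    int64_t acc = 0;
    for (int k = -HALF; k <= HALF; k++) {
        int32_t v = (k <= 0) ? newest[k] : predict_ahead(newest, k);
        acc += (int64_t)w[k + HALF] * v;   /* less weight far from the centre */
    }
    return (int32_t)(acc / 16);            /* divide by the weight sum */
}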
Oh, I forgot to mention that 2nd-degree curves are accurate for interpolation. If the data fits, confidence in predictive behavior can be very high, and outliers can be detected.
3rd degree and above are less accurate, and are exponentially less predictive. Higher confidence only really happens when the look-ahead is a small percentage of the curve distance: say 5 percent at lower degrees. Who knows above 7.
I think the original question was: why does the ADC show this noise figure at 16-bit resolution? My opinion is that it is simply the limit one can reach. So: how can the noise figure be improved? Chip showed he can gain one bit by measuring ground and VIO. At that point the data rate goes down, which again increases noise. And now we are thinking about how to filter the signal, for example measuring at 12-bit resolution and averaging the values to get 16 bits. I did some experiments with PhiPi's data in the P1 thread and just transferred the results to this application. The filter I tested only needs shift and add, worst case a divide by 3, so it can very likely run in real time, since a 16-bit conversion takes 64k clocks.
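I don't know the exact filter tested above, but as an example of the shift-and-add class it describes: a first-order low-pass that needs one subtract, one shift and one add per sample, plus the one "divide by 3" case for a 3-tap average.

#include <stdint.h>

static int32_t lp_state;

static int32_t lowpass(int32_t x)
{
    lp_state += (x - lp_state) >> 3;   /* shift amount sets the cutoff */
    return lp_state;
}

static int32_t avg3(int32_t a, int32_t b, int32_t c)
{
    return (a + b + c) / 3;            /* the worst-case divide by 3 */
}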
Might there be any value in summing up the ADC bits into two separate accumulators, where even-clock bits sum into the first accumulator and odd-clock bits sum into the second accumulator? For that matter, might 16 accumulators, each summing every 16th bit, offer something?
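In pseudo-C, treating the raw bitstream as an array of 0/1 values, the two-accumulator version of that question looks like the sketch below; on the chip this would happen per clock in hardware or PASM, and phase-split accumulation like this is just the idea being asked about, not an existing P2 feature.

#include <stdint.h>
#include <stddef.h>

static void split_accumulate(const uint8_t *bits, size_t n,
                             uint32_t *even_acc, uint32_t *odd_acc)
{
    for (size_t i = 0; i < n; i++) {
        if (i & 1)
            *odd_acc  += bits[i];      /* odd-clock bits */
        else
            *even_acc += bits[i];      /* even-clock bits */
    }
}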
It's bedtime now, so only a short reply: as I understand the decimation and sin(x)/x(?) filters in conventional sigma-delta ADCs, the target is high resolution in time and amplitude with minimal phase error. To compete with sampling ADCs, the sampling rate has to be constant. With the "primitive" Prop ADC, which has no internal data-processing features, we can read the bitstream at any rate, even overlapping, even varying, which is extremely useful. And as we need many clocks to get appropriate resolution, and have cores available to implement filters, I believe we don't need more hardware for now.
Some research work has to be done to gain a better and simpler understanding of what a "filter" does and how control loops actually work.
One curious thing is the VIO/GIO relative % bands.
At 55000 counts, 5 grid steps in Y is roughly 1% (0.2%/grid), and most of the dYs are within 2 bands, so inside 0.4% drift.
At 11000~12500, each grid step is 0.9~0.8%, but a dX of up to 4 grids is common, for over 3% drift. Some of the pins keep dY inside 1 grid, but not many, and it is not easy to predict which ones will.
With a balanced SDM (50% threshold, same overscales) you would expect close to the same % errors on the GIO and VIO test connections.
The uncalibrated error bands (total plot spreads on the graph sheet) are ~30% in X (GIO) and 5.6% in Y (VIO).
The SDM error bands should be set by the resistor ratios, and I would have expected the ratio matching on the same die to be rather better; plus, that does not explain the GIO/VIO difference.
It should be possible to measure the resistor ratio matching by measuring the uA into GND/VIO?
eg a 135k pulldown, with 100nF parallel cap for noise filtering, and a good multimeter should read
RT = 135k
MP = RT*1.65/(330k + RT)     = 0.479 V  (mean expected)
LR = RT*1.65/(330k/1.1 + RT) = 0.512 V  (330k 10% low)
HR = RT*1.65/(330k*1.1 + RT) = 0.447 V  (330k 10% high)
HR - MP = -0.0317 V
MP - LR = -0.0330 V
So roughly 1mV corresponds to 1k ohm of variation (0.3%), and a 33k (10%) change here gives ~33mV.
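The same divider arithmetic, as a few lines of C for anyone wanting to re-run the numbers (the 1.65 V and 330k internal figures are simply taken from the lines above):

#include <stdio.h>

int main(void)
{
    double rt = 135e3, rint = 330e3, v = 1.65;
    double mp = rt * v / (rint + rt);         /* ~0.479 V, nominal */
    double lr = rt * v / (rint / 1.1 + rt);   /* ~0.512 V, internal R 10% low */
    double hr = rt * v / (rint * 1.1 + rt);   /* ~0.447 V, internal R 10% high */
    printf("MP=%.3f LR=%.3f HR=%.3f  HR-MP=%.4f MP-LR=%.4f\n",
           mp, lr, hr, hr - mp, mp - lr);
    return 0;
}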
(I'm still wary of those CMOS inverters for threshold detect oscillating at the threshold)
I wrote some code for interpolating a dataset years ago. The original spline functions came from Numerical Recipes in C.
It's a cubic spline function that could be converted to use fixed point arithmetic. You give it N input samples and tell it how many output samples you want. You can either compress a data set (input 10 samples and ask for 5 points) or extrapolate (input 5 samples and ask for 10 points).
If you think 7 is the right number, it's just a simple matter of allocating a few fixed buffers and making the input buffer a moving window of samples.
Here's the C code: http://www.dainst.com/info/programs/maf_cub/maf_cub.c
Here's the code ported to PHP and made into a VE curve fitter for the Megasquirt: http://www.dainst.com/info/html/vebin.php
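A sketch of that moving-window arrangement; resample_spline() stands in for the Numerical-Recipes-based routine described above (N samples in, M samples out), and its name, signature and extrapolation behaviour here are my assumptions.

#include <string.h>

#define WIN   7    /* keep the effective degree low, per the discussion */
#define AHEAD 2    /* only extrapolate a point or two */

extern void resample_spline(const float *in, int n_in, float *out, int n_out);

static float win[WIN];

/* Slide the window by one sample and return the first extrapolated point. */
static float push_sample(float x)
{
    float out[WIN + AHEAD];

    memmove(win, win + 1, (WIN - 1) * sizeof win[0]);
    win[WIN - 1] = x;
    resample_spline(win, WIN, out, WIN + AHEAD);
    return out[WIN];               /* first point beyond the measured window */
}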
I think the original question was: why does the ADC show this noise figure at 16-bit resolution? My opinion is that it is simply the limit one can reach. So: how can the noise figure be improved? Chip showed he can gain one bit by measuring ground and VIO. At that point the data rate goes down, which again increases noise.
Sort of. Presuming the primary noise source is thermal effects, then we have a reasonable chance of compensating in software without sacrificing sample rate.
Brian/Lachlan,
Need a run of temperature ranges now. I don't suppose either of you have a tiny thermocouple you could wedge under the exposed thermal pad for monitoring the temperature. Actually, there's a hole in the PCB at the centre of the pad right? Melting the solder there and inserting the probe right to the pad would be ideal I think.
I'm reposting the graph of this after correcting a mismatch of frequencies and adding markers and a title:
Wow, look how the 20MHz tag-ends vary with graph locus.
The LL quadrant has 20MHz as MIN, and the UR quadrant has 20MHz as MAX.
If we take MHz as indicating temperature, that's not good news for precision analog readings, as a single temperature correction is not going to work on all pins.
It may be better news for the use-analog-errors-as-temperature idea, but the huge variation between parts will make calibration a nightmare, and the noise is likely way too much for sensible temperature readings.
ie some temperature can already be inferred from the requested MHz, so a chip reading needs to improve on that. Diode sensing seems to be universally used elsewhere, with quite good temperature results given the rather loose diode spec most use of 'any 2N3904'.
And that part has notably rearranged spread from the other part that Oz measured too. I'm not making any judgements until I see some thermal comparisons.
The graph isn't really very helpful just yet, imho. It just looks cool. We really need to repeat those runs ten times or more at the same temperature, on the same part, to see if each point is stable. That would be useful info.
The reason I say this is that both sysclock and specific pin number can be considered static parameters with a simple zero-and-span calibration. Dynamic changes during sampling are the ones of concern. If there is non-thermal random noise, then it's game over for what I'm up to. But if thermal is the be-all here, then we've got a decent chance to track it well with a neighbouring ADC. Maybe even more precisely than Chip's solution, because both ADCs can run in parallel this way. The thermal compensation can therefore be applied to the matched sample, case by case, in real time.
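In outline, the paired-ADC idea above could look like the sketch below: one pin samples the signal while a neighbouring pin samples a fixed reference (GIO or VIO), and the reference's drift from its calibrated baseline is scaled and subtracted from the matched sample. The gain k is a placeholder to be found by calibration; none of this is working code from the thread.

#include <stdint.h>

static int32_t compensate(int32_t sample, int32_t ref_now,
                          int32_t ref_baseline, float k)
{
    return sample - (int32_t)(k * (float)(ref_now - ref_baseline));
}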
That's just voltage vs current bias. Still gonna want that thermocouple attached to build any relationship.
Sense Diode temperature behaviour is quite well known, but yes, the exact mV/°C of the P2 process will need to be calibrated/confirmed at some stage.
Ambient calibration should be good enough for initial proof-of-concept testing.
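For reference, the usual diode-sense conversion is just a linear correction around a calibration point; a typical small-signal forward-voltage tempco is in the region of -2 mV/degC, and the exact figure for the P2 process is precisely what would need calibrating, so both constants in this sketch are placeholders.

static float diode_temp_c(float vf_mv, float vf_cal_mv, float t_cal_c)
{
    const float mv_per_c = -2.0f;                    /* assumed tempco */
    return t_cal_c + (vf_mv - vf_cal_mv) / mv_per_c;
}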
I haven't made any sense of that PDF, but no matter how good the diode might be, I wouldn't trust the ADC to provide a temperature measurement for the moment. Not without some matching external measurements of the thermal pad.
@ozpropdev, can you post (or dropbox or pm) that neat test code you used for the 20 to 340 MHz sweep, and I'll run it on board 2?
Here it is Lachlan.
Thanks Brian. Here are the results from Board2.
I'm reposting the graph of this after correcting a mismatch of frequencies and adding markers and a title:
Thanks for adding those 20 MHz labels. I was reflecting on how odd this 'pull towards centroid' thing is: what other process does that with increasing perturbation (temperature)?
I talked with Ozprop about how he did the measurements on his board, and we both had a somewhat similar setup: boards at an odd angle to the bench, but he had a small CPU fan and I had a big oscillating desk fan. It occurs to me we need to 'get better' on this front. I was thinking of sandwiching a P2D2 between two big blocks of aluminum, which we can pre-charge up or down in temperature, to keep the die as stable as possible for the duration of the test.
At some point we also need to introduce a variable 1v8 rail, but not just yet. There are enough variables already.
Definitely! But also, if you want some particular tests run, just sing out.
Does anyone have two P2D2s ?
It would be nice to check the P2's operation from a Clipped Sine Oscillator.
Those spec > 0.8V p-p and are more a filtered square wave than a real clipped sine, but 'clipped sine' explains why you need an amplifier.
FWIR, the output impedance was around 220 ohms.
A test setup with a divided square wave from one P2 (eg 1K & 270R), AC coupled (1nF) into a second P2's XIN, would allow MHz tests using the first P2's PLL.
A load of 20pF gives a tr of ~9.7ns, so that looks OK for 20~30MHz tests; 10pF would give a range of 40~60MHz, etc.
Common standard values are 19.2MHz, 26MHz, 38.4MHz, 48MHz, 52MHz
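A quick sanity check of those rise-time figures, using tr ~ 2.2*R*C for a single RC and the ~220 ohm source impedance of the divider mentioned above (the loop and constants are mine, just to make the arithmetic repeatable):

#include <stdio.h>

int main(void)
{
    const double r = 220.0;
    const double caps[] = { 20e-12, 10e-12 };
    for (int i = 0; i < 2; i++) {
        double tr = 2.2 * r * caps[i];            /* 10-90% rise time */
        printf("C = %2.0f pF  tr = %.1f ns\n", caps[i] * 1e12, tr * 1e9);
    }
    return 0;
}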
Thanks for adding those 20 MHz labels. I was reflecting on how odd this 'pull towards centroid' thing is: what other process does that with increasing perturbation (temperature)?
Yeah, that's just strange..
The variation between pins is also quite a surprise, as in the custom layout I assume these cells are cut-and-paste copies, so they should be very similar in all respects.
Is there any info around on how much the P1 varies between pins when running ADCs?