How much memory for averages?
realolman1
Posts: 55
in Propeller 1
Suppose I was to average the input of an ADC in the following way:
I would add each input value to a total and then, at the end of a period of time, divide that total by the number of additions... If I ended up doing this thousands of times, would I need any special amount of memory allotted? So far as I can see, there would only be a few variables involved... the input value, the running total, the number of values added together, and the end result... the averaged value.
So it shouldn't take any special amount of memory should it?
thank you
Comments
That's right. Worst case is four variables. Could get away with three variables if you stored the average value in the sum of values, so: total = total/count would divide the sum of all the readings by the number of values and store it back in total.
Edit: Could reduce it further since the number of readings to average is usually a constant. Typically the individual readings would be added to the sum in a loop and the divide by the number of values done immediately after exiting the loop.
The Basic Stamps have a small number of variables so storing the number of readings to average as a constant in the program rather than a variable makes sense. The same is true for any micro that uses a relatively large eeprom for programs and a small ram for variable data.
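The loop-then-divide scheme described above can be sketched as follows. This is a minimal Python illustration rather than Stamp BASIC or Spin; `NUM_READINGS` and `read_adc()` are placeholders I've made up, not anything from the thread:

```python
# Minimal sketch of the scheme described above: add readings to a sum in a
# loop, then divide once by the number of readings after exiting the loop.
# NUM_READINGS and read_adc() are illustrative placeholders.

NUM_READINGS = 1000            # stored as a constant, not a variable

def read_adc():
    return 2048                # stand-in for a real ADC read

total = 0
for _ in range(NUM_READINGS):
    total += read_adc()

average = total // NUM_READINGS
```

Only three working variables (the loop index, the running total, and the result) are needed, which matches the point about keeping the count as a program constant.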
Let's say we take an ADC reading, call it "sample". And we have an average value we are maintaining, call it "average". Then:
Set the new value of average to (0.1 * sample) + (0.9 * average)
In this way, given a constant stream of samples of equal value, the average will exponentially approach the true average value. Given a time-varying stream of samples, the average will hover around the true average.
Change the 0.1 and 0.9 to 0.2 and 0.8 and the average will respond more quickly. Or change to 0.01 and 0.99 and it will respond more slowly.
More generally we could write:
new average = (a * sample) + (1 - a) * old average
Where "a" is a factor between 0 and 1 that determines the time constant, i.e. the speed at which the average responds. Effectively "a" determines how many historic samples contribute to the average value.
Depends what you want to do, of course, but if you collect and sum a large number of samples, say 1000, then divide that sum by 1000, you have to wait 1000 sample times before you get a new average value. The average values you get will make step changes every 1000 samples. A bit clunky.
With the exponential averaging you get a continuously varying average value.
Of course there are many other ways to get an average: https://en.wikipedia.org/wiki/Moving_average
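For concreteness, here is a sketch of the exponential average above in Python rather than Propeller Spin. The `make_ema` helper and the choice to seed the state with the first sample are my own illustrative assumptions:

```python
# Illustrative exponential (IIR) average: new = a*sample + (1 - a)*old.
# make_ema and the "seed with the first sample" behavior are assumptions
# for this sketch, not anything defined in the thread.

def make_ema(a):
    """Return an update function that holds its own running average."""
    state = {"avg": None}

    def update(sample):
        if state["avg"] is None:
            state["avg"] = float(sample)    # seed with the first sample
        else:
            state["avg"] = a * sample + (1 - a) * state["avg"]
        return state["avg"]

    return update

ema = make_ema(0.1)    # small "a": slow response, long effective memory
```

With a = 0.2 the average tracks changes faster; with a = 0.01 it is much smoother but lags more, exactly as described above.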
Oh and use the modulus operator to roll over that counter rather than testing against 4.
Thus saving a bunch of longs by not needing a big array of readings and a bunch of byte codes as well.
You've just had your first lesson in the subject of digital filters!
I'll point out that there are two major categories, Finite Impulse Response (FIR) and Infinite Impulse Response (IIR). A plain averaging method falls into the FIR category (JonnyMac's is a FIR), while Heater's falls into the IIR category (it has internal feedback that maintains a decay rate).
IIRs are less memory-hungry and generally demand less processing than FIRs, while FIRs offer a sharper cut-off frequency.
This object employs two methods: gt, which compares two unsigned longs, and div, which divides a 64-bit double-long represented as two 32-bit longs. When computing the sum, the current value of the lower 32 bits is compared with its new value after the reading is added. If the new value is lower, a carry occurred, so 1 is added to the upper 32 bits.
In the demo program, an asterisk (*) is printed every time a carry occurs. At the end of 100,000 iterations, the 64-bit sum is divided by the count (100,000) and the average printed. Because the "readings" are just random numbers ranging from 0 to 5,999,999, you would expect the average to be around 3,000,000. Here's the output from the program:
I've attached this program as an archive that includes the umath object.
-Phil
If you only have a thousand things to measure then I guess the common average of "sum N things and divide by N" may mean something.
But when dealing with signals, conceptually they run forever. We cannot sum an infinite number of samples and divide by infinity. Even if we could, that's a bit longer to wait for a result than I would like.
If we batch that infinite stream of samples into batches of a thousand, or whatever, then we get an average result every thousand sample periods.
I would not call any of those a "true average". The result depends on which thousand samples you take. It also makes discontinuous jumps from one batch to the next.
All in all, I think the exponential average is as "true" as any.
All depends what you want to do with this mythical average number I guess.
All of these so-called "averages" have their place. My post was just to demonstrate how one of them could be computed with the Propeller when the sum of a bunch of readings might overflow a 32-bit long.
-Phil
As I said, if you only have N values then the school book average is perhaps something "true". In some sense.
When we have a continuous stream of values then there are many ways to get an average. Which one of them is "true"? Or more importantly which one has the best meaning for your application? As you say 'All of these so-called "averages" have their place'.
Even the "box car" is suspect. Its result depends on how many boxes you decide to maintain.
Then we have things like the average of a symmetric AC signal. Which is always zero no matter what you do. Hence RMS.
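A quick, purely illustrative Python demonstration of that last point: the plain mean of one full cycle of a symmetric sine is (near) zero, while the RMS recovers the amplitude (peak divided by the square root of 2 for a sine):

```python
# Illustrative only: mean vs RMS of a symmetric AC signal. The mean of a
# full sine cycle is ~0 no matter the amplitude; the RMS is peak/sqrt(2).
import math

def mean(xs):
    return sum(xs) / len(xs)

def rms(xs):
    return math.sqrt(sum(x * x for x in xs) / len(xs))

# One full cycle of a unit-amplitude sine, sampled at 360 points:
cycle = [math.sin(math.radians(d)) for d in range(360)]
# mean(cycle) is ~0; rms(cycle) is ~0.7071
```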
Also, I'll point out, Phil, a "boxcar" is the running average filter.
I took Phil's advice and learned to use ring buffers and (as usual) Phil was right; ring buffers are really useful.
In Phil's recent post he mentioned using a boxcar average. A boxcar average uses a ring buffer. When a new value is to be added to the boxcar, the oldest value is subtracted from the total and the new value is added to the total. The ring buffer is used so the oldest value can be retrieved.
In case you're interested in using a boxcar average, here's the averaging code I use in one of my current projects.
The code could be simplified a bit if I used a buffer of longs instead of a buffer of word sized values. Since I'm using words, the values have to be sign extended before being subtracted from the total.
As has been previously mentioned, using power of two sized buffers can make the math a bit easier. Using a power of two buffer size allows one to shift bits rather than divide when computing the average from the total.
One advantage of averaging words is the total is less likely to have a 32-bit rollover problem.
Here's the code I use to initialize the averaging buffer.
FYI the variables with numeric prefixes can be accessed as registers. I include the register number in the name to make code maintenance easier.
Here's the buffer defined in a DAT section.
Here are some constants I use with this boxcar filter.
One thing I like about this implementation of the boxcar filter is the size of the filter can be changed on the fly.
To change the size of the boxcar buffer, the value "filterBits236" (aka register #236) is changed to a value from 0 to 7. When this variable/register is changed, the program triggers the execution of the "InitBoxcarFilter" method. In order to reduce the complexity of the code a bit, I don't maintain the contents of the FIFO buffer across the size transition. I just fill the buffer with the current average value. This is a bit of a trade-off in accuracy which is acceptable in my current project.
Boxcar may well be the running average filter. But even then its output depends on how many boxes you have. The boxcar average of the last 10 samples may be very different from the boxcar average of the last 100 or 1000.
Which one is the "true"average? It's for you to define.
Question is what is it one is trying to do? A 1 volt sinusoid superimposed on a 10 volt level has an average of 10 volts over a long enough time span. Any technique that only takes into account a finite amount of history will give you a different result if the sinusoid period is long compared to that history.
Variables
buffer - a binary multiple sized array (example 256)
index - current read/write position in the buffer
accumulator - total sum of all values in the buffer (subtract end element when adding new element)
average = accumulator>>8
btw, the "end" element can be treated as the value that you are replacing with the new element.
new average = (a * sample) + (1 - a) * old average
It has an infinite impulse response, since the influence of samples taken eons ago never truly goes to zero -- excepting, of course, the effects of integer arithmetic or floating-point precision factors.
-Phil
Maybe I've misinterpreted but to me that's what makes it a box, AKA a rectangular window function. It's no longer a box if those conditions are not true.
-Phil
The strength of a low-pass filter is usually expressed in terms of its cut-off frequency and, relatedly, the stronger the filter the more lag it introduces. This pretty much defines your filter history in both FIR and IIR. However, a FIR has a crisp, well-defined delay, while an IIR's history is usually characterized by its -3 dB threshold.
Digitally, sample rate and bit depth are factors that play into the signal-to-noise ratio rather than filter strength.
It's amazing how effective this trick is though. All new digital communications designs are built from the ground up on this basic idea. Modems can now dynamically track around interference. It has allowed more and more robust and higher performing radio links.
F for Finite, is exactly as it seems - The filter output only responds to any input for a finite time period. This is achieved by having a limited working set of historical samples.
I for Infinite, is also as expected - The filter output keeps responding to any input indefinitely. This is achieved by using the previous output value as part of the new output value, ie: There is internal feedback.
I was such a folk some years back. I found a nice circuit for a three-way crossover built with opamps. Thinking it was an interesting approach, I proceeded to implement it in software. I figured out how to create the integrators, summing elements, and whatever else it had in there by modeling the capacitors, resistors, and gain elements. I checked its response curves and phase shifts; amazingly, they matched the specs of the original analog circuit! A friend of mine created a plug-in for whatever sound-mangling program he had on his PC and announced it worked very well with his speaker setup.
All of that might have been a bit easier if I had ever heard of FIR and IIR !
You see, the last time I had any formal exposure to such things there was no FIR and IIR. We just had filters. Filters were made with capacitors, resistors and inductors.
(Yeah, I know an analog filter is an IIR)
The problem for any beginner is that if they find a typical intro to digital filters they immediately get hit with gobbledygook like "linear time-invariant systems" and a pile of scary looking maths. But they just wanted an average...
I thank you all for your replies, much of which is over my head, but I did catch several things that are new to me that I found to be very interesting and useful, and I will be using them to the extent I am able in the future.
My interest in starting this thread was trying to graph the input from the Sigma Delta ADC, and having my graph jump around instead of being somewhat smooth, because it seems to me that the resolution steps are too large... I started another thread for that. That, and (although I have read some threads about it) I don't really understand the amount of stack space needed for cogs.
Thank you for the education
It was just a bit overwhelming for what I had in mind when I started the thread, not knowing anything about all these potential solutions, ... an average solution for an average guy , indeed
Ha, yes. Give these dogs a bone and watch them tug on it in all directions
Hopefully it's a bit more fun and informative than the simple answer to your question which would have been just:
"No".