Moving average question from a Beau Schwabe reply.
Bits
I am trying to understand this code better from an old thread. In post #5 Beau Schwabe mentions this:

DataBase = DataBase - Average + InputData
Average = DataBase / Samples

I can't understand what the variables are and how to implement them into my code. What is DataBase? Is it an array? What is Samples? Etc...
In my application I have a variable that ranges from 0 to 65535, labeled "power". I get this number 5 times a second and need to filter it.
Some clarity would be appreciated.
Comments
Read post #2 of the same "old thread". It may take a couple of read-throughs, but Beau explained it pretty well there.
Bruce
Every time you stuff a new sample into the array you get the average back. You can see why the index needs to be in the global space: this value needs to be persistent between calls to the method.
...takes 80% of the current value and adds it to 20% of the new sample.
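That 80/20 blend can be sketched in a few lines. This is an illustrative Python sketch of the idea, not JonnyMac's actual object; the function name and the constant-input loop are mine:

```python
def filter_80_20(current, sample):
    """One smoothing step: keep 80% of the current filtered value
    and blend in 20% of the new sample (integer arithmetic)."""
    return (current * 4 + sample) // 5

# Feeding a constant input pulls the filtered value toward it:
value = 0
for _ in range(50):
    value = filter_80_20(value, 100)
# value settles at 96, a little under 100, because the integer
# divide drops a remainder at every step
```

The truncation shortfall is worth noticing: with integer math the filtered value can settle slightly off the true input level.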
Thanks for trying to help, JonnyMac, but I can't use that object; it contains too much filler that is confusing me even more. In fact your example code seems to be a bit clearer, except that I can not get it to work. I bet this seems easy to you fellas, yet I am pulling my hair out. It is one of those days when my head won't soak this in easily!
My main problem is that I can't figure out how to take an average when the window is not yet full.
Next, I can't take an average once the window is completely filled, because I need to toss away the older values and add the newer ones. I'll post an example as soon as I can visualize it.
I am not getting the results I would expect.
This is what I think I should get with each index
1. (100) / 1 = 100
2. (100 + 200) / 2 = 150
3. (100 + 200 + 100) / 3 = 133
4. (100 + 200 + 100 + 400) / 4 = 200
And this is what I am getting
1. 25
2. 75
3. 100
4. 175
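The expected numbers above come from dividing by however many samples have arrived so far, and only by the window size once the window is full. A minimal Python sketch of that behavior (the helper name is mine):

```python
from collections import deque

def make_window_average(size):
    """Running average over the last `size` samples; while the window
    is still filling, divide by the number of samples so far."""
    window = deque(maxlen=size)
    def add(sample):
        window.append(sample)          # deque drops the oldest once full
        return sum(window) // len(window)
    return add

avg = make_window_average(4)
results = [avg(s) for s in (100, 200, 100, 400)]
# results == [100, 150, 133, 200], matching the expected list above
```

The `maxlen` deque handles both problems at once: `len(window)` grows until the window is full, and appending past `maxlen` automatically tosses the oldest value.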
DataBase := DataBase - Average + InputData
Average := DataBase / SAMPLES
Note that if the subsequent values of InputData all equal the first sample, DataBase will always remain at the same multiple of that value, and of course Average will always be the same. If your program does not initialize the value of DataBase and starts it out at zero, it will gradually approach the value of InputData * SAMPLES, but that may be many, many samples later (quantitatively, an exponential). As the InputData fluctuates or changes, the values of DataBase and Average follow along gradually.
This is an example of what is called an IIR (infinite impulse response) filter, because the effect of the first (or any) reading never completely disappears if the accumulator has infinite resolution. The filter Jon was talking about in post #3 is a FIR (finite impulse response) filter, because the effect of any particular input reading drops out completely after one cycle through the array. In that case too, you can properly initialize the filter by setting every element of the array to the value of the first sample.
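That FIR initialization tip can be sketched as follows (Python, with hypothetical names). Seeding every slot with the first sample means the output starts at that sample instead of climbing up from zero:

```python
def fir_average(samples, size):
    """Boxcar (FIR) moving average: seed the whole window with the
    first sample so the output starts at a sensible value instead of
    ramping up from an uninitialized (zeroed) array."""
    window = [samples[0]] * size   # initialize every element to sample 1
    idx = 0
    out = []
    for s in samples:
        window[idx] = s            # overwrite the oldest slot
        idx = (idx + 1) % size     # circular index, persistent across calls
        out.append(sum(window) // size)
    return out
```

With a constant input the output never dips; with a varying input there is no startup transient from the zeroed array.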
John Abshier
Ahhhhh - that explains it. I'll try to whip something up after a bit.
[Edit] I see that John explained what I was working on in code.
Oh boy, I am ready to write code.
Basically, think of DataBase as a pool: a single variable that represents an accumulation of a certain number of samples that you set. As long as the accumulated value of the samples does not exceed the holding "bit capacity" of the "pool", it will work.
Ok, so how does it work?
Say the database has an accumulated value of 1000 and you have your number of samples set to 10
The "Average" is the Database / Samples (1000/10) or a value of 100
To add a new sample to the database, first subtract the average from the Database or "pool"
So if the Average is 100, the database value now becomes 900
Next, add the new sample to the database. Suppose that the value of the new sample is 80
Now the database becomes 980
Calculating the new average ... Database / Samples ... you get 98 now as your average.
The process continues for every NEW sample you want to introduce into the database.
The purpose of this type of low-pass software filter is to eliminate the need for storing the data in an array and keeping a "window average" on a continuous data stream.
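The walkthrough above translates almost line for line into code. Here is a Python sketch (SAMPLES and the function name are my own labels); with the original poster's 16-bit "power" readings and 10 samples, the pool tops out around 655,350, comfortably inside a 32-bit variable:

```python
SAMPLES = 10

def pool_filter(database, input_data):
    """One step of the pool filter: take one average's worth out of
    the pool, pour the new sample in, and recompute the average."""
    average = database // SAMPLES
    database = database - average + input_data
    return database, database // SAMPLES

db = 1000                       # pool currently holds 10 samples' worth
db, avg = pool_filter(db, 80)   # introduce a new reading of 80
# db is now 980 and avg is 98, exactly as in the walkthrough above
```

No array, no window index: the single pool variable is all the state the filter keeps between samples.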
As Beau set it up, it is governed by an iterated difference equation (yikes, if you don't like the math, let your eye slide across to the text!):

Y[0] = N * x[0]
Y[k] = Y[k-1] - A[k-1] + x[k]
A[k] = Y[k] / N

That is, step zero initializes the accumulator Y with N times the value of the first sample, then the iteration is applied to subsequent samples. It says: add the new value to the accumulator, but subtract a little (the prior average!) from the accumulator to keep it in balance.
The equation can be simplified by a substitution, plugging the average equation A[k-1] = Y[k-1] / N back into the main equation, so that it depends only on the accumulation and the new input value:

Y[k] = Y[k-1] * (N - 1) / N + x[k]
The final average, A[k] = Y[k] / N, is now just an auxiliary calculation, and there is no need to compute it at each step unless it is needed as an output. Also, you may not want A[k] at all. Suppose you use the multiplier N = 10. In that case, the value of Y[k] is 10 times the average, and you can print it out with a decimal point, yy.y, so in effect you are interpolating between the readings. 4 readings at 95 mixed with 6 readings at 96 should give 95.6 if you keep the decimal point, but it will be 95 when subjected to an integer divide. I recommend doing it that way instead of going back to A[k] at each step.
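The decimal-point trick, sketched in Python with integer math only (the reading stream here is made up: four 95s and six 96s):

```python
N = 10

def step(y, sample):
    """IIR step on an accumulator y that holds N times the average."""
    return y - y // N + sample

readings = [95, 95, 96, 96, 96, 95, 96, 95, 96, 96]  # four 95s, six 96s
y = N * readings[0]            # seed with N copies of the first reading
for r in readings[1:]:
    y = step(y, r)
whole, tenth = divmod(y, N)    # y is 10x the average: print as ww.t
smoothed = f"{whole}.{tenth}"  # the fractional digit survives
```

For this stream the accumulator ends at 956, printing as 95.6, where an integer divide back to the average would have reported a flat 95.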
The larger the value of N, the more sluggish the response, but smoother, that is to say, the output takes a long time to respond to changes, but it irons out fluctuations.
In the above, the new reading contributes 1/N and the old accumulation contributes (N-1)/N. In general, the proportions can be adjusted using a factor M (M < N):

Y[k] = Y[k-1] * (N - M) / N + M * x[k]

With N = 16 and M = 4, for example, at each step the accumulation contributes 3/4 and the new reading contributes 1/4.
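The N = 16, M = 4 case, as an integer Python sketch (names and values are mine). Each step keeps 3/4 of the accumulation and gives the new reading a 1/4 weight:

```python
N, M = 16, 4

def step(y, sample):
    """Generalized step: y hovers around N times the average, and
    each new reading carries weight M/N (here 4/16 = 1/4)."""
    return y - (y * M) // N + M * sample

y = N * 50               # seed as if the first reading were 50
for _ in range(40):
    y = step(y, 100)     # then feed a step change up to 100
average = y // N         # settles at 100; y itself settles at N * 100
```

Note the steady state: once y reaches N times the input, the subtracted and added terms cancel exactly and y stops moving.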
You might think this is equivalent to the following, without multiplying by N to start:

Y[k] = Y[k-1] - Y[k-1] / N + x[k] / N

Now Y[k] is exactly the average, no longer a multiple of N. That formula is okay in floating point, but it does not hold up when the numbers are integers: the integer divisions drop the remainders. Doing the math on an accumulator that hovers around N times the average has an effect something like using floating point; the internal state of the accumulator maintains higher precision. Also, you gain the advantage of interpolation to get extra decimal digits of precision if the process warrants it. Again, highly recommended if you are doing this kind of smoothing and want better precision without going to floating point.
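The difference is easy to see with alternating readings of 95 and 96 (a made-up stream). In the divide-first form, 95/10 and 96/10 both truncate to 9, so the filter cannot even tell the two inputs apart; the accumulator form tracks the difference:

```python
N = 10
samples = [95, 96] * 20            # alternating readings, true mean 95.5

# Divide-first form: keep only the average itself as an integer.
a = samples[0]
for s in samples[1:]:
    a = a - a // N + s // N        # both divides truncate; a never moves

# Accumulator form: keep N times the average, divide only for output.
y = N * samples[0]
for s in samples[1:]:
    y = y - y // N + s             # the remainder is retained inside y
# a is stuck at 95 forever, while y climbs to hover around 959-960,
# i.e. the accumulator responds to the 96s that the other form ignores
```

This is only a sketch of the precision argument; the accumulator form still has its own truncation bias, but it keeps roughly N times more internal resolution.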
The rate of convergence is determined by the decaying power term ((N - M) / N)^k, which governs how fast the effect of an old reading dies away.
The graph shows the rates for various ratios.
The yellow horizontal line is the factor 1/e = 1/2.72 = 0.37 that often comes up in RCtime and frequency calculations. The green, red, and blue lines show the curves for (1/2)^k, (9/10)^k, and (99/100)^k. For example, the red line for (9/10)^k crosses the 1/e line at about k = 10 iterations.
To translate that to actual time units, suppose your Prop is executing that line of code 1000 times a second, 1 ms per iteration. Then the time constant is 10 milliseconds, and that is how long it takes the filter to respond to a step change at the input. In terms of traditional frequency response, the -3 dB filter corner frequency will be about 16 Hz (1 / (2*pi*t)).
Note that the ratio 1/10 could come from either (N=10, M=1) or from (N=100, M=10) or any other combination of N and M that gives (N - M) / N = 1/10. The time constant is the same. The difference there is only in the precision of the accumulator. You can choose the time constant and the precision independently.
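A quick Python check of that claim (the function and the step counts are mine): both (N=10, M=1) and (N=100, M=10) decay with ratio 9/10 per step, so their step responses line up, but the bigger accumulator lands closer to the ideal floating-point value:

```python
def run(n, m, steps, target):
    """Integer step response: start from zero, feed `target` each step."""
    y = 0
    for _ in range(steps):
        y = y - (y * m) // n + m * target
    return y

coarse = run(10, 1, 25, 100)     # accumulator holds 10x the average
fine = run(100, 10, 25, 100)     # accumulator holds 100x the average
# coarse/10 = 93.3 and fine/100 = 92.87; the ideal float response is
# 100 * (1 - 0.9**25) = 92.82..., so the larger accumulator is closer
```

Same time constant, different precision, exactly as described: the choice of (N, M) pair sets the accumulator's resolution independently of the response speed.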
Impressive explanation, Tracy. I am having fun learning all this.