Moving average question from a Beau Schwabe reply. — Parallax Forums


Bits Posts: 414
edited 2012-02-13 08:26 in Propeller 1
I am trying to understand this code better from an old thread. In post #5 Beau Schwabe mentions this.

DataBase = DataBase - Average + InputData
Average = DataBase / Samples

I can't understand what the variables are or how to implement them in my code. What is DataBase? Is it an array? What is Samples? etc...

In my application I have a variable labeled "power" that ranges from 0 to 65535. I get this number 5 times a second and need to filter it.

Some clarity would be appreciated.

Comments

  • idbruce Posts: 6,197
    edited 2012-02-10 18:25
    Bits

    Read post #2 of the same "old thread". It may take a couple times of reading it over, but it appears that Beau explained it pretty well.

    Bruce
  • JonnyMac Posts: 9,197
    edited 2012-02-10 18:48
    Sometimes looking at a solution from a different angle helps. I recently coded a commercial HVAC system that used a rolling average of sensor readings in the calculations. In the globals VAR space you need an array to hold your values, and an index into that array. You'll define a constant (RA_WINDOW) that tells how big your array is. The code becomes pretty simple:
    pub rolling_avg(newsample) | i
    
      samples[raidx] := newsample                                   ' add new sample
      raidx := ++raidx // RA_WINDOW                                 ' update index
      
      repeat i from 0 to RA_WINDOW-1                                ' accumulate samples
        result += samples[i]
      result /= RA_WINDOW                                           ' average
    


    Every time you stuff a new sample into the array you get the average back. You can see why the index needs to be in the global space: this value needs to be persistent between calls to the method.
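    JonnyMac's method can be sketched in Python (an illustration, not his exact code; the names RA_WINDOW, samples, and raidx mirror his Spin listing):

```python
RA_WINDOW = 4
samples = [0] * RA_WINDOW    # global sample buffer
raidx = 0                    # global write index, persistent between calls

def rolling_avg(newsample):
    """Store one sample in the circular buffer, return the window average."""
    global raidx
    samples[raidx] = newsample           # add new sample
    raidx = (raidx + 1) % RA_WINDOW      # update index, wrapping around
    return sum(samples) // RA_WINDOW     # integer average, like Spin's /

for v in (100, 200, 100, 400):
    print(rolling_avg(v))    # 25, 75, 100, 200 -- low until the buffer fills
```

    Note that with an all-zero starting buffer the early results are pulled down, which comes up again later in this thread.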
  • BR Posts: 92
    edited 2012-02-10 19:02
    If you want to look at a demo of a typical moving average filter, you might look at this object.
  • Bits Posts: 414
    edited 2012-02-11 07:09
    I think I understand it to some degree. I am a bit lost on how he is keeping track of the size of the sample (it must be indexed some way), and I'm not sure if his code will run with negative numbers.
  • JonnyMac Posts: 9,197
    edited 2012-02-11 08:21
    In that object you are given two choices for the number of samples to average: 4 or 16, and the samples can be any values up to a long. One caution: you must ensure that with n samples of maximum size (for your application) you will not have an overflow when you sum all samples. Note that the code uses SAR (~>) to handle the division; this preserves the sign so it should work with negative numbers.
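    Python's >> on integers is also an arithmetic (sign-preserving) shift, so the effect of Spin's ~> can be checked there. A quick illustration (avg16 is a made-up name):

```python
def avg16(total):
    # Divide a 16-sample sum by 16 with an arithmetic right shift;
    # the sign is preserved, so negative totals stay negative.
    return total >> 4

print(avg16(1600))     # 100
print(avg16(-1600))    # -100
```

    Keep in mind this kind of shift rounds toward negative infinity, so a sum like -1601 yields -101 rather than -100.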
  • JonnyMac Posts: 9,197
    edited 2012-02-11 08:33
    Are you wanting to filter by rolling average? Another [very simple] way to do digital filtering is to take x% of the current value and add it to y% of the new sample (x% + y% = 100%). The greater the x% (current value), the slower the response of the filter, and vice versa.
    value := (value * 8 / 10) + (new_sample * 2 / 10)
    


    ...takes 80% of the current value and adds it to 20% of the new sample.
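    The same blend, sketched in Python with integer math like the Spin one-liner (blend is a made-up name):

```python
def blend(value, new_sample):
    # 80% of the current value plus 20% of the new sample, integer math
    return (value * 8 // 10) + (new_sample * 2 // 10)

value = 0
for _ in range(50):
    value = blend(value, 1000)   # feed a constant input of 1000

print(value)   # settles at 996: truncation keeps it just shy of 1000
```

    The small truncation error is one reason to keep the filter state scaled up, a point Tracy expands on later in the thread.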
  • Bits Posts: 414
    edited 2012-02-11 08:37
    JonnyMac wrote: »
    In that object you are given two choices for the number of samples to average: 4 or 16, and the samples can be any values up to a long. One caution: you must ensure that with n samples of maximum size (for your application) you will not have an overflow when you sum all samples. Note that the code uses SAR (~>) to handle the division; this preserves the sign so it should work with negative numbers.

    Thanks for trying to help, JonnyMac, but I can't use that object; it contains too much filler that is confusing me even more. In fact your example code seems to be a bit clearer, except that I can not get it to work. I bet this seems easy to you fellas, yet I am pulling my hair out. One of those days my head won't soak this in easily. !!!!

    My main problem is that I can't figure out how to take an average when the window is not yet full.

    Next, I can't take an average once the window is completely filled, because I need to toss away the older values and add the newer ones. I'll post an example as soon as I can visualize it.
  • Bits Posts: 414
    edited 2012-02-11 08:50
    Perhaps I am doing something wrong; take a look and see.
    CON
      _clkmode = xtal1 + pll16x
      _xinfreq = 5_000_000
    
    Obj
      Ser:"fullduplexserial"
      
    var
      Long tester[ 15 ] 
      Long Samples[ 100 ],raidx,RA_WINDOW 
       
    Pub start   | i ,rest
      ser.start(31,30,0,19200)
       
      {just data for avg}
      
      tester[ 0 ] := 100
      tester[ 1 ] := 200   
      tester[ 2 ] := 100    
      tester[ 3 ] := 400
      tester[ 4 ] := 400
      tester[ 5 ] := 200      
      tester[ 6 ] := 100
      tester[ 7 ] := 300  
      tester[ 8 ] := 200     
      tester[ 9 ] := 300
    
      RA_WINDOW := 4 {window size?}
      raidx := 0
      ser.str(string("start ",13)) {debug}
      pause(3000)
      
      i := 0
      repeat 10   
          pause(1000)            
          rest := rolling_avg(tester[ i ])
          i++         
          ser.str(string("result "))  
          ser.dec(rest)
          ser.tx(13)
          ser.tx(13)      
    
    
    pub rolling_avg(newsample) : out | i
    
      samples[ raidx ] := newsample                                 
      raidx := ++raidx // RA_WINDOW                               
      
      repeat i from 0 to RA_WINDOW-1                           
        out += samples[ i ]
      out /= RA_WINDOW
        
    Pub pause(a)
             
      Waitcnt((clkfreq / 1_000) * a + cnt)
    


    I am not getting the results I would expect.
    This is what I think I should get with each index

    1. (100) / 1 = 100
    2. (100 + 200) / 2 = 150
    3. (100 + 200 + 100) / 3 = 133
    4. (100 + 200 + 100 + 400) / 4 = 200

    And this is what I am getting
    1. 25
    2. 75
    3. 100
    4. 175
  • Tracy Allen Posts: 6,664
    edited 2012-02-11 09:50
    In Beau's filter, there is no array. Both DataBase and Average are simple variables (let them be longs), and SAMPLES might as well be a constant. The first time through, set the value of DataBase equal to SAMPLES times the first sample reading. Subsequently use the formula as given,

    DataBase := DataBase - Average + InputData
    Average := DataBase / SAMPLES

    Note that if the subsequent values of InputData all equal the first sample, DataBase will always remain at the same multiple of it, and of course Average will always be the same. If your program does not initialize the value of DataBase and starts it out at zero, it will gradually approach the value of InputData * SAMPLES, but that may be many, many samples later (quantitatively, an exponential approach). As the InputData fluctuates or changes, the values of DataBase and Average follow along gradually.

    This is an example of what is called an IIR (infinite impulse response) filter, because the effect of the first (or any) reading never completely disappears if the accumulator has infinite resolution. The filter Jon was talking about in post #3 is a FIR (finite impulse response) filter, because the effect of any particular input reading drops out completely after one cycle through the array. In that case too, you can properly initialize the filter by setting every element of the array to the value of the first sample.
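    The initialized IIR filter Tracy describes, sketched in Python (variable names follow Beau's; SAMPLES = 10 is an arbitrary choice for illustration):

```python
SAMPLES = 10   # the constant; larger = smoother but slower

def init_database(first_reading):
    # Prime the accumulator with SAMPLES copies of the first reading
    return SAMPLES * first_reading

def update(database, input_data):
    # Beau's two lines: DataBase := DataBase - Average + InputData
    #                   Average  := DataBase / SAMPLES
    database = database - database // SAMPLES + input_data
    return database, database // SAMPLES

db = init_database(100)
for reading in (100, 100, 100):
    db, avg = update(db, reading)

print(db, avg)   # 1000 100 -- constant input leaves the filter at rest
```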
  • John Abshier Posts: 1,116
    edited 2012-02-11 09:50
    The problem is initialization. You should get what you expect from sample 5 onward. There are ways to solve this problem, but it makes the code more complicated. Your main program could just ignore the first second of samples.

    John Abshier
  • Bits Posts: 414
    edited 2012-02-11 09:57
    John Abshier wrote: »
    The problem is initialization. You should get what you expect from sample 5 onward. There are ways to solve this problem, but it makes the code more complicated. Your main program could just ignore the first second of samples.

    John Abshier

    Ahhhhh - that explains it. Ill try to whip something up after a bit.
  • JonnyMac Posts: 9,197
    edited 2012-02-11 09:58
    If you want to average on fewer values than your sample window size you must keep track of that. The attached demo shows you how (you need to add another global and adjust the method that does the averaging). After you run this things should become clear. Note that it uses my version of FDS so that it can do right-justified printing of decimal values; this makes the display cleaner (see image).

    [Edit] I see that John explained what I was working on in code.
  • Bits Posts: 414
    edited 2012-02-11 22:09
    Thanks, I got it!!!! Now I am not the stupid gal at the bar.
    Oh boy, I am ready to write code.
  • Beau Schwabe Posts: 6,568
    edited 2012-02-11 22:10
    Wow, sorry Bits, this post was completely off of my radar. It seems as though it has been explained fairly well ... I like Tracy's take on it.

    Basically think of Database as a Pool, a single variable that represents an accumulation of a certain number of samples that you set. As long as the accumulated value of samples does not exceed the holding "bit capacity" of the "pool" then it will work.

    Ok, so how does it work?

    Say the database has an accumulated value of 1000 and you have your number of samples set to 10

    The "Average" is the Database / Samples (1000/10) or a value of 100

    To add a new sample to the database, first subtract the average from the Database or "pool"
    So if the Average is 100, the database value now becomes 900

    Next, add the new sample to the database. Suppose that the value of the new sample is 80
    Now the database becomes 980

    Calculating the new average ... Database / Samples ... you get 98 now as your average.

    The process continues for every NEW sample you want to introduce into the database.

    The purpose of this type of low-pass software filter is to eliminate the need for storing the data in an array and keeping a "window average" on a continuous data stream.
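    Beau's worked numbers, replayed in Python (integer division stands in for Spin's /):

```python
SAMPLES = 10
database = 1000                        # accumulated "pool"

average = database // SAMPLES          # 1000 / 10 = 100
database = database - average + 80     # drain the average, add sample: 980
average = database // SAMPLES          # new average: 98

print(database, average)   # 980 98
```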
  • Tracy Allen Posts: 6,664
    edited 2012-02-12 20:49
    Maybe this topic of simple smoothing warrants a long-winded tutorial on the background math! It is very useful sometimes, and important to know what to expect or how to get the effect you want.

    As Beau set it up, it is governed by an iterated difference equation (yikes, if you don't like it, let your eye slide across to the text!),

    Y[0] = N * x[0]
    Y[k] = Y[k-1] - A[k-1] + x[k],   where A[k] = Y[k] / N


    That is, step zero initializes the accumulator Y with N times value of the first sample, then the iteration is applied to subsequent samples. It says, add the new value to the accumulator, but subtract a little (the prior average!) from the accumulator to keep it in balance.

    The equation can be simplified by a substitution, plugging the expression for the average back into the main equation, so that it depends only on the accumulation and the new input value.

    Y[k] = ((N-1)/N) * Y[k-1] + x[k],   with A[k] = Y[k] / N


    The final average A[k] is now just an auxiliary calculation, and there is no need to compute it at each step unless it is needed as an output. Also, you may not want A[k] at all. Suppose you use the multiplier N = 10. In that case, the value of Y[k] is 10 times the average, and you can print it out with a decimal point as yy.y, so in effect you are interpolating between the readings. 6 readings at 95 mixed with 4 readings at 96 should give 95.4 if you keep the decimal point, but it will be 95 when subjected to an integer divide. I recommend doing it that way instead of going back to A[k] at each step.

    The larger the value of N, the more sluggish the response, but smoother, that is to say, the output takes a long time to respond to changes, but it irons out fluctuations.

    In the above, the new reading contributes 1/N and the old accumulation contributes (N-1)/N. In general, the proportions can be adjusted as follows, using a factor M (M < N). The specific example below shows N = 16 and M = 4; in that case, at each step, the accumulation contributes 3/4 and the new reading contributes 1/4.

    Y[k] = Y[k-1] - M * Y[k-1]/N + M * x[k]  =  ((N-M)/N) * Y[k-1] + M * x[k]
    (N = 16, M = 4:  Y[k] = (3/4) * Y[k-1] + 4 * x[k])


    You might think this is equivalent to the following, without multiplying by N to start:

    Y[k] = ((N-M)/N) * Y[k-1] + (M/N) * x[k]


    Now Y[k] is exactly the average, no longer a multiple of N. That formula is okay in floating point, but it is not true when the numbers are integers: integer divisions drop the remainders. Doing the math on an accumulator that hovers around N times the average has an effect something like using floating point; the internal state of the accumulator maintains higher precision. Also, you gain the advantage of interpolation, extra decimal digits of precision, if the process warrants it. Again, highly recommended if you are doing this kind of smoothing and want better precision without going to floating point.
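    Tracy's scaled-accumulator trick can be sketched in Python (my own sample values; with N = 10 the accumulator reads directly as a fixed-point yy.y):

```python
N = 10
y = N * 95                            # initialize with the first reading, 95

for x in (96, 95, 96, 95, 96, 96):    # input wobbles between 95 and 96
    y = y - y // N + x                # Y[k] = Y[k-1] - Y[k-1]/N + x[k]

print(y // 10, y % 10, sep='.')       # 95.4 -- one extra digit for free
```

    Reading Y itself, instead of the integer-divided average, preserves the fractional part that a plain integer divide would throw away.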

    The rate of convergence is determined by the power series,

    ((N-M)/N)^k  ->  0  as k -> infinity


    The graph shows the rates for various ratios.
    [Figure: decay curves of (1/2)^k, (9/10)^k, and (99/100)^k versus iteration k, with a horizontal line at 1/e]


    The yellow horizontal line is the factor 1/e = 1/2.72 = 0.37 that often comes up in RCtime and frequency calculations. The green, red and blue lines show the curves for 1/2, 9/10 and 99/100 raised to the power k. For example, the red line for (9/10)^k crosses the 1/e line at about k = 10 iterations.
    To translate that to actual time units, suppose your Prop is executing that line of code 1000 times a second, 1 ms per iteration. Then the time constant is 10 milliseconds, and that is how long it takes for the filter to respond to a step change at the input. In terms of traditional frequency response, the -3 dB filter response frequency will be about 16 Hz (1 / (2*pi*t)).

    Note that the ratio 1/10 could come from either (N=10, M=1) or from (N=100, M=10) or any other combination of N and M that gives (N - M) / N = 1/10. The time constant is the same. The difference there is only in the precision of the accumulator. You can choose the time constant and the precision independently.
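    A quick numeric check of that claim (an illustration, not from the thread): (N=10, M=1) and (N=100, M=10) take the same number of steps to pull within 1/e of a step input's final value.

```python
def steps_to_1_over_e(n, m):
    # Iterate Y = Y - M*Y/N + M*x for a step input x = 1 (final value N)
    # and count steps until the remaining gap falls below N/e.
    y, k = 0.0, 0
    while n - y > n / 2.718281828:
        y = y - m * y / n + m
        k += 1
    return k

print(steps_to_1_over_e(10, 1), steps_to_1_over_e(100, 10))   # 10 10
```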
  • Bits Posts: 414
    edited 2012-02-13 07:38
    Wow, thanks everyone!

    Impressive explanation Tracy I am having fun learning all this.
  • Tracy Allen Posts: 6,664
    edited 2012-02-13 08:26
    I also meant to add that if you skip the initialization step, the filter still works, but that is like presenting it with a step-function input from zero up to the level of your readings. It takes one time constant to come to the 1/e point, another time constant to reach (1/e)^2, and so on. That is an IIR filter. An uninitialized FIR filter takes exactly the amount of time that it takes to fill the array, after which the initial condition drops completely from memory.