Smoothing out Data
SRLM
I'm probably going to use some infrared on my next project, and am going to use the method of a duty sweep to get a range of distances. However, I've noticed that the numbers seem to jump around quite a bit, even with the object stationary. Is there a simple, Propeller-based solution that smooths out the numbers? I seem to remember reading about something where the second-to-last number is imposed on the most recent, which then becomes the second-to-last, and so on; but it seemed very mathematical and was difficult to understand.
What would be ideal is a general solution that can work on any sort of numeric range of values, with a variable that can determine how "smooth" to make it. Any suggestions?
Post Edited (SRLM) : 10/23/2008 5:43:29 PM GMT
Comments
There is a simple iterative averaging technique for streaming data that I have used in the past. I call it "window averaging": it takes a portion of the data, or "window," determined by the number of samples specified, and produces an average of just that window.
' read the RAW data value into InputData here
DataBase := DataBase - Average + InputData
Average := DataBase / Samples
This assumes that Average starts out at zero and that ALL of the raw data reads are positive. 'DataBase' is a variable that represents the window you are averaging. The maximum number of Samples is limited so as not to cause an overflow in DataBase. For example, if DataBase is defined as a WORD and the raw data value does not exceed 2000, then the largest value the Samples variable can be set to is 32, since 65535 / 2000 = 32.77. If the raw data value does not exceed 300, then the maximum number of Samples that fits in a single WORD variable is 218, since 65535 / 300 = 218.45.
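For reference, here is a minimal Spin sketch of the same window-averaging idea; the 16-sample window and the method name are just placeholder choices, and it assumes Average starts at zero and all readings are positive, as noted above:

CON
  SAMPLES = 16                        ' size of the averaging window (keep DataBase from overflowing)

VAR
  long DataBase                       ' running window total
  long Average                        ' current window average

PUB WindowAverage(InputData) : avg
  ' drop one old average's worth from the window, add the new raw reading, then re-average
  DataBase := DataBase - Average + InputData
  Average := DataBase / SAMPLES
  avg := Average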
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Beau Schwabe
IC Layout Engineer
Parallax, Inc.
new := (old + input)/2
old := new
Where old is initialized to 0 (or, if you know in advance what the average is, initialize it to that). The effect this has on the value is to give the current input 1/2 weight, the previous input 1/4 weight, the one before that 1/8 weight, then 1/16, 1/32, ....
If you add all the weights together, the total approaches 1, which means there's no scaling error in the formula (technically, due to rounding errors, the total weight is very slightly less than 1, but with a 32-bit accumulator this difference is negligible).
If you initialize old to 0, it is best to run the filter for a few iterations before you start using the data it spits out, since it needs to build up the running average (before the average is reached, the algorithm will artificially attenuate the data stream).
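As a minimal Spin sketch of the above (the variable names are just illustrative):

VAR
  long old                            ' running average; start at 0 or at a known average

PUB Smooth(input) : new
  ' the new reading gets 1/2 weight; each older reading's weight halves every step
  new := (old + input) / 2
  old := new

Call it once per reading, and if old starts at zero, discard the first dozen or so results while the average climbs up to the real signal level.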
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Paul Baker
Propeller Applications Engineer
Parallax, Inc.
Post Edited (Paul Baker (Parallax)) : 10/23/2008 6:32:50 PM GMT
You can adjust the weights to "slow down" or "speed up" the filter, as follows:
output := (input * m + output * n) / (m + n)
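For example, as a small Spin sketch (the weights of 1 and 7 are just an illustrative "slow" choice):

CON
  M = 1                               ' weight given to the new input
  N = 7                               ' weight given to the previous output (larger = smoother, slower)

VAR
  long output

PUB Filter(input) : result
  ' with M = 1, N = 7 each new reading gets 1/8 weight;
  ' M = 1, N = 1 reproduces the earlier half-and-half version
  output := (input * M + output * N) / (M + N)
  result := output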
-Phil
Addendum: Here is a thread that describes a "median filter" which works better when there is the occasional wild reading, e.g. from noise bursts. You can also obtain a median, then apply an IIR filter to it to smooth out transitions from one median value to another.
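A minimal Spin sketch of that combination, assuming a 3-sample median (enough to reject a single wild reading) followed by a 1/8-weight IIR stage:

VAR
  long s0, s1, s2                     ' the last three raw readings
  long smoothed                       ' IIR-filtered median

PUB AddSample(raw) : result
  ' shift the new reading into a 3-deep history
  s2 := s1
  s1 := s0
  s0 := raw
  ' median of the last three readings throws out a single spike
  result := (s0 <# s1) #> ((s0 #> s1) <# s2)
  ' then smooth the median with an IIR filter (new median gets 1/8 weight)
  smoothed := smoothed + (result - smoothed) ~> 3
  result := smoothed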
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
'Just a few PropSTICK Kit bare PCBs left!
Post Edited (Phil Pilgrim (PhiPi)) : 10/23/2008 7:01:07 PM GMT
Method #1 and Method #2: one is fixed to comparing only two data points, while the other can cover a large number of data points.
If Method #1 is set so that it also uses two data points or samples, then BOTH outputs are identical... see "2.JPG".
As you increase the number of data points, the output data becomes smoother... see "4.JPG", "6.JPG", and "8.JPG".
I have included an interactive Excel spreadsheet where you can change the number of samples to visualize how much effect it has on the signal.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Beau Schwabe
IC Layout Engineer
Parallax, Inc.
Post Edited (Beau Schwabe (Parallax)) : 10/23/2008 11:22:51 PM GMT
It would be interesting to see the difference between hardware filtering with a cap and the software filtering methods suggested. I think software filtering would be better; it's much easier to adjust software than hardware.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
·"I have always wished that my computer would be as easy to use as my telephone.· My wish has come true.· I no longer know how to use my telephone."
- Bjarne Stroustrup
I usually use this, which is similar (closer to Phil's, and exactly the same as Beau's #2 if k = 0.5):
output := output * (1 - k) + input * k
or
output := output + k * (input - output)
where k is just the smoothing factor; but if
k = 2 * pi * F * t / (2 * pi * F * t + 1)
where t is the time from the last call (or, better, t is constant because the calls are scheduled), then F is the cutoff frequency of the LP filter.
First, try to reduce any noise that degrades the signal. If the noise is from the power supply, appropriate caps help a lot. If the noise is 60 Hz or 120 Hz hum from lights or induced currents, shielding (optical and/or electrical) can help reduce it. Note that reflected light from TV screens or computer monitors can be a significant source of noise. Incandescent bulbs generate a ton of modulated infrared, too.
Next, if you are bouncing infrared, consider modulating the source and demodulating the return; this will help reduce noise a great deal (assuming there isn't a similar source of modulated signal in your environment).
http://tom.pycke.be/mav/71/kalman-filtering-of-imu-data
http://tom.pycke.be/mav/92/kalman-demo-application
Kalman filtering is basically a way of using a noisy but accurate measurement to correct an estimate of current state. For balancing robots, it's used to correlate noisy accelerometer readings of 'absolute angle' with a slowly drifting 'estimate of absolute angle' computed by summing gyro readings over time.
Jason
avg += (new - avg) >> avg_scale
If avg_scale = 0, instant update. If avg_scale = 1, this behaves just like Paul's original posted version (updating 1/2 the way from the old average to the new value). If avg_scale = 2, we update only 1/4 the way from the old average to the new value, etc. No multiplications or divisions, so nice and quick, and you can easily change the smoothing factor (at the cost of a limited number of smoothing factors, of course).
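A minimal Spin sketch of that update, written with ~> (arithmetic shift) so a negative difference still divides correctly; if your readings can never fall below the current average, the plain >> form above is equivalent:

CON
  AVG_SCALE = 3                       ' 0 = no smoothing; each +1 halves the step toward the new value

VAR
  long avg

PUB Update(newSample) : result
  ' move 1/2^AVG_SCALE of the way from the old average toward the new sample
  avg += (newSample - avg) ~> AVG_SCALE
  result := avg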
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
lonesock
Piranha are people too.
Aside from being a great topic of discussion, why would someone eternally commit processor cycles to compensating for a known deficiency in the electrical design?
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
A wise man told me: "All electronics are made to work by magic smoke.
Don't ever let it out, as it's very difficult to get it back in."
Slowly, the smoothed-out data made its way upward. So, the IIR filter is okay, and I think it will work (along with the caps), but I'm still going to test the others.
You could use a band pass filter tuned right to the IR frequency.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Signature space for rent, only $1.
Send cash and signature to CannibalRobotics.
Attached is a 5 point rolling average from Beau's raw-data spreadsheet.
Essentially it works as follows (see the sketch after the steps):
-> Take reading
-> Put it into the next spot in an array of 5 (for example). When you get to the last element, wrap around and use the first again.
-> Average the array values (sum them and divide by the size of the array)
-> Repeat
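A minimal Spin sketch of that rolling average (the 5-reading window is just an example size):

CON
  WINDOW = 5                          ' number of readings in the rolling average

VAR
  long buf[WINDOW]                    ' circular buffer of recent readings
  long index                          ' next slot to overwrite

PUB RollingAverage(reading) : avg | i, sum
  ' store the newest reading, wrapping around at the end of the array
  buf[index] := reading
  index := (index + 1) // WINDOW
  ' sum the buffer and divide by its size
  sum := 0
  repeat i from 0 to WINDOW - 1
    sum += buf[i]
  avg := sum / WINDOW

Until the array has filled once, the average will read low; pre-filling it with the first reading avoids that if it matters.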
Surprised nobody has mentioned it.
James
-Phil
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Andrew Williams
WBA Consulting
PowerTwig Dual Output Power Supply Module
My Prop projects: Reverse Geo-Cache Box, Custom Metronome, Micro Plunge Logger
The number of averaged values should be a power of 2 to simplify the division. It makes no sense to have a finer granularity.
So let's say the maximal number of samples is 256; then MaxAvrg = 8.
The accumulator will grow up to a maximum value of input << MaxAvrg, therefore the input should not exceed the accumulator's range ~> MaxAvrg.
Let the actual number of samples be 64, so Log2Samples = 6
When the first sample (NewInput) arrives, the accumulator should be initialized to:
NewInput << (MaxAvrg - Log2Samples)
After this, the filter function is applied:
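A minimal Spin sketch of one way this filter step can be written; keeping the accumulator scaled by 2^MaxAvrg at all times is my assumption here, since that is what allows Log2Samples to change without rescaling:

CON
  MaxAvrg = 8                         ' Log2 of the maximum number of samples (256)

VAR
  long accu                           ' accumulator, kept scaled by 2^MaxAvrg

PUB Filter(NewInput, Log2Samples) : Average
  ' leak 1/2^Log2Samples of the accumulator and add the new input at the matching scale
  accu := accu - (accu ~> Log2Samples) + (NewInput << (MaxAvrg - Log2Samples))
  ' the output is always the accumulator scaled back down by 2^MaxAvrg
  Average := accu ~> MaxAvrg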
The trick with this implementation of an IIR filter is that you can change the time constant on the fly without having to wait for the signal to settle.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
cmapspublic3.ihmc.us:80/servlet/SBReadResourceServlet?rid=1181572927203_421963583_5511&partName=htmltext
Hello Rest Of The World
Hello Debris
Install a propeller and blow them away
See also this object, and the various filter examples and links in this thread.
James
I had good results smoothing RPM with:
average := average * 7 / 8 + data / 8
Question: how do I implement it in PASM?
James
-Phil
Or, for less truncation error:
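One common way to reduce the truncation error is to keep the running average scaled up by the divisor, so the fractional bits survive between updates; a sketch of that general technique (not necessarily the code originally posted here):

VAR
  long avgX8                          ' running average, kept scaled up by 8

PUB SmoothRpm(data) : average
  ' equivalent to average := average * 7 / 8 + data / 8, but the /8 truncation
  ' only happens on the scaled-down copy, not on the stored state
  avgX8 := avgX8 - avgX8 / 8 + data
  average := avgX8 / 8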
Post Edited (Phil Pilgrim (PhiPi)) : 7/31/2010 6:16:08 PM GMT
J