Float Math Accuracy
rotorai
Posts: 2
in Propeller 1
I am getting unexpected results from simple float addition. I am using the code below to accumulate milliseconds and would expect every value of Time to be an exact multiple of 0.001. What I am getting is close but not exact; see the actual results below the code. Can anyone explain?
OBJ
  F : "Float32"
  .
  .
  .
PRI Clock | Counts
  Counts := CNT
  repeat
    waitcnt(Counts += clkfreq / 1000)
    Time := F.FAdd(0.001, Time)
Actual results of Time, polled at roughly 0.1-second intervals:
0.1
0.2030002
0.3049996
0.4079983
0.509997
0.6129957
0.7149944
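The drift above is ordinary single-precision rounding, and it can be reproduced off-Propeller. Here is a minimal Python sketch that uses struct to round every intermediate result to IEEE 754 single precision, the way a 32-bit float library must; the exact final value depends on the library's rounding behavior, so this illustrates the mechanism rather than predicting Float32's output digit for digit:

```python
import struct

def f32(x):
    """Round a Python double to the nearest IEEE 754 single-precision value."""
    return struct.unpack('f', struct.pack('f', x))[0]

# 0.001 has no exact binary representation, so the stored step is already off
step = f32(0.001)
acc = 0.0
for _ in range(700):           # ~0.7 s worth of 1 ms ticks
    acc = f32(acc + step)      # round after every add, as a 32-bit FAdd must

print(step)   # slightly more than 0.001
print(acc)    # close to, but not exactly, 0.7
```

Two sources of error combine: the step itself is off by a tiny amount, and each addition rounds again to 24 bits of mantissa.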
Comments
-Phil
Each addition can introduce up to half an lsb of rounding error, so several hundred additions will generate perhaps 25 lsb's of error - the error grows with the square root of the number of operations (assuming the errors are independent, which isn't really true). It's much worse if the floating point doesn't use unbiased rounding (round to nearest even), since the bias then scales with the number of operations, not their square root.
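The biased-versus-unbiased distinction is easy to demonstrate with Python's decimal module, quantizing every partial sum to six places under two rounding modes; the step sizes and digit counts here are arbitrary choices for illustration:

```python
import random
from decimal import Decimal, ROUND_HALF_EVEN, ROUND_FLOOR

def noisy_sum(n, mode, seed=1):
    """Accumulate random steps, rounding every partial sum to 6 decimals."""
    rng = random.Random(seed)
    q = Decimal('0.000001')
    acc = Decimal(0)
    exact = Decimal(0)
    for _ in range(n):
        step = Decimal(rng.randrange(1, 10**9)) / Decimal(10**9)
        exact += step
        acc = (acc + step).quantize(q, rounding=mode)
    return abs(acc - exact)

err_even  = noisy_sum(10_000, ROUND_HALF_EVEN)  # unbiased: grows like sqrt(N)
err_trunc = noisy_sum(10_000, ROUND_FLOOR)      # biased: grows like N/2 ulps
print(err_even, err_trunc)
```

Truncation always rounds the same direction, so its errors add; round-to-nearest-even errors partly cancel, which is exactly the square-root behavior described above.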
Too much information for most, or too technical for many people just getting started.
The simple explanation is that computers natively do just two kinds of integer math - signed and unsigned. And multiplication and division are done only in base 2.
Internal floating-point calculations really bog down due to requiring a lot of computer code to execute properly (and this is why math co-processors exist).
Also, the range of an integer is limited by its number of bits: the range of a 32-bit integer is vast (which is helpful), while 8 bits is quite limited.
So everything runs faster if you just consider floating-point to be a user interface conversion task and exclude it from internal calculations. Conversions to and from floating-point should be done as part of the i/o.
Of course, these limitations do make some forms of math more challenging. Getting the square root of a number is awkward. Trig can also be awkward. That's why the Propeller provides log and sine tables in ROM for these.
And if you want to solve polynomial equations, you really might consider Excel or an HP calculator as a better tool.
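Square root is a good example of something that is awkward but entirely doable in pure integer math. Here is a sketch of the usual Newton's-method integer square root - not the Propeller's ROM-table approach, just an illustration that no floats are needed:

```python
def isqrt(n: int) -> int:
    """Largest integer x with x*x <= n, using only integer operations."""
    if n < 0:
        raise ValueError("square root of negative number")
    if n == 0:
        return 0
    x = n
    y = (x + 1) // 2
    while y < x:                  # iteration converges monotonically downward
        x = y
        y = (x + n // x) // 2
    return x

print(isqrt(17))        # 4
print(isqrt(10**18))    # 1000000000
```

Each iteration roughly doubles the number of correct bits, so even 64-bit values converge in a handful of loops.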
What's funny is that the Windows calculator did give the correct answer, so it must do math differently than Excel.
Bean
Linux has OpenOffice, which includes a similar spreadsheet. But if one wanted real number-crunching power within a program, maybe Linux on a Raspberry Pi could support a Propeller.
The Propeller teaches floating point very well. But if one requires speed with a lot of iterations, it may be best to offload to a Linux SoC.
F32 (like most floating point libraries for the Propeller) isn't particularly careful about rounding, so that is probably the root of his problem. GCC's routines do round to nearest even, so they do a lot better. My point is just that when someone gets bad numbers out of a program using floating point, they shouldn't just throw up their hands and say "floating point is inherently inaccurate". There are plenty of other potential sources of error, ranging from bugs in the program to bugs in the floating point implementation. The latter are particularly annoying -- we should demand good quality floating point routines on all platforms, but too often people are willing to put up with inaccuracy because "that's just the way floating point is". It doesn't have to be!
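One concrete example of what a "good quality routine" buys you: compensated (Kahan) summation recovers almost all of the error that naive repeated addition accumulates, at the cost of a few extra operations per step. A host-side sketch in Python doubles:

```python
def kahan_sum(values):
    """Compensated summation: carries the rounding error forward each step."""
    total = 0.0
    comp = 0.0                    # running compensation for lost low-order bits
    for v in values:
        y = v - comp
        t = total + y
        comp = (t - total) - y    # what the add just above actually lost
        total = t
    return total

steps = [0.001] * 10_000
print(sum(steps))         # drifts measurably away from 10.0
print(kahan_sum(steps))   # essentially exact
```

The same trick works in any language with floats, including a Propeller float library, as long as the compiler doesn't "optimize away" the compensation term.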
Having the cross-compiler in between resolve all the accuracy issues is a naive hope. Programmers have been trying to get this right for decades now and it is still not quite perfected.
On the other hand, two-dollar MCUs come with floating point nowadays, like the STM32. Learners can do useful things with them using floating point, and can program them in languages like Python, JavaScript, and C#. It's rather like the old 8-bit days when kids were learning to program in BASIC, with floating point.
Raw speed is not everything, getting the job done quickly and easily might be more attractive.
That's why the Prop II will have floating point support.
Of course, like the kids of the 1980's, those that hit the limits of what they can do with a high level language and floats and are keen enough will learn to overcome the limits with assembler and ints.
It's all good.
It cannot lose track of those smaller values, so floating point would not work. That's why I ended up using two longs per value: one holds the whole part and one holds the fractional part.
I have better than 9.9 digits, so I can add 1E-9 to 1E9, then subtract 1E9 and get 1E-9 as the answer.
I happen to only need addition, multiplication and negate. So the code overhead is really small and fast. I would only have to add reciprocal to round out the 4 basic functions.
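The two-longs idea can be modeled on a host with plain integers and a scale factor. This sketch uses Python's big ints in place of a pair of Propeller longs, with a hypothetical SCALE of 10^9 to match the 1E-9 resolution mentioned above:

```python
from decimal import Decimal

SCALE = 10**9                      # 1 tick = 1E-9; whole and fraction in one int

def to_fix(s: str) -> int:
    # parse from a decimal string so no binary-float rounding sneaks in
    return int(Decimal(s) * SCALE)

def fix_mul(a: int, b: int) -> int:
    return a * b // SCALE          # rescale after multiplying

big  = to_fix('1000000000')        # 1E9
tiny = to_fix('0.000000001')       # 1E-9

# the small value survives addition and subtraction, unlike in a float
assert (big + tiny) - big == tiny
assert fix_mul(to_fix('2.5'), to_fix('4')) == to_fix('10')
```

Addition and subtraction are exact by construction; only multiplication needs the rescale step, which is where a rounding choice would come in.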
Bean
Yep, that's a typical case that trips up kids, who start out assuming float takes care of everything.
Problem is that handling fixed-point numbers, especially if you need multiple longs to hold a value, is a pain in assembler and a pain in a high-level language. All of a sudden you are having to call functions to do arithmetic rather than just use the language operators "+", "*", etc.
It's not fair to expect "learners" to have to deal with that. They will learn, when they hit the limits.
I am really wondering why my HP calculator is fine, but computer programs are inaccurate. After all, it has programming that does just about any advanced maths.
I know it has a few known bugs, but accumulated error doesn't seem to be occurring.
For sure it has the same issues as IEEE 754 standard floating point used today. Perhaps in a different way.
Or does it really use some big number format that is only limited by memory space? I doubt it.
What is 0.1 + 0.2 on your HP?
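On any machine using binary floating point, that sum is not exactly 0.3, because 0.1, 0.2, and 0.3 all fall between representable binary fractions. In Python doubles, for instance:

```python
x = 0.1 + 0.2
print(repr(x))     # 0.30000000000000004
print(x == 0.3)    # False: 0.3 rounds to a different nearby double
```

A calculator that works in decimal (BCD) internally would show 0.3 exactly here, which is one reason calculators seem immune.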
And realize that if anyone is "cheating" by exploiting the rounding errors, no one will care even if they notice. Conversely, if the number range overflows there is something very wrong with the currency and we have bigger problems to worry about than counting tenths of pennies.
I have seen accountants spend all day checking and rechecking their books to find where there is a 1 penny error. The cost of doing that is a far bigger hit on the economy and human well being than just saying "f'it".
I think some calculators do math in decimal (or BCD) using some 99 digits of memory for each value.
Bean
"The HP 9G calculator uses up to ten digits for output, and up to 24 digits during calculations."
Admittedly more than the 15 decimal digits of IEEE 64 bit floats.
All the same problems really. Just smaller.
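Working in decimal with guard digits, as the HP does, makes the numbers people type exact, but it does not abolish rounding; it just moves it to non-terminating results. A sketch with Python's decimal module set to a 24-digit working precision like the quote above describes:

```python
from decimal import Decimal, getcontext

getcontext().prec = 24             # mimic the calculator's internal width

# decimal inputs are represented exactly, so this really is 0.3
assert Decimal('0.1') + Decimal('0.2') == Decimal('0.3')

# but non-terminating results still round: (1/3) * 3 misses 1
third = Decimal(1) / Decimal(3)
print(third * 3)                   # 0.999999999999999999999999
```

So the HP isn't rounding-free; its decimal base just aligns the rounding with the numbers humans enter and expect.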
-Phil
I like 64-bit fixed point because it is really fast. An add is only 2 PASM instructions. Admittedly it takes 2 longs to store a value, so it takes twice as much storage.
Bean
My long lost love is the HP-41C/CV/CX series. And that seemed to use a 56-bit internal numeric format!!! And the HP Saturn chip (which I was not familiar with) went to a 64-bit format for the CPU.
http://www.hpmuseum.org/techcpu.htm
http://www.hpmuseum.org/
As far as why 64 bit might be useful... that would provide more memory addresses for more RAM. If you are doing serious number crunching, you might need that. But HP seems to have been on to something very refined.
64-bit fixed point for number crunching on the Propeller appeals to me... especially since I have now rediscovered that my favorite HP calculator used 56 bits.