
Float Math Accuracy

I am getting unexpected results from simple float addition. I am using the code below to accumulate milliseconds and would expect every value of Time to be an exact multiple of 0.001. What I am getting is close but not exact; see the actual results below the code. Can anyone explain?

OBJ

F : "Float32"
.
.
.

PRI Clock | Counts

  Counts := CNT

  repeat
    waitcnt(Counts += clkfreq/1000)
    Time := F.FAdd(0.001, Time)

Actual results of Time, polled every ~0.1 s:

0.1
0.2030002
0.3049996
0.4079983
0.509997
0.6129957
0.7149944

Comments

  • Tenths, hundredths, thousandths, etc., cannot be represented exactly in binary floating point. That's why you're seeing those errors accumulate. It would be better to do your math with integers, then divide at the very end, only for the display (see the sketch just below).

    -Phil
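    A minimal sketch of Phil's suggestion, written for any desktop C compiler (the names and the loop count are illustrative, not from the original program): accumulate whole milliseconds in an integer and divide only when formatting the output.

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t millis = 0;            /* exact integer count of elapsed ms */
        int i;

        for (i = 0; i < 715; i++)       /* stand-in for the waitcnt loop */
            millis++;                   /* integer add: no rounding, ever */

        /* Divide only at display time; the error cannot accumulate. */
        printf("Time = %.3f s\n", millis / 1000.0);
        return 0;
    }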
  • Phil is right. In binary, fractions with ten in the denominator behave like thirds do in decimal: the digits repeat forever, so they have to be cut off somewhere.
  • Thanks Phil! That is how I corrected it but was hoping there was a way to have it remain a float.
  • To avoid the accumulation error, you would need to choose a power-of-two step (e.g. 1/1024). But in a case like this, working with integers tends to be more practical (and faster).
  • If you want to leave it in floating point, you could add whole numbers, then multiply by 0.001 or divide by 1000.0 before displaying or using the number. A single-precision float can handle 24 bits' worth of whole-number precision before it starts losing accuracy: the 23-bit stored mantissa plus the implied leading bit, or exact integers up to about 16.7 million. (A short demonstration follows.)
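    Both of the last two suggestions can be seen in a few lines of desktop C (a sketch for illustration, not Propeller code): a power-of-two step accumulates exactly, and whole-number counts stay exact up to 2^24.

    #include <stdio.h>

    int main(void)
    {
        float t = 0.0f;
        float ticks = 16777216.0f;          /* 2^24 */
        int i;

        /* A power-of-two step (1/1024) is exact in binary, so it
           accumulates without error, unlike 0.001. */
        for (i = 0; i < 1024; i++)
            t += 0.0009765625f;             /* 1/1024 */
        printf("%.7f\n", t);                /* exactly 1.0000000 */

        /* Whole counts stay exact up to 2^24 = 16777216... */
        printf("%.1f\n", ticks + 1.0f);     /* ...but here the +1 is lost */

        /* So: count whole ticks, scale only for display. */
        printf("%.3f\n", 715.0f * 0.001f);  /* 0.715 */
        return 0;
    }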
  • It's true that floating point math can't directly represent powers of 10 (like 0.1), but the error is much smaller than what you've shown above -- there must be some other issue. Either there's a problem in the code that collects the results, or else the float library you're using is not rounding results correctly for printing. The PropGCC library has very careful floating point routines, and here's what I get for a similar program:
    0.010000
    0.020000
    0.030000
    0.040000
    0.050000
    0.060000
    0.070000
    0.080000
    
    Here's the exact code I ran:
    #include <stdio.h>
    #include <propeller.h>
    
    #define ITERATIONS 30
    
    float Times[ITERATIONS];
    
    int
    main()
    {
        unsigned int Counts;
        unsigned int delay = CLKFREQ / 1000;
        float Time = 0.0f;
        int i, j;
        
        printf("Running %d iterations\n", ITERATIONS);
        
        Counts = CNT;
    
        for (i = 0; i < ITERATIONS; i++) {
            // collect every 10th tick, so samples land 0.01 s apart
            for (j = 0; j < 10; j++) {
                waitcnt(Counts += delay);
                Time += 0.001;
            }
            Times[i] = Time;
        }
        for (i = 0; i < ITERATIONS; i++) {
            printf("%f\n", Times[i]);
        }
        return 0;
    }
    

  • No, it's to be expected when adding 0.001 several hundred times; the rounding errors
    in addition will generate perhaps 25 LSBs of error. The error grows with the square root
    of the number of operations (assuming errors are independent, which isn't really true).

    It's much worse if the floating point doesn't use unbiased rounding (round to even), since
    the bias scales with the number of operations, not the square root.
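    The drift is easy to measure on a desktop C compiler (a sketch for illustration; the exact figures depend on compiler and FPU): accumulate in single precision, keep a double-precision reference, and report the difference in single-precision LSBs.

    #include <stdio.h>
    #include <float.h>

    int main(void)
    {
        float  sum = 0.0f;
        double ref = 0.0;
        double err;
        int n;

        for (n = 0; n < 1000; n++) {
            sum += 0.001f;      /* single-precision accumulation */
            ref += 0.001;       /* double precision as a reference */
        }
        err = (double)sum - ref;
        /* one float LSB near 1.0 is about ref * FLT_EPSILON */
        printf("sum=%.7f ref=%.7f err=%g (%.1f LSBs)\n",
               sum, ref, err, err / (ref * FLT_EPSILON));
        return 0;
    }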
  • Ah, a great opportunity to mention what every computer scientist (and wannabe) needs to know about floating point arithmetic: https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
  • LoopyByteloose Posts: 12,537
    edited 2015-12-05 06:58
    @guenthert
    Too much information for most, or too technical for many people just getting started.

    The simple explanation is that computers natively do just two kinds of integer math, signed and unsigned. And multiplication and division are done only in base 2.

    Internal floating-point calculations really bog down due to requiring a lot of computer code to execute properly (and this is why math co-processors exist).

    Also, the range of an integer is limited by its number of bits: the range of 32 bits is vast (which is helpful), while 8 bits is quite limited.

    So everything runs faster if you just consider floating-point to be a user interface conversion task and exclude it from internal calculations. Conversions to and from floating-point should be done as part of the i/o.

    Of course, these limitations do make some forms of math more challenging. Getting the square root of a number is awkward. Trig also can be awkward. That's why the Propeller provides log tables for these.

    And if you want to solve polynomial equations, you really might consider Excel or an HP calculator as a better tool.
  • Bean Posts: 8,129
    You have to be careful with Excel too. I was doing some code recently that used 32.32 fixed-point math, and some of the values Excel gave were off by a couple of counts for a really large value with a really small fractional part.

    What's funny is that the Windows calculator did give the correct answer, so it must do math differently than Excel.

    Bean
  • I still use a Hewlett-Packard calculator for anything that requires accurate number crunching. I simply hold their product in high regard.

    Linux has Open Office which includes a similar spreadsheet. But if one wanted real number crunching power within a program, maybe Linux on a Raspberry Pi could support a Propeller.

    The Propeller teaches floating point very well. But if one requires speed with a lot of iterations, it may be best to offload to a Linux SoC.
  • Mark_T wrote: »
    No, it's to be expected when adding 0.001 several hundred times; the rounding errors
    in addition will generate perhaps 25 LSBs of error. The error grows with the square root
    of the number of operations (assuming errors are independent, which isn't really true).

    It's much worse if the floating point doesn't use unbiased rounding (round to even), since
    the bias scales with the number of operations, not the square root.

    F32 (like most floating point libraries for the Propeller) isn't particularly careful about rounding, so that probably is the root of his problem. GCC's routines do round to nearest even, so they do a lot better. My point is just that when someone gets bad numbers out of a program using floating point, they shouldn't just throw up their hands and say "floating point is inherently inaccurate". There are plenty of other potential sources of error, ranging from bugs in the program to bugs in the floating point implementation. The latter are particularly annoying: we should demand good quality floating point routines on all platforms, but too often people are willing to put up with inaccuracy because "that's just the way floating point is". It doesn't have to be!
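    The difference a rounding policy makes can be seen with C99's fenv.h on a desktop compiler (a sketch; the Propeller libraries don't expose rounding modes, and whether the compiler honors the mode at run time varies): the same accumulation is run under round-to-nearest-even and under a deliberately biased mode.

    #include <stdio.h>
    #include <fenv.h>

    /* Sum 0.001f n times under the given rounding mode. */
    static float accumulate(int rounding_mode, int n)
    {
        volatile float sum = 0.0f;  /* volatile keeps each add at run time */
        int i;

        fesetround(rounding_mode);
        for (i = 0; i < n; i++)
            sum += 0.001f;
        fesetround(FE_TONEAREST);   /* restore the default */
        return sum;
    }

    int main(void)
    {
        printf("nearest-even: %.7f\n", accumulate(FE_TONEAREST, 1000));
        printf("round-up:     %.7f\n", accumulate(FE_UPWARD,    1000));
        return 0;
    }

    The biased run drifts high in proportion to the number of additions, while nearest-even stays within a few LSBs of 1.0.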
  • I should clarify here that I don't mean to single out F32 (or Float32, or any other particular Spin implementation) as being particularly "bad". F32 is fast and certainly good enough for many purposes. One does have to be aware of the limitations of one's floating point library. It would probably be a nice thing if someone were to port the GCC code (or some other IEEE-compliant library) to Spin to provide a more compatible option for Spin users.
  • LoopyByteloose Posts: 12,537
    edited 2015-12-07 12:57
    I just wish a lot of learners would realize that there are big gains in speed and simplicity of code by avoiding floating point if at all possible. Floating point is mainly preferred math for humans; microcontrollers have a different means of holding precision.

    Having the cross-compiler in-between resolve all the accuracy issues is a naive solution. Programmers have been trying to get this right for decades now and it is still not quite perfected.
  • Heater. Posts: 21,230
    I'm kind of with you on that floating point objection. As my old boss said, "If you think you need floating point to solve your problem, then you don't understand the problem."

    On the other hand, two-dollar MCUs come with floating point nowadays, like the STM32. Learners can do useful things with them using floating point. Learners can program them in languages like Python, JavaScript, C#. It's rather like the old 8-bit days when kids were learning to program in BASIC, with floating point.

    Raw speed is not everything, getting the job done quickly and easily might be more attractive.

    That's why the Prop II will have floating point support.

    Of course, like the kids of the 1980's, those that hit the limits of what they can do with a high level language and floats and are keen enough will learn to overcome the limits with assembler and ints.

    It's all good.
  • Bean Posts: 8,129
    In the work I'm doing the program will often add very small values to very large values.

    It cannot lose track of those smaller values. So floating point would not work. That's why I ended up using two longs per value. One holds the whole value and one holds the fractional.

    I have better than 9.9 digits, so I can add 1E-9 to 1E9, then subtract 1E9 and get 1E-9 as the answer.

    I happen to only need addition, multiplication, and negate, so the code overhead is really small and fast. I would only have to add reciprocal to round out the four basic functions. (A sketch of the two-long scheme appears below.)

    Bean
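    A minimal C sketch of the two-long scheme Bean describes (the type and function names are mine, and the 128-bit multiply leans on a GCC/Clang extension; it illustrates the idea, it is not his code):

    #include <stdint.h>
    #include <stdio.h>

    /* 32.32 fixed point: upper 32 bits whole part, lower 32 bits fraction. */
    typedef int64_t fix32_32;

    static fix32_32 fix_from_int(int32_t i)         { return (fix32_32)i << 32; }
    static fix32_32 fix_add(fix32_32 a, fix32_32 b) { return a + b; }
    static fix32_32 fix_neg(fix32_32 a)             { return -a; }

    /* Multiply needs the 64x64 -> 128-bit product, shifted back down. */
    static fix32_32 fix_mul(fix32_32 a, fix32_32 b)
    {
        return (fix32_32)(((__int128)a * b) >> 32);
    }

    int main(void)
    {
        /* 1e-9 is about 4 LSBs of the fraction (1e-9 * 2^32 = 4.29). */
        fix32_32 tiny = (fix32_32)(1e-9 * 4294967296.0);
        fix32_32 big  = fix_from_int(1000000000);          /* 1e9 */

        /* Add 1e-9 to 1e9, subtract 1e9 again: the tiny part survives. */
        fix32_32 back = fix_add(fix_add(big, tiny), fix_neg(big));
        printf("recovered: %.12f\n", (double)back / 4294967296.0);

        printf("1.5 * 2.0 = %.1f\n",
               (double)fix_mul(fix_from_int(3) / 2, fix_from_int(2)) / 4294967296.0);
        return 0;
    }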
  • Heater. Posts: 21,230
    Bean,

    Yep, that's a typical case that trips up kids, who start out assuming float takes care of everything.

    Problem is that taking care of fixed-point numbers, especially if you need multiple longs or whatever to handle them, is a pain in assembler and a pain in a high-level language. All of a sudden you are having to call functions to do arithmetic rather than just use the language operators "+", "*", etc.

    It's not fair to expect "learners" to have to deal with that. They will learn, when they hit the limits.

  • LoopyByteloose Posts: 12,537
    edited 2015-12-07 18:28
    The world needs a stable currency denominated in hexadecimal. But I suspect governments would mandate hexadecimal floating point and we would end up worse off.

    I am really wondering why my HP calculator is fine, but computer programs are inaccurate. After all, it has programming that does just about any advanced maths.

    I know it has a few known bugs, but accumulated error doesn't seem to be occurring.
  • Heater. Posts: 21,230
    How many bits does your HP calculator use for mantissa and exponent?

    For sure it has the same kind of issues as the IEEE 754 floating point used today. Perhaps in a different way.

    Or does it really use some big number format that is only limited by memory space? I doubt it.

    What is 0.1 + 0.2 on your HP?
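    For reference, this is what IEEE 754 binary floating point answers (a quick desktop C check, printing enough digits to expose the representation):

    #include <stdio.h>

    int main(void)
    {
        /* 0.1 and 0.2 each round to the nearest binary double, and the
           sum rounds once more, landing just above 0.3. */
        printf("%.17g\n", 0.1 + 0.2);   /* prints 0.30000000000000004 */
        return 0;
    }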
  • Heater. Posts: 21,230
    What the world needs is for banks and book keepers to use 64 bit floating point in their calculations.

    And realize that if anyone is "cheating" by exploiting the rounding errors, no one will care even if they notice. Conversely, if the number range overflows, there is something very wrong with the currency and we have bigger problems to worry about than counting tenths of pennies.

    I have seen accountants spend all day checking and rechecking their books to find where there is a 1 penny error. The cost of doing that is a far bigger hit on the economy and human well being than just saying "f'it".

  • Bean Posts: 8,129
    edited 2015-12-07 19:01
    LoopyByteloose wrote: »
    I am really wondering why my HP calculator is fine, but computer programs are inaccurate. After all, it has programming that does just about any advanced maths.

    I know it has a few known bugs, but accumulated error doesn't seem to be occurring.

    I think some calculators do math in decimal (or BCD) using some 99 digits of memory for each value.
    Bean

  • Heater. Posts: 21,230
    From the HP web site:

    "The HP 9G calculator uses up to ten digits for output, and up to 24 digits during calculations."

    Admittedly more than the 15 decimal digits of IEEE 64 bit floats.

    All the same problems really. Just smaller :)
  • In the class I'm teaching, I've told the kids a little white lie: "The Propeller can't do floating point, so figure out how to solve it with integers." In the long run, they'll be better programmers for it, and I don't have to explain to beginners why 0.1 + 0.2 <> 0.3.

    -Phil
  • Float is generally terrible for absolute precision, but very good with relative precision. If you need to compute atomic-scale differences between planet radii, it's probably not the right answer; but if you're computing the distance between atoms, OR between planets, it works just fine (a small demonstration follows). The Prop is capable of fairly quick floating point. The overhead of Spin slows it down by quite a bit, but using C/C++ is significantly faster, and using the "stream mode" of Float32 makes it faster still.
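    A desktop C illustration of that point (the values are round numbers chosen for the example): each scale on its own fits comfortably in a float's roughly seven significant digits, but mixing the scales in one sum loses the small term entirely.

    #include <stdio.h>

    int main(void)
    {
        float atom   = 1.5e-10f;    /* a bond length, in metres */
        float planet = 6.371e6f;    /* Earth's radius, in metres */

        /* Relative precision: each value alone is represented fine. */
        printf("%g  %g\n", atom, planet);

        /* Absolute precision across scales is not: the atom vanishes. */
        printf("%g\n", (planet + atom) - planet);   /* prints 0 */
        return 0;
    }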
  • Bean Posts: 8,129
    Like anything you need to know what you need, and what is available. And choose the best fit.

    I like 64-bit fixed point because it is really fast: an add is only two PASM instructions (modeled in C below). Admittedly it takes two longs to store a value, so it takes twice as much storage.

    Bean
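    A C model of that two-instruction add (the struct and names are mine; on the Propeller the equivalent would be an add of the low longs that sets the carry flag, followed by an addx on the high longs):

    #include <stdint.h>

    typedef struct { uint32_t lo; int32_t hi; } fix64;

    static fix64 fix64_add(fix64 a, fix64 b)
    {
        fix64 r;
        r.lo = a.lo + b.lo;                  /* low words, carry out... */
        r.hi = a.hi + b.hi + (r.lo < a.lo);  /* ...carried into the high words */
        return r;
    }

    int main(void)
    {
        fix64 a = { 0x80000000u, 1 };   /* 1.5 in 32.32 */
        fix64 b = { 0x80000000u, 2 };   /* 2.5 in 32.32 */
        fix64 c = fix64_add(a, b);      /* 4.0: the carry ripples into hi */
        return (c.hi == 4 && c.lo == 0) ? 0 : 1;
    }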
  • LoopyByteloose Posts: 12,537
    edited 2015-12-08 09:02
    Heater. wrote: »
    How many bits does your HP calculator use for mantissa and exponent?

    For sure it has the same kind of issues as the IEEE 754 floating point used today. Perhaps in a different way.

    Or does it really use some big number format that is only limited by memory space? I doubt it.

    What is 0.1 + 0.2 on your HP?

    I suspect it is all proprietary HP. Mine is an HP-50g. But there is a whole culture of reverse engineering and hacking HP calculators that might provide specifics. I have to admit that my understanding of floating point on computers is very shallow.

    My long-lost love is the HP-41C/CV/CX series. And that seemed to use a 56-bit internal numeric format! And the HP Saturn chip (which I was not familiar with) went to a 64-bit format for the CPU.

    http://www.hpmuseum.org/techcpu.htm

    http://www.hpmuseum.org/

    As far as why 64 bit might be useful... that would provide more memory addresses for more RAM. If you are doing serious number crunching, you might need that. But HP seems to have been on to something very refined.

    Fixed 64-bit for number crunching on the Propeller appeals to me... especially since I have now rediscovered that my favorite HP calculator used 56 bits.