Floating point in PropGCC — Parallax Forums


ypapelis Posts: 99
edited 2015-02-17 04:58 in Propeller 1
I am a bit confused about the 'under-the-hood' operation of floating point when using C in SimpleIDE/PropGCC. In the old SPIN days (which I have long abandoned for C), there were libraries to explicitly implement floating point math operations, and it was pretty clear whether these 'functions' operated on the same cog or you could allocate one or two cogs to do the math. It was awkward but it was clear. I can't find anything that mentions how this is supposed to work in SimpleIDE.

On the surface, it all works ok. I can declare floats, use inline math, and print them using printf(), but it seems to take a bit over 90 microseconds to do an addition/assignment, as per the piece of code below (I used a scope to check the timing). Furthermore, I can check the number of free cogs and there are 7 free cogs, which means these operations are not utilizing a separate cog. I read some old posts making mention of fpucog.c and a related function, but that function and/or file is nowhere to be found in the current SimpleIDE (the last version I used is the version 1 release candidate).

Does anyone know if it is possible to enhance floating point performance in C by allocating a COG to it when using SimpleIDE, or if there is a library to do the math operations through function calls? I am not too excited about taking old SPIN libraries and converting them using SPIN2C, so that is not really an option for now.

Example code:
#include <propeller.h>

volatile float sum = 0;

int main()
{
  float x = 0.02f;

  DIRA |= (1 << 11);            // P11 as output, for scope timing
  while (1) {
    x = sum;
    OUTA |= (1 << 11);          // raise P11 just before the float add
    sum = sum + x;              // the operation being timed
    OUTA ^= (1 << 11);          // drop P11 after; pulse width = add time
  }
}

Comments

  • evanh Posts: 15,921
    edited 2015-02-14 14:38
    Simple answer: PropGCC treats it just like any other CPU without FPU hardware; it emulates floating point inline. This was difficult to do without LMM and co., but it's no big deal any longer.

    I have no idea what it would take to integrate a faster alternative that "just works".
  • ersmith Posts: 6,053
    edited 2015-02-15 08:38
    Here's a link to the fpucog program that moves the floating point operations to their own cog: http://forums.parallax.com/showthread.php/142523-using-a-COG-as-a-floating-point-coprocessor

    The original floating point emulation for PropGCC was written in C and not propeller specific, so it was quite slow. Newer PropGCC releases have optimized assembly language which is a lot faster. I don't think SimpleIDE has updated to that PropGCC version yet, so in the meantime you can use fpucog.c (which has pretty much the same code as the new PropGCC, and also can use another COG).

    Also, make sure you check the option for 32 bit doubles if you really need performance (and don't need 64 bit precision and/or strict C standards compliance). 32 bit floating point is much faster than 64 bit.
  • DavidZemon Posts: 2,973
    edited 2015-02-15 10:03
    ersmith wrote: »
    Also, make sure you check the option for 32 bit doubles if you really need performance (and don't need 64 bit precision and/or strict C standards compliance). 32 bit floating point is much faster than 64 bit.

    Since the original question is answered, I don't feel too bad drawing this way off topic. Why was this feature implemented? Why aren't users just informed "use float, not double"? Why was the solution to completely disable 64-bit floating point numbers rather than educate users on the difference between `float` and `double` and let them choose for themselves?

    The only guess I have deals with importing libraries. If I go download the source code for library X and use it with PropGCC, then the existing solution does not require that I modify the source code to use `float` instead of `double`. It seems like an extreme edge case that anything would import directly onto a Propeller without any other source code modifications. Is this the only reason, or is there something else?
  • Dave Hein Posts: 6,347
    edited 2015-02-15 12:00
    I think the issue is that most of the standard floating point library functions use double, and not float. The 32-bit double option avoids the extra compute that's used for 64-bit doubles.
  • jmg Posts: 15,173
    edited 2015-02-15 15:20
    Why was the solution to completely disable 64-bit floating point numbers rather than educate users on the difference between `float` and `double` and let them choose for themselves?

    I suspect that choice comes down to library size.
    In an ideal world you could support both and let users access either, but in the finite-resource world of MCUs, libraries have a significant size relative to the overall system, and a dual (twin) library is an overhead most would want to avoid.

    That said, there may well be corner cases where both 32b and 64b would be useful, and the resource consumed would be tolerated.
  • ersmith Posts: 6,053
    edited 2015-02-15 16:35
    Since the original question is answered, I don't feel too bad drawing this way off topic. Why was this feature implemented? Why aren't users just informed "use float, not double"? Why was the solution to completely disable 64-bit floating point numbers rather than educate users on the difference between `float` and `double` and let them choose for themselves?
    Double is the default for floating point numbers ("1.0" is double, for float you have to cast or type "1.0f"). It's also the default type for all the math functions, and in the absence of prototypes float arguments are promoted to double. For varargs functions like printf float arguments are always promoted to double (there's no way to printf a float without it being promoted to double). So while it's possible to write a program with "float" only, it is awkward and somewhat difficult, particularly for beginners. It's much easier to use the option to make float and double the same size (32 bits).

    Another fair question is why that option isn't the default. Actually I think it is the default in SimpleIDE, but for the C compiler proper we wanted the default to be standards-compliant, and the C standard has requirements on double that aren't satisfied by 32 bits (double has to support a certain number of decimal digits of accuracy).

    Note also that in PropGCC "float" is always 32 bits, and "long double" is always 64 bits, so it's possible to force the precision of your variables regardless of which compiler option is used. Only the size of "double" is affected by the -m32bit-double flag.
  • jmg Posts: 15,173
    edited 2015-02-15 17:31
    ersmith wrote: »
    Note also that in PropGCC "float" is always 32 bits, and "long double" is always 64 bits, so it's possible to force the precision of your variables regardless of which compiler option is used. Only the size of "double" is affected by the -m32bit-double flag.

    So you can mix now?
    What are the relative library overheads of using
    * only float,
    * only long double, and
    * a mix of the two?
  • ersmith Posts: 6,053
    edited 2015-02-16 06:15
    jmg wrote: »
    So you can mix now ?
    What are the relative Library overheads of using
    * only Float,
    * only long double, and
    * a mix of the two ?

    Actually you could always mix -- long double has always been 64 bits. The floating point libraries in the 1.0 version of PropGCC (which SimpleIDE uses) are pretty large, so it's best to just stick with the default size (float and double for -m32bit-double, double and long double otherwise). In later versions of PropGCC the libraries are much smaller -- the core floating point routines are small enough to be overlaid in COG memory. So it's not so bad to use the "other" size for simple calculations. If you start pulling in math functions like cos and sin with the "wrong" size (e.g. using cosl for long double when double is 32 bits, or mixing both cos and cosf when double is 64 bits) then you'll approximately double the library footprint, which will hurt.
  • ersmith Posts: 6,053
    edited 2015-02-16 06:16
    The 32 bit versions of everything are a bit smaller, since the code only has to manipulate 1 register per floating point value. I'd guess the 64 bit library routines are probably 1.5x as large as the equivalent 32 bit routines.
  • DavidZemon Posts: 2,973
    edited 2015-02-16 07:12
    ersmith wrote: »
    Double is the default for floating point numbers ("1.0" is double, for float you have to cast or type "1.0f"). It's also the default type for all the math functions, and in the absence of prototypes float arguments are promoted to double. For varargs functions like printf float arguments are always promoted to double (there's no way to printf a float without it being promoted to double). So while it's possible to write a program with "float" only, it is awkward and somewhat difficult, particularly for beginners. It's much easier to use the option to make float and double the same size (32 bits).

    Thanks! Very helpful
  • ypapelis Posts: 99
    edited 2015-02-16 12:58
    ersmith wrote: »
    Here's a link to the fpucog program that moves the floating point operations to their own cog: http://forums.parallax.com/showthread.php/142523-using-a-COG-as-a-floating-point-coprocessor

    The original floating point emulation for PropGCC was written in C and not propeller specific, so it was quite slow. Newer PropGCC releases have optimized assembly language which is a lot faster. I don't think SimpleIDE has updated to that PropGCC version yet, so in the meantime you can use fpucog.c (which has pretty much the same code as the new PropGCC, and also can use another COG).

    Also, make sure you check the option for 32 bit doubles if you really need performance (and don't need 64 bit precision and/or strict C standards compliance). 32 bit floating point is much faster than 64 bit.

    Thank you, that was exactly what I was looking for. It seems to improve performance by about a factor of 5, including the case of using transcendental functions, which is great. Hope this makes it into the SimpleIDE release soon.
  • jmg Posts: 15,173
    edited 2015-02-16 13:20
    ersmith wrote: »
    In later versions of PropGCC the libraries are much smaller -- the core floating point routines are small enough to be overlaid in COG memory. So it's not so bad to use the "other" size for simple calculations. If you start pulling in math functions like cos and sin with the "wrong" size (e.g. using cosl for long double when double is 32 bits, or mixing both cos and cosf when double is 64 bits) then you'll approximately double the library footprint, which will hurt.

    So I think you are saying the 32b and 64b core floating point will both fit into one COG, meaning there is no real LIB size or speed change for using 64b in a couple of places.
    Or is that 'overlaid' a dynamic reload (slower), rather than 'both can fit'?
  • ersmith Posts: 6,053
    edited 2015-02-17 04:58
    jmg wrote: »
    So I think you are saying the 32b and 64b core floating point, will both fit into one COG, meaning there is no real LIB size or speed change for using 64b in a couple of places.
    or is that 'overlaid' dynamic reload (slower), rather than 'both can fit' ?

    They don't fit at the same time, so it is a dynamic overlay. And not all of the code goes into the overlay, some is run in LMM space (like any other C code). Frequently used helper functions (e.g. to split a float or double into sign, exponent, and mantissa) are in the overlay. For 32 bit some of the core math functions (like add and multiply) also fit in the overlay; the 64 bit code is too big, although I think the multiply and divide inner loops are in there.

    Bottom line: there is a cost for using two different float sizes. It's relatively smaller in the newest PropGCC, something like "your code will be 2K bigger if you use 64 bits" instead of "your code will be 5K bigger if you use 64 bits".