External FPU?

Hi all...

I’ve been using uM-FPU64s from Micromega, but due to the owner’s sudden death, those chips are no longer available.

Does anyone know of anything similar? I’m designing something new, and it doesn’t make a lot of sense to design something that includes a chip that’s no longer available.

Thanks!

Jamie

Comments

  • Jamie,

    If you design your product with the Propeller, you will not need an external FPU. There are objects for the Prop that handle floating-point math in one of its eight internal processors.

    -Phil
    “Perfection is achieved not when there is nothing more to add, but when there is nothing left to take away.” -Antoine de Saint-Exupery
  • The FPU Jamie mentions does 64-bit floating-point math; are there any objects that support that?
    Tulsa, OK

    My OBEX objects:
    AGEL: Another Google Earth Logger
    DHT11 Sensor

    I didn't do it... and I promise not to do it again!
  • mikeologist Posts: 315
    edited February 6
    wmosscrop wrote: »
    The FPU Jamie mentions does 64-bit floating point math, are there any objects that support that?

    Any Cortex-M4 at >=48 MHz would be able to do the same job using multiprecision math and the single-instruction 32-bit multiply.
    An M3 is also capable and cheaper, but it will be a bit slower.

    The C compiler for the Propeller should have native support for double-precision floating-point math, possibly quad precision.

    It's a great question, I will play around with it today and post results today or tomorrow.
    Any com port in a storm.
    Floating point numbers will be our downfall; count on it.
    Imagine a world without hypothetical situations.
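The multiprecision idea mikeologist describes can be sketched in C: chain a core's 32x32->64 hardware multiply (UMULL-style on Cortex-M) into a wider product. This is an illustrative sketch only, with function names of my own; a real port would use the compiler's built-in 64-bit support or hand-tuned assembly.

```c
#include <stdint.h>

/* One hardware multiply on a Cortex-M3/M4: 32x32 -> 64. */
static uint64_t mul32x32(uint32_t a, uint32_t b)
{
    return (uint64_t)a * (uint64_t)b;
}

/* Low 64 bits of a 64x64 product, built from 32-bit multiplies
   (schoolbook partial products; the a_hi*b_hi term only affects
   bits above 64, so it is omitted). Unsigned wraparound is
   well-defined and exactly what we want here. */
uint64_t mul64_lo(uint64_t a, uint64_t b)
{
    uint32_t a_lo = (uint32_t)a, a_hi = (uint32_t)(a >> 32);
    uint32_t b_lo = (uint32_t)b, b_hi = (uint32_t)(b >> 32);

    uint64_t lo   = mul32x32(a_lo, b_lo);
    uint64_t mid1 = mul32x32(a_lo, b_hi);
    uint64_t mid2 = mul32x32(a_hi, b_lo);

    return lo + ((mid1 + mid2) << 32);
}
```

The same decomposition extends to the full 128-bit product and, with exponent handling on top, to a software 64-bit float multiply.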
  • I'm wondering why anyone needs 64-bit floats on any embedded system built with the likes of a Propeller.
  • To hopefully avoid a repeat of past conversations...

    http://forums.parallax.com/discussion/130579/floating-point
  • Propeller GCC certainly supports 64 bit doubles (natively). I've also written stand-alone 64 bit floating point code, although at the moment it only has a C interface. The code itself is in PASM (indeed in a .spin file that's designed to be run through spin2cpp) so it should be very straightforward to adapt it to Spin. That code is attached.
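A quick way to confirm that a toolchain's `double` really carries 64-bit precision (an illustrative check of my own, not Propeller-specific) is to exploit `DBL_EPSILON`, which is far below what a 32-bit float can resolve:

```c
#include <float.h>

/* Returns 1 when double carries more precision than float:
   1 + DBL_EPSILON is distinguishable from 1.0 in a 64-bit double,
   but collapses back to 1.0f when squeezed into a 32-bit float. */
int doubles_are_wider_than_floats(void)
{
    volatile double d = 1.0 + DBL_EPSILON;          /* != 1.0 by definition of epsilon */
    volatile float  f = (float)(1.0 + DBL_EPSILON); /* rounds back to 1.0f */
    return (d != 1.0) && (f == 1.0f);
}
```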
  • That's a good old thread, Searith.

    Where I said: "As my old boss used to say, 'If you think you need floating point to solve the problem, you don't understand the problem.' :)"

    Which I stand by today.
  • ersmith wrote: »
    Propeller GCC certainly supports 64 bit doubles (natively). I've also written stand-alone 64 bit floating point code, although at the moment it only has a C interface. The code itself is in PASM (indeed in a .spin file that's designed to be run through spin2cpp) so it should be very straightforward to adapt it to Spin. That code is attached.

    Sweet, saved me the time
  • Tracy Allen Posts: 6,175
    edited February 7
    A note in memoriam for Cam Thompson, developer of the uM-FPU64 and also an early Propeller supporter, author of the first Prop floating-point library and other OBEX objects (one of which I'm currently using in support of One-Wire devices). He passed in August 2015 at the way too young age of 59.
  • My condolences to Cam Thompson's family and friends.

    It's weird reading that Propeller floating point thread since I was one of the contributors. I eventually did an inverse kinematics project on the Propeller using fixed point with binary radians and the sine table from ROM. I had to write all of the trig functions in terms of sine and inverse sine. Inverse sine was done using a binary search of the table values.
  • Heater. wrote: »
    "If you think you need floating point to solve the problem, you don't understand the problem." :)

    Which I stand by today.

    I think this is true in many cases, but not all. Working with anything in logarithmic or exponential scale, like normalized vector components, is a great fit for float, and much less so for fixed point. Many modern HDR image formats use the mantissa / exponent format because human brightness perception isn't linear.

    So I'd amend that statement to, "you don't understand the problem, or you understand it very well."

    (stirs pot, runs away)
  • Electrodude Posts: 1,146
    edited February 8
    JasonDorie wrote: »
    Heater. wrote: »
    "If you think you need floating point to solve the problem, you don't understand the problem." :)

    Which I stand by today.

    I think this is true in many cases, but not all. Working with anything in logarithmic or exponential scale, like normalized vector components, is a great fit for float, and much less so for fixed point. Many modern HDR image formats use the mantissa / exponent format because human brightness perception isn't linear.

    So I'd amend that statement to, "you don't understand the problem, or you understand it very well."

    (stirs pot, runs away)

    Instead of using a weird format somewhere between logarithmic and linear (i.e. floats), why not just drop the mantissa and have a fractional exponent, i.e. use a fully logarithmic form? I don't think addition of numbers in logarithmic form should be all that much harder than it is for floats - you just have a zero length mantissa.

    How are normalized vector components logarithmic or exponential? -- why is float better for them? Since no individual component's magnitude can exceed 1, I would think fixed point would be a good fit for them because they have a limited range. Sure, if one component is near 1 its angular precision will be very bad, but isn't this made up for by the fact that the other components' angular precision is best at those points?

    The only good use I can think of for floating point is a general-purpose scientific calculator, which inherently doesn't understand your specific problem because it's designed for any problem, and therefore has to resort to floating point.
  • I don't think addition of numbers in logarithmic form should be all that much harder than it is for floats - you just have a zero length mantissa.

    Forgive me for being dense, but how do you add two numbers represented in logarithmic form? I'm not talking about multiplying, which is just adding two logarithms.

  • "If you think you need floating point to solve the problem, you don't understand the problem. If you really do need floating point, then you have a problem you do not understand."

    I always liked this quote. Would you extend this to the use of negative numbers as well? (algorithmically, for a given embedded control task)
    I am the Master, and technology my slave.
  • Heater. Posts: 20,220
    edited February 20
    The_Master,
    Would you extend this to the use of negative numbers as well?
    Depends which day you ask me.

    Clearly negative numbers are an invention of lunatics. How could I possibly have a negative number of sheep in my field, for example?

    Similarly, zero is pretty suspect. If I have no sheep in my field, why do I need a number for it? The whole idea of zero took a long time for humans to accept. The Roman civilization did not have one.

    But then... when my son was very young and I was reading a book about numbers to him at bedtime, he grabbed my arm, looked at me like I was an idiot and said "No daddy, that is not four ducks" (we had just turned the page from 3 ducks), "that is just another duck". I was dumbfounded and did not know what to say to that, because it's true. "It's time for us to sleep," I said, and put the lights out.

    So, as my son pointed out, even positive integers are a fiction.

    At the bottom of it all we only have 0 and 1. Or use whatever other symbol. We have a distinction between one thing and another thing, one state and some other state, etc.

    Spencer-Brown in his mathematics book, The Laws of Form, starts from the single axiom "Let there be a distinction". If I recall correctly he gets up to proving such things as 2 + 2 = 4 by the end of the book. https://en.wikipedia.org/wiki/Laws_of_Form

    So already, just using integers (positive, negative and zero) we are in a world of conceptual pain.

    As a practical matter, in computers, integers are an abstraction that works fine. Integers are 100% accurate and predictable. Until the abstraction breaks down: add two numbers whose result is too big and your computer will lie to you. You've got to be careful about that.

    If negative integers bother you, no problem, just move everything up the number line until they are all positive.

    But then, we have floats....

    Turns out there are infinitely many more real numbers than integers. https://en.wikipedia.org/wiki/Cantor%27s_diagonal_argument

    Trying to map an infinite number of things on to the handful of bits we have in computer variables is the road to hell.

    In short. No. I would not extend that quote from my old boss to the use of negative numbers.
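The "your computer will lie to you" case above can be guarded at run time. A minimal sketch in plain C (GCC and Clang also provide `__builtin_add_overflow` for the same job):

```c
#include <stdint.h>

/* Checked 32-bit signed add: returns 0 and stores the sum when it
   fits, returns 1 when the true result would overflow -- the case
   where an unchecked add silently wraps. The tests rearrange
   a + b > INT32_MAX into forms that cannot themselves overflow. */
int checked_add32(int32_t a, int32_t b, int32_t *sum)
{
    if ((b > 0 && a > INT32_MAX - b) ||
        (b < 0 && a < INT32_MIN - b))
        return 1;          /* would overflow: refuse to lie */
    *sum = a + b;
    return 0;
}
```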