
External FPU?

Hi all...

I’ve been using uM-FPU64s from Micromega, but due to the owner’s sudden death, those chips are no longer available.

Does anyone know of anything similar? I’m designing something new, and it doesn’t make a lot of sense to design something that includes a chip that’s no longer available.

Thanks!

Jamie

Comments

  • Jamie,

    If you design your product with the Propeller, you will not need an external FPU. There are objects for the Prop that handle floating point math in one of its eight internal processors.

    -Phil
  • The FPU Jamie mentions does 64-bit floating point math. Are there any objects that support that?
  • mikeologist Posts: 337
    edited 2018-02-06 18:35
    wmosscrop wrote: »
    The FPU Jamie mentions does 64-bit floating point math, are there any objects that support that?

    Any Cortex-M4 at 48 MHz or faster would be able to do the same job using multiprecision math and its single-instruction 32-bit multiply.
    An M3 is also capable and cheaper, but it will be a bit slower.

    The C compiler for the Propeller should have native support for double-precision floating point math, possibly quad.

    It's a great question; I'll play around with it today and post results today or tomorrow.
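
    Something like this would be a minimal starting point in C (a sketch only; it assumes Propeller GCC's standard math library and printf are linked in as usual):

        #include <stdio.h>
        #include <math.h>

        int main(void)
        {
            // 64-bit doubles, computed in software on the Propeller
            double a = 1.0e15;
            double b = sqrt(2.0);
            printf("a * b = %.15g\n", a * b);
            return 0;
        }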
  • Heater. Posts: 21,230
    I'm wondering why anyone needs 64-bit floats on any embedded system built with the likes of a Propeller.
  • To hopefully avoid a repeat of past conversations...

    http://forums.parallax.com/discussion/130579/floating-point
  • Propeller GCC certainly supports 64 bit doubles (natively). I've also written stand-alone 64 bit floating point code, although at the moment it only has a C interface. The code itself is in PASM (indeed in a .spin file that's designed to be run through spin2cpp) so it should be very straightforward to adapt it to Spin. That code is attached.
  • Heater. Posts: 21,230
    That's a good old thread, Searith.

    Where I said: "As my old boss used to say, 'If you think you need floating point to solve the problem, you don't understand the problem.' :)"

    Which I stand by today.
  • ersmith wrote: »
    Propeller GCC certainly supports 64 bit doubles (natively). I've also written stand-alone 64 bit floating point code, although at the moment it only has a C interface. The code itself is in PASM (indeed in a .spin file that's designed to be run through spin2cpp) so it should be very straightforward to adapt it to Spin. That code is attached.

    Sweet, saved me the time
  • Tracy Allen Posts: 6,656
    edited 2018-02-07 07:05
    A note in memoriam for Cam Thompson, developer of the uM-FPU64 and also an early Propeller supporter, author of the first Prop floating point library and other OBEX objects (one of which I'm currently using in support of One-Wire devices). He passed in August 2015, at the way too young age of 59.
  • My condolences to Cam Thompson's family and friends.

    It's weird reading that Propeller floating point thread, since I was one of the contributors. I eventually did an inverse kinematics project on the Propeller using fixed point with binary radians and the sine table from ROM. I had to write all of the trig functions in terms of sine and inverse sine. Inverse sine was done using a binary search of the table values.
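
    A sketch of that binary-search idea in C (the table name and size here are hypothetical, for illustration only; the actual project used the Propeller's ROM sine table):

        #include <stdint.h>

        // Hypothetical quarter-wave table: sine_table[i] = round(65535 * sin((i / 2048.0) * pi/2)),
        // for i = 0..2048. The values rise monotonically, so they can be binary searched.
        extern const uint16_t sine_table[2049];

        // Inverse sine: given y in 0..65535, return the smallest binary angle i
        // (0..2048 spanning 0..90 degrees) whose table entry reaches y.
        static uint16_t inverse_sine(uint16_t y)
        {
            uint16_t lo = 0, hi = 2048;
            while (lo < hi) {
                uint16_t mid = (lo + hi) / 2;
                if (sine_table[mid] < y)
                    lo = mid + 1;
                else
                    hi = mid;
            }
            return lo;
        }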
  • Heater. wrote: »
    "If you think you need floating point to solve the problem you don't understand the problem":)"

    Which I stand by today.

    I think this is true in many cases, but not all. Working with anything in logarithmic or exponential scale, like normalized vector components, is a great fit for float, and much less so for fixed point. Many modern HDR image formats use the mantissa / exponent format because human brightness perception isn't linear.

    So I'd amend that statement to, "you don't understand the problem, or you understand it very well."

    (stirs pot, runs away)
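
    As a concrete instance of the mantissa/exponent idea above: the Radiance RGBE image format stores an 8-bit mantissa per color channel plus one shared excess-128 exponent byte. A decoding sketch (simplified; real decoders differ slightly in rounding):

        #include <stdint.h>
        #include <math.h>

        // Decode one RGBE pixel: channel = (mantissa / 256) * 2^(exponent - 128).
        void rgbe_decode(const uint8_t rgbe[4], float rgb[3])
        {
            if (rgbe[3] == 0) {             // a zero exponent byte encodes black
                rgb[0] = rgb[1] = rgb[2] = 0.0f;
                return;
            }
            // The extra -8 in the exponent folds in the divide by 256.
            float scale = ldexpf(1.0f, (int)rgbe[3] - (128 + 8));
            rgb[0] = rgbe[0] * scale;
            rgb[1] = rgbe[1] * scale;
            rgb[2] = rgbe[2] * scale;
        }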
  • Electrodude Posts: 1,614
    edited 2018-02-08 20:16
    JasonDorie wrote: »
    Heater. wrote: »
    "If you think you need floating point to solve the problem you don't understand the problem":)"

    Which I stand by today.

    I think this is true in many cases, but not all. Working with anything in logarithmic or exponential scale, like normalized vector components, is a great fit for float, and much less so for fixed point. Many modern HDR image formats use the mantissa / exponent format because human brightness perception isn't linear.

    So I'd amend that statement to, "you don't understand the problem, or you understand it very well."

    (stirs pot, runs away)

    Instead of using a weird format somewhere between logarithmic and linear (i.e. floats), why not just drop the mantissa and have a fractional exponent, i.e. use a fully logarithmic form? I don't think addition of numbers in logarithmic form should be all that much harder than it is for floats - you just have a zero length mantissa.

    How are normalized vector components logarithmic or exponential, and why is float better for them? Since no individual component's magnitude can exceed 1, I would think fixed point would be a good fit, because they have a limited range (a sketch follows at the end of this post). Sure, if one component is near 1 its angular precision will be very bad, but isn't this made up for by the fact that the other components' angular precision is best at those points?

    The only good use I can think of for floating point is a general-purpose scientific calculator, which inherently doesn't understand your specific problem because it's designed for any problem, and therefore has to resort to floating point.
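
    The fixed-point sketch promised above: a 3-vector dot product in signed Q1.15, a hypothetical layout chosen just for illustration:

        #include <stdint.h>

        // Signed Q1.15: value = raw / 32768, so normalized components fit in [-1, +1).
        typedef int16_t q15_t;

        q15_t dot3_q15(const q15_t a[3], const q15_t b[3])
        {
            int32_t acc = 0;
            for (int i = 0; i < 3; i++)
                acc += (int32_t)a[i] * b[i];   // each product is Q2.30; unit-length inputs keep the sum in range
            int32_t r = acc >> 15;             // rescale back to Q1.15
            if (r > 32767) r = 32767;          // clamp: exactly +1.0 is not representable
            return (q15_t)r;
        }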
  • I don't think addition of numbers in logarithmic form should be all that much harder than it is for floats - you just have a zero length mantissa.

    Forgive me for being dense, but how do you add two numbers represented in logarithmic form? I'm not talking about multiplying, which is the result of adding two logarithms.

  • "If you think you need floating point to solve the problem, you don't understand the problem. If you really do need floating point then you have a problem you do not understand"

    I always liked this quote. Would you extend this to the use of negative numbers as well? (algorithmically, for a given embedded control task)
  • Heater. Posts: 21,230
    edited 2018-02-20 21:31
    The_Master,
    Would you extend this to the use of negative numbers as well?
    Depends which day you ask me.

    Clearly negative numbers are an invention of lunatics. How, for example, could I possibly have a negative number of sheep in my field?

    Similarly, zero is pretty suspect. If I have no sheep in my field, why do I need a number for it? The whole idea of zero took a long time for humans to accept; the Roman civilization did not have one.

    But then... when my son was very young and I was reading a book about numbers to him at bedtime, he grabbed my arm, looked at me like I was an idiot and said, "No daddy, that is not four ducks (we had just turned the page from 3 ducks), that is just another duck". I was dumbfounded and did not know what to say to that, because it's true. "It's time for us to sleep," I said, and put the lights out.

    So, as my son pointed out, even positive integers are a fiction.

    At the bottom of it all we only have 0 and 1, or whatever other symbols you like. We have a distinction between one thing and another thing, one state and some other state, etc.

    Spencer-Brown in his mathematics book, The Laws of Form, starts from the single axiom "Let there be a distinction". If I recall correctly he gets up to proving such things as 2 + 2 = 4 by the end of the book. https://en.wikipedia.org/wiki/Laws_of_Form

    So already, just using integers (positive, negative and zero) we are in a world of conceptual pain.

    As a practical matter, in computers, integers are an abstraction that works fine. Integers are 100% accurate and predictable. Until the abstraction breaks down: add two numbers whose result is too big and your computer will lie to you. Got to be careful about that (a sketch of one guard follows at the end of this post).

    If negative integers bother you, no problem, just move everything up the number line until they are all positive.

    But then, we have floats....

    Turns out there are infinitely many more real numbers than integers. https://en.wikipedia.org/wiki/Kurt_Gödel

    Trying to map an infinite number of things on to the handful of bits we have in computer variables is the road to hell.

    In short. No. I would not extend that quote from my old boss to the use of negative numbers.
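
    And the promised overflow guard, a sketch assuming a GCC- or Clang-style compiler that provides the checked-arithmetic builtins:

        #include <stdio.h>
        #include <stdint.h>

        int main(void)
        {
            int32_t a = 2000000000, b = 2000000000, sum;
            // Returns nonzero if the true sum does not fit in 'sum',
            // instead of silently wrapping around.
            if (__builtin_add_overflow(a, b, &sum))
                printf("overflow: the integer abstraction just broke down\n");
            else
                printf("sum = %ld\n", (long)sum);
            return 0;
        }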
  • Electrodude Posts: 1,614
    edited 2018-02-22 04:25
    wmosscrop wrote: »
    I don't think addition of numbers in logarithmic form should be all that much harder than it is for floats - you just have a zero length mantissa.

    Forgive me for being dense, but how do you add two numbers represented in logarithmic form? I'm not talking about multiplying, which is the result of adding two logarithms.

    Floats kind of already do this when you add them: you take the one with the lower exponent and shift its mantissa right, incrementing the exponent, until the two exponents match, then you add the mantissas. Since a purely logarithmic representation of a number is just a float with a zero-bit mantissa and a base less than two (unless you really don't care about precision at all), I intuitively thought that it'd be similar.

    It turns out that it's not so easy - you need a way to deal with the non-unity mantissa of the input that got shifted. I just did some algebra and found that you'd need an efficient way to compute log(1+exp(x)). Fortunately, that function approaches zero for very negative x and approaches x for very positive x, and a good enough approximation for the curved middle part probably exists. So this is possible, but is probably still harder than it is for normal floats with a non-zero-bit mantissa (a sketch follows at the end of this post).

    But what practical problem actually needs numbers with exponential range? As I stated in my previous comment, I think Jason Dorie's normalized vectors are a poor example.
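
    The sketch of that log-domain addition in C, using the factoring above (the cutoff is rough, chosen just for illustration):

        #include <math.h>

        // Add two values stored as natural logs: returns log(exp(la) + exp(lb)),
        // computed as max + log(1 + exp(min - max)) so exp() never overflows.
        double log_add(double la, double lb)
        {
            double hi = (la > lb) ? la : lb;
            double lo = (la > lb) ? lb : la;
            double d  = lo - hi;              // always <= 0
            if (d < -40.0)                    // exp(d) is below double precision here
                return hi;                    // the smaller term vanishes entirely
            return hi + log1p(exp(d));
        }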
  • Floats are not a logarithmic notation. The so-called mantissa is simply the value of the number shifted by the amount of the characteristic. There's no logarithm involved.

    -Phil
  • Heater. Posts: 21,230
    Wait a minute.

    If I start incrementing the exponent field of a floating point number then the value represented by that float rises exponentially.

    If I were to add together the exponent fields of two floats (which have the same mantissa) I would be doing a multiply.

    That sounds like a logarithmic relationship to me, depending which way round you look at it. There is a reason it is called the exponent: the inverse of exponentiation is the logarithm.

    As you say though, as a practical matter it just comes down to shifting.
  • But what practical problem actually needs numbers with exponential range? As I stated in my previous comment, I think Jason Dorie's normalized vectors are a poor example.

    OK.

    Values that can range from +/-1E-15 to +/-1E15, with at least 9-digit accuracy.

    This is for a real-world application in which users can, in effect, create data fields for both large quantities and small measurements. The application itself does not have any way of "knowing" what type of data is being stored. It has to be completely generic.
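
    For what it's worth, that spec rules out 32-bit floats on its own, since binary32 carries only about 7 significant digits. A quick C illustration:

        #include <stdio.h>

        int main(void)
        {
            float  f = 123456789e-15f;    // binary32: ~7 significant digits
            double d = 123456789e-15;     // binary64: ~15-16 significant digits
            printf("float : %.9g\n", f);  // the trailing digits are already corrupted
            printf("double: %.9g\n", d);  // all 9 digits survive
            return 0;
        }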

  • Heater. wrote: »
    If I were to add together the exponent fields of two floats (which have the same mantissa) I would be doing a multiply.
    Um, no. You'd certainly get the order of magnitude of the product by adding the exponents. But you still have to explicitly multiply the "mantissas" to get the final result -- unless, of course, both "mantissas" were equal to one.

    -Phil
  • Wait a minute.
    Um, no.

    Way above my pay grade, but this might get interesting :)
  • Heater. Posts: 21,230
    edited 2018-02-22 20:54
    Is my high school maths failing me in my dotage?

    If I have two numbers, x to the power n and x to the power m, they can be written as x^n and x^m.

    Then if I want to multiply them I have x^n * x^m,
    which is x^(n + m).

    For example:

    9 ^ 3 = 729
    9 ^ 10 = 3486784401

    9 ^ 3 * 9 ^10 = 2541865828329
    9 ^ (3 +10) = 2541865828329

    x is the mantissa. n and m are the exponents.

    Clearly I have multiplied them by simply adding the exponents.

    This all works provided the mantissas of the two numbers being multiplied are the same, which is what I specified above.

    The case of both mantissas being equal to one is great: the answer is always 1, no matter what the exponents!

    Of course, if the mantissas are different then, as you say, there is some multiplying to do.

  • Yanomani Posts: 1,524
    edited 2018-02-22 22:53
    Here I must agree with Heater, and I'm adding an ln-based example just to throw logarithms into the mix:

    e ^ ln(83) = 83

    e ^ ln(57) = 57

    e ^ (ln(83) + ln(57)) = 4731

    83 * 57 = 4731
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2018-02-22 23:36
    Heater,

    Yes, but floating-point numbers aren't expressed that way. It's more like a * 2^m and b * 2^n. Here's their product:

    a * b * 2^(m+n).

    So even if a and b are the same, they still have to be multiplied.
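
    That decomposition maps directly onto C's frexp/ldexp, which expose exactly those fields. A quick sketch:

        #include <math.h>
        #include <stdio.h>

        int main(void)
        {
            int m, n;
            double a = frexp(6.5, &m);    // 6.5  = 0.8125 * 2^3
            double b = frexp(12.0, &n);   // 12.0 = 0.75   * 2^4
            // The exponents add, but the mantissas still have to be multiplied:
            double product = ldexp(a * b, m + n);
            printf("%g\n", product);      // prints 78
            return 0;
        }
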
    ___________________

    Yanomani,

    Yes, but we're talking about floating-point numbers, not logarithms.

    -Phil