External FPU?
jcpole
Posts: 92
in Accessories
Hi all...
I’ve been using uM-FPU64s from Micromega, but due to the owner’s sudden death, those chips are no longer available.
Does anyone know of anything similar? I’m designing something new, and it doesn’t make a lot of sense to design something that includes a chip that’s no longer available.
Thanks!
Jamie
Comments
If you design your product with the Propeller, you will not need an external FPU. There are objects for the Prop that handle floating point math in one of its eight internal processors.
-Phil
Any Cortex M4 at 48 MHz or faster would be able to do the same job using multiprecision math and the single-instruction 32-bit multiply.
An M3 is also capable and cheaper, but it will be a bit slower.
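As a rough sketch of what that single-instruction multiply buys you (plain C, not tied to any vendor library; the fix16 names are made up for the example), a 16.16 fixed-point multiply only needs a 32x32->64 multiply and a shift:

#include <stdint.h>

/* 16.16 fixed-point value: 16 integer bits, 16 fractional bits. */
typedef int32_t fix16_t;

/* Multiply two 16.16 values. The 32x32->64 multiply maps to a single
   SMULL instruction on Cortex-M3/M4, so the whole thing compiles to a
   multiply plus a shift. */
static inline fix16_t fix16_mul(fix16_t a, fix16_t b)
{
    int64_t product = (int64_t)a * (int64_t)b;  /* 32.32 intermediate */
    return (fix16_t)(product >> 16);            /* back to 16.16 */
}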
The C compiler for the Propeller should have native support for double-precision floating point math, possibly quad.
It's a great question; I will play around with it today and post results today or tomorrow.
http://forums.parallax.com/discussion/130579/floating-point
Where I said: "As my old boss used to say, 'If you think you need floating point to solve the problem, you don't understand the problem.' :)"
Which I stand by today.
Sweet, saved me the time
It's weird reading that Propeller floating point thread since I was one of the contributors. I eventually did an inverse kinematics project on the Propeller using fixed point with binary radians and the sine table from ROM. I had to write all of the trig functions in terms of sine and inverse sine. Inverse sine was done using a binary search of the table values.
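For anyone curious what that table-based inverse sine looks like, here is a minimal sketch in C. It assumes a hypothetical quarter-wave sine table (monotonically increasing entries covering 0 to 90 degrees, values scaled 0..65535), roughly in the spirit of the Propeller's ROM table; the table size and names are made up for the example.

#include <stdint.h>

#define QTR_STEPS 2048                           /* angle steps for 0..90 degrees */
extern const uint16_t sine_table[QTR_STEPS + 1]; /* sine scaled to 0..65535       */

/* Inverse sine by binary search of the monotonic quarter-wave table.
   Input:  s = sine value scaled to 0..65535
   Output: angle in "binary radians", 0..QTR_STEPS maps to 0..90 degrees */
uint16_t asin_table(uint16_t s)
{
    uint16_t lo = 0, hi = QTR_STEPS;
    while (lo < hi) {
        uint16_t mid = (uint16_t)((lo + hi) / 2);
        if (sine_table[mid] < s)
            lo = mid + 1;   /* answer lies above mid */
        else
            hi = mid;       /* answer is mid or below */
    }
    return lo;              /* first index whose sine is >= s */
}

The same search serves the other inverse functions once they are rewritten in terms of sine, which is essentially what the post above describes.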
I think this is true in many cases, but not all. Working with anything in logarithmic or exponential scale, like normalized vector components, is a great fit for float, and much less so for fixed point. Many modern HDR image formats use the mantissa / exponent format because human brightness perception isn't linear.
So I'd amend that statement to, "you don't understand the problem, or you understand it very well."
(stirs pot, runs away)
Instead of using a weird format somewhere between logarithmic and linear (i.e. floats), why not just drop the mantissa and have a fractional exponent, i.e. use a fully logarithmic form? I don't think addition of numbers in logarithmic form should be all that much harder than it is for floats - you just have a zero length mantissa.
How are normalized vector components logarithmic or exponential? -- why is float better for them? Since no individual component's magnitude can exceed 1, I would think fixed point would be a good fit for them because they have a limited range. Sure, if one component is near 1 its angular precision will be very bad, but isn't this made up for by the fact that the other components' angular precision is best at those points?
The only good use I can think of for floating point is a general-purpose scientific calculator, which inherently doesn't understand your specific problem because it's designed for any problem, and therefore has to resort to floating point.
Forgive me for being dense, but how do you add two numbers represented in logarithmic form? I'm not talking about multiplying, which is the result of adding two logarithms.
"If you think you need floating point to solve the problem, you don't understand the problem. If you really do need floating point then you have a problem you do not understand"
I always liked this quote. Would you extend this to the use of negative numbers as well? (algorithmically, for a given embedded control task)
Clearly negative numbers are an invention of lunatics. How could I possibly have a negative number of sheep in my field? For example.
Similarly, zero is pretty suspect. If I have no sheep in my field why do I need a number for it? The whole idea of zero took a long time for humans to accept. The Roman civilization did not have one.
But then... when my son was very young and I was reading a book about numbers to him at bedtime, he grabbed my arm, looked at me like I'm an idiot and said "No daddy, that is not four ducks (we had just turned the page from 3 ducks), that is just another duck". I was dumbfounded, did not know what to say to that, because it's true. "It's time for us to sleep" I said, and put the lights out.
So, as my son pointed out, even positive integers are a fiction.
At the bottom of it all we only have 0 and 1. Or use whatever other symbol. We have a distinction between one thing and another thing, one state and some other state, etc.
Spencer-Brown in his mathematics book, The Laws of Form, starts from the single axiom "Let there be a distinction". If I recall correctly he gets up to proving such things as 2 + 2 = 4 by the end of the book. https://en.wikipedia.org/wiki/Laws_of_Form
So already, just using integers (positive, negative and zero) we are in a world of conceptual pain.
As a practical matter, in computers, integers are an abstraction that works fine. Integers are 100% accurate and predictable. Until the abstraction breaks down: add two numbers whose result is too big and your computer will lie to you. Got to be careful about that.
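A hedged illustration of that "lie to you" case in C, assuming GCC or Clang (which provide __builtin_add_overflow; the wrapper name is just for the example):

#include <stdint.h>
#include <stdbool.h>

/* Add two 32-bit signed integers, reporting whether the true sum fits.
   Signed overflow is undefined behaviour in C, so the check uses the
   compiler builtin rather than testing after the fact. */
bool add_checked(int32_t a, int32_t b, int32_t *sum)
{
    return !__builtin_add_overflow(a, b, sum);  /* true = *sum is valid */
}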
If negative integers bother you, no problem, just move everything up the number line until they are all positive.
But then, we have floats....
Turns out there are infinitely many more real numbers than integers (Cantor's diagonal argument). https://en.wikipedia.org/wiki/Cantor%27s_diagonal_argument
Trying to map an infinite number of things on to the handful of bits we have in computer variables is the road to hell.
In short. No. I would not extend that quote from my old boss to the use of negative numbers.
Floats kind of already do this when you add them: you take the one with the lower exponent and shift its mantissa right, incrementing the exponent, until the exponents are the same, then you add the mantissas. Since a purely logarithmic representation of a number is just a float with a zero-bit mantissa and a base less than two (unless you really don't care about precision at all), I intuitively thought that it'd be similar.
It turns out that it's not so easy - you need to find a way to deal with the non-unity mantissa on the input that got shifted. I just did some algebra and found that you'd need an efficient way to compute log(1+exp(x)). Fortunately, that function is close to zero for large negative x and close to x for large positive x, and a good enough approximation for the curved middle part probably exists. So this is possible, but is probably still harder than it is for normal floats with a non-zero-bit mantissa.
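Here is a minimal sketch of that addition in C, assuming the values are stored as natural logarithms in a double (a real logarithmic number system would store a fixed-point log and replace log1p/exp with a small table or approximation, but the structure is the same):

#include <math.h>

/* Add two positive numbers that are stored as natural logarithms.
   log(a + b) = log(a) + log(1 + exp(log(b) - log(a)))
   Keeping log_a as the larger operand makes the exp() argument <= 0,
   so it can never overflow. */
double lns_add(double log_a, double log_b)
{
    if (log_a < log_b) {        /* swap so log_a >= log_b */
        double t = log_a;
        log_a = log_b;
        log_b = t;
    }
    return log_a + log1p(exp(log_b - log_a));
}

Multiplication in the same representation is just log_a + log_b, which is the attraction of the format in the first place.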
But what practical problem actually needs numbers with exponential range? As I stated in my previous comment, I think Jason Dorie's normalized vectors are a poor example.
-Phil
If I start incrementing the exponent field of a floating point number then the value represented by that float rises exponentially.
If I were to add together the exponent fields of two floats (which have the same mantissa) I would be doing a multiply.
That sounds like a logarithmic relationship to me. Depending which way round you look at it. There is a reason it is called the exponent. The inverse of exponentiation is the logarithm.
As you say though, as a practical matter it just comes down to shifting.
OK.
Values that can range from +/-1E15 to +/-1E-15 with at least 9 digit accuracy.
This is for a real-world application in which users can, in effect, create data fields for both large quantities and small measurements. The application itself does not have any way of "knowing" what type of data is being stored. It has to be completely generic.
-Phil
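For scale, that requirement fits inside an ordinary IEEE-754 double, which carries 15-17 significant decimal digits over a much wider exponent range than 1E-15..1E15, so software double-precision floats (as mentioned above for the Propeller C compiler) would cover it. A quick check with made-up sample values:

#include <stdio.h>

int main(void)
{
    /* Representative extremes of the stated range (sample values only). */
    double big   = 9.87654321e14;    /* 9 significant digits, near +1E15 */
    double small = 1.23456789e-15;   /* 9 significant digits, near 1E-15 */

    /* %.8e prints 9 significant digits; a double preserves all of them. */
    printf("%.8e\n%.8e\n", big, small);
    return 0;
}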
Way above my pay grade, but this might get interesting
If I have two numbers x to the power n and x to the power m. Perhaps written as x^n and x^m.
Then if I want to multiply them I have x^n * x^m
Which is x ^ (n + m)
For example:
9 ^ 3 = 729
9 ^ 10 = 3486784401
9 ^ 3 * 9 ^10 = 2541865828329
9 ^ (3 +10) = 2541865828329
x is the mantissa. n and m are the exponents.
Clearly I have multiplied them by simply adding the exponents.
This all works provided the mantissas of the two numbers being multiplied are the same. Which is what I specified above.
The case of both mantissas being equal to one is great. The answer is always 1. No matter what the exponents!
Of course, if the mantissas are different then, as you say, there is some multiplying to do.
e ^ ln(83) = 83
e ^ ln(57) = 57
e ^ (ln(83) + ln(57)) = 4731
83 * 57 = 4731
Yes, but floating-point numbers aren't expressed that way. It's more like a * 2^m and b * 2^n. Here's their product:
a * b * 2^(m+n).
So even if a and b are the same, they still have to be multiplied.
___________________
Yanomani,
Yes, but we're talking about floating-point numbers, not logarithms.
-Phil