New patent received for "Solving the Floating Point Error Problem"
thej
Posts: 232
in Propeller 2
Comments
No, I got it right the first time. Memory protection is indeed a good example: In a real-time environment, an exception is the last thing you want triggered. Exceptions of any type should always be left to debugging alone when it comes to real-time processing.
So, his proposal is fine for simulations but useless for deployment.
Back when I worked on military radar, huge phased array radars that could get aircraft range, bearing and altitude out to 250 miles, everything was done with fixed point arithmetic.
Once again I have to quote the project manager on that project:
"If you think you need to use floating point to solve the problem then you don't understand the problem. If you do need floating point to solve the problem then you have a problem you don't understand"
So floating point has its point, it's just used way too often for the stated reasons.
And there was this other guy @Heater. found, who made a very interesting alternative to standard floating point representation. I forgot the name, but it was quite understandable.
Enjoy!
Mike
That sounds like the UNUM stuff I posted by AMD chief engineer John Gustafson.
http://forums.parallax.com/discussion/166008/john-gustafson-presents-beyond-floating-point-next-generation-computer-arithmetic#latest
J
but, but, but, but... I've tried to find the conversion from light years to centimeters... it depends on whose calculator you use :)
Seems to be about 60 bits. Seems like for large numbers you could use multiples of light years as the "power" and then just use 256 bits for whatever is left over.
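That 60-bit figure checks out. A quick sanity check in Python (the light-year length in centimetres is the IAU value):

```python
import math

CM_PER_LIGHT_YEAR = 9.4607304725808e17  # IAU light year, in centimetres

# Bits needed to hold one light year as an integer count of centimetres.
bits = math.ceil(math.log2(CM_PER_LIGHT_YEAR))
print(bits)  # 60
```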
IF we want to have more precision in our physics... we absolutely need more precision in our numbers.
Ban floating point numbers!!!
all together now:
Floating point has analytical advantages in its ability to naturally traverse scales, but it isn't a do-everything magic bullet.
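You can see that "traversing scales" trade-off in a couple of lines of Python: a double keeps roughly constant *relative* precision, so its absolute step size (the ulp, the gap to the next representable value) grows with magnitude, whereas a fixed-point format has one constant absolute step over a much smaller range:

```python
import math

# The gap between adjacent doubles (one unit in the last place) scales
# with the magnitude: constant relative precision, varying absolute precision.
for x in (1.0, 1e8, 1e16):
    print(x, math.ulp(x))
# 1.0   -> ~2.2e-16
# 1e8   -> ~1.5e-8
# 1e16  -> 2.0  (adjacent doubles are 2 apart up here!)

# A 32-bit fixed-point format with 16 fractional bits, by contrast, has a
# constant absolute step of 2**-16 everywhere in its range.
print(2.0 ** -16)
```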
How does that help?
This is not a new idea, although an efficient way of doing it in hardware may be.
John Gustafson mentions it in his discussion of unums:
As far as I understand him the problem with this is that after any significant calculation your error estimate is so wide that the result you have is useless.
Perhaps it helps to know that in advance if you are not thinking hard about your calculation.
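That widening is easy to demonstrate with a toy interval class (a sketch only; real unum/interval implementations are far more careful, e.g. rounding the low bound down and the high bound up):

```python
# Toy interval arithmetic, just to show how error bounds balloon.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Interval(min(p), max(p))

    def width(self):
        return self.hi - self.lo

x = Interval(0.9, 1.1)   # a value known to within +/- 0.1
y = x
for _ in range(5):
    y = y * y            # five squarings: y now bounds x**32
print(y.lo, y.hi)        # roughly [0.034, 21.1] -- a useless bound

# The classic "dependency problem": x - x is exactly 0, but interval
# arithmetic forgets both operands are the same variable.
z = x - x
print(z.lo, z.hi)        # ~[-0.2, 0.2], not 0
```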
j
Or, it's far less accurate than you would assume.
The Patriot can also be used to take out aircraft, so it not only protects but can also take lives. However, an intercept of a missile (a SCUD, in the 1990s) is considered a success only when the Patriot takes out the warhead along with the rocket section. The issue was that the Patriot could take out the rocket portion but leave the warhead intact, which could then fall on friendly troops or populated areas. This happened too often, so more precision was needed to ensure the warhead was taken out.
Exactly. So use fixed-point, since it gets you more precision in the same variable size (at the expense of dynamic range, which isn't needed here).
Actually, it was a fixed point issue that resulted in the correction as is stated in the "1991 Patriot missile failure". However, it is a bit debatable whether or not fixed-point is more accurate than floating point. But, floating point with a processor that supports it will have better performance.
"It turns out that the cause was an inaccurate calculation of the time since boot due to computer arithmetic errors. Specifically, the time in tenths of second as measured by the system's internal clock was multiplied by 1/10 to produce the time in seconds. This calculation was performed using a 24 bit fixed point register. In particular, the value 1/10, which has a non-terminating binary expansion, was chopped at 24 bits after the radix point. The small chopping error, when multiplied by the large number giving the time in tenths of a second, led to a significant error."
So, I conclude that the problem was faulty arithmetic and it makes no difference whether they were using fixed point or floating point. Had they worked with a power of 2 rather than decimal 1/10 there would have been no loss of accuracy.
Basically they did not have enough bits anyway. A 24-bit integer counting tenths of a second wraps after about 466 hours (roughly 19 days). So they had to reboot the thing regularly to be sure the time was right.
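The chopping error from the quote is easy to reproduce exactly with rational arithmetic. A sketch, following the "chopped at 24 bits after the radix point" description above rather than the exact register layout of the deployed system:

```python
from fractions import Fraction
from math import floor

tenth = Fraction(1, 10)
# Chop 1/10 (non-terminating in binary) to 24 bits after the binary point.
chopped = Fraction(floor(tenth * 2**24), 2**24)
err_per_tick = tenth - chopped      # 3/83886080, about 3.6e-8 seconds

# After 100 hours of uptime at 10 ticks per second:
ticks = 100 * 3600 * 10
drift = err_per_tick * ticks        # ~0.13 seconds of accumulated clock error
print(float(err_per_tick), float(drift))
```

A SCUD closes at roughly 1.7 km/s, so even a tenth of a second of clock error puts the tracking gate a couple of hundred metres off the target.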
You know exactly how many bits you have there and what they are used for.
LOL, but it seems they have a companion also swimming against current trends.
He talks about some new advances he has made and about it being turned into real hardware, but the most interesting thing he said was at the end.
At 59m18s he says, "The standard will be a lot like RISC-V. ...everything I do is open source by the way. There's no intellectual property ownership whatsoever here. It's given away under the MIT open source license so if you wanted to give it away for free yourself, if you want to modify it and give it away, that's fine. If you want to sell it, that's fine. Just don't sue me." and "I'm giving it away like an academic."
Sounds like a nice addition to P3 if you ask me.
J
unums/posits are obviously going to entirely replace floats for computing reals in the future. It's a no-brainer.
Out of context, that reads plain wrong. He is talking about the delivery/copyright model rather than the technology/architecture.
It'll certainly be interesting to compare against the CORDIC for logic count. Not that he gave any figures but it sounds like it might be that effective.