Why is military equipment that's protecting human lives using floating-point math? The planet is only so big. A 3D (lat, long, alt) tuple of 32-bit fixed-point ints has enough precision to represent the position of a missile anywhere on the planet to a fraction of an inch.
That said, I suppose it's not unlike memory protection, where a significant amount of formalised software infrastructure has to be built around it for it to function usefully as a production solution.
No, I got it right the first time. Memory protection is indeed a good example: in a real-time environment, an exception is the last thing you want triggered. Exceptions of any kind should be reserved for debugging when it comes to real-time processing.
So, his proposal is fine for simulations but useless for deployment.
Almost exactly down to cm if my arithmetic is good.
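That claim is easy to sanity-check. A minimal sketch in Python, assuming the 32-bit lat/long values are scaled so that 2^32 steps cover one full revolution, and using a 40,075 km equatorial circumference (both assumptions are mine, not from the posts above):

```python
# Worst-case step size of a 32-bit fixed-point latitude/longitude on Earth.
EARTH_CIRCUMFERENCE_M = 40_075_000        # metres around the equator (assumed figure)

step_m = EARTH_CIRCUMFERENCE_M / 2**32    # 2**32 steps spread over a full revolution
print(f"{step_m * 1000:.2f} mm")          # ~9.33 mm
print(f"{step_m / 0.0254:.2f} inches")    # ~0.37 inches, i.e. a fraction of an inch
```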
Back when I worked on military radar, huge phased array radars that could get aircraft range, bearing and altitude out to 250 miles, everything was done with fixed point arithmetic.
Once again I have to quote the project manager on that project:
"If you think you need to use floating point to solve the problem then you don't understand the problem. If you do need floating point to solve the problem then you have a problem you don't understand"
This might be true on Earth, but if you start calculating distances between planets or solar systems you run out of digits pretty fast while keeping a reasonable accuracy.
So floating point has its point, it's just used way too often for the stated reasons.
And there was this other guy @Heater. found, who made a very interesting alternative to the standard floating point representation. Forgot the name but it was quite understandable.
Enjoy!
Mike
That sounds like the UNUM stuff I posted by AMD chief engineer John Gustafson.
http://forums.parallax.com/discussion/166008/john-gustafson-presents-beyond-floating-point-next-generation-computer-arithmetic#latest
J
but, but, but, but... I've tried to find the conversion from light years to centimeters... it depends on whose calculator you use. :)
Seems to be about 60 bits. Seems like for large numbers you could use multiples of light years as the "power" and then just use 256 bits for whatever is left over.
IF we want to have more precision in our physics... we absolutely need more precision in our numbers.
Ban floating point numbers!!!
All together now:
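For what it's worth, here is a quick check of that 60-bit figure, as a sketch (the speed of light and the Julian-year length are my assumed definitions; different calculators mostly disagree about the year):

```python
import math

# One light year in centimetres: c * one Julian year.
LIGHT_YEAR_CM = 299_792_458 * 86_400 * 365.25 * 100

print(f"{LIGHT_YEAR_CM:.4e} cm")               # ~9.4607e+17 cm
print(f"{math.log2(LIGHT_YEAR_CM):.1f} bits")  # ~59.7 bits, so "about 60" checks out
```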
Floats fail in those scenarios even quicker than ints do, because all the spatial motions are accumulating small sums at the defined smallest scale while being applied to absolute coordinates that span the full scale of the representation. Integers are the natural fit for this.
Floating point has analytical advantages in its ability to naturally traverse scales, but it isn't a do-everything magic bullet.
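A tiny illustration of that accumulation problem, as a sketch (the magnitudes are made up, roughly "centimetres across a planetary system"; nothing here is from the original posts):

```python
# One million small position updates applied to a large absolute coordinate.
coord_float = 1.0e17          # absolute position as a 64-bit float
coord_int   = 10**17          # same position as an integer count of centimetres

for _ in range(1_000_000):
    coord_float += 1.0        # below the float64 ULP at this magnitude (16 cm): absorbed
    coord_int   += 1          # exact

print(coord_float - 1.0e17)   # 0.0     -> the million updates simply vanished
print(coord_int   - 10**17)   # 1000000
```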
So the "invention" under discussion is supposed to keep track of your error as your calculation proceeds.
How does that help?
This is not a new idea, although an efficient way of doing it in hardware may be.
John Gustafson mentions it in his discussion of unums:
As far as I understand him the problem with this is that after any significant calculation your error estimate is so wide that the result you have is useless.
Perhaps it helps to know that in advance if you are not thinking hard about your calculation.
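To make that concrete, here is a minimal sketch of the "carry your error bounds along" idea using naive interval arithmetic (the iterated-map example is mine, not from the patent or from Gustafson's talk):

```python
# Naive interval arithmetic: carry a [low, high] bound through a calculation.

def isub(a, b):               # [a] - [b]
    return (a[0] - b[1], a[1] - b[0])

def imul(a, b):               # [a] * [b]
    p = (a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1])
    return (min(p), max(p))

x = (0.5 - 1e-9, 0.5 + 1e-9)  # a value known to +/- 1e-9
for _ in range(20):
    x = imul((3.75, 3.75), imul(x, isub((1.0, 1.0), x)))   # x <- 3.75 * x * (1 - x)

print(x)   # the true value always stays between 0 and 1, but the tracked
           # bounds have exploded to astronomical width: "guaranteed", yet useless
```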
I think Mr. Alan Jorgensen's patent is on shaky ground. It could be shown that Gustafson has prior art on some (all?) of his concepts, and if it were ever challenged the patent could be invalidated.
...As far as I understand him the problem with this is that after any significant calculation your error estimate is so wide that the result you have is useless.
Or, it's far less accurate than you would assume.
Why is military equipment that's protecting human lives using floating-point math? The planet is only so big. A 3D (lat, long, alt) tuple of 32-bit fixed-point ints has enough precision to represent the position of a missile anywhere on the planet to a fraction of an inch.
The Patriot can also be used to take out aircraft, so it not only protects but can also take lives. However, an intercept of a missile, such as a SCUD in the 1990s, is considered a success only when the Patriot takes out the warhead along with the rocket section. The issue was that the Patriot could take out the rocket portion but leave the warhead intact, which could then fall on friendly troops or populated areas. This happened too often, so more precision was needed to ensure the warhead was taken out.
Exactly. So use fixed-point, since it gets you more precision in the same variable size (at the expense of dynamic range, which isn't needed here).
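For a rough feel of that trade-off at 32 bits, a sketch in Python (the ±20,000 km coordinate range is my own example, not anything from the posts above):

```python
import math

# Worst-case step size over a +/- 20,000 km coordinate range.
RANGE_M = 20_000_000.0

# 32-bit float: 24-bit significand, so the spacing near x is 2**(exponent(x) - 23).
float32_step = 2.0 ** (math.floor(math.log2(RANGE_M)) - 23)

# 32-bit fixed point: 2**32 uniform steps across the whole +/- 20,000 km span.
fixed32_step = (2 * RANGE_M) / 2**32

print(f"float32 step near the range edge : {float32_step:.3f} m")   # 2.000 m
print(f"32-bit fixed-point step          : {fixed32_step:.4f} m")   # ~0.0093 m
```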
Actually, it was a fixed point issue that resulted in the correction, as stated in the "1991 Patriot missile failure" report. However, it is a bit debatable whether or not fixed point is more accurate than floating point. But floating point, with a processor that supports it, will have better performance.
"It turns out that the cause was an inaccurate calculation of the time since boot due to computer arithmetic errors. Specifically, the time in tenths of second as measured by the system's internal clock was multiplied by 1/10 to produce the time in seconds. This calculation was performed using a 24 bit fixed point register. In particular, the value 1/10, which has a non-terminating binary expansion, was chopped at 24 bits after the radix point. The small chopping error, when multiplied by the large number giving the time in tenths of a second, led to a significant error."
So, I conclude that the problem was faulty arithmetic and it makes no difference whether they were using fixed point or floating point. Had they worked with a power of 2 rather than decimal 10 there would have been no loss of accuracy.
Basically they did not have enough bits anyway. A 24-bit count of tenths of a second overflows after only about 466 hours. So they had to reboot the thing every day to be sure the time was right.
You know exactly how many bits you have there and what they are used for.
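For reference, the quoted arithmetic is easy to reproduce. A sketch in Python (the 100-hour uptime and the ~1,676 m/s closing speed are commonly cited figures for the incident, not something from this thread, and the exact register layout is an assumption; published analyses, which effectively assume one fewer fractional bit, arrive at roughly 0.34 s and several hundred metres):

```python
from fractions import Fraction

# "1/10 ... chopped at 24 bits after the radix point", accumulated over 100 hours.
FRACTION_BITS = 24
stored_tenth   = Fraction(int(Fraction(1, 10) * 2**FRACTION_BITS), 2**FRACTION_BITS)
error_per_tick = Fraction(1, 10) - stored_tenth      # lost every tenth of a second

ticks = 100 * 3600 * 10                              # tenths of a second in 100 hours
clock_error_s = float(error_per_tick * ticks)

print(f"error in stored 0.1      : {float(error_per_tick):.3e}")
print(f"clock error after 100 h  : {clock_error_s:.3f} s")
print(f"range error at 1676 m/s  : {clock_error_s * 1676:.0f} m")   # hundreds of metres
```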
If I remember correctly, the Queen's Award tried to make the UK great again (yes, it once WAS great) and allowed the work on "Formal Methods Applied to a Floating Point Number System" by Geoff Barrett, part of the effort to develop the floating-point unit for the T800 transputer. Using formal methods it could be proven that there is no bug in the FPU. But while the FPU computes without throwing errors, due to the lack of energy from burning coal the transputer was no success story, and the British decided to focus on their navel and to leave the EU, not realizing that a navel makes no sense without the surrounding body.
If I remember correctly, the Queen's Award tried to make the UK great again (yes, it once WAS great) .......
........ the British decided to focus on their navel and to leave the EU, not realizing that a navel makes no sense without the surrounding body.
LOL, but it seems they have a companion also swimming against current trends.
If you haven't watched the video Heater posted, you should!
He talks about some new advances he has made and about it being turned into real hardware, but the most interesting thing he said was at the end.
At 59m18s he says, "The standard will be a lot like RISC-V. ...everything I do is open source by the way. There's no intellectual property ownership what so ever here. It's given away under the MIT open source license so if you wanted to give it away for free yourself, if you want to modify it and give it away, that's fine. If you want to sell it, that's fine. Just don't sue me." and "I'm giving it away like an academic."
Thank you for prompting me to check it out. It really does have that snug 2's complement fit. I note at the end he mentions his examples were all using his least accurate proposal to compare against floats.
unums/posits are obviously going to entirely replace floats for computing reals in the future. It's a no-brainer.
"The standard will be a lot like RISC-V."
Out of context, that plain reads wrongly. He is talking about the delivery/copyright model rather than the technology/architecture.
Sounds like a nice addition to P3 if you ask me.
It'll certainly be interesting to compare against the CORDIC for logic count. Not that he gave any figures but it sounds like it might be that effective.