
thej

I have posted about John Gustafson's "unum" replacement for floating point before.

He recently gave a presentation on his latest work (the video is available at the link below).

Here is the description for the presentation:

“A new data type called a “posit” is designed for direct drop-in replacement for IEEE Standard 754 floats. Unlike unum arithmetic, posits do not require interval-type mathematics or variable size operands, and they round if an answer is inexact, much the way floats do. However, they provide compelling advantages over floats, including simpler hardware implementation that scales from as few as two-bit operands to thousands of bits. For any bit width, they have a larger dynamic range, higher accuracy, better closure under arithmetic operations, and simpler exception-handling. For example, posits never overflow to infinity or underflow to zero, and there is no “Not-a-Number” (NaN) value. Posits should take up less space to implement in silicon than an IEEE float of the same size. With fewer gate delays per operation as well as lower silicon footprint, the posit operations per second (POPS) supported by a chip can be significantly higher than the FLOPs using similar hardware resources. GPU accelerators, in particular, could do more arithmetic per watt and per dollar yet deliver superior answer quality.”

http://insidehpc.com/2017/02/john-gustafson-presents-beyond-floating-point-next-generation-computer-arithmetic/

J


## Comments

If he claims it is smaller in silicon, it should also be smaller as a software library?

Are there any posit libraries for 8-bit or 32-bit MCUs that can be compared with existing floats?


He claims the scalar product of the vectors (3.2e7, 1.0, -1, 8.0e7) and (4.0e7, 1.0, -1, -1.6e7) comes out as 0 using 32- and 64-bit IEEE floats.

It does not. A quick test in JavaScript yields the correct result:
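Something along these lines (a reconstruction, since the original snippet was not preserved; JavaScript numbers are IEEE 754 doubles, shown here as TypeScript):

```typescript
// Gustafson's example dot product, in plain JavaScript/TypeScript
// numbers, which are IEEE 754 doubles.
const x = [3.2e7, 1.0, -1, 8.0e7];
const y = [4.0e7, 1.0, -1, -1.6e7];

let sum = 0;
for (let i = 0; i < x.length; i++) {
  sum += x[i] * y[i];
}
console.log(sum); // 2 -- the exact answer; 1.28e15 and its neighbours
                  // are exactly representable in a 53-bit significand
```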

The statement that "posits never overflow to infinity or underflow to zero" makes no sense to me. My conviction is that, no matter how you represent numbers, if they get big enough your machine cannot physically represent them, due to size limitations. Unless you are working symbolically like a mathematician, which I find unlikely, and which is not going to yield actual numbers to work with anyway.

If there is no infinity or NaN then that probably breaks all our software.
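To make that concrete: a few IEEE 754 behaviours that deployed code quietly relies on (JavaScript/TypeScript semantics).

```typescript
// IEEE 754 exception values that a lot of existing code depends on.
console.log(1 / 0);         // Infinity: division by zero
console.log(1e308 * 10);    // Infinity: overflow
console.log(0 / 0);         // NaN
console.log(NaN === NaN);   // false -- the classic self-inequality test for NaN
console.log(1e-320 / 1e10); // 0: underflow all the way to zero
```

Posits instead saturate at the largest/smallest representable values and fold every non-real case into one exception pattern (described as unsigned infinity in the early posit write-ups, "NaR" in later ones), so code like the NaN self-test above would indeed need rework.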

Guess I have to watch the whole vid now...

I was watching it when it was first posted, but like many I'd rather have the document. Anyway, what's stopping him from implementing his ideas in silicon via the FPGA route?

It was mentioned that some company is building a chip that uses these new floats.

His "assistant" in the video demos a software implementation of these numbers by defining them as a type, with operators, in the Julia language. I would think that is just a few steps away from turning into HDL.

I like the suggestion to call them "Sigmoidal numbers". Sounds so cool, I'd use Sigmoidal numbers any day.
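For the curious, here is a minimal decode-only sketch of that kind of implementation, for an 8-bit posit with es = 0, in TypeScript rather than Julia (posit8ToNumber is just an illustrative name; encoding, rounding, and arithmetic are where the real work is):

```typescript
// A minimal decoder for an 8-bit posit with es = 0: sign bit, then a
// run-length-encoded "regime", then fraction bits with an implied leading 1.
function posit8ToNumber(p: number): number {
  p &= 0xff;
  if (p === 0x00) return 0;             // the unique zero
  if (p === 0x80) return NaN;           // the single exception value
  const negative = (p & 0x80) !== 0;
  if (negative) p = (0x100 - p) & 0xff; // negatives are two's complement encodings
  // Regime: count the run of identical bits after the sign bit.
  const r0 = (p >> 6) & 1;
  let i = 6;
  let run = 0;
  while (i >= 0 && ((p >> i) & 1) === r0) { run++; i--; }
  i--;                                  // skip the terminating (opposite) bit
  const k = r0 === 1 ? run - 1 : -run;  // regime exponent; scale is 2^k since es = 0
  // Remaining bits are the fraction, with an implied leading 1.
  let frac = 1;
  for (let w = 0.5; i >= 0; i--, w /= 2) {
    if ((p >> i) & 1) frac += w;
  }
  const magnitude = Math.pow(2, k) * frac;
  return negative ? -magnitude : magnitude;
}

// Spot checks against known posit(8,0) encodings:
console.log(posit8ToNumber(0x40)); // 1
console.log(posit8ToNumber(0x50)); // 1.5
console.log(posit8ToNumber(0x7f)); // 64 (maxpos: saturation, no overflow)
console.log(posit8ToNumber(0x01)); // 0.015625 (minpos: no underflow to zero)
console.log(posit8ToNumber(0xc0)); // -1
```

The spot checks show the behaviour claimed in the talk description: the largest magnitude decodes to maxpos rather than infinity, and the smallest nonzero pattern decodes to minpos rather than underflowing to zero.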

He also looks to have tweaked/changed it, with unum 1, unum 2, and now posits, which is not so good for someone turning this into silicon: where are they left when posit 2 arrives?

I would have expected a coded version of this, as proof.

Preferably one where longhand math operations are used, not any native floating point, to better compare a software posit against a software float.

Perhaps that's not a problem. There were dozens of float, decimal, and fixed-point formats in use in all kinds of computers before the IEEE standard arrived. Why not explore other possibilities? Advances in technology and knowledge make solutions practical that previously were not. At some point a big player like NVIDIA or Intel may take this seriously and adopt it. Or perhaps some outsider, like RISC-V. At that point a new standard might emerge.

There is a coded proof: a software implementation, using only integer arithmetic. It's demoed in the video, as are lots of comparisons with regular floats.

He does say that the unums (1 & 2) were not ideal for hardware, as they used variable-length numbers.

Posits can be a fixed length and thus are much more practical to implement in actual hardware.

j

This got me curious ...

So I read a little and did a test ...

1. One article mentions that Intel's FP implementation internally runs on 80 bits, not 64 or 32, which gives some breathing room, if the compiler can optimize.

2. So I tried a little in VBA (an Excel macro).

a) In Excel the result is correct: 2.

b) So I went into VBA and forced the data types to single/double float.

Result:

The Z line is calculated in one statement, which probably gets optimized somehow, so both Z values (for single and double) come out as 2, which is correct.

So I split up the scalar product and forced storage of each intermediate into a single or double variable, so the 80-bit FP will not help.

Result:

For the dimensioning as Single I get W = 0. WRONG!

For Double I get the correct result: W = 2.

whatever that means ....

So at least part of his statement does not seem to be correct in practice. But maybe he is speaking more theoretically.

But he is right that just rearranging a formula sometimes gives a completely different result ... bad enough ...
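For anyone wanting to reproduce this outside VBA: the same effect can be forced in JavaScript/TypeScript with Math.fround, which rounds a value to 32-bit single precision (a sketch mirroring the experiment described above):

```typescript
// The same dot product, but with every intermediate rounded to 32-bit
// single precision via Math.fround -- mimicking variables declared As Single.
const x = [3.2e7, 1.0, -1, 8.0e7];
const y = [4.0e7, 1.0, -1, -1.6e7];

let single = 0;
let dbl = 0;
for (let i = 0; i < x.length; i++) {
  single = Math.fround(single + Math.fround(x[i] * y[i]));
  dbl += x[i] * y[i];
}
console.log(single); // 0 -- the two +1 partial products fall below one
                     // single-precision ulp of 1.28e15 and are rounded away
console.log(dbl);    // 2 -- double precision recovers the exact answer
```

The two +1 partial products are smaller than one single-precision ulp of 1.28e15, so they vanish before the big terms cancel; in double precision every intermediate here happens to be exactly representable, and the 2 survives.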


Nice work.

That means you need real care when testing algorithms on a PC and then porting to a microcontroller. If the PC can fail to show a flaw that exists when using a true 32-bit real, that makes the PC a lot less useful.

Anyone try this on an RPi?

I like to quote my old boss from 1980-something, when we were a team working on a real-time embedded system using a 16-bit processor with no float support, who said:

"If you think you need floating point to solve the problem, you don't understand the problem. If you really do need floating point then you have a problem you do not understand"

We used fixed-point arithmetic on that project, which really focuses the mind on number ranges and precision. Everything worked just fine.
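For flavour, a minimal Q16.16 fixed-point sketch (a generic illustration, not the code from that project):

```typescript
// Minimal Q16.16 fixed point: 16 integer bits, 16 fractional bits,
// carried in an ordinary integer-valued number.
const FRAC_BITS = 16;
const ONE = 1 << FRAC_BITS;

const toFix = (x: number): number => Math.round(x * ONE);
const toNum = (q: number): number => q / ONE;

// Addition is plain integer addition; multiplication needs a rescale.
const add = (a: number, b: number): number => a + b;
const mul = (a: number, b: number): number => Math.round((a * b) / ONE);

const a = toFix(3.25);
const b = toFix(1.5);
console.log(toNum(mul(a, b))); // 4.875
console.log(toNum(add(a, b))); // 4.75
```

The range and resolution are fixed up front (roughly +-32768 in steps of 1/65536 here), which is exactly the part that focuses the mind.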

In 1980, he was probably right.

These days, floating point comes built into low-cost MCUs, and here is a topical example of a PC used for development, then porting to a Prop, with automated tools. Nifty.

https://forums.parallax.com/discussion/165149/new-compiler-for-writing-elev8-fc-float-math

9,0010Vote UpVote DownOf course, that could already have happened,

This recent news...

https://www10.edacafe.com/nbc/articles/1/1482813/NVIDIA-Powers-New-Class-Supercomputing-Workstations-with-Breakthrough-Capabilities-Design-Engineering

says:

"The GP100 provides more than 20 TFLOPS of 16-bit floating point precision computing."

Looks like NVIDIA already have their own custom float here, and you can be sure it is speed- and silicon-optimised!

Sure, we have floating point support in hardware everywhere today.

The point is that naive reliance on floating point to do the "right thing" without thinking it through can lead to all kinds of unexpected problems that are hard to understand.

Hardware support only makes that disaster faster.

You should watch the video linked in the opening post to get an idea about this. And/or read "What Every Computer Scientist Should Know About Floating-Point Arithmetic" https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html

Default data type is Double.

I liked the idea of keeping all the variables for a calculation inside as 80-bit and only bringing the answer out when it was done...

80-bit precision makes it a lot harder to mess up...

x87 may work in 80 bits or whatever in the background. Other IEEE 754 implementations do not. As a result you can get different results for the same calculation on different machines.

Basically, use of "float" and "double" in C is another of the C language's "undefined behaviors".

The whole idea of posits is to make things consistent.

Sounds like something Chuck Moore would have said, and probably did, only worded differently.

Rick