
John Gustafson presents: Beyond Floating Point – Next Generation Computer Arithmetic

I have posted about John's "UNUM" replacement for Floating Point before.
He recently did a presentation on his latest work (a video is available at the link below).

Here is the description for the presentation:
“A new data type called a “posit” is designed for direct drop-in replacement for IEEE Standard 754 floats. Unlike unum arithmetic, posits do not require interval-type mathematics or variable size operands, and they round if an answer is inexact, much the way floats do. However, they provide compelling advantages over floats, including simpler hardware implementation that scales from as few as two-bit operands to thousands of bits. For any bit width, they have a larger dynamic range, higher accuracy, better closure under arithmetic operations, and simpler exception-handling. For example, posits never overflow to infinity or underflow to zero, and there is no “Not-a-Number” (NaN) value. Posits should take up less space to implement in silicon than an IEEE float of the same size. With fewer gate delays per operation as well as lower silicon footprint, the posit operations per second (POPS) supported by a chip can be significantly higher than the FLOPs using similar hardware resources. GPU accelerators, in particular, could do more arithmetic per watt and per dollar yet deliver superior answer quality.”


http://insidehpc.com/2017/02/john-gustafson-presents-beyond-floating-point-next-generation-computer-arithmetic/

J

Comments

  • jmg Posts: 15,173
    Certainly very interesting.
    If he claims it is smaller in silicon, it should also be smaller as a software library?

    Are there any Posit libraries for 8-bit or 32-bit MCUs that can be compared with existing floats?

  • Tor Posts: 2,010
    Is there a writeup somewhere? I couldn't find one. I intensely dislike having to watch a video to get information; it's such an extremely slow and inefficient method for info transfer.
  • Heater. Posts: 21,230
    He lost me in the first two minutes.

    He claims the scalar product of the vectors (3.2e7, 1.0, -1, 8.0e7) and (4.0e7, 1.0, -1, -1.6e7) comes out as 0 using 32 and 64 bit IEEE floats.

    It does not. A quick test in JavaScript yields the correct result:
    > (3.2e7 * 4.0e7) + (1.0 * 1.0) + (-1 * -1) + (8.0e7 * -1.6e7)
    2
    
    The statement that "posits never overflow to infinity or underflow to zero" makes no sense to me. My conviction is that no matter how you represent numbers if they get big enough your machine cannot physically represent them due to size limitations. Unless you are working symbolically like a mathematician. Which I find unlikely and is not going to yield actual numbers to work with anyway.

    If there is no infinity or NaN then that probably breaks all our software.

    Guess I have to watch the whole vid now...
  • Tor Posts: 2,010
    Good old 'bc' too..
    $ echo "(3.2*10^7*4.0*10^7)+(1.0*1.0)+(-1*-1)+(8.0*10^7*-1.6*10^7)" | bc -l
    2.00
    
  • and good ol' python too -
    >>> (3.2e7 * 4.0e7) + (1.0 * 1.0) + (-1 * -1) + (8.0e7 * -1.6e7)
    2.0
    

    I was watching it when it was first posted but, like many, I'd rather have the document. Anyway, what's stopping him from implementing his ideas in silicon, via the FPGA route?
  • Heater. Posts: 21,230
    I guess he is more of a mathematician and software guy than a hardware designer. Looks like since he came up with the idea he has been working on showing how it compares to old floats.

    It was mentioned that some company is building a chip that uses these new floats.

    His "assistant" in the video demos a software implementation of these numbers by defining the as type and operators in the Julia language. I would think that is just a few steps away from turning into HDL.

    I like the suggestion to call them "Sigmoidal numbers". Sounds so cool, I'd use Sigmoidal numbers any day.

  • Heater. Posts: 21,230
    Are we missing a point about that scalar product? Did he make a mistake? What is going on with that?
  • jmg Posts: 15,173
    Heater. wrote: »
    .. Looks like since he came up with the idea he has been working on showing how it compares to old floats.

    He also looks to have tweaked/changed it, with unum1, unum2 and Posit, which is not so good for someone turning this into silicon, as where are they left when Posit 2 arrives?

    I would have expected a coded version of this, as proof.
    Preferably one where longhand math ops are used, not any native floating point, to better compare a software Posit with a software Float.


  • Heater. Posts: 21,230
    I did wonder how many number systems he was going to come up with.

    Perhaps that's not a problem. There were dozens of float, decimal and fixed-point formats in use in all kinds of computers before the IEEE standard arrived. Why not explore other possibilities? Today's technology and advances in knowledge make solutions practical that previously were not. At some point a big player like Nvidia or Intel may take this seriously and adopt it. Or perhaps some outsider, like the RISC-V crowd. At that point a new standard might emerge.

    There is a coded proof. Software implementation that is. Using only integer arithmetic. It's demoed in the video. As are lots of comparisons with regular float.
  • jmg wrote: »
    He also looks to have tweaked/changed it, with unum1, unum2 and Posit, which is not so good for someone turning this into silicon, as where are they left when Posit 2 arrives?

    He does say that the UNUM (1 & 2) were not ideal for hardware as they used varying length numbers.
    Posits can be a set length and thus are much more practical to implement in actual hardware.

    j
  • MJB Posts: 1,235
    Heater. wrote: »
    Are we missing a point about that scalar product? Did he make a mistake? What is going on with that?

    this got me curious ...

    so I read a little and did a test ...
    1. One article mentions that the Intel FP implementation internally runs at 80 bits, not 64 or 32, which gives some breathing room -- if the compiler can optimize that way.

    2. So I tried a little in VBA (an Excel macro):
    a) In Excel itself the result is correct: 2
    b) So I went into VBA and forced the data types to Single / Double float:
    Private Sub CommandButton1_Click()
    Dim x(4) As Single
    Dim y(4) As Single
    Dim z As Single
    Dim w As Single   ' declare w too, so the split-up sum really is stored as Single
    
    x(1) = 32000000
    x(2) = 1
    x(3) = -1
    x(4) = 80000000
    
    y(1) = 40000000
    y(2) = 1
    y(3) = -1
    y(4) = -16000000
    
    z = x(1) * y(1) + x(2) * y(2) + x(3) * y(3) + x(4) * y(4)
    MsgBox z, , "Z"
    
    w = x(1) * y(1)
    w = w + x(2) * y(2)
    w = w + x(3) * y(3)
    w = w + x(4) * y(4)
    MsgBox w, , "W"
    
    End Sub
    
    Private Sub CommandButton2_Click()
    Dim x(4) As Double
    Dim y(4) As Double
    Dim z As Double
    Dim w As Double   ' declare w too, so the split-up sum really is stored as Double
    
    x(1) = 32000000
    x(2) = 1
    x(3) = -1
    x(4) = 80000000
    
    y(1) = 40000000
    y(2) = 1
    y(3) = -1
    y(4) = -16000000
    
    z = x(1) * y(1) + x(2) * y(2) + x(3) * y(3) + x(4) * y(4)
    
    MsgBox z, , "Z"
    
    w = x(1) * y(1)
    w = w + x(2) * y(2)
    w = w + x(3) * y(3)
    w = w + x(4) * y(4)
    MsgBox w, , "W"
    
    End Sub
    

    Result:
    The Z line is calculated in one statement, which probably gets optimized somehow, so both Z results (for single and for double) come out as 2 = correct.

    So I split up the scalar product and forced storage into a Single or Double variable, so the 80-bit FP will not help.
    Result:
    for the dimensioning as single I get W = 0 WRONG !!!
    for double I get the correct result ... W = 2
    whatever that means ....
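    (What that seems to mean: at the size of the first product, 1.28e15, one single-precision ULP is roughly 1.3e8, so adding 1 changes nothing, and the two big products then cancel to leave 0. A tiny C sketch of just that absorption step, assuming plain IEEE single precision:)
    #include <stdio.h>
    
    /* Why the Single case collapses: one float ULP at 1.28e15 is ~1.3e8,
       so big + 1.0f rounds straight back to big. */
    int main(void)
    {
        float big = 3.2e7f * 4.0e7f;                         /* 1.28e15, rounded to float */
        printf("%g\n", (double)(big + 1.0f) - (double)big);  /* prints 0 */
        return 0;
    }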

    So at least part of his statement does not seem to be correct in practice. But maybe he is more into theory.

    But he is right that just rearranging a formula gives a completely different result - sometimes .... bad enough ...


  • jmg Posts: 15,173
    MJB wrote: »
    ..
    So I split up the scalar product and forced storage into a Single or Double variable, so the 80-bit FP will not help.
    Result:
    for the dimensioning as single I get W = 0 WRONG !!!
    for double I get the correct result ... W = 2
    whatever that means ....

    Nice work.

    That means you need real care when testing algorithms on a PC and then porting to a microcontroller.
    If the PC can fail to show a flaw that exists when using a true 32-bit real, that makes the PC a lot less useful.

    Anyone try this on a RPi ?

  • Heater. Posts: 21,230
    jmg,

    I like to quote my old boss from 1980 something, when we were a team working on a real-time embedded system using a 16 bit processor with no float support, who said:

    "If you think you need floating point to solve the problem, you don't understand the problem. If you really do need floating point then you have a problem you do not understand"

    We used fixed point arithmetic on that project. Which really focuses the mind on number ranges and precision. Everything worked just fine.



  • jmg Posts: 15,173
    Heater. wrote: »
    jmg,

    I like to quote my old boss from 1980 something, when we were a team working on a real-time embedded system using a 16 bit processor with no float support, who said:

    "If you think you need floating point to solve the problem, you don't understand the problem. If you really do need floating point then you have a problem you do not understand"

    We used fixed point arithmetic on that project. Which really focuses the mind on number ranges and precision. Everything worked just fine.

    In 1980, he was probably right.

    These days, floating point comes built into low-cost MCUs, and here is a topical example of a PC used for development, then ported to a Prop, with automated tools. Nifty.

    https://forums.parallax.com/discussion/165149/new-compiler-for-writing-elev8-fc-float-math

  • jmg Posts: 15,173
    Heater. wrote: »
    .... At some point a big player like Nvidia or Intel may take this seriously and adopt it. Or perhaps some outsider, like the RISC-V crowd.

    Of course, that could already have happened,
    This recent news...
    https://www10.edacafe.com/nbc/articles/1/1482813/NVIDIA-Powers-New-Class-Supercomputing-Workstations-with-Breakthrough-Capabilities-Design-Engineering

    says
    " The GP100 provides more than 20 TFLOPS of 16-bit floating point precision computing"

    Looks like NVIDIA already have their own custom float here, and you can be sure that is speed and silicon optimised!!
  • Heater. Posts: 21,230
    edited 2017-02-08 01:17
    jmg,
    In 1980, he was probably right....These days, floating point comes built into low cost MCUs...
    What you are saying is a prime example of why my boss was right.

    Sure we have floating point support in hardware everywhere today.

    The point is that naive reliance on floating point to do the "right thing" without thinking it through can lead to all kinds of unexpected problems that are hard to understand.

    Hardware support only makes that disaster faster.

    You should watch the video linked in the opening post to get an idea about this. And/or read "What Every Computer Scientist Should Know About Floating-Point Arithmetic" https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html



  • Reinhard Posts: 489
    edited 2017-02-08 11:48
    Gnu Octave4.2.0 and Win7
    >> 3.2e7 * 4.0e7 + 1.0*1.0 + -1*-1 + 8.0e7 * -1.6e7
    ans =  2
    >> single(3.2e7) * single(4.0e7) + single(1.0)*single(1.0) + single(-1)*single(-1) + single(8.0e7) * single(-1.6e7)
    ans = 0
    >>
    

    The default data type is double.
  • Heater. Posts: 21,230
    Well, OK, using GCC on Win 10 the result is indeed 0 for 32-bit floats. However, 64-bit floats yield the correct result of 2. Contrary to the video.
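    For reference, a minimal C version of the same check (my own sketch, not code from the video), doing the dot product once in float and once in double:
    #include <stdio.h>
    
    /* Gustafson's dot-product example, evaluated in float and in double.
       On the setups reported in this thread the float sum prints 0 and the
       double sum prints 2 (x87 80-bit evaluation could change the former). */
    int main(void)
    {
        float  af[4] = {3.2e7f, 1.0f, -1.0f,  8.0e7f};
        float  bf[4] = {4.0e7f, 1.0f, -1.0f, -1.6e7f};
        double ad[4] = {3.2e7,  1.0,  -1.0,   8.0e7};
        double bd[4] = {4.0e7,  1.0,  -1.0,  -1.6e7};
    
        float  sf = 0.0f;
        double sd = 0.0;
        for (int i = 0; i < 4; i++) {
            sf += af[i] * bf[i];   /* the 1.28e15 products swamp the two +1 terms in float */
            sd += ad[i] * bd[i];
        }
        printf("float:  %g\ndouble: %g\n", sf, sd);
        return 0;
    }
    A 64-bit build normally evaluates float expressions with SSE, so the float case really is done in 32 bits.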
  • Rayman Posts: 14,643
    I remember doing x87 assembly a long time ago...

    I liked the idea of keeping all the variables for a calculation inside the FPU as 80-bit values and only bringing the answer out when it was done...

    80-bit precision makes it a lot harder to mess up...
  • Electrodude Posts: 1,657
    edited 2017-02-10 04:34
    I feel like anyone who actually implements posits in silicon would do it similarly to how the x87 does it, calculating more precision than necessary and then truncating values to fit into a posit only when storing them. The only difference would be that, in the case of posits, what the bits are used for will vary depending on the actual value instead of being fixed.
  • Heater. Posts: 21,230
    You can't do that with the posits proposal.

    x87 may work in 80 bits or whatever in the background. Other IEEE 754 implementations do not. As a result you can get different results for the same calculation on different machines.

    Basically, use of "float" and "double" in C is another of the C language's "undefined behaviors".

    The whole idea of posits is to make things consistent.


  • "If you think you need floating point to solve the problem, you don't understand the problem. If you really do need floating point then you have a problem you do not understand"
    We used fixed point arithmetic on that project.

    Sounds like something Chuck Moore would have said, and probably did, only worded differently.

    Rick
  • Heater. wrote: »
    my old boss from 1980 something... said:

    "If you think you need floating point to solve the problem, you don't understand the problem. If you really do need floating point then you have a problem you do not understand"

    I had a C (programming language) teacher at the Lowell Institute at MIT in the Spring of 1996 who said something like, "If you think floating point is the solution, you don't understand the problem." It's an overstatement, but still seems like a classic quote to me. I've had no luck searching for attribution over the years. Who made the original statement? @Heater: who was your "old boss from the 80's"?

  • Heater. Posts: 21,230
    I don't think it's an overstatement at all. As soon as you see someone writing:
    if (x == y)
    {
        bla();
    }
    
    where x and y are floats, in code review, you know you have a potentially catastrophic problem.

    That is the first newbie float mistake.
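    The usual defence is to compare against a tolerance instead. A minimal sketch (the relative tolerance here is just an illustrative choice, not a universal rule):
    #include <math.h>
    #include <stdbool.h>
    
    /* Compare floats against a relative tolerance instead of with ==.
       The 1e-6f factor is only an illustrative choice. */
    static bool nearly_equal(float x, float y)
    {
        return fabsf(x - y) <= 1e-6f * fmaxf(fabsf(x), fabsf(y));
    }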

    I'd like to say more about my "float story", but I need a nap....
  • Heater. Posts: 21,230
    edited 2017-04-03 12:32
    GlenKPeterson,
    @Heater: who was your "old boss from the 80's"

    My old boss from the early 1980s was John Stuck. He was a software project manager at Marconi Radar Systems. I was on his team building embedded software for a new phased-array 3D radar, the Martello S713. Like so:

    [photo: the Martello S713 3D radar]


  • Heater. Posts: 21,230
    jmg,
    If he claims it is smaller in silicon, it should also be smaller as a software library?
    I just discovered a certain Clément Guérin is working on a posit floating point library in C++: bfp - Beyond Floating Point:

    https://github.com/libcg/bfp

    The code looks short and sweet. Oddly, the ADD method is not implemented in there yet.

    I was just fishing around for such a thing because, now that I'm getting into Verilog, doing posits in Verilog would be a challenge.
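    For a feel of what an implementation has to deal with, here is a rough decode-only sketch in C (my own sketch, not code from the bfp repo). It assumes an 8-bit posit with es = 0 (es is a free parameter in the posit scheme; 0 keeps the sketch short) and it skips rounding and all arithmetic:
    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>
    
    /* Decode an 8-bit, es = 0 posit into a double. No rounding, no
       arithmetic, no es > 0 support. */
    static double posit8_to_double(uint8_t p)
    {
        if (p == 0x00) return 0.0;      /* the single zero pattern          */
        if (p == 0x80) return NAN;      /* the single "not a real" pattern  */
    
        int sign = (p & 0x80) ? -1 : 1;
        uint8_t bits = (sign < 0) ? (uint8_t)(-p) : p;  /* 2's complement if negative */
    
        /* Regime: a run of identical bits after the sign, then a terminator. */
        int i   = 6;                    /* bit position just below the sign */
        int r0  = (bits >> i) & 1;      /* value of the regime run          */
        int run = 0;
        while (i >= 0 && ((bits >> i) & 1) == r0) { run++; i--; }
        i--;                            /* skip the terminating bit         */
        int k = r0 ? run - 1 : -run;    /* regime value                     */
    
        /* With es = 0 there are no exponent bits; the rest is fraction.    */
        double frac = 1.0;
        for (double w = 0.5; i >= 0; i--, w *= 0.5)
            if ((bits >> i) & 1) frac += w;
    
        return sign * ldexp(frac, k);   /* value = sign * (1 + f) * 2^k     */
    }
    
    int main(void)
    {
        /* Expect 1 2 0.5 -1 for these bit patterns (es = 0). */
        printf("%g %g %g %g\n", posit8_to_double(0x40), posit8_to_double(0x60),
                                posit8_to_double(0x20), posit8_to_double(0xC0));
        return 0;
    }
    Most of the unusual hardware work would presumably be in that variable-length regime field: a leading-run count plus a shift, before anything float-like even starts.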

