
Idea: "math coprocessor" using a cog

mahjongg Posts: 141
edited 2007-02-10 20:31 in Propeller 1
The thread "Multiplication on 24bits" gave me an idea, perhaps a bit radical, but let me share it with you.
Why not write a library of math/floating-point subroutines that fills a cog, and use that cog as a kind of "math co-processor"?

Other cogs, or perhaps Spin via some kind of calling routine, could use this "math co-processor" when needed by feeding it the variables and the desired mathematical function (placing them in specified main RAM locations) and then signaling the math cog that it has work to do with a semaphore. When the "math cog" finishes its work, it could place the result in the same variables and signal that it has finished the calculations with another semaphore.
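
A minimal sketch of what such a mailbox interface could look like (the layout, the operation codes, and the shift-and-add multiply below are illustrative assumptions, not an existing object; a plain command word is used as the "work to do"/"finished" flag rather than a hardware lock):

CON
  #1, CMD_MUL32, CMD_DIV32, CMD_SQRT          ' example operation codes (assumed)

VAR
  long  command                               ' 0 = idle; a caller writes an opcode here
  long  operand1, operand2                    ' inputs, written by the caller
  long  answer                                ' output, written by the math cog

PUB Start
  cognew(@mathcog, @command)                  ' launch the "math cog", passing the mailbox address

PUB Multiply(a, b) : r
  operand1 := a                               ' place the operands in hub RAM
  operand2 := b
  command  := CMD_MUL32                       ' signal the math cog that there is work to do
  repeat while command                        ' the math cog clears the command word when it is done
  r := answer

DAT
              org     0
mathcog       mov     mboxptr, par            ' PAR holds the hub address of the mailbox
poll          rdlong  opcode, mboxptr wz      ' wait for a nonzero command code
        if_z  jmp     #poll
              mov     t1, mboxptr
              add     t1, #4
              rdlong  x, t1                   ' fetch operand1
              add     t1, #4
              rdlong  y, t1                   ' fetch operand2
              mov     acc, #0                 ' only unsigned multiply is shown; a real cog would dispatch on opcode
:mul          shr     y, #1 wc, wz            ' shift-and-add multiply, low 32 bits kept
        if_c  add     acc, x
              shl     x, #1
        if_nz jmp     #:mul
              add     t1, #4
              wrlong  acc, t1                 ' store the result
              wrlong  zero, mboxptr           ' clear the command word to signal "finished"
              jmp     #poll

zero          long    0
mboxptr       res     1
opcode        res     1
x             res     1
y             res     1
acc           res     1
t1            res     1

A Spin caller would simply use Multiply(a, b); another assembly cog could use the same hub locations directly with rdlong/wrlong, keeping the order "write operands, then set the command word, then wait for it to clear".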

Mahjongg

Comments

  • John R. Posts: 1,376
    edited 2007-02-10 02:59
    I'm not sure of the "bit size" (I believe 32 bit), but there are already "math objects" in the Object Exchange.

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    John R.
    Click here to see my Nomad Build Log
  • mahjongg Posts: 141
    edited 2007-02-10 03:35
    Yes, of course I realize that somebody has probably already written sets of "math objects"; that is nothing new. For most microprocessors you can find libraries of math subroutines.

    However, I propose something different from just linking a set of math objects with the rest of your assembler or Spin code.

    The core of my idea comes from my understanding that you can use a cog and some software to create "virtual hardware".
    I simply propose to extend that idea: use a cog and some software to create a "virtual math co-processor".

    Maybe the existing math object interface already works in a very similar way to my hypothetical "virtual co-processor", and my proposal has no practical merit. In that case I beg your pardon for my "harebrained" idea.

    Mahjongg.
  • Mike Green Posts: 23,101
    edited 2007-02-10 04:55
    This has already been done for the floating point library that is in the Object Exchange. Have a look at it.
  • mahjongg Posts: 141
    edited 2007-02-10 13:55
    Aha, okay I will,

    thanks.
  • Tracy Allen Posts: 6,660
    edited 2007-02-10 20:31
    The thing about integer math, as opposed to floating point, is that the code tends to be more application specific. Which values are positive and which negative? What are the ranges and the required precision? What constants are involved? What is the optimum form for the intermediate results? 32 bits is a lot to work with, but not nearly the range of floating point when the problem is "stiff" (possible small values in a denominator). All those questions lead to substantially different algorithms. The underlying principles are the same, but the code can look and perform quite differently. I think that is what makes it hard to write a general-purpose object for integer math.

    That is not to say that there are not some routines that come up in the same form time after time, for example the routine to calculate the frqx value for a counter (a sketch of that calculation follows this post). But that routine is not very long, and if the object is speed, you don't want to have to pass it off via the hub. Integer math can be much faster than floating point when optimized, and it is preferred when speed of execution is a major requirement, as it is in a lot of signal processing at audio frequencies.

    Integer math is often called "fixed point", but that is somewhat of a misnomer when applied to integer multiplication and division that involve implied fractions. The decimal point is decidedly not fixed, and part of the problem is to be sure it ends up where you want it. The problems of scaling and integer overflow must always be kept in mind.

    Cam Thomson wrote the excellent floating point library in the Object Exchange and also (as Micromega) designed the uM-FPU coprocessor. The name "floating point" is itself kind of a misnomer from a certain perspective, because the decimal point mostly stays in exactly the same place and the exponent absorbs the scale factor. Because of its even-handed treatment of positive and negative values and its huge range of scale factors, I believe floating point is much more amenable to treatment as a library object.

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    Tracy Allen
    www.emesystems.com
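
As a concrete illustration of the scaling Tracy describes, here is a minimal sketch (not taken from any particular object) of the classic bit-at-a-time routine for computing a counter's frqx value, frq = freq * 2^32 / clkfreq, with no 64-bit intermediate. It assumes freq is less than clk and that clk is no more than about 2^30 (comfortably true for any Propeller clock), so Spin's signed => comparison stays valid:

PUB CalcFrq(freq, clk) : frq
'' Long division of freq * 2^32 by clk, developing one quotient bit per pass
  repeat 32
    freq <<= 1                              ' bring down the next (zero) bit of the dividend
    frq <<= 1
    if freq => clk                          ' restoring-division step
      freq -= clk
      frq++

For example, with an 80 MHz clock, CalcFrq(1_000, 80_000_000) returns 53_687; an NCO-mode counter adding that value to PHSx on every clock produces roughly a 1 kHz output, since the output frequency is clkfreq * frq / 2^32. The same routine shows where the caveats above bite: if freq is not smaller than clk, the 32-bit result overflows and the scaling has to be rearranged.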