
How do you do exponential math on the Propeller?

Don M Posts: 1,652
edited 2014-03-15 09:14 in Propeller 1
Let's say:

x := 3
y := 5
z := answer

How would you do z := x^y in Spin? (3^5 = 243)

Comments

  • Don M Posts: 1,652
    edited 2014-03-09 13:41
    I tried this:
      x := 3      
      y := 5      
      z := x
    
      repeat y
        z := z * x
    
      ser.str(string(13, 13, "Z = "))
      ser.dec(z)
    
    

    But it doesn't work. I get Z = 729
  • Don M Posts: 1,652
    edited 2014-03-09 13:44
    Ahh.... I found the mistake.
      x := 3      
      y := 5      
      z := x
    
      repeat y - 1
        z := z * x
    
      ser.str(string(13, 13, "Z = "))
      ser.dec(z)
    
    

    This works.

    Is this the only way to do this type of math on the Prop?
  • JonnyMac Posts: 9,107
    edited 2014-03-09 14:50
    You might try a floating point object, but that would require a cog. If you stick with standard Spin, you might want to package the code like this:
    pub int_y2x(y, x) | r
    
    '' Returns y^x
    
      if (y == 0)
        return 1
      elseif (y == 1)
        return y
      else
        r := y
        repeat x-1
          r *= y
    
      return r
    


    If you don't have large exponents, this is probably the easiest way to go.

    If you use F32, you can do it like this:
    pub fp_y2x(y, x)
    
    '' Returns y^x
    
      return fp.fround(fp.pow(fp.ffloat(y), fp.ffloat(x)))
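
    For completeness, the surrounding setup would look something like this (a sketch only, assuming the object file is saved as F32.spin; F32 runs its math engine in its own cog, so it must be started before the first call):
    OBJ
      fp : "F32"                    ' lonesock's floating-point driver (takes one cog)

    PUB demo | z1, z2
      fp.start                      ' launch the F32 cog before any floating-point calls
      z1 := int_y2x(3, 5)           ' 243, pure-Spin version
      z2 := fp_y2x(3, 5)            ' 243, via F32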
    
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2014-03-09 18:10
    You could also use the log and antilog tables in ROM:
    z = x^y
    log z = y * log x
    z = log^-1(y * log x)

    -Phil
  • Duane C. Johnson Posts: 955
    edited 2014-03-09 18:17
    Hi Jonny;

    How about some examples of using the built-in LOG/ANTILOG tables in Spin?
    I would really like to convert the procedures into a FemtoBasic function.

    Duane J
  • Duane C. Johnson Posts: 955
    edited 2014-03-09 18:32
    You beat me to it Phil;
    You could also use the log and antilog tables in ROM:
    z = x^y
    log z = y * log x
    z = log^-1(y * log x)

    -Phil
    But seriously, is there a good description of how to use these?
    The Propeller Manual appendix B discusses this in PASM but I didn't get very far.

    Duane J
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2014-03-09 18:42
    My reference for stuff like this is the "Propeller Guts" document (attached). 'Still in PASM, though.

    -Phil
  • JonnyMac Posts: 9,107
    edited 2014-03-09 20:41
    How about some examples of using the built-in LOG/ANTILOG tables in Spin?

    Not my strong suit, else I would have done it. Perhaps Tracy Allen -- who is in fact a math wizard -- will chime in with something neat.
  • Lawson Posts: 870
    edited 2014-03-10 15:34
    You can also borrow the code from Float Math Extended. It is Spin only and bases its floating-point Log2 and Exp2 functions on a table-driven successive-approximation subroutine which can be used on its own (it's similar to CORDIC in many ways). You will need to pre-scale inputs to between 1 and 2, just like the hub table, though. (Or just use the floating-point math.)

    Marty
  • lonesock Posts: 917
    edited 2014-03-10 17:14
    Jon, is there possibly a bug in this statement?
    if (y == 0)
        return 1
    
    I would have expected it to return 0.

    Note, if you expect the exponent to be very large, something like this can speed it up:
    PRI a2b_complex( a, b ) | c
      if b == 0 or a == 1
        return 1
      elseif b < 0
        return 0
      else
        result := a
        c := 1
        repeat while (c<<=1) =< b
          result *= result
        repeat b - (c >> 1)
          result *= a
    
    The break-even point is somewhere around 8, I think, so probably not generally that useful. [8^)

    Jonathan
  • Duane C. Johnson Posts: 955
    edited 2014-03-10 18:02
    lonesock wrote: »
    Jon, is there possibly a bug in this statement?
    if (y == 0)
        return 1
    
    I would have expected it to return 0.

    Nope.
    (AnyNumber)^0 = 1
    Try it on a scientific calculator. Even 0^0, -0^0, 1^0, -1000^0 all = 1

    Duane J
  • Tracy Allen Posts: 6,664
    edited 2014-03-10 19:04
    Thanks for the wizard badge Jon. I doubt it though, having been at the Big U., I know my place as a mortal. I do enjoy certain oddball integer math problems.

    There are a couple of stumbling blocks mortals encounter in a direct attack on the HUB tables for exp and log. The tables are in base 2, but problems are often framed or answered in a different base. That is handled by multiplying by a fractional constant at the right point in the calculation. If doing it in integer math, one has to be familiar with how to do fractional multiplication with something like the ** operator.

    Second, the HUB tables cover only one octave (power of 2) range. The range is input 0–1 and output 1–2 for exp2, and vice versa for log2; they are inverse functions. Not to worry: the functions are multiplicative by octaves, so larger inputs and outputs only need to be scaled by factors of two in a wrapper. Not only that, the values from 0–1 are represented as implied fractions, 0/2048 to 2047/2048 for the inputs, 11 bits as the input address. It is the numerator you are reading out of the table. Outputs are implied 16-bit fractions, from 0/65536 to 65535/65536.

    Attached is a simple demo that uses the HUB tables for exp2 and log2, over the restricted 0-2 range.

    Methods using log and exp are not so great for integer exponentiation. There will be errors that accumulate. Better just multiply it out, up to a point.

    In floating point, all that scaling, interpolation, and base conversion is taken care of behind the scenes. The wizardly CORDIC-like algorithm Lawson used for Float Math Extended (Spin) is much more accurate than the linear HUB table lookup. The same notions apply though, about the range reduction and scaling.
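
    To make the table mechanics concrete, here is a minimal integer-only sketch of such a wrapper (not the attached demo, just the gist; it assumes the documented ROM table addresses, $C000 for log2 and $D000 for anti-log, and works in 16.16 fixed point, so expect the last count or so to wobble):
    PUB pow_rom(x, y) : z
    '' x^y via the ROM log2/anti-log2 tables (x => 1, y => 0, result must fit in 32 bits)
    '' e.g. pow_rom(3, 5): log2fix(3) ~ 1.585 * 65536, times 5 ~ 7.925 * 65536, exp2fix of that ~ 243
      z := exp2fix(y * log2fix(x))

    PRI log2fix(x) : r | e
    '' log2(x) as a 16.16 fixed-point value (x must be 1 or greater)
      e := >|x - 1                                          ' integer part = bit position of the MSB
      r := (e << 16) + word[$C000 + ((((x << (31 - e)) >> 20) & $7FF) << 1)]  ' 11-bit mantissa indexes the log table

    PRI exp2fix(v) : r | e
    '' 2^v for a non-negative 16.16 fixed-point v, rounded to an integer
      e := v >> 16                                          ' integer part of the exponent
      r := $1_0000 + word[$D000 + (((v >> 5) & $7FF) << 1)] ' 1.16 mantissa from the anti-log table
      if e => 16
        r <<= e - 16
      else
        r := (r + (1 << (15 - e))) >> (16 - e)              ' scale down by octaves, with rounding

    Base conversion is the fractional multiply mentioned above, e.g. a natural log would be log2fix(x) ** 2_977_044_472, where the constant is ln(2) * 2^32 and ** is Spin's upper-32-bit multiply.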
  • lonesock Posts: 917
    edited 2014-03-10 22:14
    Nope.
    (AnyNumber)^0 = 1
    Try it on a scientific calculator. Even 0^0, -0^0, 1^0, -1000^0 all = 1

    Duane J
    That's what threw me: he's computing y^x, not x^y. So if you pass in 0^5, the code would return 1. I hadn't remembered that 0^0 was also defined as 1, though, thanks.

    Jonathan
  • JonnyMac Posts: 9,107
    edited 2014-03-10 22:20
    he's computing y^x

    Yep, because the button on my trusty HP 20S calculator is marked y^x ;)
  • Erlend Posts: 612
    edited 2014-03-11 02:27
    0^0 = 1
    Zero multiplied by itself zero times. Can anyone explain why that equals 1? I too have been taught the answer is 1, but I never understood why that is so. My calculator (RealCalc App) actually returns "Error" when I do this calculation.

    Erlend
  • Martin_H Posts: 4,051
    edited 2014-03-11 06:30
    Erlend wrote: »
    0^0 = 1
    Zero multiplied by itself zero times. Can anyone explain why that equals 1? I too have been taught the answer is 1, but I never understood why that is so. My calculator (RealCalc App) actually returns "Error" when I do this calculation.

    I thought it was undefined because there are two answers:

    x ^ 0 = 1

    0 ^ x = 0

    So 0 ^ 0 could be either 0 or 1.
  • Tracy Allen Posts: 6,664
    edited 2014-03-11 08:41
    It is a matter of continuity in the limit of y^x as y-->0+ or x-->0+ or both. It equals 1 no matter how small, so mathematically you generally want the function to be well-behaved at the limit. That is not to say that it has to be that way. It does get dicey when the numbers go negative.
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2014-03-11 09:06
    I guess it depends on which limit:
    lim y->0+ (y^0) = 1, but

    lim x->0+ (0^x) = 0

    I think it's just a matter of convention that the first is the one that's accepted for interpreting 0^0, as either could be viewed as correct.

    To illustrate further the nastiness that occurs right at zero, try taking the log of 0^0. You get zero times negative infinity.

    -Phil
  • Martin_H Posts: 4,051
    edited 2014-03-11 09:10
    It is a matter of continuity in the limit of y^x as y-->0+ or x-->0+ or both. It equals 1 no matter how small, so mathematically you generally want the function to be well-behaved at the limit. That is not to say that it has to be that way. It does get dicey when the numbers go negative.

    I completely forgot about negative exponents which just makes things worse.

    0 ^ x where x < 0 is essentially 1 / (0 ^ -x), which is division by zero for any non-zero x. So shouldn't 0 ^ x be undefined for x < 0?

    Which brings us back to 0 ^ 0. I thought one of the many reasons why division by zero is undefined is that it approaches two separate limits (+/- infinity) when approached from opposite sides of the x axis. So 0 ^ x also has two different values when approached from opposite sides as well. That seems to imply to me that 0 ^ 0 is also undefined.

    Update: Holy smokes, we're not the only ones confused. Here are two different answers to the same question.

    http://www.math.hmc.edu/funfacts/ffiles/10005.3-5.shtml

    http://www.math.utah.edu/~pa/math/0to0.html
  • Tracy Allen Posts: 6,664
    edited 2014-03-11 10:50
    A condition on the limit would be that both x and y are positive numbers. You don't consider the exact values 0^x or y^0, only the limit as either or both tend toward zero+.
  • Erlend Posts: 612
    edited 2014-03-13 02:58
    But are we really free to talk about this as if it were an abstract, esoteric matter? Mathematics describes reality; there is by definition no mathematics that lives 'outside' nature/reality. So, we cannot say that some result is such and such by convention, because the result must tie in with reality.
    I am close to illiterate wrt math. I hate that. I regret I did not realise that math is the programming language until it was so late in life that it is very hard to learn it. I keep reading the Feynman Lectures, but lots of it stops at almost-grasping level. Frustrating..

    Erlend
  • lonesock Posts: 917
    edited 2014-03-13 09:50
    OK, just to throw some more dust in the air, here are 3 different versions of varying complexity. For all of these I choose the following rules:
    1. 1^any power = 1
    2. any^0 = 1
    3. 0^any power = 0
    4. any ^ negative number = 0
    5. compute
    The rules happen basically in order, so 0^0 will return 1; on my TI-85 it returns 'ERROR 04 DOMAIN' [8^). Here is the code:
    PRI a2b_simple( a, b )
      if (1 == a) or (not b)
        ' 1^any or any^0 == 1 
        return 1
      elseif a and b > 0
        ' if a==0 or b<0, the result will fall through to 0, and we already handled the b==0 case       
        result := a
        repeat --b
          result *= a
      ' else it will return 0
          
    PRI a2b_high_bit( a, b ) | c
      if (1 == a) or (not b)
        ' 1^any or any^0 == 1
        return 1
      elseif (b > 0) and a
        ' if a==0 or b<0, the result will fall through to 0, and we already handled the b==0 case
        result := a
        c := >|b - 1              ' how many times should I square the result?
        b -= |<c                  ' and how many multiplications are left?       
        repeat c
          result *= result
        repeat b
          result *= a
    
    PRI a2b_log_time( a, b )
      ' 1^any or any^0 == 1
      if (1 == a) or (not b)    
        return 1
      ' if a==0 or b<0, the result will fall through to 0, and we already handled the b==0 case
      elseif (b > 0) and a   
        result := 1
        repeat 
          if b & 1
            result *= a
          b >>= 1
          a *= a
        while b
    
    The simple version just loops b times...the runtime is O( b ).

    The 'a2b_high_bit' version optimizes the high bit of b, and once the power is 8 or higher it is faster than the simple case (about 2x faster on average). It is extremely quick when the power b is actually a power of 2 (faster than the log version).

    The 'a2b_log_time' version is O( log2 N ), but has more overhead. Once the power is > 12 it is faster than the simple version. And on average it is also much faster than the high-bit version. If I was coding Exp in PASM, this is the version I would choose.
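
    As a sanity check on the log-time version, here is a trace of a2b_log_time(3, 5), which is just binary exponentiation driven by the bits of b:
    ' b = %101, result starts at 1
    '   bit 0 set   -> result := 1 * 3  = 3,    a := 3 * 3   = 9
    '   bit 1 clear -> result unchanged,        a := 9 * 9   = 81
    '   bit 2 set   -> result := 3 * 81 = 243,  a := 81 * 81 = 6561, b now 0 -> done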

    Jonathan
  • Heater. Posts: 21,230
    edited 2014-03-14 00:28
    Erlend,
    Mathematics describes reality...
    I think you will find that that is a rather philosophical statement. Have a google for "Does mathematics describe the real world" and you will find it has been under debate since Plato.

    It's quite possible to set up systems of mathematical axioms that do not describe any reality we are aware of but are self-consistent nonetheless.
    ...there is by definition no mathematics that lives 'outside' nature/reality.
    I don't know whose definition you are referring to but I'll go for that. I see that Maths happens in the brains of mathematicians, that is humans, and humans are made of the stuff of the real world. Ergo Maths is inside reality, so to speak.
    So, we cannot say that some result is such and such by convention, because the result must tie in with reality.
    There are some problems with this idea. As far as I can tell maths is quite capable of describing things that do not or cannot exist.

    As a simple case try drawing a graph of y = 1/x.

    For all the positive numbers that goes nicely: as x gets bigger and bigger, y gets smaller and smaller.

    For all the negative numbers it goes well: as x gets more and more negative, y gets smaller and smaller but is negative.

    But as you approach zero from the positive side things run away to plus infinity. As you approach zero from the negative side things run away to minus infinity. So what do we say in the middle there at y = 1 / 0 ? Is it positive or negative? And what is this infinity anyway? We have mathematically constructed a thing that cannot be, physically or logically by the looks of it.

    In physics we have things like the gravitational force between two objects: F = G * m1 * m2 / r^2

    Sort of works, since the days of Newton. And looks a bit like our simple 1 / x thing above. But what does it mean when the distance "r=0"? An infinite amount of force holding two together? A black hole? Can we even have two masses at the same point in space? Does it ultimately make any sense to speak of a mass being "at a point"?

    And what about negative values of G or m? Does that make any sense?

    This simple mathematical model clearly tries to describe a lot more than the "reality" we see out there.

    So what about that 0^0 thing. Seems we cannot look to "reality" to tell us what it should be. We have to decide for ourselves: http://www.askamathematician.com/2010/12/q-what-does-00-zero-raised-to-the-zeroth-power-equal-why-do-mathematicians-and-high-school-teachers-disagree/
  • Erlend Posts: 612
    edited 2014-03-14 04:03
    Heater,

    This subject fascinates me, but I do not pretend to understand half of it. I regularly google - or read a book on - math/reality/quantum/etc, because I enjoy it when my brain has to strain to understand. So, I do know that Plato and all the others do not agree on all the 'truth'.
    Often I get a sort of I-almost-understand-it-now experience, but strive (forever) to get the final Eureka. One example of this is Goedel's theorem, which aims to prove that a computer in principle cannot become creative. I feel I understand the reasoning enough to agree, but not enough to explain it to others. (And what it therefore proves too, is that the brain is not a computing device only, but that there is something more, something unknown).
    There is also a 'proof' that all math must be reflected in reality, I cannot recall by whom, and it is hard to understand, and it by no means reflects a consensus, but it sort of feels right to me.

    Maybe I suffer from stack overflow?

    Erlend
  • Heater. Posts: 21,230
    edited 2014-03-14 05:58
    Erlend,

    I do sympathize. I spent four years studying Physics. It was kind of a shock to find that it was wall-to-wall mathematics every day with the odd break for an actual lab experiment or project. Very different from school-time science class. Most of it went over my head; a few gold nuggets hit home. Most of that has been forgotten since.

    Anyway we are in good company because "Mathematicians are chronically lost and confused", or at least according to this one http://j2kun.svbtle.com/mathematicians-are-chronically-lost-and-confused

    P.S. Re: Goedel ... therefore proves too, is that the brain is not a computing device only, but that there is something more, something unknown.

    There are many that argue against this interpretation of Goedel's proofs.

    P.P.S. Do check out "Numberphile" (http://www.numberphile.com/) - I think you'll love it.
  • Erlend Posts: 612
    edited 2014-03-15 02:03
    Heater, thanks for the links, Numberphile is really addictive! With stuff like this available, how can it be that the kids at school struggle so much to learn math? In my time we were at the mercy of the teacher, with no other option available.

    As regards the brain, I firmly believe that Nature has a few more tricks up her sleeve than von Neumann. In general, science people should be more humble to the fact that there is lots and lots more to learn - and unlearn. Bodes well for an exciting future, I think.

    Erlend
  • Heater. Posts: 21,230
    edited 2014-03-15 02:55
    Erlend,
    science people should be more humble to the fact that there is lots and lots more to learn
    I have read statements like that a lot and I just don't get it. It usually comes out when people are complaining that scientists aren't taking their crack-pot theories about aliens or ghosts or God seriously.

    In my experience of "science people" they are very aware of the fact that the human race is only just lifting the corner of the rug when it comes to understanding the universe. They will say that making some discovery or finding an answer to a question always makes more questions and exposes more of what we don't know. They are very humble that way. See for example that article on the confused mathematicians above.

    It's the case that "The more you know, the more you know that there is more you don't know" :)
  • Erlend Posts: 612
    edited 2014-03-15 09:14
    Heater,

    Apologies - I stepped into the generalising trap - my comment is only relevant to the (probably relatively few) scientists who loudly deny any questioning of established truth. I am not part of the crack-pot community (I hope), but it so saddens me when someone claims e.g. that the Grand Unified Theory is just around the corner, and that it will be the final one. Unfortunately the crack-pots/hyper-religious/alternative-energy people do serious damage by derailing what could otherwise have been an open-minded and constructive discussion.
    I am not a scientist, but my job is working with advanced knowledge, and I have seen so many times how important it is to allow for asking questions which go across disciplines and knowledge levels, even if they may appear disrespectful or stupid.
    :smile:
    Erlend