
integer or floating point LOG functions in spin?

Lawson Posts: 870
edited 2013-08-30 11:16 in Propeller 1
I need to convert the resistance of a thermistor to a temperature. This uses a natural log function. Since I'm not concerned about speed, I want to use a SPIN library to do the natural log. (integer fixed point or floating point is fine) I'd like to avoid having to make a thread-safe version of F32.

Hopefully someone has a version of "FloatMath.spin" upgraded to use the ROM log table, or a version of Chip's Cordic code adjusted to do LOG functions? (don't want to re-invent the wheel.) Alternatively, if someone has cubic-interpolation code for spin, I could live with a look up table.

Lawson

Comments

  • Tracy Allen Posts: 6,656
    edited 2013-07-20 12:25
    This is not Steinhart-Hart; rather, it's a Spin example using the hub table that I put together for a question in another thread. Temperature is proportional to the natural log, but there is no term in the natural log cubed.
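
    (For reference, the full Steinhart-Hart equation is 1/T = A + B*ln(R) + C*ln(R)^3; the beta-model form described here drops the cubed term: 1/T = 1/T0 + (1/beta)*ln(R/R0).)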
  • Tracy Allen Posts: 6,656
    edited 2013-07-20 12:34
    Here is another demo, purely to show how to read the hub log table and how to convert the result from base 2 to base e and base 10. It also shows the close relation between the bitlog function and the true log.

    The demo skips interpolation in the hub log table. (The floating point objects do the interpolation to 16 bits and stuff that into the 24 bit mantissa.) If you are going in with a 16 bit argument, 12 bits are used for the table lookup, and you can use the extra 4 bits for the interpolation. With a bit more calculation you could use a spline to squeeze out a little more.
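
    As a concrete outline of that lookup plus the 4-bit interpolation, here is a minimal Spin sketch (my own illustration, not the attached demo; the $C000 base address and 2048-word size of the ROM log table are from the Propeller manual):

    CON
      LOGTAB = $C000                                 ' P1 ROM base-2 log table, 2048 words

    PUB Log2_16(x) : r | e, i, lo, hi
    '' log2 of a 16-bit x > 0, returned in 16.16 fixed point.
      e := >|x - 1                                   ' bit position of the leading 1 (integer part)
      x <<= 31 - e                                   ' normalize so the leading 1 sits at bit 31
      i := (x << 1) >> 21                            ' next 11 bits index the table
      lo := word[LOGTAB][i]
      if i < $7FF
        hi := word[LOGTAB][i + 1]
      else
        hi := $1_0000                                ' table wraps up to log2(2.0) = 1.0
      r := (e << 16) + lo + (((hi - lo) * ((x >> 16) & $F)) >> 4)   ' 4-bit linear interpolation

    Base conversion is then one multiply: ln(x) = log2(x) * 0.693147 and log10(x) = log2(x) * 0.301030.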
  • Lawson Posts: 870
    edited 2013-07-22 10:57
    Thanks for the code Tracy. I'll work on integrating it and Chip's cordic code into float math today.

    The hub table should be plenty accurate for my first sensor. And for the higher-precision sensors, linear interpolation of the hub table works well enough because the hub table is so large. I'd only need cubic interpolation with a "small" table (i.e., <100 entries). That said, I'm still going to try the CORDIC first because it should be more accurate and more versatile.


    Lawson
  • Lawson Posts: 870
    edited 2013-07-22 16:18
  • Tracy Allen Posts: 6,656
    edited 2013-07-23 10:55
    I'll take a look later.

    Attached is an exponentiation program that demos the basics of integer math using the hub table. It calculates 2^power for power between 0 and 1 in 12 bits, where zero is 0/4096 and one is 4096/4096.

    For example 2^0.75 = 1.681792831 and the program reports 1.68179.

    The hub EXP table takes an argument in [0, 1) and returns its result in [1, 2).
    The hub LOG table takes an argument in [1, 2) and returns its result in [0, 1).
    Both tables are indexed with 11 bits. That works well with floating point, because the significand is a 23-bit integer representing a value from 0 to 1, with an implied 24th bit that is always 1, plus the exponent and sign. That representation fits right into the hub table lookup, although there is a terrific loss of precision.
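
    For concreteness, the core of that exponentiation is a one-line table read. A minimal Spin sketch (my names; $D000 is the ROM anti-log table base per the Propeller manual; no interpolation, so odd 12-bit arguments lose their bottom bit):

    CON
      ALOGTAB = $D000                                ' P1 ROM anti-log table, 2048 words

    PUB Exp2Frac(frac12) : r
    '' 2^(frac12/4096) in 16.16 fixed point, for frac12 = 0..4095.
      r := $1_0000 + word[ALOGTAB][frac12 >> 1]      ' table holds the fractional part of a result in [1,2)

    Exp2Frac(3072) comes back as roughly 1.68179 * $1_0000, matching the 2^0.75 example above.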

    If you just need the thermistor conversion, why not just do it in integer math?
  • Lawson Posts: 870
    edited 2013-07-23 14:03
    Well, I found the bug in my Alog2 function: I was using the table base address from the example code, and that address is wrong. So it turns out the log table makes a lousy anti-log table. I've also added interpolation to the fixed-point code. Updates attached. My initial testing shows up to 5 significant digits of accuracy as long as the input is big enough, though this degrades to about 4 significant digits if the input is slightly below a power of 2. (I.e., QAlog2(QLog2(1023.99)) has more than average error.)

    I'm not planning on using integer math, because the ADCs on 4 of my thermistors have 16-17 stable bits. To avoid aliasing, I'd like to carry at least 20-bits through the whole calculation. That's enough bits that keeping all the terms of the Steinhart-Hart equation within 32-bits will be a pain. Floating point already takes care of all that for me. (I'm tempted to make a version of FloatMath that works with un-packed floats for speed and precision on long calculations)

    Lawson

    P.S. Doh! The fixed-point Alog2 breaks with inputs between zero and one: a copy-paste error in the bounds checking. (Updating the file.)
  • Lawson Posts: 870
    edited 2013-07-24 17:46
    I've started working on floating-point CORDIC code, beginning with simple things and working up. So far, I've added Fcmp(), Fmod(), and a limited sin() function. Unfortunately, the sin() function is limited to a range of -pi to pi by the Ftrunc() function. Anyone have ideas on how to modify Ftrunc() so it limits its output to the CORDIC angle range (i.e., integers from 0 to 2^32-1 over one full turn)? It might also help if someone could explain how sin() in F32 and Float32Full deals with angles outside of the 0 to 2pi range. Attached is my current progress.

    Lawson
  • Tracy Allen Posts: 6,656
    edited 2013-07-25 13:39
    I have to ask... What are you doing with a thermistor, to want resolution to 16-20 bits?

    I'm interested to follow what you come up with. I can't speak to the floating point, but when the circle diameter is expressed in natural units of radians, the F32 code simply divides the input by 2pi and throws away the integer part:
    ' from F32
    OneOver2Pi              long    1.0 / (2.0 * pi)        ' I need this constant to get the fractional angle
    ' snip code here to handle sin cosine offset
    _SinCos_cont            mov     fnumB, OneOver2Pi
                            call    #_FMul                  ' rescale angle from [0..2pi] to [0..1]
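
    For the wrap itself, a minimal Spin sketch using the stock FloatMath object (my code and names, not F32's; it gives up the bottom bit of angle resolution to stay inside signed 32-bit math):

    OBJ
      fm : "FloatMath"

    PUB RadToCordic(a) : bits32 | turns
    '' Reduce a float angle in radians to a 32-bit CORDIC angle (full circle = 2^32).
      turns := fm.FMul(a, constant(1.0 / (2.0 * pi)))         ' radians -> turns
      turns := fm.FSub(turns, fm.FFloat(fm.FTrunc(turns)))    ' throw away the integer turns
      bits32 := fm.FTrunc(fm.FMul(turns, 2147483648.0)) << 1  ' half-turn = 2^31, then double

    Negative angles come out as negative fractional turns, and the final two's-complement value wraps them onto the circle correctly.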
    
  • Lawson Posts: 870
    edited 2013-07-25 15:19
    I'm controlling the temperature of a seed laser down to single-digit millikelvin. This is to hold the 1064 [nm] output of the seed laser steady to better than 0.001 [nm]. Environmental effects are killing my long-term stability at the moment, but the extreme precision and stability I've engineered into the temperature sensors has been very helpful. (The tested noise floor on these sensors is low enough that I can resolve 0.2 millikelvin changes sampling at 30 Hz.)

    Eventually I'll also be attempting to use the semiconductor junction of a distributed Bragg reflector laser diode to sense its junction temperature. My napkin calcs say I've got the hardware to do that. The calibration is going to be a bear, but it should let me eliminate environmental effects on the junction temperature, finally giving us a rock-steady seed laser.

    " the F32 code simply divides the input by 2pi and throws away the integer part:" That's what I figured. I can't seem to find where it throws away the integer part though.

    BTW, one of our lasers in the field broke. (something they do WAY too often...) So I'll be putting this work largely on hold for the next week or two.

    Lawson
  • Tracy Allen Posts: 6,656
    edited 2013-07-26 08:20
    I had to look up "seed laser". I figured it must be something that gives you an unfair advantage at a summer picnic in the watermelon seed fight.

    It's really exotic. I read the wiki and realized that I'm way behind on what is going on in laser technology.

    "I can't seem to find where it throws away the integer part though." It doesn't throw it away. In F32, after dividing by 2Pi and a couple of adjustments, it deals separately with the exponent, expA, and the mantissa, manA. The core of the logarithm math involves only the mantissa. At the end the result is rescaled using the exponent and is repacked back to floating point.
  • Lawson Posts: 870
    edited 2013-07-26 11:29
    I'm building up to tackling the log/exp functions by working on the trig functions first. This will make sure I've got all the overhead of dealing with floating-point numbers straight in my head before I start modifying the CORDIC code to do logs/exponentials. So my "can't seem to find where it throws away the integer part" is only referring to F32's trig functions.

    Marty
  • Tracy Allen Posts: 6,656
    edited 2013-07-27 09:51
    The integer part is handled in pretty much the same way for trig functions as it is for log & exp. The terms "significand" and "mantissa" are more or less equivalent. I think significand is the preferred term for IEEE floating point, while mantissa applies more to the fractional part of a logarithm. The distinction is academic, because both are tied up in the notions of an integer (fraction) with some degree of precision and an exponent. The code in FloatMath, Float32, and F32 all refers to the significant digits as the mantissa.

    In CORDIC the log/exp math is convoluted. It works from a table of hyperbolic arctangents, and additional steps are required to extract the log or exp from hypsin and hypcos. CORDIC is one member of a class of shift-and-add algorithms. There are direct algorithms for high-precision log and exp. I used one such to calculate logs on the BASIC Stamp for dew point and thermistors. There are better methods that use a table (discrete base) of the values of ln(1 + 1/2^n). A good reference for this is Matters Computational: Ideas, Algorithms, Source Code by Jörg Arndt.
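
    (For reference, the extraction steps: hyperbolic CORDIC in vectoring mode yields atanh, and ln(w) = 2*atanh((w - 1)/(w + 1)); in rotation mode it yields sinh and cosh together, and exp(t) = cosh(t) + sinh(t).)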
  • Lawson Posts: 870
    edited 2013-08-14 12:09
    I've been able to get back to this problem a bit, so I've been doing some Google searches for Log() and Exp() shift-and-add algorithms. This link (http://www.quinapalus.com/efunc.html) shows a nice Log() or Exp() algorithm that uses an "invariant" expression and "nice" numbers to successively approximate the answer. It's quite similar to CORDIC, actually, but specific to Log() and Exp() instead of trig functions. Sections 1-3 of this paper are also relevant, though the math notation used needs more effort to decode and understand. After I get the CORDIC trig functions completely working, I think I'll implement the algorithm from the first link (and see if doing a Log2() and 2^x version is faster).
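
    To give the flavor of the invariant method from the first link, here is a 16.16 fixed-point Spin sketch (my own code and table, not the final FME implementation): it multiplies a running product up toward x by (1 + 2^-n) factors while accumulating the matching ln(1 + 2^-n) constants, then adds a first-order residual.

    PUB Ln16(x) : y | p, n, t
    '' Natural log in 16.16 fixed point, for x in [$1_0000, $2_0000), i.e. [1.0, 2.0).
      p := $1_0000                                   ' running product, starts at 1.0
      repeat n from 1 to 15
        repeat
          t := p + (p >> n)                          ' try one more (1 + 2^-n) factor
          if t > x
            quit                                     ' factor no longer fits
          p := t
          y += lntab[n - 1]                          ' accumulate ln(1 + 2^-n)
      y += ((x - p) << 16) / p                       ' first-order residual: ln(x/p) ~ (x - p)/p

    DAT
    lntab   long  26573, 14624, 7719, 3973, 2017, 1016, 510, 256  ' round(ln(1 + 2^-n) * 65536)
            long  128, 64, 32, 16, 8, 4, 2                        ' for n = 1..15

    The dual loop (subtracting the ln(1 + 2^-n) constants from the argument while multiplying the factors into the result) gives Exp, and a Log2()/2^x version just swaps in a log2(1 + 2^-n) table.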

    Lawson
  • Tracy Allen Posts: 6,656
    edited 2013-08-14 15:30
    Marty,
    The book by Jörg Arndt.
  • Lawson Posts: 870
    edited 2013-08-15 18:06
    Thanks for the source recommendations, Tracy. I'll look those up as I have time.

    Lots of progress today. I decided that if I limited the input angles to the 23-bit precision of a 32-bit float's mantissa, the code would get nearly full precision and a wider input range. This change lets the trig functions work over a range of -1605 to 1605 radians, though the best precision is seen over the span of -pi to pi. The library now has working, tested sin(), cos(), and tan(). Over 50,000 pseudo-random points, the FME code is approximately 10x more precise than F32 or Float32Full when doing sin() or cos() (assuming the functions in MATLAB are exact). Tan() is about 5x more precise.

    I also have a working implementation of Atan2() (and Atan()). Precision is slightly worse than F32's, and 4x better than Float32Full's (again, tested with ~50,000 pseudo-random points). With Atan2() working, Asin() and Acos() will be easy to add using trig identities.
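
    (E.g., asin(x) = atan2(x, sqrt(1 - x^2)) and acos(x) = atan2(sqrt(1 - x^2), x).)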

    Note, I've done no speed optimization on this code. Splitting the CORDIC code into separate Cartesian-to-polar and polar-to-Cartesian loops should gain a lot of speed. Likewise, getting rid of the extra pack() and unpack() calls inside the library code should also speed things up (though this would likely need some API changes and/or redundant code).

    Marty
  • Lawson Posts: 870
    edited 2013-08-16 17:11
    More progress. asin() and acos() now exist and have had basic testing. They both output NaN if the input is outside of the -1 to 1 range. I added an isNaN() function, modified Fsqr() to output NaN if the input is negative, and added random() and seed() functions to generate random floating-point numbers. random() unsteadily clocks two Spin LFSRs for a significant improvement in random-number quality. I've also started prototyping exp2() and log2() functions.

    If you use successive numbers from the Spin LFSR as X and Y coordinates, you get this picture after 20,000 point-pairs.
    [Attached image: Spin LFSR.png]


    If the following code is used instead, the picture improves greatly. (An improvement is also seen even if only one LFSR is used.)
    pub random(spud)
    ''Add some unsteady clocking to the LFSRs to improve their quality a bit.
    ''sprout is an object-level VAR holding the second LFSR's state.
      repeat (((spud << 13) >> 30) + 1)                   'run the LFSR 1-4 times based on center bits of spud
        ?sprout
      repeat (((sprout << 23) >> 30) + 1)                 'run the LFSR 1-4 times based on center bits of sprout
        ?spud
      result := spud
    

    [Attached image: Unsteady spin LFSR.png]
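
    A quick usage sketch (my code; spud is passed by value, so the caller feeds each result back in as the next argument, while sprout advances inside the object):

    PUB demo | spud
      spud := cnt | 1                               ' arbitrary nonzero starting seed (my choice)
      repeat 5
        spud := random(spud)                        ' each call returns the next value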


    Marty
  • Lawson Posts: 870
    edited 2013-08-20 17:05
    Log2() and Exp2() are now done. (Yea! :cool: ) The relative error is between 60 and 100 times lower than F32's. I still need to pull the CORDIC into the same file, add base-e and base-10 variants, add Pow()/LogX() functions, and re-test the trig functions with my latest test framework (and do some code cleanup too). I had some small rounding errors in my test framework, and those were swamping the error of the trig, exp, and log functions.

    Marty
  • Tracy Allen Posts: 6,656
    edited 2013-08-22 10:03
    Nice job Marty! I'm following along with interest, something I've thought about for a long time. You know, ever since the days I learned how to use a slide rule!

    I see that you truncated the series at 25 steps (out of 29 in the table), and also left off the correction for residual error. In terms of rounding error, did you see any difference including those extra steps or not? For doing this with integer math (as opposed to IEEE floating point), I think I'd want to take it out to full precision.

    The development of the subject that you pointed out at http://www.quinapalus.com/efunc.html is quite different from others I've seen, but essentially the same ideas. It made me curious about his (Mark Owen's) book, Practical Signal Processing. It looks like he covers a lot of material. From Amazon:
    "Using intuitive arguments rather than mathematical ones wherever possible, this 2007 book introduces the basic theory of digital signal processing with emphasis on real-world applications."
    I like to see both, the underlying math and the intuitive side by side.
  • Lawson Posts: 870
    edited 2013-08-22 10:32
    My test showed that the residual correction only gives 1-2 more bits. It's just simpler to run the loop again.

    I truncated at 25 steps because that's when diminishing returns started hitting hard. I.e., going the full 29 loops with residual correction only improved the standard deviation of the error by about 30%. You'd need to go to double precision, or work with unpacked floats, to get a significant advantage from more iterations. (Anyone interested in a stack-based RPN unpacked-float library? :P )

    Marty
  • Lawson Posts: 870
    edited 2013-08-22 13:11
    Well, I'm calling it complete for now. FME.spin is the full library, with the CORDIC and invariant loops inside. The trig code uses three variables, so you'll need a copy of the object for each cog that uses it. Re-testing of the trig code shows that it's 1-5x more accurate than F32, so no big gains there. Not surprising, really, as F32 also uses CORDIC for the trig functions.

    The attached archive is my full testing environment including the Matlab scripts used to visualize the errors. I've included it to make it easier for everyone to poke at the library and find any bugs I missed.

    I'm planning to post this to the OBEX in a week if nobody finds any bugs in it.

    Marty
  • Lawson Posts: 870
    edited 2013-08-30 11:16
    I just put the code on the OBEX: FME.spin
    I made a few small changes since the last post. Trig functions now use only stack variables, isNaN() returns true for any value of NaN, and I added an isInf() function to test for infinite inputs.

    Marty