integer or floating point LOG functions in spin?
Lawson
Posts: 870
I need to convert the resistance of a thermistor to a temperature. This uses a natural log function. Since I'm not concerned about speed, I want to use a SPIN library to do the natural log. (integer fixed point or floating point is fine) I'd like to avoid having to make a thread-safe version of F32.
Hopefully someone has a version of "FloatMath.spin" upgraded to use the ROM log table, or a version of Chip's Cordic code adjusted to do LOG functions? (don't want to re-invent the wheel.) Alternatively, if someone has cubic-interpolation code for spin, I could live with a look up table.
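For reference, cubic interpolation over a small table isn't much code. Here's a Python sketch (not Spin, and the table size and the Catmull-Rom form are my own choices, purely illustrative) of what a sub-100-entry ln table with cubic interpolation could look like:

```python
import math

N = 64
# Hypothetical small table of ln sampled over [1, 2], padded one entry on each
# side so the cubic always has four neighbours: table[j] = ln(1 + (j-1)/N).
table = [math.log(1.0 + i / N) for i in range(-1, N + 2)]

def ln_cubic(x):
    """Approximate ln(x), 1 <= x < 2, by Catmull-Rom cubic interpolation."""
    u = (x - 1.0) * N
    i = int(u)                  # sample index
    t = u - i                   # fraction between samples
    p0, p1, p2, p3 = table[i], table[i + 1], table[i + 2], table[i + 3]
    # Catmull-Rom spline through p1..p2, evaluated at parameter t.
    return 0.5 * (2.0 * p1
                  + (p2 - p0) * t
                  + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t * t
                  + (3.0 * p1 - 3.0 * p2 + p3 - p0) * t ** 3)
```

The same arithmetic translates to fixed point by scaling the table entries to integers.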
Lawson
Comments
The demo skips interpolation in the hub log table. (The floating-point objects do the interpolation to 16 bits and stuff that into the 24-bit mantissa.) If you go in with a 16-bit argument, 12 bits are used for the table lookup, and you can use the extra 4 bits for the interpolation. With a bit more calculation you could use a spline to squeeze out a little more.
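To make the 12-bit lookup plus 4-bit linear interpolation concrete, here's a Python sketch. The table is built in software to stand in for the hub ROM table, and the widths (12-bit index, 16-bit entries) are assumptions taken from the description above:

```python
import math

TABLE_BITS = 12
FRAC_BITS = 4    # low 4 bits of a 16-bit argument drive the interpolation
# Emulated log table: entry i holds log2(1 + i/4096), scaled to 16 bits.
# One extra entry at the end so idx+1 is always valid.
table = [round(math.log2(1.0 + i / 4096.0) * 65536) for i in range(4097)]

def log2_frac(arg16):
    """log2(1 + arg16/65536), 16-bit fixed-point result, linear interpolation."""
    idx = arg16 >> FRAC_BITS                 # top 12 bits index the table
    frac = arg16 & ((1 << FRAC_BITS) - 1)    # bottom 4 bits interpolate
    lo, hi = table[idx], table[idx + 1]
    return lo + (((hi - lo) * frac) >> FRAC_BITS)
```

Dividing the result by 65536 recovers the fractional log2 as a real number.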
The hub table should be plenty accurate for my first sensor. And for the higher precision sensors, linear interpolation of the hub-table works well enough because the hub-table is so large. I'd only need cubic interpolation with a "small" table. (i.e <100 entries) That said, I'm still going to try the Cordic first because it should be higher accuracy and more versatile.
Lawson
Attached is an exponentiation program that demos the basics of integer math using the hub table. It calculates 2^power for a 12-bit power between 0 and 1: zero is 0/4096 and one is 4096/4096.
For example 2^0.75 = 1.681792831 and the program reports 1.68179.
The hub EXP table takes an argument in [ 0, 1) and returns its result in [ 1,2).
The hub LOG table takes an argument in [ 1, 2) and returns its result in [ 0, 1).
Both tables are indexed with 11 bits. That works well with floating point, because the significand is a 23-bit integer representing a value from 0 to 1, with an implied 24th bit that is always 1, plus the exponent and sign. That representation fits right into the hub table lookup, although there is a terrific loss of precision.
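Here's a quick Python sketch of the EXP-table lookup just described. The table is emulated in software (the real one is in hub ROM), and I'm assuming a simple truncating index with no interpolation, as in the demo:

```python
import math

# Emulated hub EXP table: 11-bit index over [0, 1), 16-bit results in [1, 2).
# One extra entry so the top of the range is covered.
exp_table = [round((2.0 ** (i / 2048.0) - 1.0) * 65536) for i in range(2049)]

def pow2_frac(power, e_bits=12):
    """2^(power / 2^e_bits) for 0 <= power < 2^e_bits, no interpolation."""
    idx = (power << 11) >> e_bits      # keep the top 11 bits of the fraction
    return 1.0 + exp_table[idx] / 65536.0
```

With power = 3072 (i.e. 0.75 in 12 bits) this lands within about 1e-4 of the true 2^0.75 = 1.681792831, the quantization coming from the 16-bit table entries.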
If you just need the thermistor conversion, why not just do it in integer math?
I'm not planning on using integer math, because the ADCs on 4 of my thermistors have 16-17 stable bits. To avoid aliasing, I'd like to carry at least 20-bits through the whole calculation. That's enough bits that keeping all the terms of the Steinhart-Hart equation within 32-bits will be a pain. Floating point already takes care of all that for me. (I'm tempted to make a version of FloatMath that works with un-packed floats for speed and precision on long calculations)
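For anyone following along, the Steinhart-Hart conversion itself is tiny once a natural log is available. A Python sketch, using the textbook example coefficients for a 10k NTC thermistor (a real sensor needs its own calibration fit):

```python
import math

# Steinhart-Hart: 1/T = A + B*ln(R) + C*ln(R)^3   (T in kelvin, R in ohms)
# A, B, C below are the standard textbook example values for a 10 kOhm NTC.
A, B, C = 1.129148e-3, 2.34125e-4, 8.76741e-8

def thermistor_kelvin(r_ohms):
    """Convert thermistor resistance to absolute temperature."""
    ln_r = math.log(r_ohms)
    return 1.0 / (A + B * ln_r + C * ln_r ** 3)
```

At the nominal 10k resistance this returns roughly 298 K (25 C), as expected for those coefficients.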
Lawson
p.s. Doh! The fixed-point Alog2 breaks with inputs between zero and one. Copy-paste error in the bounds checking. (Updating the file.)
Lawson
I'm interested to follow what you come up with. I can't speak to the floating point, but when the angle is expressed in natural units of radians, the F32 code simply divides the input by 2pi and throws away the integer part:
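The F32 snippet didn't survive the post, but the range reduction being described is easy to sketch in Python (illustrative only, not the actual F32 code):

```python
import math

def reduce_angle(x):
    """Map x into [0, 2*pi): divide by 2*pi, discard the integer turns,
    and scale the remaining fraction back up to radians."""
    turns = x / (2.0 * math.pi)
    return (turns - math.floor(turns)) * 2.0 * math.pi
```

Because sin and cos are periodic in 2*pi, the reduced angle gives the same result as the original.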
Eventually I'll also be attempting to use the semiconductor junction of a distributed Bragg reflector laser diode to sense its junction temperature. My napkin calcs say I've got the hardware to do that. The calibration is going to be a bear, but it should let me eliminate environmental effects on the junction temperature, finally giving us a rock-steady seed laser.
" the F32 code simply divides the input by 2pi and throws away the integer part:" That's what I figured. I can't seem to find where it throws away the integer part though.
BTW, one of our lasers in the field broke. (something they do WAY too often...) So I'll be putting this work largely on hold for the next week or two.
Lawson
It's really exotic. I read the wiki and realized that I'm way behind on what is going on in laser technology.
"I can't seem to find where it throws away the integer part though." It doesn't throw it away. In F32, after dividing by 2Pi and a couple of adjustments, it deals separately with the exponent, expA, and the mantissa, manA. The core of the logarithm math involves only the mantissa. At the end the result is rescaled using the exponent and is repacked back to floating point.
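The mantissa/exponent split for the logarithm can be sketched in Python, using frexp to stand in for F32's unpacking (not the actual F32 code): with x = m * 2^e, you have ln(x) = ln(m) + e * ln(2), so the core log only ever sees the mantissa.

```python
import math

def ln_split(x):
    """ln(x) via the mantissa/exponent split: take the log of the mantissa
    only, then rescale with the exponent at the end."""
    m, e = math.frexp(x)            # x = m * 2**e, with m in [0.5, 1)
    return math.log(m) + e * math.log(2.0)
```

In F32 the same rescale-and-repack happens at the end, just in packed single-precision form.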
Marty
In CORDIC the log/exp math is convoluted. It works from a table of hyperbolic arctangents, and additional steps are required to extract the log or exp from the hyperbolic sine and cosine. CORDIC is one member of a class of shift-and-add algorithms. There are direct algorithms for high-precision log and exp; I used one such to calculate logs on the BASIC Stamp for dew point and thermistors. There are better methods that use a table (a discrete base) of the values of ln(1 + 1/2^n). A good reference for this is Matters Computational: Ideas, Algorithms, Source Code by Jörg Arndt.
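Here's a Python sketch of that table-of-ln(1 + 1/2^n) method (my own minimal version: greedy factoring with a first-order residual correction, in floating point rather than the integer math you'd use on a micro):

```python
import math

# The "discrete base": ln(1 + 2^-k) for k = 1..29, precomputed once.
LN_TABLE = [math.log(1.0 + 2.0 ** -k) for k in range(1, 30)]

def ln_shift_add(x):
    """ln(x) for x in [1, 2): factor x into terms (1 + 2^-k) and sum their
    logs from the table; the multiplies reduce to shift-and-add in fixed point."""
    assert 1.0 <= x < 2.0
    p, y = 1.0, 0.0
    for k, ln_term in enumerate(LN_TABLE, start=1):
        factor = 1.0 + 2.0 ** -k
        while p * factor <= x:      # greedily absorb as many factors as fit
            p *= factor
            y += ln_term
    return y + (x / p - 1.0)        # first-order correction for the residual
```

After all 29 steps the residual x/p - 1 is below 2^-29, so the linear correction term leaves only a tiny error.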
Lawson
The book by Jörg Arndt
Lots of progress today. I decided that if I limited the input angles to the 23-bit precision of the mantissa of a 32-bit float, the code would get nearly full precision and a wider input range. This change lets the trig functions work over a range of -1605 to 1605 radians, though the best precision is seen over the span of -pi to pi. The library now has working, tested implementations of sin(), cos(), and tan(). Over 50,000 pseudo-random points, the FME code is approximately 10x more precise than F32 or Float32full when doing sin() or cos() (assuming the functions in Matlab are exact). Tan() is about 5x more precise.
I also have a working implementation of Atan2(). (and Atan() ) Precision is slightly worse than F32, and 4x better than Float32full. (again, tested with ~50,000 pseudo-random points) With Atan2() working, Asin() and Acos() will be easy to add using trig identities.
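For the record, the identities I mean are the usual ones; in Python terms (illustrative, not the FME code):

```python
import math

def asin_from_atan2(x):
    """asin(x) via the identity asin(x) = atan2(x, sqrt(1 - x^2))."""
    return math.atan2(x, math.sqrt(1.0 - x * x))

def acos_from_atan2(x):
    """acos(x) via the identity acos(x) = atan2(sqrt(1 - x^2), x)."""
    return math.atan2(math.sqrt(1.0 - x * x), x)
```

Both need only one square root and one Atan2() call, so once Atan2() works they come almost for free.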
Note, I've done no speed optimization on this code. Splitting the CORDIC code into separate Cartesian-to-polar and polar-to-Cartesian loops should gain a lot of speed. Likewise, getting rid of the extra pack() and unpack() calls inside the library code should also speed things up. (Though this would likely need some API changes and/or redundant code.)
Marty
If you use successive numbers from the Spin LFSR as X and Y coordinates, you get this picture after 20,000 point-pairs.
If the following code is used instead, the picture improves greatly. (An improvement is also seen even if only one LFSR is used.)
Marty
Marty
I see that you truncated the series at 25 steps (out of 29 in the table) and also left off the correction for residual error. In terms of rounding error, did you see any difference from including those extra steps? For doing this with integer math (as opposed to IEEE floating point), I think I'd want to take it out to full precision.
The development of the subject that you pointed out at http://www.quinapalus.com/efunc.html is quite different from others I've seen, but essentially the same ideas. It made me curious about his (Mark Owen's) book, Practical Signal Processing. It looks like he covers a lot of material. From Amazon:
"Using intuitive arguments rather than mathematical ones wherever possible, this 2007 book introduces the basic theory of digital signal processing with emphasis on real-world applications."
I like to see both the underlying math and the intuitive explanation side by side.
I truncated at 25 steps because that's when diminishing returns started hitting hard; going the full 29 loops with residual correction only improved the standard deviation of the error by about 30%. You'd need to go to double precision, or work with unpacked floats, to get a significant advantage from more iterations. (Anyone interested in a stack-based RPN unpacked-float library? :P )
Marty
The attached archive is my full testing environment including the Matlab scripts used to visualize the errors. I've included it to make it easier for everyone to poke at the library and find any bugs I missed.
I'm planning to post this to the OBEX in a week if nobody finds any bugs in it.
Marty
I made a few small changes since the last post. Trig functions now only use stack variables, isNaN() returns true for any value of NaN, and I added an isInf() function to test for infinite inputs.
Marty