Laser range finder
B.ASHISH
Posts: 28
We have the code for the Parallax Laser Range Finder 28044 in SPIN from the Propeller Object Exchange; it is also available in the sensor's datasheet (Laser Range Finder 28044). It's giving accurate distances, but we want to increase the rate at which it displays readings. Please help us increase the frequency of its readings display as soon as possible.
Comments
How about this: you try to increase the rate yourself, and when you run into problems you post here and see if there is any help available. Open-ended questions for big projects typically do not get much in the way of responses.
As for the practicality of increasing the *laser* read frequency, I doubt it will be able to go much faster. It's already been optimized by Joe Grand (the designer), and I seem to remember he mentioned that it was limited by the camera speed. You might be able to take the code and cut out some unnecessary portions (fluff) to speed it up, or you might be able to decrease the resolution to speed it up.
I've been working with Joe on some optimizations. It's too early to promise anything specific, but it looks like we might get it doing 2 or 3 range samples per second.
If you dedicate 4 cores to grabbing data at the same time, you could bring in 640x16 at the maximum frame rate of 25 FPS (not 30, because you'd need the Prop chip to have a 6 MHz crystal for 24 MIPS operation). Because it takes 2 frames per sample, you'll only get 12.5 samples a second.
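Roughly, the math works out like this; here it is as a quick Spin-style sketch (the constant names and the trivial PUB are just mine for illustration, not anything from the LRF object):

CON
  ' Throughput math, assuming 25 FPS capture and 2 frames per range sample.
  FRAME_MS    = 1_000 / 25            ' 40 ms per captured frame
  SAMPLE_MS   = 2 * FRAME_MS          ' 80 ms per range sample (two frame captures)
  SAMPLES_X10 = 10_000 / SAMPLE_MS    ' 125, i.e. 12.5 samples per second

PUB SamplesPerSecX10
  ' nothing to run; the CON block above is the whole point
  return SAMPLES_X10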
Thanks,
I'm currently working on converting the key slow parts of the Spin code into PASM. If I can get that all working, it should bring full range sample times down to about 250ms (4 per second).
After that I will attempt to get the camera frame captures going faster using multiple cogs. I think the best I could hope for is reading the camera at 30fps, and getting full range samples down to around 100ms (10 per second). That would be 66ms for the 2 frame captures, and 30-35ms for the processing.
However, I'm not promising anything here. I have some code working at about 2.5 range samples per second now that needs to be fully tested and verified before it gets released. It may not be stable enough to release, but I do promise to keep working on it until we have something faster and stable.
Roy
Basically, you need to disable the camera's PLL and then send the camera a 48 MHz or 24 MHz clock, depending on the internal divisor in the camera. Whatever signal causes the pixel clock to generate a 24 MHz output is the one you want. Once you do that, you can use just a little bit of ASM code to align the sampling window to the camera, and then you can read pixels back to back. One core can do about 320 pixels at a time.
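The general shape of the sampling loop I mean is below. This is only a sketch, not LRF code: the sync pin, the unroll depth, and the assumption that one instruction lines up with one pixel clock are all placeholders you'd have to match to the real hardware. The ~320-pixel limit comes from how many free cog registers you have for the unrolled movs and the snapshot buffer.

CON
  SAMPLES  = 8                ' unroll to ~320 in practice (free cog registers are the limit)
  SYNC_PIN = 8                ' assumed sync/HREF input pin

VAR
  long results[SAMPLES]

PUB Start
  ' launch the capture cog and hand it the hub address for its results
  cognew(@grab, @results)

DAT
              org     0
grab          mov     hubptr, par               ' hub address passed in from cognew
              waitpeq syncmask, syncmask        ' phase-align to the camera sync signal
              mov     pix+0, ina                ' one instruction per pixel clock:
              mov     pix+1, ina                '   each mov snapshots the whole port;
              mov     pix+2, ina                '   the 8 camera data pins get masked
              mov     pix+3, ina                '   out later in Spin
              mov     pix+4, ina
              mov     pix+5, ina
              mov     pix+6, ina
              mov     pix+7, ina
              mov     count, #SAMPLES           ' copy the snapshots back to hub RAM
:copy         wrlong  pix, hubptr
              add     :copy, incdest            ' bump the D field to the next pix register
              add     hubptr, #4
              djnz    count, #:copy
              cogid   t1
              cogstop t1                        ' done - stop this cog
incdest       long    1 << 9                    ' adds 1 to the D field of :copy
syncmask      long    1 << SYNC_PIN
hubptr        res     1
count         res     1
t1            res     1
pix           res     SAMPLES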
All the calculations for the image can be done in ASM on the fly. If I remember correctly, you just have to do background subtraction and then centroid computation.
The trick is that you'll need 2-4 cores depending on how much computation you do. Syncing all the cores isn't that hard, though; the CMUcam4 uses 2 synced image processors.
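For the syncing part, the usual Prop trick is to have every capture cog wait on the same precomputed CNT target so they all wake on the same clock tick. A rough Spin sketch (the cog count, method names, stack size, and the ~10 ms lead time are all just placeholders):

CON
  CAPTURE_COGS = 2                        ' 2-4 capture cogs as discussed above

VAR
  long startCnt
  long stack[CAPTURE_COGS * 32]

PUB LaunchCaptureCogs | i
  startCnt := cnt + clkfreq / 100         ' common start time, roughly 10 ms from now
  repeat i from 0 to CAPTURE_COGS - 1
    cognew(CaptureLines(i), @stack[i * 32])

PRI CaptureLines(cogIndex)
  waitcnt(startCnt)                       ' every capture cog wakes on the same tick
  ' ...cog 'cogIndex' then grabs its share of the scan lines...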
Thanks,
The "background subtraction" happens in ASM now as it captures the second frame. The centroid computation and then the final distance triangulation math are both in spin. The centroid computation takes most of the time between those two (~155ms vs. 0.25ms for the rest.).
Centroid computation is done by taking the frame buffer, summing each column of pixels (counting the pixels that are above a set threshold), and then scanning that result for "blobs" of columns with pixel counts above a threshold. Once we have a list of blobs, it finds the blob with the largest mass and uses the centroid of that one. The summing step at the beginning takes about 135ms; the rest takes about 21ms.
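In case it helps anyone following along, here's roughly what that pass looks like in Spin. This is a sketch with made-up names, sizes, and thresholds, not the actual LRF source:

CON
  FRAME_W    = 640        ' columns in the capture buffer
  FRAME_H    = 16         ' rows grabbed per frame
  PIX_THRESH = 40         ' assumed brightness threshold for a "lit" pixel
  COL_THRESH = 4          ' assumed per-column count needed to join a blob

VAR
  byte frame[FRAME_W * FRAME_H]   ' background-subtracted frame, filled elsewhere
  long colCount[FRAME_W]          ' per-column count of pixels over PIX_THRESH

PUB CentroidX100
  ' returns the centroid column of the largest blob, in 1/100ths of a column
  SumColumns
  return FindCentroid

PRI SumColumns | x, y, n
  ' the "summing step": for every column, count the pixels above the threshold
  repeat x from 0 to FRAME_W - 1
    n := 0
    repeat y from 0 to FRAME_H - 1
      if frame[y * FRAME_W + x] > PIX_THRESH
        n++
    colCount[x] := n

PRI FindCentroid | x, inBlob, blobMass, blobMoment, bestMass, bestCentroid
  ' scan the column counts for runs ("blobs") over COL_THRESH and keep the
  ' centroid of the blob with the largest mass
  bestMass := 0
  bestCentroid := -1
  inBlob := false
  repeat x from 0 to FRAME_W - 1
    if colCount[x] => COL_THRESH
      if not inBlob
        inBlob := true
        blobMass := 0
        blobMoment := 0
      blobMass += colCount[x]
      blobMoment += colCount[x] * x
    elseif inBlob
      inBlob := false
      if blobMass > bestMass
        bestMass := blobMass
        bestCentroid := (blobMoment * 100) / blobMass
  if inBlob and blobMass > bestMass
    bestCentroid := (blobMoment * 100) / blobMass
  return bestCentroid

The summing step is the obvious place to spend PASM effort, since it touches every pixel; the blob scan only walks the 640 column counts.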
If you need some user testing done, I have a couple of LRFs just sitting around. Not much use unless I can get to at least 2 scans/sec. Glad to help.