Whew! What a post Joe! Good job documenting the development of this project.
Have you considered the use of an optical bandpass filter to help isolate the bright spot in outside lighting? I would think a green 532 nm laser would be easier to detect with a bandpass in place. Just a hunch... but that's how my brain works. I also thought about a red laser with a blue filter in place. The blue would sort of level out the intensities. Sort of like looking at a scene through a yellow filter with the human eye to get an idea of what it would look like in black and white. A trick I picked up from an Ansel Adams book I have.
Thanks for reading the thread! Glad you like it so far
I had considered using a filter and I think it could be helpful, but the real problem is how to deal with it reliably during manufacturing. Since the module will not come with a housing and will just be the PCB with electronics, mounting the filter would end up being troublesome and costly (especially for a non-mechanical guy like me). I also wanted to keep the camera system as stock as possible to allow people to modify the Laser Range Finder module and potentially use the camera for other applications. The camera module itself already has an IR filter, but I haven't been able to get the actual specifications and only know that at 650nm, only 50% of the light is transmitted. A similar effect could be achieved by taking a single frame with the laser turned on, taking a subsequent frame with the laser turned off, and looking at the differences to help narrow down where the laser spot is.
As for using a different color laser diode, there's nothing available that I know of besides red that is suitable for my design (given my requirements of size, shape, mounting options, and price). It's possible that a laser diode vendor would create a custom unit for me in a color other than red, but that's not something I want to get into at this point. Green would definitely be much brighter, and if things don't work out with my current design, maybe I'll explore this option a little more.
I've just completed layout for the Laser Range Finder prototype PCB. Here are screenshots of the front and back. Units have been ordered and I should have some real, assembled pieces back in a few weeks (I decided on a slow turnaround time to keep costs down).
FRONT:
BACK:
The board is 4-layer, measures 100mm x 40mm (~4" x 1.6"), and is completely routed by hand. All components are mounted on the back side of the board with the exception of the camera module and laser diode. The Propeller, its supporting electronics, and the camera interface circuitry can be seen on the left side of the image (thus, on the right side of the actual PCB, since we're looking through the PCB here from the front side) and the laser diode control circuitry is on the right. The Prop Clip 4-pin interface is at the top of the board as surface mount pads and is used for Propeller programming and user hacking/modifications. Below it is a bi-color (red/green) LED for indication of system status. At the bottom is a single-row, 4-pin header (VCC, Serial IN, Serial OUT, GND) for the primary user interface (sending commands, receiving data, etc.).
The center points of the camera and laser diode are spaced at exactly 78mm, which closely matches the spacing of my [post=947589]proof-of-concept[/post] (and the original project on which my project is based) and keeps the module to a reasonable size. The spacing of the camera and diode ultimately determines the effective range (minimum and maximum detectable distance), so if my current spacing doesn't give a decent, usable result, I'll need to do some experiments to select a more ideal configuration (while still considering overall module size). Since the laser diode control circuitry is isolated on one side of the board, it will be possible for folks wanting an even wider camera-to-diode spacing than is ultimately provided to cut the board and extend the spacing.
I'll be back with more photos and progress updates once the prototype units arrive!
"Laser Radiation - Avoid direct eye exposure".
How will that work? A kit or a board only? No sales to minors, everyone signs a waiver (like the Segway)?
Parallax and I haven't discussed this in detail yet, but we will obviously need to follow whatever laws (if any) are required for the sale of a Class IIIa laser product (which includes, for example, most of your standard, run-of-the-mill red laser pointers, laser levels sold at hardware stores, laser-based thermometers, etc.) and make sure our customers are aware of potential damage if they stare into the laser.
The visible red, 635nm diode I've selected for the design has a 3mW maximum output power. Of this common, "safe" class, the few documented cases of eye damage are related to someone staring at the beam for a prolonged period (http://en.wikipedia.org/wiki/Laser_safety#Laser_pointers).
The assembled circuit boards for my Laser Range Finder prototype have arrived! I wrote a little bit about the board [post=981013]last month[/post], but thought it would be fun to share some actual photos of the real deal:
FRONT:
BACK:
For development purposes, I've soldered in a laser diode socket on this prototype (the tall, white plastic part on the right front of the module). This makes it easy to make minor adjustments to the laser diode positioning and to replace the laser diode in case something goes wrong. In the production version, there won't be a socket and the laser diode will be soldered flush to the board. The tiny OVM7690 camera module can barely be seen on the left front.
While waiting for these boards to come back, I've been working on the Laser Range Finder firmware and making great strides with my PC-based host/monitor program. But, with the new hardware finally here, I can hardly wait to test it out - keep your fingers crossed for me!
Well, that was fast! After making some modifications in the Propeller firmware to account for pin changes between my development board and prototype PCB, the unit came up on the first try! It's times like these when I love engineering
Using a solid copper plane (recommended by @PhiPi) and black matte soldermask on the PCB to prevent light from reflecting into the back of the OVM7690 camera module helped to give me a nice, clear image with no remnants of the camera's internal vias/traces like I had experienced early on in my development process. This is great news, as I may not need to use any epoxy, tape, or other light-blocking material around the edges of the camera. I'll need to do some experiments to ensure that this holds true in a variety of lighting conditions.
OK, now back to work on adding support for 16-bit per pixel (YUV422) images into my frame grabber. Being able to capture color is key for accurate detection of the red laser spot and will translate into better range calculations.
Just wondering how you got the OVM7690 soldered to your board? I assume you had someone place it for you, and didn't solder it by hand.
Even with professional assembly, we had a few of the sensors get knocked off the pcb during shipment and handling around the office. If you plan to sell them on boards without a case, this might be an issue.
Glad to hear you got the problems with the back reflections fixed. We had the same problem. Even omnivision's evaluation kit has the problem, which is amusing.
Just wondering how you got the OVM7690 soldered to your board? I assume you had someone place it for you, and didn't solder it by hand.
I had the units assembled by my local facility. The camera module is a 20-ball micro-BGA package. While you could attempt to solder one by hand (using a skillet or reflow oven), you wouldn't be able to check for reliability without using X-ray.
Even with professional assembly, we had a few of the sensors get knocked off the pcb during shipment and handling around the office. If you plan to sell them on boards without a case, this might be an issue.
Omnivision recommends using an epoxy underfill to increase solder joint reliability, and that's what we plan to do. Otherwise, since the solder balls and pads are so small, they could break off if stressed (as you've experienced).
As I mentioned [post=959702]a while ago[/post], I've been working on a PC-based host/monitor program that will give me a nice, graphical interface to control the Laser Range Finder (LRF) module.
Though not a requirement for usage or operation of the LRF (since the module can be controlled with any host microcontroller through its serial interface and will calculate/provide all of the necessary range finding results on-board), I thought it would be handy for users to have a simple interface at their disposal. It's also been indispensable during module development, as I've been able to quickly see what the camera is seeing, make adjustments to frame grabbing routines, and help verify some of my initial range finding math.
Some of the primary features of the tool include:
Send commands to the LRF Module (using the terminal console)
Grab/display/save images (greyscale or color)
Get color tracking information (set color bounds, get centroid coordinates of the detected color blob, range/distance calculations)
My monitor program, which I'm calling the LRF Image Viewer, is based on the open-source CMUcam3 Frame Grabber tool that was originally written in Visual Basic. I used that tool as a starting point, ported it to Visual Basic .NET, and have heavily customized it for the specific needs of my LRF module.
The program is still a work-in-progress, but I think I'm close to being done. When it's completed, it will be released along with source code so users can easily customize it for their own purposes, such as adding extra image processing functionality or creating new detection/measurement routines.
Here's a screenshot of the current version:
The left side of the screen provides a terminal console and COM port selection/configuration. The LRF connects to the host PC via its serial interface through a USB-to-Serial adapter (or, using a Prop Clip (#32200), through the programming pads on the back of the LRF). The yellow text box beneath the blue console window allows you to type commands and transmit them via the serial port. Any data returned by the LRF will be displayed in the console window.
The right side of the screen provides the graphical interface for the LRF Module:
The Get Info button sends a command to the LRF to obtain version information, parses it, and displays it in the text boxes.
The top window displays an image sent by the LRF, which is requested with the Grab Image button. You can choose to retrieve a greyscale or color image using the radio buttons. The resolution of the selected image type is also displayed - currently 160 x 128 for greyscale and 320 x 32 for color, but subject to change as I fine tune the design.
The Save Image button allows you to save the currently displayed image either as a standard bitmap (.BMP) or as raw binary. The raw format is useful if you plan to pipe the frame grab into another program for post-processing and don't want any of the bitmap headers or down-sampled color (which happens when converting from the camera's YUV422 to the bitmap's RGB color space).
For example, I had used the raw output extensively while modifying my frame grabber to support 16 bits/pixel color, because I didn't want to introduce another potential failure point by trying to decode and display the image with the Image Viewer before knowing that I was receiving the proper data from the camera. My first successful color frame grab was the OVM7690 color bars at 320 x 32 resolution (which, not coincidentally, fits into the same frame buffer size of 5120 longs as the 160 x 128 greyscale frame). I used YUVTools to convert the raw data into a YUV422-decoded color image and, once I knew I had good data in the proper format, I moved forward with building the color decoding and color image display functionality into the Image Viewer tool.
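If you want to poke at the raw dumps yourself without YUVTools, decoding YUV422 on a PC only takes a few lines. Here's a minimal sketch (not the actual Image Viewer code): it assumes the raw file is packed in Y0-U-Y1-V order and uses a standard BT.601 full-range conversion, and the file name is just a placeholder.

[code]
# Minimal sketch: decode a raw YUV422 frame dump into RGB for viewing.
# Assumptions (not from the LRF firmware itself): the raw file is packed
# as Y0 U Y1 V and uses full-range BT.601 conversion.
# Frame size check: 320 x 32 @ 2 bytes/pixel = 20,480 bytes = 5,120 longs,
# the same buffer size as the 160 x 128 @ 1 byte/pixel greyscale frame.

WIDTH, HEIGHT = 320, 32

def clamp(v):
    return max(0, min(255, int(v)))

def yuv_to_rgb(y, u, v):
    d, e = u - 128, v - 128
    return (clamp(y + 1.402 * e),
            clamp(y - 0.344 * d - 0.714 * e),
            clamp(y + 1.772 * d))

def decode_yuv422(raw):
    """Return a list of (r, g, b) tuples, two pixels per 4-byte group."""
    pixels = []
    for i in range(0, WIDTH * HEIGHT * 2, 4):
        y0, u, y1, v = raw[i], raw[i + 1], raw[i + 2], raw[i + 3]
        pixels.append(yuv_to_rgb(y0, u, v))
        pixels.append(yuv_to_rgb(y1, u, v))
    return pixels

if __name__ == "__main__":
    with open("colorbars.raw", "rb") as f:   # placeholder file name
        rgb = decode_yuv422(f.read())
    print(len(rgb), "pixels decoded")
[/code]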
With my next frame grab, I tried to get artsy and mysterious with this shot of my eyes:
Searching for the Light...
I've already started working a bit on the laser spot detection and range calculations. Right now, all of the math is being performed within the Image Viewer. This makes it easy to make adjustments to my detection algorithms. Once I get the results to a point I'm happy with, I can port the routines to run directly on the LRF's Propeller.
If the Color Tracking checkbox is selected in the Image Viewer, the LRF module will grab two consecutive frames - one with the laser off to obtain a baseline background and one with the laser on (each frame grab takes 400 ms, so the laser is only enabled for a short period). Using background subtraction, I'm able to identify only the pixels that have increased in brightness between the two frames (in most cases, this is just the laser spot, though sometimes reflections of the laser pointer are also detected). The image below shows the result of the laser being shined onto a beige wall - essentially all brightness is gone except for where the laser is located:
An important part of the color/laser spot detection algorithm is to locate a blob of pixels that match a set of criteria within the captured frame. In our case, the blob must fall within the color bounds selected with the Upper and Lower controls (the user can use the up/down buttons to adjust the Y, U, and V color components and the result is shown in the color box immediately to the right of the controls). I currently have the bounds set to look for a brightness (Y) value between 0.35 and 1 of any color (U/V).
The Cross and Box radio buttons are used to choose how to highlight the blob if one is detected. The Cross selection (shown in the image above) draws horizontal and vertical lines that intersect at the center point/centroid of the blob. The Box selection will draw a box around the entire blob. Using the Box mode has been helpful as I try to fine tune the detection algorithms, so I can easily see if there is some portion of the frame incorrectly being treated as part of the blob (and where that portion is located).
The text boxes on the right show the Centroid's X and Y coordinates, Pixels From Center (the number of pixels the centroid is away from the frame's center, which is used as a component of the range finding calculation), and (preliminary) Range Finding/Distance results in centimeters and inches.
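To make the two-frame trick a little more concrete, here's a tiny sketch of the background subtraction step. This isn't the Propeller code - just an illustration, assuming the Y (luma) values of both frames are already available as flat arrays:

[code]
# Illustration of the background subtraction step, assuming the Y (luma)
# values of the two frames are available as flat arrays of equal length:
# frame_off (laser disabled) and frame_on (laser enabled).

def subtract_background(frame_off, frame_on):
    """Per-pixel absolute luma difference; anything static drops to ~0."""
    return [abs(on - off) for on, off in zip(frame_on, frame_off)]

# Example: only the pixel that brightened between the two frames survives.
frame_off = [10, 12, 11, 13]
frame_on  = [11, 12, 90, 13]   # third pixel is the laser spot
print(subtract_background(frame_off, frame_on))   # -> [1, 0, 79, 0]
[/code]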
It's time to refine!
With the LRF Image Viewer tool close to being done, my next step is to continue refining the laser spot detection and range finding functionality of the LRF.
I'll come back when I have some quantifiable data to report!
Just found this thread - you have another fan. Excellent work. I really like the thorough documentation. Thanks for sharing all your work!
My pleasure! Glad you like it. It's been fun to write, since it gets all the thoughts out of my head and into a convenient location that I (and others) can refer to
I'm still working on re-writing the image processing/blob detection routines and will have another post once that's all tidied up.
This is pretty rad-tastic! I'm really enjoying following this thread. Back when I was doing my undergrad, I was working for a laser scanning and measurement company, and had the idea to create a small single-point laser range finder for an undergraduate thesis. However, this is WAY cooler than the simple laser/photodiode design that I had specified.
I've attached my original Thesis Proposal. Keep in mind that this was 100% theoretical, as I never got to any prototyping stage [my schedule would have plopped this alongside Senior Design, so I decided to abandon it lest I suffer a nervous breakdown] - but it still may be interesting.
I've been obsessing over the laser spot detection and range finding for the past few weeks. Since I've made some significant progress, I thought I'd write a really long post about it!
However, if there were pixels outside of the primary blob but still within our frame (for example, from reflections caused by the laser diode or some other noise not removed by our background subtraction done during the frame grab), they were included in the blob's bounding box (which is either a virtual or visual box surrounding the detected blob). Here's an example:
When the bounding box was calculated (shown as a red border in the image above), it started from the left-most pixel that met the color bounds and ended at the right-most pixel. The routine didn't take into account any space in between pixels that didn't match the color bounds, as it basically looked at the entire frame and created a box around the largest area of matched pixels. This caused the bounding box to be much larger than the primary blob (the blob that is most likely to be the laser spot used in the range finding calculations). As such, the centroid was also miscalculated, since the routine used the center location of the bounding box as the centroid.
As I was taking some preliminary range finding measurements and looking at the captured frame with the LRF Image Viewer, the range result was sometimes way off because of this centroid miscalculation. I knew that I'd have to refine the blob detection to be more accurate and to handle multiple blobs within the frame in order for the LRF to be useful.
I poked around online for a while and noticed that, while there are lots of examples of blob finding routines, many require significant computational resources and are designed for PC (not embedded) applications. Here are a few of the sites I explored:
My head was spinning from trying to digest all of this material. I mentioned my dilemma to Zoz, friend and former brother-in-arms of Prototype This. Zoz specializes in human-robot interaction and computer vision, among other things, and he had an elegant solution in mind for multi-blob detection. I was ready to learn!
Lessons with Zoz
After an hour of brain-exploding conversation with Zoz, I had a plan scribbled down on paper for a simple, efficient, and robust algorithm that could detect multiple blobs within the frame, properly determine their mass and centroid coordinates, and execute on a dedicated microcontroller like the Propeller.
Over the course of a few days, I implemented the plan in Visual Basic .NET for my LRF Image Viewer Program (porting the routines to run directly on the Propeller will come later). I also changed my region-of-interest for color frame grabs (used for range finding) from 320 x 32 to 640 x 16. This still fits into the existing 5120 long frame buffer and gives a more detailed horizontal view at the expense of limiting the vertical view to be closer around the centerline. This is fine, as we are only concerned with the X position of the blob and any spot too far away from the Y centerline is probably not the primary laser spot that we're interested in.
The image processing and blob detection routines function as follows (a code sketch of these steps appears after the list):
Background Subtraction. As briefly mentioned in my [post=990449]previous post[/post], this step occurs during the frame grabbing process. Two consecutive frames are grabbed, one with the laser diode off and one with the laser diode on. Each pixel's Y/luma component from the first frame is subtracted from the same pixel's Y/luma component from the second frame (and absolute valued), leaving only the pixels that have changed in brightness between the two frames. All other background details (anything that has stayed the same between the two frames) disappear. Due to timing constraints of the frame grabber running on the Propeller, the U/V color components are grabbed only on the first of the two frames and not modified. Details of pixel subtraction can be found here.
Thresholding. Look at each pixel within the frame and determine whether it falls within our defined lower and upper color bounds. If so, the pixel is set to a '1'. If not, the pixel is set to a '0'. In our case, we are only concerned with the Y component and are looking for pixels above a certain brightness threshold. For easier visualization from within the LRF Image Viewer, I set all '1' pixels to white and all '0' pixels to black:
Column Sum. Count the number of '1' pixels within each vertical column. This gives us a one-dimensional array that consists of the number of "valid" pixels per column. For our purposes of identifying the X coordinate of the blob, we don't care where in the Y space it is. So, by summing the valid pixels, we can more easily and quickly search to locate any blobs. The following image shows a zoomed-in blob with each column's sum printed at the bottom of that column:
Blob Detection. Traverse the one-dimensional array of column sums looking for any sums above a pre-defined threshold (currently, a column sum needs to be greater than 2 in order to be considered part of the blob). For example, in the image from Step 3, the blob would start at column 7 (which has a sum of 5) and end at column 22 (which has a sum of 6). We can repeat this across the entire frame until all blobs have been detected.
Mass/Centroid Calculation. Calculate the total mass and centroid for the detected blob(s) in the frame. The mass is simply the number of valid '1' pixels within the total blob. The centroid of a blob is its center of mass and is calculated by weighting every valid pixel by its position in the blob and averaging by the total mass:
For columns 1..n of the blob:
sum = (1 * s1) + (2 * s2) + ... + (n * sn)
where sn = the column sum for column n
Then, centroid = sum / mass
Simpler image processing, like the CMUcam3's blob detection, sets the centroid as the center point of the bounding box. However, this type of calculation would only return a proper value for round, balanced spots. Since we don't know what type of object the laser will be pointing at and how the light will vary when it reflects off of the object, the resulting blob may be neither round nor balanced. Performing the weighted average gives a more accurate center-of-mass result regardless of blob shape.
Here's an example of determining the centroid using the blob from Step 3:
The blob with the largest mass is then chosen as the primary blob (which we assume is the actual laser spot) and will be used for the subsequent range finding calculations. If there are multiple blobs with the same mass, the first occurrence remains the primary.
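As promised above, here's a rough sketch of Steps 2 through 5 in Python. It's only meant to show the flow, not the actual Propeller implementation: the luma threshold (~0.35 of full scale) and the column-sum threshold (greater than 2) follow the values mentioned above, while the data layout and helper names are my own illustrative assumptions.

[code]
# Sketch of Steps 2-5 (thresholding, column sum, blob detection, centroid)
# operating on the luma-difference frame produced by the background
# subtraction. Thresholds follow the values given in the post; the
# row-major data layout and helper names are illustrative assumptions.

WIDTH, HEIGHT = 640, 16
LUMA_THRESHOLD = 89        # ~0.35 of full scale (0.35 * 255)
COLUMN_THRESHOLD = 2       # a column sum must be greater than 2

def column_sums(diff_frame):
    """Steps 2 + 3: count thresholded ('1') pixels in each vertical column."""
    sums = [0] * WIDTH
    for y in range(HEIGHT):
        for x in range(WIDTH):
            if diff_frame[y * WIDTH + x] > LUMA_THRESHOLD:
                sums[x] += 1
    return sums

def find_blobs(sums):
    """Steps 4 + 5: return (mass, centroid_x) for each run of columns above threshold."""
    blobs, start = [], None
    for x, s in enumerate(sums + [0]):               # sentinel closes a trailing blob
        if s > COLUMN_THRESHOLD and start is None:
            start = x
        elif s <= COLUMN_THRESHOLD and start is not None:
            cols = sums[start:x]
            mass = sum(cols)
            # weighted average of column positions = center of mass
            centroid = sum((start + i) * c for i, c in enumerate(cols)) / mass
            blobs.append((mass, centroid))
            start = None
    return blobs

def primary_blob(blobs):
    """Largest mass wins; max() keeps the first occurrence on a tie."""
    return max(blobs, key=lambda b: b[0]) if blobs else None
[/code]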
Here's a screenshot of the LRF Image Viewer showing a multi-blob frame and resulting information, such as the number of blobs, blob characteristics, and primary blob determination (the blob on the right is the laser spot and the small blob on the left is a reflection off of my metal filing cabinet). The LRF Image Viewer marks the centroid of the primary blob with a vertical red line.
Calculating Range with Optical Triangulation
As I wrote about in [post=947589]my first post[/post], this entire LRF project is based on the concept of optical triangulation, where the distance to a targeted object is calculated using triangulation with simple trigonometry between the center points of the laser light, camera, and object. The most compelling example is Todd Danko's Webcam Based DIY Laser Rangefinder and my design is based, in theory, on his implementation:
Referring to the image above, a laser diode module shines a laser spot onto the target object. The value h is a fixed, known distance between the center points of the laser diode and the camera. In our case, it's exactly 78mm. When the distance to the target object D changes, so do both the angle θ and the value pfc, which is the number of pixels the centroid of the primary blob (laser spot) is away from the camera's center point.
As the object gets closer, the value of pfc (and angle θ) increases. As the object gets farther away, pfc (and angle θ) approaches zero. Parallax's own Beau Schwabe made a short video that [post=613631]demonstrates this phenomenon[/post].
If we know the angle θ, then we can use basic trigonometry to calculate the distance value D:
tan θ = h / D
Solving for D, we get:
D = h / tan θ
But, how do we determine the angle θ if our image processing routine is only giving us a pfc value?
The relationship between pfc and angle θ can be described with a slope-intercept linear equation. This relationship varies ever so slightly between LRF modules due to manufacturing and assembly tolerances, so each unit needs to be calibrated by taking a number of measurements from known distances (the actual angle corresponding to each distance is determined with the arctan function) and recording the resultant pfc values. The slope and intercept are then calculated from those values (I'm doing it all with a spreadsheet). The calibration only has to be done once unless the laser diode or camera changes position. To help prevent accidental position changes (e.g., from knocking, bumping, or otherwise rough treatment of the module), the camera module and laser diode will both be reinforced with epoxy around their bases.
So, once we shine the laser onto the target object and receive the centroid of the laser spot (in number of pixels from center), an angle θ can be calculated using the slope-intercept equation and passed to the trigonometric function to determine the actual distance the LRF module is from the target object. Phew!
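In code form, the whole range calculation boils down to just a few lines. Here's an illustrative sketch - note that the SLOPE and INTERCEPT numbers are made-up placeholders, since each real unit gets its own values from calibration:

[code]
# Sketch of the range calculation: convert the primary blob's pixels-from-center
# (pfc) value to an angle with the calibrated slope-intercept equation, then to
# a distance with D = h / tan(theta). SLOPE and INTERCEPT below are made-up
# placeholders; each real unit gets its own values from calibration.

import math

H_MM = 78.0            # fixed camera-to-laser spacing
SLOPE = 0.0005         # placeholder calibration values (radians per pixel)
INTERCEPT = 0.02

def distance_mm(pfc):
    """Target distance in millimeters for a given pixels-from-center value."""
    theta = SLOPE * pfc + INTERCEPT      # angle in radians from the linear fit
    return H_MM / math.tan(theta)

# A closer object pushes the spot further from center (larger pfc),
# which gives a larger angle and therefore a smaller distance.
print(round(distance_mm(200)))   # nearer target
print(round(distance_mm(50)))    # farther target
[/code]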
Measurement Results
My test setup for calibration and initial measurements consisted of eight marked distances on the floor of my lab (denoted with blue tape and corresponding to 6, 10, 20, 30, 40, 50, 75, and 100 inches). The BASIC Stamp II Board of Education was just used to easily provide power (via battery) to the LRF module. All serial communication between the module and my laptop (upper left corner of the picture) is via the Prop Clip interface. The far wall was used as the target object.
I've attached a spreadsheet showing the measurement results from each of my five prototype units:
In summary, this design has a usable range from about 6 inches to 6 feet (72 inches). Accuracy varies anywhere from perfect (no difference between actual distance and the distance calculated by the LRF) to around 2% error (approximately 1/4" to 1" of difference between actual and calculated distance). Within the usable range, the change in pfc/angle is very noticeable, making the range calculations more reliable. Although the camera/blob detection can see the laser at longer distances, the further away the module is from the target object, the closer to the camera frame's center the laser spot appears, making it difficult to determine changes in distance. At distances less than six inches, the laser spot is out of the camera's field-of-view, so no blob detection can occur.
Limitations
My LRF module (currently in conjunction with the LRF Image Viewer tool) is able to consistently and reliably detect multiple blobs and make a determination of which is the primary laser spot. The range finding mathematics work well with reasonable accuracy. However, like any sensor system, it's not suitable for use in all conditions and has its limitations:
Range. As discussed above, this design has a usable range from about 6 inches to 6 feet (72 inches). Longer distances will result in a noticeable reduction in accuracy.
Environment. While the camera does have automatic white balance and automatic exposure, which are enabled by default (they can be enabled/disabled with a single command sent to the LRF module), and will automatically adjust its image to account for sudden changes in lighting conditions, the module works best in a controlled environment, such as indoors with minimal changes in brightness across the frame. The module is also less reliable when the laser is shining onto a bright object (for example, a white piece of paper), since the background subtraction done during image processing could potentially subtract the bright laser from the already bright frame. Giving the camera time for its automatic white balance and automatic exposure to settle helps a bit to make the laser spot stand out.
Calibration. Each unit will require calibration to account for manufacturing and assembly tolerances (which cause the slope and intercept values that we rely on for range calculations to vary slightly) and to ensure that the unit is providing as accurate a range measurement as possible. This will happen during production and the end user won't typically need to worry about it, unless they choose to re-calibrate on their own. Calibration in production quantities could be tricky and time consuming as the LRF will need to be programmed, then used to obtain a number of raw readings at fixed distances (two at a minimum), then re-programmed with the resultant slope and intercept values. I'm thinking about how to best approach this. I may be able to get away with the Propeller automatically storing the updated values in a portion of the EEPROM to at least remove the final step of having to re-program the unit.
Next Steps
With the image processing, blob detection, and range finding routines all working in my LRF Image Viewer program, the next (and final!) step is to port them to run directly on the LRF module's Propeller. This will allow completely standalone operation of the LRF module without the need for a PC host.
While I'm doing that, I'm curious about your thoughts so far. Having followed along on this development journey, what do you think of the design? Is it something you'd like to play with or use in a real project? Or, are the limitations too much of a concern?
All questions, comments, or complaints are welcome!
Joe,
All I can say is wow! Yes, I remember Zoz from "Prototype This" and I am certain that his insights were amazing. Yes, I will look forward to playing with the final product. My own engineering degree went by the wayside when I could not muster freshman calc (probably because I discovered booze). I did spend the next 40 years selling video hardware so I stayed close enough to technology to satisfy the itch. Your engineering diary on this project shows me my own weaknesses in trying to develop an idea. Will use this as a model for working on my own concepts in the future.
Enough of my ramblings. I do have one question. If you were to use a lenticular lens to create a vertical line, say 1/4"-1/2" high, would that make it easier to differentiate the return from other light blobs? I was looking at what happened when I shined my cheap laser level against different backgrounds at a 20 foot distance. To me it seemed to stand out more than just a blob from another laser pointer of similar intensity. I mention the short length of line as I am sure you do not wish to have the light intensity being diluted by spreading it out further than necessary.
Is there a way I could buy one of these prototype devices assembled (completed/working code is not needed)? I'm doing a blob-tracking project for robotics research at a university. I've been prototyping around the Propeller and a larger Toshiba camera module, but with little luck. I just happened to order some of the OmniVision camera modules to try working with them, but placing the camera modules reliably requires more equipment than I have available.
Joe,
All I can say is wow! Yes, I remember Zoz from "Prototype This" and I am certain that his insights were amazing. Yes, I will look forward to playing with the final product. My own engineering degree went by the wayside when I could not muster freshman calc (probably because I discovered booze). I did spend the next 40 years selling video hardware so I stayed close enough to technology to satisfy the itch. Your engineering diary on this project shows me my own weaknesses in trying to develop an idea. Will use this as a model for working on my own concepts in the future.
Very cool! Thanks for your feedback. I don't normally keep detailed design diaries like this, but it's actually nice to sit down and really get it all out on "paper." It really does show how much effort is actually required in a typical design process. I'm glad to hear you'll use it as a model! Looking forward to what you come up with
Enough of my ramblings. I do have one question. If you were to use a lenticular lens to create a vertical line, say 1/4"-1/2" high, would that make it easier to differentiate the return from other light blobs? I was looking at what happened when I shined my cheap laser level against different backgrounds at a 20 foot distance. To me it seemed to stand out more than just a blob from another laser pointer of similar intensity. I mention the short length of line as I am sure you do not wish to have the light intensity being diluted by spreading it out further than necessary.
One of the benefits of this OmniVision OVM7690 CameraCube module I'm using is that it doesn't require any external lenses or focusing mechanisms. It has it all integrated into the tiny little device. The trade-off is that it limits expansion, so adding any external lens would be difficult. With that said, I haven't tried using an external lenticular lens. You're right in that having a laser spot as a small vertical line would essentially make the blob bigger, so it may stand out even more within the frame, but the additional expense and hassle of adding an additional lens may not be worth it. The real challenge was dealing with reflections and other bright objects, and the background subtraction/image processing helps with that.
Is there a way I could buy one of these prototype devices assembled (completed/working code is not needed)? I'm doing a blob-tracking project for robotics research at a university. I've been prototyping around the Propeller and a larger Toshiba camera module, but with little luck. I just happened to order some of the OmniVision camera modules to try working with them, but placing the camera modules reliably requires more equipment than I have available.
Any ideas?
Hi Lexxmac-
Unfortunately, I don't have enough prototype units to sell - the ones I have will be going to beta testers in the next few weeks. If you need something quickly, the best option would be to contact your OmniVision distributor and order the evaluation board from them for your particular camera. It's a carrier board that holds the camera module and associated circuitry and breaks out the connections onto a dual-row male header (like I used in my earlier prototype):
Joe,
I'm looking forward to being able to play with one of these (either in testing or later when I can buy one). I think the limitations are acceptable. 6 inches to 6 feet is plenty of range for my desired usage on my robots, and having to do calibration seems like it won't be too bad. For use in navigating around my house with a small wheeled bot, I think it's going to be awesome.
Joe,
I'm looking forward to being able to play with one of these (either in testing or later when I can buy one). I think the limitations are acceptable. 6 inches to 6 feet is plenty of range for my desired usage on my robots, and having to do calibration seems like it won't be too bad. For use in navigating around my house with a small wheeled bot, I think it's going to be awesome.
Great! I can't wait to see how you use it!
I'm hoping the calibration will be handled during production, so you won't have to worry about it. I'm also thinking of making the calibration routine/command available for end users in case they need/want to re-calibrate.
Joe, this is way cool and I am excited to see that you worked out the kinks. I was wondering if you could offer a way to adjust the range in the field. What I am thinking is if you could make it possible to change the camera-to-laser distance through 2 or 3 pre-defined mounting brackets. This would let the end user trade off short distance for long distance (or vice-versa). Even if you calibrate it at the factory, be sure to provide an easy field re-calibration process.
Joe, this is way cool and I am excited to see that you worked out the kinks.
Cool! Thanks for following along
I was wondering if you could offer a way to adjust the range in the field. What I am thinking is if you could make it possible to change the camera-to-laser distance through 2 or 3 pre-defined mounting brackets. This would let the end user trade off short distance for long distance (or vice-versa). Even if you calibrate it at the factory, be sure to provide an easy field re-calibration process.
Interesting suggestion! I'll certainly consider this for future versions based on response to the unit, but don't want to implement such a drastic physical change this close to the "finish line."
My main concern would be to ensure that the pre-defined positions would stay exactly fixed during use, otherwise the user may have to re-calibrate too often to be useful. In my current configuration, the laser diode and camera are essentially locked into position by being soldered down to the PCB and glued. But, the layout for the two halves (laser and camera) is distinct, making it possible for someone to chop the board and re-adjust the camera-to-laser distance.
I'll certainly do my best to provide an easy field re-calibration process, however. I have some ideas on how I'd implement this and just need to do some experiments.
Take care,
Joe
PS - I am sad to say that I only ranked as a Geek at 17.15867%
Alternative - Could there be three lasers, individually calibrated at near, intermediate, and far? That way, one could roughly determine the distance from any or all three; and select the correct unit for maximum accuracy. I have a whole box of surplus laser pointers.....
Alternative - Could there be three lasers, individually calibrated at near, intermediate, and far? That way, one could roughly determine the distance from any or all three; and select the correct unit for maximum accuracy. I have a whole box of surplus laser pointers.....
Possibly. But, like providing multiple mounting brackets, I think it would be too much of a diversion from the original version that is (hopefully) almost done. I'd certainly like to see someone hack the normal LRF Module to do something like this, though, with surplus laser pointers!
Using three lasers would also increase the frame grab time by ~3x, since we'd need to grab the frame three times with each of the lasers on, then do the background subtraction and image processing on each. I don't know what the actual per-range-measurement time is yet, but I'm going to assume it's around 500 ms total in its current configuration.
It would be interesting to do some calculations with different camera-to-laser spacings to determine what minimum and maximum lengths are possible, and then leave it as an exercise to the user to modify it for anything outside of the production capability.
Just in time for UPEW 2011, I've finished porting all of the image processing, blob detection, and range finding routines onto the Propeller! I can now run my Laser Range Finder module in a standalone mode without the assistance of a PC host. And, I still have 93 longs remaining
I'll be giving a talk at UPEW on Saturday about the LRF and will have a unit there to demonstrate.
When I get back, I need to finish working on the calibration routines and prepare the units I have for a short beta testing round. I'll write back with more details and examples of the standalone operation as soon as I can!
Hope to see as many of you as possible this weekend,
Here's an overdue update on the state of the Laser Range Finder:
As briefly mentioned last month right before the Unofficial Propeller Expo West 2011, I finished porting all of the image processing, blob detection, and range finding routines onto the Propeller! This was a crucial step in allowing the Laser Range Finder to become a standalone module that can operate by itself without the need to be tethered to a PC host.
So, barring any unforeseen disasters, the Laser Range Finder is done!
Module Commands
The LRF is controlled through a simple serial communications interface (ASCII command/responses at TTL-level). Host baud rate is automatically detected upon LRF power-up, allowing easy interfacing to a variety of microcontrollers and computers. The auto-detection routines were borrowed from [URL=http://www.rayslogic.com/propeller/3rdPartyHardware/uOLED-96-Prop/RS232Driver.htm]Raymond "Rayman" Allen's RS232 interface for uOLED-96-Prop[/URL] and support standard baud rates from 300 to 115.2k.
Here's the list of available commands:
Basic commands
R - Single range measurement (returns a 4-digit decimal value in millimeters)
L - Repeated range measurement (any subsequent byte will stop the loop)
E - Adjust camera for current lighting conditions
S - Reset camera to initial settings
V - Print version information
H - Print list of available commands
Advanced commands
O - Display coordinate, mass, and centroid information for all detected blobs
X - Calibrate camera system for range finding (requires user interaction)
G - Capture & send single frame (8 bits/pixel greyscale @ 160x128)
C - Capture & send single frame (16 bits/pixel YUV422 color @ 640x16) w/ laser enabled
P - Capture & send processed frame (16 bits/pixel YUV422 color @ 640x16) w/ background subtraction
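To give an idea of how simple the host side can be, here's a quick sketch of requesting a single range measurement from a PC using Python and pyserial. The 'R' command and the 4-digit millimeter response come from the list above; the port name, baud rate, and exact response framing (line endings, prompt characters) are assumptions on my part:

[code]
# Host-side sketch using pyserial. Port name, baud rate, and response
# framing are assumptions; the 'R' command and 4-digit decimal millimeter
# value are from the command list above.

import serial

with serial.Serial("/dev/ttyUSB0", 115200, timeout=2) as lrf:   # placeholder port
    lrf.write(b"R")                  # request a single range measurement
    reply = lrf.read(16)             # 4-digit value plus whatever framing surrounds it
    digits = "".join(ch for ch in reply.decode(errors="ignore") if ch.isdigit())
    if digits:
        print("Range:", int(digits), "mm")
[/code]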
Demonstrations
I've put together a few simple projects to show off how the LRF can be used and integrated into projects. All of the code will be available on the Parallax website when the LRF is released.
BASIC Stamp II. A portable Laser Range Finder unit using a BSII Board-of-Education, piezo buzzer, Parallax serial LCD module, and 9V battery. When the pushbutton switch is pressed, the LRF calculates the distance between itself and the target object. The result is displayed on the LCD in centimeters and inches. When the measurement is out-of-range (outside of the defined minimum or maximum distance bounds of the LRF), the piezo buzzer will sound a warning tone.
Propeller. Using the Propeller as a host (a separate Propeller, not the one on-board the LRF module) to display the distance between the LRF and the target object in text and graphical form via composite video output.
Freescale's FSLBOT robot platform. The LRF module is mounted to the robot's head and interfaced directly to the MCF52259 ColdFire processor on the TWR-MECH board. The robot walks/waddles forward until it becomes too close to an object, then stops until the object goes away.
Host PC. Since the LRF communicates via standard, printable ASCII, it can be controlled with any host computer's terminal program (e.g., HyperTerminal or the Parallax Serial Terminal). The only additional hardware is a USB-to-serial interface or level translator to properly interface the serial lines.
The [post=990449]LRF Image Viewer[/post] can also be used for easy control of the LRF module and to take frame captures and see what the module is seeing:
I've posted a few videos on YouTube which explain the demonstrations in more detail:
As discussed in an [post=994021]earlier post[/post], the final routine I needed to write for the LRF module was to handle the required calibration, which is necessary for each individual module to account for manufacturing and assembly tolerances.
The calibration routine requires the user to align the LRF module at a number of fixed distances. The LRF takes a measurement at each distance and calculates the SLOPE, INTERCEPT, and PFC_MIN values (the routine is based loosely on http://www.eng.umd.edu/~nsw/ench250/slope.htm). The values are then stored in an unused portion of the non-volatile boot Serial EEPROM. The LRF has a 64KB EEPROM, which leaves 32KB available for data storage after the Propeller uses the first 32KB for program code. This also means that the values will not get over-written when the LRF code is re-loaded into the EEPROM.
The SLOPE and INTERCEPT are used to convert the pixel offset to angle using a best-fit slope-intercept linear equation. The PFC_MIN value is used to set the maximum allowable distance of the LRF module, which is represented by a minimum pixels-from-center value (the number of pixels the primary blob is located away from the center of the camera frame).
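For the curious, the calibration math is roughly a least-squares line fit. Here's a sketch in Python: the true angle at each known distance comes from atan(h/D), and a line is fitted against the measured pixels-from-center values. The distances and pfc readings below are made-up examples, not data from a real unit.

[code]
# Sketch of the calibration math: at several known distances, record the
# measured pixels-from-center value, compute the true angle from
# theta = atan(h / D), then fit angle = SLOPE * pfc + INTERCEPT.
# The sample points below are invented for illustration only.

import math

H_MM = 78.0

# (known distance in mm, measured pfc) - made-up calibration points
samples = [(254, 260), (508, 150), (1016, 80), (1905, 45)]

xs = [pfc for _, pfc in samples]
ys = [math.atan(H_MM / d) for d, _ in samples]   # true angle at each distance

n = len(samples)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
den = sum((x - mean_x) ** 2 for x in xs)
slope = num / den
intercept = mean_y - slope * mean_x

print("SLOPE =", round(slope, 6), "INTERCEPT =", round(intercept, 6))
# PFC_MIN would simply be the pfc reading taken at the maximum usable distance.
[/code]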
A YouTube video demonstrating the calibration routine can be found here.
Beta Testing
On June 6, I kicked off a short beta testing period for the Laser Range Finder module. This is the first time I've had any sort of formal beta test for one of my hardware products, as they are normally developed and tested internally. Having been so focused on the low-level details of designing the product for so long and finally getting it all working, I thought it would be nice to get some extra sets of eyes on it to really exercise it in a variety of scenarios and get their honest opinions, both good and bad.
The beta testing group consists of a small handful of folks from Parallax and longtime members of the Parallax forums. I should be receiving all feedback by the end of June. I may need to make some minor adjustments to the firmware and/or hardware based on their comments, but as long as there aren't any major design flaws, the Laser Range Finder module should be going into production sometime in July and would be ready for sale through Parallax a few months after that.
Thanks for Reading!
It's been a pleasure writing this development diary over the past seven months. Thanks to all of you for reading along and providing comments during this adventure. Posting here really helped me to wrap my head around the various aspects of the project and I hope you've learned something along the way, too!
With the project essentially complete, I don't plan on having any more major updates to post here. However, I'll post when the module formally launches and, of course, will happily answer any questions you may have in the meantime.
Hi Joe,
I'm interested in using a CMUcam3 with an Arduino (328 or Uno).
Since you have used a CMUcam2 with your project, you may be able to help me (with how to get images from a CMUcam).
Thanks in advance.
:cool:
Paul
Thanks for reading the thread! Glad you like it so far
I had considered using a filter and I think it could be helpful, but the real problem is how to reliably deal with them during manufacturing. Since the module will not come with a housing and will just be the PCB with electronics, mounting the filter would end up being troublesome and costly (especially for a non-mechanical guy like me). I also wanted to keep the camera system as stock as possible to allow people to modify the Laser Range Finder module and potentially use the camera for other applications. The camera module itself already has an IR filter, but I haven't been able to get the actual specifications and only know that at 650nm, only 50% of the light is transmitted. I think a similar effect could be to take a single frame with the laser turned on, take a subsequent frame with the laser turned off, and look at the differences to help narrow down where the laser spot is.
As for using a different color laser diode, there's nothing available that I know of that is suitable for my design (given my requirements of size, shape, mounting options, and price) besides red. It's possible that a laser diode vendor would create a custom unit for me with color other than red, but that's not something I want to get into at this point. Green would definitely be much brighter and maybe if things don't work out with my current design, I'll explore this option a little more.
Take care,
Joe
Ive just completed layout for the Laser Range Finder prototype PCB. Here are screenshots of the front and back. Units have been ordered and I should have some real, assembled pieces back in a few weeks (I decided on a slow turnaround time to keep costs down).
FRONT:
BACK:
The board is 4-layer, measures 100mm x 40mm (~4 x 1.6), and is completely routed by hand. All components are mounted on the back side of the board with the exception of the camera module and laser diode. The Propeller, its supporting electronics, and the camera interface circuitry can be seen on the left side of the image (thus, on the right side of the actual PCB, since were looking through the PCB here from the front side) and the laser diode control circuitry is on the right. The Prop Clip 4-pin interface is at the top of the board as surface mount pads and used for Propeller programming and user hacking/modifications. Below it is a bi-color (red/green) LED for indication of system status. At the bottom is a single-row, 4-pin header (VCC, Serial IN, Serial OUT, GND) for the primary user interface (sending commands, receiving data, etc.)
The center points of the camera and laser diode are spaced at exactly 78mm, which closely matches the spacing of my [post=947589]proof-of-concept[/post] (and the original project on which my project is based) and keeps the module to a reasonable size. The spacing of the camera and diode ultimately determines the effective range (minimum and maximum detectable distance), so if my current spacing doesnt give a decent, usable result, Ill need to do some experiments to select a more ideal configuration (while still considering overall module size). Since the laser diode control circuitry is isolated on one side of the board, it will be possible for folks wanting an even wider camera-to-diode spacing than is ultimately provided to cut the board and extend the spacing.
Ill be back with more photos and progress updates once the prototype units arrive!
Joe
How will that work? A kit or a board only? No sales to minors, everyone signs a waiver (like the Segway)?
[I know it'll come up sooner or later.]
Parallax and I haven't discussed this in detail yet, but we will obviously need to follow whatever laws (if any) are required for the sale of a Class IIIa laser product (which includes, for example, most of your standard, run-of-the-mill red laser pointers, laser levels sold at hardware stores, laser-based thermometers, etc.) and make sure our customers are aware of potential damage if they stare into the laser.
The visible red, 635nm diode I've selected for the design has a 3mW maximum output power. Of this common, "safe" class, the few documented cases of eye damage are related to someone staring at the beam for a prolonged period (http://en.wikipedia.org/wiki/Laser_safety#Laser_pointers).
Hope that helps,
Joe
KIDDING!!! I'm a kidder.. no really... just kidding....
*sigh* we live in a litigious society don't we.
Looking good Joe. I'll be watching for updates. What a Grand Idea!
The assembled circuit boards for my Laser Range Finder prototype have arrived! I wrote a little bit about the board [post=981013]last month[/post], but thought it would be fun to share some actual photos of the real deal:
FRONT:
BACK:
For development purposes, Ive soldered in a laser diode socket on this prototype (the tall, white plastic part on the right front of the module) This makes it easy for minor adjustments to the laser diode positioning and to replace the laser diode in case something goes wrong. In the production version, there wont be a socket and the laser diode will be soldered flush to the board. The tiny OVM7690 camera module can barely be seen on the left front.
While waiting for these boards to come back, Ive been working on the Laser Range Finder firmware and making great steps with my PC-based host/monitor program. But, with the new hardware finally here, I can hardly wait to test it out - keep your fingers crossed for me!
Until next time,
Joe
Using a solid copper plane (recommended by @PhiPi) and black matte soldermask on the PCB to prevent light from reflecting into the back of OVM7690 camera module helped to give me a nice, clear image with no remnants of the camera's internal vias/traces like I had experienced early on in my development process. This is great news, as I may not need to use any epoxy, tape, or other light-blocking material around the edges of the camera. I'll need to do some experiments to ensure that this holds true in a variety of lighting conditions.
OK, now back to work on adding support for 16-bit per pixel (YUV422) images into my frame grabber. Being able to capture color is key for accurate detection of the red laser spot and will translate into better range calculations.
Joe
Just wondering how you got the OVM7690 soldered to your board? I assume you had someone place it for you, and didn't solder it by hand.
Even with professional assembly, we had a few of the sensors get knocked off the pcb during shipment and handling around the office. If you plan to sell them on boards without a case, this might be an issue.
Glad to hear you got the problems with the back reflections fixed. We had the same problem. Even omnivision's evaluation kit has the problem, which is amusing.
-chris
I had the units assembled by my local facility. The camera module is a 20-ball micro-BGA package. While you could attempt to solder one by hand (using a skillet or reflow oven), you wouldn't be able to check for reliability without using X-ray.
Omnivision recommends using an epoxy underfill to increase solder joint reliability, and that's what we plan to do. Otherwise, since the solder balls and pads are so small, they could break off if stressed (as you've experienced).
Take care,
Joe
As I mentioned [post=959702]a while ago[/post], Ive been working on a PC-based host/monitor program that will give me a nice, graphical interface to control the Laser Range Finder (LRF) module.
Though not a requirement for usage or operation of the LRF (since the module can be controlled with any host microcontroller through its serial interface and will calculate/provide all of the necessary range finding results on-board), I thought it would be handy for users to have a simple interface at their disposal. Its also been indispensable during module development, as Ive been able to quickly see what the camera is seeing, make adjustments to frame grabbing routines, and help verify some of my initial range finding math.
Some of the primary features of the tool include:
My monitor program, which Im calling the LRF Image Viewer, is based on the open-source CMUcam3 Frame Grabber tool that was originally written in Visual Basic. I used that tool as a starting point, ported it to Visual Basic .NET, and have heavily customized it for the specific needs of my LRF module.
The program is still a work-in-progress, but I think Im close to being done. When its completed, it will be released along with source code so users can easily customize it for their own purposes, such as adding extra image processing functionality or creating new detection/measurement routines.
Heres a screenshot of the current version:
The left side of the screen provides a terminal console and COM port selection/configuration. The LRF connects to the host PC via its serial interface through a USB-to-Serial adapter (or, using a PropClip, #32200 through the programming pads on the back of the LRF). The yellow text box beneath the blue console window allows you to type commands and transmit them via the serial port. Any data returned by the LRF will be displayed in the console window.
The right side of the screen provides the graphical interface for the LRF Module:
The Get Info button sends a command to the LRF to obtain version information, parses it, and displays it in the text boxes.
The top window displays an image sent by the LRF, which is requested with the Grab Image button. You can choose to retrieve a greyscale or color image using the radio buttons. The resolution of the selected image type is also displayed - currently 160 x 128 for greyscale and 320 x 32 for color, but subject to change as I fine tune the design.
The Save Image button allows you to save the currently displayed image either as a standard bitmap (.BMP) or as raw binary. The raw format is useful if you plan to pipe the frame grab into another program for post-processing and dont want any of the bitmap headers or down-sampled color (which happens when converting from the cameras YUV422 to the bitmaps RGB color space).
For example, I had used the raw output extensively while modifying my frame grabber to support 16 bits/pixel color, because I didnt want to introduce another potential failure point by trying to decode and display the image with the Image Viewer before knowing that I was receiving the proper data from the camera. My first successful color frame grab was the OVM7690 color bars at 320 x 32 resolution (which, not coincidentally, fits into the same frame buffer size of 5120 longs as the 160 x 128 greyscale frame). I used YUVTools to convert the raw data into a YUV422-decoded color image and, once I knew I had good data in the proper format, I moved forward with building the color decoding and color image display functionality into the Image Viewer tool.
With my next frame grab, I tried to get artsy and mysterious with this shot of my eyes:
Searching for the Light...
I've already started working a bit on the laser spot detection and range calculations. Right now, all of the math is being performed within the Image Viewer. This makes it easy to make adjustments to my detection algorithms. Once I get the results to a point I'm happy with, I can port the routines to run directly on the LRF's Propeller.
If the Color Tracking checkbox is selected in the Image Viewer, the LRF module will grab two consecutive frames - one with the laser off to obtain a baseline background and one with the laser on (each frame grab takes 400 ms, so the laser is only enabled for a short period). Using background subtraction, I'm able to identify only the pixels that have increased in brightness between the two frames (in most cases, this is just the laser spot, though sometimes reflections of the laser pointer are also detected). The image below shows the result of the laser being shined onto a beige wall - essentially all brightness is gone except for where the laser is located:
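For anyone wanting to experiment with the same idea on a PC, here's a tiny Python sketch of the background subtraction (illustrative only - the real work happens in the Image Viewer and, later, the firmware). It assumes two brightness (Y) frames scaled 0-255, and the minimum-delta threshold is just a placeholder value for the example.
[code]
# Rough sketch of the laser-off / laser-on background subtraction. Only pixels
# that get noticeably brighter with the laser enabled survive; everything else
# is zeroed out. The min_delta threshold is an arbitrary example value.

def subtract_background(frame_off, frame_on, min_delta=30):
    """frame_off/frame_on: flat lists of Y values (0-255) from consecutive grabs."""
    return [max(0, on - off) if (on - off) >= min_delta else 0
            for off, on in zip(frame_off, frame_on)]
[/code]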
An important part of the color/laser spot detection algorithm is to locate a blob of pixels that match a set of criteria within the captured frame. In our case, the blob must fall within the color bounds selected with the Upper and Lower controls (the user can use the up/down buttons to adjust the Y, U, and V color components and the result is shown in the color box immediately to the right of the controls). I currently have the bounds set to look for a brightness (Y) value between 0.35 and 1 of any color (U/V).
The Cross and Box radio buttons are used to choose how to highlight the blob if one is detected. The Cross selection (shown in the image above) draws horizontal and vertical lines that intersect at the center point/centroid of the blob. The Box selection will draw a box around the entire blob. Using the Box mode has been helpful as I try to fine tune the detection algorithms, so I can easily see if there is some portion of the frame incorrectly being treated as part of the blob (and where that portion is located).
The text boxes on the right show the Centroid's X and Y coordinates, Pixels From Center (the number of pixels the centroid is away from the frame's center, which is used as a component of the range finding calculation), and (preliminary) Range Finding/Distance results in centimeters and inches.
It's time to refine!
With the LRF Image Viewer tool close to being done, my next step is to continue refining the laser spot detection and range finding functionality of the LRF.
I'll come back when I have some quantifiable data to report!
Joe
Just found this thread, you have another fan. Excellent work. I really like the thorough documentation. Thanks for sharing all your work!
My pleasure! Glad you like it. It's been fun to write, since it gets all the thoughts out of my head and into a convenient location that I (and others) can refer to
I'm still working on re-writing the image processing/blob detection routines and will have another post once that's all tidied up.
Take care,
Joe
This is pretty rad-tastic! I'm really enjoying following this thread. Back when I was doing my undergrad, I was working for a laser scanning and measurement company, and had the idea to create a small single-point laser range finder for an undergraduate thesis. However, this is WAY cooler than the simple laser/photodiode design that I had specified.
Really looking forward to "playing" with this! :]
Cheers,
Jessica
Can you share some of your thesis idea? It's interesting to compare different approaches.
I've attached my original Thesis Proposal. Keep in mind that this was 100% theoretical, as I never got to any prototyping stage [my schedule would have plopped this alongside Senior Design, so I decided to abandon it lest I suffer a nervous breakdown] - but it still may be interesting.
Cheers,
Jessica
I've been obsessing over the laser spot detection and range finding for the past few weeks. Since I've made some significant progress, I thought I'd write a really long post about it!
My First Attempt
I based my original image processing/blob finding routine on the CMUCam3's cc3_color_track.c source file, revision 556 from 12/29/2008. This simple function worked well in scenarios where there was a singular blob of pixels that were within our defined color bounds.
However, if there were pixels outside of the primary blob but still within our frame (for example, from reflections caused by the laser diode or some other noise not removed by our background subtraction done during the frame grab), they were included in the blob's bounding box (which is either a virtual or visual box surrounding the detected blob). Here's an example:
When the bounding box was calculated (shown as a red border in the image above), it started from the left-most pixel that met the color bounds and ended at the right-most pixel. The routine didn't take into account any space in between pixels that didn't match the color bounds, as it basically looked at the entire frame and created a box around the largest area of matched pixels. This caused the bounding box to be much larger than the primary blob (the blob that is most likely to be the laser spot used in the range finding calculations). As such, the centroid was also miscalculated, since the routine used the center location of the bounding box as the centroid.
As I was taking some preliminary range finding measurements and looking at the captured frame with the LRF Image Viewer, the range result was sometimes way off because of this centroid miscalculation. I knew that I'd have to refine the blob detection to be more accurate and to handle multiple blobs within the frame in order for the LRF to be useful.
I poked around online for a while and noticed that, while there are lots of examples of blob finding routines, many require significant computational resources and are designed for PC (not embedded) applications. Here are a few of the sites I explored:
- Wikipedia: Blob Detection
- VisualCore: Simple Recursive Blob Detection
- Erik van Kempen's Blob Detection
- m3rLinEz: Really simple blob detector
- Jim Weigang newsgroup post
My head was spinning from trying to digest all of this material. I mentioned my dilemma to Zoz, friend and former brother-in-arms of Prototype This. Zoz specializes in human-robot interaction and computer vision, among other things, and he had an elegant solution in mind for multi-blob detection. I was ready to learn!
Lessons with Zoz
After an hour of brain-exploding conversation with Zoz, I had a plan scribbled down on paper for a simple, efficient, and robust algorithm that could detect multiple blobs within the frame, properly determine their mass and centroid coordinates, and execute on a dedicated microcontroller like the Propeller.
Over the course of a few days, I implemented the plan in Visual Basic .NET for my LRF Image Viewer Program (porting the routines to run directly on the Propeller will come later). I also changed my region-of-interest for color frame grabs (used for range finding) from 320 x 32 to 640 x 16. This still fits into the existing 5120 long frame buffer and gives a more detailed horizontal view at the expense of limiting the vertical view to be closer around the centerline. This is fine, as we are only concerned with the X position of the blob and any spot too far away from the Y centerline is probably not the primary laser spot that we're interested in.
The image processing and blob detection routines function as follows:
Simpler image processing, like the CMUcam3's blob detection, sets the centroid as the center point of the bounding box. However, this type of calculation would only return a proper value for round, balanced spots. Since we don't know what type of object the laser will be pointing at and how the light will vary when it reflects off of the object, the resulting blob may neither be round nor balanced. Performing the weighted average gives a more accurate center-of-mass result regardless of blob shape.
Here's an example of determining the centroid using the blob from Step 3:
The blob with the largest mass is then chosen as the primary blob (which we assume is the actual laser spot) and will be used for the subsequent range finding calculations. If there are multiple blobs with the same mass, the first occurrence remains the primary.
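To make the blob-finding approach a bit more concrete, here's a simplified Python sketch of the idea: keep only pixels within the brightness bounds, group adjacent matching columns into blobs, compute each blob's mass and mass-weighted centroid, and keep the blob with the largest mass. Working on column sums (instead of full 2D connected components) and the exact threshold are simplifications/assumptions for illustration; since we only care about the X position for ranging, this captures the gist.
[code]
# Simplified multi-blob detection on a narrow frame (e.g., 640 x 16).
# y_min of ~90 roughly corresponds to the 0.35 brightness bound (0.35 * 255 = 89).

def find_blobs(frame, width, height, y_min=90):
    """frame: flat list of Y values. Returns a list of (mass, centroid_x) tuples."""
    # Sum the brightness of matching pixels in each column.
    col_mass = [0] * width
    for row in range(height):
        for col in range(width):
            y = frame[row * width + col]
            if y >= y_min:
                col_mass[col] += y

    # Group adjacent non-empty columns into blobs and compute weighted centroids.
    blobs, mass, weighted = [], 0, 0
    for col in range(width + 1):
        m = col_mass[col] if col < width else 0
        if m > 0:
            mass += m
            weighted += m * col
        elif mass > 0:
            blobs.append((mass, weighted / mass))   # mass-weighted centroid X
            mass, weighted = 0, 0
    return blobs

def primary_blob(blobs):
    """Largest mass wins; on a tie, max() keeps the first occurrence."""
    return max(blobs, key=lambda b: b[0]) if blobs else None
[/code]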
Here's a screenshot of the LRF Image Viewer showing a multi-blob frame and resulting information, such as the number of blobs, blob characteristics, and primary blob determination (the blob on the right is the laser spot and the small blob on the left is a reflection off of my metal filing cabinet). The LRF Image Viewer marks the centroid of the primary blob with a vertical red line.
Calculating Range with Optical Triangulation
As I wrote about in [post=947589]my first post[/post], this entire LRF project is based on the concept of optical triangulation, where the distance to a targeted object is calculated using triangulation with simple trigonometry between the center points of the laser light, camera, and object. The most compelling example is Todd Danko's Webcam Based DIY Laser Rangefinder and my design is based, in theory, on his implementation:
(Image from http://sites.google.com/site/todddanko/home/webcam_laser_ranger/laser_ranger_drawing.gif)
Referring to the image above, a laser diode module shines a laser spot onto the target object. The value h is a fixed, known distance between the center points of the laser diode and the camera. In our case, it's exactly 78mm. When the distance to the target object D changes, so do both the angle q and the value pfc, which is the number of pixels the centroid of the primary blob (laser spot) is away from the camera's center point.
As the object gets closer, the value of pfc (and angle q) increases. As the object gets farther away, pfc (and angle q) approaches zero. Parallax's own Beau Schwabe made a short video that [post=613631]demonstrates this phenomenon[/post].
If we know the angle q, then we can use basic trigonometry to calculate the distance value D. From the geometry above, tan(q) = h / D.
Solving for D, we get: D = h / tan(q).
But, how do we determine the angle q if our image processing routine is only giving us a pfc value?
The relationship between pfc and angle q can be described with a slope-intercept linear equation. This relationship varies ever so slightly between LRF modules due to manufacturing and assembly tolerances, so each unit needs to be calibrated by taking a number of measurements from known distances (the actual angle corresponding to each distance is determined with the arctan function) and recording the resultant pfc values. The slope and intercept are then calculated from those values (I'm doing it all with a spreadsheet). The calibration only has to be done once unless the laser diode or camera changes position. To help prevent accidental position changes (e.g., from knocking, bumping, or otherwise rough treatment of the module), the camera module and laser diode will both be reinforced with epoxy around their bases.
So, once we shine the laser onto the target object and receive the centroid of the laser spot (in number of pixels from center), an angle q can be calculated using the slope-intercept equation and passed to the trigonometric function to determine the actual distance the LRF module is from the target object. Phew!
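To tie the calibration and the trigonometry together, here's a small Python sketch of the math (purely illustrative - the module's own code runs on the Propeller, and my calibration currently lives in a spreadsheet). It fits the slope and intercept with a least-squares line through (pfc, true angle) points, where each true angle comes from arctan(h / D) at a known distance, and then converts a measured pfc back into a distance with D = h / tan(q).
[code]
import math

H_MM = 78.0  # fixed laser-to-camera spacing on the PCB, in millimeters

def calibrate(samples):
    """samples: list of (pfc, known_distance_mm) pairs taken at marked distances.
    Returns (slope, intercept) of the best-fit line angle = slope * pfc + intercept."""
    pts = [(pfc, math.atan(H_MM / d)) for pfc, d in samples]
    n = len(pts)
    sx = sum(p for p, _ in pts)
    sy = sum(a for _, a in pts)
    sxx = sum(p * p for p, _ in pts)
    sxy = sum(p * a for p, a in pts)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

def range_mm(pfc, slope, intercept):
    """Convert pixels-from-center into distance via D = h / tan(q)."""
    angle = slope * pfc + intercept
    return H_MM / math.tan(angle)
[/code]
During bench calibration you'd call calibrate() once with the measured points, then use range_mm() for every subsequent reading.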
Measurement Results
My test setup for calibration and initial measurements consisted of eight marked distances on the floor of my lab (denoted with blue tape and corresponding to 6, 10, 20, 30, 40, 50, 75, and 100 inches). The BASIC Stamp II Board of Education was just used to easily provide power (via battery) to the LRF module. All serial communication between the module and my laptop (upper left corner of the picture) is via the Prop Clip interface. The far wall was used as the target object.
I've attached a spreadsheet showing the measurement results from each of my five prototype units:
LRF Measurements Spreadsheet.pdf
In summary, this design has a usable range from about 6 inches to 6 feet (72 inches). Accuracy varies anywhere from perfect (no difference between actual distance and the distance calculated by the LRF) to around 2% error (approximately 1/4 to 1 inch difference between actual and calculated distance). Within the usable range, the change in pfc/angle is very noticeable, making the range calculations more reliable. Although the camera/blob detection can see the laser at longer distances, the farther away the module is from the target object, the closer to the camera frame's center the laser spot appears, making it difficult to determine changes in distance. At distances less than six inches, the laser spot is out of the camera's field-of-view, so no blob detection can occur.
Limitations
My LRF module (currently in conjunction with the LRF Image Viewer tool) is able to consistently and reliably detect multiple blobs and make a determination of which is the primary laser spot. The range finding mathematics work well with reasonable accuracy. However, like any sensor system, it's not suitable for use in all conditions and has its limitations:
Next Steps
With the image processing, blob detection, and range finding routines all working in my LRF Image Viewer program, the next (and final!) step is to port them to run directly on the LRF module's Propeller. This will allow completely standalone operation of the LRF module without the need for a PC host.
While I'm doing that, I'm curious about your thoughts so far. Having followed along on this development journey, what do you think of the design? Is it something you'd like to play with or use in a real project? Or, are the limitations too much of a concern?
All questions, comments, or complaints are welcome!
Until next time,
Joe
All I can say is wow! Yes, I remember Zoz from "Prototype This" and I am certain that his insights were amazing. Yes, I will look forward to playing with the final product. My own engineering degree went by the wayside when I could not muster freshman calc. (probably because I discovered booze). I did spend the next 40 years selling video hardware so I stayed close enough to technology to satisfy the itch. Your engineering diary on this project shows me my own weaknesses in trying to develop an idea. Will use this as a model for working on my own concepts in the future.
Enough of my ramblings. I do have one question. If you were to use a lenticular lens to create a vertical line, say 1/4-1/2" high, would that make it easier to differentiate the return from other light blobs? I was looking at what happened when I shined my cheap laser level against different backgrounds at a 20 foot distance. To me it seemed to stand out more than just a blob from another laser pointer of similar intensity. I mention the short length of line as I am sure you do not wish to have the light intensity being diluted by spreading it out further than necessary.
Jim
Any ideas?
Very cool! Thanks for your feedback. I don't normally keep detailed design diaries like this, but it's actually nice to sit down and really get it all out on "paper." It really does show how much effort is actually required in a typical design process. I'm glad to hear you'll use it as a model! Looking forward to what you come up with
One of the benefits of this OmniVision OVM7690 CameraCube module I'm using is that it doesn't require any external lenses or focusing mechanisms. It has it all integrated into the tiny little device. The trade-off is that it limits expansion, so adding any external lens would be difficult. With that said, I haven't tried using an external lenticular lens. You're right in that having a laser spot as a small vertical line would essentially make the blob bigger, so it may stand out even more within the frame, but the additional expense and hassle of adding an additional lens may not be worth it. The real challenge was dealing with reflections and other bright objects, and the background subtraction/image processing helps with that.
Take care,
Joe
Hi Lexxmac-
Unfortunately, I don't have enough prototype units to sell - the ones I have will be going to beta testers in the next few weeks. If you need something quickly, the best option would be to contact your OmniVision distributor and order the evaluation board from them for your particular camera. It's a carrier board that holds the camera module and associated circuitry and breaks out the connections onto a dual-row male header (like I used in my earlier prototype):
Have fun!
Joe
I'm looking forward to being able to play with one of these (either in testing or later when I can buy one). I think the limitations are acceptable. 6 inch to 6 feet is plenty of range for my desired usage on my robots, and having to do calibration seems like it won't be too bad. For use in navigating around my house with a small wheeled bot, I think it's going to be awesome.
Roy
Great! I can't wait to see how you use it!
I'm hoping the calibration will be handled during production, so you won't have to worry about it. I'm also thinking of making the calibration routine/command available for end users in case they need/want to re-calibrate.
Joe
thx
Cool! Thanks for following along
Interesting suggestion! I'll certainly consider this for future versions based on response to the unit, but don't want to implement such a drastic physical change this close to the "finish line."
My main concern would be to ensure that the pre-defined positions would stay exactly fixed during use, otherwise the user may have to re-calibrate too often to be useful. In my current configuration, the laser diode and camera are essentially locked into position by being soldered down to the PCB and glued. But, the layout for the two halves (laser and camera) is distinct, making it possible for someone to chop the board and re-adjust the camera-to-laser distance.
I'll certainly do my best to provide an easy field re-calibration process, however. I have some ideas on how I'd implement this and just need to do some experiments.
Take care,
Joe
PS - I am sad to say that I only ranked as a Geek at 17.15867%
Alternative - Could there be three lasers, individually calibrated at near, intermediate, and far? That way, one could roughly determine the distance from any or all three; and select the correct unit for maximum accuracy. I have a whole box of surplus laser pointers.....
Possibly. But, like providing multiple mounting brackets, I think it would be too much of a diversion from the original version that is (hopefully) almost done. I'd certainly like to see someone hack the normal LRF Module to do something like this, though, with surplus laser pointers!
Using three lasers would also increase the frame grab time by ~3x, since we'd need to grab the frame three times with each of the lasers on, then do the background subtraction and image processing on each. I don't know what the actual per-range time is yet, but I'm going to assume it's around 500 ms total in its current configuration.
It would be interesting to do some calculations with different camera-to-laser spacings to determine what minimum and maximum lengths are possible, and then leave it as an exercise to the user to modify it for anything outside of the production capability.
Joe
I'll be giving a talk at UPEW sometime on Saturday about the LRF and will have a unit there to demonstrate.
When I get back, I need to finish working on the calibration routines and prepare the units I have for a short beta testing round. I'll write back with more details and examples of the standalone operation as soon as I can!
Hope to see as many of you as possible this weekend,
Joe
Here's an overdue update on the state of the Laser Range Finder:
As briefly mentioned last month right before the Unofficial Propeller Expo West 2011, I finished porting all of the image processing, blob detection, and range finding routines onto the Propeller! This was a crucial step in allowing the Laser Range Finder to become a standalone module that can operate by itself without the need to be tethered to a PC host.
So, barring any unforeseen disasters, the Laser Range Finder is done!
Module Commands
The LRF is controlled through a simple serial communications interface (ASCII command/responses at TTL-level). Host baud rate is automatically detected upon LRF power-up, allowing easy interfacing to a variety of microcontrollers and computers. The auto-detection routines were borrowed from [URL=http://www.rayslogic.com/propeller/3rdPartyHardware/uOLED-96-Prop/RS232Driver.htm]Raymond "Rayman" Allen's RS232 interface for uOLED-96-Prop[/URL] and support standard baud rates from 300 to 115.2k.
Here's the list of available commands (a quick host-side usage sketch follows the basic command list below):
Basic commands
- R: Single range measurement (returns a 4-digit decimal value in millimeters)
- L: Repeated range measurement (any subsequent byte will stop the loop)
- E: Adjust camera for current lighting conditions
- S: Reset camera to initial settings
- V: Print version information
- H: Print list of available commands
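To give an idea of how simple the host side can be, here's a quick Python sketch using pySerial that requests a single reading with the R command. The port name, chosen baud rate (the LRF auto-detects whatever the host uses), and the exact framing of the reply are assumptions for the example, so adjust to match your setup.
[code]
import serial  # pySerial

# Illustrative host-side example; port name and reply parsing are assumptions.
with serial.Serial("COM3", 9600, timeout=2) as lrf:
    lrf.write(b"R")                      # request a single range measurement
    reply = lrf.read(32)                 # response contains a 4-digit value in mm
    digits = "".join(ch for ch in reply.decode("ascii", "ignore") if ch.isdigit())
    if digits:
        print("Distance:", int(digits[:4]), "mm")
[/code]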
Advanced commands
Demonstrations
I've put together a few simple projects to show off how the LRF can be used and integrated into projects. All of the code will be available on the Parallax website when the LRF is released.
The [post=990449]LRF Image Viewer[/post] can also be used for easy control of the LRF module and to take frame captures and see what the module is seeing:
I've posted a few videos on YouTube which explain the demonstrations in more detail:
Module Calibration
As discussed in an [post=994021]earlier post[/post], the final routine I needed to write for the LRF module was to handle the required calibration, which is necessary for each individual module to account for manufacturing and assembly tolerances.
The calibration routine requires the user to align the LRF module at a number of fixed distances. The LRF takes a measurement at each distance and calculates the SLOPE, INTERCEPT, and PFC_MIN values (the routine is loosely based on http://www.eng.umd.edu/~nsw/ench250/slope.htm). The values are then stored in an unused portion of the non-volatile boot Serial EEPROM. The LRF has a 64KB EEPROM, which leaves 32KB available for data storage after the Propeller uses the first 32KB for program code. This also means that the values will not get over-written when the LRF code is re-loaded into the EEPROM.
The SLOPE and INTERCEPT are used to convert the pixel offset to angle using a best-fit slope-intercept linear equation. The PFC_MIN value is used to set the maximum allowable distance of the LRF module, which is represented by a minimum pixels-from-center value (the number of pixels the primary blob is located away from the center of the camera frame).
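In case it helps to see how the three stored values fit together, here's a hypothetical Python snippet (the real code runs on the Propeller) showing PFC_MIN acting as a maximum-range gate before the slope-intercept conversion is applied.
[code]
import math

def pfc_to_distance(pfc, slope, intercept, pfc_min, h_mm=78.0):
    """Hypothetical illustration of how the stored calibration values are used."""
    if pfc < pfc_min:
        return None                       # blob too close to frame center: beyond max range
    angle = slope * pfc + intercept       # pixel offset -> angle (best-fit line)
    return h_mm / math.tan(angle)         # optical triangulation: D = h / tan(q)
[/code]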
A YouTube video demonstrating the calibration routine can be found here.
Beta Testing
On June 6, I kicked off a short beta testing period for the Laser Range Finder module. This is the first time I've had any sort of formal beta test for one of my hardware products, as they are normally developed and tested internally. Having been so focused on the low-level details of designing the product for so long and finally getting it all working, I thought it would be nice to get some extra sets of eyes on it to really exercise it in a variety of scenarios and get their honest opinions, both good and bad.
The beta testing group consists of a small handful of folks from Parallax and longtime members of the Parallax forums. I should be receiving all feedback by the end of June. I may need to make some minor adjustments to the firmware and/or hardware based on their comments, but as long as there aren't any major design flaws, the Laser Range Finder module should be going into production sometime in July and would be ready for sale through Parallax a few months after that.
Thanks for Reading!
It's been a pleasure writing this development diary over the past seven months. Thanks to all of you for reading along and providing comments during this adventure. Posting here really helped me to wrap my head around the various aspects of the project and I hope you've learned something along the way, too!
With the project essentially complete, I don't plan on having any more major updates to post here. However, I'll post when the module formally launches and, of course, will happily answer any questions you may have in the meantime.
Take care,
Joe
I'm interested in using a CMUcam3 with an Arduino (328 or Uno).
Since you have used a CMUcam2 with your project, you may be able to help me (with how to get images from a CMUcam).
Thanks in advance.
Alex