PDA

View Full Version : Following a Giant Line



crgwbr
04-10-2007, 08:25 PM
In a contest I'm planning to compete in next year, the robot must follow along a blacktop path. I attached a picture of part of the path below. Does anyone know of a good way to follow this path? I thought of something like a CMUcam or an AVRcam, but I wasn't sure if the contrast would be high enough. Has anyone had experience with something like this?

Thanks,
Craig

▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Programming today is a race between software engineers striving to build bigger and better idiot-proof programs, and the Universe trying to produce bigger and better idiots. So far, the Universe is winning.

Microsoft: "You've got questions. We've got dancing paper clips."

John Abshier
04-10-2007, 09:23 PM
You may want to check out the CMU Cam3. Its ARM7 gives you more processing power. The camera is the same as the CMU Cam2 and the AVRcam.

crgwbr
04-10-2007, 09:47 PM
Ooh, but a bit pricey at $240. What do you think about the CMUcam2+? It's slightly cheaper ($170 at Acroname Robotics).


Phil Pilgrim (PhiPi)
04-11-2007, 01:01 AM
Just hope for a cloudy day! The dappled sunlight through the tree branches is going to cause no end of trouble for any camera. I think the best approach would be to use a camera from which pixels can be extracted in the HSB (hue, saturation, brightness) color space. The blacktop will have a low saturation value compared to the surrounding bark mulch, vegetation and occasional traffic cone.
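In rough Python, the low-saturation test looks something like this (just a sketch of the idea; the 0.25 threshold is a guess, and a camera like the CMUcam wouldn't run Python):

```python
import colorsys

def is_blacktop(r, g, b, sat_thresh=0.25):
    """Classify a pixel as path if its saturation is low.

    Blacktop is nearly gray (low saturation); mulch, vegetation and
    traffic cones are more strongly colored. The threshold is only a
    made-up starting point.
    """
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return s < sat_thresh

# Dark gray asphalt pixel vs. orange traffic-cone pixel
print(is_blacktop(70, 72, 75))    # low saturation -> True (path)
print(is_blacktop(230, 90, 20))   # vivid orange -> False
```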

Good luck!
-Phil

crgwbr
04-11-2007, 01:10 AM
It's Pennsylvania, what day isn't cloudy? lol. Thanks for the tip; I'm in the process of reading the CMUcam2+ manual, and hopefully it will have the HSB extraction feature. One thought I had: would putting a polarized lens (like sunglasses) in front of the camera help deal with excess sunlight?

Thanks,
Craig


Ken Gracey
04-11-2007, 01:19 AM
How about differential GPS? If you're within 200 nautical miles of a station this could be a good solution if you can determine points on the entire path ahead of time.

http://en.wikipedia.org/wiki/Differential_GPS

Ken Gracey
Parallax, Inc.

Ken Gracey
04-11-2007, 01:25 AM
I know an easier way. . . maybe.

Yesterday the guys from RoboRealm came to Parallax. Their free software (it runs on a PC; is that allowed in your competition?) can identify the extent and location of the black path with no problem and send serial messages of your choice. You'd need an RF transmitter/receiver (our Embedded Blue devices) between the robot and PC, and to stay within range of the PC the whole time.

Take a look at what they offer, if your contest allows a remote PC.

Ken Gracey
Parallax, Inc.

crgwbr
04-11-2007, 01:25 AM
I'm only allowed to walk 25% of the course ahead of time, so that's no good. There is going to be GPS on board, though. The goal is to navigate to different GPS locations throughout the Penn State Abington college campus. So I will have the GPS coordinates of these points, just not the coordinates of every turn in the road (and yes, you must stay on the path for most of the course). I'll post the link to the contest below. Nothing is posted for the '08 contest yet, but it should be in several months.

http://www.ecsel.psu.edu/~avanzato/robots/contests/outdoor/contest05.htm

Thanks,
Craig


Ken Gracey said...
I know an easier way. . . maybe.

Yesterday the guys from RoboRealm came to Parallax. Their free software (it runs on a PC; is that allowed in your competition?) can identify the extent and location of the black path with no problem and send serial messages of your choice. You'd need an RF transmitter/receiver (our Embedded Blue devices) between the robot and PC, and to stay within range of the PC the whole time.

Take a look at what they offer, if your contest allows a remote PC.

Ken Gracey
Parallax, Inc.


Better yet, I'll just put a laptop on the robot; it's already 100+ pounds, so 20 more won't hurt.


Post Edited (crgwbr) : 4/10/2007 6:40:58 PM GMT

kelvin james
04-11-2007, 01:21 PM
Maybe an idea to consider: using the intensity of reflected light to guide the robot. Different colors and terrain reflect light differently. Lasers projected ahead and sensed on each side might work, along with a main photo sensor to adjust the readings for the amount of changing ambient light.

crgwbr
04-11-2007, 05:59 PM
That would probably work too. Last night I downloaded RoboRealm, and I have been experimenting with it. So far, it looks like that will be the software I use.

Thanks,
Craig


RoboRealm
04-12-2007, 01:02 AM
Craig,

We played around with your two images a bit and here is what we were able to come up with in terms of finding the route. Note that we used the floor finder module for both images with the same configuration.

The biggest problem will be dealing with shadows. You can see that if a shadow occupies the lower part of the image then it will be counted as part of the road (due to how the floor finder module works). But in the other image you can clearly see the shadow being eliminated from the detected route .. that's something we will have to work on eliminating. It is a form of the "fill highlight" option that we have in the floor finder module .. but instead is a "fill lowlight" type function. We'll see what we can do.

STeven.
http://www.roborealm.com/

crgwbr
04-12-2007, 02:05 AM
Thanks Steven,
From the functions I've used so far, RoboRealm has proved to be really great software. Could you post the list of objects you were using? For example, was it just the floor finder object, or did it have the normalize object too?

Thanks Again,
Craig

Post Edit: I just tried the floor finder, with a Normalize and a Raise to the Power of 2 before it, then a Center of Gravity after it. It works great.


Post Edited (crgwbr) : 4/11/2007 7:22:04 PM GMT

Phil Pilgrim (PhiPi)
04-12-2007, 05:18 AM
Here's a similar treatment, using the saturation data I mentioned above. This was done in Corel PhotoPaint, using an HSB separation and then thresholding the saturation channel. No other massaging was necessary. If you're using a color NTSC TV camera, the saturation data should be easy to separate out with hardware by means of a high-pass filter and peak detector.

-Phil

DiablodeMorte
04-12-2007, 07:45 AM
Wait, I guess I don't completely understand. If you are given GPS data of where to go, why not go on Google Maps and route the turns yourself? Or is there a chance there is a fork and only one way is allowed?

crgwbr
04-12-2007, 06:34 PM
If you noticed the traffic cones in the photo I posted, those are to prevent the robot from going down the wrong side of a fork. In other words, there is only one way to go through the course, and that way is unknown to you.


Dave Hein
04-14-2007, 04:56 AM
One attribute of the walking path is that it has little or no color. It consists mostly of shades of gray. I wrote a program to extract the chroma and set the gray areas to a value of 128. I found that the bright areas of the walking path have a bluish tint, so I also added a test for that.

I processed the two images with the program, and I then filled the area containing the 128's with yellow. The results are shown below. Note that the shadows do not have much effect on locating the path using this method.
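A Python sketch of the per-pixel test described above (the actual program is in C and is not shown here; the thresholds and the exact blue-tint rule below are only illustrative):

```python
def classify_pixel(r, g, b, chroma_thresh=20):
    """Mark a pixel as path if it is nearly gray (low chroma).

    "Chroma" here is just the spread between the largest and smallest
    channel; real YCrCb math differs, but the idea is the same.
    Bright bluish pixels get extra slack, mirroring the blue-tint
    test mentioned above (all numbers are guesses).
    """
    chroma = max(r, g, b) - min(r, g, b)
    if chroma < chroma_thresh:
        return (128, 128, 128)          # flagged as path
    if b >= g >= r and chroma < 2 * chroma_thresh and b > 180:
        return (128, 128, 128)          # bright blue-ish path pixel
    return (r, g, b)                    # left unchanged

print(classify_pixel(80, 82, 85))   # asphalt gray -> (128, 128, 128)
print(classify_pixel(40, 140, 40))  # grass green -> unchanged
```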

I would also use an ultrasonic range finder to determine if the path ahead is clear. You could also shine a laser in front of the robot and mount a camera a few feet to the side looking at the line on the path. If the line appears straight then there are no bumps or valleys in front of the robot. If the line deviates up and down then there are bumps ahead. If the line is broken in places then the bumps and valleys are large enough to hide part of the line.

You won't be able to use the imaging technique for the off-road portion of the contest. A combination of GPS, an ultrasonic rangefinder and the laser technique will probably work OK.

The image processing code that I used is attached below. Good luck on your project.

Dave Hein

Dave Hein
04-14-2007, 04:58 AM
The images didn't get attached to my previous message. Here are the images.

crgwbr
04-14-2007, 05:01 AM
Thanks Dave,
I don't understand C, so your code is Greek to me. Anyway, will the program do the processing in real time? Also, once it finds the yellow path, is there any way to find the center?

Thanks,
Craig


Dave Hein
04-14-2007, 05:09 AM
Sorry for the triple-post, but I just wanted to make it clear that the method I suggested is essentially the same as the one proposed by Phil Pilgrim. Hope you find the code useful.

Dave Hein

Dave Hein
04-14-2007, 05:19 AM
Craig,

You could do a center of mass calculation to locate the center of the path. Or you could compute the midpoint between the left and right edges.

Dave

Dave Hein
04-16-2007, 11:34 PM
crgwbr said...

Anyway, will the program do the processing in real time? Also, once it finds the yellow path, is there anyway to find the center?



Craig,

I modified my program to draw red lines on the sides and middle of the path. The pictures are attached below. Of course, all you really need is a single value that tells the robot how far to turn to the right or left. You would only need to process a few lines of the image that are looking a few feet ahead of the robot. An average value computed over 10 or 20 lines is probably sufficient. In theory you could get by with a single scan line, but this would give you a noisy result.

This technique should be able to run in real time if you process only a few lines per frame, and you capture a few frames per second.
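A toy Python version of that steering calculation (just the bookkeeping; a real program would run on captured frames, and the band of rows would sit a few feet ahead of the robot):

```python
def steering_offset(mask, rows):
    """Average horizontal path-center offset over a band of scan lines.

    `mask` is a 2-D list of 0/1 path flags (1 = path pixel); `rows`
    selects the scan lines to average over. Returns the mean offset
    of the path center from the image center, in pixels (negative =
    steer left, positive = steer right).
    """
    width = len(mask[0])
    offsets = []
    for y in rows:
        cols = [x for x in range(width) if mask[y][x]]
        if cols:  # skip lines where no path was detected
            center = (cols[0] + cols[-1]) / 2.0
            offsets.append(center - (width - 1) / 2.0)
    return sum(offsets) / len(offsets) if offsets else 0.0

# 10-pixel-wide toy mask, path occupying columns 6..9 on both lines
mask = [[1 if 6 <= x <= 9 else 0 for x in range(10)] for _ in range(2)]
print(steering_offset(mask, rows=[0, 1]))   # path right of center -> 3.0
```

Averaging over 10 or 20 rows, as suggested above, smooths out the noise a single scan line would give.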

Dave

crgwbr
04-17-2007, 12:19 AM
Thanks again, Dave,
Once again, I can't read C, so I really don't understand your code. I'm amazed, though; it seems to work very well for such a short program. Because I don't understand C, I'll probably stick with RoboRealm.

Once Again Thanks,
Craig


DiablodeMorte
04-18-2007, 06:06 AM
crgwbr, don't worry, I don't think his code is "finished" in the sense that it seems to be a "sample" and not a finished product. I looked at it; very interesting, but it contains no way to open the JPG or actually derive any of the specified values, which led me to believe that it wasn't complete.

Dave Hein
04-18-2007, 10:21 AM
The code I posted was only a piece of the whole program. It only shows the processing for a single pixel. This code sits within a double loop that scans the rows and columns of the image. My intent was to describe the algorithm, and not to provide a finished product. I felt that the details of how the image was read and written were not important.

By the way, I used PaintShop Pro to convert from JPG to BMP and back to JPG. I also used PaintShop Pro to do the fill with yellow. My program reads and writes BMP images. It's easier than decoding and encoding JPG images, which I have done in the past. My goal was to write a quick and dirty program to demonstrate the algorithm.

Dave

Paul Baker
04-18-2007, 11:52 AM
Craig,

The code Dave has written is pretty straightforward, and written in such a way that it is quite similar to PBASIC. The only thing of any complexity is his formulas, but that's math, not C. What about his code do you find "Greek"?

▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Paul Baker (mailto:pbaker@parallax.com)
Propeller Applications Engineer
Parallax, Inc. (http://www.parallax.com)

Post Edited (Paul Baker (Parallax)) : 4/18/2007 5:00:42 AM GMT

crgwbr
04-18-2007, 06:17 PM
Thanks guys, I understand it a little more now. Apart from the math being complex, I couldn't find where he loaded or captured the image. I figured that it must be there, and I was just not understanding it correctly. Now that I know it's not there, I can read most of it fine.


RoboRealm
04-18-2007, 10:42 PM
crgwbr,

Do you have any more images from the course? We'd like to work up a tutorial around all the great stuff that has been discussed here and offer more insight into the problems you will have with this task. The color saturation and YCrCb processing techniques are very simple and will work great as long as the path remains black/grey. We can post the configuration for all the techniques discussed in this forum, which will allow everyone to experiment more interactively with those algorithms. Note that for this kind of contest you will need many more example images. Specifically, look for areas along the course where the road color/intensity changes (i.e. between old and new road), areas with gravel or dirt, areas of sharp turns, areas with potholes or broken asphalt, areas with any colored markings like arrows or signs on the road, areas with obstacles like a black trash can, etc. Also take a couple of images from non-road surfaces.

When we did a field trip a couple years ago for research for the first Darpa Grand Challenge contest, we spent a day out in the California desert and took about 300 photos. After selecting the 'hard' images we ended up with about 5 images. Unfortunately, we are *still* working on those images from time to time even today ... so yes, this can be a hard problem, but perhaps we can share what we've learned so far.

Dave,

We were also a little confused by your technique until you mentioned that you used PSP to fill in the 128 grey area with yellow. You made a manual tweak to the image that should probably be automated. I assume you simply used the flood fill by clicking in the lower center of the image within PSP? The floor area to the left of the orange cones is also 128 but was not highlighted, due to the selection of the larger/center area for the flood fill. Automating this selection would require some simplistic blob processing where either the largest blob or the one closest to the image center is selected. It wasn't clear from your code that this action is needed, so we thought we'd try to suggest how it could be done. This is just our guess ... perhaps you can explain more about your procedure.

Thanks,
STeven.
www.roborealm.com

crgwbr
04-19-2007, 02:25 AM
I'll post as many pictures as I can. The website of the contest (with all the photos) is located here: www.ecsel.psu.edu/~avanzato/robots/contests/outdoor/index.htm (http://www.ecsel.psu.edu/~avanzato/robots/contests/outdoor/index.htm)


crgwbr
04-19-2007, 02:27 AM
Here's some more photos.


crgwbr
04-19-2007, 02:29 AM
Here's the last of the photos.

Thanks for all the help,
Craig


Paul Baker
04-19-2007, 02:43 AM
The bridge in 6 & 7 will be an interesting part of the navigation software, seeing as the non-path portion of 9 is similar in color.


crgwbr
04-19-2007, 02:46 AM
Hopefully the textures prove to be different enough to tell the difference.


Dave Hein
04-19-2007, 05:40 AM
I wrote a fill program so I wouldn't need to use PSP to do the fill. This way I could write a script to process all of the images. I assume that part of the path is in the bottom center part of the image, so I start the fill at that point.

Some of these images are very challenging. Images 6 and 7 failed because of tinted lines across the bridge, and not because of the general color of the bridge. The algorithm would need to be smarter to understand that it can cross over thin lines on the path.

Other sources of problems are dark shadows across the path and concrete walls adjacent to the path. A number of the images also contain tinted patches. In a couple of images the white balance was far enough off that the entire path had a green or blue tint.

I tried edge filters on a few of the images, and the path is fairly smooth compared to the surrounding area in some pictures. In other pictures the path has as much texture as the surrounding area.

I think the ultimate solution would be to use a combination of imaging, GPS, a laser line and an ultrasonic range finder, as I mentioned before.

Dave

Paul Baker
04-19-2007, 05:57 AM
It would be possible via several methods. A very powerful one would be the a priori knowledge that the robot is on a path, that the path must continue from the present point, and a weighted average of the likelihood that a particular direction is the path. The picture I see as potentially the most problematic is this one (http://www.ecsel.psu.edu/~avanzato/robots/contests/outdoor/campus%20path%20gallery/P5063287.JPG), assuming the robot is to make a right turn onto the bridge. There would be orange cones blocking the other direction, so a test for orange would weight that path as the least likely direction. If you can get the algorithm to weight the bridge higher than the straight-ahead path, then you should be golden. Perhaps one means of accomplishing this is to do an edge detect with a threshold set not to trigger on the path, then compute the edge population of the image. A high edge count per unit area means mulch and not the path, whereas the bridge would have edges, but fewer than the mulch, and it would be chosen as the correct path.
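A quick Python sketch of the edge-population idea (the gradient threshold, region size and example values are all made up):

```python
def edge_density(gray, x0, y0, size, thresh=40):
    """Fraction of pixels in a square region whose horizontal
    gradient exceeds `thresh` -- a crude stand-in for an edge count.

    Per the weighting idea above: mulch should score high, a bridge
    deck lower, smooth asphalt lowest. `gray` is a 2-D list of 0-255
    intensities.
    """
    edges = 0
    for y in range(y0, y0 + size):
        for x in range(x0, x0 + size - 1):
            if abs(gray[y][x + 1] - gray[y][x]) > thresh:
                edges += 1
    return edges / float(size * (size - 1))

# Toy 4x4 regions: uniform "asphalt" vs. high-contrast "mulch"
asphalt = [[90] * 4 for _ in range(4)]
mulch = [[0 if (x + y) % 2 else 255 for x in range(4)] for y in range(4)]
print(edge_density(asphalt, 0, 0, 4))  # 0.0 -> likely path
print(edge_density(mulch, 0, 0, 4))    # 1.0 -> likely mulch
```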


DiablodeMorte
04-19-2007, 08:29 AM
Hey Dave, would you mind posting your source/compiled exes? I'd love to see them and play with them.

Dave Hein
04-19-2007, 10:43 PM
DiablodeMorte said...
Hey Dave, would you mind posting your source/compiled exes? I'd love to see them and play with them.


Here are the source files. There are three programs -- chroma, fill and center. chroma generates the chroma-only image, where the areas detected as the path are set to (128, 128, 128). The fill program detects values of (128, 128, 128) in the bottom 20 lines of the image, and around the center. It then fills with a yellow color starting at these pixels. The center program draws the red lines at the left, right and center of the yellow area. All of these programs use bmpsubs.c and bmpsubs.h.

The source code is far from a finished product. The files contain minimal comments. The fill program is somewhat of a kludge. It scans the image 8 times in different directions to do the fill. It will not do a complete fill for complex patterns, such as areas that look like mazes or spirals. The BMP read and write routines will only work with 24-bit BMP files.

This code is intended only to demonstrate the algorithms. Let me know if you have any questions.
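For anyone who would rather read it in Python, here is a queue-based version of the fill step (note this is not the posted C code, which instead makes 8 directional scans over the image; it is an alternative sketch of the same operation):

```python
from collections import deque

def fill_from_seed(img, seed, match=(128, 128, 128), fill=(255, 255, 0)):
    """Flood-fill the 4-connected region of `match`-valued pixels
    around `seed` with `fill` (yellow), modifying `img` in place.

    `img` is a 2-D list of (r, g, b) tuples; `seed` is (x, y).
    """
    h, w = len(img), len(img[0])
    queue = deque([seed])
    while queue:
        x, y = queue.popleft()
        if 0 <= x < w and 0 <= y < h and img[y][x] == match:
            img[y][x] = fill
            queue.extend([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])

# 3x3 toy image: the center column is "path" (128s), the rest background
g, p = (0, 0, 0), (128, 128, 128)
img = [[g, p, g], [g, p, g], [g, p, g]]
fill_from_seed(img, seed=(1, 2))   # seed at bottom-center, as in the fill program
print(img[0][1])   # (255, 255, 0) -- the whole column was filled
```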

Dave

Ed Kenna
05-12-2007, 11:31 PM
Hello,
I have competed in this competition for the last 3 years. Some comments on the competition, and then on the problem. First, I tried GPS the 1st year. The GPS reception is awful. If your robot depends on GPS, you might not even get off the starting line. Also, the waypoints are NOT at each turn. The 1st waypoint is at the first turn; after that, the path leads around a pond, and the direct straight path to the next waypoint is through the pond. That is why I gave up on GPS. Also, any PC needs to be on the robot; no remote PCs allowed. Believe it or not, Paul Boxmeyer won it the 1st 2 times without a PC, just using several PICs.

Now for comments on the pictures you are using. These pictures were taken with a high resolution camera at the height of a person. Pictures from my webcam are only 320 by 240. Placement of the camera is a question. Mine is on a 5-foot pole at the front of my bot, pointing down so that the bottom of the picture is about 1 foot in front of the bot and the top is about 5 feet in front of the bot. The left and right sides are about 1 foot to the left and right of the bot. I have thought about changing to a more forward-facing view from the top of my bot, which is about 1.5 feet tall.


One major problem I have noticed is that the camera moves when the robot is moving, which causes the picture to blur. My plan was to take an image and use the area just in front of the robot as the Match Value (either color or texture). However, the images taken while moving will not match on color because of the blurring, which turns things bluish.

I have my robot capture the images and the filtered image to disk so I can review them. I can also run my code from these saved images. My current code tries to do color matching using RGB color distance to the Match Value, or hue distance to an HSI hue. However, this is not working very well.
I am trying this on my driveway. In some frames only the driveway is visible. In others I have the edge of the grass. However, the grass matches the road and sections of the road do not match. It's a real mess.
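For reference, the RGB-distance test described above amounts to something like this Python sketch (not my actual code; the reference color and threshold are arbitrary examples):

```python
def rgb_distance(c1, c2):
    """Euclidean distance between two (r, g, b) colors."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

def matches_road(pixel, road_sample, max_dist=60):
    """True if `pixel` is within `max_dist` of the reference road
    color sampled just in front of the robot. As noted above, blur
    and exposure shifts between frames break this kind of matching.
    """
    return rgb_distance(pixel, road_sample) < max_dist

road = (85, 87, 92)                       # sampled driveway color (example)
print(matches_road((95, 95, 100), road))  # nearby gray -> True
print(matches_road((60, 150, 60), road))  # grass green -> False
```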

I could send some images if anyone is interested. I can also talk about the contest if you have questions.

Ed Kenna

crgwbr
05-13-2007, 04:26 AM
Thanks Ed,

I would be interested in hearing about your experiences with this contest. I never thought of the blur problem, but it makes sense. Hope to see you there in '08.

Craig

▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
I would love to change the world, but they won't give me the source code

People say that if you play Microsoft CD's backwards, you hear satanic things, but that's nothing, because if you play them forwards, they install Windows.

I spent a minute looking at my own code by accident. I was thinking "What the heck is this guy doing?"

Ed Kenna
05-14-2007, 02:11 AM
My first 2 times I just was using a MAVRIC II board to do all the processing.

The first time, I wrote a GPS parser for it. It would calculate a new heading and distance to the waypoint. However, my GPS would slowly drift. I mean, in a matter of seconds the distance would vary by 30 feet even though the robot was not moving. The 2nd problem was that on the contest day I could not get any GPS readings. Therefore my bot did not get off the starting line.

The 2nd time I abandoned GPS and just used a compass and wheel encoders. At least I got off the starting line; that time I came in 2nd. That is because I proved I could stop when someone was in front, and I provided entertainment, which got me 20 points.

This past time I added a 2nd level for my laptop and tried using vision. I had real problems with my sensors. My I2C bus would go out all the time, which meant my sonars would stop working. When that happened, my code would think there was a person and stop. I found out my sonars (SRF08) were too close to the ground. This was probably responsible for some problems I had the first 2 times; however, I was not able to realize that was the problem until I did full sensor data capture and realtime display, so I could see there was nothing in front of the robot even though the robot was saying there was an obstacle 2 feet in front of it. So I have moved the sonars up to the 2nd level.

You know about the requirement to carry 1 gallon of water, right? Well, this past year most of the time I worked without the gallon, and when I put it on, the robot became front-heavy, i.e. when it stops it could tip forward. I have moved my laptop further back to compensate.

The requirement to stop when someone is on the path in front of you is a real pain. You need some way to tell it is a person, and not that you have turned sideways and are facing a rock that is off the path. My friend Paul and I have been using a timing thing, and I ask the person to get out of the way. This can ruin a good navigation algorithm.

The guy who won it this time used Canny edge detection. I think he got to 4 of the waypoints just by following the path.

Ed

crgwbr
05-14-2007, 02:39 AM
Yes, I did know about the payload requirement. However, on a bot the size of mine (see pictures on the first page), 7 pounds of water shouldn't make a difference. My motor controllers are coming in on Wednesday, so that should help things move along from simulation to real movement.
Right now a main concern of mine is securing a laptop to use on board. As I am 15, and getting paid only $6 per hour, it is rather tough to find a laptop that will run RoboRealm (2.0+ GHz) and is in my budget.
I shouldn't have a problem with my sonar; they are mounted roughly 3 feet off the ground on a pan/tilt head. This should be plenty of height to deal with the 8 degree beam spread.
I also did some experimenting with Canny edge detection; however, I found it to be rather "shaky." Any idea how he overcame this problem?

Thanks,
Craig


Ed Kenna
05-16-2007, 09:44 AM
You do not need a 2.0 GHz laptop; mine is only 1.5 GHz and I can do the processing in about 10 ms, and that is in debug mode. Compiled code should be even faster. I would think 1.0 GHz could work. Tomorrow I will try to post some of my raw images from my last run to see what everyone thinks about them.

Ed Kenna
05-16-2007, 11:45 PM
Here are actual images I am capturing during one of my runs. I am interested in suggestions on how to handle all 7 images consistently (through roborealm or otherwise.)

Raw1.bmp is when I start out fully on the driveway. Note the diagonal shadow and the light gradient, brightest in the center of the image, even though I am fully on blacktop.

Raw2.bmp shows blacktop with a shadow from the roof of my house and white bricks that line the driveway. My robot actually veered away from the bricks; however, I think this was because the sonar picked them up, not vision.

Raw3.bmp and raw4.bmp are one after the other. However, the camera adjusted something to reduce the light between raw3 and raw4. This would cause havoc with color matching.

Raw5.bmp has a shadow covering half the image. Since the robot is still on blacktop it should continue straight and consider the shadow still part of the driveway.

Two more images will be in the next post

Ed Kenna
05-16-2007, 11:54 PM
Real images continued

Raw6.bmp shows an edge situation. However, there is also a very bright spot (again in the center of the image) that is still part of the road but won't match. Note the brownish organic material and grass in the upper right corner. Also note the tannish-colored organic material in the very lower left corner. This is seed pods that dropped off the tree overhead; they are all over the driveway. Again, this area would be a non-match.

Raw7.bmp shows an edge situation. Again there is a bright spot on part of the road, this time to the right of center of the image. On the far right is the organic material and grass.

So these are representative images from my bot I would be interested in hearing how to deal with them.

Dave Hein
05-17-2007, 03:03 AM
Ed, I think your camera needs a wider angle of view. It would be very hard for a human to follow the path using these images.

I did a little more work on my programs last month, but I didn't post the results. The program assumes that it is already on the path, and it computes the average chroma values of the path in a 300x150 area at the bottom of the image. One assumption I made was that the color of the path was independent of the brightness. This was an incorrect assumption. So the program now computes the average chroma values for 8 different ranges of luminance.

I also added code to detect the cones, and then widen them so they create a solid barrier. Finally, I computed the center of the path in an area of 50 lines just beyond the 300x150 training area. The results are much better with these changes. Five of the images are shown below. The cyan box is the training area. The magenta box is the area of the path that is used to compute the center value. The black vertical line in the magenta box is the center of the image, and the blue line is the center of the path.
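A Python sketch of the luminance-binned training step described above (the real program is in C; the channel-spread "chroma" and the sample values are only illustrative):

```python
def train_chroma_by_luma(pixels, bins=8):
    """Average chroma (channel spread) of training pixels, grouped
    into `bins` luminance ranges.

    `pixels` are (r, g, b) tuples from the training box at the bottom
    of the frame. Returns a list of per-bin average chroma values
    (None where a bin saw no pixels).
    """
    sums = [0.0] * bins
    counts = [0] * bins
    for r, g, b in pixels:
        luma = (r + g + b) // 3                  # 0..255 brightness
        bin_i = min(luma * bins // 256, bins - 1)
        sums[bin_i] += max(r, g, b) - min(r, g, b)
        counts[bin_i] += 1
    return [s / c if c else None for s, c in zip(sums, counts)]

# Dark and bright asphalt samples land in different luminance bins
samples = [(60, 62, 66), (61, 63, 65), (180, 184, 195), (178, 183, 194)]
profile = train_chroma_by_luma(samples)
print(profile)
```

At classification time, a pixel would be compared against the average chroma for its own luminance bin, rather than one global value.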

Dave

Dave Hein
05-17-2007, 08:56 PM
Ed,

I processed Raw6 and Raw7 with my programs, and the results weren't as good as I hoped. The images are shown below. I scaled them up to 640x480 resolution so I wouldn't have to change my code. I also tried changing the color saturation to see if that made a difference.

As I mentioned before, I think it would be better to look further ahead, and to have a wider field of view. It would be good to see both edges of the path. I don't think there are any problems using 320x240 resolution, and it's OK if the image is blurred.

Depending on the speed of the robot, you probably only need to process one frame per second. You would need to look ahead to where the robot will be in a few seconds.

I think your current method of looking almost straight down has a few problems. It has a small field of view, so the robot won't know where the center of the path is. It will only know where the nearest edge is. This would be like a person trying to walk on the path while looking straight down and wearing blinders.

Another problem is the bright spots in the picture. I suspect they are due to the reflection of the sun off the road. The road is not a perfect reflector, so the brightness and the color of the reflection is different at different angles. I suspect this is why the chroma method doesn't work well on these images. By looking further ahead, the sun angle will be about the same throughout the picture.

Dave

Paul Baker
05-18-2007, 12:48 AM
Yeah, having a robot that wasn't shoegazing would help, but I think his pictures illustrate a flaw in your algorithm, Dave. You shouldn't assume that both edges of the path are in the frame; there should be some test to see whether the frame contains only one edge of the path.

▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Paul Baker (mailto:pbaker@parallax.com)
Propeller Applications Engineer
Parallax, Inc. (http://www.parallax.com)

Dave Hein
05-18-2007, 02:55 AM
Paul,

The positioning part of the algorithm would work to a certain extent if the detection part worked OK. The robot should move away from the edge of the road until it no longer sees the edge. At that point it should just continue going straight. Eventually, it will see either the left or right edge again, and it will do another course correction. However, if the robot can see both edges at the same time it can try to stay in the middle of the path. This will help to keep it from going off the edge of the path in cases where the edge is not well defined.

The problem with the color algorithm on Ed's pictures is that the color of the road changes. The thresholds could be increased, but this might cause it to detect the grass area as road. As I mentioned before, I think the bright spots in Ed's pictures are the reflection of the Sun on the road. If the camera were lower and looking more toward the front, it wouldn't have this problem.

I think the camera's view should look something like the picture shown below. The top part of the image should see both sides of the path. The bottom part doesn't need to see either side. The information in the bottom part of the image can be used to "train" the detection algorithm to find the path in the top part of the image. The position of the center of the path in the image would then be used to steer the robot.

Dave

Ed Kenna
05-18-2007, 08:46 AM
Okay, I think you both (Dave and Paul) are beginning to see my problems. In my current configuration, in order for all of the road to match, the thresholds need to be so high that they also match the grass. I agree with Dave that the variations (bright spots) are reflections of the sun. I am going to try moving the view forward to see if this helps. However, I do have to point out that the webcam is only capable of 320 by 240, which will only cover so much. So it is highly probable that both sides will not be visible at the same time. That is OK, because I intend to have a compass that can help keep the robot moving straight.

Having the path change color is something that does happen over the 1/4 mile course. Somewhere around waypoint 2 there is white mixed in. I do not know how I will deal with this but I have to get that far first.

Dave,
I decided to try to implement your code to see what I get on my images. I have questions.

Looking at chroma.c
you first convert rgb to ycrcb by doing
luma = (213*red + 715*green + 72*blue + 500) / 1000;
Cr = red - luma;
Cb = blue - luma;

The formula I have see elsewhere is
double r = (double)rgb.Red / 255;
double g = (double)rgb.Green / 255;
double b = (double)rgb.Blue / 255;

ycbcr.Y = 0.2989 * r + 0.5866 * g + 0.1145 * b;
ycbcr.Cb = -0.1687 * r - 0.3313 * g + 0.5000 * b;
ycbcr.Cr = 0.5000 * r - 0.4184 * g - 0.0816 * b;

I assume your code is trying to avoid doubles and stick to integer math; however, I do not get anywhere near the same values.
Take r=126, g=132, b=124.
Your way I get luma=130, cr=-4, cb=-6.
My way I get y=.507, cr=-.009, cb=-.012.

Next you take those values and convert them back into RGB before doing the compare. Why convert to YCrCb and back again?
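For reference, the two formulas differ both in scale (0..255 integers versus 0..1 doubles) and in the chroma definition: the posted code uses plain differences from luma rather than the weighted CCIR 601 chroma terms, so the values will not agree even after rescaling. A sketch of the integer version using the Rec. 601 luma weights (the corrected coefficients discussed later in the thread):

```c
#include <assert.h>

/* Integer Rec. 601 luma, scaled by 1000 with rounding. */
int luma601(int r, int g, int b)
{
    return (299 * r + 587 * g + 114 * b + 500) / 1000;
}

/* Simple-difference chroma, as in the posted chroma.c: these are NOT
   the weighted CCIR 601 Cb/Cr terms, just (component - luma). */
int cr_diff(int r, int g, int b) { return r - luma601(r, g, b); }
int cb_diff(int r, int g, int b) { return b - luma601(r, g, b); }
```

With Ed's example pixel r=126, g=132, b=124, these give luma 129, Cr -3, Cb -5, slightly different from the 130/-4/-6 that the original (buggy) coefficients produce.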

Dave Hein
05-18-2007, 09:53 PM
Ed,

I think you found a bug in my code. To be honest, I don't know exactly where I found the coefficients for the luma computation. Your coefficients match the CCIR 601 spec, which is what I intended to use. So my code should have been:

luma = (299*red + 587*green + 114*blue + 500) / 1000;

I use fixed-point math because it is usually more efficient than floating point on most processors. However, the division by 1000 is not efficient on most processors either. If I wanted to optimize this code I would have scaled the coefficients by 1024 instead of 1000 and done a shift right by 10 instead of a divide by 1000.
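That shift-based variant could look like this. The 306/601/117 weights are one possible rounding of the Rec. 601 coefficients to a 1024 scale (they sum to exactly 1024); they are an assumption, not taken from the posted code.

```c
#include <assert.h>

/* Rec. 601 luma using a power-of-two scale: coefficients scaled by
   1024 (0.299*1024 ~ 306, 0.587*1024 ~ 601, 0.114*1024 ~ 117), a
   rounding term of 512, and a right shift instead of a divide. */
int luma_shift(int r, int g, int b)
{
    return (306 * r + 601 * g + 117 * b + 512) >> 10;
}
```

On Ed's example pixel this agrees with the divide-by-1000 version (both give 129 for r=126, g=132, b=124), while replacing the division with a single shift.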

I compute Cr and Cb as simple differences of the red and blue components with the luma. The color space is somewhat arbitrary as long as it's reasonable. A straight difference is simple to compute.

Then later on in the code I have the following two lines:

red = ((8*64*Cr)/luma) + 128;
blue = ((8*64*Cb)/luma) + 128;

I am attempting to normalize the chroma components to make them independent of the luma. I store the values in the red and blue components to create a false-color image. I compute a value for the green component that will give a luma value of 128. This is mostly for convenience, so that I can view the resulting image and also pass it on to other programs.

I did some measurements with my Logitech QuickCam, and I determined that it has a horizontal field of view of about 40 degrees. Since it's a 4:3 aspect ratio camera, the vertical field of view is 30 degrees. The contest website says that the path is about 7 to 8 feet wide. To ensure that I see both edges of the path, I would want to see about 16 feet across at the top of the image. At 40 degrees, this is about 22 feet away. If the camera is mounted 4 feet high it would need to be looking down at an angle of 25.5 degrees from horizontal. The bottom line of the image would be about 5 feet away.

I have attached the latest version of chroma.c. This version includes the fix to the luma computation. This version also computes average values for the Cr and Cb components based on a training area at the bottom of the image. It computes the average values for 8 different luma ranges. This makes the program more robust, and it can handle lighting changes, white balance changes and changes in the path color.

By the way, I changed the scaling of the chroma components. They are now computed as follows:

red = ((4*64*Cr)/luma) + 128;
blue = ((4*64*Cb)/luma) + 128;

Dave

Ed Kenna
05-19-2007, 02:55 AM
OK, good explanation of what you are trying to do. I will not be able to try repositioning the camera until late Sunday or Monday, as it is supposed to storm all weekend. I like your adaptive strategy for trying to detect road in all kinds of lighting. However, it assumes that the training area will always contain the path. Suppose it is doing a good job of staying in the center of the path, but a person walks in front of the robot. It would then be tracking the person's colors, not the road's (or some mix of person and road colors). Another problem I see is what happens when the path takes a sharp turn, i.e., the first waypoint is at a turn. At that point you have orange cones on the left blocking an intersecting path coming in from the left. The robot needs to turn right at an almost-T intersection, where it needs to react about 4 feet from the end of the straight, and in your configuration it can only see objects between 5 and 22 feet out. Maybe it will have already favored the right side before it gets to the end of the straight. I guess this will have to be a wait-and-see thing.

Robert Kubichek
05-19-2007, 03:54 AM
Use sound for people detection, and for detection of cones/trees, etc.
Sort of a backup to the visual detector.

Bob

Ed Kenna
05-30-2007, 10:05 AM
OK, I have done some more work. I have raised the angle on the camera.
I have implemented your code. Some comments on your code: your fill program looks at rows 0 to 20. Well, for me row 0 is the farthest from the camera, so I needed to go from 240 to 220. Also, your code would not have worked if it did not find a 128,128,128 pixel on the first row, because of if (row != 20) break;

Also, in your adaptive version of chroma you calculate a range by range = luma >> 5; and put the red and blue values and a count into an array. Then you try to make sure the zero elements in the array are filled by swapping elements down where count > 50. However, this ignores the fact that the red and blue values you later look up are still indexed by range (i.e., luma >> 5) when doing the actual threshold. This totally throws things off.

I completely ignored the code that moves the values around and just used:

if (count[range] > 50)
{
    if (red < redave[range] + RED_RANGE &&
        red > redave[range] - RED_RANGE &&
        blue < blueave[range] + BLUE_RANGE &&
        blue > blueave[range] - BLUE_RANGE)
    {
        red = blue = green = 128;
    }
    else
    {
        green = (1000 * 128 - 299 * red - 114 * blue + 293) / 587;
        if (green < 0) green = 0;
        if (green > 255) green = 255;
    }
}

Ed Kenna
05-30-2007, 10:12 AM
Here are some new photos
I have the darker one that I am trying to use to work on the fill/centering code, and one where the sun was very bright and everything is bleached out. Another one again has a bright spot caused by the sun, even though I am now looking outward. And last, I have one filled. As you can see, it is a mess.

crgwbr
05-31-2007, 07:29 PM
Sorry I haven't posted in a while; I've been quite busy. Anyway, I've started work on a program written in Python that uses Dave's algorithm. So far, I've gotten it to capture a frame from a webcam. To save time, it will only process lines 115-125. That reduces its workload by 73,280 pixels. Anyway, I'll keep you updated.

Thanks for all the help,
Craig

▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
I would love to change the world, but they won't give me the source code

People say that if you play Microsoft CD's backwards, you hear satanic things, but that's nothing, because if you play them forwards, they install Windows.

I spent a minute looking at my own code by accident. I was thinking "What the heck is this guy doing?"

Dave Hein
06-01-2007, 12:52 AM
Ed Kenna said...

Your fill program looks at rows 0 to 20. Well, for me row 0 is the farthest from the camera, so I needed to go from 240 to 220. Also, your code would not have worked if it did not find a 128,128,128 pixel on the first row, because of if (row != 20) break;

Also, in your adaptive version of chroma you calculate a range by range = luma >> 5;
and put the red and blue values and a count into an array. Then you try to make sure the zero elements in the array are filled by swapping elements down where count > 50. However, this ignores the fact that the red and blue values you later look up are still indexed by range (i.e., luma >> 5) when doing the actual threshold. This totally throws things off.



Ed,

I found the error in fill.c a few weeks ago, but I never posted the corrected version. It is attached below. Also, row 0 in a BMP image is at the bottom of the image. I think this is a carry-over from the way the original CGA displays were addressed. In almost any other case, row 0 is the top of the image.

I think my logic is correct concerning the range, which is the 3 most significant bits of the luma. It's true that I am ignoring luma values less than 16 and greater than 239, so ranges 0 and 7 are ignoring half of the possible values. After I accumulate the stats I want to ensure that each range has valid values. I set a threshold of 50 counts because I think that anything less would not generate a representative average for that range. Any range that has less than 50 counts will use the values from the previous range. That's why I first make sure that the zero range has valid values.

I believe that values from neighboring ranges can be used when there is not enough data for a range. The chroma components were normalized by the luma, so they should be somewhat independent of the range (or luma) value. Of course, the reason that I use 8 ranges is because the chroma is not entirely independent of the luma.

I suspect that your images have an offset in them. That is, black areas are not represented by the value 0, but by some higher value, such as 16 or 32. It might be useful to determine the black value by putting some black construction paper in the view of the camera and measuring the value. Instead of computing Cr/Y the computation should be Cr/(Y-Y0), where Y0 is the black value. I'll experiment with this to see if there's an improvement.
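The black-offset idea could be sketched like this. The Y0 parameter and the guard against small denominators are assumptions; the 4*64 scale matches the posted chroma.c, and Y0 = 0 reduces to the original computation.

```c
#include <assert.h>

/* Normalize a chroma component by (luma - Y0) instead of luma, where
   Y0 is the camera's measured black level. The result is biased to
   128 so it can be stored in an 8-bit image channel. */
int norm_chroma(int c, int luma, int y0)
{
    int denom = luma - y0;
    if (denom < 1) denom = 1;       /* guard against divide-by-zero */
    return ((4 * 64 * c) / denom) + 128;
}
```

With a nonzero black level the same chroma difference maps to a larger normalized value, which is the intended correction for cameras whose black areas do not read as 0.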

I noticed that your images have a gray border around them. This was causing some problems with the fill program. I modified the fill program to ignore the 3 pixels/rows closest to the edge of the image.

I ran your three images through my programs. Two of the images came out fairly well. The bright image did not work at all. My results are attached below.

There seems to be a problem in your processed image. The filled area looks reasonable, but I don't understand where all of the red pixels came from. Were these generated when you tried to locate the left and right edges of the path?

Dave

Dave Hein
06-01-2007, 02:05 AM
I tried changing the offset and gain of raw12, but it didn't improve the detection. It actually made it worse. One thing I noticed is that the chroma image is fairly noisy. I tried running a 5x5 averaging filter over the image, and it seems to help a little bit. The filtered image and the processed image are attached below. Since I used a 5x5 filter this image could easily be reduced to 160x120 resolution without any further loss of information. You might even be able to go to a lower resolution.

Dave

crgwbr
06-05-2007, 08:22 PM
A while ago, I think someone asked to see a video of my entry (for the Penn State contest). I have one posted on my blog at this address:

Blog: progelectromech.blogspot.com/ (http://progelectromech.blogspot.com/)
Post: progelectromech.blogspot.com/2007/06/finally-got-video-uploaded.html (http://progelectromech.blogspot.com/2007/06/finally-got-video-uploaded.html)


Post Edited (crgwbr) : 6/5/2007 8:14:20 PM GMT

Ed Kenna
06-07-2007, 09:39 AM
Dave said
>I think my logic is correct concerning the range, which is the 3 most significant bits of the luma. It's true that I >am ignoring luma values less than 16 and greater than 239, so ranges 0 and 7 are ignoring half of the possible >values. After I accumulate the stats I want to ensure that each range has valid values. I set a threshold of 50 >counts because I think that anything less would not generate a representative average for that range. Any >range that has less than 50 counts will use the values from the previous range. That's why I first make sure >that the zero range has valid values.

The logic that moved the avgred and avgblue values around was flawed. I agree with the need to have enough sample points.

Dave said
>I suspect that your images have an offset in them. That is, black areas are not represented by the value 0, >but by some higher value, such as 16 or 32. It might be useful to determine the black value by putting some >black construction paper in the view of the camera and measuring the value. Instead of computing Cr/Y the >computation should be Cr/(Y-Y0), where Y0 is the black value. I'll experiment with this to see if there's an >improvement.
I did not manipulate the raw images. However, the webcam was set to auto-adjust, so it may be doing something. The colors in the images will vary based on how much sunlight there is. It seems to do better when the sun is behind the clouds. I suspect that black construction paper would look drastically different in varied light. The varied light found in the outdoor environment is a major obstacle.

Dave said
>I noticed that your images have a gray border around them. This was causing some problems with the fill >program. I modified the fill program to ignore the 3 pixels/rows closest to the edge of the image.

>I ran your three images through my programs. Two of the images came out fairly well. The bright image did >not work at all. My results are attached below.

>There seems to be a problem in your processed image. The filled area looks reasonable, but I don't >understand where all of the red pixels came from. Were these generated when you tried to locate the left >and right edges of the path?
Definitely there was a bug in my implementation of the center program. Part of the problem is the grey border around the image, as you stated. This caused my processed image to have a straight line up the center of the image. If I ignore 2 pixels I get better results. I then found the other problem that was causing all the red spots all over the image. So I got good-looking center lines. Then I transposed the y axis to the x axis. I used least squares to convert the center points to a slope and an intercept. Then I used the slope to drive left or right. This works great if the picture has a good match, like your SRaw_12 and SRaw_19 in your post. But it still has problems with bright images and shadows.
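The least-squares step described above could be sketched like this: fit x as a function of row y (as described after transposing the axes) and steer from the slope. The function name and point layout are illustrative, not Ed's actual code.

```c
#include <assert.h>

/* Least-squares fit of the line x = m*y + b through n detected
   path-center points (one x per image row y); returns the slope m,
   which can be used directly as a steering signal. */
double fit_slope(const double *y, const double *x, int n)
{
    double sy = 0, sx = 0, syy = 0, syx = 0;
    for (int i = 0; i < n; i++) {
        sy  += y[i];
        sx  += x[i];
        syy += y[i] * y[i];
        syx += y[i] * x[i];
    }
    return (n * syx - sy * sx) / (n * syy - sy * sy);
}
```

Fitting x against y (rather than y against x) avoids an infinite slope when the path runs straight up the image, which is the normal on-course case.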

Dave said
>I tried changing the offset and gain of raw12, but it didn't improve the detection. It actually made it worse. >One thing I noticed is that the chroma image is fairly noisy. I tried running a 5x5 averaging filter over the >image, and it seems to help a little bit. The filtered image and the processed image are attached below. >Since I used a 5x5 filter this image could easily be reduced to 160x120 resolution without any further loss of >information. You might even be able to go to a lower resolution.

I am not sure what you mean by "changed the offset and gain".

I am familiar with 5x5 filtering where you use the surrounding pixels to update each pixel (i.e., the image size remains the same). Or are you talking about averaging the 25 pixels and outputting only 1 pixel?


Ed Kenna
06-07-2007, 09:44 AM
crgwbr,
Great progress. Keep working. It is a big step to add the automation. And oh yeah, the contest is near Philadelphia, not Harrisburg.

crgwbr
06-07-2007, 08:20 PM
Oops, not sure why I thought Harrisburg. Unfortunately, that means I have a 7-hour drive instead of 5 hours.


Dave Hein
06-07-2007, 10:06 PM
Craig,

Your video looked great. I kept wondering if your robot was going to be hit by a car on the street, but you successfully navigated it onto the street and back without any problems. Good work!

Ed,

I applied the 5x5 averaging filter in Paint Shop Pro. It replaces all of the pixels by the average of itself and its 24 neighboring pixels. I kept the image at 320x240 resolution. The 5x5 filter could be separated into a 5-pixel horizontal filter followed by a 5-line vertical filter. The filtering seems to help a little bit.
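The separable trick can be sketched on a single row. Edge handling here is a simplification (border pixels are passed through unchanged); a real pass would mirror or clamp the borders. A full 5x5 average is this horizontal pass followed by the same 5-tap pass down each column.

```c
#include <assert.h>

/* One horizontal pass of a separable 5x5 box filter: each output
   pixel is the rounded average of itself and its two neighbors on
   either side. Border pixels (i < 2 or i >= n-2) are copied as-is. */
void box5_row(const int *in, int *out, int n)
{
    for (int i = 0; i < n; i++) {
        if (i < 2 || i >= n - 2) {
            out[i] = in[i];
            continue;
        }
        out[i] = (in[i-2] + in[i-1] + in[i] + in[i+1] + in[i+2] + 2) / 5;
    }
}
```

The separable form needs 10 adds per pixel instead of 24, which matters on a small processor.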

"Offset and gain" refer to the brightness and contrast of an image. The brightness and contrast can be changed as follows:

new_value = gain*old_value + offset;

Dave

Ed Kenna
06-08-2007, 07:16 AM
OK, thanks for your explanations. The gain and offset make sense with what you were talking about. I think I saw it called something else.

crgwbr
06-27-2007, 12:20 AM
I just thought of a completely different way to follow the path. Let me know what you think:

Say you have three LDRs (light-dependent resistors), each with a red, green, or blue filter. The LDRs are surrounded by white lights. White light is every color of light combined into one. By continually taking resistance readings from the LDRs, you should be able to determine exactly what color the object is that the device is positioned over. Right? Theoretically, a colored object (say it's red) will absorb every color except its own. Therefore, a red object reflects red light and absorbs all other light. Then, if the device is positioned over a red object, only the LDR with the red filter will receive light, while the other two LDRs will be left in the dark. Is this reasoning correct? Obviously I know that nothing outdoors will be a pure color like red; however, if I feed the LDRs' output into a 10/12-bit ADC, I should be able to get a very accurate representation of what color the device is looking at. Then, by using this data, the blacktop path that my robot is required to follow should be no problem at all to locate and follow.
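The classification step of that idea could be sketched like this: normalize the three ADC readings by their sum so the result depends on the color ratio rather than overall brightness, then compare against a stored target. All names, thresholds, and the per-mil scale are assumptions for illustration.

```c
#include <assert.h>

/* Compare three LDR/ADC readings (red, green, blue filters) against
   a target color. Each channel is normalized to parts-per-thousand
   of the total so the match is brightness-independent; returns 1 if
   every normalized channel is within tol per mil of the target. */
int matches_color(int r, int g, int b,
                  int tr, int tg, int tb, int tol)
{
    long sum  = (long)r  + g  + b;
    long tsum = (long)tr + tg + tb;
    if (sum == 0 || tsum == 0) return 0;

    long nr = 1000L * r  / sum,  ng = 1000L * g  / sum,  nb = 1000L * b  / sum;
    long mr = 1000L * tr / tsum, mg = 1000L * tg / tsum, mb = 1000L * tb / tsum;

    long dr = nr > mr ? nr - mr : mr - nr;
    long dg = ng > mg ? ng - mg : mg - ng;
    long db = nb > mb ? nb - mb : mb - nb;
    return dr <= tol && dg <= tol && db <= tol;
}
```

Because both readings are normalized, a darker patch of the same blacktop (all three channels halved) still matches its brighter target.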

▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
My Blog: Prog | Electro | Mech (http://progelectromech.blogspot.com/)


Dave Hein
06-29-2007, 10:30 PM
Your idea might work if you were following a thin, brightly colored line painted on the path. However, it may have problems following a wide path that does not have a very distinct color. Imagine a person trying to follow the path by looking through a long thin tube pointed a few feet ahead of his feet. He would probably zig-zag back and forth on the path, and might even end up going the wrong way. He might also go off the path if there was a small patch of dirt next to it with a similar appearance.

crgwbr
06-30-2007, 03:31 AM
That's not quite what I intended. I was thinking more along the lines of two of these mounted on the two front corners of the bot, aimed directly at the ground. Would that work?
