Vision based navigation with the propeller
SciTech02
Posts: 154
Okay, I've gained enough information to start planning for this project.
Here's my strategy:
I'm following the method NASA used with their Mars rovers.
This method seeks the most open point on the horizon and goes there.
It uses two black-and-white cameras to take pictures. The two pictures cover the X and Y axes; the third axis is found by comparing the images using the parallax method. With the X, Y, and Z axes covered, it can start mapping. It could generate a 3D image in two ways: the first would be to find the distance of every pixel (or most of them); the second is to select individual points on the picture and generate the image based on those general points.
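A hedged sketch of that parallax step, in Python for readability (the Propeller itself would be programmed in Spin or assembly). The focal length, baseline, and pixel positions below are made-up numbers for illustration, not values from any real camera:

```python
# Sketch of depth from stereo parallax: an object seen by two cameras a
# known baseline apart shifts horizontally between the two images, and
# that shift (the disparity) encodes distance.

def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Classic pinhole-stereo relation: Z = f * B / d."""
    if disparity_px <= 0:
        return float("inf")  # no shift -> effectively at infinity
    return focal_px * baseline_mm / disparity_px

# Assumed (hypothetical) camera: ~400 px focal length, 60 mm baseline.
FOCAL_PX = 400.0
BASELINE_MM = 60.0

# A feature found at x=180 in the left image and x=168 in the right:
d = 180 - 168                      # 12 px of disparity
z = depth_from_disparity(d, FOCAL_PX, BASELINE_MM)
print(round(z))                    # 2000 mm, i.e. 2 m away
```

The nearer the object, the larger the disparity, which is why close-range accuracy is good but far-range accuracy degrades quickly.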
With a 3D map made, it could navigate on that. It would check how flat the pixels or points are: in a large, open room the points generate a flat surface; near a wall, the points form a giant "wall" rising off the model; other objects like shoes or rocks could be detected this way too, showing up as "bumps". If it is an open, flat area, it would go straight. If a wall or object is detected, it would avoid it, and it would avoid any excessively bumpy areas (a corner or a messy area).
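The flatness test could be sketched like this (Python, toy numbers; the bump and wall thresholds are invented for illustration, not taken from any rover):

```python
# Sketch of the "flatness" check on a strip of 3D points ahead of the
# robot: if the heights stay near the floor the area is open, a tall
# rise is a wall, a small one is a bump (a shoe or a rock).

def classify_strip(heights_cm, bump_cm=3.0, wall_cm=20.0):
    """Classify a strip of point heights by their total rise."""
    rise = max(heights_cm) - min(heights_cm)
    if rise >= wall_cm:
        return "wall"
    if rise >= bump_cm:
        return "bump"
    return "flat"

print(classify_strip([0.1, 0.0, 0.2, 0.1]))    # flat -> go straight
print(classify_strip([0.0, 0.2, 4.5, 5.0]))    # bump -> steer around
print(classify_strip([0.0, 0.3, 30.0, 31.0]))  # wall -> avoid
```

Running this across many strips of the 3D map gives a coarse "go / steer / avoid" decision per direction.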
The eyesight of the robot would be about 2-4 feet. It would build the 3D model, move halfway along the path it picked, and check again.
This strategy can also be used to get to a specific area (instead of picking the clearest path, it would find ways around objects to get there).
The processing that would have to be done isn't as big as it sounds. It would have to get the serial data from the cameras, compare the images, and then look at the 3D model it created.
Now, my questions are: can the Propeller do this? And can I use the Demo Board for it?
Thank you for reading this.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
There is always an answer.
There is always a way.
There is always a reason.    -SciTech02.
Comments
If you're dealing with images, in reality you're dealing with color variations of groups of pixels.
You would need to determine a baseline color (i.e., nothing) and a max color (a solid wall at the minimum depth).
The next issue at hand is the amount of data you're dealing with at that color depth.
Monochrome doesn't work, and 16 levels (shades) of gray (0 = black, 15 = white) means 4 bits per pixel, 2 pixels per byte; at 320x200 you need 32K of accessible RAM just for the image. You could reduce the number of levels, but that doesn't leave a lot of room for the resolution of the image. Reducing the color information into Cartesian x, y, z (X and Y need to be calculated from the image as well) would require establishing the resolution you wanted for the map. Next, you would need to determine what a color change represents.
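The RAM arithmetic above can be sketched as follows (Python; the resolutions and bit depths are just the ones under discussion):

```python
# Bytes needed to hold one raw frame at a given resolution and bit
# depth: width * height pixels, bits_per_pixel bits each.

def frame_bytes(width, height, bits_per_pixel):
    return width * height * bits_per_pixel // 8

# 4-bit (16-level) grayscale at 320x200, as in the post:
print(frame_bytes(320, 200, 4))       # 32000 bytes -> ~32K

# A stereo pair of such frames already exceeds the Propeller's
# 32 KB of hub RAM before any program code or results are stored:
print(2 * frame_bytes(320, 200, 4))   # 64000 bytes

# Halving resolution and dropping to 4 gray levels (2 bits) brings
# a pair down to something workable:
print(2 * frame_bytes(160, 100, 2))   # 8000 bytes
```

This is why the discussion keeps returning to subsampling, cropping, and external memory.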
I'm sure the Propeller could do it, but not without extra hardware.
The rovers have 128K of RAM and 128K of flash, whereas the Propeller has 32K.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Just tossing my two bits worth into the bit bucket
KK
Also, this isn't really a color-variation thing at all. It takes a picture and picks points on it to generate an image.
I forgot to mention that the Mars rovers also remove things from the images: the front wheels stick out, so they're removed from consideration; there is a line through the middle of the image that cuts off the sky and leaves only the ground (you could set that as high or low as you want — the lower it is, the less range and processing, and so on); and it cuts out the things that only one camera sees, so the two images are almost the same.
You could also cut all around the image to make it smaller; since we're in a penny-pinching situation here, that would help.
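A minimal sketch of that cropping step (Python; the horizon row and side margins are made-up values, picked so a 300x300 frame shrinks to roughly the 175x80 discussed in this thread):

```python
# Crop a frame down to the usable ground band: drop everything above
# the horizon line (the sky) and a margin at each side that only one
# camera sees. Frames here are plain nested lists of pixel values.

def crop(frame, horizon_row, side_margin):
    """Keep rows below horizon_row, minus side_margin columns per edge."""
    return [row[side_margin:len(row) - side_margin]
            for row in frame[horizon_row:]]

# A dummy 300x300 frame cut down to 176 wide by 80 tall:
frame = [[0] * 300 for _ in range(300)]
small = crop(frame, horizon_row=220, side_margin=62)
print(len(small), len(small[0]))    # 80 176
```

The cropped frame is what would actually be stored and compared, which is where most of the memory savings come from.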
To clear up how it picks points on the images: I believe it selects a point for each scan line, so if you have a 300x300 image cut down to 175x80, there would be at least 80 points (you would probably want more, though).
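One guess at how such a per-scan-line picker might work (this is my own illustration, not NASA's actual method): for each row, take the pixel with the strongest horizontal change, a crude edge.

```python
# Pick one candidate point per scan line: the column where adjacent
# pixels differ the most (the strongest horizontal "edge" in that row).

def point_per_row(image):
    points = []
    for y, row in enumerate(image):
        diffs = [abs(row[x + 1] - row[x]) for x in range(len(row) - 1)]
        x_best = diffs.index(max(diffs))
        points.append((x_best, y))
    return points

# Two toy rows, each containing one sharp edge:
img = [[0, 0, 9, 9, 9],
       [5, 5, 5, 0, 0]]
print(point_per_row(img))   # [(1, 0), (2, 1)]
```

An 80-row cropped image would yield 80 such points, which is a far smaller data set than the raw pixels.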
But the images still seem too big. I hear you can break them down to make them easier to process. Of course, if it gets to the point where an image is so large that one Propeller just cannot handle it, you could use two or even three.
I hope this isn't too contrary to your post, KK.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
There is always an answer.
There is always a way.
There is always a reason.    -SciTech02.
It's not contrary at all; you're merely explaining how and what you see. Not an issue at all, so let me see if I can make my points clearer.
Black and white has only two values, on or off. So either there's something there, where your fixed horizon is, or not. Objects that are behind or further away will show up as close as the closest object. At the absolute least, a 4-bit gray scale is needed.

When dealing with light, different things reflect light differently. For example, something shinier but further away will appear, in light terms, as close as something duller and closer. This is the downside of using reflected light as the medium, and it's why you need either color or gray: groups of pixels must be compared to determine whether the color received is brighter because of the object, the ambient light, or its distance. That is the entire color-variation thing. Environmental (ambient) light, and the lighting used for on-board image processing, are very important in this as well.

I do know a good deal about how imaging systems work, as well as what is on the rovers; they use 64-level gray-scale cams with high-speed on-board digital filters. Just to help illustrate my point, try thinking about how you, a human, can determine the distance of an object in a colorless room. Use colorless glasses so you can see only gray scale (you can make them from RGB transparencies, just like the red/blue 3D glasses), and write down everything you need to do to determine how far away something is. Place the object on the floor, lie on the floor yourself, and, without looking at anything else on the floor, determine its distance.
The average human's eyes are 6 cm apart. All you need to do is measure the distance from the floor to the center of your eyes; then you have ALL the information the computer has to triangulate the distance. You can't do it without resorting to other "lighting" or "object" clues. Humans use the color of the object and the surrounding objects, in relationship to the ambient light. And even then, you will see that it's very hard, and that's with a very large scale (I believe the human eye can detect about 290 shades of gray). This is exactly what we are talking about doing in a microcontroller.
The real issues are RAM and speed. Yes, you could create a 3D-mapped gray-scale vision system, but the resolution of the map would need to be low, and the program would need to scan up and down to generate "bands" of 3D data. Basically, you would move your horizon, snap and process a row of data based on that horizon, then move the horizon and process again, creating "bands" of 3D data that could be used as a map. But again, the issue of RAM comes to mind: where and what to do with the data, how to determine what data you want to keep, and where and how to store the unprocessed video data until it's processed.
Please understand, I'm not knocking your idea at all, and the more we talk about it, the more interesting it becomes. Heck, if I had the cams, I'd start playing with the idea to see the resolution that could come from this. Two cams, fixed a known distance apart on a vertically moving platform (if you just tilt the cams to move the horizontal line, I believe it distorts the math needed to calculate the coordinates, and it does change the lighting), with a small video window. Determine the exact offset between both images in pixels; do a logical AND on a pixel-by-pixel basis (both have to be set for the pixel to be retained), dropping pixels that don't appear in both images and pixels that don't matter (i.e., x pixels above and below the horizontal scan line); calculate the true x, y, and z coordinates; then do something with them (like maybe plotting dots with lines between them on the TV graphics screen at first). You could average each 10x10 pixel block into one point, reducing the number of calculations and resulting coordinates, but the issue of RAM still concerns me, and so does the speed of the entire process. At any point, the amount of RAM needed outside of program RAM is the cost of two full images plus the retained x, y, z coordinates.
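Those two reductions (the pixel-wise AND, then averaging blocks into single points) might look like this in miniature (Python; the 4x4 frames and 2x2 blocks are toy-sized stand-ins for real frames and 10x10 blocks):

```python
# Keep only pixels set in both images (per-pixel AND), then collapse
# each block into one averaged point to shrink the data.

def and_mask(left, right):
    """Retain a pixel only if it is set in both frames."""
    return [[l & r for l, r in zip(lr, rr)] for lr, rr in zip(left, right)]

def block_average(img, size):
    """Average each size x size block down to a single value."""
    h, w = len(img), len(img[0])
    out = []
    for by in range(0, h, size):
        row = []
        for bx in range(0, w, size):
            block = [img[y][x] for y in range(by, by + size)
                               for x in range(bx, bx + size)]
            row.append(round(sum(block) / len(block)))
        out.append(row)
    return out

left  = [[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1]]
right = [[1, 0, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1]]
both = and_mask(left, right)
print(block_average(both, 2))   # [[1, 0], [0, 1]]
```

On real frames, averaging 10x10 blocks cuts the number of coordinates to compute by a factor of 100, at the cost of map resolution.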
All said and done, the fact is: RAM, and lots of it, is needed for any vision-processing system. I did say it could be done, and in fact it can be and has been done, with additional hardware. I'm just explaining the issues and problems that will come up in doing this project.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Just tossing my two bits worth into the bit bucket
KK
Post Edited (Kaos Kidd) : 5/24/2006 3:52:42 PM GMT
I'm just failing to see where color detection is needed in this. I'm not trying to understand the color of the picture, or to make it look directly at the raw image and navigate on that. I'm trying to get the X, Y, and Z axes so I can make a rough 3D map of the area in front of it. Again, resolution isn't really important either. I just need an image that it can get distances from, pick points on, and turn into a 3D map.
I thought a black-and-white camera was the same as a grayscale camera. I meant to say grayscale (so it can see everything in shades of gray, not just pure black and pure white). Two colors would get you nowhere.
So RAM space is the problem. I really like the subsampling idea that Paul mentioned, but the unprocessed image still needs to be stored somewhere. I'm fine with getting extra RAM or something that I can integrate with the Propeller. Is that the extra hardware you talked about? If it is, how would I integrate it?
Another way I could accomplish this goal is if there is some kind of "world-mapping sensor" or "triangulating sensor" that I could use to create a 3D model. That would be a much easier way to get a 3D map (I think). Does anything like that exist?
Yeah, this subject is becoming more of a robotics-forum one. My bad for putting it here, sorry. And I hope I wasn't too contrary again. Thanks for your help so far, guys.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
There is always an answer.
There is always a way.
There is always a reason.    -SciTech02.
When I read "black & white" in your post, my mind read "monochrome." My bad.
I have a suggestion or two, but I want to read up on the subject in the robotics thread before I make any more statements.
Paul:
Thanks for pointing that out to me, I haven't read the other threads, I'll go and read up on the subject.
I have more to say on the conversation at hand but, as you point out, not in this thread. I'll go read and see if maybe my thoughts are somewhat off before I post.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Just tossing my two bits worth into the bit bucket
KK
I believe it is 640x400 pixels.
I am just fighting back the urge to buy one, plus a few hundred dollars of more stuff for the Propeller. They have the SD card breakout board to add storage, too.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
"When all think alike, no one is thinking very much.' - Walter Lippmann (1889-1974)
Warm regards, G. Herzog [ 黃鶴 ] in Taiwan
Unfortunately, I probably won't have time to play with it for a while, and they don't have the documentation for it (they're offering a cool $200 for a decent document on it).
I ordered it at the same time as the SD card breakout box...
(I hope to be able to utilize that somewhat sooner)
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Don't visit my new website...
Also, I like the idea of using a cell-phone camera (they're giving them away now, too). That's one way to get some cameras.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
There is always an answer.
There is always a way.
There is always a reason.    -SciTech02.
You can read more about them in this thread:
Secure Digital Card/FAT16/Read
As for the digital cameras on cell-phones...
If they were only documented...
Feel free to surf over to SparkFun and read the docs they do have on CMOS cameras.
(Unfortunately, none are for the camera they sell)
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Don't visit my new website...
If you're going to work on large amounts of data in some sort of array, I guess you could try to use 'virtual RAM' by swapping parts of it out to some form of storage (flash, static RAM, NVRAM, whatever) and other parts in, but the overhead is... bad...
You would most probably need a custom-written assembly routine to get any decent speed out of it, and that assumes the rest is in Spin. (If your main routines are also in assembly because of time constraints, then you're screwed. On PCs we say that virtual memory is 1000x slower than main memory...)
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Don't visit my new website...
-Phil
The issue with color is that it gives you another dimension for determining contrast. You could have two objects that look exactly the same shade in monochrome but are completely different colors. So color helps discriminate, but it adds complexity.
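That monochrome ambiguity can be shown numerically. The sketch below uses the standard ITU-R BT.601 luma weights for grayscale conversion; the two example colors are deliberately chosen to collide:

```python
# Two clearly different colors can map to the same monochrome shade,
# making them indistinguishable once color is thrown away.

def to_gray(r, g, b):
    """Standard BT.601 luma conversion, rounded to an integer shade."""
    return round(0.299 * r + 0.587 * g + 0.114 * b)

# A strong red and a mid green, nothing alike in color...
red   = (200, 0, 0)
green = (0, 100, 11)
print(to_gray(*red), to_gray(*green))   # 60 60 -- identical in gray
```

A grayscale-only tracker would see these two objects as the same shade, which is the extra discriminating power color buys.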
And as I suggested earlier, you could add Propellers instead of adding memory chips (which would give you tons of free processing time to use on other stuff), at least for the edge-map part, which is pretty localized.
-phar
-Phil
I think the idea of adding another Propeller seems the most logical. Ultimately, I want to use the subsampling method Paul described to get the pictures' distances and map out a 3D model all on one Propeller (it takes up only about half of the Propeller's memory).
I'm planning on using two low-resolution grayscale cameras whose images would be cut down before the edges are detected (cut down meaning that when a picture arrives at the Propeller, it cuts out the sky and the parts that only one camera sees, so the pictures look almost the same). Then it would do the edge detection, and then measure the distance. Could that still work, or does it matter if I do it in another order?
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
There is always an answer.
There is always a way.
There is always a reason.    -SciTech02.
Post Edited (SciTech02) : 5/30/2006 3:51:41 AM GMT
It seems it comes down to two Propellers, or one Propeller with extremely tight programming.
Well, most of my questions are answered. Now I ask: can I do this with one of the two methods I described above? That would sum everything up.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
There is always an answer.
There is always a way.
There is always a reason.    -SciTech02.
They have a mapping sensor called the North Star that does the overall thing I wanted (intelligent navigation). I don't need cameras or advanced programming anymore (if the sensor works). I'm going to look more into that now. Thank you all for helping me with my old plan.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
There is always an answer.
There is always a way.
There is always a reason.    -SciTech02.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Life is one giant teacup ride.
Perhaps if you're doing industrial automation within a defined area...
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Just tossing my two bits worth into the bit bucket
KK
(I am once again pleased that the Props have trigonometric tables in ROM.)