Advanced Vision Systems
A.C. fishing
Posts: 262
Out of curiosity (because they are probably WAY too expensive for me), does anyone know of any super advanced vision systems that have almost perfect detail for robots? I don't mean a camera that I attach to my robot and view over Bluetooth; I mean something like a really nice CMUcam that the robot uses to guide itself. Like something that people at MIT use.
Comments
You could, conceivably, use a small single-board computer, connect a webcam to it, and run something that watches the pixels. The software would probably be something you'd have to engineer yourself, though!
Then just have the computer output to the Stamp and let the Stamp control the servos, etc.
Otherwise, I don't know of any 'budget' vision systems (budget meaning less than what the taxman leaves you this time of the year!)
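A rough sketch of that webcam-plus-Stamp split, assuming a board that can run Python with OpenCV and pyserial (the serial port name and the one-byte L/R/F/S command protocol are made up for illustration):

# The small computer does the vision; the Stamp just receives steering bytes.
import cv2
import serial

cap = cv2.VideoCapture(0)                      # first webcam
stamp = serial.Serial('/dev/ttyUSB0', 9600)    # serial link to the Stamp

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)   # track a bright blob
    m = cv2.moments(mask)
    if m['m00'] > 0:
        cx = m['m10'] / m['m00']               # blob centre (x, in pixels)
        third = frame.shape[1] / 3
        cmd = b'L' if cx < third else (b'R' if cx > 2 * third else b'F')
    else:
        cmd = b'S'                             # nothing seen: stop
    stamp.write(cmd)                           # Stamp reads the byte and drives the servos

On the Stamp end you'd just SERIN a byte and steer accordingly.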
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Steve
"Inside each and every one of us is our one, true authentic swing. Something we was born with. Something that's ours and ours alone. Something that can't be learned... something that's got to be remembered."
I'd rather use ultrasonics, as it doesn't demand as much processing for obstacle detection.
Any halfway 'decent' vision system will need to be stereoscopic if you want to detect edges and judge distances with any accuracy.
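For what it's worth, the distance half of a stereo setup comes down to the textbook depth-from-disparity relation. A rough sketch (the focal length and baseline values are placeholders, not from any real rig):

# Depth from stereo disparity: Z = f * B / d
#   f = focal length in pixels
#   B = baseline between the two cameras (metres)
#   d = disparity, i.e. how many pixels a feature shifts between the two views

def depth_from_disparity(disparity_px, focal_px=700.0, baseline_m=0.10):
    """Return distance in metres for a given pixel disparity."""
    if disparity_px <= 0:
        return float('inf')      # no measurable disparity: too far, or no match found
    return focal_px * baseline_m / disparity_px

print(depth_from_disparity(35))  # a 35 px shift works out to 2.0 m with these numbers

The catch is getting a reliable disparity in the first place, which is where most of the processing load comes from.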
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Don't visit my new website...
But the results may be disappointing. I think 2-D image recognition is very doable, but getting your robot to recognize objects in 3-D probably isn't.
:loop                   shr     t1,#1           wc      ' shift the LSB of t1 into carry
        if_c            add     t1,t2                   ' if that bit was set, add t2
                        djnz    m1,#:loop               ' decrement m1 and repeat for each bit
resulting in 150 ns/bit execution. There is a barrel shifter, though, so DSP algorithms tailored to barrel-shift processors would work nicely on the Propeller.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
1+1=10
http://www.cs.cmu.edu/~cmucam/cmucam2/
I'll give you an example: say you tell a robot to follow a person and maintain a distance of 3 feet from that person. You have the robot capture stills of the person, do edge detection to figure out the bounding box containing the person, then do some trig to estimate the distance to the person. If the person is wearing dark clothing in a lightly colored room, the algorithm works great, but if the person steps in front of a dark background, the robot can't determine the true size of the person because the edge-detection step fails. Now say a small child comes into the room: the robot's assumption of a "standard height" for a human is off, so the bot constantly thinks it's farther from the child than it actually is and stops less than a foot away instead of 3 feet.
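That "some trig" step is roughly the pinhole estimate below, which shows exactly where the standard-height assumption bites (all numbers are placeholders):

# Distance from apparent size, pinhole-camera style:
#   distance = assumed_real_height * focal_length_px / bounding_box_height_px
# The assumed height is what breaks for a child, and a bad bounding box
# (edge detection failing against a dark background) breaks it too.

FOCAL_PX = 600.0          # focal length in pixels (from camera calibration)
ASSUMED_HEIGHT_M = 1.75   # the "standard" adult height the robot assumes

def distance_to_person(bbox_height_px):
    return ASSUMED_HEIGHT_M * FOCAL_PX / bbox_height_px

# An adult (~1.75 m) at 3.5 m gives a bounding box about 300 px tall:
print(distance_to_person(300))   # 3.5 m -- correct
# A 1.0 m child at 2.0 m also gives a ~300 px box (1.0 * 600 / 2.0 = 300),
# but the robot, still assuming 1.75 m, reports 3.5 m -- farther than reality:
print(distance_to_person(300))   # 3.5 m, though the child is really only 2.0 m away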
Intelligent navigation using a video-only algorithm can work when the environment is strictly controlled, but if the environment varies from its learning set it will no longer perform as expected.
Another aspect is processing power: the more intelligence required, the larger the physical system you'll need, meaning you're not going to get an extremely intelligent video navigation system onto a BOE-BOT.
But without defining what you mean by intelligent navigation, no true answer can be given. It's like asking whether you can teach your car to drive itself: the answer is probably not, unless you define it to operate in a very narrow set of circumstances. I know at least one of you is going to pipe up with the DARPA car project, but those systems are the size of cars (many computers), use many sensors, and operate in a desert-like environment for a very good reason: they can be kept a safe distance from any bystanders.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
1+1=10
Post Edited (Paul Baker) : 3/15/2006 7:25:58 PM GMT