
Visual Tracking Enabled Robot

Roddiy Posts: 13
edited 2011-08-07 00:16 in Robotics
Hey guys,

First time posting on the Parallax Forums.

I came across an interesting idea (perhaps it has been done before; I haven't seen anything specific).

Firstly, this is not my first robotics experience: I built a small wall-avoidance bot based on an Arduino, but decided the Propeller is a much more powerful platform, so I got my hands on one recently.

That being said, I have zero experience programming the Propeller.

On my goal list, I intend to first create a wall-avoidance robot (and get through motor control, sensor input coding, etc.),

but I have a bigger idea:

Use visual sensing (webcam, Kinect, whatever) on a robot.

Now, I know it's hard for a Propeller to receive and analyze webcam input on its own, but maybe it is possible(?)

The main goal would be object tracking (such as a white ball, or whatever) and then following it.

I would also like to implement a learning algorithm, so the robot teaches itself the best way to find and follow the ball; that way the algorithm stays dynamic and can be adapted to other situations (such as inclines or obstacles).


My guess is that I would need to send the webcam feed to a computer (which could be mounted on board a big robotic platform); the computer would then analyze this information and send coordinates (or movement instructions) to the Propeller, which would then control the actuators.
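
To make it concrete, here's roughly what I'm picturing for the PC side (just an untested OpenCV sketch to show the idea; the serial port name and the color range are placeholders I'd have to tune for my setup):

    import cv2
    import numpy as np
    import serial

    prop = serial.Serial('COM3', 115200)      # link to the Propeller (port name is a placeholder)
    cap = cv2.VideoCapture(0)                 # webcam

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        # Keep only pixels near the ball's color (white-ish range here, just a guess)
        mask = cv2.inRange(hsv, np.array([0, 0, 200]), np.array([180, 40, 255]))
        m = cv2.moments(mask)
        if m['m00'] > 0:
            cx = int(m['m10'] / m['m00'])     # blob center, x
            cy = int(m['m01'] / m['m00'])     # blob center, y
            prop.write(('%d,%d\n' % (cx, cy)).encode())   # "x,y" for the Prop to steer with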


I'd like some feedback on this: has it been done? CAN it be done? Can it be done with JUST a Propeller? Maybe some more ideas?


Thank you very much,

Roddiy

Comments

  • erco Posts: 20,256
    edited 2011-08-01 11:15
    Machine vision & recognition is high-end art & science. Have you experimented with PC-based RoboRealm? That's a good place to start for color & blob recognition. It's helpful to know the strengths & weaknesses of state-of-the-art technology on a full-blown PC platform, knowing that results & expectations will have to be managed (lowered!) when you port it to a system with lesser capabilities.
  • Roddiy Posts: 13
    edited 2011-08-01 11:24
    Erco,

    Thank you for the prompt response. I've never experimented with RoboRealm, but I'll definitely look it up!

    I'm looking for anything like this: http://www.youtube.com/watch?v=-KxjVlaLBmk

    but obviously not as fast! I think if I could track at 30 fps (real-time, not some supersonic tracking like that robot!) I could accurately track a ball or some other slow-moving object.

    My expectations are somewhat high, but not industrial-sized high; I think it can be managed with a PC and a Propeller (or two!).
  • Roddiy Posts: 13
    edited 2011-08-01 11:30
    Also, to follow up on the high-end perspective, I beg to differ.

    One of the demo videos for ViewPort shows an OpenCV integration (video here: http://www.youtube.com/user/mydancebot#p/u/9/Teb-HTAg4_Q)

    so this is definitely feasible on a computer, but

    1) I don't feel like shelling out >$50 for some software; I believe it's possible to code OpenCV to function standalone,

    and 2) I would like OpenCV to then send the coordinates/info back to the bot, so it can control movement.

    Building and placing a small (Micro-ATX) motherboard and PC inside a robot platform is not a problem, as I have extensive experience building PCs and have some steel machining tools available at home, including a welding station.
  • Duane Degn Posts: 10,588
    edited 2011-08-01 11:32
    I think machine vision is possible with the Prop. I've been working on a project using Hanno's method.

    Right now it's limited to black and white but I think color could be added by using color filters in front of the camera's lens.

    Here's a picture of what my camera sees when it's limited to 120 pixels.

    [attached photo: the whiteboard and the camera's 120-pixel view]

    Notice the word "HI" can be seen both as written on the white board and as seen by the camera.

    The camera is being held by my fingers in the top left of the photo.

    This looks a lot better with some motion. I keep meaning to make a video of it.

    Did you see Ken's video of the CMUCam4 Kye was working on? (I think Bump posted the video.) I think that will be an easier way of adding vision to a robot.

    Duane
  • Roddiy Posts: 13
    edited 2011-08-01 11:43
    @Duane,

    Your project is coming out awesome! I have a blog too, in which I will be documenting all my progress on this project (and others); feel free to follow: http://suburbantech.blogspot.com/

    As for the vision, I have a question: Hanno's method uses a $10, non-DIP ADC chip; is there any specific reason? Right now I also have a 3208 that I ordered from Parallax to use as an ADC; would that suffice for the video conversion?

    Roddiy
  • Duane Degn Posts: 10,588
    edited 2011-08-01 12:11
    I think the main advantage of the $10 ADC is its speed. It can output data 8 bits at a time. I think Hanno often only uses the four most significant bits in some of his projects. I think someone (Perry?) made a video capture program for the Prop that doesn't need an external ADC. I think he called it "Stupid Video Capture" or something like that. I don't know much about it.

    I'm not sure how useful a 3208 would be for capturing video. My guess is it would be much too slow: it tops out at roughly 100 ksps, and a single video line lasts only around 64 µs, so you'd get a handful of samples per line instead of the hundreds you'd want.

    Duane
  • Roddiy Posts: 13
    edited 2011-08-01 12:36
    Ah I see,

    On your thread, you mentioned getting the ADC with a breakout board; which one did you get, and where? If the problem is in fact speed, that seems reasonable, as I want to get as many FPS as possible.

    On the CMUcam website, they stated being able to get up to 26 FPS with a CMOS camera; is there any open method of achieving this out there? I wouldn't mind having one Propeller dedicated to vision, while another Propeller (or even an Arduino) is responsible for actuating.
  • Duane Degn Posts: 10,588
    edited 2011-08-01 14:05
    Roddiy wrote: »
    on your thread, you mentioned getting the ADC with a breakout board, which one did you get and where?

    I purchased the breakout board from SparkFun.

    I purchased the chip from Digi-Key.

    Duane
  • Hanno Posts: 1,130
    edited 2011-08-01 15:28
    Hi Roddiy,
    The 3208 is a nice ADC, but it isn't fast enough for sampling video. I used the ADC08100 sampling at 10 Msps to grab raw video into Propeller RAM with one cog. That let me do very simple computer vision on the Prop itself to guide my balancing robot. ViewPort supports two methods of experimenting with computer vision:
    - grabbing video with the Prop, processing on the Prop, and optionally debugging via ViewPort. ViewPort shows you a live stream of your camera and filter outputs over the USB connection. Once you're finished debugging, your robot is totally mobile.
    - grabbing and processing video on the PC with help from OpenCV. I've included several filters to find color blobs, faces, circles, etc. The position and size of found objects is continually streamed to the Propeller over USB, so your Prop can do things like sort objects, guide cameras, etc.

    More info on both in the Official Propeller Book and a Circuit Cellar article.
    Good luck with your project!
    Hanno
  • Roddiy Posts: 13
    edited 2011-08-01 17:17
    Hanno and Duane, thanks for some great input!

    @ Duane, that sounds like a good idea, I'll probably do that.

    @Hanno, firstly, what are the advantages/disadvantages of processing straight on the Prop? Coding-wise, is it THAT much harder? Is it doable in an accurate fashion, with good FPS? And also, will I have enough cogs and processing power left to do all the other stuff (controlling motors, servos, and other sensors as well)?

    Also, is there a good, not TOO hard to understand guide or reference (I actually just finished learning/doing motor control in the Spin language, which I'd had no experience with, but it was pretty straightforward) that I can use to actually code the object recognition and tracking on the Prop?

    Thank you guys very much so far; this has cleared a lot of the mist about which path I should take!
  • Duane Degn Posts: 10,588
    edited 2011-08-02 09:48
    Roddiy wrote: »
    Also, is there a good, not TOO hard to understand . . . guide or reference that I can use to actually code the object recognition and tracking on the prop?

    I have a couple of suggestions.

    One. The book in Hanno's signature, Programming and Customizing. . . Read the chapters by Hanno.

    Two. The Hydra manual. It's now a free download. The Hydra manual has a lot of great information about video. You'll need to learn about horizontal sync, vertical sync and a bunch of other stuff.

    Machine vision is really pushing the Prop. You'll also want to read JonnyMac's (Spin Zone) articles (they're on a Prop downloads page somewhere). I think JonnyMac does a good job of teaching PASM. You'll need PASM for machine vision.

    You might want to work on other aspects of the robot as you learn about machine vision. Reading encoders will be easier than reading shapes from video signals but the knowledge you gain from reading encoders will be useful when you start working on machine vision.

    Duane
  • Roddiy Posts: 13
    edited 2011-08-02 11:45
    Duane,

    Thanks for all the help and support so far,

    I do in fact plan on working on all the other aspects of the robot before getting into machine vision, and eventually genetic algorithms. Again, I'm new to the Propeller, but I have sensors and a small workshop at home, plus knowledge from my Arduino days; although not exactly applicable, the notions are still transferable.

    Just one thing I got a bit fussed about:
    You said machine vision is really pushing the Prop, but just how much? I would LIKE to do it all on the Prop, but only if it doesn't limit my ability to do other things, such as motor control and communication. I also plan on coding a location-tracking algorithm, so the Prop can know and transmit its location in an environment (I haven't expanded much on this idea, but I've decided GPS wouldn't be applicable, since I'm talking about movement in meters and centimeters).

    Thank you so much!
  • Hanno Posts: 1,130
    edited 2011-08-03 14:37
    Hi Roddiy,
    The nice thing about the Prop is that managing resources is easy: you just have to track how much hub memory and how many cogs you're using. In the book I describe building a vision-guided balancing bot, which uses all 8 cogs and most of the memory to do many of the things you're talking about. The main issue with video is that it can consume lots of memory, but if you keep it simple you can get away with hardly using any, by analyzing video data as it comes in. I'm a big fan of Genetic Algorithms; how are you planning on using them on your bot?
    Hanno
  • Martin_H Posts: 4,051
    edited 2011-08-03 19:55
    I'll chime in and say that if you constrain the environment enough, the Parallax line scan camera can do some pretty neat things. Take a look at the product page (http://www.parallax.com/StoreSearchResults/tabid/768/List/0/SortField/4/ProductID/566/Default.aspx?txtSearch=line+scan+camera) and follow the links to some of the projects built using it.

    Granted it is only a single scan line, but that makes the information easier to process and you can infer more from a single line than you would initially think.
  • Roddiy Posts: 13
    edited 2011-08-06 22:31
    @ Hanno,

    Yes, I've noticed the resource management on the Prop is really great; that's a big plus when doing a lot of things at once. As for GA, I'm not ENTIRELY sure yet; there's a lot of material I need to cover before touching that iceberg, but so far I have a basic notion that goes somewhat like this:

    1) Set standard actions (components, functions, whatever), i.e. functions A, B, C and D, each doing something different; e.g. A moves forward, B moves left, etc.

    2) Test each function in an order, A-B-C-D.

    3) During each function, check to see if the goal is closer to being reached (e.g. the light sensor reports you are closer to the light).

    4) Find the last function that moved the robot closer to the goal, and swap the one after it (the one that failed) with another random one.

    5) Do this in a loop until the goal has been reached.

    All the while, store this information, so going back or doing it again is simply a recall of the best-functioning set of functions (roughly the loop sketched below).
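
    Very roughly, and only on paper so far, that loop might look something like this (plain Python just to organize my thinking; run_action() and distance_to_goal() are placeholders for whatever the motors and sensors actually do):

        import random

        ACTIONS = ['forward', 'left', 'right', 'back']       # the standard action set

        def improve(sequence, run_action, distance_to_goal, goal=0.0):
            """Mutate the action sequence until the goal is reached (rough sketch)."""
            while distance_to_goal() > goal:
                best = distance_to_goal()
                failed_at = None
                for i, action in enumerate(sequence):
                    run_action(action)
                    d = distance_to_goal()
                    if d < best:                  # this step moved us closer to the goal
                        best = d
                    else:                         # first step that didn't help
                        failed_at = i
                        break
                if failed_at is not None:
                    # Swap the failed step for another random action and try again
                    sequence[failed_at] = random.choice(
                        [a for a in ACTIONS if a != sequence[failed_at]])
            return sequence                       # remember the best-functioning sequence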

    Again, I'm extremely new to GA, and there's a LOT to cover; especially with work right now and university starting soon, this is a long-term project.

    @ Martin

    Thanks for the great input, but line following is a little underpowered for the sort of applications I want to use my robot for, and it must be fully environment-adaptable, so I want to constrain the environment as little as possible :/
  • Duane Degn Posts: 10,588
    edited 2011-08-07 00:16
    Roddiy,

    I think you missed what Martin was talking about. A "line scan camera" is not the same as "line following".

    A normal digital camera has a two-dimensional array of light sensors. A line scan camera has a one-dimensional array of light sensors. So a line scan camera can capture a two-dimensional image the same way a flatbed scanner captures two-dimensional images from a one-dimensional sensor.

    One advantage of a line scan camera is the amount of data coming into the microcontroller isn't so extreme and the data can be more easily processed.
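
    For example, finding a bright ball in a single scan line is mostly a matter of thresholding and taking the middle of the bright run. Something like this (a quick Python sketch rather than Prop code, with a made-up threshold and a fake scan line) is the kind of processing I mean:

        def find_bright_blob(line, threshold=200):
            """Return the center index and width of the bright run in one scan line."""
            bright = [i for i, v in enumerate(line) if v > threshold]
            if not bright:
                return None                       # nothing bright in view
            center = (bright[0] + bright[-1]) // 2
            width = bright[-1] - bright[0] + 1
            return center, width

        # Fake 16-pixel scan line: the "ball" sits around pixel 9
        pixels = [12, 15, 10, 11, 14, 13, 40, 180, 230, 245, 210, 60, 14, 12, 11, 10]
        print(find_bright_blob(pixels))           # -> (9, 3)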

    Phil has used a line scan camera to take a nice two-dimensional picture of the moon.

    Duane