
stereo 3d robotic vision?

Whelzorn Posts: 256
edited 2005-05-17 21:33 in Robotics
OK, this is a concept I just thought of: suppose you were to take two black and white cameras and mount them side by side. This would give you a classic black and white stereoscopic view. But is it possible to have some software build a depth map from the two images, showing the closer objects as a lighter shade and farther objects as a darker shade? I'm absolutely positive that I am not the first person to think of such a thing, but I cannot find any software/images/anything showing that someone has successfully done this before. Is there anyone here who can point me in the right direction?

Thanks,
Justin

Comments

  • steve_b Posts: 1,563
    edited 2005-05-16 22:49
    I'm not sure if this helps or hinders... but here's something to check out anyhow!
    http://www.seattlerobotics.org/encoder/200110/vision.htm

  • Paul Baker Posts: 6,351
    edited 2005-05-17 04:36
    I would think that if you were able to align them properly, you could perform a cross-correlation of the two scan lines to try to find the object shift due to the parallax effect.

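    A minimal sketch of the cross-correlation idea described above, assuming the two scan lines are already vertically aligned and supplied as equal-length grayscale arrays; the function name, the max_shift search range, and the synthetic data are illustrative only, not from the thread:

        # Estimate the horizontal shift between one scan line from each camera
        # by picking the offset with the highest cross-correlation.
        # Larger shift = more parallax = closer object.
        import numpy as np

        def scanline_disparity(left_line, right_line, max_shift=32):
            left = left_line - left_line.mean()
            right = right_line - right_line.mean()
            best_shift, best_score = 0, -np.inf
            for shift in range(max_shift + 1):
                # Correlate the overlapping parts at this candidate shift,
                # normalized by the overlap length.
                overlap = len(left) - shift
                score = np.dot(left[shift:], right[:overlap]) / overlap
                if score > best_score:
                    best_shift, best_score = shift, score
            return best_shift

        # Synthetic example: a bright "object" sits 5 pixels farther right in
        # the left view than in the right view (positive disparity).
        left = np.zeros(100);  left[45:55] = 255.0
        right = np.zeros(100); right[40:50] = 255.0
        print(scanline_disparity(left, right))   # prints 5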
  • steve_b Posts: 1,563
    edited 2005-05-17 12:32
    Aligning them properly would be the issue.

    In order to have good stereo resolution, you have to have your images closely aligned. The bigger the difference from image to image, the bigger the fudge factor you need, matching a block of pixels rather than just one or two.

    If you could use a laser pointer as a single reference, then you could use some geometry/trig to determine where the laser spot falls in relation to the center of each image, then paste the images together based on where the laser point falls in each camera view.

  • kelvin james Posts: 531
    edited 2005-05-17 16:41
    This kind of technology is mostly used in industrial robotic applications for alignment, inspection, etc. If you search in that area, you might be able to come up with some useful info to see if it is worth pursuing. You would still need a fast computer to be able to process the data at a reasonable rate.

    kelvin
  • Paul Baker Posts: 6,351
    edited 2005-05-17 17:33
    If a scan line of one camera (say line 14) corresponds to a different scan line of the other camera (say line 16), you can perform an initialization routine to find the correspondence between the two by performing cross-correlations until you find the highest correlation, and hence the mapping of scan lines between the two. However, if the two cameras are at different roll-axis angles (the scan line of one camera crosses several scan lines of the other camera because they are in a twisted alignment), then the mathematics needed to get the two pictures aligned will make the task much, much more difficult.

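    A minimal sketch of the initialization routine described above, assuming the two cameras differ only by a vertical offset (no roll/twist); the function name and the search range are illustrative only, not from the thread:

        # Find which row of camera B best matches a reference scan line of
        # camera A using normalized cross-correlation.
        import numpy as np

        def best_matching_row(ref_line, other_image, search_rows):
            ref = (ref_line - ref_line.mean()) / (ref_line.std() + 1e-9)
            best_row, best_score = None, -np.inf
            for r in search_rows:
                line = other_image[r].astype(float)
                line = (line - line.mean()) / (line.std() + 1e-9)
                score = np.dot(ref, line) / len(ref)
                if score > best_score:
                    best_row, best_score = r, score
            return best_row

        # e.g. map line 14 of camera A onto camera B by searching nearby rows:
        #   row_offset = best_matching_row(cam_a[14], cam_b, range(10, 21)) - 14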
  • steve_b Posts: 1,563
    edited 2005-05-17 17:39
    Way too much coding for that, Paul!

    It'd be stereo vision, so the two cameras would have different perspectives at about 3-4 inches apart, which is why I suggested the laser point reference. Dunno, maybe two CMUcams...

  • Paul Baker Posts: 6,351
    edited 2005-05-17 17:44
    I hadn't gotten into this before, but none of what I have described is well suited for the Stamp or even the SX. This type of application is best handled by a DSP, because of the massive number of multiply/accumulate operations required, or by using a PC to do the processing.
  • Whelzorn Posts: 256
    edited 2005-05-17 19:43
    Paul, a computer would be fine for the processing, and a normal-sized desktop is not at all out of the question; in fact, that's what I thought I would have to use in the first place. The compare-line-14-to-line-16 example is EXACTLY what I had in mind. I have two black and white cameras (280 lines of resolution).
    If you can't visualize what I'm after here, I found an example:
    [Images: original left stereo view (left), computed depth map (right)]

    Anyway, I guess this is going to involve some coding, but I think I can handle that OK; I was just wondering if anyone had seen anything that looks like that (a rough block-matching sketch along those lines follows at the end of this post).

    Thanks!
    Justin
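    A rough sketch of the kind of block matching that produces a depth map like the example above, with larger disparity (closer objects) rendered lighter. It assumes two row-aligned (rectified) grayscale frames of the same size; all names are illustrative only, not from the thread, and the brute-force search shown here is far too slow for real-time use without optimization:

        # For every pixel in the left frame, slide a small window along the
        # same row of the right frame and keep the disparity with the lowest
        # sum of squared differences. Closer objects give larger disparity
        # and therefore come out lighter in the result.
        import numpy as np

        def depth_map(left, right, block=7, max_disp=32):
            h, w = left.shape
            half = block // 2
            left = left.astype(np.float32)
            right = right.astype(np.float32)
            disp = np.zeros((h, w), dtype=np.float32)
            for y in range(half, h - half):
                for x in range(half + max_disp, w - half):
                    patch = left[y - half:y + half + 1, x - half:x + half + 1]
                    best_d, best_err = 0, np.inf
                    for d in range(max_disp + 1):
                        cand = right[y - half:y + half + 1,
                                     x - d - half:x - d + half + 1]
                        err = np.sum((patch - cand) ** 2)
                        if err < best_err:
                            best_d, best_err = d, err
                    disp[y, x] = best_d
            # Scale so the nearest (largest-disparity) objects are lightest.
            return (255.0 * disp / max_disp).astype(np.uint8)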
  • Paul Baker Posts: 6,351
    edited 2005-05-17 21:33
    Yeah, I think a few labs at UF were doing things along these lines. While I wasn't a member of any of them, the cross-correlation idea likely came from talking with researchers in the medical imaging lab and the satellite imaging lab. Most depth plots I've seen use white as the foreground color, but your example is the opposite; my personal experience is from creating stereograms.