Stereoscopic 3D VGA and VR Goggles
laser-vector
Posts: 118
Hey, I just bought some HeadPlay VR goggles and plan to use them by sending a camera feed down from my remote helicopter so I can fly with a first person view (FPV).
The easiest way to do this is with one camera through the HeadPlay's RCA port.
However, the goggles do support two different types of stereoscopic 3D VGA input (Nvidia 3D or Standard 3D).
I'm not sure how the 3D works exactly; I'm guessing it syncs with the glasses and just sends left frame, then right frame, then left frame again, and so on. So I was wondering if any of you might have a clue as to how to sync up.
So I'd like to take a Propeller and two cameras, combine the two feeds inside the Propeller, and have the Propeller output a stereoscopic VGA signal to the glasses...
Comments
I'm not sure that the Propeller is going to be able to do that internally; then again, a lot of things that were once impossible are commonplace with the Prop today.
That said, it might be better to utilize something like a MAX9526 chip to handle your video... and just have the Prop drive the display shutters.
With the current state of software, the Prop can't handle much video. Hanno has the best implementation I've seen so far, and it's low-resolution monochrome. I'm sure you'd want something a lot better. (Also, decoding NTSC, the first step of an NTSC-to-VGA conversion, is right up the MAX9526's alley... it does that quite easily.)
Bill
I've also wanted to try using stereo vision on an RC helicopter. I think the cameras would need a significant distance separating them to provide useful depth perception.
Please let us know what you find out.
Duane
Example: if both cameras are 100% parallel, your focal point is infinity. As an object gets closer to you, the eyes start to "toe" in, pointing towards each other, until you get to the closest distance an eye can see, about 5~7 cm. Just to be clear, we're talking about the point both cameras are pointed at, not the actual focus of the cameras.
For the 3D FPV idea to work well, the cameras would need a separation of about 12~13 cm and a geared, slip-free linkage between them so the focal point can be moved. Decisions on the travel would need to be made, but useful travel would be between .1 (not quite infinity, 99.9% parallel) and about 7 cm (the closest "in focus" focal point practical for this application), which would result in a focal travel from about 15 cm to roughly 200 m. Electronics would handle the switching between each camera and the timing pulse the glasses need to know which eye should be on. The cameras would need to be matched in capability: autofocus, neutral zoom (as in 0 zoom), and a focal range to match the distances. Most cameras can do infinity, so it's the close-up end that needs to be looked at... if a camera's closest focus is farther out than the focal point created by the intersection of the cameras' aim, it will never focus.
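If it helps to see the numbers, here's a rough check of that toe-in geometry in C. The 12.5 cm baseline and the sample convergence distances are just illustrative picks, not values from any particular hardware:

```c
/* Rough check of the toe-in geometry described above.  The 12.5 cm
   baseline and the sample distances are illustrative picks only. */
#include <stdio.h>
#include <math.h>

#define RAD_TO_DEG (180.0 / 3.14159265358979323846)

/* Toe-in angle per camera, measured from parallel, so that both
   optical axes cross (converge) at distance_m meters. */
static double toe_in_deg(double baseline_m, double distance_m)
{
    return atan((baseline_m / 2.0) / distance_m) * RAD_TO_DEG;
}

int main(void)
{
    const double baseline = 0.125;  /* 12.5 cm camera separation */
    const double dists[] = { 0.15, 1.0, 10.0, 200.0 };

    for (int i = 0; i < 4; i++)
        printf("converge at %6.2f m -> toe in %.3f deg per camera\n",
               dists[i], toe_in_deg(baseline, dists[i]));
    return 0;
}
```

At 200 m, each camera is only about 0.018 degrees off parallel, which is why the linkage has to be slip-free.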
Having played with this back in the early 00's, I discovered that it's extremely important to get the left camera on the left eye, right on right, and to get the focal points of the cameras to match the focal points of the hardware. If there's too much "difference" between the focal points, things will not come into focus, or worse, the perspective becomes distorted.
When I was playing with this, lasers were extremely helpful in the setup and design of the system. I also discovered that if we actually got to 100% parallel, I (and the others on the project) lost all sense of depth; both images started to "stand" on their own. Lastly, DON'T ZOOM! It was nearly impossible to match zooms and retain a constant focal point for both cameras to focus on.
Good luck, and I really look forward to seeing your progress on this. All of our work was done with a C64 and a video camera's "viewport"; back then we didn't have USB or micro cams like today.
KK
Another issue is syncing the two cameras' frames. Essentially what you're doing is interleaving them: L,R,L,R... If the two cameras are in sync (frames starting at the exact same time), it would take very minimal circuitry to interleave them: simply detect the beginning of each frame and switch to the other camera. If the two cameras' frames are not in sync, then you will need to buffer at least one camera's frames to achieve the interleaving.
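Here's a minimal sketch of that genlocked case in C. Everything hardware-specific (the pin numbers and the read_pin/write_pin helpers, which I've stubbed so the sketch compiles) is a hypothetical placeholder, not a real board's API:

```c
/* Genlocked case: both cameras share a sync source, so a video mux
   just flips to the other camera at each vertical sync.  Pin numbers
   and I/O helpers below are hypothetical placeholders. */
#include <stdbool.h>
#include <stdio.h>

enum { VSYNC_PIN = 0, MUX_SEL_PIN = 1, SHUTTER_PIN = 2 };

/* Stub I/O so the sketch compiles; real code would touch GPIO registers. */
static bool level = false;
static bool read_pin(int pin)          { (void)pin; level = !level; return level; }
static void write_pin(int pin, bool v) { printf("pin %d <- %d\n", pin, v); }

static void wait_for_new_frame(void)
{
    while (read_pin(VSYNC_PIN))  ;  /* wait for vsync to drop...      */
    while (!read_pin(VSYNC_PIN)) ;  /* ...and rise again: frame start */
}

int main(void)
{
    bool left = true;
    for (int frame = 0; frame < 8; frame++) {  /* a few frames for demo */
        wait_for_new_frame();
        left = !left;                   /* alternate L,R,L,R...        */
        write_pin(MUX_SEL_PIN, left);   /* route that camera onward    */
        write_pin(SHUTTER_PIN, left);   /* tell glasses which eye      */
    }
    return 0;
}
```

The unsynced case is much harder: buffering even one low-resolution field is a lot to ask of the Prop's 32 KB of hub RAM.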
I wish you luck. It's not an easy challenge you have chosen, but it is not impossible. Try to break the problem down into chunks as small as you can and work on each one, gradually scaling up the complexity of the project.
@KaosKidd, from the Depth Perception page: "Convergence is effective for distances less than 10 meters." So yes, if you only want depth perception up to 10 meters, then what you said is important. I personally got interested in RC aircraft for use with aerial photography. Aerial photographs are often viewed with special viewers that let each eye see a different photograph (taken from a slightly different location), providing a 3D image.
From the Stereoscopy page:
"For making stereo images of a distant object (e.g., a mountain with foothills), one can separate the camera positions by a larger distance (commonly called the "interocular") than the adult human norm of 62-65mm. This will effectively render the captured image as though it was seen by a giant, and thus will enhance the depth perception of these distant objects, and reduce the apparent scale of the scene proportionately."
I often thought it would be cool to view these 3D stereo images live instead of viewing two still photos in a special viewer after the fact. Positioning the two video cameras farther apart than normal human eyes would make the world look miniaturized, but you'd have enhanced depth perception. I personally don't want to have to keep the helicopter within 30 meters of the ground to enjoy a 3D view.
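As a rough sanity check of that scaling (the baselines below are illustrative picks, and the linear scaling is only a first-order approximation):

```c
/* First-order "giant's eyes" numbers: scaling the baseline by k over
   the ~65 mm human norm shrinks apparent scale by k and stretches the
   ~10 m convergence-based depth range by the same factor.  The
   baselines tried here are illustrative picks, not recommendations. */
#include <stdio.h>

int main(void)
{
    const double human_baseline_m = 0.065;  /* ~65 mm interocular   */
    const double human_range_m    = 10.0;   /* convergence limit    */
    const double baselines[] = { 0.065, 0.20, 0.65, 2.0 };

    for (int i = 0; i < 4; i++) {
        double k = baselines[i] / human_baseline_m;
        printf("baseline %5.3f m -> ~1/%.1f apparent scale, "
               "depth cues out to ~%.0f m\n",
               baselines[i], k, k * human_range_m);
    }
    return 0;
}
```

So a 0.65 m boom between the cameras would, to first order, push useful depth cues out to roughly 100 m while making the scene look about a tenth of its size.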
I appreciate the information about the focal point. I'll keep it in mind if I want 3D vision on a ground-based robot (where I think 10 meters of depth perception is sufficient).
Duane
Consider leaving the two streams separate and having one channel for each. The amount of data from a single stream can be large; combining two streams could make the result "jerky".
A big consideration is image stabilization. Otherwise, the vibration can come out like "the Blair Witch Helicopter" and we'll get seasick from the video.
As others have said, increasing the distance between cameras can increase the 3D effect on distant objects. Aerial topography mapping uses overlapping photos taken hundreds of yards apart.
This sounds like a very cool project. I hope to hear more.