Can a Propeller do NTSC Video Capture?
Martin_H
Posts: 4,051
Given the two Propeller-based oscilloscopes I've seen, I've been wondering whether video capture is possible using a Propeller chip. Given that the Propeller can generate video, might it also be able to digitize it? Obviously you would need to use PASM and fast A/D conversion. You also wouldn't store the video, but send it as a data stream to a computer.
I believe NTSC has 345,600 pixels per frame at 29.97 frames per second, so that's 10,357,632 measurements per second. For a chip running at 80 MHz, that seems within the realm of the possible. One cog would measure and write to a buffer; another cog would read from the buffer and send to a computer. The buffer would need to be circular to stay within the tight RAM constraints.
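A quick sanity check of those numbers. The pixel count assumes 720 x 480 active pixels (which gives the 345,600 figure in the post); the clocks-per-sample figure shows how tight the one-cycle-per-pixel budget really is:

```python
# Back-of-the-envelope check of the sample rate discussed above.
# Assumption: 345,600 pixels/frame comes from 720 x 480 active pixels.
PIXELS_PER_FRAME = 720 * 480          # 345,600
FRAMES_PER_SEC = 29.97                # NTSC frame rate

samples_per_sec = PIXELS_PER_FRAME * FRAMES_PER_SEC
print(f"{samples_per_sec:,.0f} samples/s")      # 10,357,632

# At 80 MHz, one cog has this many clocks between samples:
clocks_per_sample = 80_000_000 / samples_per_sec
print(f"{clocks_per_sample:.1f} clocks/sample")  # ~7.7
```

At roughly 7.7 clocks per sample (less than two 4-clock PASM instructions), the capture cog has almost no time to do anything besides read the A/D and store the byte.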
The next problem is sending the digital signal to a computer. USB 2.0 is fast enough, but is there a choke point in getting the data out of the propeller?
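The chokepoint question can be framed numerically. The sketch below compares the required rate against USB 2.0's raw high-speed rate and against a typical FTDI-style serial bridge at an assumed 3 Mbaud (with 10 bits per byte on the wire for start/stop bits) — these link figures are assumptions, not measurements:

```python
required_Bps = 10_357_632          # one byte per pixel sample, from the rate above

usb2_Bps = 480e6 / 8               # USB 2.0 high-speed raw rate: 60 MB/s
ftdi_Bps = 3_000_000 / 10          # assumed 3 Mbaud serial, 10 bits/byte on the wire

print(required_Bps / ftdi_Bps)     # serial bridge is ~35x too slow
print(usb2_Bps / required_Bps)     # raw USB 2.0 has ~5.8x headroom
```

So USB 2.0 itself is not the bottleneck; the chokepoint is getting the bytes out of the Propeller, since the usual serial bridge falls short by an order of magnitude.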
Comments
He recently posted an update with a much nicer version
While it might be possible for the Prop to acquire data at that rate, I do not believe it is possible to acquire and then process, transmit, or store it, even to a local device such as an SD card or external RAM.
There! I said it. Prove me wrong! Go ahead! Make my day!
Theoretically, my system could have dealt with about 1/23rd of a VGA signal due to output bandwidth. I was using one serial port at 12.5 Mbps (I overclocked). NTSC has half the number of frames per second, so again theoretically, you could pass through maybe 1/12th of the signal.
I did all of my capturing with one COG going full out, using one cycle per pixel. If I simplified my code to just capture a vertical area of the screen, I would guess I could capture maybe 100 to 150 pixels per scan line. Maybe more. The limitation was COG RAM. With the slower NTSC rate, maybe you could get the whole thing with one COG, but at the least, you could dedicate COGs to different vertical areas of the scan lines to get the whole thing.
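The "dedicate COGs to different vertical areas" idea amounts to splitting each scan line into horizontal pixel ranges, one per capture cog. A minimal sketch of that partitioning (the line width and cog count are just illustrative):

```python
# Hypothetical split of one scan line among capture cogs, following the
# vertical-strip idea above: each cog grabs its own pixel range per line.
def strip_bounds(line_width, n_cogs):
    """Return (start, end) pixel ranges, one per cog, covering the line."""
    base = line_width // n_cogs
    extra = line_width % n_cogs
    bounds, start = [], 0
    for i in range(n_cogs):
        end = start + base + (1 if i < extra else 0)
        bounds.append((start, end))
        start = end
    return bounds

print(strip_bounds(640, 5))   # five cogs, 128 pixels each
```

Each cog would wait for the same horizontal sync, burn clocks until its strip's start pixel, then sample its range — so the strips interleave in time rather than running truly in parallel within a line.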
The trouble with NTSC is that the image is interlaced. You aren't going to *really* capture the whole image because HUB RAM can't hold all of one field while the other is being captured. Still, capturing a 240x640 field would be quite useful. You could use the time taken by the other field to do something useful.
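The hub RAM constraint is easy to quantify. Assuming the Propeller 1's 32 KB of hub RAM, even a single 240 x 640 field only fits at very low bit depths:

```python
HUB_RAM = 32 * 1024          # Propeller 1 hub RAM, in bytes (assumed)
FIELD_PIXELS = 240 * 640     # one interlaced NTSC field, as in the post above

for bpp in (8, 4, 2, 1):
    size = FIELD_PIXELS * bpp // 8
    verdict = "fits" if size <= HUB_RAM else "does not fit"
    print(f"{bpp} bpp: {size:6d} bytes, {verdict} in hub RAM")
```

At 8 bits per pixel a field is 153,600 bytes, nearly five times the hub; only a 1-bit-per-pixel field (19,200 bytes) fits, which is why streaming the data out rather than buffering whole fields is the workable approach.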
I'm currently working on a project that needs an image sensor. I'm tentatively going to use a Taos linear image sensor, but the part I want is not being made at the moment because of the Japanese tsunami. Supposedly a replacement part is coming, but I haven't found out when it will arrive. I'm going to continue developing with that part anyway for now, because it is easy to use and I already have code written for it. If the replacement doesn't show up soon, I may switch to an Omnivision part, since a 2D RGB array seems like a natural upgrade for my product anyway, and I don't have to use all of its features right away. If I were to do that, I would use the OV7670 sensor, which is 640x480 full color. It seems like it would be pretty easy to interface with.
Do you have anything serious in mind that you want video for? Maybe we could work together on this forum to develop code for the Omnivision part? The Prop I should just be able to deal with some of the stuff that I would eventually like to do, and it will be an easy port to the Prop II once that comes out.
I have a thread on the Project forum about it.
I thought it would be easier on the Propeller (and my limited programming skills) to handle a smaller set of pixels. I've modified Hanno's code to write a 120-byte array. These 120 bytes represent the brightness of 120 pixels. I can then display these pixels on an LED array I made using shift registers. I personally think it is super cool. I'm still a bit amazed I got it to work as quickly as I did (which is due to Hanno's great tutorial and code).
Post #5 of the above linked thread has a picture of the display showing what the small B&W camera sees. High contrast images are easier to see. I wrote the word "HI" on a white-board to use as a test image.
I like Jack's plan of using a 640 x 480 color camera (I plan to find more information about the Omnivision part). I know of at least two other forum members who have similar plans of using a color camera with the Propeller. I think with that many pixels, some sort of fast external memory would be useful. I've wondered if an 8-bit data connection to external memory (allowing data to be read and written a byte at a time) would be fast enough to run machine vision algorithms on the data coming from a color camera.
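Whether a byte-wide external RAM keeps up can be estimated. This sketch assumes 640 x 480 at 30 fps, RGB565 (2 bytes/pixel), and roughly 8 clocks of PASM per byte moved at 80 MHz — all of those are assumptions, not measured figures:

```python
# Rough throughput check for byte-wide external RAM.
pixels_per_sec = 640 * 480 * 30        # 9,216,000 pixels/s at full frame rate
bytes_per_sec = pixels_per_sec * 2     # assumed RGB565, 2 bytes/pixel
xfer_Bps = 80_000_000 / 8              # assumed ~8 clocks per byte transferred

print(bytes_per_sec / 1e6, "MB/s needed")     # ~18.4
print(xfer_Bps / 1e6, "MB/s available")       # 10.0
```

Under these assumptions a byte-wide interface falls short of full frame rate in full color, but halving the frame rate, dropping to 8-bit grayscale, or processing a sub-window would bring it within reach.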
For now, I'm having fun with my 120 pixels.
Duane
Duane Degn, that's a cool project.