
Can a Propeller do NTSC Video Capture?

Martin_H Posts: 4,051
edited 2011-05-21 18:50 in Propeller 1
Given the two Propeller-based oscilloscopes I've seen, I've been wondering whether video capture is possible using a Propeller chip. Since the Propeller can generate video, might it also be able to digitize it? Obviously you would need PASM and fast A/D conversion. You also wouldn't store the video, but send it as a data stream to a computer.

I believe NTSC has 345,600 pixels per frame at 29.97 frames per second, so that's 10,357,632 measurements per second. For a chip running at 80 MHz that seems within the realm of the possible. One cog would sample and write to a buffer, while another cog would read from the buffer and send the data to a computer. The buffer would need to be circular to stay within the tight RAM constraints.

The next problem is sending the digital signal to a computer. USB 2.0 is fast enough, but is there a choke point in getting the data out of the Propeller?
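
Doing the budget in rough numbers (the 3 Mbaud serial link below is just an assumed figure for illustration, not a measured one):

    # Back-of-the-envelope budget for 8-bit NTSC capture on an 80 MHz Propeller.
    PIXELS_PER_FRAME = 345_600                   # 720 x 480 active pixels
    FRAME_RATE = 29.97                           # NTSC frames per second
    SAMPLE_RATE = PIXELS_PER_FRAME * FRAME_RATE  # ~10.36 M samples/s

    CLOCK_HZ = 80_000_000                        # Propeller system clock
    print(f"clocks per sample: {CLOCK_HZ / SAMPLE_RATE:.1f}")   # ~7.7

    # One 8-bit sample per pixel over an assumed 3 Mbaud link (10 bits per byte on the wire).
    LINK_BAUD = 3_000_000
    link_bytes_per_s = LINK_BAUD / 10
    print(f"fraction of the stream the link carries: {link_bytes_per_s / SAMPLE_RATE:.3f}")  # ~0.029

Since most PASM instructions take 4 clocks, that ~7.7 clocks per sample is only about two instructions per pixel, so a cog doing nothing but sampling looks mandatory.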

Comments

  • Rayman Posts: 14,877
    edited 2011-05-20 19:04
    Perry did a nice job with "stupid video capture". It captures B&W with just a couple of cheap components.
    He recently posted an update with a much nicer version.
  • Mike Green Posts: 23,101
    edited 2011-05-20 21:12
    You might look at Hanno's ViewPort, which includes video capture with transmission to a PC.
  • Tubular Posts: 4,717
    edited 2011-05-20 22:14
    Hanno did a stand-alone tutorial here too.
  • kwinn Posts: 8,697
    edited 2011-05-20 23:06
    Martin_H wrote: »
    I believe NTSC has 345,600 pixels per frame at 29.97 frames per second, so that's 10,357,632 measurements per second. For a chip running at 80 MHz that seems within the realm of the possible. One cog would sample and write to a buffer, while another cog would read from the buffer and send the data to a computer. The buffer would need to be circular to stay within the tight RAM constraints.

    The next problem is sending the digital signal to a computer. USB 2.0 is fast enough, but is there a choke point in getting the data out of the Propeller?

    While it might be possible for the Prop to acquire data at that rate, I do not believe it is possible to also process, transmit, or store it, even to a local device such as an SD card or external RAM.

    There! I said it. Prove me wrong! Go ahead! Make my day!
  • Martin_H Posts: 4,051
    edited 2011-05-21 03:28
    Thanks everyone for the great links. I'll file this away for future projects. I was wondering if building some sort of machine vision system was possible and it looks like it is.
  • Jack Buffington Posts: 115
    edited 2011-05-21 13:51
    It is certainly possible. The issue, as you pointed out, is processing all of that data. I built a Propeller-based solution in the past that captured portions of a VGA (640x480 at 60 Hz) signal and then passed them on uncompressed. I think I was able to capture up to 64 pixels in 16-bit color per scan line. I could have done more, except that my application required capturing from predefined sections of the screen that could be different for every scan line. You could do more if you dealt in 8-bit color and implemented some sort of simple RLE compression.

    Theoretically my system could have dealt with about 1/23rd of a VGA signal due to output bandwidth. I was using one serial port at 12.5 Mbps (I overclocked). NTSC has half the number of frames per second, so again theoretically, you could pass through maybe 1/12th of the signal.
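
    A quick sanity check on that 1/23rd figure (just arithmetic on the numbers above):

        # "About 1/23rd of a VGA signal" from the output bandwidth:
        vga_bits_per_s = 640 * 480 * 60 * 16   # 16-bit color pixel stream, ~295 Mbps
        serial_bps = 12_500_000                # the overclocked serial port mentioned above
        print(vga_bits_per_s / serial_bps)     # ~23.6, so roughly 1/23rd gets through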

    I did all of my capturing with one COG going full out, using one cycle per pixel. If I simplified my code to just capture a vertical area of the screen, I would guess I could capture maybe 100 to 150 pixels per scan line, perhaps more. The limitation was COG RAM. With the slower NTSC rate, maybe you could get the whole thing with one COG, but at the least you could dedicate COGs to different vertical areas of the scan lines to get the whole thing.

    The trouble with NTSC is that the image is interlaced. You aren't going to *really* capture the whole image, because HUB RAM can't hold all of one field while the other is being captured. Still, capturing a 640x240 field would be quite useful, and you could use the time taken by the other field for other processing.
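
    For reference, the HUB RAM constraint in numbers (assuming 8-bit grayscale samples):

        # Why one field won't fit: the Propeller 1 has 32 KB of hub RAM.
        HUB_RAM_BYTES = 32 * 1024
        field_bytes = 640 * 240                  # one interlaced field at 8 bits per pixel
        print(field_bytes, HUB_RAM_BYTES)        # 153600 vs 32768
        print(field_bytes / HUB_RAM_BYTES)       # ~4.7x more than the hub holds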


    I'm currently working on a project that needs an image sensor. I'm tentatively planning to use a Taos linear image sensor, but the one I want isn't being made at the moment because of the Japanese tsunami. Supposedly there is a replacement part coming, but I haven't found out when it will arrive. I'm going to continue developing with that part for now because it is easy to use and I already have code written for it. If the replacement doesn't show up soon, though, I may switch to an Omnivision part, since a 2D RGB array seems like a natural upgrade for my product anyway, and I don't have to use all of its features right away. If I were to do that, I would use the OV7670 sensor, which is 640x480 full color. It seems like it would be pretty easy to interface with.

    Do you have anything serious in mind that you want video for? Maybe we could work together on this forum to develop code for the Omnivision part? The Prop I should just be able to deal with some of the stuff I would eventually like to do, and it would be an easy port to the Prop II once that comes out.
  • Duane Degn Posts: 10,588
    edited 2011-05-21 15:13
    I've recently been experimenting with Hanno's machine vision method.

    I have a thread on the Project forum about it.

    I thought it would be easier on the Propeller (and my limited programming skills) to handle a smaller set of pixels. I've modified Hanno's code to write a 120-byte array. These 120 bytes represent the brightness of 120 pixels. I can then display these pixels on an LED array I made using shift registers. I personally think it is super cool. I'm still a bit amazed I got it to work as quickly as I did (which is due to Hanno's great tutorial and code).
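
    Conceptually, the display step is just a threshold over those 120 bytes. A rough sketch of the idea (the 15-wide row layout and the threshold value here are arbitrary choices for illustration):

        # Map 120 brightness bytes (0-255) onto 120 on/off LED states.
        def to_led_bits(pixels, threshold=128):
            return [1 if p >= threshold else 0 for p in pixels]

        # Group the bits into rows for a hypothetical 15-LED-wide array.
        def pack_rows(bits, width=15):
            return [bits[i:i + width] for i in range(0, len(bits), width)]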

    Post #5 of the above linked thread has a picture of the display showing what the small B&W camera sees. High-contrast images are easier to see. I wrote the word "HI" on a whiteboard to use as a test image.

    I like Jack's plan of using a 640 x 480 color camera (I plan to find more information about the Omnivision part). I know of at least two other forum members who have similar plans of using a color camera with the Propeller. I think with that many pixels, some sort of fast external memory would be useful. I've wondered whether an 8-bit data connection to external memory (allowing data to be read and written a byte at a time) would be fast enough to run machine vision algorithms on the data coming from a color camera.
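
    Some rough numbers behind that question (assuming two bytes per pixel and a full 30 frames per second, both just assumptions):

        # How fast would an 8-bit external memory interface have to be?
        bytes_per_frame = 640 * 480 * 2              # RGB565, 2 bytes per pixel
        write_bytes_per_s = bytes_per_frame * 30     # storing every frame, writes only
        CLOCK_HZ = 80_000_000
        print(write_bytes_per_s)                     # ~18.4 MB/s just to store the frames
        print(CLOCK_HZ / write_bytes_per_s)          # ~4.3 clocks per byte for a single cog

    Reading the data back out for processing would roughly double that, so it looks very tight for one cog.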

    For now, I'm having fun with my 120 pixels.

    Duane
  • Martin_H Posts: 4,051
    edited 2011-05-21 18:50
    Jack Buffington, I have no concrete plans at the moment, but I've seen videos of robots using the CMU cam to fetch balls. Rather than buy a turnkey solution, I would find it more interesting to understand the innards of such a system. I have a line scan camera, and at the moment I'm just trying to deal with sections of lines that are bar codes. So this is more of a future project than a near-term one, but I'm always up for collaboration when it gets closer.

    Duane Degn, that's a cool project.