Video read and processing question? — Parallax Forums

Video read and processing question?

TookingsTookings Posts: 18
edited 2007-10-02 17:18 in Propeller 1
I had a video processing question:

I've always wanted to build a device that would take an NTSC signal, split it into four quadrants, pixel-double each quadrant, and then send each quadrant out to 4 different NTSC outputs. (Or, possibly, just centered and boxed at original size. Think 4-player classic GoldenEye or Perfect Dark on the N64, with no radar, and no cardboard covering parts of the screen...)

The only thing I could find related to reading a video signal with the Prop was http://forums.parallax.com/forums/default.aspx?f=25&m=172358 -- and that kind of signal work is currently well beyond me. (I'm at about the reading/controlling-servos level... :)

I'm really not looking for a how-to answer unless it's been done, but if anyone has any ideas on where to start, or whether it even seems feasible, I'd appreciate any tips on what looks like a steep learning curve. (I'm assuming I would need other pre-processing ICs as described in that post, too...)

Thanks very much!

-Rick

Comments

  • deSilvadeSilva Posts: 2,967
    edited 2007-10-01 07:11
Such a project is not necessarily outside the scope of the Propeller. It's a little on the marginal side, but we especially like that, don't we :)

A black-and-white solution needs little external circuitry (a four-level ADC, using 4 op-amps from a quad pack for level detection

--- Edit: Thinking of it... you could also use 3 pairs of resistors as a voltage divider, but you'd have to boost the video signal to 3V first = one transistor

), a grayscale version (4 levels?) seems possible with little additional circuitry, and a color solution will need more...
You see, the analogue samples will arrive at around 4 MHz, giving you 5 instructions to process each one. And you must be absolutely synchronous within one video line...

    So first you have to choose an appropriate crystal, which might reduce the time you have to 4 instructions :-(
Second, you will certainly have to split the 4 MHz signal between 2 or 3 COGs working alternately.

So you could "shift in" 8 or 16 video samples and then use the time while the other COG works to store them at the appropriate place for the output (40 to 80 instructions of time...)
    You also will need 4 COGs for Video-Out.

    You will also need a full screen memory, so the "lower" monitors should know what to output when you are handling the "upper" part, and vice versa....

    There can also be some minor technical problems with interlacing...

Your prerequisites: very good knowledge of machine code and a good understanding of the video signal. Try deSilva's Tutorial :)
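The instruction budget above can be sanity-checked with a quick sketch (assuming an 80 MHz system clock and the Propeller's 4 clocks per PASM instruction; the 4 MHz sample rate is the figure from the post):

```python
# Back-of-the-envelope check of the "5 instructions per sample" figure.
# Assumptions: 80 MHz system clock, 4 clocks per PASM instruction,
# ~4 MHz effective sample rate for the incoming video.
CLOCK_HZ = 80_000_000
CLOCKS_PER_INSTRUCTION = 4
SAMPLE_RATE_HZ = 4_000_000

instructions_per_second = CLOCK_HZ // CLOCKS_PER_INSTRUCTION  # 20 MIPS per cog
instructions_per_sample = instructions_per_second // SAMPLE_RATE_HZ
print(instructions_per_sample)  # 5
```

A crystal chosen for an exact multiple of the sample rate eats into this budget, which is the "might reduce the time you have to 4 instructions" remark above.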

    Post Edited (deSilva) : 10/1/2007 8:05:29 AM GMT
  • Ken PetersonKen Peterson Posts: 806
    edited 2007-10-01 14:21
    Seems to me you would have to buffer each quadrant in order to generate four full-screen NTSC signals. I really doubt you can support color this way. The sampling rate would have to exceed 7 MHz.
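The 7 MHz figure follows from the NTSC color subcarrier and the Nyquist criterion (a sketch; 315/88 MHz is the standard NTSC subcarrier definition):

```python
# Why color needs a sampling rate above 7 MHz: the NTSC chroma subcarrier
# sits at 315/88 MHz (~3.58 MHz), and Nyquist requires sampling at more
# than twice the highest frequency of interest.
ntsc_subcarrier_hz = 315_000_000 / 88     # ~3.579545 MHz
nyquist_rate_hz = 2 * ntsc_subcarrier_hz  # ~7.16 MHz
print(round(nyquist_rate_hz / 1e6, 2))    # 7.16
```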

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔


The more I know, the more I know I don't know. Is this what they call Wisdom?
  • deSilvadeSilva Posts: 2,967
    edited 2007-10-01 16:18
Oh dear, it seems I forgot Nyquist :)
    But you can sample @ 10 MHz by
       AND pinmask, INA WC,NR   ' carry := state of the masked input pin (NR: no result stored)
       RCL theSignal, #1        ' rotate the carry into the sample word
       AND pinmask, INA WC,NR   ' each 2-instruction pair takes 8 clocks @ 80 MHz -> 10 MHz
       RCL theSignal, #1
       AND pinmask, INA WC,NR
       RCL theSignal, #1
       AND pinmask, INA WC,NR
       RCL theSignal, #1
       AND pinmask, INA WC,NR
       RCL theSignal, #1
       AND pinmask, INA WC,NR
       RCL theSignal, #1
       AND pinmask, INA WC,NR
       RCL theSignal, #1
       AND pinmask, INA WC,NR
       RCL theSignal, #1        ' 8 samples captured; hand off to the next COG
    



    And then the next COG takes over...
    But color seems definitely out....
    Maybe BTX has made some progress?
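A software model of the unrolled AND/RCL pairs above may make the mechanism clearer: each AND sets the carry from the masked input pin, and each RCL rotates that carry into the sample word, MSB-first. At 4 clocks per instruction, the 2-instruction pair repeats every 8 clocks of an 80 MHz clock, i.e. 10 MHz (a Python sketch, not Propeller code):

```python
# Software model of the unrolled AND/RCL sampler.
def sample_bits(pin_samples):
    """Shift a sequence of 1-bit pin samples into a 32-bit word, MSB-first,
    the way successive RCL theSignal, #1 instructions would."""
    the_signal = 0
    for bit in pin_samples:
        the_signal = ((the_signal << 1) | (bit & 1)) & 0xFFFFFFFF
    return the_signal

# 8 unrolled pairs capture 8 samples per handoff to the other COG.
print(bin(sample_bits([1, 0, 1, 1, 0, 0, 1, 0])))  # 0b10110010

# Rate check: 80 MHz / (2 instructions * 4 clocks) = 10 MHz.
print(80_000_000 // (2 * 4))  # 10000000
```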

Edit: Changed it for the better at Fred's request :)

    Post Edited (deSilva) : 10/1/2007 10:45:09 PM GMT
  • TookingsTookings Posts: 18
    edited 2007-10-01 17:33
Wow, this is making RC model control look easy... :) I'm certainly going to give it a shot -- but I think I'll wait until I can really work with the ASM first, as that seems a given prerequisite.

For the actual project -- yeah, color would be important. What could work is "just" splitting the video into two outputs, "blanking to black" the top half of the frame on one output and the bottom half on the other. No idea (personally) if that would be any easier, but it's probably what I'd try as my first step once I had some sense of what I was doing.

    Thanks again for all the help!

    -Rick
  • Harrison.Harrison. Posts: 484
    edited 2007-10-01 18:00
    That is an ingenious idea. It would be quite easy to use the Propeller to blank out 3 of the 4 quadrants. This would be like overlaying white/black over color or b/w video.

    All you would need is a video buffer/amp (I believe Maxim makes these) and a sync separator chip (like the EL1883). Then use the Propeller to count the video lines and blank out what you don't want shown. The issue here is you probably can't black out the entire screen easily without killing the sync signals.
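The line-counting scheme can be sketched as follows (the line and pixel counts are illustrative nominal values, not exact NTSC timing; the function only decides keep-vs-blank for one chosen quadrant):

```python
# Sketch of the line-counting idea: given the current scan line and pixel
# position, decide whether to pass video through or force black.
VISIBLE_LINES = 480   # two interlaced fields of ~240 visible lines each
VISIBLE_PIXELS = 640  # nominal active pixels per line

def keep_quadrant(line, pixel, quadrant):
    """quadrant: 0=top-left, 1=top-right, 2=bottom-left, 3=bottom-right.
    Returns True if (line, pixel) lies inside the selected quadrant."""
    top = line < VISIBLE_LINES // 2
    left = pixel < VISIBLE_PIXELS // 2
    return {0: top and left, 1: top and not left,
            2: not top and left, 3: not top and not left}[quadrant]

# Blank everything except the top-left quadrant:
print(keep_quadrant(100, 100, 0))  # True  -> pass video through
print(keep_quadrant(100, 400, 0))  # False -> force black
```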

    Link of Interest: http://www.hittconsulting.com/products/hcosd/ .

    Post Edited (Harrison.) : 10/1/2007 6:07:01 PM GMT
  • Phil Pilgrim (PhiPi)Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2007-10-01 19:08
You'll also need a DC restorer on the incoming signal to establish a fixed black level. You can do this by AC-coupling the incoming video with a 1uF cap and using an analog switch, triggered by the '1883's composite sync output, to connect the downstream side of the cap to a 1.2V supply. This gives a more predictable output than a simple diode restorer would.

    Also, you can ignore the 1883's vertical sync output to save a pin on the prop. The composite sync output has enough info buried in the pulse widths to know when the vertical sync interval is occurring.
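The pulse-width trick can be sketched like this (the microsecond thresholds are approximate nominal NTSC values, chosen for illustration only):

```python
# Recovering vertical sync from composite sync alone: the three pulse
# types in an NTSC composite sync signal have distinct widths, so a
# width measurement is enough to classify them. Nominal values:
#   equalizing pulses ~2.3 us, horizontal sync ~4.7 us,
#   broad (serrated) vertical-sync pulses ~27 us.
def classify_sync_pulse(width_us):
    """Classify a composite-sync pulse by its measured low-time width."""
    if width_us < 3.5:
        return "equalizing"
    if width_us < 10.0:
        return "horizontal"
    return "vertical"  # broad pulse => inside the vertical sync interval

print(classify_sync_pulse(4.7))   # horizontal
print(classify_sync_pulse(27.1))  # vertical
```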

    -Phil

    Update: Grounding the cap will make it difficult to employ a transistor output stage due to the base-emitter forward voltage. Keeping the sync tips at 1.2V makes the output part easier. A 1.2V reference can be built using two forward-biased diodes in series to ground or an LM317L set to its lowest voltage output. -P.

    Post Edited (Phil Pilgrim (PhiPi)) : 10/1/2007 7:28:19 PM GMT
  • deSilvadeSilva Posts: 2,967
    edited 2007-10-01 22:41
    Fred Hawkins said...
    deSilva, what's SLC?
It's a typo that survived so and so many pastes.
I'll fix it :)
  • TookingsTookings Posts: 18
    edited 2007-10-01 23:59
I had an idea... now, ignoring bandwidth considerations for a moment, what about pushing the video into the digital world for processing, which I understand better? What if you used this approach:

    NTSC --> FrameGrabber --> Color Bitmap/Matrix of frame --> Digital processing of pixel data --> Pixel data written to NTSC --> NTSC out

1. Framegrabber loads matrix "buffer A" with a frame (and then starts to load buffer B while 2-5 complete)
2. While buffer B is filled, multiple processes read from buffer A (4 processes, for a quad output)
3. Digital modifications are done to the local pixel matrix
4. The 4 processes write the modified image matrix to a local buffer
5. The local buffer is converted back to NTSC output
6. Repeat, swapping frame buffers A and B
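Steps 1-6 amount to classic double buffering; here is a minimal sketch (toy frames and a stand-in per-quadrant function, nothing Propeller-specific):

```python
# Double-buffered pipeline sketch: grab the next frame into one buffer
# while four workers (one per quadrant) read the previously completed one.
def run_pipeline(frames, process_quadrant):
    buf_a, buf_b = None, None
    outputs = []
    for frame in frames:
        buf_a, buf_b = frame, buf_a  # grab new frame, keep previous one
        if buf_b is not None:
            # four "processes", one per quadrant, read the completed buffer
            outputs.append([process_quadrant(buf_b, q) for q in range(4)])
    return outputs

# Toy frame = an integer; toy processing just tags it with its quadrant.
result = run_pipeline([1, 2, 3], lambda f, q: (f, q))
print(result)  # frame 3 is still being "grabbed" when the input ends
```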

Storing all that data and doing it in the time of one frame is probably optimistic... (Although the overlays are probably more feasible -- being able to do something like this would allow lots of processing possibilities. For processor-intensive applications you could process only every Xth frame and output the same finished frame X times -- if choppiness wasn't an issue... or even one-shot capture for analysis.)

    Is this approach worth even thinking about? (I think it's time I go do some of my own research too...I know I've seen people do the pieces, I just haven't seen it all together.)
  • deSilvadeSilva Posts: 2,967
    edited 2007-10-02 06:22
You can try a "feasibility study"!

The two main concerns, RAM and speed per COG, can easily be overcome by "undersampling": just sample 128 pixels per line, every other line, in the "upper field" only.

    This will give you much headroom in any respect!

You might want to add an anti-aliasing low-pass filter for the reduced pixel clock on the luma signal. Color, however, will not work with this simple scheme.
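The memory arithmetic behind the undersampling suggestion works out as follows (the visible-line count per field is an approximate nominal figure):

```python
# Memory budget for undersampling: 128 pixels per line, every other line,
# upper field only, assuming 1 byte per pixel (e.g. 8-bit grayscale).
PIXELS_PER_LINE = 128
FIELD_LINES = 240                 # roughly the visible lines in one NTSC field
lines_stored = FIELD_LINES // 2   # every other line -> 120
bytes_per_pixel = 1
frame_bytes = PIXELS_PER_LINE * lines_stored * bytes_per_pixel
print(frame_bytes)  # 15360 -- comfortably inside the Prop's 32 KB hub RAM
```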
  • TookingsTookings Posts: 18
    edited 2007-10-02 17:18
Thanks -- talking this out makes it seem much more approachable, at least to start by working "undersampled" as you said.

Harrison, thanks for that link. Looks like a great place to get some similar examples, especially with code and schematics.

    Post Edited (Tookings) : 10/2/2007 5:26:20 PM GMT