Video read and processing question?
Tookings
Posts: 18
I had a video processing question:
I've always wanted to build a device that would take an NTSC signal, split it into four quadrants, pixel double each quadrant, and then send each quadrant out to 4 different NTSC outputs. (Or, possibly just centered and boxed at original size. Think 4 player classic GoldenEye or Perfect Dark on the N64, with no radar, and no cardboard covering parts of the screen... )
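To make the goal concrete, here is a minimal Python sketch (not Propeller code — just an illustration of the operation) of splitting a frame into quadrants and pixel-doubling one quadrant back to full size. The frame representation and all names are illustrative:

```python
def split_quadrants(frame):
    """Split a 2D pixel array into (top_left, top_right, bottom_left, bottom_right)."""
    h, w = len(frame), len(frame[0])
    top, bottom = frame[:h // 2], frame[h // 2:]
    return ([row[:w // 2] for row in top],
            [row[w // 2:] for row in top],
            [row[:w // 2] for row in bottom],
            [row[w // 2:] for row in bottom])

def pixel_double(quad):
    """Double each pixel horizontally and each line vertically,
    restoring a quadrant to full-frame size."""
    out = []
    for row in quad:
        doubled = [p for p in row for _ in (0, 1)]
        out.append(doubled)
        out.append(list(doubled))
    return out

# A tiny 4x4 "frame" with a distinct value per quadrant:
frame = [[1, 1, 2, 2],
         [1, 1, 2, 2],
         [3, 3, 4, 4],
         [3, 3, 4, 4]]
tl, tr, bl, br = split_quadrants(frame)
full_tl = pixel_double(tl)   # 4x4 again, all 1s
```

The same two steps, done per-field in hardware terms, are what the four outputs would each need.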
The only thing I could find related to reading a video signal with the Prop was http://forums.parallax.com/forums/default.aspx?f=25&m=172358 -- and that kind of signal work is currently well beyond me. (I'm at about the reading/controlling servos level... :) )
I'm really not looking for a how-to answer unless it's been done, but if anyone has any ideas on where to start, or whether or not it even seems feasible, I'd appreciate any tips on what looks like a steep learning curve. (I'm assuming I would need to use other pre-processing ICs as described in that post, too...)
Thanks very much!
-Rick
Comments
A black and white solution needs little external circuitry (a four-level ADC, using 4 op-amps from a quad pack for level detection
--- Edit: Thinking of it... you could also use 3 pairs of resistors as a voltage divider, but you'd have to boost the video signal to 3V first = one transistor
), a grayscale version (4 levels?) seems possible with little additional circuitry; a color solution will need more...
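The resistor-ladder idea amounts to a tiny 2-bit flash ADC; a sketch of the thresholds (the even spacing and the 3 V full scale are assumptions from the post, not a worked design):

```python
# Three comparator thresholds for a 4-level (2-bit) flash ADC on a
# 0..3 V boosted video signal; evenly spaced taps are an assumption.
vmax = 3.0
thresholds = [vmax * k / 4 for k in (1, 2, 3)]   # 0.75, 1.5, 2.25 V

def quantize(v):
    """Return the 2-bit level (0..3) for input voltage v."""
    return sum(v > t for t in thresholds)

print(thresholds)                                # [0.75, 1.5, 2.25]
print([quantize(v) for v in (0.5, 1.0, 2.0, 2.9)])  # [0, 1, 2, 3]
```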
You see, analog samples will arrive at around 4 MHz, giving you 5 instructions to process each one. And you must be absolutely synchronous within one video line.
So first you have to choose an appropriate crystal, which might reduce the time you have to 4 instructions :-(
Second, you will certainly have to split the 4 MHz stream between 2 or 3 COGs, alternating between them
So you could "shift in" 8 or 16 video samples and then use the time while the other COG works to store them at the appropriate place for the output (40 to 80 instructions of time...)
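The instruction budget works out as follows (the 80 MHz clock and 4 clocks per PASM instruction are assumptions for a standard 5 MHz crystal with a 16x PLL):

```python
clock_hz = 80_000_000        # assumed: 5 MHz crystal * 16x PLL
clocks_per_instr = 4         # most PASM instructions take 4 clocks
sample_rate_hz = 4_000_000   # approximate luma sample rate

instr_per_sample = clock_hz // clocks_per_instr // sample_rate_hz
print(instr_per_sample)      # 5 instructions per sample for one COG

# Alternating COGs: while the other COG shifts in a burst of 16
# samples, this COG gets 16 sample periods to store its own burst.
burst = 16
cogs = 2
instr_per_burst = instr_per_sample * burst * (cogs - 1)
print(instr_per_burst)       # 80 instructions of "free" time per burst
```

An 8-sample burst gives the 40-instruction figure the same way.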
You also will need 4 COGs for Video-Out.
You will also need a full frame buffer, so the "lower" monitors know what to output while you are handling the "upper" part, and vice versa....
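The frame buffer is the real squeeze; a rough sizing sketch (the resolution and bit depth are illustrative, and 32 KB is the Propeller's hub RAM):

```python
hub_ram_bytes = 32 * 1024    # Propeller P8X32A hub RAM

def frame_bytes(width, height, bits_per_pixel):
    """Bytes needed to buffer one frame at the given resolution/depth."""
    return width * height * bits_per_pixel // 8

full = frame_bytes(256, 192, 4)   # 4-bit grayscale, full frame
print(full)                       # 24576 bytes: fits, but leaves only 8 KB
half = frame_bytes(128, 96, 4)    # an undersampled version
print(half)                       # 6144 bytes: comfortable
print(full <= hub_ram_bytes, half <= hub_ram_bytes)
```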
There can also be some minor technical problems with interlacing...
Your prerequisites: Very good knowledge of machine code, good understanding of the video signal. Try deSilva's Tutorial
Post Edited (deSilva) : 10/1/2007 8:05:29 AM GMT
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
The more I know, the more I know I don't know. Is this what they call Wisdom?
But you can sample @ 10MHz by
And then the next COG takes over...
But color seems definitely out....
Maybe BTX has made some progress?
Edit: Changed it for the better on request from Fred
Post Edited (deSilva) : 10/1/2007 10:45:09 PM GMT
For the actual project -- yeah, color would be important. What could work is "just" splitting the video into two outputs, "blanking to black" the top half of the frame on one output and the bottom half on the other. No idea (personally) if that would be any easier, but it's probably what I'd try as my first step once I had some sense of what I was doing.
Thanks again for all the help!
-Rick
All you would need is a video buffer/amp (I believe Maxim makes these) and a sync separator chip (like the EL1883). Then use the Propeller to count the video lines and blank out what you don't want shown. The issue here is you probably can't black out the entire screen easily without killing the sync signals.
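The line-counting scheme above can be sketched as a per-line decision. The constants below are round nominal numbers (roughly 262 lines per NTSC field, with the first ~21 lines being the vertical blanking interval), not measured values:

```python
LINES_PER_FIELD = 262   # approximate NTSC field length
FIRST_ACTIVE = 22       # assumed first active line after the VBI

def blank_this_line(line, show_top):
    """Return True if this line's active video should be forced to black.
    Sync/VBI lines are always passed through untouched so the monitor
    doesn't lose lock -- the concern raised above."""
    if line < FIRST_ACTIVE:
        return False                      # never touch sync/VBI lines
    mid = FIRST_ACTIVE + (LINES_PER_FIELD - FIRST_ACTIVE) // 2
    return line >= mid if show_top else line < mid

# Output A shows the top half, output B the bottom half:
print(blank_this_line(30, show_top=True))    # False: top half stays
print(blank_this_line(200, show_top=True))   # True: bottom half blanked
```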
Link of Interest: http://www.hittconsulting.com/products/hcosd/ .
Post Edited (Harrison.) : 10/1/2007 6:07:01 PM GMT
Also, you can ignore the 1883's vertical sync output to save a pin on the prop. The composite sync output has enough info buried in the pulse widths to know when the vertical sync interval is occurring.
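Recovering vertical sync from the composite sync alone comes down to pulse-width classification. The microsecond figures below are nominal NTSC values (equalizing pulses ~2.3 µs, horizontal sync ~4.7 µs, broad vertical-serration pulses ~27 µs); the decision boundaries are illustrative:

```python
def classify_sync_pulse(width_us):
    """Classify a low-going composite sync pulse by its width in us.
    Nominal NTSC: equalizing ~2.3, horizontal ~4.7, broad ~27."""
    if width_us < 3.5:
        return "equalizing"
    if width_us < 10.0:
        return "horizontal"
    return "broad"        # broad pulses occur only in the vertical interval

def in_vertical_sync(recent_widths_us):
    """Vertical sync is occurring if the latest pulse was a broad one."""
    return classify_sync_pulse(recent_widths_us[-1]) == "broad"

print(classify_sync_pulse(4.7))          # horizontal
print(in_vertical_sync([4.7, 4.7, 27]))  # True
```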
-Phil
Update: Grounding the cap will make it difficult to employ a transistor output stage due to the base-emitter forward voltage. Keeping the sync tips at 1.2V makes the output part easier. A 1.2V reference can be built using two forward-biased diodes in series to ground or an LM317L set to its lowest voltage output. -P.
Post Edited (Phil Pilgrim (PhiPi)) : 10/1/2007 7:28:19 PM GMT
I'll fix it
NTSC --> FrameGrabber --> Color Bitmap/Matrix of frame --> Digital processing of pixel data --> Pixel data written to NTSC --> NTSC out
1. Framegrabber loads matrix "buffer A" with a frame (and then starts to load buffer B, while 2-5 complete)
2. While buffer B is filled, multiple processes read from buffer A (4 processes, for a quad output)
3. Digital modifications are done to the local pixel matrix
4. The 4 x processes write the modified image matrix to a local buffer
5. The local buffer is converted back to NTSC output
6. Repeat swapping frame Buffer A and Buffer B
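The six steps above can be simulated in miniature. Everything here (the frame values, the stand-in "processing" step) is illustrative; the point is only the A/B swap:

```python
def process(frame):
    """Stand-in for steps 2-4: here, just invert 8-bit pixels."""
    return [255 - p for p in frame]

def run_pipeline(incoming_frames):
    buffers = [None, None]    # buffer A and buffer B
    fill = 0                  # index being filled by the "frame grabber"
    out = []
    for frame in incoming_frames:
        buffers[fill] = frame            # step 1: grabber fills one buffer
        ready = buffers[1 - fill]        # the other buffer is stable
        if ready is not None:
            out.append(process(ready))   # steps 2-5 on the stable buffer
        fill = 1 - fill                  # step 6: swap A and B
    return out

frames = [[0, 10], [20, 30], [40, 50]]
print(run_pipeline(frames))   # [[255, 245], [235, 225]]
```

Note the output lags the input by one frame, which is inherent to the double-buffered scheme.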
Storing all that data, and doing it in the time of one frame, is probably optimistic... (Although the overlays are probably more feasible -- being able to do something like this would allow lots of processing possibilities. For processor-intensive applications you could process only every Xth frame and output the same finished frame X times -- if choppiness wasn't an issue... or even one-shot capture for analysis purposes.)
Is this approach worth even thinking about? (I think it's time I go do some of my own research too...I know I've seen people do the pieces, I just haven't seen it all together.)
Both main concerns, RAM and speed per COG, can be easily overcome by "undersampling": just sample 128 pixels per line, every other line, in the "upper field" only.
This will give you much headroom in any respect!
You might want to add an anti-aliasing low-pass filter for the reduced pixel clock on the luma signal. Color, however, will not work with this simple scheme.
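The undersampling numbers work out like this (240 active lines per field and 4-bit samples are assumptions for illustration):

```python
# Full-rate capture of one field (illustrative figures):
full_px = 256 * 240            # samples for a "full" field
# Undersampled: 128 pixels/line, every other line, one field only:
under_px = 128 * (240 // 2)

bits_per_px = 4                # assumed 4-bit grayscale
print(under_px * bits_per_px // 8)   # 7680 bytes: an easy fit in hub RAM
print(full_px / under_px)            # 4.0: 4x less data, 4x the headroom
```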
Harrison, thanks for that link. Looks like a great place to get some similar examples from, esp w/ code and schematic.
Post Edited (Tookings) : 10/2/2007 5:26:20 PM GMT