Propeller video question
luminas
Posts: 3
Hi all, my first post here :)
I am planning to use the Propeller as a video processor: it receives a composite video signal, crops it to a specific region, zooms that region to fill the screen, and finally sends the composite signal back out.
Can it be done with the Propeller?
Thank you
Jon
Comments
The main hurdles I see are:
1) the raw bandwidth requirements (medium-resolution video output is a challenge on the Propeller, especially if you want lots of colour control; a rough estimate follows after this list),
2) high-speed bulk memory access (the Propeller has no dedicated memory interface, so all memory accesses have to be bit/byte/word bashed in software, and you'll need most of the Propeller's 32 KB of RAM just for the code itself),
3) DSP-style scaling calculations (these could be split across cogs, but I think you'd be running out of cogs).
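To put a number on point 1, here is a rough back-of-the-envelope estimate in C. The 2 bytes per pixel is an assumption (roughly a 4:2:2-style representation); the exact figure depends on how the samples are encoded, but the order of magnitude is what matters:

#include <stdio.h>

/* Rough bandwidth estimate for full-resolution PAL video.
   Assumes 720x576 active pixels, 25 frames/s and 2 bytes per pixel
   (roughly a 4:2:2-style representation) - illustrative numbers only. */
int main(void)
{
    const long width = 720;
    const long height = 576;
    const long fps = 25;
    const long bytes_per_pixel = 2;

    long bytes_per_frame = width * height * bytes_per_pixel;  /* 829,440 bytes        */
    long bytes_per_sec = bytes_per_frame * fps;                /* 20,736,000 (~20 MB/s) */

    printf("per frame: %ld bytes\n", bytes_per_frame);
    printf("per second, one direction: %ld bytes\n", bytes_per_sec);
    printf("in + out: %ld bytes\n", 2 * bytes_per_sec);        /* ~41 MB/s both ways   */
    return 0;
}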
Don't get me wrong, some variation of what you're wanting to do may be possible, but it would probably take a very lateral approach. If you're expecting to hook up ADCs, DACs and some memory in the middle then you should expect some challenges.
If there was some fancy chip that could do the zooming and scaling for you (with say an SPI or I2C interface), then you could use the propeller as a controller with the Bean's OSD (video overlay) component to give the user an onscreen user interface.
I'd like to work with color PAL (720 x 576) composite.
So what chip would you suggest that can do the crop & scale? A simple chip that can be controlled over I2C would be preferable :)
Jon
Leon
These days these functions are done as part of a larger ASIC, usually one that handles HDMI as well and is normally destined for set-top boxes. Manufacturers such as Zoran or Silicon Image come to mind, but they are not too interested in selling parts in small quantities.
If I were to tackle this type of problem, an FPGA would be the best candidate, as it can handle the high bandwidth required. There are many demo boards and downloadable software readily available. BUT this is not a trivial task.
On the front end you'll need a flash converter (ADC), a sync separator, and a video PLL.
First, you're going to need to take the analog video and digitize it. For colour, one way is to sync with the colorburst (4.43361875 MHz) and sample at 4x the colorburst (17.734475 MHz) with an external ADC to extract Y+U, Y+V, Y-U, Y-V. You'd also need to somehow sync the Prop's sampling with the ADC.
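If it helps, here is the arithmetic behind that sampling scheme as a minimal C sketch. It assumes the sample clock really is locked to the burst so that each group of four samples comes out as Y+U, Y+V, Y-U, Y-V, and it ignores the PAL line-by-line V phase alternation, which a real decoder would have to track:

/* Decode one group of four composite samples taken at 4x the PAL
   colorburst frequency (4.43361875 MHz burst, ~17.73 MHz sampling).
   Assumes the sampling phase is locked so that the group is
   s0 = Y+U, s1 = Y+V, s2 = Y-U, s3 = Y-V.
   The PAL V-switch (V sign alternating every line) is ignored here. */
typedef struct { int y; int u; int v; } yuv_t;

static yuv_t decode_group(int s0, int s1, int s2, int s3)
{
    yuv_t out;
    out.y = (s0 + s1 + s2 + s3) / 4;  /* chroma cancels over the group of four */
    out.u = (s0 - s2) / 2;            /* U rides on the 0/180 degree pair      */
    out.v = (s1 - s3) / 2;            /* V rides on the 90/270 degree pair     */
    return out;
}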
Alternatively, there might be an analog part out there which would filter & separate the composite signal into Y, U & V, which the Prop could then sample itself (see the counter application note), though at fairly low resolution.
Sync detection & separation could be done on the Prop (although you might still need an external sync separator to gate the signal to an external colorburst PLL). The next challenge would then be scaling the image and preparing it for output. If you have Y+/-U/V samples, then you need to replicate the correct bytes. With separate Y,U,V samples, you need to combine them in the correct order and generate the video at 4x colorburst. That might be easier to do.
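For the horizontal part, nearest-neighbour replication is about the cheapest scheme that could work. A sketch in C (the widths and buffers here are only illustrative, not tied to any particular sample format):

#include <stdint.h>

/* Nearest-neighbour horizontal scale of one line of luma samples.
   The 16.16 fixed-point step avoids a divide per pixel; pixels get
   replicated when zooming in and dropped when shrinking. */
static void scale_line(const uint8_t *src, int src_w,
                       uint8_t *dst, int dst_w)
{
    uint32_t step = ((uint32_t)src_w << 16) / (uint32_t)dst_w;
    uint32_t pos = 0;
    for (int x = 0; x < dst_w; x++) {
        dst[x] = src[pos >> 16];
        pos += step;
    }
}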
Oh drat. How the heck is the output going to vertically scale the input? Best case, the system is going to need to buffer as many input lines as are shown in the output. That's really going to hurt with only 32 KB of HUB RAM. It also means that either the input routine is going to have to handle the sync detection, or you're going to need a separate cog just to do sync detection and image selection. (Hmm... that might be the better way to go.)
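A sync-only cog would not need to do much more than time how long the digitized signal sits below a sync-tip threshold. A rough sketch of the idea in C, with the threshold and pulse-length limits as placeholders that would have to be tuned to the ADC scaling and sample rate:

#include <stdint.h>

/* Crude sync separation on 5-bit ADC samples (0..31).
   SYNC_LEVEL and the duration limits are placeholders; a ~4.7 us
   pulse is horizontal sync, much longer pulses belong to vertical sync. */
#define SYNC_LEVEL     4    /* samples at or below this count as sync tip */
#define HSYNC_MIN_SAMP 8    /* roughly 4.7 us at ~2.5 MHz sampling        */
#define VSYNC_MIN_SAMP 60   /* anything this long is a vertical pulse     */

typedef enum { SYNC_NONE, SYNC_H, SYNC_V } sync_t;

static sync_t classify_pulse(const uint8_t *samples, int n, int *i)
{
    while (*i < n && samples[*i] > SYNC_LEVEL) (*i)++;   /* find sync tip */
    int start = *i;
    while (*i < n && samples[*i] <= SYNC_LEVEL) (*i)++;  /* measure pulse */
    int len = *i - start;
    if (len >= VSYNC_MIN_SAMP) return SYNC_V;
    if (len >= HSYNC_MIN_SAMP) return SYNC_H;
    return SYNC_NONE;
}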
Hmm... I wonder whether it would make sense to try to handle the B&W case first, and then try to enhance that for color after you get it working.
Input -> B&W LP filter -> 2.5 MHz 5-bit ADC cog -> sync, vertical crop, horizontal scale cog -> HUB RAM field buffer (133 x 246 max) -> PAL output cog -> output DAC -> output
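The 133 x 246 figure appears to be sized so that a one-byte-per-pixel field buffer just squeezes into the 32 KB of HUB RAM; a quick sanity check in C (ignoring whatever the code and the other cogs still need):

#include <stdio.h>

/* Sanity check on the field buffer: 133 x 246 one-byte pixels
   against the Propeller's 32 KB HUB RAM. */
int main(void)
{
    const int hub_ram = 32 * 1024;   /* 32768 bytes */
    const int buf = 133 * 246;       /* 32718 bytes */
    printf("buffer = %d bytes, HUB RAM = %d bytes, spare = %d bytes\n",
           buf, hub_ram, hub_ram - buf);
    return 0;
}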
I like the idea of doing it with a Prop... first of all, you will get plenty of help... as this thread demonstrates. The "getting help" aspect of Propeller design is sometimes the most compelling argument for "doing it" with a Propeller :)
I'm not an engineer... but I tried to ask an engineering-level question once :)
Basically I asked if one could "gang up" on a signal, by using multiple props with a common clock. The answer I got was "yes." So, using that logic, Mike's answer in this context would be "16 gray levels per Prop (version 1)."
I find the Prop to be a perfect heuristic device... and a project like yours seems to be a perfect heuristic approach to signal analysis.
Look at http://forums.parallax.com/forums/default.aspx?f=25&p=1&m=179824... this is a multiprop system for producing images implemented using one master Prop and 16 slaves.
It makes sense to me that a similar set-up could be used to do all kinds of signal analysis, which require more bandwidth and memory than are available from a single Prop.
I agree that there might be other more direct ways to accomplish what you describe... but putting together a Propeller system to do it will generate a host of Spin-offs that are uniquely your own.
Good luck, and PLEASE keep us posted on your progress.
Rich
There are now two options for developing a low-cost, parallelized system... the Proto Board and a new fully socketed rapid prototyping board... http://www.parallax.com/detail.asp?product_id=32202
Well, it seems that I may change the project goal, as this appears to be very hard for a beginner to do.
I think there were some analog chips that could perform analog video manipulation; could you point me to where I can find those chips?
Thanks,
Jon