View Full Version : Propeller Backpack: COLOR NTSC Capture

Phil Pilgrim (PhiPi)
10-19-2011, 03:24 AM
In a previous thread (http://forums.parallax.com/showthread.php?135214-Propeller-Backpack-Capture-NTSC-Video), I presented a grayscale NTSC image capture program for the Propeller Backpack, along with a hint that color capture might be possible. Indeed it is, with the same circuitry. The only thing I changed was to replace the 330-ohm external resistor with 560 ohms. This permits a lower gain for the sigma-delta ADC, enabling it to capture more detail from the chroma burst at the beginning of each line. Here are a couple of the images I've captured from the system:



For years (literally), I was hung up on the notion that you needed to genlock to the NTSC color burst to make color capture possible. I know now that this is not necessary. Borrowing from the work I did on the Propeller AM receiver (http://forums.parallax.com/showthread.php?105674-Hook-an-antenna-to-your-Propeller-and-listen-to-the-radio!-(New-shortwave-prog&highlight=hook+antenna), I realized that all that was necessary was an I/Q demodulator (i.e. synchronous detector) to sample the color burst at the beginning of each line at the chroma frequency (3.579545 MHz) and then again for each pixel in the line. Once you have the I and Q components of each, you can compute the chroma (I and Q in the YIQ color space) from the equations:

hue = atan2(qburst, iburst) - atan2(qpixel, ipixel)
saturation = sqrt(ipixel^2 + qpixel^2)

I = sin(hue) * saturation
Q = cos(hue) * saturation

From these and the gray level (Y), you can compute the RGB components of each pixel, where Y, I, and Q are suitably (i.e. empirically) scaled to produce a 0 .. 255 range for R, G, and B:

R = 1.969 Y + 1.879 I + 1.216 Q
G = 1.969 Y - 0.534 I - 1.273 Q
B = 1.969 Y - 2.813 I + 3.354 Q

(formulae from Video Demystified (http://www.amazon.com/Video-Demystified-CDROM-Keith-Jack/dp/187870723X/ref=sr_1_2?ie=UTF8&qid=1318993209&sr=8-2))
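To make the arithmetic above concrete, here's a minimal Python sketch of the whole per-pixel color recovery (a hypothetical helper, not the actual Spin or Perl code; the inputs are assumed to be suitably pre-scaled demodulator outputs):

```python
import math

def chroma_to_rgb(y, i_burst, q_burst, i_pixel, q_pixel):
    """Recover RGB from a luma value plus burst and pixel I/Q samples,
    per the equations in the post. Scale factors are the empirical
    ones given above; inputs are assumed pre-scaled (hypothetical units)."""
    # Pixel hue is measured relative to the color-burst phase.
    hue = math.atan2(q_burst, i_burst) - math.atan2(q_pixel, i_pixel)
    saturation = math.sqrt(i_pixel**2 + q_pixel**2)

    i = math.sin(hue) * saturation
    q = math.cos(hue) * saturation

    # Empirically scaled YIQ -> RGB (coefficients from the post).
    r = 1.969*y + 1.879*i + 1.216*q
    g = 1.969*y - 0.534*i - 1.273*q
    b = 1.969*y - 2.813*i + 3.354*q
    # Clamp to the 0..255 display range.
    return tuple(max(0, min(255, round(v))) for v in (r, g, b))
```

With zero chroma (i_pixel = q_pixel = 0), saturation is zero and the result collapses to a gray level of 1.969 Y on all three channels, as expected.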

To sense the I and Q components of each color burst and each pixel, I created two clock outputs at the chroma frequency that are 90° out of phase. Each is combined, XOR fashion, in a logic-mode counter that serves as a mixer/demodulator and sums the response over the programmed integration time. I chose four chroma cycles (i.e. two pixel widths) as the integration time, with I and Q staggered two pixels apart so I could do the chroma detection in one cog. This does result in some chroma smearing (evident in the above images), so I might split the I and Q demodulation into two cogs, so that each samples just one pixel's chroma at a time, without staggering samples.

Anyway, I just wanted to provide an early preview of what I've been working on. Neither the Spin source, nor the Perl analysis and image-generation code are "forum-ready" yet, but they will follow in the days to come.


10-19-2011, 03:46 AM
Incredible, Phil.

We really need to revive OBC's cookbook with a special section on "1 resistor recipes". Take 1 backpack, add one 330 ohm for B&W, or one 560 ohm for colour. I've got a recipe for 220kohm coming

10-19-2011, 03:50 AM
This is excellent!!

Very well done Phil..

10-19-2011, 06:54 AM
WOW Phil!!!!
This is incredible. Way to go ;)

Duane Degn
10-19-2011, 07:38 AM
WOW Phil!!!!
This is incredible. Way to go ;)

Double ditto!

My jaw dropped when I saw those pictures. Amazing!


10-19-2011, 09:33 AM
Hi Phil..


10-19-2011, 10:37 AM
Very impressive, Phil.

Or, as my daughter would say: "Awesome!"


10-19-2011, 11:15 AM
Very, very neat. Great work.

10-19-2011, 11:19 AM
Amazing. I can feel a little Gadget Gangster board coming on. RCA socket and just a few components, and your robot can see!

10-19-2011, 11:29 AM
That is amazing work. Could you provide some more theory ? And maybe some waveform diagrams ?
I'm not really understanding how it works, but it looks great.


10-19-2011, 01:12 PM
A while ago, DealExtreme.com had some cheap cameras for <$10. It'd be very nice to see if those would work...

Phil, can you capture a lower resolution on the Prop chip? Can you capture faster that way?
To me, I think a very small resolution, say 64x48 or maybe even smaller, would be the largest the Prop could process in real time...

10-19-2011, 02:33 PM
Excellent job Phil! As has been stated before, sometimes all that is required to get something accomplished is to tell someone it's impossible...

You might want to use the saturation of the burst to do AGC on the color signal, just like you're doing phase correction.

10-19-2011, 02:39 PM
A while ago, DealExtreme.com had some cheap cameras for <$10. It'd be very nice to see if those would work...

Phil, can you capture a lower resolution on the Prop chip? Can you capture faster that way?
To me, I think a very small resolution, say 64x48 or maybe even smaller, would be the largest the Prop could process in real time...

Just about any cheap camera will work. I have been switching back and forth between a cheap camera and a satellite receiver to get test inputs.
I have been capturing video and audio, at greater than 64x48 resolution (greyscale), for some time now with "stupid video capture".

Lately I have taken a side trip and started hacking over Eric Ball's 8bpp driver to get greater pixel depth, so that these images can be displayed on the Propeller.

Philtastic work!!!

This is the first approach I tried. How many colors are you getting?

I like to do this in real time and have been using a protoboard at a 108 MHz clock, so I have more time to process the incoming/outgoing data.

With low resolutions it should be possible to build lookup tables to do the conversions in real time.


10-19-2011, 03:22 PM
Amazing, Phil!

These guys (Centeye - http://centeye.com/products/ardueye-shields-for-arduino/ ) had an interesting presentation at the Open Hardware Summit about what they were doing with low resolution image processing (16x16) cameras. They have a "shield" that had one of their cameras on it. It's in the never ending queue of fun things to look at.

Since Phil is a Dreamer and a Doer, maybe he can take something of use away from this instead of my shoulda/coulda/woulda approach.

Phil Pilgrim (PhiPi)
10-19-2011, 03:47 PM
Thanks, guys!

Today I'm going to try doubling the chroma resolution (to match that of luma) by rearranging cogs to have each of the I/Q local oscillators and its associated mixer in one cog, rather than the oscillators in one cog and the mixers in another. After that, I shall try to flesh out the presentation with more theory and some code.

I'm still doing the pixel computations on the PC in Perl, since it's easier to try different things in a fast, native floating-point environment. The Propeller is simply capturing the necessary data at this point and passing it on. Once I'm satisfied with the results, I can begin writing Prop code to do the YIQ-to-RGB color conversion internally.

More to come!


10-19-2011, 03:55 PM
Nice gadget! Once Humanoido's Big Brain gets some eyes like this, it'll insist on having homemade oatmeal and Windex for breakfast every morning.

10-19-2011, 07:15 PM
Awesome job Phil!
If you're willing, I'd love to integrate your color frame grabber with ViewPort to stream video at 1 Mbps - with my grayscale grabber I get ~10 frames/second. Once the video is inside ViewPort, the integrated OpenCV computer vision toolkit can recognize things like human faces and send the coordinates back to the Prop. Several years ago Circuit Cellar published my article with a grayscale grabber and Propeller-based computer vision - I'm sure they'd be interested in an update from you...

10-19-2011, 08:39 PM
Phil, I can't wait to see how you go about this!

10-19-2011, 09:08 PM
Here's a color NTSC camera that runs on 6-12V. The sensor has a resolution of 628x582 px - Phil's algorithm should be capable of supporting the vertical resolution. Supposedly goes down to 0.2 lux.

It costs $11.83- including worldwide delivery


Phil Pilgrim (PhiPi)
10-19-2011, 09:55 PM
I rewrote the capture program to obtain an I and Q chroma value for each pixel, rather than staggering them across two offset pixels apiece. This resulted in 50% more data for each capture. Here are the results:

Individual I and Q chroma values per pixel.

Individual I and Q chroma values with (0.25, 0.5, 0.25) weighted average across three pixels.

Original staggered I and Q chroma values.

In the top image, you can see that the chroma values are not smeared. This is particularly apparent in the dark background. In the middle image, the weighted averaging helps to smooth them a bit. But I'm not convinced that either is better than the bottom (original method) image. In fact, I think the bottom image is more pleasing. As a consequence, I think I'll stick with the original program.


10-19-2011, 10:37 PM
Phil: I have to agree. I looked at the pics first, then read the captions, and lastly your comments. Right from the start I had the impression that #3 was the best.

Let me know if you get to the point of wanting to try multiple Props in parallel. My modules stack easily for running Props purely in parallel. I want to try VGA this way, but never enough time!

I too am very interested in seeing your simple block explanations of the workings behind your code. I do understand it somewhat, basically because of your other explanations of DSP and the research I did following those posts. It's a joy to find out that the maths I did so many years ago (~40) actually have an application that I may end up using after all.

Phil Pilgrim (PhiPi)
10-20-2011, 03:42 AM
Here's an explanation of the theory behind the Propeller Backpack color capture. First the schematic, showing the external 560-ohm resistor (all other components being included in the Backpack itself):


The sync is detected as a logic value on A22. The 0.1uF capacitor is charged by an occasional DUTY-mode output from A19 to clamp the signal on A22, so that the sync tips and nothing else go below the logic threshold. The video signal also transits the external resistor and the MOSFETs (which are always "on") to the sigma-delta ADC formed from pins A12 (feedback), A14 (input), and A17 (filter cap -- always pulled low). The ADC's counter provides the luma (Y) value for each pixel.

A color video signal includes not only sync and luma levels, but also chroma (color) information. The chroma is provided by a 3.579545 MHz subcarrier that rides upon the luma signal. (In fact, if you low-pass filter a color signal, you will end up with a grayscale signal that's still compatible with B/W monitors and receivers. Therein lies the brilliance of the NTSC and PAL color standards: they are backwards compatible with the earlier black-and-white standards.) The hue of any pixel is determined by the phase of the chroma subcarrier, relative to that of the "color burst" at the beginning of every scan line; the saturation, by its amplitude. Here is a photo of the oatmeal box, over which I've superimposed a trace of the video signal corresponding to a particular scan line:


After the low-going horizontal sync, in a section of the waveform called the "back porch", lies the color burst, which is a phase reference for the chroma information to follow. You will notice, as the word "OATS" is scanned, that areas of high brightness have a high amplitude and that areas of rich (i.e. more saturated) color (e.g. red vs. white or black) have a higher high-frequency component. The latter is the chroma subcarrier.

In the Propeller Backpack, the amplitude and phase of the subcarrier at any point in the scan are determined by "mixing" the signal with two oscillators (NCO counters) of the same frequency but with a 90° phase offset. These are output on pins A24 and A25, which are not otherwise used by the Backpack. This is adequate to determine both the amplitude and phase of the subcarrier over any interval within the scan line. The mixing is done by a pair of counters, each of which takes the ADC feedback output and XORs it with either the in-phase (I) oscillator or the quadrature-phase (Q) oscillator and counts the number of times this results in a logical "1". The average phase of the signal during that interval, relative to the I and Q oscillators, is given by the arctangent of the sine (I) component and the cosine (Q) component as counted by those counters. The phase relative to the color burst can then be computed by finding the color burst phase relative to the I and Q oscillators and subtracting. The saturation is just the Cartesian distance (square root of the sum-of-squares) of the I and Q terms.
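As a numeric illustration of this synchronous detection (not the actual counter code - the real hardware XORs 1-bit streams in logic-mode counters), here's a Python sketch that multiplies the subcarrier by two reference oscillators 90° apart, integrates over four chroma cycles, and recovers the phase with atan2:

```python
import math

FSC = 3.579545e6   # NTSC chroma subcarrier frequency
FS  = 80e6         # Propeller system clock, used here as the sample rate

def iq_demod(phase_deg, cycles=4):
    """Mix a subcarrier of unknown phase with in-phase and quadrature
    references, integrate over a few chroma cycles, and recover the
    phase in degrees via atan2 (floating-point stand-in for the
    Backpack's counter-based mixers)."""
    n = int(cycles * FS / FSC)   # samples in the integration window
    i_acc = q_acc = 0.0
    for k in range(n):
        t = k / FS
        sig = math.sin(2*math.pi*FSC*t + math.radians(phase_deg))
        i_acc += sig * math.sin(2*math.pi*FSC*t)   # in-phase reference
        q_acc += sig * math.cos(2*math.pi*FSC*t)   # quadrature reference
    return math.degrees(math.atan2(q_acc, i_acc))
```

Because the window isn't an exact whole number of cycles, there's a little spectral leakage, so the recovered phase is only approximate - a toy analogue of the chroma noise Phil describes below.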

Here is a scope trace of a color burst, the I and Q oscillator outputs, and the ADC feedback output:


You will notice that the ADC output is more predominantly low when the video signal is high, and vice-versa. It should also be apparent that this effect is very subtle, due to the fact that each cycle of the chroma frequency includes fewer than twenty-three 80 MHz Propeller clock intervals. This results in a fairly low signal-to-noise ratio and accounts for the proliferation of chroma noise in the acquired images. Errors in measuring the phase of the color burst will result in horizontal color striping in the acquired image; errors in the phase of each pixel, in color blotchiness within a scan line.

Here is a block diagram that ties the system together:


The output data going to the PC for construction of the image consists of luma (Y) data for each pixel, and interleaved chroma data, such that each pixel shares its I and Q chroma data with its right and left neighbor. In the PC, a Perl program does the work of computing the actual YIQ color signal relative to the color burst data and converting it to RGB. What remains is to have the Propeller do this work, so that the final image can be produced internally.

Hopefully, within the next day or so, I will have a Windows exe ready to download that receives data directly from the Propeller Backpack and displays the resulting image. Stay tuned!


10-20-2011, 08:12 AM
Nice explanation as always Phil :)

10-20-2011, 03:37 PM
I never realized the Propeller's sigma delta ADC could capture video.

Edit2: Doesn't this kind of qualify for the 'PropCam'?

Bill Henning
10-20-2011, 04:04 PM
Amazing work Phil. Had to pick up my jaw off the floor...

10-20-2011, 08:20 PM
Parallax already sells a product with a CMOS camera and lens on it, so the parts are already in-house to make a compact camera for this. A small breakout board or daughterboard for the Backpack should be straightforward. The problem with all those cheapo cameras is you really never know what output you're going to get, as it's seldom 1V p-p. Makes troubleshooting a bit hard for people who don't understand video.

Oh, and yeah, good work Phil, but you knew that already. Tell Browz to treat you to dinner this weekend to celebrate. I understand he has a penchant for cheap fish and chips joints.

-- Gordon

Phil Pilgrim (PhiPi)
10-20-2011, 09:00 PM

Ditto the cheap cam caveat, for exactly the reason you've stated. In most situations, the problem can be corrected by supplying a 75-ohm load when the unloaded P-P is more than the 2V spec. But I'm not sure that would work in this case, due to the added series resistance in the video path before it reaches the output connection. One might have to add a load directly to the camera output before it gets to the Backpack.


'Not sure where to go from here. So far, it's only a proof of principle that the Propeller can grab the necessary raw data. Beyond that, what would make it useful, rather than a mere curiosity?


10-20-2011, 09:25 PM

'Not sure where to go from here. So far, it's only a proof of principle that the Propeller can grab the necessary raw data. Beyond that, what would make it useful, rather than a mere curiosity?


Phil, I have been working on "stupid video capture" for some time. I just posted a 16-bit NTSC video driver.

I have added new functions to the Pixelator and it is becoming quite the audio/video platform:

changing in/out volume on the fly
speeding/slowing output play.

It could be used as a video doorbell, animal-tracking recorder, time-lapse recording,

even a data recorder; the audio channel can be some other signal to be presented alongside the video.

I'm hoping I can incorporate your color method.

But other than my pet project: add GPS and SD and you have a package tracker that takes pictures periodically!

Don't worry; once people see something that "can't be done," new users/uses pop up ....



I have been using a 108 MHz clock lately; the increase in speed gives more pixel depth.

10-20-2011, 10:28 PM
Phil, There are a couple of applications that I can think of, in the Backpack itself, or in conjunction with another Propeller. I'm sure many of these have already occurred to you, but here's a short list.

Basic and intermediate vision analysis comes to mind, of course. I'm sure you've already looked at what Kwabena has done with the CMUcam4. And of course Hanno's pioneering work.

Since you're in color space now, you could do simple color blob analysis, looking for blobs of a specific size or larger, and reporting their pixel position (e.g. center of blob, plus X and Y dimensions).

With other tricks, you can find edges for simple object detection. With a grating etched in both planes, you can use a laser to create a topographical map of dots; the distance between the dots indicates distance. The usefulness of this system depends on how bright you can get the laser, and how well you can filter out all but its wavelength.

Frame rate doesn't need to be super fast, and you can have modes where you skip lines and pixels, and you certainly don't need to deal with both video fields. A lot of video analysis doesn't even require capturing a full frame (or even field) into a buffer. You can do much of this in real time, storing at most only a few lines' worth of video, like the time base correctors of yore. All you're really looking for most of the time is lowest and highest.

Since there's good tonality in the image, the sensor could also be useful as a smart "compound eye" to look for and follow the brightest object. Outfitted with a wide angle lens the field of view could be greatly increased, acting as a very nice proximity detector, motion detector, you name it.

-- Gordon

Phil Pilgrim (PhiPi)
10-20-2011, 10:33 PM
One of the things that's been nipping at me is that all the yellows were coming out pink for some reason. Today I've been fiddling with the chroma computation and decided to try a trig identity for angle differences, rather than going the arctangent route. It's so much simpler, since there are no trig functions to compute at all -- just multiplies and divides. In the process, the yellows suddenly came out right:

http://forums.parallax.com/attachment.php?attachmentid=86104&d=1319061243 (old) vs. http://forums.parallax.com/attachment.php?attachmentid=86146&d=1319149753 (new)

'Not sure why, exactly (and the Quaker guy looks a bit jaundiced), but I'll take it!


10-21-2011, 01:48 PM

@Phil: Very nice work !

Does anyone know which MOSFETs could be used when building my own Propeller Backpack? (modifying a Prop dev board)
On the schematic, n-channel depletion MOSFETs are shown!?

Thanks in advance

Phil Pilgrim (PhiPi)
10-21-2011, 02:32 PM

The dual MOSFET used on the Backpack is a Rohm UM6K1NTN. It is characterized for gate drives as low as 2.5V. This is important. If the chosen MOSFET requires too high a gate drive, it will not switch efficiently with the Prop. However, for the circuit to work in this app, you can leave out the MOSFET altogether and simply make a connection between where the two sources were.


Duane Degn
10-21-2011, 04:11 PM

You can do this with just some caps and resistors?

As Gordon said above, I 'd like to see the ability to find color blobs.

I'd really like the Propeller to handle all the math on this. Is there a way of freeing the Prop from the PC?

What I'd like to see is the Prop computing and displaying the images you've shown. It would be nice to be able to add graphics/text on top of the displayed image.

Would the captured image be small enough to transmit over an XBee or Nordic module? How many times have we seen people want to use their XBees to transmit a video signal? Perhaps with your image capture one could send the image over a wireless connection. Not necessarily at a full 30fps, but maybe a frame every couple of seconds.

This is amazing stuff. Thanks for documenting it so well.


10-21-2011, 04:41 PM
... decided to try a trig identity for angle differences ...

Reading this last night got me to thinking and I realized you could use a dot product to extract U & V. I've done some calculations, but let's start with the RGB to YUV calculations. Assuming RGB are in the range 0...1:

Y = 0.299R + 0.587G + 0.114B
U = 0.492(B-Y)
V = 0.877(R-Y)

These are then scaled for output: sync is -40 IRE, blank is 0 IRE, black is 7.5 IRE, white is 100 IRE, burst is -U at 40 IRE p-p, and U & V are scaled by 185 IRE p-p. So working backwards:

-U = 40 * ( i.burst * i.pixel + q.burst * q.pixel ) / (185 * sqrt( i.burst * i.burst + q.burst * q.burst ) )
-V = 40 * ( i.burst * q.pixel - q.burst * i.pixel ) / (185 * sqrt( i.burst * i.burst + q.burst * q.burst ) ) (I think; it might be +V.)
Y = 40 * y.pixel / (92.5 * y.sync)

R = Y + 1.140251 * V
G = Y - 0.394731 * U - 0.580809 * V
B = Y + 2.032520 * U

Note: in NTSC I and Q refer to the original modulator phases which are rotated 33 degrees WRT U & V.
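Here's a hedged Python sketch of the dot-product extraction above (hypothetical function name and argument order; the burst magnitude under the radical is taken as the usual sum of squares, and the V sign follows the "-V" form with the poster's own caveat that it might be flipped):

```python
import math

def yuv_from_iq(y_pixel, y_sync, ib, qb, ip, qp):
    """Burst-relative projection: the dot product of the pixel I/Q
    vector with the burst vector gives the component along the burst
    (the -U axis), the cross product the perpendicular component.
    Scale factors follow the IRE levels quoted in the post."""
    burst_mag = math.sqrt(ib*ib + qb*qb)
    u = -40.0 * (ib*ip + qb*qp) / (185.0 * burst_mag)
    v = -40.0 * (ib*qp - qb*ip) / (185.0 * burst_mag)  # sign uncertain, per the post
    y = 40.0 * y_pixel / (92.5 * y_sync)
    # YUV -> RGB with the coefficients from the post.
    r = y + 1.140251*v
    g = y - 0.394731*u - 0.580809*v
    b = y + 2.032520*u
    return r, g, b
```

A quick sanity check: a pixel exactly in phase with the burst lies on the -U axis, so V comes out zero and B goes negative (bluer-than-gray suppressed), consistent with the burst being -U.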

10-21-2011, 04:44 PM
Is there a need to transmit digitized RGB video wirelessly? You might as well transmit the analog video directly from the camera using a cheap transmitter/receiver. The quality would be better (no pixelization) even on a low-cost transceiver. This isn't saying anything about Phil's capture, just that you're already an extra generation down, plus you're working with a low resolution digitization.

In any case XBee even at its highest speeds would be pretty slow, and you'll need to encode the transfer to something that at least detects errors, if not corrects for minor data loss. Bluetooth might handle it better.

-- Gordon

Phil Pilgrim (PhiPi)
10-21-2011, 05:05 PM

"You can do this with just some caps and resistors?"

Yup, but they have to be located very close to the Propeller chip.

"As Gordon said above, I 'd like to see the ability to find color blobs."

There's a possibility that that can be done.

"I'd really like the Propeller to handle all the math on this. Is there a way of freeing the Prop from the PC?"

Yes. By using the trig identities for angle differences, no trig functions are necessary. However, I would still not expect real-time capture plus color conversion.

"What I'd like to see is the Prop computing and displaying the images you've shown."

'Probably not gonna happen. Those images are essentially 90H x 238V pixels in size. Even compressed to one byte per pixel, we're looking at 21420 bytes, which doesn't leave much RAM for anything else. Is one byte per pixel even reasonable? It may be. The luma depth is about five bits, but that could be reduced to four for 16 gray levels. That leaves four bits for chroma, which would most efficiently be alternated between I and Q values from pixel to pixel. But that's an impossible representation for computing display pixels in real time. The Prop's composite video output wants phase angle (i.e. atan2(Q, I) ) for chroma, and VGA wants RGB.

"Would the captured image be small enough to transmit over an XBee or Nordic module?"

No image is too big to transmit over a wireless channel. It all depends on how long you're willing to wait to receive it. :)

IMO, acquiring an image using this method just so you can look at it (whether wirelessly or on a PC) is a waste of time and effort. There are better ways to do both. The only real benefit comes when the Propeller is actually able to do something with the image data it acquires, e.g. interpreting a barcode, tracking a color blob, etc. That's when things get interesting!


Duane Degn
10-21-2011, 05:13 PM
Is there a need to transmit digitized RGB video wirelessly?

Will the connection to an analog video transmitter mess up the Prop's ability to read the signal? Won't this add a load to the signal line?

I have a nice video transmitter myself, but I've seen a lot (well, I can remember three) of people ask about transmitting video over their XBees. One reason I think of sending the captured image is because, I would think, the Propeller would have an easier time displaying a low-resolution image.

Instead of XBees, I'd prefer to use nRF24L01+ Nordic modules (http://forums.parallax.com/showthread.php?130707-Looking-for-Faster-Objects-for-Nordic-Wireless-Modules-nRF24L01-and-nRF2401A). They can transmit at 2 Mbps and have CRC error detection.

If adding an analog video transmitter won't mess up the Prop's ability to capture the image, then just some information about color blob size and location (transmitted with an XBee or Nordic) could be used to overlay data on the image (with a second Backpack).


Phil Pilgrim (PhiPi)
10-21-2011, 05:23 PM

There's no reason to convert to YUV space first, since you can get RGB directly from YIQ using the formula in my first post:

R = 1.969 Y + 1.879 I + 1.216 Q
G = 1.969 Y - 0.534 I - 1.273 Q
B = 1.969 Y - 2.813 I + 3.354 Q

Per the trig identity,

I = k ( pixel(i) burst(q) - pixel(q) burst(i) )
Q = -k ( pixel(q) burst(q) + pixel(i) burst(i) )

I'm not sure yet where the minus sign in the Q expression comes from, but making that change is what made the yellows come out right, instead of being pink.
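The identity can be checked numerically: with k = 1/|burst|, the cross- and dot-product route gives the same I and Q as the arctangent route, with no trig calls on the pixel data. A Python sketch (hypothetical names; note the plain identity reproduces the atan2 result exactly, so Phil's empirical minus sign on Q would flip q here):

```python
import math

def chroma_via_atan2(ip, qp, ib, qb):
    """The original route: explicit hue angle and saturation."""
    hue = math.atan2(qb, ib) - math.atan2(qp, ip)
    sat = math.hypot(ip, qp)
    return math.sin(hue)*sat, math.cos(hue)*sat

def chroma_via_identity(ip, qp, ib, qb):
    """Same result via the angle-difference identities
    sin(b-p) = sin b cos p - cos b sin p and
    cos(b-p) = cos b cos p + sin b sin p,
    i.e. just multiplies and a divide by the burst magnitude."""
    bmag = math.hypot(ib, qb)
    i = (ip*qb - qp*ib) / bmag   # cross product -> sin of the difference
    q = (qp*qb + ip*ib) / bmag   # dot product   -> cos of the difference
    return i, q
```

Both functions agree to floating-point precision for any nonzero burst vector, which is why the trig-free version is attractive for eventual on-Prop conversion.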


Phil Pilgrim (PhiPi)
10-21-2011, 05:28 PM
Will the connection to an analog video transmitter mess up the Prop's ability to read the signal? Won't this add a load to the signal line?
Maybe. It depends on the load impedance of the transmitter's input. If it's 75 ohms, you will want to decrease the value of the Backpack's external series resistor. Just be sure to split the transmitter's signal off before it reaches the Backpack board.


Duane Degn
10-21-2011, 05:29 PM
IMO, acquiring an image using this method just so you can look at it (whether wirelessly or on a PC) is a waste of time and effort. There are better ways to do both. The only real benefit comes when the Propeller is actually able to do something with the image data it acquires, e.g. interpreting a barcode, tracking a color blob, etc. That's when things get interesting!



Thanks for answering my questions. I agree, this isn't a good way to "look at" an image. I'm thinking about showing this to others. It's one thing to say the Prop is tracking a color blob but wouldn't it be cool if you could also see the object the Prop is detecting? I think this ability would add a lot to robot demonstrations.

When I show people my robot using Hanno's video capture method, they don't seem to care very much that the robot can see something; they want to see what the camera sees too. I can kind of show them this with a 120-pixel LED array, but it would be cool to be able to display a color image.

I've got to figure out how to display an image using external memory (a subject for a different thread). This would solve the problem of storing a large image.

I'm looking forward to seeing your code. I think having the ability to identify colored markers could aid a lot in robot navigation.

(BTW, I hadn't seen your post #36 when I wrote post #37.)


Duane Degn
10-21-2011, 05:34 PM
Maybe. It depends on the load impedance of the transmitter's input. If it's 75 ohms, you will want to decrease the value of the Backpack's external series resistor. Just be sure to split the transmitter's signal off before it reaches the Backpack board.


Thanks Phil. Time to order a second Backpack so I can overlay blob data on the transmitted image.


Phil Pilgrim (PhiPi)
10-21-2011, 09:10 PM
Here's an idea of what could be accomplished with reduced resolution and pixel depth:


The capture size would be about 92 x 68 pixels, which results in approximately-square pixels. This requires capturing every seventh line, which means every 14th line from the even field and, interleaved with those, every 14th from the odd field. The pixel depth is eight bits: four for luma and four for chroma, with adjacent pairs of pixels sharing the chroma values. IOW, one pixel would carry the I value, and its neighbor, the Q value. That's 6256 bytes of image data altogether, but not in a form that's readily displayed by either the NTSC or VGA drivers. (BTW, the thumbnail just about "nails" the native resolution.)
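A minimal Python sketch of the packing described above (hypothetical helper; it assumes the luma values and the alternating I/Q chroma values have already been demodulated and quantized to 4 bits):

```python
def pack_pixels(lumas, chromas):
    """Pack one byte per pixel: 4-bit luma in the high nibble, 4-bit
    chroma sample in the low nibble. Adjacent pixels share chroma by
    alternating: even pixels carry I, odd pixels carry Q."""
    out = bytearray()
    for y, c in zip(lumas, chromas):
        out.append(((y & 0xF) << 4) | (c & 0xF))
    return bytes(out)
```

At 92 x 68 pixels and one byte per pixel, this comes to the 6256 bytes of image data quoted above.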


10-21-2011, 10:00 PM
Great stuff!
You don't need a lot of pixels to use artificial markers for navigation/robot control- 15x15 pixels is fine for a lot of tasks- just look at what we use for operating system icons.
Even a small picture is worth a thousand words!

The nice thing about a software grabber is that it can be flexible- zooming into particular areas or colors as needed.
As an example, the ViewPort grayscale grabber dumps 200x240 4-bit pixels into memory for "hi-res" images that can be streamed to the PC or analyzed in memory. It can also dump 1/4 that size and use the other 3/4 for computer vision "registers" for complex manipulations like finding a bar code pattern in a cluttered environment. This still left enough memory and cogs for a balancing robot using a Kalman filter.
As I've posted previously- I'd love to update this with your color algorithm- my backpack and camera are standing by!

10-21-2011, 10:43 PM
This requires capturing every seventh line

That sounds interesting. Would it be possible to change that value? Could you capture one frame every 7th line starting at line 0, then do another frame every 7th line starting at line 1, etc., and then do some image manipulation to improve the resolution?

but not in a form that's readily displayed by either the NTSC or VGA drivers

I've been doing lots of work with image manipulation in vb.net - it should be possible to change any format to any other format - eg scaling, pixel averaging, dithering etc. Harder to do on the prop but if you can get it working in C on a PC then maybe port to catalina C?

10-21-2011, 11:25 PM
The low-res capture, which is excellent by the way, has me thinking of a new form of line following, capable of differentiating not only line width and embedded patterns, but colors as well. PVC electrical tape is commonly available in a wide variety of colors. With white LEDs to illuminate it, you could create all kinds of interesting courses where robots are programmed to follow lines of certain colors, rejecting others ("follow the yellow brick road!!").

I know this is all done with a Backpack, which is a terrific stand-alone product, but I'm also seeing this as an all-in-one board. Ideally the camera, mounted on the back side of the board (or a sandwiched-board affair), would use a mini screw lens, allowing different types of lenses to be exchanged depending on the application. I'm not sure what focal length is on the camera used in Joe's laser range finder, but that would be a good one to start with. Parallax basically has all the BOM for this already in-house.

-- Gordon

Phil Pilgrim (PhiPi)
10-22-2011, 04:00 AM
Here's an idea what the image would look like in the Propeller's native 6-bit RGB VGA color space:



10-22-2011, 09:56 AM
And this is what it looks like on a 4" LCD TV screen using the Propeller TV palette. The photo is a little washed out compared to the real screen - the red is redder on the screen.

I'm still amazed this works - keep up the good work.

10-22-2011, 06:23 PM
I'm thinking the Propeller can do this in real time, for both NTSC and VGA:

use separate cogs for color and intensity input

store the data in byte interleaved format as I+Q, Y+Y, I+Q, Y+Y

since there are only 256 possible inputs for the color conversions,
calculate them all in advance and save a lookup table for VGA or NTSC

For such low resolutions the output video cog can use some delay registers and table lookups to reformat for desired output.

I even have dump-screen-to-SD-BMP-file code that Rayman wrote that I hope to try!

Possible ?


10-23-2011, 02:15 AM
Truly a most clever and unbelievable feat!
Never thought anything like this would be achievable with the prop 1.. Just when you think all the secrets of the prop have been unlocked... Remarkable phipi!

This would probably be one of the top downloaded objects!

10-23-2011, 04:27 AM
Truly a most clever and unbelievable feat!
Never thought anything like this would be achievable with the prop 1.. Just when you think all the secrets of the prop have been unlocked... Remarkable phipi!

I believe there is still a huge amount of things to be discovered that the prop can do. Really, the counters have so many possibilities - remember, we actually have 16 of them, each with so many different modes. And then with the multicores, we have lots of parallel processing capabilities.

Phil is just showing us some of those capabilities. Put a lot of the different parts he has done together, and this chip is capable of some awesome things.

Keep it up Phil. Your work is so inspiring and with your explanations and diagrams, they make it look so simple.

10-23-2011, 10:31 AM
WOW, PhiPi, that's mightily impressive work matey! well done!
I've barely ( not at all really ) had time to do any prop stuff lately :( but have to pop on the forum once in a blue moon to try and keep up with the amazing progress that has been going on!
This however is freaking amazing!
Will look forward to seeing this in action for real!
My mind is buzzing with many ideas for uses for it!

Well done!

10-23-2011, 11:14 AM
Yes my mind is still buzzing too.

The theory says that it is too hard to increase the number of pixels/color resolution of the prop because it is not fast enough to calculate the phase and hence the color.

Yet here is PhiPi who went from B&W to color with some extremely clever software techniques.

First thing I am thinking - well if you are sampling every 7th pixel, and if you have 8 cogs that can run in parallel...

I'm still intrigued by the idea of capturing an entire screen in an SRAM chip, like a storage CRO, then clocking it out later with a fast (external?) clock for a better-resolution picture. Having seen PhiPi's real-life waveform for a picture, it doesn't look so hard. Just a bunch of sine waves, right? And a very fast D-to-A. I know someone earlier said that the phase angle difference was 22ns and it is a 55ns SRAM, but even so: say you took a sine wave and stored it, then took the same sine wave at a slightly later phase angle and stored that. For the first one the first sample value might be, say, 50 of 255, and for the second one, say, 70 of 255. Run that through a 6 MHz low-pass filter and it ought to recreate a sine wave with a delayed phase.

I'm intrigued by PhiPi's work. I also have to say I am very impressed, because in all the research I have done on NTSC signals, I have never found anything so clearly explained as PhiPi's waveform overlaid on a picture, with the associated description. At last I am starting to understand how NTSC stores the three values of H, S and L, rather than just the two parameters of H and L.
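For anyone wanting to play with the math on a PC first, here is a small Python sketch of the opening post's I/Q demodulation formulas. The 1.969/1.879/etc. scale factors are the empirically chosen ones quoted there; the function name and clamping are illustrative:

```python
import math

def pixel_rgb(y, i_pix, q_pix, i_burst, q_burst):
    """Recover hue/saturation from raw I/Q samples, then YIQ -> RGB,
    following the formulas in the opening post."""
    # Phase relative to the color burst gives the hue; magnitude gives saturation.
    hue = math.atan2(q_burst, i_burst) - math.atan2(q_pix, i_pix)
    sat = math.hypot(i_pix, q_pix)
    i = math.sin(hue) * sat
    q = math.cos(hue) * sat
    r = 1.969 * y + 1.879 * i + 1.216 * q
    g = 1.969 * y - 0.534 * i - 1.273 * q
    b = 1.969 * y - 2.813 * i + 3.354 * q
    clamp = lambda v: min(255, max(0, int(round(v))))
    return clamp(r), clamp(g), clamp(b)
```

With zero chroma (i_pix = q_pix = 0) the three outputs collapse to the same scaled luma, which is a handy sanity check.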

Keep up the good work!

Phil Pilgrim (PhiPi)
10-25-2011, 07:25 AM
'Finally got around to doing the color computations in Spin and needed a way to see the result sans PC. So I built a VGA daughterboard for the Backpack using a Proto-DB, a DB15 connector, and some resistors; but then I had to push the stack to deal with some resistor issues:


I'm using Kye's 6-bit VGA color driver. ('Can't say enough good things about it; it just works. Thanks, Kwabena!) Anyway, each RGB component has to be scaled and reduced to two bits, which isn't a lot of dynamic range. Still, though, the shades and colors are recognizable when rendered at 80 double-wide pixels x 120 lines:
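For experimenting on a PC, the 2-bits-per-component reduction can be sketched in a few lines of Python. Taking the top two bits of each 8-bit channel is a simple choice for the sketch, not necessarily Phil's exact rounding:

```python
def to_vga6(r, g, b):
    """Pack 8-bit RGB into a 6-bit %RRGGBB color word
    (2 bits per component) by keeping the top two bits."""
    return ((r >> 6) << 4) | ((g >> 6) << 2) | (b >> 6)

def from_vga6(c):
    """Expand back to 8 bits per channel for previewing on a PC."""
    scale = lambda v: v * 255 // 3
    return scale((c >> 4) & 3), scale((c >> 2) & 3), scale(c & 3)
```

Round-tripping through `from_vga6(to_vga6(...))` shows just how coarse four levels per channel really is, which is why dithering (shown later in the thread) helps so much.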


I need to clean up and comment the source code. Then I'll post it here so people can experiment. After that will be to recode the rendering in PASM, so it doesn't slow the capture so much. Right now, it's on the order of seven seconds.


10-25-2011, 01:19 PM
Is that all done on a Prop now, without the PC connection?
Can you make it 1 second by reducing the resolution?

10-25-2011, 03:33 PM
This is a perfect starting point for Big Brain image recognition. In another post, Phil described a very neat algorithm for image resolution by measuring the image by angles and comparing with stored data to find a match.

Phil Pilgrim (PhiPi)
10-25-2011, 04:06 PM

That's all done on the Prop now. The only thing I'm using PST for is to trigger the capture. Rather than reducing the resolution to speed things up, I'm going to try doing the computations in PASM. The way I'm doing them now in Spin is pretty inefficient.


10-25-2011, 04:16 PM
Right on. All on the Prop. I distinctly remember this being on the list of "it can't be done" things now being done. When you do get PASM sorted, please consider also releasing the SPIN version so people can see and manipulate the calcs easily. Might be slow, but it is accessible.

10-25-2011, 04:17 PM
Wow, if you can get that resolution in the 1 Hz range, that would really, really be impressive...

Still, I think a lower resolution would be more useful for robot vision...

But, now that I think about it, if the idea is to save to SD or transmit to PC, then I suppose higher resolution is better...

Phil Pilgrim (PhiPi)
10-25-2011, 04:25 PM

I could easily drop the resolution back to 90 x 68, which results in squarish pixels. I just wanted to see how much resolution I could squeeze into Kye's 160 x 120 VGA driver.


10-25-2011, 04:55 PM
Great. Then, one can think about blob detection and maybe eye detection...

Phil Pilgrim (PhiPi)
10-25-2011, 07:14 PM
Attached is the first color capture source installment. It requires a VGA board plugged into the Backpack. Or you could try duplicating the Backpack's passive array on a Prop Proto board and use its VGA connector. Just be sure to keep the passives close to the Propeller chip. (Frankly, if starting from scratch, the Backpack's circuitry could be modified a bit to optimize it for capture. A DC restorer ahead of the ADC would be the first place I'd start. But that's another project altogether. Maybe I should include that circuitry in the MoBoProp. :) )


10-25-2011, 07:59 PM
Yeah! I'm download #1 of Phil's latest masterpiece!
Looking forward to lots of fun with color computer vision with the prop- thanks Phil!

Phil Pilgrim (PhiPi)
10-26-2011, 05:14 AM
In case anyone is trying this and not getting the brightness or saturation levels they were hoping for, attached is a version that lets you adjust the gains of each. I noticed that, once it got dark and I wasn't getting any daylight in the shop, the camera's automatic gain wasn't doing as well, and I had to boost things in the program. Eventually, I'll come up with some automatic gain controls in the program. But for now, it will have to be a manual adjustment.

BTW, here's the kind of VGA display I'm getting with dithering:



10-26-2011, 06:09 AM
If it helps, here is my 128 x 96 pixel, 64-color VGA driver. Perhaps the pixel size fits a reduced picture resolution better. And it needs only 12 kB of bitmap memory.



10-26-2011, 09:49 PM
Phil- Sorry I haven't posted a success photo yet- I got swamped, and my initial experiments with some cameras (my camcorder with suspect connectors and my grayscale NTSC camera) didn't work out- I'll have to confirm I'm doing everything correctly. A suggestion to ensure first-time success: how about having the Prop generate some video test pattern with one cog, output it to an optional device, and then try to grab it with your capture cog? That would eliminate problems caused by cameras with different video settings (PAL), voltage levels, etc...

Phil Pilgrim (PhiPi)
10-26-2011, 10:00 PM

Thanks for the link. I'll check it out.


No dice on the PAL stuff I'm afraid. I'd have to rewrite all the timings for sync and capture.

BTW, the Propeller video output circuit would have to be modified to input to the capture program, because it has the incorrect output impedance and drives an unloaded line with too much voltage for the Backpack to sync on. A 220R load resistor between the driving Prop and the Backpack would probably fix the problem.


I'm finishing up some mods that make it easier to control the luma and chroma gains and keep them in bounds. The programs I've posted ere now have some overflow issues.


Phil Pilgrim (PhiPi)
10-27-2011, 05:09 AM
Attached is the latest update to the program. I've fixed some overflow issues, limited Y, I, and Q to NTSC specs, and added automatic gain control for the luma and chroma.

Here's a photo of my setup. The image on the screen is direct VGA output from the Propeller Backpack proto daughterboard.



10-27-2011, 09:18 PM
Thanks for the MIT license Phil (at least, I think it's MIT; it doesn't say MIT, but it looks like it).

I've been thinking of cool things to do with this.
First thing that comes to mind is adding a camera to my BOE bot...

Phil Pilgrim (PhiPi)
10-29-2011, 07:53 AM
I noticed that, after running for a couple hours, the luma and chroma sensitivity both started to decrease, requiring ever larger software gains to maintain constant VGA brightness, with a consequent loss of detail and increasing chroma speckle. This is due to the voltage on the ADC side of the 4.7uF ADC input cap slowly sinking. In order to keep this from happening, I added code to recharge it by outputting a high pulse on A16 during sync times. This acts as a weak clamp and apparently provides enough DC restoration to keep the burst troughs above 0V. It seems to have cured the problem. Attached is the modified code.


10-29-2011, 08:11 AM
Phil you're probably already onto this, but it should be possible to find an off the shelf camera with the same kind of dimensions as the backpack.

Saw this one (http://www.ebay.com.au/itm/New-420TVL-1-3-Sony-Super-HAD-CCD-Color-Board-Camera-NTSC-lens-/110756481688?pt=LH_DefaultDomain_0&hash=item19c999ca98#ht_3590wt_1037), which is close, but I'm not sure if the mounting hole slots extend in far enough.

10-29-2011, 10:54 AM
Phil: Since you really understand the intricacies of NTSC, is it possible to generate limited color, or at least a mix of RGB, using only 2 pins? I have tried using 1 and 2 pins with existing drivers, with some success. Am I correct in thinking that we can get 4 brightness levels including black from 2 pins, and that the RGB color can be superimposed on this? IIRC the second pin in the sequence is mandatory for any color. Basically what I am asking is: is it possible to generate 4 or 5 colors (a red, a green, a blue, and black and/or white)?

The second pin must contain some critical form of the modulation.

10-29-2011, 11:13 AM
Hi Phil, can I join the line of people requesting a moment of your brilliant genius?!

I have some fast counter chips on the way and I have an idea that it can be possible to store a frame of video into a 512k ram chip and then clock it out and bypass all the video and color limitations of the propeller. Effectively, increase the information on a screen from 32k to 512k.

I think the ram chip is fast enough and I think the information can be stored in the ram chip as phase angle, amplitude etc. Maybe I'm crazy but...

Would it be possible for you to give me a listing of the values for a single line of your video driver? Maybe a .csv comma separated file, or just a .hex or .bin file. I'd like to study the colorburst waveform and the phase and amplitude of the waveform and just ponder how much information would be lost if you stored it into a ram chip.

It is just that, to my simplistic way of thinking, NTSC is bandwidth limited to 6 MHz, you have 30 frames per second, so that is 200k of information per frame, so surely a 512k RAM chip can contain enough information to replicate that signal (2x Nyquist sampling, right?)

Thanks for your consideration, and thanks for giving the gift of sight to all the Propeller-driven robots!

Phil Pilgrim (PhiPi)
10-29-2011, 05:54 PM
Am I correct in thinking that we can get 4 brightness levels including black from 2 pins and the rgb color can be superimposed on this???
One of those brightness levels has to be the sync level and, for color, you need a burst on the backporch and modulation of the luma. I'm not sure what would happen if the burst troughs went to sync level, but my gut feeling is that, with most monitors, it wouldn't matter. Here's an example illustration (not to scale):


Another option might be to use filtered DUTY modulation for the sync and gray levels and superimpose the chroma on top of that with the other output.

BTW, Eric Ball is the NTSC guru here. Maybe he can weigh in on both queries.


Phil Pilgrim (PhiPi)
10-29-2011, 06:29 PM
It is just that to my simplistic way of thinking, NTSC is bandwidth limited to 6 MHz, you have 30 frames per second, so that is 200k of information per frame, ...
To satisfy the Nyquist criterion, you have to sample at the bandwidth * 2 or faster. More to the point, though, the chroma clock is 3.57 MHz. To get accurate phase shifts for the chroma, I think your output rate would have to be more like four times that, or 14.28 Msps.
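A quick back-of-envelope check in Python. The 6 MHz bandwidth figure is the one assumed in the quoted post; using the exact 3.579545 MHz subcarrier gives a slightly higher figure than the rounded 14.28 Msps:

```python
# Sample-rate arithmetic for the discussion above.
CHROMA_HZ = 3_579_545          # NTSC color subcarrier
BANDWIDTH_HZ = 6_000_000       # composite bandwidth assumed in the thread
FRAMES_PER_SEC = 30

nyquist_sps = 2 * BANDWIDTH_HZ                 # minimum rate per Nyquist
samples_per_frame = nyquist_sps // FRAMES_PER_SEC
chroma_phase_sps = 4 * CHROMA_HZ               # ~4x subcarrier for clean phase
```

Note that 12 Msps over 30 frames/s works out to 400k samples per frame, so a 512k RAM holds one frame at Nyquist, but the 4x-subcarrier rate needed for accurate chroma phase pushes past that.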


10-29-2011, 06:54 PM
A corrected picture for Phil's diagram.

Look at the attached picture.

Phil Pilgrim (PhiPi)
10-29-2011, 07:08 PM

You're right: that is correct. I was just trying to imagine how or if it could be done with four analog levels. The reason I think it's okay for the burst troughs to reach sync level is that the chroma does that in the Propeller's fully-saturated colors without throwing the monitor out of sync.

BTW, generating NTSC, rather than capturing it, is an interesting topic in its own right that deserves to have a thread of its own.


10-29-2011, 07:16 PM
Hi Phil.

I have found on my 7" PAL TVs that one of the problems is that the burst levels and phase are not correct.


You're right: that is correct. I was just trying to imagine how or if it could be done with four analog levels. The reason I think it's okay for the burst troughs to reach sync level is that the chroma does that in the Propeller's fully-saturated colors without throwing the monitor out of sync.

BTW, generating NTSC, rather than capturing it, is an interesting topic in its own right that deserves to have a thread of its own.


P.S. In your thread "Better VGA DAC resistors (http://forums.parallax.com/showthread.php?135385-Better-VGA-DAC-resistors)" I have posted one pic that may be of interest to you.

10-29-2011, 07:39 PM
Does this reduced circuit diagram look right?
I'm going to see if I can implement it on the Quickstart in that little ADC area...

Phil Pilgrim (PhiPi)
10-29-2011, 08:01 PM
Yup, that looks right, except that C3 connects to P14, not C1-R3. In fact, you could use P14 as both a sigma-delta input and as an output during sync for DC restoration purposes. Also, by using higher-valued resistors in the sigma-delta section, you can increase the effectiveness of the DC restoration, since P12 would have less of an influence on C2's long-term charge characteristics. Also, since you've got the pads to do so, you might as well connect a complement of C3 to Vdd.


10-29-2011, 09:50 PM
Ok, maybe I have it this time:

Phil Pilgrim (PhiPi)
10-29-2011, 10:08 PM
Yup, that pretty much summarizes it. But, as I suggested, if you're starting from scratch, it could be improved.


10-29-2011, 10:21 PM
Hi Phil,
An idea to reduce required pins. If you only want to capture say 10 frames/second- then you could switch between looking for the sync and grabbing data- with the same pins. That way you need just the 2 ADC pins + the I/Q pins for color.

Phil Pilgrim (PhiPi)
10-29-2011, 11:07 PM

That's an interesting idea. The problem I see is that you want the clamp signal for the sync detector on one side of the input cap, and the video signal on the other side, both to be low impedance, so the cap can be charged to the correct level very quickly at the end of each horizontal line and during sync. Then you have to tri-state the clamp signal so it doesn't affect the charge outside the frontporch and sync zones. But, with the clamp signal line doubling as ADC feedback, you want a high impedance on the input, so the sigma-delta circuit doesn't have too much gain. But that limits the ability to do the necessary clamping, since the time required to do it is lengthened by the higher input impedance. Moreover, you can't tri-state the clamp/feedback line during acquisition, so it will affect the charge on the cap. I wish I could see a way to reconcile the two requirements, but it escapes me at the moment.


11-12-2011, 03:00 AM
Hi Phil,
My color NTSC camera arrived today and I got my first picture! I wasn't able to get an image for any external resistor value above 100 ohms. I think I'm a bit more colorful in real life, but I'm ecstatic about this first result!

PropScope measurement of input into ovl pin closest to vid pins:

Self portrait:

Phil Pilgrim (PhiPi)
11-12-2011, 03:13 AM

That looks pretty good. Try it with some high-saturation subjects, such as packaged consumer goods and see if the colors come through better.

I've pushed the stack on this project momentarily to pursue the output side of the equation. I'm working on a way to increase the number of colors/shades displayable on a VGA screen without resorting to external memory. This should help the rendering of captured images significantly. At least, that's what I'm hoping.


11-12-2011, 06:49 AM
Hi Phil,
I tweaked the voltage for the camera and resistor to get this image. Red multimeter, blue PropScope and orange RC glider!
I'm using ViewPort to view the result on my PC. That allows me to easily capture screenshots, get pixel values by mousing over a position, see graphs of rows/columns, and perform OpenCV video processing- for example, to find a human face. The screenshot shows the result of the OpenCV face finder; it found the position and size of my face. That data is streamed back to the prop to let the prop do things like steer a robot towards humans or guide industrial equipment to specific colors or shapes.
After I submit my book, hopefully later this week, I'd like to improve the update rate to what the conduit can support.

Phil Pilgrim (PhiPi)
11-12-2011, 07:22 AM
That looks good. I'm working on converting the Spin RGB rendering code to PASM to (hopefully) get a close-to-real-time update rate. It's about half done.


11-12-2011, 10:54 AM
You know, this video capture could be perfect for holding a quadcopter in a fixed location. Get the quadcopter to a GPS location within a few metres. Capture a picture from a camera looking straight down, then detect whether the picture has moved in a certain direction and use that to get an accurate lock. You don't need or want high res video for that application - it just adds to the processing time.

I believe this is the way an optical mouse works. You might even want to decrease the resolution - 16x16 might be enough.

It is very exciting to see you are working on close to real time update rate. This is top-notch work PhiPi!

11-24-2011, 01:10 AM
Hoo !! Rah!!

I have been trying to get this working with an NTSC output instead of VGA

finally some success.

Here are my first before and after shots of a video capture using my "pixelator" platform instead of the "backpack".

This old shopworn "proto board" is going to be used as an oscilloscope to help debug the workings of my last "proto board" to put all of this together.


05-05-2012, 12:26 AM
That looks good. I'm working on converting the Spin RGB rendering code to PASM to (hopefully) get a close-to-real-time update rate. It's about half done.


Thanks Phil for all your great work. I thought I would bump this thread.
How is the rendering improvement coming?

One afternoon I got the idea that one could capture a whole screen and then do the rendering to a BMP file on an SD card.
No display driver needed !

Here are a few captured images, converted to PNG format.

Phil Pilgrim (PhiPi)
05-05-2012, 12:35 AM
'Looks good, Perry!

How is the rendering improvement coming?




05-05-2012, 01:00 AM
I have been working on hue-related drivers.

Here is a version that uses the 8bc_TV driver. I have also tamed my modifications to Eric Ball's NTSC_8bpp driver.

The strategy for decoding the I/Q signals is to build a hue table of expected values and then squeeze the input values to fall inside the table.

I chose 32 vectors because 360/32 = 11.25 so I could rotate the table to get the 30 degree rotation to match NTSC coding.

This worked out well, as the 8bc_TV driver needed a much larger rotation to get the colors correct.

I think it would be nice if someone could replace the VGA daughterboard with other boards, e.g. SD card + TV video driver.

I am going to post my latest Pixelator code on the "stupid video capture" thread even though it only produces an image every 3 seconds.

06-12-2012, 07:42 PM
'Looks good, Perry!




Hi Phil
I have completed a PASM version of my NTSC changes to your capture program, and have started on the VGA version.
Even with PASM rendering it is difficult to get below one second per frame, and the VGA version requires more multiplies.

I added an extra cog to do rendering in SPIN and then tried to convert it to PASM. You can choose which one to use by commenting out the opposite version. During the translation to PASM one can also see options to optimize the SPIN version.

Spin seems to promote variable reads to signed arithmetic. I hope to have these new versions in publishable form in a few days.


06-23-2012, 12:46 AM
Still having problems with understanding when/how Spin promotes to signed values.
My PASM code works for my TV driver (with minor artifacts), but the VGA code is still not usable.

But I found this app note http://www.digitalcreationlabs.com/docs/AN10_digital_video_overview.pdf
that shows a better RGB conversion algorithm.

where ii is Cb
and qq is Cr

and remembering something from my old "2920 Signal Processor Handbook"... you do multiplies by constants with shifts and additions, so I tried this:

' red := yy + (constant(trunc(0.956 * 4096.0)) * ii + constant(trunc(0.620 * 4096.0)) * qq) ~> 12 #> 0 <# 255
' grn := yy - (constant(trunc(0.272 * 4096.0)) * ii + constant(trunc(0.647 * 4096.0)) * qq) ~> 12 #> 0 <# 255
' blu := yy - (constant(trunc(1.108 * 4096.0)) * ii - constant(trunc(1.705 * 4096.0)) * qq) ~> 12 #> 0 <# 255

' yy := 1.164 * (yy - 16)
yy := (yy - 16)
yy := yy + yy~>3 + yy~>5 #> 0 <# 239                            ' 1 + 1/8 + 1/32 = 1.156, close to 1.164
' red := yy + (constant(trunc(1.596 * 4096.0)) * ii) ~> 12 #> 0 <# 255
red := (yy + ii + (ii~>1)) #> 0 <# 255                          ' 1 + 1/2 = 1.5, close to 1.596
' grn := yy - (constant(trunc(0.392 * 4096.0)) * qq - constant(trunc(0.813 * 4096.0)) * ii) ~> 12 #> 0 <# 255
grn := (yy - (qq~>2) - (qq~>3) - (ii~>1) + (ii~>2)) #> 0 <# 255 ' 1/4 + 1/8 = 0.375, close to 0.392
' blu := yy + (constant(trunc(2.017 * 4096.0)) * qq) ~> 12 #> 0 <# 255
blu := (yy + qq + qq) #> 0 <# 255                               ' 1 + 1 = 2, close to 2.017

and got better colors and skin tones on VGA
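The shift-and-add substitutions can be sanity-checked numerically. A quick Python sketch comparing a few of the coefficients against their shift approximations (read off the posted Spin; the green channel's ii terms are omitted since their intended coefficient isn't stated):

```python
# (exact coefficient from the app note, shift-and-add value as coded in Spin)
pairs = [
    (1.164, 1 + 1/8 + 1/32),   # yy + yy~>3 + yy~>5
    (1.596, 1 + 1/2),          # ii + ii~>1
    (0.392, 1/4 + 1/8),        # qq~>2 + qq~>3
    (2.017, 2.0),              # qq + qq
]
# Relative error of each approximation.
errors = [abs(exact - approx) / exact for exact, approx in pairs]
```

All four come in under about 6% relative error, which is well below what a 2-bits-per-channel VGA display can resolve anyway.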


06-28-2012, 04:24 AM
OK, I finally got it working! I searched throughout the code, added debug routines, and what was my problem?

Diddly SPLAT!!!! Yes I had forgotten to put a splat in front of an immediate constant.....

Inside the attached ZIP file there is an MP4 that gives a taste of the code working.
It shows the Propeller being turned on and doing real-time display, followed by the playing of two recorded files.

The test pattern is VIDEO_58.VXR, and VIDEO_37.VXR is a funny commercial from China TV.

The last enhancement I added was an interleaved display strategy, which greatly enhanced speed.

As long as there is little motion the display is marginally OK.

I have been measuring performance in lines rendered per second; the 80MHz VGA does about 177 per second.
It should be able to work on the Backpack in "Show" mode simply by putting a few commented-out lines back into force.

This is actually a video/audio recorder, so a complete implementation would add an SD card and audio input/output circuits.

The project still needs some fine tuning (the Spin capture cog code probably does not work now). I think this shows a model for how to translate Spin to PASM.

The PST terminal shares the last cog with FSRW so only one can be active at a time.


01-31-2013, 08:37 AM
Very interesting, PhiPi.
What do you think: could it be adapted to simple PAL decoding? I'm working on a similar task with a PSoC5, trying to put all the activity inside the chip.

07-09-2013, 05:34 AM
Wow. This thread might be the grail for that elusive Ambi-light-on-one-board so many people have been looking for. "Ambilight" is a Philips trademark.


Right now most everybody uses a PC running BobLight to average color contents in say 2, 4, 8 regions, but some folks go for more!


(talk about overkill!). They send the data from BobLight over a serial/USB link to an Arduino (or other MCU) that controls sets of RGB LEDs mounted behind the TV. But that's a lot of extraneous gear, and it precludes using non-PC video sources. In practice a 4-region (left, right, top, bottom) Ambilight clone using a 4 x 4 RGB frame capture with a 1 Hz sampling rate seems like it would be pretty satisfying. A Prop should be able to do this standalone on any source material.
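The region-averaging step itself is trivial. A Python sketch, assuming the 4 x 4 RGB grid arrives as rows of (r, g, b) tuples (function name and frame layout are illustrative):

```python
def region_means(frame):
    """Average a small RGB frame (list of rows of (r, g, b) tuples)
    into left/right/top/bottom region colors for an Ambilight-style rig."""
    h, w = len(frame), len(frame[0])

    def mean(pixels):
        n = len(pixels)
        return tuple(sum(p[c] for p in pixels) // n for c in range(3))

    left = mean([frame[y][x] for y in range(h) for x in range(w // 2)])
    right = mean([frame[y][x] for y in range(h) for x in range(w // 2, w)])
    top = mean([frame[y][x] for y in range(h // 2) for x in range(w)])
    bottom = mean([frame[y][x] for y in range(h // 2, h) for x in range(w)])
    return left, right, top, bottom
```

Each of the four means would then drive one RGB LED string, updated at the 1 Hz rate suggested above.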

Time to dig into this possibility? I've never used the Prop for video or even soldered in the Delta/Sigma A/D components. If anyone could summarize the rigging of a QuickStart board to start exploring this, I'd be very grateful. The objective is to derive that 4x4 RGB array to drive the Ambi-light LEDs from.

Since composite is the Lowest Common Denominator of video, available even on HD set-top boxes, it stands to reason to use it. For openers, a 4 x 4 grid would be a more than great start: initially use plain averaging of the 4 left-edge cells to drive a left color and another 4 to drive a right, later adding top and bottom, maybe separating into 6 regions. The original Philips ones are rather subtle, nice.


The number of A/Ds aside, would component video be easier or harder to convert? How about adding some simple discrete hardware?