
Propeller Backpack: COLOR NTSC Capture

24 Comments

  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2011-10-21 07:32
    Tharkun,

    The dual MOSFET used on the Backpack is a Rohm UM6K1NTN. It is characterized for gate drives as low as 2.5V. This is important. If the chosen MOSFET requires too high a gate drive, it will not switch efficiently with the Prop. However, for the circuit to work in this app, you can leave out the MOSFET altogether and simply make a connection between where the two sources were.

    -Phil
  • Duane Degn Posts: 10,588
    edited 2011-10-21 09:11
    Phil,

    You can do this with just some caps and resistors?

    As Gordon said above, I'd like to see the ability to find color blobs.

    I'd really like the Propeller to handle all the math on this. Is there a way of freeing the Prop from the PC?

    What I'd like to see is the Prop computing and displaying the images you've shown. It would be nice to be able to add graphics/text on top of the displayed image.

    Would the captured image be small enough to transmit over an XBee or Nordic module? How many times have we seen people want to use their XBees to transmit a video signal? Perhaps with your image capture one could send the image over a wireless connection. Not necessarily at a full 30 fps, but maybe a frame every couple of seconds.

    This is amazing stuff. Thanks for documenting it so well.

    Duane
  • ericball Posts: 774
    edited 2011-10-21 09:41
    ... decided to try a trig identity for angle differences ...

    Reading this last night got me to thinking and I realized you could use a dot product to extract U & V. I've done some calculations, but let's start with the RGB to YUV calculations. Assuming RGB are in the range 0...1:
    Y = 0.299R + 0.587G + 0.114B
    U = 0.492(B-Y)
    V = 0.877(R-Y)
    
    These are then scaled for output. Sync is -40 IRE, blank is 0 IRE, black is 7.5 IRE, white is 100 IRE, burst is -U at 40 IRE p-p, and U & V are scaled to 185 IRE p-p. So, working backwards:
    -U = 40 * ( i.burst * i.pixel + q.burst * q.pixel ) / ( 185 * sqrt( i.burst * i.burst + q.burst * q.burst ) )
    -V = 40 * ( i.burst * q.pixel - q.burst * i.pixel ) / ( 185 * sqrt( i.burst * i.burst + q.burst * q.burst ) )  (I think; it might be +V.)
    Y = 40 * y.pixel / (92.5 * y.sync)
    
    R = Y + 1.140251 * V
    G = Y - 0.394731* U - 0.580809 * V
    B = Y + 2.032520 * U
    
    Note: in NTSC I and Q refer to the original modulator phases which are rotated 33 degrees WRT U & V.
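
    For what it's worth, here's a minimal C sketch of that dot-product demodulation (floating point and untested; the sample names i_burst, q_pixel, etc. are just illustrative):

        #include <math.h>

        /* Recover Y, U, V from quadrature samples of the pixel and the
           color burst, using the IRE scaling worked out above. */
        void demodulate(double i_burst, double q_burst,
                        double i_pixel, double q_pixel,
                        double y_pixel, double y_sync,
                        double *Y, double *U, double *V)
        {
            double mag = sqrt(i_burst * i_burst + q_burst * q_burst);

            /* Dot product with the burst vector gives -U; the cross
               product gives -V (or +V, per the caveat above). */
            *U = -40.0 * (i_burst * i_pixel + q_burst * q_pixel) / (185.0 * mag);
            *V = -40.0 * (i_burst * q_pixel - q_burst * i_pixel) / (185.0 * mag);
            *Y = 40.0 * y_pixel / (92.5 * y_sync);
        }

    RGB then follows from the matrix above.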
  • GordonMcComb Posts: 3,366
    edited 2011-10-21 09:44
    Is there a need to transmit digitized RGB video wirelessly? You might as well transmit the analog video directly from the camera using a cheap transmitter/receiver. The quality would be better (no pixelization) even on a low-cost transceiver. This isn't saying anything about Phil's capture, just that you're already an extra generation down, plus you're working with a low-resolution digitization.

    In any case, XBee even at its highest speed would be pretty slow, and you'll need to encode the transfer with something that at least detects errors, if not corrects for minor data loss. Bluetooth might handle it better.

    -- Gordon
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2011-10-21 10:05
    Duane,

    "You can do this with just some caps and resistors?"

    Yup, but they have to be located very close to the Propeller chip.

    "As Gordon said above, I 'd like to see the ability to find color blobs."

    There's a possibility that that can be done.

    "I'd really like the Propeller to handle all the math on this. Is there a way of freeing the Prop from the PC?"

    Yes. By using the trig identities for angle differences, no trig functions are necessary. However, I would still not expect real-time capture plus color conversion.

    "What I'd like to see is the Prop computing and displaying the images you've shown."

    'Probably not gonna happen. Those images are essentially 90H x 238V pixels in size. Even compressed to one byte per pixel, we're looking at 21420 bytes, which doesn't leave much RAM for anything else. Is one byte per pixel even reasonable? It may be. The luma depth is about five bits, but that could be reduced to four for 16 gray levels. That leaves four bits for chroma, which would most efficiently be alternated between I and Q values from pixel to pixel. But that's an impossible representation for computing display pixels in real time. The Prop's composite video output wants phase angle (i.e. atan2(Q, I) ) for chroma, and VGA wants RGB.
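
    To make that packing concrete, here's a hypothetical C sketch (the nibble layout and the offset-8 signed chroma encoding are my assumptions, not a spec):

        #include <math.h>
        #include <stdint.h>

        /* High nibble: 4-bit luma. Low nibble: chroma sample, alternating
           I (even pixels) and Q (odd pixels), stored offset by 8. */
        uint8_t pack(int luma4, int chroma4)      /* luma4 0..15, chroma4 -8..7 */
        {
            return (uint8_t)((luma4 << 4) | ((chroma4 + 8) & 0x0F));
        }

        /* The Prop's composite driver wants the chroma *phase*, which
           takes an atan2 over each I/Q pixel pair -- the costly step. */
        double chroma_phase(uint8_t even_px, uint8_t odd_px)
        {
            int i = (int)(even_px & 0x0F) - 8;
            int q = (int)(odd_px  & 0x0F) - 8;
            return atan2((double)q, (double)i);   /* radians */
        }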

    "Would the captured image be small enough to transmit over an XBee or Nordic module?"

    No image is too big to transmit over a wireless channel. It all depends on how long you're willing to wait to receive it. :)
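
    (For scale: over an assumed effective 100 kbps link, the 21420-byte image above would take roughly 21420 * 8 / 100,000 ≈ 1.7 seconds per frame, and an XBee's usable throughput is typically well below that.)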

    IMO, acquiring an image using this method just so you can look at it (whether wirelessly or on a PC) is a waste of time and effort. There are better ways to do both. The only real benefit comes when the Propeller is actually able to do something with the image data it acquires, e.g. interpreting a barcode, tracking a color blob, etc. That's when things get interesting!

    -Phil
  • Duane Degn Posts: 10,588
    edited 2011-10-21 10:13
    Is there a need to transmit digitized RGB video wirelessly?

    Will the connection to an analog video transmitter mess up the Prop's ability to read the signal? Won't this add a load to the signal line?

    I have a nice video transmitter myself, but I've seen a lot of (well, I can remember three) people ask about transmitting video over their XBees. One reason I think of sending the captured image is that, I would think, the Propeller would have an easier time displaying a low-resolution image.

    Instead of XBees, I'd prefer to use nRF24L01+ Nordic modules. They can transmit at 2 Mbps and have CRC error detection.

    If adding an analog video transmitter won't mess up the Prop's ability to capture the image, then just some information about color blob size and location (transmitted with XBee or Nordic) could be used to overlay data on the image (with a second Backpack).

    Duane
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2011-10-21 10:23
    Eric,

    There's no reason to convert to YUV space first, since you can get RGB directly from YIQ using the formula in my first post:

    R = 1.969 Y + 1.879 I + 1.216 Q
    G = 1.969 Y - 0.534 I - 1.273 Q
    B = 1.969 Y - 2.813 I + 3.354 Q

    Per the trig identity,

    I = k ( pixel(i) burst(q) - pixel(q) burst(i) )
    Q = -k ( pixel(q) burst(q) + pixel(i) burst(i) )

    I'm not sure yet where the minus sign in the Q expression comes from, but making that change is what made the yellows come out right, instead of being pink.
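
    As a sanity check, the whole per-pixel computation fits in a few lines of C (a sketch only -- k and the sample names are placeholders, and scaling/clamping is omitted):

        /* Demodulate one pixel via the angle-difference identity, then
           apply the YIQ-to-RGB matrix from the first post. */
        void yiq_to_rgb(double y, double pix_i, double pix_q,
                        double burst_i, double burst_q, double k,
                        double *r, double *g, double *b)
        {
            double I =  k * (pix_i * burst_q - pix_q * burst_i);
            double Q = -k * (pix_q * burst_q + pix_i * burst_i);

            *r = 1.969 * y + 1.879 * I + 1.216 * Q;
            *g = 1.969 * y - 0.534 * I - 1.273 * Q;
            *b = 1.969 * y - 2.813 * I + 3.354 * Q;
        }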

    -Phil
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2011-10-21 10:28
    Duane Degn wrote:
    Will the connection to an analog video transmitter mess up the Prop's ability to read the signal? Won't this add a load to the signal line?
    Maybe. It depends on the load impedance of the transmitter's input. If it's 75 ohms, you will want to decrease the value of the Backpack's external series resistor. Just be sure to split the transmitter's signal off before it reaches the Backpack board.

    -Phil
  • Duane Degn Posts: 10,588
    edited 2011-10-21 10:29
    IMO, acquiring an image using this method just so you can look at it (whether wirelessly or on a PC) is a waste of time and effort. There are better ways to do both. The only real benefit comes when the Propeller is actually able to do something with the image data it acquires, e.g. interpreting a barcode, tracking a color blob, etc. That's when things get interesting!

    -Phil

    Phil,

    Thanks for answering my questions. I agree, this isn't a good way to "look at" an image. I'm thinking about showing this to others. It's one thing to say the Prop is tracking a color blob, but wouldn't it be cool if you could also see the object the Prop is detecting? I think this ability would add a lot to robot demonstrations.

    When I show people my robot using Hanno's video capture method, they don't seem to care very much that the robot can see something; they want to see what the camera sees too. I can kind of show them this with a 120-pixel LED array, but it would be cool to be able to display a color image.

    I've got to figure out how to display an image using external memory (subject for a different thread). This would solve the problem of storing a large image.

    I'm looking forward to seeing your code. I think having the ability to identify colored markers could aid a lot in robot navigation.

    (BTW, I hadn't seen your post #36 when I wrote post #37.)

    Duane
  • Duane Degn Posts: 10,588
    edited 2011-10-21 10:34
    Maybe. It depends on the load impedance of the transmitter's input. If it's 75 ohms, you will want to decrease the value of the Backpack's external series resistor. Just be sure to split the transmitter's signal off before it reaches the Backpack board.

    -Phil

    Thanks Phil. Time to order a second Backpack so I can overlay blob data on the transmitted image.

    Duane
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2011-10-21 14:10
    Here's an idea of what could be accomplished with reduced resolution and pixel depth:

    [attached image, 341 x 238]

    The capture size would be about 92 x 68 pixels, which results in approximately-square pixels. This requires capturing every seventh line, which means every 14th line from the even field and, interleaved with those, every 14th from the odd field. The pixel depth is eight bits: four for luma and four for chroma, with adjacent pairs of pixels sharing the chroma values. IOW, one pixel would carry the I value, and its neighbor, the Q value. That's 6256 bytes of image data altogether, but not in a form that's readily displayed by either the NTSC or VGA drivers. (BTW, the thumbnail just about "nails" the native resolution.)
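
    In pseudo-C, the line schedule looks something like this (capture_line is a hypothetical helper, and the odd-field offset of 3 field lines follows from "every 14th frame line per field" -- treat the exact offsets as approximate):

        #include <stdint.h>

        void capture_line(int field, int line, uint8_t *dest);  /* hypothetical */

        void capture_frame(uint8_t buf[68][92])
        {
            for (int row = 0; row < 68; row++) {
                int field = row & 1;                          /* alternate even/odd fields      */
                int fline = (row / 2) * 7 + (field ? 3 : 0);  /* every 7th line within a field  */
                capture_line(field, fline, buf[row]);
            }
        }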

    -Phil
  • Hanno Posts: 1,130
    edited 2011-10-21 15:00
    Great stuff!
    You don't need a lot of pixels to use artificial markers for navigation/robot control - 15x15 pixels is fine for a lot of tasks - just look at what we use for operating system icons.
    Even a small picture is worth a thousand words!

    The nice thing about a software grabber is that it can be flexible - zooming into particular areas or colors as needed.
    As an example, the ViewPort grayscale grabber dumps 200x240 4-bit pixels into memory for "hi-res" images that can be streamed to the PC or analyzed in memory. It can also dump 1/4 that size and use the other 3/4 for computer vision "registers" for complex manipulations, like finding a bar code pattern in a cluttered environment. This still left enough memory and cogs for a balancing robot using a Kalman filter.
    As I've posted previously- I'd love to update this with your color algorithm- my backpack and camera are standing by!
    Hanno
  • Dr_Acula Posts: 5,484
    edited 2011-10-21 15:43
    This requires capturing every seventh line

    That sounds interesting. Would it be possible to change that value? Could you capture one frame every 7th line starting at line 0, then another frame every 7th line starting at line 1, etc., and then do some image manipulation to improve the resolution?
    but not in a form that's readily displayed by either the NTSC or VGA drivers

    I've been doing lots of work with image manipulation in vb.net - it should be possible to change any format to any other format, e.g. scaling, pixel averaging, dithering, etc. It's harder to do on the Prop, but if you can get it working in C on a PC, then maybe port it to Catalina C?
  • GordonMcComb Posts: 3,366
    edited 2011-10-21 16:25
    The low-res capture, which is excellent by the way, has me thinking of a new form of line following, capable of differentiating not only line width and embedded patterns, but colors as well. PVC electrical tape is commonly available in a wide variety of colors. With white LEDs to illuminate it, you could create all kinds of interesting courses where robots are programmed to follow lines of certain colors, rejecting others ("follow the yellow brick road!!").

    I know this is all done with a Backpack, which is a terrific stand-alone product, but I'm also seeing this as an all-in-one board. Ideally the camera, mounted on the back side of the board (or a sandwiched-board affair), would use a mini screw lens, allowing different types of lenses to be exchanged depending on the application. I'm not sure what focal length is on the camera used in Joe's laser range finder, but that would be a good one to start with. Parallax basically has the entire BOM for this in-house already.

    -- Gordon
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2011-10-21 21:00
    Here's an idea of what the image would look like in the Propeller's native 6-bit RGB VGA color space:

    [attached image, 341 x 238]

    -Phil
  • Dr_Acula Posts: 5,484
    edited 2011-10-22 02:56
    And this is what it looks like on a 4" LCD TV screen using the Propeller TV palette. The photo is a little washed out compared to the real screen - the red is redder on the screen.

    I'm still amazed this works - keep up the good work.
    [attached photo, 640 x 480]
  • Perry Posts: 253
    edited 2011-10-22 11:23
    I'm thinking the Propeller can do this in real time, for both NTSC and VGA:

    Use separate cogs for color and intensity input.

    Store the data in byte-interleaved format as I+Q, Y+Y, I+Q, Y+Y.

    Since there are only 256 possible inputs for the color conversions,
    calculate them all in advance and save a lookup table for VGA or NTSC.

    For such low resolutions the output video cog can use some delay registers and table lookups to reformat for desired output.
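
    Something like this, in C terms (vga_color here is a stand-in for whatever conversion the math above settles on, not real code):

        #include <stdint.h>

        /* Stand-in: map one packed capture byte to an output color. */
        static uint8_t vga_color(uint8_t packed)
        {
            return (uint8_t)(packed >> 2);   /* placeholder mapping */
        }

        static uint8_t lut[256];

        /* Build the table once at startup... */
        void build_lut(void)
        {
            for (int i = 0; i < 256; i++)
                lut[i] = vga_color((uint8_t)i);
        }

        /* ...then the render loop is one indexed load per byte. */
        void render(const uint8_t *in, uint8_t *out, int n)
        {
            for (int i = 0; i < n; i++)
                out[i] = lut[in[i]];
        }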

    I even have dump-screen-to-SD-BMP-file code that Rayman wrote that I hope to try!

    Possible?

    Perry
  • RinksCustoms Posts: 531
    edited 2011-10-22 19:15
    Truly a most clever and unbelievable feat!
    Never thought anything like this would be achievable with the Prop 1... Just when you think all the secrets of the Prop have been unlocked... Remarkable, PhiPi!

    This would probably be one of the top downloaded objects!
  • Cluso99 Posts: 18,069
    edited 2011-10-22 21:27
    Truly a most clever and unbelievable feat!
    Never thought anything like this would be achievable with the Prop 1... Just when you think all the secrets of the Prop have been unlocked... Remarkable, PhiPi!

    I believe there is still a huge amount to be discovered about what the Prop can do. Really, the counters have so many possibilities - remember, we actually have 16 of them, each with so many different modes. And then with the multicores, we have lots of parallel-processing capabilities.

    Phil is just showing us some of those capabilities. Put a lot of the different parts he has done together, and this chip is capable of some awesome things.

    Keep it up, Phil. Your work is so inspiring, and your explanations and diagrams make it look so simple.
  • Baggers Posts: 3,019
    edited 2011-10-23 03:31
    WOW, PhiPi, that's mightily impressive work, matey! Well done!
    I've barely (not at all, really) had time to do any prop stuff lately :( but I have to pop on the forum once in a blue moon to try and keep up with the amazing progress that has been going on!
    This however is freaking amazing!
    Will look forward to seeing this in action for real!
    My mind is buzzing with many ideas for uses for it!

    Well done!
  • Dr_Acula Posts: 5,484
    edited 2011-10-23 04:14
    Yes my mind is still buzzing too.

    The theory says that it is too hard to increase the number of pixels/color resolution on the Prop because it is not fast enough to calculate the phase, and hence the color.

    Yet here is PhiPi who went from B&W to color with some extremely clever software techniques.

    First thing I am thinking - well if you are sampling every 7th pixel, and if you have 8 cogs that can run in parallel...

    I'm still intrigued by the idea of capturing an entire screen in an SRAM chip, like a storage CRO, then clocking it out later with a fast (?external) clock for a better-resolution picture. Having seen PhiPi's real-life waveform for a picture, it doesn't look so hard. Just a bunch of sine waves, right? And a very fast D-to-A. I know someone earlier said that the phase-angle difference was 22ns and it is a 55ns SRAM, but even so: say you took a sine wave and stored it, then took the same sine wave at a slightly later phase angle and stored it. For the first one the first sample value might be, say, 50 of 255, and for the second one, say, 70 of 255; run that through a 6 MHz low-pass filter and it ought to create a sine wave with a delayed phase.

    I'm intrigued by PhiPi's work. I also have to say I am very impressed, because in all the research I have done on NTSC signals, I have never found anything so clearly explained as PhiPi's waveform overlaid on a picture, with the associated description. At last I am starting to understand how NTSC stores the three values of H, S and L, rather than just the two parameters of H and L.

    Keep up the good work!
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2011-10-25 00:25
    'Finally got around to doing the color computations in Spin and needed a way to see the result sans PC. So I built a VGA daughterboard for the Backpack using a Proto-DB, a DB15 connector, and some resistors; but then I had to push the stack to deal with some resistor issues:

    http://forums.parallax.com/showthread.php?135385-Better-VGA-DAC-resistors

    I'm using Kye's 6-bit VGA color driver. ('Can't say enough good things about it; it just works. Thanks, Kwabena!) Anyway, each RGB component has to be scaled and reduced to two bits, which isn't a lot of dynamic range. Still, though, the shades and colors are recognizable when rendered at 80 double-wide pixels x 120 lines:

    [attached image, 648 x 486]
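
    The component reduction itself is trivial; something like this C fragment (the bit order shown is an assumption -- check Kye's driver for the actual layout):

        #include <stdint.h>

        /* Squeeze 8-bit R, G, B down to two bits each for the Prop's
           6-bit VGA color space. */
        uint8_t to_vga6(int r, int g, int b)   /* inputs 0..255 */
        {
            return (uint8_t)(((r >> 6) << 4) | ((g >> 6) << 2) | (b >> 6));
        }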

    I need to clean up and comment the source code. Then I'll post it here so people can experiment. After that, I'll recode the rendering in PASM so it doesn't slow the capture so much; right now it's on the order of seven seconds.

    -Phil
  • Rayman Posts: 13,897
    edited 2011-10-25 06:19
    Is that all done on a Prop now, without the PC connection?
    Can you make it 1 second by reducing the resolution?
  • Humanoido Posts: 5,770
    edited 2011-10-25 08:33
    This is a perfect starting point for Big Brain image recognition. In another post, Phil described a very neat algorithm for image recognition: measuring the image by angles and comparing with stored data to find a match.
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2011-10-25 09:06
    Rayman,

    That's all done on the Prop now. The only thing I'm using PST for is to trigger the capture. Rather than reducing the resolution to speed things up, I'm going to try doing the computations in PASM. The way I'm doing them now in Spin is pretty inefficient.

    -Phil
  • potatohead Posts: 10,254
    edited 2011-10-25 09:16
    Right on. All on the Prop. I distinctly remember this being on the list of "it can't be done" things now being done. When you do get PASM sorted, please consider also releasing the SPIN version so people can see and manipulate the calcs easily. Might be slow, but it is accessible.
  • Rayman Posts: 13,897
    edited 2011-10-25 09:17
    Wow, if you can get that resolution in the 1 Hz range, that would really, really be impressive...

    Still, I think a lower resolution would be more useful for robot vision...

    But, now that I think about it, if the idea is to save to SD or transmit to PC, then I suppose higher resolution is better...
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2011-10-25 09:25
    Rayman,

    I could easily drop the resolution back to 90 x 68, which results in squarish pixels. I just wanted to see how much resolution I could squeeze into Kye's 160 x 120 VGA driver.

    -Phil
  • Rayman Posts: 13,897
    edited 2011-10-25 09:55
    Great. Then, one can think about blob detection and maybe eye detection...
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2011-10-25 12:14
    Attached is the first color capture source installment. It requires a VGA board plugged into the Backpack. Or you could try duplicating the Backpack's passive array on a Prop Proto board and use its VGA connector. Just be sure to keep the passives close to the Propeller chip. (Frankly, if starting from scratch, the Backpack's circuitry could be modified a bit to optimize it for capture. A DC restorer ahead of the ADC would be the first place I'd start. But that's another project altogether. Maybe I should include that circuitry in the MoBoProp. :) )

    -Phil