
How to approach software based composite video in P1+

Tubular Posts: 4,622
edited 2014-05-05 16:47 in Propeller 2
I've been thinking this morning about how to approach a composite video driver for the P1+. Chip has mentioned this will need to be done in software, but since we'll have greatly improved MIPS compared to P1, this shouldn't be too much of an issue.

The P3- DE0 emulation we have now can do 80 MIPS, the upcoming P1+ probably 100 MIPS, so anything started now on the existing P3- platforms shouldn't be hard to port across.

I've been reading up on the RGB to YUV space conversion, and its requirement for scaling (multiplications) and blending in the carrier.
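
For reference, the standard-definition (BT.601) weights behind that conversion look like the sketch below. It's plain C in floating point for clarity, not Propeller code; a cog driver would presumably fold this into shifts or lookup tables:

    #include <stdint.h>
    #include <stdio.h>

    /* BT.601 RGB -> YUV weights, per pixel, floating point for clarity. */
    typedef struct { float y, u, v; } Yuv;

    static Yuv rgb_to_yuv(uint8_t r, uint8_t g, uint8_t b)
    {
        Yuv out;
        out.y =  0.299f * r + 0.587f * g + 0.114f * b;   /* luma          */
        out.u = -0.147f * r - 0.289f * g + 0.436f * b;   /* scaled B - Y  */
        out.v =  0.615f * r - 0.515f * g - 0.100f * b;   /* scaled R - Y  */
        return out;
    }

    int main(void)
    {
        Yuv c = rgb_to_yuv(255, 128, 0);                 /* orange, as a test */
        printf("Y=%.1f U=%.1f V=%.1f\n", c.y, c.u, c.v);
        return 0;
    }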

Because I'm new / not an expert in this area, I'm thinking it might be best to use 3 cogs, at least initially:
1 to do the 'U' portion of the signal and carrier and colorburst on the back porch
1 to do the 'V' portion of the signal and carrier
1 to do the Luminance Y signal, add the U and V signal (with carrier pre-calculated), timing, and send the final value to the pins

However, I suspect there are a lot of tricks already employed over the years that I'm not yet aware of. The thing I like about the above approach is there would be plenty of room for lookup tables, and the U and V calculations could run slower and still give some kind of result even if not updated on every pixel.

Then try and work out how to interleave it into a single cog, if that looks possible.

Has anyone else had experience in this kind of area? Any advice would be appreciated.

Comments

  • potatohead Posts: 10,254
    edited 2014-05-03 18:36
    I'm going to take one of the P1 software drivers Eric Ball wrote and port it. Having the DACs and speed will very significantly improve signal quality, resolution and potential colors. A simple lookup for the various values will provide 8-bit color. There are a few out there, including my own. Mine was focused on NTSC and artifact/emulation-related color generation, and I don't think it's suitable, though maybe appealing at some point. This may be one COG, or maybe two, depending on how the color lookups end up.

    One thing we might think about is getting a basic signal done. That will be my first step. Monochrome, interlaced and not. Then PAL. From there, add the colorburst for NTSC and PAL.

    At that point, it's an empty container. How color gets done and modulated can vary.
  • dMajo Posts: 855
    edited 2014-05-03 19:41
    I think you can start directly with S-Video, with separate luminance and chrominance signals; the composite output is then a simple addition of the two.
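
    As a rough sketch of that last step (plain C, with hypothetical 8-bit DAC codes): summing the luma and chroma sample streams is all the external resistive adder does in the analog domain.

        #include <stdint.h>

        /* Composite ~= luma + chroma, sample by sample, clamped to the DAC range.
           The 8-bit levels here are placeholders, not real driver values. */
        static uint8_t composite_sample(uint8_t luma, int8_t chroma)
        {
            int s = luma + chroma;       /* chroma swings around zero */
            if (s < 0)   s = 0;
            if (s > 255) s = 255;
            return (uint8_t)s;
        }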
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2014-05-03 19:52
    I hope this works. There are really only two video modes with currency these days: composite video (e.g. NTSC) and HDMI. The former, because of the availability of cheap monitors; and the latter because it's ... well ... current. VGA seems passé.
  • David Betz Posts: 14,511
    edited 2014-05-03 21:25
    What did I miss here? Doesn't the P1+ have hardware video capability?
  • Roy Eltham Posts: 2,996
    edited 2014-05-03 21:31
    It won't have the NTSC color burst hardware because the new pins don't work the same way. Or something.

    P2 (it's P2, not P1+) will have the ability to do component out (which a lot of TVs have as an input). That can be upconverted to HDMI or whatever you want externally.
  • Lawson Posts: 870
    edited 2014-05-03 22:33
    Tubular wrote: »
    I've been reading up on the RGB to YUV space conversion, and its requirement for scaling (multiplications) and blending in the carrier.

    Because I'm new / not an expert in this area, I'm thinking it might be best to use 3 cogs, at least initially:
    1 to do the 'U' portion of the signal and carrier and colorburst on the back porch
    1 to do the 'V' portion of the signal and carrier
    1 to do the Luminance Y signal, add the U and V signal (with carrier pre-calculated), timing, and send the final value to the pins

    Why have an RGB frame-buffer? Wouldn't it be much faster to have a YUV frame-buffer and do any color conversions when drawing?

    Marty
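
    One possible shape for that (a C sketch with made-up names and sizes, not anything from an existing driver): convert RGB to YUV once, when a palette entry is set, and let the frame buffer hold only palette indices so scan-out never does the multiplies.

        #include <stdint.h>

        /* Hypothetical layout: 8-bit palette indices in the frame buffer,
           YUV stored per palette entry, converted once at set time. */
        typedef struct { uint8_t y; int16_t u, v; } YuvEntry;

        static YuvEntry palette[256];
        static uint8_t  framebuffer[240][320];   /* example resolution */

        static void set_palette_rgb(int index, uint8_t r, uint8_t g, uint8_t b)
        {
            palette[index].y = (uint8_t)( 0.299f * r + 0.587f * g + 0.114f * b);
            palette[index].u = (int16_t)(-0.147f * r - 0.289f * g + 0.436f * b);
            palette[index].v = (int16_t)( 0.615f * r - 0.515f * g - 0.100f * b);
        }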
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2014-05-03 22:46
    Roy Eltham wrote:
    P2 (it's P2, not P1+)
    Sorry, Roy, but for now, it's P1+, since the ill-fated P2 is still rather fresh in memory. Once there's actual silicon, I might be willing to call it the P2.

    -Phil
  • kwinn Posts: 8,697
    edited 2014-05-03 23:13
    Phil Pilgrim (PhiPi) wrote: »
    I hope this works. There are really only two video modes with currency these days: composite video (e.g. NTSC) and HDMI. The former, because of the availability of cheap monitors; and the latter because it's ... well ... current. VGA seems passé.

    Say what???? Every video monitor and television set I look at has a VGA connector. IMHO if anything is disappearing it's composite video. Perhaps there are a lot of small and inexpensive composite monitors available right now but I am betting that will change very quickly.
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2014-05-03 23:24
    kwinn wrote:
    IMHO if anything is disappearing it's composite video. Perhaps there are a lot of small and inexpensive composite monitors available right now but I am betting that will change very quickly.
    It's hard to argue with a two-conductor RCA connector vs. a 15-pin VGA plug. NTSC is far from dead, given the plethora of cheap analog video cams and CCTV monitors. VGA, OTOH, is caught between a rock (NTSC) and a hard place (HDMI). The massive connector alone signs its death warrant in a world dominated by miniaturization.

    -Phil
  • potatohead Posts: 10,254
    edited 2014-05-04 00:22
    I'm a big fan of the NTSC SD signals because they have nice, slow sweep frequencies, which maximizes the time available to do things. :) Monochrome runs up to about 600-800 pixels x 400 or so pixels, depending on the display. We know what composite color does. Component runs the same as monochrome does.

    If we've got killer VGA, which Chip says we will, then doing composite, component, and S-video NTSC color isn't going to be a problem. PAL might, and that one comes down to jitter, same as it did on P1. PAL requires a very precise color reference and phase shift every scan line, whereas NTSC can tolerate very significant variations without serious artifacts. We shall see when we get the FPGA.

    IMHO, a smaller 4-conductor connector does component, and that works all the way up to 1080i, if desired. Nearly all TVs that have higher resolutions than SD have component inputs on them. Interestingly, they don't all have VGA. I see fewer VGA ports on TVs than I do component here in the States. I've not yet seen one that doesn't accept composite. I have seen a few that do not accept S-video.

    Anyway, shouldn't be an issue given P1 can do an all-software NTSC color display -- a pretty nice one actually, with just that 3-pin DAC. Seems to me that same technique Eric Ball did for us should work very nicely with a real DAC. Or some variation on that... Should be one COG, assuming there isn't advanced color management going on.

    @Lawson: Totally. A native color buffer will be fast, and we can do a lookup table on 8 bits and less per pixel too.

    Analog is nice, because it does handle the full HD resolution, and it's free of the DRM / licensing hassles inherent in HDMI. I like the idea of having that packaged up and managed by anybody but us, just feed it clean analog signals and it's going to be tough to tell the difference. A clean component signal appears pixel perfect on the sets I've tried it on.
  • Tubular Posts: 4,622
    edited 2014-05-04 00:50
    potatohead wrote: »
    I'm going to take one of the P1 software drivers Eric Ball wrote and port it. Having the DACs and speed will very significantly improve signal quality, resolution and potential colors. A simple lookup for the various values will provide 8-bit color. There are a few out there, including my own. Mine was focused on NTSC and artifact/emulation-related color generation, and I don't think it's suitable, though maybe appealing at some point. This may be one COG, or maybe two, depending on how the color lookups end up.

    That sounds like a good place to start. But do these need a PLL, or are they bitbanged? Do you have links?
    potatohead wrote: »
    One thing we might think about is getting a basic signal done. That will be my first step. Monochrome, interlaced and not. Then PAL. From there, add the colorburst for NTSC and PAL.

    At that point, it's an empty container. How color gets done and modulated can vary.

    Yes, good idea. For a proof of concept on P1 I even just did a single line repeated over and over. The monitors seemed happy enough with that.
  • Tubular Posts: 4,622
    edited 2014-05-04 00:56
    Lawson wrote: »
    Why have an RGB frame-buffer? Wouldn't it be much faster to have a YUV frame-buffer and do any color conversions when drawing?

    Marty

    That's an interesting idea, Marty. Have them all pre-scaled and everything, and just add the values and apply the color carrier as the final step.

    @all, I need to put out composite video; it's non-negotiable, unfortunately. But as Phil points out, it's a simple two-wire output, and it's second to none for ease of capture, recording, etc.
  • Tubular Posts: 4,622
    edited 2014-05-04 00:59
    David Betz wrote: »
    What did I miss here? Doesn't the P1+ have hardware video capability?

    David, see Chip's first post in his thread, towards the end:
    http://forums.parallax.com/showthread.php/155132-The-New-16-Cog-512KB-64-analog-I-O-Propeller-Chip


  • potatohead Posts: 10,254
    edited 2014-05-04 01:08
    Eric has a collection here: http://forums.parallax.com/entry.php/104-Links-to-Propeller-stuff-I-ve-done

    There are a few techniques. No, they aren't bit-banged in the sense of manually stuffing values into the DACs or toggling pins. Waitvid is used. Yes, they are bit-banged, in that waitvid is used with "pixels" and "colors" that result in signals, including the colorburst, that the silicon would normally do. A couple of these were just enhancements on P1 color / signal characteristics. Others were to best exploit NTSC artifact color, seen in that Nyan Cat thing I did a while back.

    We will just need to see what Chip does with WAITVID in the next FPGA, then the work can start.
    Tubular wrote: »
    For a proof of concept on P1 I even just did a single line repeated over and over.

    I have found it best to work out the whole signal with a basic horizontal test pattern, such as grey bars (simplest), checkerboards, etc... That can be evaluated on a scope and tuned nicely. Once that is all done, adding things one at a time works well. Add a color burst. Then phase shift it, etc...
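
    Something along these lines for the grey-bar line, as a sketch (the sample rate, timings and the sync/blank/white DAC codes are placeholders, not tuned values):

        #include <stdint.h>
        #include <stddef.h>

        /* One monochrome NTSC-ish scan line of 8-bit DAC codes, starting at the
           leading edge of sync; front porch and exact timings are approximate. */
        enum { SAMPLES_PER_LINE = 910,            /* ~14.318 MHz x 63.5 us  */
               SYNC = 0, BLANK = 60, WHITE = 255 };

        static void build_gray_bars(uint8_t line[SAMPLES_PER_LINE], int bars)
        {
            size_t i = 0;
            while (i < 67)  line[i++] = SYNC;     /* ~4.7 us sync tip       */
            while (i < 156) line[i++] = BLANK;    /* back porch             */
            for (; i < SAMPLES_PER_LINE; i++) {   /* active video, bars >= 2 */
                int bar = (int)((i - 156) * bars / (SAMPLES_PER_LINE - 156));
                line[i] = (uint8_t)(BLANK + bar * (WHITE - BLANK) / (bars - 1));
            }
        }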

    http://youtu.be/-gyO2lRXLyg BTW, dig that killer P1 sound, Retronitus by Ahle (miss you man, hope all is well)
  • jmg Posts: 15,148
    edited 2014-05-04 03:49
    Phil Pilgrim (PhiPi) wrote: »
    It's hard to argue with a two-conductor RCA connector vs. a 15-pin VGA plug. NTSC is far from dead, given the plethora of cheap analog video cams and CCTV monitors. VGA, OTOH, is caught between a rock (NTSC) and a hard place (HDMI). The massive connector alone signs its death warrant in a world dominated by miniaturization.

    True, but I have seen some Chinese monitors use Mini/Micro USB connectors for VGA, and they have passive USB-VGA cables.

    I'm not sure if this has a documented standard pin mapping anywhere, but the existence of cable sets suggests a standard?
  • David Betz Posts: 14,511
    edited 2014-05-04 04:51
    Tubular wrote: »
    David, see Chip's first post in his thread, towards the end:
    http://forums.parallax.com/showthread.php/155132-The-New-16-Cog-512KB-64-analog-I-O-Propeller-Chip

    Yes, I know the original list had video support. I just wondered if I had missed a subsequent post that removed it. I know there was a discussion of whether video was needed. Anyway, I'm looking forward to the official P2 feature list. I think Ken said it would be posted soon.
  • Cluso99 Posts: 18,069
    edited 2014-05-04 06:10
    I thought that composite video (NTSC or PAL) was not going to be available on P2.

    I understand that the P1 has some special hw that creates the color signals for both NTSC and PAL. My understanding is that this cannot be done by sw.

    I suggested that maybe the NTSC hw could be added to one cog or to the hub. Composite monitors are plentiful and cheap and IMHO are not going to disappear anytime soon.
  • dMajo Posts: 855
    edited 2014-05-04 07:17
    potatohead wrote: »
    IMHO, a smaller 4-conductor connector does component, and that works all the way up to 1080i, if desired. Nearly all TVs that have higher resolutions than SD have component inputs on them. Interestingly, they don't all have VGA. I see fewer VGA ports on TVs than I do component here in the States. I've not yet seen one that doesn't accept composite. I have seen a few that do not accept S-video.

    You can do separate chrominance/luminance signals with the internal pin DACs to an S-Video connector and then use an S-Video to RCA conversion cable (as far as I know they are simply resistive/capacitive adders of the two signals), or add them on the PCB before going to an RCA connector.
    This approach should make signal generation easier, unify the composite/S-Video drivers, and allow for wider device compatibility.
  • potatohead Posts: 10,254
    edited 2014-05-04 10:51
    Yes, a simple cable handles that nicely. When we get the FPGA and we see what WAITVID looks like, it's go time! :) Maybe that will be the optimal way to get it done too. I'm liking that idea.

    @Cluso, it can and has been done in software on P1. Really, it's just going to be a question of what method makes sense.
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2014-05-04 11:39
    I think the main issue will be picking a crystal for the system clock that comports well with the color burst frequency, since all the outputs are clocked and can't run directly from a PLL.

    -Phil
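
    A quick desk check of that (ordinary C, nothing Propeller-specific; the crystal list and PLL range are just examples): see which crystal-times-multiplier combinations land on an integer multiple of the NTSC burst frequency.

        #include <stdio.h>

        /* Print crystal * PLL combinations whose sysclk is (nearly) an integer
           multiple of the 3.579545 MHz NTSC colorburst. */
        int main(void)
        {
            const double burst = 3.579545e6;
            const double xtals[] = { 5e6, 6e6, 6.5e6, 10e6, 14.31818e6, 27e6 };
            const int nxtal = sizeof xtals / sizeof xtals[0];

            for (int x = 0; x < nxtal; x++)
                for (int pll = 2; pll <= 16; pll++) {
                    double sysclk = xtals[x] * pll;
                    double ratio  = sysclk / burst;
                    double frac   = ratio - (long)(ratio + 0.5);
                    if (frac < 0) frac = -frac;
                    if (frac < 0.001)              /* within 0.001 of an integer */
                        printf("%9.5f MHz x %2d = %8.3f MHz  (%.3f x burst)\n",
                               xtals[x] / 1e6, pll, sysclk / 1e6, ratio);
                }
            return 0;
        }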
  • Ariba Posts: 2,682
    edited 2014-05-04 15:53
    Phil Pilgrim (PhiPi) wrote: »
    I think the main issue will be picking a crystal for the system clock that comports well with the color burst frequency, since all the outputs are clocked and can't run directly from a PLL.

    -Phil

    I think in the P1+ the video shifter will go directly to the DACs, which are not clocked, or perhaps clocked by the PLL. We need an analog output for composite anyway, so "DAC only" is not a problem here (as opposed to a TFT driver).

    About NTSC/PAL generation:

    I think the simplest method to generate an NTSC or PAL signal with colors is to do a waitvid for every pixel with, for example, 8 samples. For every color you want to use you have an 8-sample table that contains the luminance level with the color carrier overlaid. Phase and amplitude of the carrier relative to the colorburst define the color. For PAL we need two tables per color, one shifted by 180°.
    Bitmap drivers should be quite simple if this works.

    Andy
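
    A sketch of building one of those 8-sample tables (plain C; the blank/white levels are hypothetical DAC codes, and it assumes the 8 samples span exactly one subcarrier cycle, i.e. the pixel clock is locked to the burst):

        #include <stdint.h>
        #include <math.h>

        /* Fill one 8-sample entry: luminance with the color carrier overlaid.
           Phase (hue) is measured against the colorburst; amplitude is saturation. */
        static void make_color_table(uint8_t table[8], float luma,   /* 0..1 */
                                     float sat,                      /* 0..1 */
                                     float hue_deg)
        {
            const float PI = 3.14159265f;
            const float blank = 60.0f, white = 255.0f;        /* placeholders */
            float phase = hue_deg * PI / 180.0f;

            for (int k = 0; k < 8; k++) {
                float carrier = 0.5f * sat * sinf(2.0f * PI * k / 8.0f + phase);
                float level   = blank + (luma + carrier) * (white - blank);
                if (level < blank) level = blank;
                if (level > white) level = white;
                table[k] = (uint8_t)(level + 0.5f);
            }
        }

    For PAL, a second, phase-alternated table per color would sit alongside this one, as Andy describes.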
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2014-05-04 16:08
    Ariba wrote:
    I think in the P1+ the video shifter will go directly to the DACs, which are not clocked, or perhaps clocked by the PLL.
    This is an important piece of information/speculation. Can anyone confirm, deny, or expand upon it?

    -Phil
  • potatohead Posts: 10,254
    edited 2014-05-04 16:13
    It's one piece I'm waiting for too.
  • jmg Posts: 15,148
    edited 2014-05-04 16:23
    Ariba wrote: »
    I think in the P1+ the video shifter will go directly to the DACs, which are not clocked, or perhaps clocked by the PLL.
    Depends which PLL you mean. AFAIK in the P1+ there is no local PLL, just a single PLL for SysCLK, so everything is SysCLK-based.

    Phil Pilgrim (PhiPi) wrote: »
    I think the main issue will be picking a crystal for the system clock that comports well with the color burst frequency, since all the outputs are clocked and can't run directly from a PLL.

    Yes, the any-integer PLL setting will allow base-burst crystals, and also the 4x burst crystals.
    (I guess that PLL N can be > the present 16, maybe 6-8 bits?)
  • tonyp12 Posts: 1,950
    edited 2014-05-04 16:43
    Many 8-bit game consoles used 14.3 MHz as the base frequency for the whole system.
    To generate the chroma sine in software, we could use one of those (very common) crystals with a Prop2 16x PLL for a sysclk of 229.09 MHz.
    At a sysclk like that, which should be around the upper limit of the P2, you should probably get all the colors NTSC can show.

    A 27.000 MHz crystal (used in PAL/NTSC DVD players, an exact multiple of the PAL and NTSC line frequencies) with an 8x PLL = 216 MHz sysclk could also be good.
    [Attachment: syncgen.gif]
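
    Checking those two options against the NTSC numbers (a throwaway C snippet, nothing more):

        #include <stdio.h>

        /* Clocks per color-subcarrier cycle and per scan line for the two
           candidate sysclks mentioned above. */
        int main(void)
        {
            const double burst  = 3.579545e6;         /* NTSC subcarrier, Hz */
            const double line_s = 63.5556e-6;         /* NTSC line period, s */
            const double sysclks[] = { 14.31818e6 * 16, 27.0e6 * 8 };

            for (int i = 0; i < 2; i++)
                printf("sysclk %.5f MHz: %7.3f clocks/subcarrier cycle, "
                       "%7.1f clocks/line\n",
                       sysclks[i] / 1e6, sysclks[i] / burst, sysclks[i] * line_s);
            return 0;
        }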
  • Tubular Posts: 4,622
    edited 2014-05-04 17:31
    Ariba wrote: »
    I think in the P1+ the video shifter will go directly to the DACs, which are not clocked, or perhaps clocked by the PLL. We need an analog output for composite anyway, so "DAC only" is not a problem here (as opposed to a TFT driver).

    About NTSC/PAL generation:

    I think the simplest method to generate an NTSC or PAL signal with colors is to do a waitvid for every pixel with, for example, 8 samples. For every color you want to use you have an 8-sample table that contains the luminance level with the color carrier overlaid. Phase and amplitude of the carrier relative to the colorburst define the color. For PAL we need two tables per color, one shifted by 180°.
    Bitmap drivers should be quite simple if this works.

    Andy

    Interesting idea, Andy. But wouldn't you also need the system clock to be a multiple of the colorburst frequency, so you don't introduce a phase shift depending on which horizontal pixel you're outputting?

    I'm assuming for now we don't have separate video PLLs. If we do, it'll all be easier.
  • potatohead Posts: 10,254
    edited 2014-05-04 17:58
    Using the technique Andy just mentioned, the easiest answer would be to simply generate the colorburst through the same technique.

    One very nice advantage of that is precise alignment of the pixel clock to the color burst. This allows for artifact colors as well as ones generated normally.
  • Cluso99 Posts: 18,069
    edited 2014-05-04 18:26
    potatohead wrote: »
    @Cluso, it can and has been done in software on P1. Really, it's just going to be a question of what method makes sense.
    Without using the video generator???
    Do you have a link?
    BTW I have never bothered to understand the color generation for either NTSC or PAL.
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2014-05-04 18:31
    Cluso99 wrote:
    BTW I have never bothered to understand the color generation for either NTSC or PAL.
    It's not terribly difficult: hue is the phase of the color modulation relative to the color burst, saturation is the AC level of the modulation, and brightness is the "DC" level relative to "black." It's really quite ingenious, since it still fits in the allocated channel width and provides backward compatibility with B/W receivers, which simply ignore the color subcarrier.

    In the P1, the video PLL is 16x the color burst frequency, which allows for 16 distinct hues. Color modulation is done by wiggling the video output by one LSB. Although this results in fairly low color saturation, you can achieve "supersaturation" by wiggling it between max brightness and overflow to zero (sync level). Because this results in inverted modulation, hues are shifted by 8 places (180 degrees).

    -Phil
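
    In formula form, that description (idealized, arbitrary units, plain C rather than anything P1-specific) is just a DC brightness plus a burst-referenced sine:

        #include <math.h>

        /* Composite level above black at time t: brightness (DC) plus a
           subcarrier whose amplitude is saturation and whose phase offset
           against the colorburst is the hue.  On the P1, hue comes in 16
           steps of 22.5 degrees, per the 16x burst PLL described above. */
        static double composite_level(double t, double brightness,
                                      double saturation, double hue_deg)
        {
            const double PI   = 3.14159265358979;
            const double f_sc = 3.579545e6;           /* NTSC subcarrier, Hz */
            return brightness + saturation * sin(2.0 * PI * f_sc * t
                                                 + hue_deg * PI / 180.0);
        }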
  • tonyp12 Posts: 1,950
    edited 2014-05-04 18:34
    >understand the color generation for either NTSC or PAL.
    Step 1: Create a burst of color outside the left viewable area; this will be the reference.
    Step 2: You keep a sine wave going for color, and the height (amplitude) of this sine determines the saturation you want.
    Step 3: You offset (phase-shift) this sine wave relative to the reference color burst to send the color (hue) you want.
    Step 4: Mix this in (analog sum) with the black & white / sync information = composite video.

    P.S. With Step 3 it's not nice to jump to the opposite side of the color wheel, as that phase shift will really mess up the sine wave.
    So gradual color changes of at most 45° per pixel look better.
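
    The same modulation can also be written in quadrature (U/V) form, which ties back to the RGB-to-YUV discussion earlier in the thread; a one-line sketch in C:

        #include <math.h>

        /* hue/saturation as phase/amplitude is the same signal as U and V summed
           against two carriers 90 degrees apart:
           amplitude = sqrt(u*u + v*v), hue = atan2(v, u). */
        static double chroma_sample(double u, double v, double carrier_phase)
        {
            return u * sin(carrier_phase) + v * cos(carrier_phase);
        }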

    If we want to use external analog ICs (but we'd prefer not to), we could do something like the C64:

    The VIC-II used the 14.31818 MHz master clock input (4 times the NTSC color burst frequency of 3.579545 MHz) to produce quadrature square-wave clocks. These clock signals were then integrated into triangle waves using analog integrators. The triangle waves were then integrated again into sine waves (actually rounded triangle waves, but good enough for this application). This produced a 3.579545 MHz sine wave, inverse sine wave, cosine wave and inverse cosine wave.

    An analog summer was used to create the phase-shifts in the Chroma signal by adding together the appropriate two waveforms at the appropriate amplitudes. The Color Palette data went to a look-up table that specified the amplitude of the waves by selecting different resistors in the gain path of the summer. The end result was that we could create any hue we wanted by looking at the NTSC color wheel to determine the phase-shift and then picking the appropriate resistor values to produce that phase-shift.

    Color Saturation was controlled by scaling the gain of the summer. When we picked the resistor values to determine the output phase-shift, we also scaled them to produce the desired output amplitude. Luminance was controlled using a simple voltage divider which switched different pull-down resistors into the open-drain output. We could create any Luminance we wanted by choosing the desired resistor value.