So what's the deal with NTSC video and the Propeller 2?
cbmeeks
Posts: 634
in Propeller 2
I'm sure this has been discussed a million times but I simply couldn't find the time to read the entire, gigantic thread on the prop 2. lol
Last thing I read was that the prop 2 wasn't going to have any color generation in hardware? Instead, it's going to use 4 DACs (IIRC) on each cog for VGA?
I once generated NTSC color with a 20MHz PIC and some multiplexers. So, I'm sure it's *possible* on the beast P2. But I'm the kind of guy that hoards CRT TVs because they look better with my Ataris. :-D
Anyway, for those of us who still want that niche feature, how will it more than likely be done? I'm especially concerned with the chroma modulation.
Thanks.
Comments
The P2 is VGA / DAC only. There are no specialized chroma circuits in this one. In fact, the specialized WAITVID has been replaced by a combination of features.
Color TV drivers will need to be software.
Next step for me is to get a signal done with the new features and proceed from there. I'll personally be doing a monochrome signal to start. Once that and pixels are sorted, color can be added. There are a couple of pretty easy ways I can think of to do that. The super easy one is to do it Apple 2 style. All that requires is a monochrome driver with consistent colorburst and phase added. But it's going to be resolution limited. In Atari speak, think "ANTIC E" and you've got it about right: 160 pixels in the "safe area".
Eric Ball did some software chroma on the P1, and that one seems more flexible, and can offer up a better overall resolution. ~228 pixels or so.
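For anyone wanting to sanity-check those pixel counts, here's a rough back-of-envelope sketch in plain C (not P2 code), using the standard NTSC numbers; the 85% "safe area" fraction is just an assumption to illustrate where ~160 comes from:

/* Back-of-envelope NTSC timing, assuming standard broadcast numbers:
 * colorburst f_sc = 3.579545 MHz, line period = 63.556 us,
 * active video roughly 52.6 us, "safe area" roughly 85% of that. */
#include <stdio.h>

int main(void)
{
    const double f_sc      = 3.579545e6;   /* NTSC color subcarrier, Hz   */
    const double line_us   = 63.5556;      /* total scanline period, us   */
    const double active_us = 52.6;         /* visible portion, approx     */
    const double safe      = 0.85;         /* overscan-safe fraction      */

    double clocks_per_line   = f_sc * line_us   * 1e-6;  /* ~227.5 */
    double clocks_per_active = f_sc * active_us * 1e-6;  /* ~188   */

    printf("color clocks per full line : %.1f\n", clocks_per_line);
    printf("color clocks, active video : %.1f\n", clocks_per_active);
    printf("1 pixel per color clock, safe area : ~%.0f pixels\n",
           clocks_per_active * safe);
    return 0;
}

One pixel per color clock over the active line lands right around the Apple 2 / ANTIC E figure, and ~227.5 color clocks per full line is where the ~228 number comes from.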
To get above that, like the earlier chip revision did in hardware, will require more work. I don't know how the current 50MHz clock stacks up against that work. And a big part of that is I need to step through that process to understand what will be needed too. It's got a lot of fast, 8 bit DACs. Worst case, one COG reads pixels while another one builds the signal a scanline at a time.
What I do know, is the first two things I mentioned are capable of a lot of color and intensity levels. Should work out pretty great. Hopefully, others may chime in at some point, and we see what can happen.
For example, if you look at the Sinclair Vega (ZX Spectrum emulated on an ARM microcontroller), you see one chip do the Z80 emulation and the NTSC/PAL video generation. Apparently good enough to run 1,000 games with no issues.
I think there is an external RAM chip and something to handle the SD card but if I'm not mistaken, the ARM does almost all of it.
We're not talking apples to apples comparisons here, but I wonder how the ARM is doing it? I mean, if you just had raw MIPS to throw at it, I wonder what it would take to drive color in software?
I have to imagine that the P2 will eventually get some awesome NTSC color drivers going (especially with people like you and Eric Ball pushing that thing).
The TV monochrome signal is the part below the colorburst frequency. The part above that is the color information. Create the signal, drive it through the DAC. It's all just one signal, and an 8 bit DAC is plenty to make a great quality one.
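As a rough illustration of "it's all just one signal", here's a hedged C sketch mapping the classic NTSC IRE levels onto an 8-bit DAC; the scaling choice (sync tip at code 0, 100 IRE white at full scale) is an assumption, not a P2 requirement:

/* Rough mapping of classic NTSC composite levels onto an 8-bit DAC.
 * Scaling assumption: sync tip (-40 IRE) = code 0, white (+100 IRE) = code 255,
 * so the full 140 IRE span uses the whole DAC range. */
#include <stdio.h>

static unsigned char ire_to_dac(double ire)
{
    double code = (ire + 40.0) * 255.0 / 140.0;   /* -40..+100 IRE -> 0..255 */
    if (code < 0.0)   code = 0.0;
    if (code > 255.0) code = 255.0;
    return (unsigned char)(code + 0.5);
}

int main(void)
{
    printf("sync tip  (-40.0 IRE) -> %3u\n", ire_to_dac(-40.0));
    printf("blanking  (  0.0 IRE) -> %3u\n", ire_to_dac(0.0));
    printf("black     (  7.5 IRE) -> %3u\n", ire_to_dac(7.5));
    printf("burst +/- ( 20.0 IRE) -> %3u / %3u\n",
           ire_to_dac(20.0), ire_to_dac(-20.0));
    printf("white     (100.0 IRE) -> %3u\n", ire_to_dac(100.0));
    return 0;
}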
I've not done it that way yet, so I'm learning about how to do that, and ways, and then what fits into the features we've got on this chip now.
The Speccy is a simple color case too. Only 15 colors. A more general, "full color" chroma modulation is what I'll be after, once I get the basics working. With the nice DACs, why not?
I've also got a keen interest in component TV video. That still supports the nice, slow horizontal sweep and color modulation isn't required. Fun times ahead! I'm taking it a little bit at a time as I get chunks of time to work on the P2 FPGA.
I remember before the propeller existed, I migrated from my PIC to an SX48 and added a few multiplexers and was able to get some colors. That was so much fun. :-)
Anyway, I look forward to seeing what you create. I'm curious about the component though. I'm not too familiar with how it works.
After S-Video I jump directly to VGA. How is component handled? You're talking about those 5 (IIRC) connections that most devices don't use? LOL (j/k).
I do remember it being better than S-Video though.
You should be able to do the same at least on P2.
I'd expect Component Video to come first, and then some clever work around Composite Video (PAL or NTSC) using another COG.
Picture-image level modulation is likely to be compromised, but old video game level should be possible, e.g. 15 colours at full saturation.
The REP opcode will allow phase-set SW playback of a series of small tables of pre-calculated modulation depth. (Maybe the streamer can do something similar?)
It is likely this will need a colour-related SysCLK, and the present 50MHz (no PLL) is not going to work too well.
At TV frequencies, it's a three channel signal. The main channel (Y), carries the sync and all the luma info. Basically, a black and white composite TV input, a useful and easy thing right there. The other two (Pb, Pr) are color difference signals. Unlike RGB, only one signal is needed for grey scale, and also unlike RGB, things like saturation and tint are simple transforms of the color difference signals. Some tricks are possible too, like say driving the color at a different resolution. (I think doing that might be useful, but I've yet to do it. Just didn't get there on the older P2. )
Unlike S-video, color info can be clocked at the same rate as luma or detail info is. So, you can get a really nice, pixel perfect display at higher horizontal resolutions too. The end result is like TV VGA. Most sets are monitor like when displaying component video.
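For the colour-difference math, a minimal C sketch of a Rec. 601 style RGB to Y'PbPr conversion (the 0-255 output scaling and mid-scale centering of Pb/Pr are assumptions for illustration; sync would be added on the Y channel only):

/* Sketch of an RGB -> Y'PbPr conversion (Rec. 601 coefficients), producing
 * three values you could hand to three DACs. */
#include <stdio.h>

typedef struct { unsigned char y, pb, pr; } ypbpr_t;

static ypbpr_t rgb_to_ypbpr(unsigned char r, unsigned char g, unsigned char b)
{
    double R = r / 255.0, G = g / 255.0, B = b / 255.0;

    double Y  = 0.299 * R + 0.587 * G + 0.114 * B;     /* luma, 0..1        */
    double Pb = 0.564 * (B - Y);                        /* blue difference   */
    double Pr = 0.713 * (R - Y);                        /* red difference    */

    ypbpr_t out;
    out.y  = (unsigned char)(Y * 255.0 + 0.5);          /* 0..255            */
    out.pb = (unsigned char)((Pb + 0.5) * 255.0 + 0.5); /* centered at ~128  */
    out.pr = (unsigned char)((Pr + 0.5) * 255.0 + 0.5);
    return out;
}

int main(void)
{
    ypbpr_t c = rgb_to_ypbpr(255, 128, 0);              /* orange test pixel */
    printf("Y=%u Pb=%u Pr=%u\n", c.y, c.pb, c.pr);
    return 0;
}

Tint and saturation then really are simple transforms: saturation scales Pb/Pr about their centre, tint rotates them.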
On the earlier P2, I did some component video testing, and it's an excellent alternative to VGA, and it's found on many TV sets, and as far as I know, PAL / NTSC signal frequency compliant. (My two devices will take both frequencies and display them just fine.)
Doing this might be an alternative to a composite PAL signal. We had good PAL on the earlier P2, have never had good PAL on P1 due to timing jitter, and who knows on this one? PAL signal tolerances are an order of magnitude pickier than NTSC's. Analog NTSC displays will gladly display anything even close. Most digital ones are flexible too, though I notice my Apple //e signal (which is a really crappy, non-standard signal) won't display on some digital displays I've tried. Same for the Atari, which is a tiny bit better, but still not a standard signal.
Supporting component is a bit more work than a simple RGB VGA setup, but it does have the advantage of accepting the TV sweep frequencies. And based on the results we saw on the earlier P2, worth doing.
Yes, Component is better than S-Video (and composite) because it skips a couple of modulation steps and also skips the notch and bandpass filters on the TV side, so it has much higher pixel bandwidth.
Luminance & sync (i.e. monochrome TV) are carried on one cable, and there is no sound carrier or chroma carrier.
Two other wires carry the colour-difference signals, which swing + and -, and the blanking period clamps to 0.
Again, no carrier signals.
Hardware that can do VGA, and especially VGA-Sync-On-Green, should be able to also do Component Video.
https://en.wikipedia.org/wiki/Component_video
The downside is it needs 5 RCA connections (Y/Pb/Pr plus stereo audio), but I've been looking for cables that simply remap other connectors.
Choices look to be:
HDMI - 5x RCA
HD15 - 5x RCA
Maybe it makes sense to use some variation of the P1 circuit to push the DAC values into the active video pixel range for TV output too.
Even if we don't, there are still something like 160 grey values possible. On component, that's a nice color space. On composite, it's all gonna depend on the chroma. Maybe a few thousand colors?
50MHz isn't pretty. Chip mentioned some tricks to manage this. Will be interesting to explore those. Maybe we can get a build at a close, but much better, frequency, or add a crystal to trigger an interrupt, etc... Since we don't have the nice PLL option, I wonder if this isn't something the smart pins can contribute to? I'm not clear on what they can or are supposed to do. Worth some thought when we get there.
It's all more of a PITA at this stage than P1 is, or the "hot" chip was, but once it's done, actually making pixels and all the higher level driver things should go pretty easy. In that regard, this is a nice design!
Maybe I'll make some progress this weekend, when I've got another block of time to jam on the FPGA. Chip might have some more docs by then too. I don't understand the various DAC modes well yet.
It uses the streamer, and while we don't know what other modes are available right at the moment (docs), what we have there should be good for component video.
For composite NTSC, the streamer could be used to output a really nice quality colorburst, but modulating further along the same line (active video) is tricky, because the phase shifts from pixel to pixel. Perhaps there will be some modulation assistance from other streamer modes, perhaps not. I agree with aiming for component for now.
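To make the "phase shifts from pixel to pixel" point concrete, a small C illustration (not P2 code) of the per-sample composite equation; the 50MHz/7 sample rate is just an assumed stand-in for whatever the sysclk allows:

/* Why the chroma phase shifts from pixel to pixel: each output sample is
 * luma + sat*sin(2*pi*f_sc*t + hue), and unless the sample rate is locked to
 * a neat multiple of f_sc, the subcarrier phase lands somewhere different on
 * every sample. */
#include <math.h>
#include <stdio.h>

#define PI 3.14159265358979323846

int main(void)
{
    const double f_sc   = 3.579545e6;         /* NTSC subcarrier            */
    const double f_samp = 50.0e6 / 7.0;       /* assumed DAC sample rate    */
    const double luma   = 0.5;                /* mid grey                   */
    const double sat    = 0.2;                /* chroma amplitude           */
    const double hue    = 103.0 * PI / 180.0; /* hue = phase vs. the burst  */

    for (int n = 0; n < 8; n++) {             /* first few active samples   */
        double t     = n / f_samp;
        double phase = fmod(2.0 * PI * f_sc * t + hue, 2.0 * PI);
        double s     = luma + sat * sin(phase);
        printf("sample %d: phase %6.1f deg, level %.3f\n",
               n, phase * 180.0 / PI, s);
    }
    return 0;
}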
So, from what I'm hearing, we're looking at at least two cogs for old-school NTSC/PAL color signals? The P2 is going to have 16 cogs, right? I guess that isn't too bad.
It also sounds like what we're having to do with the P2 is pretty much what the Arduino/ARM/PIC guys have to do and that is generate these signals manually. Or am I missing something?
Either way, no worries from me. I do hope that the P2 makes video easier, however.
Oh, BTW, what kind of FPGA dev kit do I need to play around with the P2 version that is out? Could I get a smaller kit if I could live with fewer cogs?
Thanks.
The 123 board makes the most sense. Some people have that NANO board, and it does two COGs, but to get DAC emulation, one also needs the add-on board.
You should ask Ken about those add-on boards. I'm not sure if any are left, or will be made, or what.
The signals always get made manually. WAITVID + the PLL made that easy, because each "pixel" could be a fraction of the base color timing. Everything is built off that. P1 didn't need chroma modulation circuits, but it's nice that it has them.
Now we don't have the PLL. But we do have streaming, timers, and great DACs on all the pins. I think the way to go is to use the timer interrupts to make the signal, which replaces WAITVID for sync, etc... and then use the streamer to make pixels. Then chroma ends up in there somehow, as discussed above.
That would cover terminal/text/graphics/palette type use, but images remain a challenge.
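As a very rough sketch of that interrupt-driven approach, here's some C pseudocode of a scanline state machine; dac_write(), arm_timer_us() and start_streamer_for_active_pixels() are hypothetical placeholders, not real P2 calls, and the durations are just the usual NTSC numbers:

/* Sketch of the "timer interrupt builds the signal" idea as a scanline state
 * machine.  Everything here is hypothetical scaffolding standing in for
 * whatever the real chip provides. */
#include <stdint.h>

enum line_state { FRONT_PORCH, SYNC_TIP, BACK_PORCH_BURST, ACTIVE_VIDEO };

static enum line_state state = FRONT_PORCH;

/* Hypothetical hardware hooks -- not real P2 calls. */
extern void dac_write(uint8_t level);
extern void arm_timer_us(double microseconds);
extern void start_streamer_for_active_pixels(void);

void scanline_isr(void)
{
    switch (state) {
    case FRONT_PORCH:                 /* 1.5 us at blanking level       */
        dac_write(73);  arm_timer_us(1.5);  state = SYNC_TIP;          break;
    case SYNC_TIP:                    /* 4.7 us at sync level           */
        dac_write(0);   arm_timer_us(4.7);  state = BACK_PORCH_BURST;  break;
    case BACK_PORCH_BURST:            /* 4.7 us; colorburst goes here   */
        dac_write(73);  arm_timer_us(4.7);  state = ACTIVE_VIDEO;      break;
    case ACTIVE_VIDEO:                /* ~52.6 us of pixels, streamer   */
        start_streamer_for_active_pixels();
        arm_timer_us(52.6);                 state = FRONT_PORCH;       break;
    }
}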
A run-length coded font has a higher cost for small fonts but less so at large fonts.
There's also the LUT, and an NCO driving the rate at which the streamed pixels are output, and this happens largely in the background, leaving the foreground free for other functions.
I'm thinking about what kind of mode is required to generate composite signals, as there's a lot more action per pixel needed. We found on Prop2-hot that real sine waves make perfect composite video signals. The challenge is how to exploit a 512x32 lookup RAM to make a good mix of scaled and amplitude-offset sine patterns, maybe 16 byte samples long, that can be played at any phase offset, at any NCO rate, for some period of time.
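To show what one of those pre-calculated patterns might look like, a hedged C sketch that builds a single 16-sample "luma+chroma" entry (amplitude from saturation, DC offset from luma, hue coming from the playback start index); the 0-255 DAC scaling is assumed:

/* Sketch of one pre-computed pattern along the lines Chip describes:
 * 16 byte samples of one subcarrier cycle, amplitude set by saturation,
 * DC offset set by luma.  Hue then comes from which index playback starts at. */
#include <math.h>
#include <stdio.h>
#include <stdint.h>

#define PI 3.14159265358979323846

static void build_pattern(uint8_t out[16], double luma, double sat)
{
    /* luma and sat in 0..1; keep luma +/- sat within DAC range */
    for (int i = 0; i < 16; i++) {
        double s = luma + sat * sin(2.0 * PI * i / 16.0);
        if (s < 0.0) s = 0.0;
        if (s > 1.0) s = 1.0;
        out[i] = (uint8_t)(s * 255.0 + 0.5);
    }
}

int main(void)
{
    uint8_t pat[16];
    build_pattern(pat, 0.55, 0.20);           /* one colour's pattern */
    for (int i = 0; i < 16; i++)
        printf("%3u%c", pat[i], i == 15 ? '\n' : ' ');
    /* Playback at phase p would read pat[(p + n) & 15] for sample n. */
    return 0;
}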
Cool beans on the docs. I'm back on the FPGA this weekend.
And thanks. I know those are hard. We appreciate it.
Things are a little easier if the PLL is some Burst multiple, as that avoids needing live calculation of sine.
What is the time overhead for live sine calculations?
16x Burst is 57.27 MHz (NTSC) or 70.93 MHz (PAL).
If REP can replay a small block of LUT, 16 lines would allow a user-selected number of cycles of playback, and the phase would be set by the line index values.
Faster playback, slightly longer palette phase-update.
That would take care of saturation and intensity, and the overall system clock to NTSC/PAL frequency difference, leaving the "chroma" problem of where to start within that 16-sample sequence vs. position across the scan line. However, I believe that's a solvable NCO problem.
You'd get 32 colors this way. A 16-color mode, with 32-sample LUT sequences, might also be an interesting alternative, because you could always start in samples 0-15 and never have the glitch going from sample 31 to 0 (or 15 to 0 with the 32-color version). What effect that 'rollover glitch' would have is hard to know, need to test it.
I don't think this would be expensive in logic, in effect you're just messing with the 4/5 LSBs at the point where they look up the LUT entry, and adding an NCO to track the starting point within the 16-sample sequence for each pixel.
It wouldn't be perfect - there would be a slight (1/16th or 1/32nd?) chroma error from one pixel to another, plus a small glitch where the LUT sample rolls over and the sine wave 'jumps', but these artefacts could well be minor.
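A quick C model of that NCO idea (illustration only, pixel rate of 50MHz/7 assumed): a fixed-point accumulator advances by (f_sc/f_pixel) x 16 samples per pixel, the top bits pick the LUT start index, and the truncated fraction is exactly that ~1/16-sample chroma error:

/* Fixed-point NCO tracking where in the 16-sample subcarrier table each
 * pixel should start.  Increment is (f_sc / f_pixel) * 16 in 8.24 format. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const double f_sc  = 3.579545e6;
    const double f_pix = 50.0e6 / 7.0;                  /* assumed pixel rate */

    /* samples (of 16) the subcarrier advances per pixel, as 8.24 fixed point */
    const uint32_t inc = (uint32_t)((f_sc / f_pix) * 16.0 * (1 << 24) + 0.5);

    uint32_t acc = 0;                                   /* reset each line    */
    for (int pixel = 0; pixel < 8; pixel++) {
        unsigned index = (acc >> 24) & 15;              /* LUT start sample   */
        double   error = (acc & 0xFFFFFF) / (double)(1 << 24); /* lost phase  */
        printf("pixel %d: start index %2u, phase error %.3f samples\n",
               pixel, index, error);
        acc += inc;                                     /* advance subcarrier */
    }
    return 0;
}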
We could just load up the streamer as it is now, with some pre-calculated test cases, to see how good/bad it would look, were it to be implemented.
There are some very economical patches we could make to the streamer to have it loop within limited LUT ranges, but there's also a separate matter of timing pixels.
Today, I'm working on the Google Doc file to explain what we've already got.
OK so far, but it does take time to prepare these 'LC-palettes'.
I think you mean 32 x 16 LUT tables, or palettes, of Luma+Chroma settings.
The colour definition within those palettes can be quite good - DAC limited. One would be used for the Burst?
Not so sure about this bit - I think the playback does need to be Chroma locked, and that's likely best done with a SysCLK that is Chroma-N related.
The NCO can allow a non-integer /N, but it would be phase-reset every scan line - not doing that could give strange beat effects.
The HW already support this Phase reset, I believe.
This looks like a [LUT.16]++ or [LUT.8]++ -> DAC form of REP?
Do you mean bringing back the same way prop 1 handles color??
Sorry, but I thought I knew something about NTSC and TV signal generation until I started reading this thread. You guys really know this stuff!
BTW, does anyone else find it awesome that the designer of the Propeller is named "Chip"? lol
This would not need to have 16 copies
Sounds good,
Such a LUT form of REP would also drop code size, and to further compact code it would be great if a wait or parallel execute was possible.
ie Set LUT to replay N repeats of Block index P, and while that was happening, be able to run the code to ready the next block index.
Block content changes would be during flyback+border
Maybe all COGs could get this, and some COGs get the more comprehensive modulator?
To be fair, you guys on this forum have had greater impact on the design than I have had. I'm quite happy being your chipper implementer, anyway.
Yes, you can drive VGA displays already, just by using RDFAST-block-wrap interrupts to issue FBLOCK commands to steer the streamer.
Maybe some code examples of what is possible now would help ?
I can see a WAITRLE, but that is aligned to "Set whenever location $1FF of the lookup RAM is read." - close, but not quite a generic small-table LUT REP?
The semantics of the use of Streamer may need a qualifier, as there is HUB-LUT/COG streaming and then LUT to Pin Streaming.
Maybe name by Source-Destination, letter tags viz
LP stream is from LUT to DAC/Pin (etc)
IMHO there are just some things that are not worth being in every cog, but would be fantastic to have 1 of!!!
That would be awesome, Chip, and it would make things a lot easier, especially for keeping the colours equal across all display types.
Only if it's not too big and distracting/time-consuming though.