Propeller as a Renderer II
Hey Guys,
In my first 'Renderer' post, I was using the Propeller alongside an Intel 8088 processor. The purpose of the Prop was to read from the VRAM in my circuit and display it as an NTSC video signal. Since then, I have moved to a different processor: the Z80, which will be clocked at 6 MHz. The purpose of the Prop is the same.
Before I waste any time trying to use the Prop as a GPU with the Z80, I would like to know if it is within the Prop's capabilities. I will try to use a 3-3-2 RGB color space. Because the Prop works with 32 bits at once, is it still possible for it to generate a color signal with only 8 bits?
Comments
There are really two problems here. One is getting pixel data from your CPU memory space. That data can be anything. If you are using bytes, then what you can do graphically all depends on how many of those bytes you can fetch during a scan line.
Let's say it's 40, which is a perfectly reasonable DMA expectation.
Those 40 bytes could be big, fat pixels, where each pixel is one of 256 colors, or they could be lots of little pixels, where each bit of those bytes (320 bits total) is one of two colors.
The other problem is in the Propeller: mapping the bytes to graphics and colors. How far you can take that depends on the scan rate of the display device and the overall compute power of the Propeller.
If you plan on just using the reference video circuits, you can get 64 VGA colors mapped to your incoming pixel bytes. The easiest path is to map them at 2 bits per pixel, yielding 160 pixels per scan line, with each pixel being one of 4 possible colors. Lots of schemes are possible. If you were to use one nibble per pixel, that would be 80 pixels at 16 colors per pixel, with the actual colors being determined in the Propeller.
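The arithmetic behind those trade-offs is easy to sanity-check. Here is a trivial Spin sketch (the function name is mine, purely illustrative):

PUB pixels_per_line(bytes_fetched, bits_per_pixel) : pixels
  ' 40 bytes at 2 bpp -> 160 pixels, at 4 bpp -> 80, at 8 bpp -> 40
  pixels := bytes_fetched * 8 / bits_per_pixel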
Finally, if you don't want to use the reference video circuits, it's possible to drive video in lots of ways, with the trade-offs generally being the number of pins and the pixels per line, depending on what is done for color depth.
Edit: This is a bit out of scope for your post, but I've been reading up on the Apple ][ computer I just got. Plan on interfacing a Propeller with it.
In the Apple, two clock phases were used to allow video DMA to happen without interrupting the CPU. With a fast RAM, it's highly likely the same could be done with the Z80, allowing byte fetches every cycle, if desired, without slowing the CPU down. What happens is the early phase is used by the video, or the Prop in your case, to get its byte, leaving the later phase to the CPU, leveraging the fact that the CPU only latches its data at specific times. Something to think about.
If you want a reference to the text and timings, I'll post it up here. The Z80 won't align with that, but the overall concept might. I just don't know much about Z80s.
Your big trouble will be reading the data in fast enough.
e.g., is R=2, B=2, G=2, I=0 equal to R=0, B=0, G=0, I=2?
RGBI actually does produce more colors, because you have 4 intensity levels for each combination of RGB values. Yes, there are a couple of cases where intensity settings overlap the RGB-only variant, but remember that the two-bit DAC for I is equal to the other three DACs yet drives three 75-ohm loads rather than just one, so the equation changes due to the parallel load.
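To put a number on that parallel-load point (assuming the three 75-ohm terminations really do load the I DAC in parallel, as described above): 75 / 3 = 25 ohms, so the I DAC works into a third of the load each of the R, G, and B DACs sees, and its per-step contribution to the output voltage scales accordingly.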
By reference video circuits, do you mean the hardware configuration with already written TV drivers? I could definitely use something to refer to.
Apples used 400 ns RAMs and were able to interleave the video with the CPU at 1 MHz. Z80s use more cycles, so that's probably about the same as a 2 to 3 MHz Z80, in terms of speed and scope. Much faster RAMs are to be had these days. It would be very nice to decouple that DMA from your CPU. That way, the CPU runs at a consistent clock, which may prove useful for a lot of stuff, depending on what you want that computer to look like.
For the video generator itself, I'd recommend reviewing http://propeller.wikispaces.com/Video+Generator
Eric has some nice composite templates you can use. Highly Recommended, if you roll your own video system.
Correct. VSCL counts down the number of PLLA clocks before shifting the pixel data (PixelClocks) and the number of PLLA clocks before reloading the pixel & color data, the VSCL counters, and releasing WAITVID. For composite video, synchronization signals are generated via "special" color values (0 = sync); for VGA, usually two of the bits/pins are tied to the H & V sync lines and again "special" color values change the pins appropriately.
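In PASM that boils down to something like this sketch (the timing values are illustrative only, and the VCFG/PLL setup a real driver needs is omitted; VSCL holds FrameClocks in bits 31..12 and PixelClocks in bits 11..0):

                mov     vscl, vscl_active       ' (FrameClocks << 12) + PixelClocks
                rdlong  pixels, line_ptr        ' 16 x 2-bit pixel indexes from hub RAM
                waitvid colors, pixels          ' blocks until the previous frame finishes

vscl_active     long    ((16 * 8) << 12) + 8    ' 16 pixels, 8 PLLA clocks per pixel
colors          long    $05_AC_8A_02            ' four 8-bit color values (made up)
line_ptr        long    0                       ' assumed to be set up elsewhere
pixels          res     1

The sync portion of a scan line would do the same WAITVID dance, just with the "special" sync color values mentioned above.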
Yes, the heart of video generation is very high speed and precisely timed and this has to be done in assembly. Because of the high throughput needed, some video drivers use more than one cog (processor), typically with the cogs alternating between loading up a line buffer or several scan lines of information, then doing the video generation for what just got loaded. Such cogs have to be precisely synchronized and that's straightforward to do with the Propeller. Best is to look at existing video drivers and how they manage this.
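The hand-off between the cogs is often just a shared mailbox in hub RAM. A rough sketch (the names, buffer size, and build_line routine are all hypothetical):

VAR
  long  line_request            ' scan line the signal cog wants next
  long  line_buffer[40]         ' one scan line of pixels in hub RAM

PRI graphics_loop | last
  ' runs in its own cog: rebuild the shared buffer every time the
  ' signal cog advances line_request
  last := -1
  repeat
    repeat until line_request <> last
    last := line_request
    build_line(last, @line_buffer)   ' hypothetical drawing routine

The signal cog does the mirror image: bump line_request, then WAITVID its way through line_buffer.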
One item of note - the composite output of the Propeller video generator is based on 6 levels of grey and 16 hues (plus grey). You'd need a 256 entry color lookup table to translate from 3-3-2 RGB to Propeller hue+luma (and the results probably won't be satisfactory).
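If you wanted to try it anyway, the 256-entry table could be built once at startup by nearest-match against whatever palette of Propeller color values you settle on. A sketch, with a deliberately tiny placeholder palette (every value in the DAT tables below is made up):

CON
  PAL_SIZE = 4                  ' tiny illustrative palette

VAR
  byte  lut[256]

PRI build_lut | i, r, g, b, j, d, best, bestd
  repeat i from 0 to 255
    r := ((i >> 5) & %111) * 255 / 7    ' expand 3-bit red to 0..255
    g := ((i >> 2) & %111) * 255 / 7    ' expand 3-bit green
    b := (i & %11) * 255 / 3            ' expand 2-bit blue
    bestd := posx                       ' largest positive long
    repeat j from 0 to PAL_SIZE - 1
      d := (r - pal_r[j])*(r - pal_r[j]) + (g - pal_g[j])*(g - pal_g[j]) + (b - pal_b[j])*(b - pal_b[j])
      if d < bestd
        bestd := d
        best := j
    lut[i] := pal_prop[best]            ' nearest Propeller color value

DAT
pal_r     byte  0, 85, 170, 255         ' placeholder 4-level grey ramp,
pal_g     byte  0, 85, 170, 255         ' purely illustrative
pal_b     byte  0, 85, 170, 255
pal_prop  byte  $02, $04, $05, $07      ' corresponding composite bytes (made up)

At display time the driver then does a single lut[] lookup per incoming byte, so the cost of the search is paid only once.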
Have you run the template? It is a complete signal-only reference, containing color bars as something to fill the active graphics area with.
If you wanted to do a bitmap, as an example, you would set up a pointer to the screen RAM, and loops for a given scan line and for the active lines in the display.
I've used that template to build this driver:
http://forums.parallax.com/entry.php?129-16-Color-Potatotext-8x8-NTSC-TV-Driver
It is a text display, using two COGs. Two COGs are needed for that one because of the character density and the color lookups needed for 16-color text definitions.
The primary COG is "the signal cog", and it gets its data one scan line at a time from "the graphics cog", which looks at the Propeller HUB memory, building a line of graphics in time for the signal cog to render it to the display.
There are several code examples in that driver that you may find helpful. A scan line based signal COG could be fed graphics from your system that are fetched by a cog or two interacting with the Z80, for example.
http://www.parallax.com/dl/docs/prod/prop/PropDemoDschem.pdf
...and remove the case statement, or set the variable before it gets executed to point to the demo board case.
If that does not work, I would recommend establishing a known-good video signal case with the Parallax drivers supplied with the Propeller Tool download. graphics_demo.spin is set up for the Demo Board by default. Continue with that one until you get both a signal and a known-good signal, ideally connected to the display device to verify.
This is important to do because writing and debugging video drivers is not trivial. Best to have a solid working case, then change one thing at a time to author the desired driver, or make configuration changes.
How does a high-level driver (a graphics driver for example) interact with the low-level driver (a TV driver for example) that it is meant to work with assuming that the two drivers are two separate spin files? That is, which data bridges the two drivers?
How could I perform graphics routines without using a higher-level abstraction?
You pass the address of a common bitmap buffer to both objects / drivers.
You just set bits in the bitmap buffer to define the color of corresponding pixels on the screen.
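In sketch form, using the standard tv and graphics objects shipped with the Propeller Tool (the tile counts and buffer size here are illustrative):

VAR
  long  bitmap[16 * 12 * 16]    ' shared bitmap buffer (size illustrative)

OBJ
  tv : "tv"                     ' low-level signal driver
  gr : "graphics"               ' high-level drawing routines

PUB start
  gr.setup(16, 12, 0, 0, @bitmap)   ' parameters per graphics.spin
  ' the tv driver's parameter block would point at the same memory
  ' (exact layout per the tv object's documentation), so both objects
  ' end up working on the same hub RAM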
Andy
Typically the PASM driver code is wrapped in a simple SPIN start routine which takes a pointer to a sequence of LONG parameters (that then gets passed along to the PASM code). One or more of the parameters are pointers to the data structures used by the driver. This may be a simple bitmap buffer and a color table, or a pointer to a tile array, tile bitmaps, and a color table, or a pointer to a character screen buffer and a pointer to the character bitmaps, or a pointer to a sprite table and a pointer to sprite data. It all depends upon the driver, which in turn depends upon your particular requirements. Higher-level functions may be provided by additional SPIN routines in the same object, or by a separate SPIN object (e.g. Graphics.spin) which is then paired with an appropriate video driver.
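A minimal sketch of that wrapper pattern, with a made-up two-long parameter block (real drivers define their own layout):

VAR
  long  cog
  long  params[2]

PUB start(bitmap_ptr, colors_ptr) : okay
  params[0] := bitmap_ptr              ' pointer to pixel data
  params[1] := colors_ptr              ' pointer to color table
  okay := cog := cognew(@entry, @params) + 1

PUB stop
  if cog
    cogstop(cog~ - 1)

DAT
        org     0
entry   mov     t1, par                ' PAR holds @params
        rdlong  bm_ptr, t1             ' first long: bitmap pointer
        add     t1, #4
        rdlong  cl_ptr, t1             ' second long: color table pointer
loop    jmp     #loop                  ' the real video loop would live here

t1      res     1
bm_ptr  res     1
cl_ptr  res     1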
I have been wondering: in the driver demo, what is the difference between display_base and bitmap_base? Are these the pointers to the bitmap buffer and the screen?
As shipped by default with the Propeller Tool, that driver is set up double-buffered: one page to draw on, one to display from.
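The usual per-frame flow with that setup looks roughly like this (mirroring what graphics_demo.spin does; the drawing calls are placeholders):

PUB frame_loop
  repeat
    gr.clear                     ' wipe the off-screen page (bitmap_base)
    ' ... gr.plot / gr.line / gr.text calls build the new frame here ...
    gr.copy(display_base)        ' copy the finished frame to the display page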
I've got a good explanation of how that all works (tiles, colors, etc.) in my blog here:
http://forums.parallax.com/entry.php?86-Ripple-Draw
Single-buffer techniques are demonstrated there, as well as some methods to change tiles, colors, etc.