I found out yesterday that my TV-VGA converter also does this on the Sega Genesis; looks like they made the same "mistake" with regard to their video hardware.
I thought about building a circuit with op-amps on the VGA side that would clock the pixels in when they are lit, sample the value, and hold it while the pixel is black, but it would be a very complex circuit.
What about running the video through a filter to smear it a bit? You never know, it might just be really easy. Might get rid of the black elements of the display, but it won't enable the mixing.
IMHO, the mistake is in the VGA capture device. It makes too many assumptions.
256 colors would make for better graphics! But do you suppose it would be possible to make a game on the next console that André makes, using the new Propeller chip, at a resolution of 640 x 480 over VGA, without using all the cogs?
BTW: How many colors would be possible on VGA if we changed this program a bit to work with VGA?
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔ Realize that I am really a mad scientist... and don't forget it!
Bob: The unusual colors that pop out from combining adjacent pixels are a side effect of the somewhat complicated way NTSC stuffs lots of information into a fairly low-bandwidth signal. Essentially by putting too many different color pixels too close together, you spoof the TV hardware into coming up with a different color than either of the scrunched together pixels. VGA sends all the signals on separate lines, has a much higher usable bandwidth, and a simple encoding, so VGA pixels' colors are "pure" and don't smear like NTSC pixels. VGA signals are straightforward enough that whatever colors it has available by default in the Hydra hardware (I can't remember at the moment how many that is), those are pretty much exactly the colors you'll get.
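To make that concrete, here's a toy numerical sketch (Python, and not NTSC-accurate: the four-samples-per-cycle rate, the amplitudes, and demodulating only one axis are all assumptions). It shows the key effect Bob describes: luminance that alternates at the subcarrier rate demodulates as chroma, so the TV "sees" a color that isn't really in the signal.

```python
import math

# Toy model: NTSC recovers color by demodulating the composite signal
# against the ~3.58 MHz subcarrier. If pixel luminance alternates at that
# same rate, the demodulator reads it as chroma -- an artifact color.
SAMPLES_PER_CYCLE = 4

def chroma_amplitude(signal):
    """Correlate the signal with one subcarrier phase (one chroma axis)."""
    acc = 0.0
    for n, s in enumerate(signal):
        acc += s * math.cos(2 * math.pi * n / SAMPLES_PER_CYCLE)
    return 2 * acc / len(signal)

flat = [0.5] * 16                       # solid mid-gray: no chroma
striped = [1.0, 0.5, 0.0, 0.5] * 4      # pixels alternating at subcarrier rate

print(chroma_amplitude(flat))      # ~0: the TV sees plain gray
print(chroma_amplitude(striped))   # clearly nonzero: the TV invents a color
```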
However, VGA does have a high enough resolution to employ dithering, which is where you use a pattern of two different colors and rely on the human eye to smear them. In the case of dithering, it doesn't produce a "surprise" color, it just resembles the expected mix of the two hues or shades, but the dithering pattern is visible if you get up close.
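A minimal sketch of that dithering idea, with an assumed checkerboard pattern and a crude averaging model standing in for the eye's smearing:

```python
# Checkerboard dithering between two RGB colors. The pattern choice and
# the "perceived" averaging model are illustrative assumptions, not a
# description of any particular driver.

def dither_checkerboard(color_a, color_b, width, height):
    """Fill a buffer with color_a and color_b in a checkerboard pattern."""
    return [[color_a if (x + y) % 2 == 0 else color_b
             for x in range(width)]
            for y in range(height)]

def perceived_color(buffer):
    """Crude model of the eye's smearing: average each channel."""
    pixels = [px for row in buffer for px in row]
    n = len(pixels)
    return tuple(sum(px[c] for px in pixels) // n for c in range(3))

buf = dither_checkerboard((255, 0, 0), (0, 0, 255), 8, 8)
print(perceived_color(buf))   # the purple midpoint: (127, 0, 127)
```

Unlike NTSC artifacting, there's no surprise here: the result is just the arithmetic mix of the two inputs, as described above.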
potatohead: Yeah, I agree, the fault lies in my TV adapter. I thought about just hanging resistor-capacitor integrators off the RGB lines to smear the colors like you said, but the problem is that it would smear them past the point of the next pixel if the next pixel is black. Right now it is difficult to read the text on the screen when my Tandy Color Computer 2 is hooked up, because the text is black on green, but there are also black columns between the green pixels. It's difficult for my eyes to differentiate between a pixel that is black because it is a column between green pixels, and one that is black because it is part of a black-on-green letter. But if I smeared the pixels with some capacitance, not only would it connect the black columns with green, it would also smear the green across the black pixels in the letters. Anything like an "I" or a "1" or a "!" would almost disappear because it'd get smoothed over. That's why I'd have to have an accurate pixel clock timed to the H-sync and sample-and-hold each pixel, so that when it comes to a pixel that is supposed to be black, it'd catch that. I think I could design such a circuit, but it'd be a pain in the butt.
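To see why the RC smearing fails on black letter strokes, here's a quick simulation; the first-order low-pass stands in for the RC integrator, and the alpha coefficient is an arbitrary assumption:

```python
# Rough simulation of an RC integrator on the green line, applied to a
# scanline of green text pixels with black columns between them.

def rc_filter(samples, alpha=0.5):
    """First-order low-pass: out[n] = out[n-1] + alpha*(in[n] - out[n-1])."""
    out, y = [], 0.0
    for s in samples:
        y += alpha * (s - y)
        out.append(y)
    return out

# 1 = lit green pixel, 0 = black. Single zeros are the columns between
# pixels; the long run of zeros is a black letter stroke like an "I".
scanline = [1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1]
smeared = rc_filter(scanline)
print([round(v, 2) for v in smeared])
```

The columns between lit pixels do get filled in, but the long black run (the letter stroke) never returns to black either, which is exactly the legibility problem described above.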
So is sorting out the color palette. I worked on that some today, hoping to come up with a solid 256-color set. I thought that, having mixed them in an orderly fashion, sorting them would be orderly as well. That's proving not to be the case.
The closest I got was taking 8 bits of color specification, 5 bits for hue and 3 for intensity, and only using intensity levels 0-5. With some shuffling around, this ended up building a 160-color palette that's not bad, but it's really heavy on the brighter, less saturated colors, and it wastes a lot of bits.
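Here's roughly how that packing enumerates, sketched in Python. One assumption to flag: I use 5 intensity levels per hue so the count lands on the 160 colors mentioned; the actual shuffling used may differ.

```python
# Sketch of the 8-bit packing: 5 bits of hue, 3 bits of intensity, with
# only a subset of intensity levels actually emitted.

HUE_BITS, INTENSITY_BITS = 5, 3
USED_INTENSITIES = range(5)          # 5 usable levels (an assumption)

def build_palette():
    """Enumerate every (hue, intensity) pair that the scheme admits."""
    palette = []
    for hue in range(1 << HUE_BITS):             # 32 hues
        for level in USED_INTENSITIES:
            palette.append((hue << INTENSITY_BITS) | level)
    return palette

pal = build_palette()
print(len(pal))        # 32 hues * 5 levels = 160 entries

# The "wasted bits" are the codes never emitted:
wasted = (1 << (HUE_BITS + INTENSITY_BITS)) - len(pal)
print(wasted)          # 96 of the 256 codes go unused
```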
I then tried building a smaller table of mixed colors that were better, hoping to just multiply to get through the different hues without having to deal with a 512-byte CLUT (two bytes for each unique color). Some hues are better represented than others... that's a mess too.
It's down to either some more thinking or matching colors, one by one into the large CLUT. Maybe doing the matching will reveal some other shortcuts.
From what I've experienced so far, having the CLUT in the HUB is a big speed drain. It seriously limits what else can be done, thus invoking more cogs to draw scanlines. I want just one COG to do the bitmap at 8 bits of color. This actually fits into a COG, even with the large CLUT, when talking just bits of information. However, the lack of byte-wide COG addressing more or less forces a redundant set of CLUT entries, and that's too big.
So, some bit shuffling has to occur in order to load the pixels properly for each waitvid, limiting resolution because of time. It's a set of tradeoffs right now between resolution, logic, etc....
potatohead said...
Color is interlaced, every other scanline, interlaced vertical display or not.
I think I know what "interlaced vertical display" means, but I don't know what you mean by interlaced color. Could you rephrase or expand a bit, please?
The color timing is exactly the same on every scanline, every time, with this code. It was done this way so that artifacting could be used to open up the number of colors and provide a jitter-free color display.
Interlacing the color (maybe a bad term) involves shifting the pixel timing by 1/2 pixel every other scanline. There may be more elements to it than that, too. I didn't go through and do it, due to RAM constraints. 160x96 with 8-bit color consumes half the Prop RAM.
The easiest way to see this is to run the Parallax reference TV driver and look at colored elements. The big, pink Parallax text box at the lower right is an easy example to see. Notice the edges of it flicker back and forth, but do not flicker up and down. That's interlaced color, and it's stable at resolutions above the ones in this thread.
On a larger TV, you will easily see this. On a smaller one, you might consider a lens to blow up the pixels. If you stop the graphics display, by changing the main repeat to a specific value, say 200, then the blue lines show this effect very well. They are thin, which makes the effect easy to see.
I notice that on my VGA to TV converter, my own experimental NTSC driver is displaying every medium-luminosity color as a colored pixel and a black pixel, but the order of the two (which one's first) is opposite on every line. I wonder if I'm inadvertently interlacing the color?
Actually, upon closer inspection it is a 2x2 square of pixels on the VGA monitor for every one TV pixel. The four VGA pixels in this 2x2 square are apparently dithered to get a dimmer color, which is totally bizarre considering VGA supports many more colors directly than NTSC. My converter is just doing goofy things with the TV input again.
While the high color buffer effectively consumes all available memory, I was pondering a fractal generation algorithm that uses the remaining COGs -- each individual COG has about 512 bytes, which is probably enough to hold a small program in assembly language. If each individual cog wrote to the memory buffer, using it to hold the state of the fractal, we could have a very nice demo that's not just a bunch of boxes and pipes. [noparse]:)[/noparse] I was pondering segmenting the screen into four quadrants, and assigning a cog to each. And then if we had a cog remap the colors, we could do color rotation of a plasma effect.
I'll let you guys know if I get anywhere with this.
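For what it's worth, the quadrant scheme sketches out like this in Python rather than PASM. The four "cogs" just run serially here, and the 160x96 resolution and the iteration cap are assumptions; no cog mechanics (triggers, waitvid) are modeled.

```python
# Four workers, each owning one quadrant of a shared 160x96 byte buffer,
# each writing Mandelbrot escape-time values into it.

WIDTH, HEIGHT, MAX_ITER = 160, 96, 32
screen = bytearray(WIDTH * HEIGHT)

def escape_time(cx, cy):
    """Iterations before |z| exceeds 2, capped at MAX_ITER (= in the set)."""
    z, c = complex(0, 0), complex(cx, cy)
    for i in range(MAX_ITER):
        z = z * z + c
        if abs(z) > 2.0:
            return i
    return MAX_ITER

def render_quadrant(qx, qy):
    """Render one quadrant; (qx, qy) in {0,1} selects which quarter."""
    for y in range(qy * HEIGHT // 2, (qy + 1) * HEIGHT // 2):
        for x in range(qx * WIDTH // 2, (qx + 1) * WIDTH // 2):
            cx = -2.0 + 3.0 * x / WIDTH      # map x to roughly [-2, 1]
            cy = -1.2 + 2.4 * y / HEIGHT     # map y to roughly [-1.2, 1.2]
            screen[y * WIDTH + x] = escape_time(cx, cy)

for qy in range(2):           # one "cog" per quadrant, serial in this sketch
    for qx in range(2):
        render_quadrant(qx, qy)
```

The escape-time byte would then be the CLUT index, which is where the color-rotation idea for plasma effects would plug in.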
Well, at the 160x96, you've got half the RAM left. Not a bad resolution for fractals, particularly if one can zoom them. Fractals don't need a double buffer display, as it's fun to watch them draw.
The 160x192 is 30K, but you are right! The COGs do have 2K apiece. If you loaded all their code into HUB RAM, got them started, and had them wait for a trigger, then the 160x192 screen could be initialized and cleared, leaving some memory for the master COG, running SPIN, to operate in. At that point, you've got 1.5K or so for the master program, and all the COGs running assembly code. I'll bet that works out nicely enough.
IMHO, there is totally enough RAM in the COGs to do fractal computation. I was kicking around cellular automata as another alternative. Got distracted with the character driver and video overlay efforts...
At least pixel plotting is really easy @ 1 byte / pixel!
Have some fun. I sure am. It's a total kick going back to the basics.
I was writing a prototype fractal algorithm in SPIN, until I realized that I couldn't easily use the screen buffer as a workspace for the fractal, because what's written to the screen buffer are color values -after- the CLUT translation.
So I could use a 'getpixel' routine, but that returns the CLUT color rather than the index value used to get the CLUT value.
Has anyone managed to move the CLUT into the TV driver? Alternatively, is there a suggestion on how I could do a reverse lookup from CLUT value to index value?
potatohead said...
You could just build another table.
Yeah, I thought of this, but was hoping to avoid that, because of memory constraints. That, and I tend to shy away from such 'inelegant' solutions. [noparse]:)[/noparse] I'd been looking at the assembler to see how to get a generic 6 or 8-bit color driver working, with a 2:2:2 weight, or a 2:2:3 weight as far as RGB. The CLUT would then translate the weights into Hydra colors.
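A sketch of that weighted-index idea, assuming the 2:2:3 split; the CLUT fill below is a dummy scaling to 0-255 RGB, not the Hydra's actual color encoding:

```python
# Pack small R, G, B fields into one pixel index, then translate each
# index through a CLUT to a display color.

R_BITS, G_BITS, B_BITS = 2, 2, 3          # one of the weightings mentioned

def pack(r, g, b):
    """Pack field values into a single 7-bit index, R in the high bits."""
    return (r << (G_BITS + B_BITS)) | (g << B_BITS) | b

def build_clut():
    """One entry per index; here just a placeholder scaling to 0..255."""
    clut = {}
    for r in range(1 << R_BITS):
        for g in range(1 << G_BITS):
            for b in range(1 << B_BITS):
                clut[pack(r, g, b)] = (r * 255 // ((1 << R_BITS) - 1),
                                       g * 255 // ((1 << G_BITS) - 1),
                                       b * 255 // ((1 << B_BITS) - 1))
    return clut

clut = build_clut()
print(len(clut))              # 2^(2+2+3) = 128 entries
```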
You could go look through the existing table for a matching value. Compared to the fractal calculation, this overhead would be very low, and would fall in line with memory constraints, taking far less than another table would.
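That table-scan amounts to something like this (hypothetical names, made-up CLUT contents):

```python
# Recover the index for a pixel by scanning the forward CLUT for the
# matching color value. O(len(clut)) per pixel, which is cheap next to
# the fractal math, and it costs no extra table memory.

def index_of(clut, color):
    """Return the first index whose CLUT entry matches the color."""
    for i, entry in enumerate(clut):
        if entry == color:
            return i
    raise ValueError("color not in CLUT")

clut = [0x07, 0x8A, 0x5C, 0xBB]       # made-up 4-entry CLUT
print(index_of(clut, 0x5C))           # 2
```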
I don't think that file is here anymore. It was an intermediate file, posted while sorting out a timing issue. The files needed to explore artifacted colors on the Propeller are in the first post.
I just got the Hydra book today and I'm reading about these techniques. It's a bit over my head on the first read, and I'll probably have to re-read it a few times before it sinks in. It's just amazing, the tricks that are done to get better resolution in games. It's also probably a reason to go digital.
Sample all the pixels, and build a palette file that way. That display is about half the possible colors, BTW. If one were to add the higher-saturation standard Propeller colors, another brighter set would be possible. The bars across the bottom are all the standard Propeller colors. Some of these lie outside the NTSC specification. I did the display simply to put all the available colors out there, just because. The ones up the side are a few that I somehow missed. The colors in the center square are "derived" colors, meaning they are not directly addressable like the standard Propeller colors are, but must be generated through specific timing and pixel combinations. (That's why a color lookup would be quite handy.)
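The sampling step might look like this; the pixel list here is a stand-in for the captured frame, since reading the actual image would need an imaging library:

```python
# Build a palette file by collecting the unique colors seen in a frame.

def build_palette_file(pixels):
    """Collect unique colors, in first-seen order, ready to write out."""
    seen, palette = set(), []
    for px in pixels:
        if px not in seen:
            seen.add(px)
            palette.append(px)
    return palette

# Stand-in for the captured frame's pixels:
frame = [(0, 0, 0), (255, 0, 255), (0, 0, 0), (128, 64, 200), (255, 0, 255)]
pal = build_palette_file(frame)
print(len(pal))    # 3 unique colors
```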
This really was a proof of concept for NTSC. Nearly all the Propeller RAM is consumed to make the display above! (30K or so) Two bytes are required to represent one full color pixel. With these timings, the resolution is 160x192 with all the colors. The display above is effectively 160x96, due to system ram constraints.
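Checking that arithmetic against the Propeller's 32K of hub RAM:

```python
# Buffer sizes implied above: two bytes per full-color pixel.
WIDTH, FULL_HEIGHT, HALF_HEIGHT, BYTES_PER_PIXEL = 160, 192, 96, 2
HUB_RAM = 32 * 1024                                # Propeller hub RAM

full = WIDTH * FULL_HEIGHT * BYTES_PER_PIXEL       # full 160x192 buffer
half = WIDTH * HALF_HEIGHT * BYTES_PER_PIXEL       # the 160x96 display shown

print(full, full > HUB_RAM)    # 61440 bytes: doesn't fit in hub RAM
print(half)                    # 30720 bytes: the "30K or so"
```

That's why a scanline renderer, which only ever buffers a line or two at a time, is the way to get the full 160x192 without holding the whole frame.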
What really is needed to make this useful is both a scan line renderer and a color lookup table so that the colors are available, but not required to be buffered full time.
Anyway, that color set is close to average. The thing is, what you see with a driver like this really depends on the settings of the display device. If you create a palette with the right range between colors, the desired set is only a tint-control adjustment away! That image does have its gamma adjusted a bit above what I've seen CRT displays return, but fairly close to what my HDTV ends up displaying. YMMV!
soshimo: Funny too. The tricks used to be the norm. That is part of the fun of the Propeller. It's got software video, meaning a lot of stuff is possible where it mostly wouldn't be on a design where there was a greater amount of hardware assist. I suspect we've not come close to seeing the scope of video tricks possible!
I used a capture card to obtain the color sample. These typically have good bandwidth and seem to tolerate out-of-spec signals far better than some TVs do. In general, the older the TV, the less appealing this image will look because of that!
I'm gonna think on it for a while.
One more dumb question: which program displays the big pink Parallax text box?
IMHO, that sounds like a timing issue, odd number of pixels, etc...
If they are shifted by half a pixel, then that's sort of what I was trying to describe.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Propeller Wiki: Share the coolness!
For some reason, I cannot locate this file: Real_Random_012
JB
http://propeller.wikispaces.com/file/view/08_25_2007_10_03_02.jpg
Post Edited (potatohead) : 1/6/2009 4:35:39 AM GMT