Full Color Tile Driver Thread - Page 2 — Parallax Forums

Full Color Tile Driver Thread



  • ericball Posts: 774
    edited 2010-11-15 06:19

    Translating from Propeller luma+hue to 24 bit RGB is easy, just start with the CCIR 601 YCrCb to RGB formulas. RGB to Propeller luma+hue is more problematic as there are only 102 colors and they aren't arranged nicely in the RGB color cube. It might be possible to try to do the remapping as the image is loaded into HUB RAM, but I don't think it would be possible to do it on the fly as part of the video driver.
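    The easy direction could be sketched like this (Python rather than Spin; the function name is mine, and the coefficients are the standard full-range BT.601 ones — a real driver would likely work in studio ranges instead):

```python
def ycbcr_to_rgb(y, cb, cr):
    """Convert one 8-bit Y'CbCr sample to 8-bit R'G'B' (CCIR 601 / BT.601).

    Uses the standard full-range coefficients; clamps results to 0..255.
    """
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    clamp = lambda v: max(0, min(255, round(v)))
    return clamp(r), clamp(g), clamp(b)
```

    Neutral chroma (Cb = Cr = 128) comes out grey, which is why the reverse mapping is the hard part: only 102 scattered Propeller colors have to cover the whole cube.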
  • Bill Henning Posts: 6,445
    edited 2010-11-15 06:20
    Interesting driver... I'll keep an eye on this thread :)
    potatohead wrote: »
    Ok, I've reached a first pass.

    Tiles are 4x8. I know that is a tiny tile, but the trade-off is very good screen positioning. Groups of tiles can be handled in SPIN.

    Resolutions are: 40, 80, 160, 256, 320. Right now, 320 pixels requires 96 MHz. That restriction can probably go away with some more Cluso out-of-order instruction mo-jo, like we did on Potatotext.

    Edit: BTW, if somebody has a fast board, the thing will probably do 512. If you try it, let me know the clock :)

    I'm thinking the way to go here is a tile file, where the binary values are just in the file, with up to 256 tiles possible. Any pixel may be any color within a tile. No restrictions, but for the number of tiles possible.

    This archive can be tinkered with. There is more to do, but 8K of full color tile data is possible with this one. Maybe try the SD card tricks with two of the menu tiles?? More to come, I'm sure, but this is enough to prototype with.

    I need some sexy tile data for screen shots.

    No screen SPIN routines are written yet. I've just written a few sample tiles, and put their values on the screen manually to verify the display driver is actually working as given.

    I would suggest taking a image, and encoding it in tiles. The tile format is simple. Starting from the base address of where tiles are stored, each tile is 8 longs, stored sequentially, up to 256 tiles.

    Tiles can be stacked on the screen like so:


    That block of tile numbers would equal a 12x24 pixel image on screen. To place the image, just put the tile numbers in the screen array where you want the image to appear, multiple places, if desired.

    Stored in RAM like so:

    Tile one longs
    Tile two longs

    Each long is 4 pixels on screen. The screen is the number of pixels / 4, with one byte indexing the tile for each screen cell. 320 pixels = 80 screen cells horizontal, 25 screen cells vertical.

    Vertical resolution is 200 pixels.
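    To make the layout concrete, here's a rough sketch of the addressing described above (Python, not Spin; the base address and tile numbers are made up). A 12x24 pixel image works out to a 3x3 block of tile numbers in the screen array:

```python
TILE_W, TILE_H = 4, 8          # pixels per tile (one long per tile row)
SCREEN_W, SCREEN_H = 320, 200  # pixel resolution
CELLS_W = SCREEN_W // TILE_W   # 80 screen cells across
CELLS_H = SCREEN_H // TILE_H   # 25 screen cells down

def tile_long_address(base, tile, row):
    """Hub address of one tile row: 8 sequential longs per tile from `base`."""
    return base + (tile * 8 + row) * 4   # 4 bytes per long

# One byte per screen cell indexes the tile to display there.
screen = bytearray(CELLS_W * CELLS_H)

def put_block(col, row, tiles_2d):
    """Place a rectangular block of tile numbers into the screen array."""
    for dy, trow in enumerate(tiles_2d):
        for dx, t in enumerate(trow):
            screen[(row + dy) * CELLS_W + (col + dx)] = t
```

    Popping the same block into several places in `screen` repeats the image with no extra tile data, which is the whole point of the indexed-tile scheme.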

    I'll try to build some tiles to illustrate where it is so far. I'm also seriously thinking about trying it the Parallax tile way, because it might be more flexible. Deffo want a bigger tile, or a larger number of tiles, too.

    This is just barely enough to prototype reading some tiles from SD to see what could happen.
  • potatohead Posts: 10,260
    edited 2010-11-15 07:15
    Thanks for looking Kye. I won't have a shot until later today. All I did was run your driver and look the really clean code over. Again, nice!

    Pretend the solution is not clean for a moment. Do you think an unrolled loop would do 320? Maybe just stuff 80 waitvids in there, along with the add required for the index!

    And the final product will only need to read from a HUB buffer, for a single scan line. Another COG will build from the "screen" tilemap, writing the product of that into the buffer, as the VGA COG is drawing it. What I do, is start the graphics building, right after the VGA cog enters the front porch. That gives the graphics COG a head start writing the buffer. It tends to run a bit more slowly at higher resolutions, just finishing as the VGA COG fetches the last visible waitvid frame.

    If any more speed is needed, multi-COG is needed, or the buffer needs to be a double buffer, which I'm considering anyway. A single buffer is lean, but causes a timing problem, if multiple COGs are all combining data for the buffer. Double makes that easier.

    Another thing that could be done is to adjust the porches, so that a different primary resolution is drawn, making something between 160 and 320, worst case, a factor, so that the highest possible at 80 MHz can be done, or we just do 96 or 100 MHz. I've xtals for both. It's possible to make a "remainder" waitvid frame too, so that non-factor resolutions work. If those things are done, the graphics COG gets a longer porch time for its run-up ahead of the VGA COG, and the display narrows by some small percentage. What I don't know here is whether or not digital monitors will sync to stuff like that. Analog is no problem. I've seen other PC applications do tricks like that, and TVs don't care what you do after the sync, happy to display a mess, if you code one. :) My VGA is an analog one, BTW, though I could test finished code on various digital ones at work.

    The end product will be a tile driver. It's by nature double buffered, in that a tile can be written while off screen. The "screen" memory, unlike a bitmap, is simply an index of tile numbers, where each tile can be positioned anywhere on the screen grid.

    A small stack of tiles acts very much like a bitmap too. That's actually almost how the Parallax drivers work, the difference being "the screen" is a list of tile addresses, but the same concept applies.

    Screen manipulation can happen two ways. One is to build the tiles off screen, then "pop" them onto the screen by writing a few values to the "screen" array. They will appear on the next scan of the monitor. The other is to write some tile values, so the tiles are seen, then operate as one would with a bitmap driver; that can be seen while drawing, and is basically single buffered.

    Finally, since the tiles are indexed, common graphic tiles can be used in multiple places on the screen. Unlike the Parallax driver, which has color redirection, this one works in absolute color, so the common graphics would need to have the same color, not true of the Parallax tile drivers, because they also have a colors array to address that.

    This particular driver allows 8K of unique tile data. That's going to get expanded on the next pass. This was a proto, "is it worth it?" pass, basically done at Dr_A's request, because I had code that was close.

    For what it's worth, the Parallax driver can run single buffered, and use a small pool of tiles off screen to render things, popping that onto the screen, exchanging it for another small region, in a sense double buffering a fractional screen, trading flicker for tearing in the image during changes. Doing that round-robin style for a whole screen is something I've always called "ripple draw", because the tearing can be seen, as can image change errors over time, but it doesn't flicker much, if any at all. For some things, like perhaps an aircraft flight simulation, or an instrument panel, it can work well. Partially buffered is, I guess, the right way to describe doing that with tiles.

    I'll have to code this as a demo one day, also doing common tiles with different colors and using color redirection to "pop" things onto the screen with no flicker. Tiles are very useful that way, and were used in older computers that had limited fill rates. Color can be used to hide an image already drawn. When a visible color is assigned, the entire thing will be displayed next frame. On the Parallax driver, progress bars, check boxes and other things could be on the screen, shared, and just made visible or not, kind of like those stereo displays with the elements just needing to light up, depending on state, using very little HUB RAM, and no requirement for a double buffer.

    Lots of great graphics, Propeller relevant, mo-jo is to be had by looking at the very late 70's through late 80's. That's when graphics really started to take off, and there were a lot of trade-offs in speed, and overall screen color / resolution capability. The Apple ][ is where I first saw "ripple draw" being used to manage a whole frame. It was possible to draw a smaller region quickly enough to avoid major flicker, and so that was done, dividing the frame into a few regions. Effective frame rate for the entire image to change was, like 15Hz, but there was no flicker, and detail regions could change much quicker than that. Apples had no color redirection though. Saw that used on many other computers, often to great effect. Best thing since sliced bread, if your fill rate is low, IMHO.

    Finally, there is dynamic drawing. Baggers has many examples posted here. With dynamic drawing, the screen does not exist as a static entity, like a bitmap. It's just created on the fly, as the beam is drawn. Best example of this is the old Atari VCS, which had NO frame buffer, and only 128 bytes of system RAM, yet was capable of drawing "Space Invaders" nicely. In that scenario, one draws only what is needed just in front of the monitor beam, doing it again, each frame.

    Props are well suited to that, and to single buffer tricks, because they are actually quite fast compared to their ~320 - 640 pixel resolution. (I know they go higher, but those two are the ones where the chip shines, IMHO) Props also can do stuff in parallel, which helps considerably. For example, on a bitmap, there could be multiple regions, where each is managed by a COG, contributing to one single buffer display, no flicker. That's difficult to do anyway, but way more difficult on an interrupt-based CPU. Props just make it a management problem. Sweet!

    Anyway, in the end, the simple bitmap is the hardest one to deal with. It's very demanding of RAM, and requires the most operations to fill, but it's the easiest to code for. On a Prop, because of the HUB RAM, other display tricks are often needed to get a significant display. Honestly, that's one of the big attractions of the Prop for me. Its software-defined video means pretty much all of those things are on the table, and that's fun for me, so here I am.

    Kye wrote: »
    So, I looked back at my calculations and it's actually very possible to do this.

    By stretching the pixels across the screen by setting the vscl register, you will have plenty of time to get the next four pixels.

    So, I think 160x120 at 1 byte per pixel will be very much doable. The only problem I see is that you won't be able to double buffer...

    So, now the real question is: Would this driver be usable if you can't do that? 160x120 = 19200 pixels... That's a lot of elements to clear and change at 60 FPS. Changing them any slower will result in screen crawl.

    ... So, unless the resolution gets even smaller the screen will never be able to be updated nicely.
  • potatohead Posts: 10,260
    edited 2010-11-15 07:21
    The second line is a combination of hue and intensity. Only one byte is used for color definition, and it has an intensity bit-field and a hue bit-field.

    Yes, a third axis is needed, however the Propeller saturation is fixed at two levels. There is the faded one, common to most of the colors, and the overly intense one, common to a few of them. The color wheel idea was just to communicate how hue ends up on the screen. Sorry about that.
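    A sketch of how one byte could split into those two bit-fields (Python; I'm assuming the common %HHHH_MLLL layout here - high nibble hue/phase, bit 3 color on/off, low 3 bits intensity - and this ignores the saturation quirks at the luma extremes):

```python
def decode_tv_color(c):
    """Split a Propeller TV color byte into (hue, color_on, luma).

    Assumed layout %HHHH_MLLL: high nibble = hue phase,
    bit 3 = chroma (color) enable, low 3 bits = luma/intensity.
    The greys ($02..$07) are the codes with the chroma bit clear.
    """
    hue      = (c >> 4) & 0xF
    color_on = bool(c & 0x08)
    luma     = c & 0x07
    return hue, color_on, luma
```

    Under that assumption, $07 decodes as the brightest grey and $9f as hue 9 at full intensity, which matches the two DAT lines quoted below.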
    Dr_Acula wrote: »
    I looked up NTSC and it makes more sense now.

    Ok, the color wheel is great for colors but it doesn't show shades of gray, nor shades of gray with slight color added (gray with a tinge of blue). I think you need three axes to describe all the colors? (hue/saturation/lightness or red/green/blue).

    So "Lower order bits are intensity, higher order bits are hue" - where does lightness come in?

    So say we take the first two lines of the data

    byte $02, $03, $04, $05, $06, $07, $07, $07 'Six intensities
    byte $09, $0a, $0b, $0c, $0d, $0e, $88, $9f

    Is the first line the shades of gray?

    And is the second line a combination of intensity and color, or do you pass these as two separate bytes?

    I'm just thinking if we are going to preprocess a bitmap, we may as well process it so the cog does the absolute minimum. E.g. if it is easier to pass a long value rather than a byte, then pre-combine pixels into a long.
  • ericball Posts: 774
    edited 2010-11-15 07:49
    For a first pass RGB lookup table I took the screenshot potatohead made and generated the attached file for the 134 colors (6 grey + 8 * 16 colors).
  • jazzed Posts: 11,803
    edited 2010-11-15 07:57
    Ok. Silly question time.

    How do we add text to the screen?

    We can't have 80+ tiles just for text.
    There must be a way to switch modes (per-tile?) for low-resolution text.

    @ericball, I've been looking for that table. Thanks.

    It does map to the CLUT, right?
    And the CLUT is used in this, right?

  • ericball Posts: 774
    edited 2010-11-15 08:28
    jazzed wrote: »
    @ericball, I've been looking for that table. Thanks. It does map to the clut right?
    It's a simple list of color number to 8R 8G 8B value translations. The RGB values are not authoritative, but should be "close enough" if someone wanted to try to work out how to perform the reverse translation.

    Note: due to how the Propeller generates color, colors $x8 and $xF are the same hue and luma as $xB and $xC (respectively) but with triple the saturation.
  • potatohead Posts: 10,260
    edited 2010-11-15 08:31
    Right now, on this one, it's tiles = text.

    I've not had good luck mode-switching waitvids mid-display. The decisions required in the video COG consume time, and will on occasion miss the waitvid data latch window, resulting in extra pixels, or snow. That said, modes per tile would be an excellent use for the unused bits present, if tile definitions were to move to a word per tile. Maybe that's worth another play. We know more about the waitvid now compared to the last time I personally tried that.

    Coupla ways this could go:

    Add a COG to overlay a text screen, right on top of the tile screen, crunching bits to fill the buffer. (maybe)

    Multi-mode tiles. Divide the screen into vertical regions, where a given region could be text tiles (2 or 4 color) or graphics tiles (all colors). (possible)

    Add to the number of possible tiles and just draw text, or define enough text to suit the display. (possible, and easiest) Probably costs 4-6K though.

    One last method would be to just fire off a monochrome text COG for TV, and output to the same pins. Works for text on black, but not text on white. Same for VGA, only that could be color. (maybe) Linus showed how to really sync COGs.

    Other??? Any and all comments welcome here.

    That's one of the "is it worth it?" task questions! :) Everything costs something.
  • evanh Posts: 15,406
    edited 2010-11-15 09:07
    potatohead wrote: »
    What's the lowest sweep frequency most VGA monitors will display?
    Line rate of CRTs bottomed out at 28 kHz. It was frustrating trying to find monitors that would be happy with the 18-24 kHz range. And even specialist scan-converter units were never spec'd for it either. It was always boring old video-to-VGA or VGA(+)-to-video. :(

    As far as I know that hasn't improved with modern LCDs, even though they all have fancy scan-converters built right in that, theoretically, wouldn't have any problem at all handling pretty much any lower frequency.
    And can't the pixels be stretched to fit? That's something I've not had a chance to learn yet. On a TV, one just makes a longer pixel, and have it be a factor of the optimal number of pixels.

    Can't that happen on VGA? So then, maybe it's good for 300 pixels, or so, right?
    Yep, just the same for VGA signaling. Tweak the pixel clock to suit.
  • Kye Posts: 2,200
    edited 2010-11-15 09:42
    @potatohead - Hmm, a double-cog solution seems awkward. It's doable, but such a driver will instantly not be easy to use and ... requires two cogs.

    I much prefer the bitmap idea, but the memory requirement is too much.

    Maybe there is another way? I can code the bitmap driver in a few hours... not like I have a few hours in the coming days but I can post a 160x120 6 bit RGB bitmap driver in a couple of weeks if you would like.

    But, without some sort of trick to fix the buffering the image will look bad when being updated.

    But, on the bright side: since each pixel is a byte, it should be really easy to fill the screen quickly.

    ... You know more about this than me, is there a way in which a single bitmap frame buffer can be used to hold the screen... but be updated nicely? We have vsync to use still, and memory access should be fast.

    Having a scanline buffer means that the driver will not be user friendly.
  • jazzed Posts: 11,803
    edited 2010-11-15 10:55
    potatohead wrote: »
    Add a COG to overlay a text screen, right on top of the tile screen, crunching bits to fill the buffer. (maybe)
    While not optimal, I think this is a fine short term solution. Do something else later.
  • potatohead Posts: 10,260
    edited 2010-11-15 12:19

    Yeah, I hear you on the usability. Call it a built-in conflict. If the video is really easy, it doesn't do as much :)

    A full color bitmap would be good. Saves a step in making a scan-line VGA COG. If you feel inclined, I'll put that effort to use for sure.

    VBLANK is kind of good for some things, not so good for others. If an entire draw-erase cycle can fit in the VBLANK, then it's good. If not, then it doesn't help much without either a small buffer, or dynamic drawing techniques (sprites, display lists, etc...), or tiles and such that change more or less instantly.

    If we want more than 4 colors, it's just going to require some cruft, because the RAM isn't there.

    One upside to a multi-COG technique is being able to very quickly build a graphics driver, without also having to manage a signal driver. Doing both is hard. Doing one or the other is considerably easier. Also, that does open the door for dynamically loading video kernels. So, have the prop do one thing well, then another thing well, instead of packing in lots of things, all done not so well. Ease of use goes down, capability goes up! It all costs something in Propeller land.

    The only bitmap techniques I know of that help with buffering are:

    1. Use XOR drawing, so that most of the screen pixels are visible at any one time.

    2. Draw portions of the screen during the blank. (limited by fill rate)

    3. Sort the drawing of things, largest to smallest, drawing larger things at 15Hz or so, smaller things faster. This can limit flicker, and manage what happens in the blank. Takes a lot of logic though. One has to build a draw manager, in addition to drawing, and a driver. You've really gotta want that, or it's better to surrender resolution, and or dynamically draw things, IMHO.

    4. Software tiles. Use a small amount of off screen RAM to queue things to be drawn, draw them off screen, then blast them to the screen. Effective if there are smaller regions to be drawn, or the drawing can be spread out over time.

    In all cases, the idea is to keep the number of lit pixels at max, never doing a full on erase that could exceed the blanking time. That flickers badly. Also keeping the changes to less than half the scan frame rate is good, particularly when there is no sync with the video device. 20Hz max for screen updates would be a good bitmap target, if the draw tasks exceed the blank. That way, draws can be spread out over three blanks, rippling the screen some, but not flickering, and there I've come around full circle to ripple draw again.

    If those won't work, then a bitmap is not indicated, and tiles, sprites, etc... are, in my limited experience doing and watching these things get done. IMHO, tiles arranged as bitmaps have a lot of advantages, which is why the Parallax driver operates as it does.
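    Technique 1 above can be sketched in a few lines (Python; framebuffer size from Kye's 160x120 idea, color value arbitrary). The point of XOR drawing is that the erase is just a second draw of the same thing, so you never pay for a full-screen clear:

```python
W, H = 160, 120
fb = bytearray(W * H)   # 1 byte per pixel, as in the proposed bitmap

def xor_rect(x, y, w, h, color):
    """XOR a solid rectangle into the framebuffer.

    XOR-ing the same rect twice restores the background exactly,
    so a moving object needs no separate erase pass.
    """
    for row in range(y, y + h):
        for col in range(x, x + w):
            fb[row * W + col] ^= color

xor_rect(10, 10, 8, 8, 0x2A)   # draw
xor_rect(10, 10, 8, 8, 0x2A)   # second pass erases it
```

    The trade-off is that overlapping XOR shapes produce color artifacts where they intersect, which is why it suits cursors and rubber-band outlines more than full scenes.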
    Kye wrote: »
    @potatohead - Hmm, a double-cog solution seems awkward. It's doable, but such a driver will instantly not be easy to use and ... requires two cogs.

    I much prefer the bitmap idea, but the memory requirement is too much.

    Maybe there is another way? I can code the bitmap driver in a few hours... not like I have a few hours in the coming days but I can post a 160x120 6 bit RGB bitmap driver in a couple of weeks if you would like.

    But, without some sort of trick to fix the buffering the image will look bad when being updated.

    But, on the bright side: since each pixel is a byte, it should be really easy to fill the screen quickly.

    ... You know more about this than me, is there a way in which a single bitmap frame buffer can be used to hold the screen... but be updated nicely? We have vsync to use still, and memory access should be fast.

    Having a scanline buffer means that the driver will not be user friendly.
  • Dr_Acula Posts: 5,484
    edited 2010-11-15 15:15
    For this particular thread/code, small full color tiles are much better than big tiles with fewer colors. If 90% of the screen is black or gray or white, the rest easily fits into hub RAM without clever buffering techniques.

    Given all the external ram solutions, including a very cheap/cheerful/slow spi sram with catalina, can we assume you have at least 30k of hub free? I think it will greatly simplify things to not have to worry about buffering on the fly nor bringing in data from elsewhere between frames. Or as Cluso says, devote an entire propeller just to this and do other processing on another propeller. Then you have all the hub free.

    Re text, I'd be inclined to completely reload a text based binary from an sd card. Click on a picture with a big W for Word Processing, and then up comes a text based word processor.

    Having read the Propeller TV LUT file, I am now completely confused about NTSC and colors!

    I like the idea of a 3d cube with red/green/blue on the axes. From a simplistic point of view, I suspect one would want to sample points evenly in that cube. Doing that for a VGA driver with 2 bits per color gives the attached palette.

    Top left corner is black, bottom right corner is white, and there are two grays, row 2, column 6 and row 3 column 11.

    But looking at the TV, there are more gray values. But, thinking simplistically about the 3d cube, this ought to come at the expense of other color values (even allowing for 96 vs 64 colors).

    I found this formula but I last studied matrices a long time ago

    Then there is another way to look at it - you have TV values 0 to 255, and some map to colors and some don't, but you determine where those colors are in the RGB space?

    With some of the posts earlier, how is that being done? Are we talking a photo of a TV screen, or is there a more definitive way to determine the colors?

    For example, in the TV table the first line is
    color R G B
    2 5 6 8

    But I would have thought the RGB values would be 0,0,0.

    Or at least, if you are building a table that you then go and map a RGB value to, you at least want to map it to the correct value. Were those values determined by taking a photo of a screen, then putting it into paintshop and seeing what the RGB value was?

    Sorry to get hopelessly confused about this, but the simple task of taking a little bitmap picture and converting it into the TV byte values for a propeller is looking quite complex!

    And just to throw another spanner in the works, I see on that other thread comments about using the 4th pin (the audio pin?) as part of the solution. So is this part of the 'standard' design?

    Advice here would be most appreciated.

    (Maybe it is as simple as taking a RGB value and searching that TV LUT table for the closest match?) Addit - no it can't be that, because with an RGB value of 0,0,0, would you choose a match with 2 (which is 5,6,8) or with 137 (5,2,8).
  • Cluso99 Posts: 18,069
    edited 2010-11-15 15:20
    potatohead: Keep going. In the end, it does not matter how much of the prop is used (cogs & hub) for this as long as the actual app doing the updating can also live in the prop.

    If this all works well, then I see the prop as a cheap solution to the video section, and just as I have a RamBlade for emulation, I could see a VideoBlade for the video. Another prop would be fine for the remainder of the I/O and use a high speed link between props.
  • potatohead Posts: 10,260
    edited 2010-11-15 21:02
    I think I'm going to move to a word per tile address. The current byte-only tile definition isn't optimal.

    After thinking text through, it's really only about 6K to get both the additional tile address space, and 96 characters, or so. Another choice people would have is to simply make a little bitmap, and draw text in there from SPIN.

    The only real, for sure, cost is doubling the screen array. That's 1.5K, and in return, the max possible tiles. Tile data goes up from 8K max, to the limits of the HUB. No brainer.

    This is really a graphics driver. If the BLOB idea works out, options can be done. One could load the graphics driver, do icons, load a 4 color text / tile driver, still do graphics, and text, or a bitmap, etc... IMHO, prototyping that is compelling, because it can see use in basically all the Prop dev environments. Fetching video and starting it from SD probably can happen on the order of a second or two. That's quick!

    Coley and Baggers did PropGFX, using a Prop as a slave graphics device. That's actually an impressive device, but it comes with a lot of dependencies, like comms, API, etc... The idea of little, useful binary pieces appeals to me, largely because I can write them, and it's very Prop like. It takes a lot to just do a PropGFX, and that means limited use overall. On the other hand, little, useful pieces can see a lot more use, and on a single prop, maybe from any language that can read from storage and COGNEW. That's likely the more potent path.

    This same TV COG can drive other things, like a bitmap, etc... The Potatotext TV COG is wired to the graphics cog in too complex of a way. I think I'm also going to just leave that one as it is, but for maybe making it binary loadable, just because.

    So then, IMHO, the right thing to do here is make a fast, clean, hi-color graphics COG that can have text on it, but isn't specifically optimized for that purpose, then do some other drivers, modifying the TV COG slightly to support each one. Only takes a few options to then permit most things a Propeller can do!

    I really want to see the SD card, external RAM program, binary object idea play out, and if some pieces get made, others here will probably do that. Of all the things I've seen video wise, doing this is the most portable, and if it's somewhat modular, maybe others can jump in, and we get a nice library. Toward that end, I'll get the TV COG cleaned up, so that graphics / text / tile / sprite COG BLOBS can be written over time.

    Also a VGA cog needs to get done. Maybe the graphics COGs can just work with it.
  • potatohead Posts: 10,260
    edited 2010-11-15 21:10
    I'm confused about the color problem. Can't we just take some good captures, blur those, sample those, and make an RGB-to-TV-byte table? From there, a simple closest-RGB match works fine, right? So, it's just one table, no real math, and only a matter of sampling all the possible TV colors.

    Am I missing something?

    On the color computer, I did 256 artifact colors, and another enthusiast arrived at that solution. All they did was take the capture, blur it (that computer had noisier video than the prop does), index the byte values to the RGB, and make a palette for their graphics program. At that point, a program can be written and graphics manipulation is possible, arriving at the right byte value rather easily.
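    In other words, something like this (Python; the two-entry table is purely for illustration, with the (5, 6, 8) entry borrowed from the LUT's first line quoted earlier):

```python
def nearest_tv_byte(rgb, lut):
    """Pick the TV color byte whose LUT entry is closest to `rgb`.

    `lut` maps tv_byte -> (r, g, b). Plain squared Euclidean distance;
    no real math beyond one table scan per pixel.
    """
    r, g, b = rgb
    return min(lut, key=lambda k: (lut[k][0] - r) ** 2
                                + (lut[k][1] - g) ** 2
                                + (lut[k][2] - b) ** 2)

# Hypothetical two-entry table: color $02 measured near-black, $07 near-white.
sample_lut = {0x02: (5, 6, 8), 0x07: (250, 250, 250)}
```

    With a full 102-entry (or 134-entry) table from ericball's file, the same scan resolves Dr_Acula's 0,0,0 question automatically: whichever entry measures closest wins, near-miss values and all.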
  • jazzed Posts: 11,803
    edited 2010-11-15 21:23
    I don't have 6KB to spare ... ever.

    Guess I'll diddle with the Text COG when I have a chance.
  • potatohead Posts: 10,260
    edited 2010-11-15 21:31
    It's going to cost a fair amount of that to do text anyway. It all costs something. A text screen is 1K, and a COG needs to be written and loaded. That technically doesn't cost, once it's in the COG. Then there is the font. That's 2K more, or so...

    Let's just say that's three. If you want colors on it, that's another K, so now it's 4, and a COG, and a very high degree of added complexity.

    Basically, you don't have room to do text and tiles at that point, in a full color environment with that constraint in place. The difference in tile address space is less than 2K, with usability way up in terms of the tiles and graphics options, all of which are not doable right now, where text is --and doable very well, in a lot of different ways.

    (which is a big part of why I'm going to take the next step by maxing out the tiles. I thought that part through already, not seeing the return)

    If you want to do that, I encourage you to do so! The existing graphics cog can be modded to do it. One problem to solve is getting this done in one scan line's worth of time:

    convert font nibble %0101 to a long for the buffer, $00FF00FF, then mask it against a color, then write it to the buffer. That's after looking up the text value and indexing to the font scan-line entry to get the byte, of which both nibbles need that conversion from 2-color to full color. IMHO, that's a tough call to get done with one COG. It's a lot of HUB fetches and COG lookups.
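    For what it's worth, the expansion step itself is simple in isolation. A sketch (Python, MSB-first pixel order assumed; a real COG would do this with shifts or a lookup table, and actual waitvid byte ordering may differ):

```python
def nibble_to_mask(n):
    """Expand a 4-bit font slice into a 4-pixel mask long.

    Each font bit becomes one $FF/$00 pixel byte, MSB first,
    e.g. %0101 -> $00FF00FF as described above.
    """
    mask = 0
    for i in range(4):
        if n & (8 >> i):
            mask |= 0xFF << (24 - 8 * i)
    return mask

def color_pixels(nib, color):
    """Mask the expanded nibble against a color byte replicated into
    all four pixel lanes - the per-pixel colorize step."""
    return nibble_to_mask(nib) & (color * 0x01010101)
```

    The expensive part isn't this math; it's doing it twice per font byte, per cell, per scan line, between HUB windows.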

    It may be that mode switching could work. I can't take that path right now and be productive with this project, because I want to see the VGA part of it go as well. IMHO, that's a significant diversion, and there still is the RAM cost, and the COG cost, because that effort would also require a COG.
  • Dr_Acula Posts: 5,484
    edited 2010-11-15 21:37
    The challenge is to produce a little tile, say 12x8, with all 96 colors in it. I am sure we can do color matching later, and text etc. will be easy, as we can do captures of existing fonts, including fancy ones. With Catalina on external RAM, or with a dual-prop solution, there will always be 30k of hub RAM free. But do you think the code is possible for a little 'all color' tile?
  • Ariba Posts: 2,685
    edited 2010-11-15 21:45
    In this thread you'll find my 128x96 bitmap VGA driver with 1 byte per pixel.
    The code is very small, so maybe it is a good start for making a tile driver out of it.

  • potatohead Posts: 10,260
    edited 2010-11-15 21:49
    Thanks Andy. I'll be looking. Eager to finally do some VGA.

    Dr_A, If you just want all the colors, I can do that. Are you looking for swatches, to apply the techniques discussed? If so, no worries. Honestly, I'll just mash up some longs in EXCEL, cut, paste, and do a capture.

    Not hard to put all the colors up there. That's what this driver is for!

    Wait a minute... You want all the colors packed in, just one pixel per color?? Well, that's actually considerably easier than swatches. Only takes a coupla tiles, but on a TV, they are going to smear together, just FYI. A couple of tiles with all colors can just be quickly typed into a DAT statement, but they are hard to see.

    Do you care, if they are big pixels? Could run the driver at 80 pixels, build the tiles, and capture that. Let me know which.

    ...or are you asking for 96 pixel tiles? That one is harder, because it doesn't match up to a power of two, complicating the COG, limiting resolution.
  • Dr_Acula Posts: 5,484
    edited 2010-11-15 22:01
    Yes, random colors in a little tile is the challenge.

    What I'd be very interested in is the max resolution you can go to. E.g., you put a yellow pixel next to a gray pixel. Do they bleed together?

    As you shrink the tiles down, yes, they probably will bleed together. That sets the minimum size you can go to, since pictures won't look so good if the colors bleed. It would probably depend on the TV. E.g., on a big TV maybe you have what you could call individual pixels (the concept is a bit vague on a TV, but OK, consider just lines: 525 lines, so one line is equivalent to one pixel), but maybe on a 7" LCD TV screen you make the pixels bigger. Same RAM requirements for each. Maybe you go two pixels per color. I guess that might take some testing. Put a yellow dot on a gray background and see if it looks like a yellow dot?

    With VGA, individual pixels on the text drivers look crisp and clear, so the Propeller is capable of driving a clear signal (so long as another VGA screen is not too close, or any radio transmitters!).

    I just ran Ariba's file. Very nice! Random little squares filling the screen with random colors. I didn't know this was possible!

    So - would it be possible to make those little squares smaller and leave most of the screen blank? Even to the point of a tiny 8x8 tile, right in the middle of the screen with 64 colors in it?
  • potatoheadpotatohead Posts: 10,260
    edited 2010-11-15 22:44
    Ok, I get where you are headed now. Did your little TV sync?

    re: VGA: Yes. I think it's possible to get smaller pixels, and not fill the screen, and that's what I'm going to do with one of those VGA drivers.

    (this will be somewhat long, but it's good to know TV info --since I've done a lot with TV's, this is a topic close to my working experience)

    Yes, VGA will just do pixels, right next to one another with almost no issues.

    TV's don't generally do that at higher resolutions.

    The driver runs at 40, 80, 160, 256, 320 pixels. At the 80 pixel size, any color may be shown next to any other color, and it's going to work well on all but the really crappy TV's. The TV signal is encoded in a way that has fairly low color bandwidth. On most images, there is a smooth blending of colors, and this all works out very well. Computer graphics are not that way, because of sharp transitions.

    My general experience with most newer TV's is they will generally render unique colors at 160 pixels and below. Above that, the signal doesn't have enough bandwidth to clearly differentiate colors. That said, the newer the TV, the better it generally does.

    I like to think of the color wheel when mentally gauging what might be possible. The greater the angle on the color wheel, the sharper the color transition is, the more blending and smearing of the color pixels there will be.

    So, at 320 pixels, putting a red pixel, right next to a blue one will show some degrading of the pixels. A very nice TV will still resolve most of that, but it won't be perfect like a VGA is. Though, on your image icons, that blending actually helps smooth them some, giving them a higher resolution look than they actually are. Something to think about.

    Also, the transitions from black to white, and white to black can produce bits of color as well. It's the same limit. TV's basically operate at the 160 pixel resolution, in terms of this driver. Less will look perfect, 160 solid, greater has trade-offs.

    If an S-video connection is used, this improves significantly.

    Also, whether or not the signal is interlaced helps define the pixels better. This driver will do a full interlaced signal, and it's sharper than most because of that. The Parallax TV driver can also be made to do that, with similar results. Most other TV drivers are non-interlaced, and they show artifacting on high contrast color and intensity areas.

    Eric Ball posted up the S-video option, and Bill has had it running on his Propcade. I want to do one on the PPDB, just to see. If it's like my other devices, and I've no reason to believe it isn't, then the difference will be notable.

    What can be done to illustrate this is to draw some color patterns, then run the driver at its various resolutions, demonstrating what is optimal.

    In general, with TV graphics, it's best to limit high resolution color transitions, and to use luma (intensity) to show detail. When this is done, color clash can be kept to a minimum, while still getting good use out of all the colors.

    For this driver, the 256 pixel setting is probably the best overall balance. It's still kind of a square pixel, instead of a long one horizontally, and it's not too far over the sweet spot. 320 pixels can have the most detail, but one needs to be careful about the colors in use.

    Some sample tiles are easy to create.

    Also, no special code is required. Truth is, the tile driver operates at 320 pixels. So then, all that is needed is to just stack up some tiles with the patterns, and then vary the driver resolution.

    On the VGA one, assuming we can get 320 pixels or so for the tiles, the same would be true, only the VGA would always show the intended color, as it has more than enough bandwidth to handle low pixel rates.

    Instead of trying to pack it into one tile, just think pixels, then stack up the tiles to equal that many pixels. In this case, for the 12x8 pixel test, it's only necessary to just build up three sequential tiles.
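    Stacking tiles like that can be sketched in code. Here's a minimal Python sketch, assuming the format described earlier (each 4x8 tile is 8 sequential longs, one long per row of 4 one-byte pixels) and assuming pixel 0 sits in the low byte of each long; the actual byte order would need to be confirmed against the driver source.

    ```python
    # Sketch: pack a 12x8 pixel image into three sequential 4x8 tiles.
    # Each tile is 8 longs (one per row); each long holds 4 one-byte pixels.
    # Byte order within the long (pixel 0 in the low byte) is an assumption.

    def pack_tiles(pixels, width=12, height=8, tile_w=4):
        """pixels: list of `height` rows, each `width` color bytes (0-255)."""
        tiles = []
        for tx in range(width // tile_w):          # three tiles across
            tile = []
            for y in range(height):
                row = pixels[y][tx * tile_w:(tx + 1) * tile_w]
                # assumed little-endian packing: pixel 0 in the low byte
                long_val = row[0] | (row[1] << 8) | (row[2] << 16) | (row[3] << 24)
                tile.append(long_val)
            tiles.append(tile)
        return tiles

    # 12x8 test image with a distinct value per pixel
    img = [[(y * 12 + x) for x in range(12)] for y in range(8)]
    tiles = pack_tiles(img)
    print(len(tiles), len(tiles[0]))   # 3 tiles, 8 longs each
    ```

    The resulting longs could then be dropped straight into a DAT block, three tile numbers wide on screen.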

    The "max" resolution for TV's varies. The numbers I gave are good rules of thumb. TV's really don't have pixels, so I judge it by how much of the pixel I can see. If most of the pixel is rendered as intended, with only a small amount of unintended visual information present, that's a "pixel", and it counts toward "resolution". As that ratio changes, then it's exceeding the "resolution", and adjustments to the size of the image, or its color / luma composition, might be warranted.

    A great example of this is your TV newscast. They operate with very well chosen color sets, so that the transitions are modest, making the graphics appear sharp, even though resolution is somewhat limited. You can see this in sports graphics often as well. Say there is a blue background. They might sprinkle in little bluish yellow, or white, or light blue pixels in there, or even text, and it will look sharp, where if they had put red text in there, it wouldn't, because of the color limits.

    I'll do some captures that show off some of this stuff, and the signal options possible with this TV COG. It can run color, luma only (mono), interlaced, and not interlaced, and each has its uses for graphics and overall screen quality trade-offs.

    So the "max" resolution then, with a TV, is the native pixel resolution of the driver, minus the artifacts that occur with a composite signal. That's why S-video looks better. The signals are separated, kind of like VGA, but not as separate as VGA is.

    If we were to do a component TV COG, then it would look about like a VGA does, because the signals would have a lot more bandwidth. That takes three COGs on a Prop though, which is why it's never been done. S-video can be done with one COG, because that's built into the Prop capability.

    I think Bill has the only S-video board.

    One other thing. If you want monochrome images, the composite TV output can be fed into the luma input of a TV with an S-video connection. This results in a very nice, up to 640 pixel, monochrome display! I often run my TV graphics this way when I want detail. I'll capture that too.

    The one thing I've not captured is S-video running right off the prop. My gear is composite (RCA) only, though my displays are S-video capable. None of my boards are, but it's easy to wire that up.

    I think the Audio resistor, currently unused by all but Eric Ball's special sprite driver, is free for this. When the Prop is configured for S-video, color is output on that pin. I think any board that has all four resistors can be modded for S-video output because of that, with only a code change in the driver to turn it on. That should go on the "to-do" list for the TV COG. I've always skipped it, because most of the Propeller displays are either at a low resolution, or are color limited, which largely avoids this whole discussion. Potatotext is the only thing I wrote, until now, that pushed it at all, and its display was fine with the RCA, so I never built the S-video capable interface.

    (done now, sorry... that topic is never short)

    Re: VGA & tiny tiles. Probably. I think 320 pixels can be done. Going above that might require more than one COG. Don't know until I tear into a driver.

    320 pixels is going to look nice, but with clearly defined pixels on VGA. If the color clashes are not bad, TV will actually smooth that, looking better, IMHO. Depends on whether one needs to see the shape of the pixels, or "the image".
  • Dr_AculaDr_Acula Posts: 5,484
    edited 2010-11-16 00:55
    I haven't tested my TV yet but will do this.

    I spent a few hours today soldering up a board for you and loading up an sd card with binary programs. It boots into kyedos, and you can run other binaries from there - eg pacman is on there in "full blocky color" (my son cleared several levels). It even has sound if you connect up a speaker to one of the prop pins. The main function of the board is to show the concept of loading and reloading binary files on the fly from an sd card. So you don't have to try to fit everything into one spin program (which means that each spin program can be small, which means more room for graphics buffers).

    That post above is chock full of all sorts of useful information.

    So if I understand right, a small TV could kind of be considered to have about 160 pixels? So if you had, say, 64x64 pixel icons, you could fit 4 of these on a little TV screen with some black around them and that could be the start menu.

    Easily would fit in hub ram.
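    The hub RAM claim is easy to check with back-of-envelope arithmetic (assuming one byte per pixel, as in Ariba's driver):

    ```python
    # Back-of-envelope: four 64x64 icons at one byte per pixel,
    # against the Propeller's 32 KB of hub RAM.
    icon_bytes = 64 * 64               # 4096 bytes per icon
    total = 4 * icon_bytes             # 16384 bytes for a four-icon menu
    print(total, total <= 32 * 1024)   # 16384 True
    ```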

    Load in new pixels for sub menus.

    For VGA it will be very different. Four 64x64 icons will sit in the middle of the screen. But that is ok - it will still have the same 'look and feel', and maybe it could be the beginnings of a graphical operating system? You could have an icon for "date and time", and one could be some gears which means "operating system options" with boring things like screen settings and file transfers, and one could be a folder that brings up sub menus of binary programs. There are already binary files that spawn other binary files: e.g., I boot into kyedos, then run catalyst to then run compiled catalina programs. It is a lot of tedious typing though, and icons would make the process much more intuitive.

    If you reckon the TV icons are working (yes, we will need a converter program), then is VGA going to be possible, maybe using a variant of Ariba's code?
  • BaggersBaggers Posts: 3,019
    edited 2010-11-16 01:43
    Hi all, sorry I've been ultra quiet of late, but been busy with finding and doing paying work, but anyway, just to let you know what I've been working on in what little prop time I have, since Potatohead is doing this for TV.
    It's a VGA driver that will do 256x192 any pixel colour using scanlines, like the TV drivers that have been used to make games. ( at 6Mhz it will do 320x240, but I'm trying to optimize to run on a 5Mhz crystal )
    Will show pics soon, followed by release ;)
  • JT CookJT Cook Posts: 487
    edited 2010-11-16 05:14
    Baggers wrote: »
    Hi all, sorry I've been ultra quiet of late, but been busy with finding and doing paying work, but anyway, just to let you know what I've been working on in what little prop time I have, since Potatohead is doing this for TV.
    It's a VGA driver that will do 256x192 any pixel colour using scanlines, like the TV drivers that have been used to make games. ( at 6Mhz it will do 320x240, but I'm trying to optimize to run on a 5Mhz crystal )
    Will show pics soon, followed by release ;)

    That would be awesome! I have been wanting a driver like that for a long time. I even tried it myself a while back, but with no success (tho I think I only gave it a couple of days of trying). I may have to plug this into the new game I am working on ;) Also what is the actual resolution that is being pushed out to the VGA monitor?

    Keep us updated!
  • BaggersBaggers Posts: 3,019
    edited 2010-11-16 06:57
    it was based on the 640x240 @69Hz timings
  • potatoheadpotatohead Posts: 10,260
    edited 2010-11-16 10:15

    (the message you entered is too short. Please lengthen to 10 characters.)
  • ericballericball Posts: 774
    edited 2010-11-16 11:52
    Re: color table
    The table I provided is from the capture potatohead did from my TV.spin colorgrid. There are formulas to convert from "TV color" (aka YIQ, YUV, YPrPb) to RGB, but there's as much bad info out there as good info. Again, "TV color" to RGB is a simple lookup table, but getting good results for the reverse will be a challenge, especially with only 134 colors.
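    The "simple" direction can be sketched with the standard ITU-R BT.601 full-range formulas. A minimal Python sketch; the exact coefficients a given Propeller driver assumes may differ, so treat this as the textbook version, not the driver's:

    ```python
    # Sketch: BT.601 full-range YCbCr -> RGB, the "simple lookup" direction.
    # Clamping keeps results in 0..255.

    def ycbcr_to_rgb(y, cb, cr):
        r = y + 1.402 * (cr - 128)
        g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
        b = y + 1.772 * (cb - 128)
        clamp = lambda v: max(0, min(255, int(round(v))))
        return clamp(r), clamp(g), clamp(b)

    print(ycbcr_to_rgb(128, 128, 128))  # mid gray -> (128, 128, 128)
    ```

    Running this over every Propeller color index once gives the lookup table; the hard reverse direction still needs a nearest-match search.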

    Re: resolution
    For composite video the color difference (R-Y and B-Y) signals are quadrature modulated at 3.5795MHz (227.5 cycles per line, or ~160 active pixels). At 320 active pixels any luma transitions get demodulated as color signals (unless the colorburst is disabled, then you just get black & white). 240 active pixels is around the max resolution before color aliasing becomes significant. S-video keeps the luma and chroma separated so you can generate 320 active pixels, but you lose the Propeller high saturation colors.
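    The 227.5-cycles-per-line figure falls straight out of the NTSC numbers; a quick check in Python (the line rate used here is the standard NTSC value, not from this thread):

    ```python
    # Sketch: NTSC chroma subcarrier cycles per scanline.
    # By definition the subcarrier is 227.5 times the line rate.
    f_sc = 3.579545e6        # chroma subcarrier, Hz
    f_h = 15734.26           # NTSC horizontal line rate, Hz
    cycles_per_line = f_sc / f_h
    print(round(cycles_per_line, 1))   # ~227.5
    ```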

    Re: Aural sub
    Both my 240H sprite driver and "better TV color" driver use the aural sub pin (but in different ways).
  • Dr_AculaDr_Acula Posts: 5,484
    edited 2010-11-16 21:43
    I know VGA better, so here is a documentation of how to create icons:
    1) Find some images
    2) Do a screen capture with Paint Shop
    3) Do copy/paste and size to 64x64 pixels
    4) Paint-fill the outside with black if it is white or another color
    5) Colors/Load Palette and load the VGA 64-color palette. This converts the image to these colors
    6) Colors: increase the depth to 24 bit (makes the maths easier below)
    7) Save as a RAW file (raw binary data, 3 bytes per pixel, no headers)
    8) Run a tiny program
        Private Sub Button92_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button92.Click
            Dim b As Byte
            Dim i As Long
            Dim j As Byte
            Dim Bitmap(12287) As Byte ' 64*64*3
            Dim Output(4095) As Byte
            Dim BinaryInput As New BinaryReader(File.Open("c:\" + TextBox46.Text, FileMode.Open))
            Dim BinaryOutput As New BinaryWriter(File.Create("c:\" + TextBox47.Text))
            For i = 0 To 12287
                Bitmap(i) = BinaryInput.ReadByte
            Next
            ' propeller format is RRGGBBxx
            For i = 0 To 4095
                b = Bitmap(i * 3)
                j = 0
                If b = 0 Then j = j + 0 ' 00 red
                If b = 85 Then j = j + 64 ' 01
                If b = 170 Then j = j + 128 ' 10
                If b = 255 Then j = j + 192 ' 11
                b = Bitmap(i * 3 + 1)
                If b = 0 Then j = j + 0 ' 00 green
                If b = 85 Then j = j + 16 ' 01
                If b = 170 Then j = j + 32 ' 10
                If b = 255 Then j = j + 48 ' 11
                b = Bitmap(i * 3 + 2)
                If b = 0 Then j = j + 0 ' 00 blue
                If b = 85 Then j = j + 4 ' 01
                If b = 170 Then j = j + 8 ' 10
                If b = 255 Then j = j + 12 ' 11
                Output(i) = j ' store value
            Next
            For i = 0 To 4095
                BinaryOutput.Write(Output(i))
            Next
            BinaryInput.Close()
            BinaryOutput.Close()
        End Sub

    9) Save with a name ending in .ico (they are actually binary files, so rename to .bin to view)
    10) Copy file to sd card
    11) Run the attached spin code (a hybrid of Ariba's screen driver and Kye's sd card, with behind the scenes debugging using the VT100 VGA driver)
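    For reference, the packing in step 8 can also be expressed compactly in Python; this is a hypothetical port (the `raw_to_ico` name and file paths are made up for illustration), doing the same RRGGBBxx byte packing as the VB code:

    ```python
    # Hypothetical port of the packing step: read a headerless 64x64 RAW file
    # (3 bytes per pixel, RGB order) and pack each pixel into the Propeller
    # VGA byte format %RRGGBB00 (2 bits per channel, low 2 bits unused).

    def raw_to_ico(raw_path, ico_path, pixels=64 * 64):
        data = open(raw_path, "rb").read()
        out = bytearray()
        for i in range(pixels):
            r, g, b = data[i * 3], data[i * 3 + 1], data[i * 3 + 2]
            # each channel was quantized to 0/85/170/255, so >> 6 maps
            # those values to the 2-bit codes 00/01/10/11
            out.append(((r >> 6) << 6) | ((g >> 6) << 4) | ((b >> 6) << 2))
        open(ico_path, "wb").write(bytes(out))
    ```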

    I think the Prop is ready for graphics!

    So, next little thing -

    a) getting Ariba's code to work so the pixels are smaller (same hub ram, most of screen is black)
    b) translating these .ico files into TV

    Is there a simple table that converts the 64 VGA colors into the nearest TV color?

    Failing that, a table for RGB to TV colors. (In some ways it would be better to start with the 24-bit original file rather than the one that has been reduced to 64 colors.)
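    Absent a published table, a nearest-match search is the usual fallback. A minimal Python sketch; `TV_PALETTE` is a hypothetical table (the byte values and RGB triples below are placeholders, not real Propeller colors), which in practice would be filled in from a capture of the driver's color grid:

    ```python
    # Sketch: map an RGB pixel to the nearest palette entry by squared
    # distance in RGB space. TV_PALETTE is a hypothetical
    # (propeller_color_byte, (r, g, b)) table; only two placeholder
    # entries are shown here.

    TV_PALETTE = [
        (0x02, (0, 0, 0)),        # black (placeholder values)
        (0x07, (255, 255, 255)),  # white (placeholder values)
    ]

    def nearest_tv_color(r, g, b):
        def dist2(entry):
            pr, pg, pb = entry[1]
            return (r - pr) ** 2 + (g - pg) ** 2 + (b - pb) ** 2
        return min(TV_PALETTE, key=dist2)[0]

    print(hex(nearest_tv_color(10, 20, 10)))   # dark input -> black entry
    ```

    A perceptually weighted distance (e.g. weighting green more heavily) would likely match better than plain RGB distance, but this shows the shape of it.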

    This operating system is called PotatoDOS.