Those require memory to buffer the compute operations whose output ends up rendered as simple bitmaps. It's not the format that's at issue, but Propeller chip resources. Most things of any resolution or complexity must either be dynamically drawn, or simplified and formatted for optimal display using the on-chip display systems.
cavelamb, BMP is very convenient for Propeller use because BMP files are normally not compressed.
Other image formats are normally compressed and require a lot of computations to extract the raw pixel data...
I suspect something like PPM would be preferable. There, you get the advantage that the top row is first, as opposed to the bottom row (Why does BMP do it that way?). It's also a much simpler image format to read.
Maybe some simplified DXT1 real-time decompression.
Since you will only have 16 (or 64) colors, you could describe each 4x4 block of pixels using the 64 (or 256) most common pre-calculated gradients.
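The 4x4-block idea is essentially how DXT1 works: each block stores two endpoint colors plus a 2-bit index per pixel selecting one of four values interpolated between them. A minimal illustrative decoder (my own names and layout, not the real DXT1 bit-packing):

```python
# Illustrative DXT1-style block decoder: two endpoint colors plus a
# 2-bit selector per pixel (not the actual DXT1 bit layout).
def decode_block(c0, c1, indices):
    """c0/c1: endpoint (r, g, b) tuples; indices: 16 values in 0..3."""
    def lerp(a, b, t):
        return tuple(round(x + (y - x) * t) for x, y in zip(a, b))
    palette = [c0, c1, lerp(c0, c1, 1 / 3), lerp(c0, c1, 2 / 3)]
    return [palette[i] for i in indices]

# A block fading from black to white:
pixels = decode_block((0, 0, 0), (255, 255, 255), [0, 2, 3, 1] * 4)
print(pixels[:4])   # [(0, 0, 0), (85, 85, 85), (170, 170, 170), (255, 255, 255)]
```

Decoding is just a table lookup per pixel, which is why this style of compression is plausible in real time on small hardware.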
1. Average the colors in every horizontal pair of pixels, replacing two pixels with one elongated one.
2. Convert the image to use the Prop's 64-color palette.
3. Do a color-frequency analysis on the paletted image, and create a sub-palette of the 16 or so most-used colors from the 64.
4. Reconvert the original image modified in #1, dithering with colors from the new sub-palette, such that every block of 16 pixels is restricted to the four most-used colors in the block.
5. Keep a separate list of which colors change from one 16-pixel block to the next and output it to the data file, along with the 4-color image.
6. During VGA output, read the pixel values, packed 16 2-bit pixels per long, along with the palette replacement data, and output to the port.
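A rough sketch of steps 2 and 3 (Phil's actual tooling is a Perl script; this Python version and its names are mine): reducing each channel to 2 bits gives the Prop's 64-color space, and a frequency count then yields the sub-palette.

```python
from collections import Counter

def to_prop64(pixel):
    """Quantize an (r, g, b) pixel, 8 bits per channel, to 2 bits per channel."""
    r, g, b = pixel
    return (r >> 6, g >> 6, b >> 6)          # each channel now 0..3: 64 colors

def build_subpalette(pixels, size=16):
    """Step 3: the `size` most frequent Prop-palette colors in the image."""
    counts = Counter(to_prop64(p) for p in pixels)
    return [color for color, _ in counts.most_common(size)]

pixels = [(250, 10, 10), (245, 15, 5), (30, 30, 30)]
print(build_subpalette(pixels))   # [(3, 0, 0), (0, 0, 0)]
```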
Based on the images I've analyzed, I think I can get reasonable quality and that the data will fit in hub RAM.
Now, about that "four most-used colors" thing I mentioned in #4 above. It's not optimum, since it does not minimize the pixel color errors. Consider the following row of 16 elongated pixels:
The four most-used colors are all gray levels. If they were chosen to represent this block, the red would never appear. However, if the red were included at the expense of one of the grays, the missing gray could easily be corrected in the next row by the dithering process. As a consequence, it might be better to weight the color counts by the corresponding error magnitudes from the previous row. That way minority colors would stand a better chance of representation -- sort of like affirmative action for palette building.
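One way to sketch that weighting (names and the weighting scheme are entirely hypothetical, not Phil's actual method): add each color's leftover dithering error from the previous row to its raw count before picking the block's four colors, so a minority color like the lone red can still earn a slot.

```python
from collections import defaultdict

def pick_block_colors(block, prev_row_error, slots=4):
    """block: palette indices for 16 pixels; prev_row_error: index -> error magnitude."""
    weight = defaultdict(float)
    for color in block:
        weight[color] += 1.0                 # raw frequency
    for color, err in prev_row_error.items():
        weight[color] += err                 # boost colors the previous row missed
    return sorted(weight, key=lambda c: weight[c], reverse=True)[:slots]

# Gray-dominated block with one red pixel (index 9); red's error boost wins it a slot:
block = [7] * 6 + [5] * 5 + [3] * 4 + [9]
print(pick_block_colors(block, {9: 4.5}))   # [7, 9, 5, 3]
```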
That can get complicated!
You almost need a palette that has some colors (RGBYBW?) that are always included, and then for the 10 left they can be made up on frequency?
I concluded that the dithering made more of an impact than trying to optimise the palette. Of course, optimising the palette is an entire field on its own: http://en.wikipedia.org/wiki/Color_quantization The first interesting thing in that article is that areas of a picture near each other are likely to have similar colors. But that is in two dimensions, not one. So would you be better off working with tiles than with lines?
Once you get into optimising palettes using color quantization, you are halfway towards jpg decompression, which I think works with 8x8 tiles and lots of cosine maths. That is clearly impossible on the fly on the Propeller...
Phil, your compression technique is pretty impressive, so I'm interested to see what you can come up with for images...
Just a thought, but maybe I'd try using your same compression technique with 5 or 6 more cogs, but these ones only working on the image part of the screen...
Maybe you can get more colors that way?
You almost need a palette that has some colors (RGBYBW?) that are always included, and then for the 10 left they can be made up on frequency?
I definitely agree. I was thinking of artificially augmenting the frequencies of certain high-intensity colors to raise them in the priority table, just to make sure they appear. The lower-intensity colors are approximated well enough visually by dithering.
You might be able to evaluate them in YUV instead of RGB. Pick your colors based on chroma change instead of luminance. JPEG separates the two for this reason: since the eye is less sensitive to chroma, it subsamples chroma relative to luminance. There are a variety of encodings; MPEG uses 4:2:0.
EDIT: Upon further investigation, it appears that HSV would be easier to program to. YUV doesn't isolate the luminance by itself the way HSV does, since Y contains a green component too.
HSV, on the other hand, lets you throw out V and evaluate just S, or a combination of H*S. I think saturation will tell you more than hue, so more saturated colors would dominate. Experimentation is the best way to go.
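A quick sketch of that saturation-weighted ranking using the standard library's colorsys (the function name and the 1+S weighting are my own guesses at one workable scheme, not a tested recipe):

```python
import colorsys

def saturation_weighted_rank(pixels):
    """Rank distinct colors by count, weighted up by HSV saturation."""
    weights = {}
    for r, g, b in pixels:
        _, s, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)   # s is 0..1
        weights[(r, g, b)] = weights.get((r, g, b), 0.0) + 1.0 + s
    return sorted(weights, key=lambda c: weights[c], reverse=True)

# Three grays vs. two fully saturated reds: the reds rank first.
pixels = [(128, 128, 128)] * 3 + [(255, 0, 0)] * 2
print(saturation_weighted_rank(pixels))
```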
Here's an example of an image converted to an optimized (by sorting only) 16-color palette based upon Prop VGA colors. I have not yet restricted blocks of 16 pixels to four of the 16 colors, so the ultimate result won't look quite as good:
Oddly enough, there is not a single fully-saturated color in the reduced palette; and yet the wires and LEDs, which have been troublesome in past attempts, look reasonably okay.
One might well ask, "Why limit to 16 colors when the Prop can do 64?" The answer is that the fewer colors available, the fewer palette color swaps are necessary between blocks of pixels. Each color swap requires at least one byte of RAM. Limiting to 16 colors in an image this size can reduce the number of color swaps from thousands to hundreds.
PhiPi, that is a very nice demo. Only 16 colors and dithering, and if I lean back a bit from the computer and squint a little, the picture looks almost the same as the original. Amazing!
So to describe a pixel, you only need 4 bits. That means two pixels per byte, or 8 pixels per long. I guess you can then work back and ask how much hub is free once there is the code and the data for the text part, and that determines the largest picture size?
Hmm - half the hub ram ballpark = 16000 bytes; at two pixels per byte that's 32000 pixels, so a square picture of, say, 178x178 pixels?
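The hub-RAM budget above, worked through: at 4 bits per pixel a byte holds two pixels, so roughly 16000 bytes holds 32000 pixels, a square of about 178 on a side.

```python
import math

budget_bytes = 16000                 # roughly half of the Prop's 32 KB hub RAM
pixels = budget_bytes * 2            # two 4-bit pixels per byte
side = math.isqrt(pixels)            # largest square side that fits
print(pixels, side)                  # 32000 178
```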
Um hmm. And the above image would require nearly 24kB. I've got to get that number down by quite a bit: two bits/pixel, plus palette restrictions on a block-by-block basis. That's my next step...
If you can show that with 24kB, I think what I would do is push the text data out to flash and bring it in just in time, line by line so that most of hub can be used for the image...
Um hmm. And the above image would require nearly 24kB. I've got to get that number down by quite a bit: two bits/pixel, plus palette restrictions on a block-by-block basis.
Just out of interest, what would that picture look like shrunk to 250x250?
I have attached a palette for GIMP that should represent the 64 Propeller colors. I *think* it's right, I couldn't easily find an existing graphic to verify against.
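For cross-checking such an attachment, the 64 colors can be generated mechanically: 2 bits per channel, with each 2-bit value expanded to the usual 8-bit levels 0/85/170/255. A sketch that emits GIMP's simple .gpl text format (the palette name and color labels are arbitrary):

```python
def prop_palette():
    """All 64 Propeller VGA colors: 2 bits per channel, expanded to 8 bits."""
    levels = [0, 85, 170, 255]
    return [(r, g, b) for r in levels for g in levels for b in levels]

def to_gpl(colors, name="Propeller VGA"):
    """Render colors in GIMP's plain-text .gpl palette format."""
    lines = ["GIMP Palette", "Name: " + name, "#"]
    for i, (r, g, b) in enumerate(colors):
        lines.append("%3d %3d %3d\tprop_%02d" % (r, g, b, i))
    return "\n".join(lines)

palette = prop_palette()
print(len(palette), palette[0], palette[-1])   # 64 (0, 0, 0) (255, 255, 255)
```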
Here's a sampler of various palette reductions. Each began with the same 16-color sub-palette of the original 64-color Propeller VGA palette. Each block of 16 horizontal 2X-elongated pixels used a subset of the sixteen colors, with subset sizes ranging from one color to four. The bottom-middle image allows each pixel its own color from the sixteen, and the bottom-right image is the original 24-bit color photo.
It looks like the four-color-per-block quality will suffice. It would be helpful if I could limit it to three (or limit the number of color changes from block-to-block to three), since the number of changes to read from the change list could be encoded into two bits.
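One hypothetical way to exploit that two-bit count (the byte layout here is entirely mine, not Phil's format): a header byte whose low bits give the change count, followed by one byte per change with the 2-bit palette slot in the top bits and the 6-bit new color index below.

```python
def encode_block_changes(changes):
    """changes: up to three (slot 0..3, color 0..63) pairs for one 16-pixel block."""
    assert len(changes) <= 3, "three changes max, so the count fits in two bits"
    out = [len(changes)]                     # count byte (only 2 bits really used)
    for slot, color in changes:
        out.append((slot << 6) | color)      # slot in bits 7..6, color in bits 5..0
    return bytes(out)

print(list(encode_block_changes([(2, 17), (0, 63)])))   # [2, 145, 63]
```

An unchanged block then costs a single byte, which is where the thousands-to-hundreds savings would come from.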
... can you change that dither to a Floyd-Steinberg dither ...
I'll give it a try tomorrow. It probably would look better and won't require a huge change to the Perl script, but I've been at this all day and my mind is turning to mush.
Some months ago I wrote my own Floyd Steinberg dithering program but rummaging around on the internet I found a nice free one http://www.gimp.org/ (which I see pedward is using as well)
To do the FS transform and reduce the palette to an optimised 16 colors, use the menu Image/Mode/Indexed.
This is 190x214, which is 20.2kB with two pixels in each byte. Would there be room with the code and the 'one bit per pixel' data for the text? Or would it need to be a bit smaller to get it in 16k?
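Floyd-Steinberg itself is only a few lines. Here is a grayscale sketch that quantizes to the Prop's four gray levels (purely an illustration, not Phil's Perl script or GIMP's implementation):

```python
def fs_dither(img, levels=(0, 85, 170, 255)):
    """Floyd-Steinberg dither a grayscale image (list of rows of 0..255 values)."""
    img = [list(map(float, row)) for row in img]   # work on a mutable float copy
    h, w = len(img), len(img[0])
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = min(levels, key=lambda v: abs(v - old))   # nearest gray level
            img[y][x] = new
            err = old - new
            if x + 1 < w:
                img[y][x + 1] += err * 7 / 16          # right
            if y + 1 < h:
                if x > 0:
                    img[y + 1][x - 1] += err * 3 / 16  # below-left
                img[y + 1][x] += err * 5 / 16          # below
                if x + 1 < w:
                    img[y + 1][x + 1] += err * 1 / 16  # below-right
    return [[int(v) for v in row] for row in img]

# Mid-gray dithers to a checkerboard of the two nearest levels:
print(fs_dither([[128, 128], [128, 128]]))   # [[170, 85], [85, 170]]
```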
Hey everybody, just checking in after a very busy day to let you know I'm reading as often as I can.
Today started with a visit by the SparkFun crew - we're doing a cultural exchange of sorts, learning and sharing with one another. Then we had a visit from Joey Hudy (http://lookwhatjoeysmaking.blogspot.com/). Next we had a number of other visitors appear at the office - ones of particular importance whom I'm unable to name or discuss.
Bottom line is that it's 11 pm and I'm finally out of the office, but will need to be in Walnut Creek in the morning.
The VGA progress looks great to me. I'm a bit concerned about how much time I've taken up from this generous forum membership, however. I'll be ready to use whatever we wind up with.
I hope to return to the forums in earnest some day, but I think it'll be about July the way things are going. In the meantime I'll put to use any code examples.

Thanks,
Ken Gracey
I'm afraid it would have to be a lot smaller. I'm trying to get a 288 x 330-pixel (144 wide pixels x 330 normal pixels) down to less than 13kB or so. The text can take up 9kB or more, and there still needs to be room for the SD reader and I'm not sure yet what else. Anyway, the smaller the better, since it keeps Ken's options open for slide complexity.
BTW, I downloaded Gimp yesterday, in hopes that their UI had improved since the last time I tried it on Linux, thinking maybe Ken could use it to prepare the slides. The UI is still pretty hideous, though, with multiple floating windows instead of one unified GUI. It's a powerful program, to be sure, but Corel PhotoPaint will do almost everything that Gimp will, including the palette optimization and FS dithering. However, when you tell it to palettize an image with an optimized n colors, it won't start from an image that's already been palettized with, say, the 64 Prop VGA colors. Maybe I should take another look at Gimp, but I still prefer coding it in Perl, since I have complete control (issues).
It's been a time-eater, for sure, but don't worry about that. I wouldn't be doing it if it wasn't a fun challenge (i.e. an obsession); and even if you never use it, I'm learning some techniques that might come in handy for future projects.
BTW, I think you need to get a pilot's license and your own plane with all the roadshow stuff you're doing.
I did some experiments with the windows 16 color palette. Not very impressive with no dithering http://en.wikipedia.org/wiki/List_of_software_palettes#Microsoft_Windows_default_16-color_palette but on this page http://en.wikipedia.org/wiki/List_of_color_palettes (scroll down a bit) even with only 8 colors and dithering it is not too bad.
Good idea. I may give that a try.
-Phil
That is NOT the result I got over here when attempting 16 colors. You're a PropGOD man.
OBC
Here's the picture rendered with the prop palette and F-S. It *doesn't* limit the colors per 16 pixels, I was just tinkering with GIMP.
Ok, Thanks guys. I do see your point...
Ah...you are developing "Pimp". "Perl Image Manipulation Program" or "Phil's Image Manipulation Program".
Then Ken can "Pimp his presentations"
Might be a silly question but is there any merit in supporting plain old monochrome pictures?
Monochrome pictures are certainly a possibility, and they would automatically be supported by the program I'm working on. There just wouldn't have to be any color swaps between pixel blocks, since the Prop VGA has only four gray levels.
-Phil