Color Cell image (.cc4) display code and encoding tool
This is to wrap up discussion from this thread:
https://forums.parallax.com/discussion/171799/color-cell-compression
You can now display nearly fullscreen images with VGA at 1080p from HUB RAM (no external memory required)
The .cc4 image format uses a variation of color cell compression that leverages the existing 2bpp tile driver. A really nice feature of this approach is that no decompression is needed: just point the tile pointers at the right spots, set the palettes, and you're done.
The encoding tool provided here takes 8bpp (256 color) images whose height and width are multiples of 16 (due to the 16x16 cell size). The result can be saved as a .cc4 file.
Some possible resolutions with this example are:
1440x960
1424x1072
720x1088
1456x976
The actual limitation is that the image must fit in HUB RAM, so a .cc4 file has a ~400 kB size limit.
Several example .cc4 files are provided here.
The tool is too big to post here...
Download the encoding tool here: https://www.rayslogic.com/Propeller2/CC4/CC4.htm (source code posted a few posts down from here)
Note: The 1080p Tile Driver is rigged for a setup such as a P2 Eval Board with A/V Adapter on Basepin=8 or equivalent.
Comments
This parrot comes out exceptionally well with .cc4 encoding. You have to look really close to find faults... The 8bpp source file is attached for comparison with the bmp version of the .cc4 file.
I've tried just about all the images from here:
https://imagecompression.info/test_images/
They all generally come out really well.
There is only one that I'm disappointed with...
The tool doesn't work so well on this flower, around the edges.
Seems the tool could be improved perhaps...
Original 8bpp and bmp of the .cc4 version are attached.
Here's the Visual Studio C++ source code for the encoder.
In case someone can improve on it...
Hmm, I can't get it to totally break with any normal image, good job. Though it has issues with bright red. Doing color math in RGB space is just kinda broken and that's a symptom. Here's some ruby code I wrote for perceptual color distance computation in CIELAB or OKLAB color spaces. In either case the L component should be perceptual luminance, so try using that. I used to have a C++ version, but that is currently not accessible... (hard drive failed and not sure if that directory is in the backup).
Note for porting: the ** is the power operator
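For anyone porting to C/C++, the `**` expressions map directly onto `<cmath>`; cube roots (used for the LMS nonlinearity) are better done with `std::cbrt`, which also handles negative inputs:

```cpp
#include <cmath>

// Ruby's x ** y is std::pow(x, y) in C++.
double power(double x, double y) { return std::pow(x, y); }

// x ** (1.0/3) is better written std::cbrt(x),
// which stays well-defined for x < 0.
double cube_root(double x) { return std::cbrt(x); }
```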
Interesting.
Hope I get time to test it some day
Got slideshow working on SimpleP2 board, but FSRW is refusing to work on Eval board for some reason I don't understand...
I think it's my USB setup with the board being after 3 hubs... Have to figure that one out before I post it.
But, to make this useful in the tile driver for GUI use, I have to see how good or bad the images look with a fixed palette (instead of an optimized palette).
The palette usually used in the tile driver is described like this:
XTerm palette (to be slightly modified to avoid duplicate colors)
'from: https://jonasjacek.github.io/colors/
'Don't like that several of the first 16 are duplicated later...
'Going to make small changes to colors: 0,8,9,10,11,12,13,14,15 so they are distinct
Will be interesting to see what the parrot looks like with this...
Got the slideshow working with Eval board with A/V on basepin P8.
Seems I had a "DRVNOT P58" in the VGA driver, which was messing up the uSD. All better now.
Needs these .cc4 images on uSD.
The transitions are interesting...
Ok, the last thing I wanted to do here was try this with a fixed palette.
This lets us keep the background color the same and print the filename on the screen.
The quality suffers a lot, but this is what can be integrated into the 2bpp Tiled GUI.
Could be useful to have smaller images on the screen alongside or under dials, buttons, sliders, etc.
@Wuerfel_21 I have luminance as:
PalLuma[i] = r * 0.30 + g * 0.59 + b * 0.11;
I can't really figure out what yours is doing...
But, including this luminance into the error when finding best palette index to use is probably a good idea...
Calculating luminance like that is a very, very rough approximation. First off, the RGB values in the image are not linear (that is, photon emission by the screen is not proportional to the number value). Instead, they are transformed by the sRGB transfer curve (unless the image contains color profile metadata, but that's another headache). This means there's more precision in darker shades, where your eyes are more sensitive.
This code converts sRGB values to linear floating point RGB in range 0 to 100 (IDK why, should really be 0 to 1.)
Conversely, this is the inverse (linear to sRGB):
Note that this can be approximated by taking the square (sRGB->linear) or square root (linear->sRGB) for real-time applications.
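A minimal sketch of both directions (hypothetical helper names — the attached code isn't reproduced in this thread, and this version uses the conventional 0..1 range rather than 0..100):

```cpp
#include <cmath>

// Decode an sRGB component (0..1) to linear light (0..1),
// per the standard sRGB transfer curve.
double srgb_to_linear(double s) {
    return (s <= 0.04045) ? s / 12.92
                          : std::pow((s + 0.055) / 1.055, 2.4);
}

// Encode linear light (0..1) back to an sRGB component (0..1).
double linear_to_srgb(double l) {
    return (l <= 0.0031308) ? l * 12.92
                            : 1.055 * std::pow(l, 1.0 / 2.4) - 0.055;
}

// The cheap real-time approximation mentioned above: treat gamma as 2.0.
double srgb_to_linear_fast(double s) { return s * s; }
double linear_to_srgb_fast(double l) { return std::sqrt(l); }
```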
Also note that your monitor most likely sucks and doesn't quite adhere to sRGB (or any other specification) without calibration. You likely have a gamma lower than 2.4 in practice.
Linear RGB values allow some neat things and using these to calculate luminance is more accurate (when you then convert that linear value back into sRGB). But aha, we don't really perceive absolute counts of photons either. In the OKLAB version (just ignore the CIELAB version for now), the L,M,S variables refer to the response of the long, medium and short cone cells (under normal conditions).
The actual luminance you see from that is then a linear combination of these (L), and adding two more well-chosen orthogonal axes (A and B) can then describe a color.
When you do math on these LAB values (such as calculating their euclidean distance), the result is more in line with how you actually perceive colors in relation to one another (notably not in line with how colors physically interact - that's what linear RGB is for). Note that LAB values do not have any particularly sensible range of valid values and it is easy to construct a color that doesn't really exist.
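The pipeline described above can be sketched like this — a C++ version under the assumption that OKLAB's published matrices (Björn Ottosson's) are what the Ruby code uses, taking linear RGB in 0..1 as input (so run sRGB values through the transfer-curve decode first):

```cpp
#include <cmath>

struct Lab { double L, a, b; };

// Linear RGB (0..1) -> OKLAB, using Ottosson's published matrices.
Lab linear_rgb_to_oklab(double r, double g, double b) {
    // Approximate responses of the long/medium/short cone cells.
    double l = 0.4122214708 * r + 0.5363325363 * g + 0.0514459929 * b;
    double m = 0.2119034982 * r + 0.6806995451 * g + 0.1073969566 * b;
    double s = 0.0883024619 * r + 0.2817188376 * g + 0.6299787005 * b;
    // Cube root models the nonlinear perceptual response.
    double l_ = std::cbrt(l), m_ = std::cbrt(m), s_ = std::cbrt(s);
    return {
        0.2104542553 * l_ + 0.7936177850 * m_ - 0.0040720468 * s_,
        1.9779984951 * l_ - 2.4285922050 * m_ + 0.4505937099 * s_,
        0.0259040371 * l_ + 0.7827717662 * m_ - 0.8086757660 * s_,
    };
}

// Perceptual distance is then just Euclidean distance in OKLAB,
// e.g. for picking the best palette entry for a pixel.
double oklab_distance(const Lab& x, const Lab& y) {
    double dL = x.L - y.L, da = x.a - y.a, db = x.b - y.b;
    return std::sqrt(dL * dL + da * da + db * db);
}
```

In an encoder, you'd convert each candidate palette color to OKLAB once up front, then minimize `oklab_distance` per pixel instead of RGB distance.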