Alpha blending pixels
in Propeller 2
This discussion was created from comments split from: New documentation idea.
Ada, you once said that we need to do gamma correction when alpha blending pixels.
Would this be accomplished by raising each R/G/B input channel pair to the power of 2.2, performing the ratioed alpha blends, then raising each output channel to the power of 1/2.2? Or do I have the exponents in the wrong order?
Each of the power conversions could use a 256-entry lookup table covering 0..255, right? One for raising and one for lowering.
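As a sketch of what those two tables and the blend might look like (Python just for illustration; the table names are mine, and this uses the simple gamma-2.2 approximation rather than the exact sRGB curve):

```python
GAMMA = 2.2

# Expand each 8-bit channel to 16-bit linear so the blend has headroom.
expand = [round((i / 255.0) ** GAMMA * 65535) for i in range(256)]

# Compress a 16-bit linear value back down to an 8-bit channel.
compress = [round((i / 65535.0) ** (1.0 / GAMMA) * 255) for i in range(65536)]

def blend_channel(a, b, alpha):
    """Gamma-correct blend of two 8-bit channel values; alpha in 0..255."""
    lin = (expand[a] * alpha + expand[b] * (255 - alpha)) // 255
    return compress[lin]
```

The expand table is the 256 entries you describe; compressing via a full 65536-entry table is just the laziest way to get back — a search or a finer-grained math step would do too.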
If we wanted to cheat and use exponent values 2 and 1/2, instead, we could do:
Is that sensible?
Sensible, yes, but doing that at 8 bit precision will net you really poor quality.
If you're just blending between images it's probably fine to not do it correctly. A lot of hard- and software does not perform gamma-corrected blending. It is really important when you do gradients or anti-aliasing.
It is for anti-aliasing fonts.
Assuming we have 8-bit inputs and we need 8-bit outputs, would it be sufficient to do the computation at, say, 12 bits? Would that make the quality okay?
Yeah. Though try whether it's okay without gamma correction; what comes out in the end is what matters most.
You had pointed out once that the anti-aliasing I was doing in the DEBUG display windows was not right. You showed an example of a red line on a green background. I want to make it right.
So, do I run the inputs through the exponential expander, do the alpha blend, then run the result through the exponential compressor? Or, do I do the compressor and then expander?
First you expand, do the computation, then you compress back down, yeah.
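A minimal floating-point version of that order of operations (my own sketch, using the gamma-2.2 shortcut):

```python
def gamma_blend(fg, bg, alpha):
    """Blend two 8-bit channel values; alpha in 0.0..1.0."""
    f = (fg / 255.0) ** 2.2                  # expand to linear
    b = (bg / 255.0) ** 2.2
    lin = alpha * f + (1.0 - alpha) * b      # blend in linear space
    return round(lin ** (1.0 / 2.2) * 255)   # compress back down
```

A 50/50 blend of white and black comes out around 186 rather than the 128 a naive blend gives, which is exactly the brightness error gamma correction fixes.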
Or better yet, just use OpenGL and just make sure the render targets and textures are set to sRGB ;P
Thanks, Ada. I will get this working now.
Ada, does this look right?
Here is how I coded it in Delphi:
A lookup table to do this would have been 256 x 256 x 256 bytes, or 16MB.
On the P2, I can use QEXP and QLOG to do the expansion and compression. I don't need to do it for live pixels, just for some 4-entry LUT tables for dithering 2-bit font pixels in dynamic color schemes.
Technically it'd be even more correct to use the proper sRGB formula (see below), but that's pedantic. In practice even the shortcut of gamma=2 is good enough.
Does that mean it is linear for small values and follows a power curve for larger values?
Exactly. It doesn't really matter (since it only affects values below $0A in 8 bit precision and even then only slightly), but that's how they define it. If you're using a LUT you can take it into account, otherwise just YOLO it.
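For reference, the exact sRGB transfer functions Ada is describing look like this (standard constants from the sRGB spec; values normalized to 0..1):

```python
def srgb_to_linear(c):
    """Linear segment below the threshold, power curve above it."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    if c <= 0.0031308:
        return c * 12.92
    return 1.055 * c ** (1 / 2.4) - 0.055
```

The 0.04045 threshold is about 10/255, matching the "values below $0A" remark above.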
So, this Power(sRGB, 2.2) business is just a close approximation.
Ada, I have found that using exponents 2.0 and 1/2, instead of 2.2 and 1/2.2, actually looks perfect, as the blend from one color to another doesn't veer off the color path due to over-bloating. The supposedly correct 2.2 results in some yellow hues while transitioning from green to red. This observation is surprising, but kind of feels right. The exponent 2.0 also computes without overflows, so simple 2-clock MUL instructions can be used (rgb255 * rgb255 * alpha255). Then, QSQRT can compute the root of the sum of the products.
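A Python model of that gamma-2.0 arithmetic (the real thing would be P2 assembly with MUL and QSQRT; this just shows the math, and the function name is mine):

```python
import math

def blend2(fg, bg, alpha):
    """Gamma-2 blend of 8-bit channel values; alpha in 0..255.
    The products stay comfortably inside 32 bits (max 255*255*255)."""
    acc = fg * fg * alpha + bg * bg * (255 - alpha)
    return round(math.sqrt(acc / 255))   # QSQRT's job on the P2
```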
What monitor are you observing that on? A lot of old/cheap ones have rather poor accuracy to the sRGB standard, so the effective gamma may actually be closer to 2.0
With monitors attached to my desktop Linux boxes, I've generally preferred selecting the Mac's 1.8 gamma in the monitor's setup. It makes darker colours more visible. I have no idea if the GPU output also has a correction ... Come to think of it, I do the same with the big TV in the lounge for the same reason - the dark scenes become more visible.
I am using a newer Samsung 4k TV. It has a really nice picture with very high contrast. I think it has some gamma adjustment, which I will try.
There's another alpha blend technique that might provide equally appealing results without the complications of the gamma/ungamma method. It involves creating two transparencies against the background color (white) and then combining those transparencies with each other.
Let p and q = 1 - p be the alpha blend ratios. So, given the two colors to blend, RGBp and RGBq, the two transparencies are defined by:
RGBTp = p * RGBp + q * 255, and
RGBTq = q * RGBq + p * 255
Combining the two,
RGB = p * RGBTp + q * RGBTq
Here's an image that illustrates the results of this method, compared with a straight linear blend:
Here's the Perl program I used to do the computations:
-Phil
That's interesting, Phil. It whitens the transition values.
Chip, I'd be interested to see how this method works on your font anti-aliasing example.
-Phil
Well, I got the gamma-correction working for the font rendering and it was terrible for that application. It didn't blend things well, at all. I don't know why it works so well for lines, but not for fonts.
It turns out that simple alpha-blending works best for adding dither pixels to low-resolution fonts. It works in all cases of color blending, too. It may be because I'm effectively starting with something akin to gamma values, since the RGB data is expanded in the monitor, so that it looks linear to our eyes. I am alpha-blending compressed values.
To do the job, I used the BLNPIX instruction. I calculated that the blend level I need is 255 * (0.5 / sqrt(2.0)), which is 90 decimal. So, this is all it takes:
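The actual code (not shown above) is P2 assembly. As a rough model, BLNPIX linearly interpolates each 8-bit lane of a 32-bit pixel toward the other pixel by the previously set blend level; the rounding details here are my guess, not the silicon's:

```python
def blnpix_model(background, other, level):
    """Rough model of a per-byte alpha blend of two 32-bit pixels.
    Each 8-bit lane moves from 'background' toward 'other' by level/255."""
    out = 0
    for shift in (0, 8, 16, 24):
        b = (background >> shift) & 0xFF
        o = (other >> shift) & 0xFF
        out |= ((b * (255 - level) + o * level + 127) // 255) << shift
    return out
```

With level = 90, each lane ends up about 35% of the way toward the dither pixel, matching the 255 * (0.5 / sqrt(2.0)) figure.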
The alpha-blended result winds up in 'background'.
Phil, I think the method you proposed would suffer the same kind of bias problems I had with the other gamma correction algorithm. Simple alpha blending works perfectly going from a light background to a dark foreground, as well as from a dark background to a light foreground.
Maybe relevant, but in any case fun to watch
Thanks. That was pretty clear.
I'd heard that stored pixel data wasn't linear, but never knew in what way. The square-root thing is totally new to me, but it's exactly the sort of thing that should be general knowledge for everyone.
EDIT: Aren't Photoshop and the likes meant to have a linear mode that edits in 48-bit linear colour? Presumably when operating that way they pre-convert any imported image to linear so that any blending can be done with faster linear calculations.
Different gamma: I learned about this in the broadcast world 30 years ago.
Gamma was explained to us this way: the analogue Plumbicon tube cameras got the expensive gamma correction at the capture end, to save electronics in every home cathode-ray TV screen.
[https://thebroadcastbridge.com/content/entry/15821/is-gamma-still-needed-part-5]
But I understand you have a compression/decompression algorithm issue.
Normally a red dot should be either on or off at a pixel, not a transition.
I understand that the generated pixels do not match the screen resolution, so a generated pixel is not one screen pixel but covers more or fewer of them.
https://729solutions.com/understanding-resolution-vs-pixel-dimension/
Daniel
https://wiki.xiph.org/Videos/A_Digital_Media_Primer_For_Geeks#Gamma_and_its_Correction
https://xiph.org/video/vid1.shtml - Starting around 20:25 in the video.
That video was really well done. There was a ton of information that is hard to assemble together by doing searches.
Interesting to learn that the human eye has a gamma of about 3.