Now that we know how to create a 3×3 linear matrix to convert white-balanced and demosaiced raw data into the connection space – and where to obtain the 3×3 linear matrix that then converts it to a standard output color space like sRGB – we can take a closer look at the matrices and apply them to a real-world capture chosen for its wide range of chromaticities.
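The two-matrix pipeline can be sketched in a few lines. The forward matrix below is made up purely for illustration – a real one comes from characterizing your specific camera – while the XYZ→linear sRGB matrix is the standard one for D65:

```python
# Hypothetical sketch of the pipeline described above:
# camera RGB -> XYZ (connection space) -> linear sRGB.
# FWD is an invented camera->XYZ forward matrix, for illustration only.
FWD = [[0.6, 0.3, 0.1],
       [0.2, 0.7, 0.1],
       [0.0, 0.1, 0.9]]

# Standard XYZ -> linear sRGB matrix (D65 white point).
XYZ_TO_SRGB = [[ 3.2406, -1.5372, -0.4986],
               [-0.9689,  1.8758,  0.0415],
               [ 0.0557, -0.2040,  1.0570]]

def matmul3(m, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

def camera_to_srgb_linear(rgb):
    """Apply both 3x3 matrices to a white-balanced, demosaiced raw triplet."""
    xyz = matmul3(FWD, rgb)          # camera space -> connection space
    return matmul3(XYZ_TO_SRGB, xyz) # connection space -> linear sRGB

print(camera_to_srgb_linear([0.5, 0.5, 0.5]))
```

Gamma encoding for display would follow as a separate, per-channel step; the matrices themselves operate strictly on linear data.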
My camera sports a 14-stop Engineering Dynamic Range. What bit depth do I need to fully and safely encode all of the tones captured from the scene by a linear sensor? As we will see, the answer is not 14 bits simply because that is the eDR – but it is not too far from it either, for other reasons that information science will show us in this article.
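To make the question concrete, here is a minimal sketch under assumed numbers (a full well of 50,000 e⁻ and a read noise of 3 e⁻, both invented for illustration, not taken from any particular camera). With a linear sensor, one sufficient condition is that the quantization step stay no larger than the noise floor in the deepest shadows:

```python
import math

# Assumed values, for illustration only (not from the article):
full_well = 50_000   # electrons at clipping
read_noise = 3.0     # electrons, noise floor in the deepest shadows

# Engineering Dynamic Range: full scale over the noise floor, in stops.
edr_stops = math.log2(full_well / read_noise)

# A linear ADC whose step is no larger than the read noise needs at least
# full_well / read_noise levels, hence this many bits:
bits = math.ceil(math.log2(full_well / read_noise))

print(f"eDR: {edr_stops:.1f} stops, linear bits needed: {bits}")
```

With these assumed numbers the eDR works out to about 14 stops while the linear encoding needs 15 bits – consistent with the point above that the answer is near, but not equal to, the eDR in stops.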
When photographers talk about grayscale ‘tones’ they typically refer to the number of distinct gray levels present in a displayed image. They don’t want to see discrete steps in a naturally slow-changing gradient like a dark sky: if it is smooth in the scene, they want to perceive it as smooth when looking at their photograph. So they want to make sure that all of the tonal information available from the scene has been captured and stored in the raw data by their imaging system.
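The banding effect can be seen with a toy experiment (the bit depths and sample count below are arbitrary choices for illustration): quantize a smooth linear ramp and count how many distinct levels survive.

```python
# Quantize a smooth gradient to a given bit depth and count surviving levels.
def quantize(value, bits):
    """Map a value in [0, 1] to the nearest of 2**bits levels and back."""
    levels = 2 ** bits - 1
    return round(value * levels) / levels

ramp = [i / 999 for i in range(1000)]    # a smooth 1000-sample gradient

coarse = {quantize(v, 4) for v in ramp}  # 4 bits: only 16 distinct levels
fine = {quantize(v, 10) for v in ramp}   # 10 bits: every sample kept distinct

print(len(coarse), len(fine))            # -> 16 1000
```

At 4 bits the ramp collapses to 16 visible bands; at 10 bits all 1000 samples remain distinct, so the gradient still reads as smooth.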