An Introduction to Color in Digital Cameras

This article sets the stage for a discussion of how pleasing color is produced during raw conversion. The easiest way to understand how a camera captures and processes ‘color’ is to start with an example of how the human visual system does it.

An Example: Green

Light from the sun strikes leaves on a tree. The foliage of the tree absorbs some of the light and reflects the rest diffusely towards the eye of a human observer. The eye focuses the image of the foliage onto the retina at its back. Near the center of the retina there is a small circular area called the fovea centralis, which is dense with light receptors of well-defined spectral sensitivities called cones. Information from the cones is pre-processed by neurons and carried by nerve fibers via the optic nerve to the brain where, after some additional psychovisual processing, we recognize the color of the foliage as green[1].

Figure 1. The human eye absorbs light from an illuminant reflected diffusely by the object it is looking at.

If we zoom in on the fovea we see that cones come in three flavors depending on their spectral sensitivity, which may be more attuned to the Long, Medium or Short wavelengths of the visible spectrum; they are denoted in turn L, M, S or \rho, \gamma, \beta cones because they loosely correspond to the red, green and blue portions of the spectrum (I will use the latter notation here).

Follow the Photons

Let’s follow light from the illuminant along its path to the eye, with its spectral power distribution (SPD) expressed as a photon distribution. This is accomplished by dividing the relative energy contained within each small wavelength interval of the distribution by the energy of the average photon within it, which is given by Planck’s relation:

(1)   \begin{equation*} E_{photon}(\lambda) = \frac{hc}{\lambda} \text{   joules} \end{equation*}

with h Planck’s constant, c the speed of light and \lambda the average wavelength within the interval used in the distribution[2].
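As a concrete sketch of this conversion (in Python, using a made-up three-sample spectrum rather than real D50 data), dividing each relative energy value by the photon energy of equation (1) looks like this:

```python
# Minimal sketch: convert a relative Spectral Power Distribution (energy
# per wavelength interval) into a relative photon distribution by dividing
# each value by the photon energy hc/lambda from equation (1).
H = 6.62607015e-34  # Planck's constant, J*s
C = 2.99792458e8    # speed of light, m/s

def spd_to_photons(wavelengths_nm, relative_energy):
    """Divide each relative energy value by the photon energy hc/lambda,
    then normalize so the peak of the resulting distribution equals 1."""
    photons = []
    for lam_nm, energy in zip(wavelengths_nm, relative_energy):
        lam_m = lam_nm * 1e-9          # nm -> m
        e_photon = H * C / lam_m       # Planck's relation, joules
        photons.append(energy / e_photon)
    peak = max(photons)
    return [p / peak for p in photons]

# Equal relative energy at three wavelengths: longer wavelengths hold
# more (lower-energy) photons per unit of energy, as in Figure 2b.
print(spd_to_photons([450, 550, 650], [1.0, 1.0, 1.0]))
```

As note [2] points out, since everything is normalized the constants cancel and this amounts to multiplying each energy value by its wavelength.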

Figure 2. Left 2a: Relative Spectral Energy or Power Distribution of ‘Daylight’ illuminant D50. Right 2b: the SPD at left converted to a Relative Spectral Photon Distribution, in photons per nm.

Upon diffuse reflection by the tree, the relative Spectral Photon Distribution of the illuminant (above right) is effectively multiplied wavelength by wavelength by the spectral reflectance of the foliage (below left).

Figure 3. Left 3a: Spectral Reflectance of foliage according to patch (1,4) in the standard ColorChecker target. Right 3b: the Reflectance at left multiplied by the relative Spectral Photon Distribution in Figure 2b.

The resulting relative spectral distribution of photons (above right) arrives at the eye, which focuses it onto the fovea where the \rho, \gamma and \beta cones reside. The cones respond to the incoming photons according to their respective spectral sensitivities[3], which determine the likelihood that a photon of a given wavelength will be absorbed[4] (below left, the red, green and blue curves referring to \rho, \gamma and \beta cones respectively).

Figure 4. Left 4a: CIE 2006 ‘physiologically-relevant’ LMS functions (shown in the red, green and blue curves respectively). Right 4b: the LMS functions at left multiplied wavelength by wavelength by the reflected Spectral Photon Distribution of Figure 3b.

Therefore Figure 4b above right represents the relative number of photons per small wavelength interval absorbed by each typical \rho, \gamma and \beta human cone for the given ‘daylight’ D50 illuminant SPD reflected diffusely by such foliage.

Cornsweet[4] suggests that if a photon is absorbed we can assume that it produces exactly the same effect on a cone type independently of wavelength.  Therefore the integrals of the \rho, \gamma, \beta curves in Figure 4b above right represent the whole input signal to the human visual system.  For this example of green foliage illuminated by daylight the integrals evaluate to the following relative values:

\rho: 238, \gamma: 197, \beta: 38

It is this cone-collected signal which, after some processing, we recognize as Green! The units are not critical at this stage because the system as described is assumed to be linear under photopic conditions and has been normalized: within some adapted limits, if the illuminant’s power or Exposure is doubled or halved the values double or halve, and we still recognize the color as green. I like to think of them as a relative number of photons (or quanta, as Cornsweet suggests). For a given light intensity range and adapted state, what really matters as far as color is concerned is the relative proportion of photons absorbed by the \rho, \gamma and \beta cones.

Figure 5: The entire process of seeing in adapted photopic conditions can be distilled to a number of photons absorbed by each of the three types of cones.
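The whole chain of Figures 2 through 5 can be sketched in a few lines of Python. The three-sample spectra below are invented placeholders (not real CIE or ColorChecker data); the point is the arithmetic: a wavelength-by-wavelength product followed by an integral, here just a sum over equally spaced samples.

```python
def cone_signal(illuminant_photons, reflectance, sensitivity):
    """Wavelength-by-wavelength product of the three spectra, then the
    integral; a plain sum suffices for equally spaced relative samples."""
    return sum(i * r * s
               for i, r, s in zip(illuminant_photons, reflectance, sensitivity))

# Hypothetical samples at 450, 550 and 650 nm (placeholders, not CIE data)
illum   = [0.90, 1.00, 0.95]   # relative photons per interval (cf. Fig. 2b)
foliage = [0.05, 0.20, 0.08]   # greenish reflectance (cf. Fig. 3a)
rho     = [0.05, 0.80, 0.95]   # stand-in L-cone sensitivity
gamma   = [0.10, 0.95, 0.30]   # stand-in M-cone sensitivity
beta    = [0.90, 0.05, 0.00]   # stand-in S-cone sensitivity

signals = [cone_signal(illum, foliage, s) for s in (rho, gamma, beta)]
peak = max(signals)
# Only the relative proportions of the three sums matter for color
print([round(s / peak, 3) for s in signals])
```

With real 1 nm CIE data the sums would simply run over many more samples; the structure of the computation is unchanged.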

Sound familiar? That’s exactly how things work in a digital camera, except that instead of cones there are pixels, and instead of the \rho\gamma\beta Spectral Sensitivity Functions there are the Spectral Sensitivity Functions of the filters in the Color Filter Array (CFA). Of course the proportion, layout and sensitivity of the three cone types differ from those of, for instance, an ‘RGGB’ Bayer sensor, but the underlying photon-counting principle is the same.
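The camera side of the analogy can be sketched the same way, swapping the cone curves for CFA filter responses. The curves below are invented placeholders, not any real sensor’s data; the arithmetic is identical.

```python
def channel_signal(illuminant_photons, reflectance, cfa_sensitivity):
    """Same photon counting as the cone case: wavelength-by-wavelength
    product of the spectra, then a sum over the samples."""
    return sum(i * r * s
               for i, r, s in zip(illuminant_photons, reflectance, cfa_sensitivity))

illum   = [0.90, 1.00, 0.95]   # relative photons at 450, 550, 650 nm
foliage = [0.05, 0.20, 0.08]   # greenish reflectance
cfa = {                        # hypothetical Bayer filter responses
    "R": [0.02, 0.15, 0.85],
    "G": [0.10, 0.90, 0.20],
    "B": [0.80, 0.10, 0.02],
}
raw = {ch: channel_signal(illum, foliage, s) for ch, s in cfa.items()}
print(raw)  # the green channel collects the most photons from foliage
```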

Color Blindness

The fact that an absorbed photon produces the same effect on a cone type regardless of its wavelength is sometimes referred to as ‘color blindness’ and is extremely important. It is the basis upon which color science is built, because if we can find a way, any way, to stimulate someone’s \rho\gamma\beta cones in the relative proportions 238, 197 and 38 within an adapted photopic-conditions range, that person will perceive a ‘foliage’ green color[5].

Next, why we need color matrices.

 

PS. This little TED short drives the point home visually.

 

Notes and References

1. Measuring Color. Third Edition. R.W.G. Hunt. Fountain Press, 1998. Visual Signal Transmission.
2. In practice one can forget about the constants and simply multiply the Spectral Power Distribution by the average wavelength within an interval since all figures need to be normalized.
3. CIE 2006 ‘physiologically-relevant’ LMS functions, UCL Institute of Ophthalmology, Linear Energy 1nm 2 degree functions.
4. Visual Perception. Tom N. Cornsweet. Academic Press Inc., 1970.
5. Lots of provisos and simplifications for clarity as always.  I am not a color scientist, so if you spot any mistakes please let me know.
