Cone Fundamentals & the LMS Color Space

In the last article we showed how a digital camera’s captured raw data is related to Color Science.  In my next trick I will show that the CIE 2012 2-deg XYZ Color Matching Functions \bar{x}, \bar{y}, \bar{z} displayed in Figure 1 are an exact linear transform of the Stockman & Sharpe (2000) 2-deg Cone Fundamentals \bar{\rho}, \bar{\gamma}, \bar{\beta} displayed in Figure 2:

(1)   \begin{equation*} \left[ \begin{array}{c} \bar{x} \\ \bar{y} \\ \bar{z} \end{array} \right] = M_{lx} * \left[ \begin{array}{c} \bar{\rho} \\ \bar{\gamma} \\ \bar{\beta} \end{array} \right] \end{equation*}

with CMFs and CFs in 3×N format, M_{lx} a 3×3 matrix and * matrix multiplication.  Et voilà:[1]
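The transform in equation (1) is one matrix multiplication covering every wavelength sample at once.  A minimal sketch in plain Python follows; the M_{lx} values are those commonly quoted for the CIE 2012 functions and should be verified against cvrl.org, while the cone-fundamental samples are made-up placeholders, not real Stockman & Sharpe data:

```python
# Equation (1): CMFs = M_lx * CFs, with curves stored as 3xN matrices
# (one row per curve, one column per wavelength sample).

def matmul(A, B):
    """Multiply two matrices stored as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# LMS -> XYZ matrix as commonly quoted for the CIE 2012 2-deg functions;
# check the current figures on cvrl.org before relying on them.
M_lx = [[1.94735469, -1.41445123, 0.36476327],
        [0.68990272,  0.34832189, 0.0],
        [0.0,         0.0,        1.93485343]]

# Placeholder cone-fundamental samples (NOT real measured data)
cfs = [[0.0, 0.2, 1.0, 0.3],   # rho-bar
       [0.0, 0.5, 0.8, 0.1],   # gamma-bar
       [0.9, 0.4, 0.0, 0.0]]   # beta-bar

cmfs = matmul(M_lx, cfs)       # x-bar, y-bar, z-bar as a 3xN matrix
```

Each column of the result is one wavelength sample of \bar{x}, \bar{y}, \bar{z}, obtained by weighting the three cone responses at that wavelength.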

Figure 1.  Solid lines: CIE (2012) 2° XYZ “physiologically-relevant” Colour Matching Functions and photopic Luminous Efficiency Function (V) from cvrl.org, the Colour & Vision Research Laboratory at UCL.  Dotted lines: The Cone Fundamentals in Figure 2 after linear transformation by 3×3 matrix Mlx below.  Source: cvrl.org.

I can also perform the opposite trick, from CMFs to CFs, by using the inverse of matrix M_{lx}, M_{xl} = M_{lx}^{-1}:

(2)   \begin{equation*} \left[ \begin{array}{c} \bar{\rho} \\ \bar{\gamma} \\ \bar{\beta} \end{array} \right] = M_{xl} * \left[ \begin{array}{c} \bar{x} \\ \bar{y} \\ \bar{z} \end{array} \right] \end{equation*}
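Since M_{lx} is just a 3×3 matrix, its inverse can be computed in closed form via the adjugate.  A sketch, again using the commonly quoted M_{lx} values (verify against cvrl.org):

```python
# Equation (2): M_xl is the matrix inverse of M_lx.

def inv3(M):
    """Closed-form inverse of a 3x3 matrix via the adjugate."""
    (a, b, c), (d, e, f), (g, h, i) = M
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    return [[(e*i - f*h)/det, (c*h - b*i)/det, (b*f - c*e)/det],
            [(f*g - d*i)/det, (a*i - c*g)/det, (c*d - a*f)/det],
            [(d*h - e*g)/det, (b*g - a*h)/det, (a*e - b*d)/det]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

M_lx = [[1.94735469, -1.41445123, 0.36476327],
        [0.68990272,  0.34832189, 0.0],
        [0.0,         0.0,        1.93485343]]

M_xl = inv3(M_lx)
ident = matmul(M_xl, M_lx)   # should round-trip to the 3x3 identity
```

The round trip M_{xl} * M_{lx} recovering the identity is what guarantees that transforming CFs to CMFs and back loses nothing.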

Re-voilà:

Figure 2.  Solid lines: Stockman & Sharpe (2000) 2-deg fundamentals based on the Stiles & Burch 10-deg CMFs (adjusted to 2-deg), from cvrl.org.  Dotted lines: The CMFs in Figure 1 after linear transformation by matrix Mxl.  Source: cvrl.org.

All curves and matrices are calculated thanks to the magic of linear algebra and the assumption that linearity applies to light.  The underlying data come from the Colour and Vision Research Laboratory at UCL.[2]

Cone Fundamentals are Color Matching Functions

Most humans have three types of daylight receptors in the retina called cones, each type designated Long, Medium or Short (or alternatively \rho, \gamma, \beta) after the range of wavelengths it absorbs most efficiently.

\bar{\rho}, \bar{\gamma}, \bar{\beta} are therefore the Cone Fundamental curves that produce tristimulus values in LMS space, just as \bar{x}, \bar{y}, \bar{z} do the same in XYZ – the two are just a linear transform away from one another.  In other words Cone Fundamentals are Color Matching Functions as seen from a color space that is better aligned with the Human Visual System.

By using these matrices to move color information around we are a hop, skip and a jump away from all other standard color spaces, like ProPhoto RGB or sRGB, easily reached by using well known projection matrices collected on beautiful sites like Bruce Lindbloom’s.[3]

Why XYZ?

So you may wonder: why did the CIE even come up with the imaginary XYZ color space and – when trying to extract better color from our digital cameras – why do we try to make their Spectral Sensitivity Functions look like \bar{x}, \bar{y}, \bar{z}, with their two-peak reds, instead of the better and more similar looking, single-peak Cone Fundamentals?  Here for instance are the two sets of SSFs from a couple of articles ago, to compare to the curves in Figure 2:[4]

Figure 3.   Dashed lines: Spectral Sensitivity Functions for a Nikon D5100 and unknown lens measured by Darrodi et al. at NPL.  Solid lines: SSFs of camera and lens with the spectral components described a few articles ago.

A good question.  The answer though is that ideally it should make no difference whether we try to mimic one set of curves (CIE 2-deg CMFs) or the other (CFs) – because they are duals of one another.[5]

And just like 5 times 3 times 7 is equal to 5 times 7 times 3, assuming we stick with the rules of linear algebra it should make no difference to the end result whether we calculate a Compromise Color Matrix as described in the last article by minimizing the difference between our camera’s SSFs and CMFs, which will get us into XYZ; or calculate an equivalent one by minimizing the difference to CFs, which will get us into LMS space instead; or use sRGB color matching functions as a reference and land there.   We can then project to XYZ (or anywhere else for that matter) perfectly accurately by applying an appropriate matrix – with no penalty, assuming floating point math is used, which these days it mostly is.
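This equivalence can be checked numerically.  In the unweighted least-squares case the compromise matrix has the closed form A = T S^T (S S^T)^{-1} for target curves T and camera SSFs S, so fitting against CFs and then projecting to XYZ lands on exactly the same matrix as fitting against CMFs directly.  A sketch with entirely made-up curves and a made-up XYZ-to-LMS matrix (none of the numbers below are real data):

```python
# Fitting a compromise matrix against CMFs vs against CFs: the two fits
# differ only by the fixed 3x3 transform between the two spaces.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(M):
    return [list(col) for col in zip(*M)]

def inv3(M):
    (a, b, c), (d, e, f), (g, h, i) = M
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    return [[(e*i - f*h)/det, (c*h - b*i)/det, (b*f - c*e)/det],
            [(f*g - d*i)/det, (a*i - c*g)/det, (c*d - a*f)/det],
            [(d*h - e*g)/det, (b*g - a*h)/det, (a*e - b*d)/det]]

def fit(T, S):
    """Least-squares 3x3 matrix A minimizing ||A*S - T||_F via the
    normal equations: A = T S^T (S S^T)^-1."""
    St = transpose(S)
    return matmul(matmul(T, St), inv3(matmul(S, St)))

# Made-up camera SSFs and reference CMFs, 3x5 samples each
S = [[0.1, 0.5, 0.9, 0.4, 0.1],
     [0.2, 0.9, 0.4, 0.1, 0.0],
     [0.9, 0.4, 0.1, 0.0, 0.0]]
X = [[0.3, 0.6, 1.0, 0.5, 0.2],
     [0.1, 0.7, 0.9, 0.3, 0.1],
     [1.0, 0.5, 0.1, 0.0, 0.0]]

# Made-up matrix standing in for M_xl (XYZ -> "LMS")
M = [[0.4, 0.3, 0.1], [0.2, 0.9, 0.05], [0.0, 0.1, 1.1]]
L = matmul(M, X)          # the same reference curves expressed in "LMS"

A_x = fit(X, S)           # compromise matrix targeting XYZ
A_l = fit(L, S)           # compromise matrix targeting LMS
A_check = matmul(M, A_x)  # projecting the XYZ fit should equal A_l
```

Since A_l = (M X) S^T (S S^T)^{-1} = M A_x by associativity, landing in LMS first and projecting afterwards costs nothing, just as the article claims.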

Who Needs Cone Responses?

In fact as long as we know the curves in a given color space, like XYZ, we effectively know them everywhere else.  And since it is physically and ethically difficult to measure the output of cones in a human retina, we might as well measure their sensitivity in another color space related to it by a linear transform.  Enter the color matching experiments which form the basis of Color Science, mostly performed in a physical color space that is ideally just a 3×3 matrix multiplication away from all others, including XYZ.

It makes intuitive sense that color matching experiments do not measure the response at the cones proper – but the response further down the Human Visual System’s image processing chain, at least after the signal has been detected by the three cone types and encoded by neurons into one luminance (sum) and two chrominance (difference) channels for transmission via the optic nerve to the brain:

Figure 4.  Hunt’s ‘plausible’ model of multi-stage visual transmission.  The first stage is represented by the \rho, \gamma and \beta cones (plus rods, which we ignore in daylight conditions) and behaves mostly radiometrically.  The second stage is performed by neurons that encode the signal proportional to photon counts into opposing color channels, which are then transmitted to the brain by nerve fibers.  From Measuring Colour, Fourth Edition, Wiley, 2011, by R. W. G. Hunt and M. R. Pointer.

And even then, this indirect measurement may introduce some non-linearities and play into the system – which, however, often tend to be swept under the carpet in common practice.

Obtaining sensitivities by color matching helps explain why there is still debate on the exact physical response curves of the HVS – derived for instance from physiological experiments on cone photopigments[6] – while there is much better agreement on the accuracy of the Standard Observer CMF curves.  And this suits Color Science fine.

In other words we may not have been able to directly measure the spectral response of the HVS accurately – but measuring it indirectly via color matching works reasonably well in practice because the two sets of curves are theoretically a linear transform of each other.  What we haven’t been able to determine satisfactorily yet is the relative 3×3 projection matrix, of which M_{xl} above is just one of the latest and greatest guesses.  Which means that ‘Cone Fundamentals’ and ‘LMS’ may be a bit of a misnomer for the curves and space shown in Figure 2 – a more generic ‘HVS Response’ and ‘HVS space’ might be more appropriate.

LMS and Chromatic Adaptation

In fact there may be an advantage in going directly from demosaiced raw to LMS, because the eye tends to adapt to the principal illumination it is bathed in – so virtually all scenes require some form of chromatic compensation, which seems akin to a cone-specific gain, ideally applied closest to where it happens physiologically: LMS space, in the current simple model.

This is a typical sequence of how linear color is handled by a commercial raw converter, with virtually every step requiring a linear transformation by 3×3 matrix multiplication:

  1. white balance the demosaiced raw data off a gray card (rgb)
  2. select compromise matrix for given scene and illuminant
  3. use it to project to connection color space (usually XYZ)
  4. project to LMS via suitable standard matrix (often Bradford)
  5. compensate for viewing chromatic adaptation by scaling primaries in LMS (or more complex method)
  6. project back to XYZ using the inverse of the matrix in step 4
  7. project from XYZ to the working/output RGB color space (sRGB or similar) using the relative standard matrix.

Because these steps are accomplished by linear multiplication, they can all be collapsed into a single 3×3 matrix, to take captured demosaiced raw data (step 1) to sRGB (step 7) in one go for instance.  Or the various steps can be shifted and/or merged around at will based on convenience and the rules of linear algebra.  We could even forget that LMS space existed if adaptation were not required – but it virtually always is.
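The collapse of steps 1–7 into a single matrix can be sketched as follows.  Every matrix here is an illustrative placeholder except XYZ→sRGB, which is the commonly published linear matrix (and even that is worth verifying, e.g. on Bruce Lindbloom’s site):

```python
# Because each pipeline step is a linear map, the composition of steps
# 2-7 is just the product of the individual 3x3 matrices.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def apply(M, v):
    """Apply 3x3 matrix M to a 3-element color triple v."""
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

CAM_TO_XYZ = [[0.6, 0.3, 0.1],      # steps 2-3: made-up compromise matrix
              [0.2, 0.7, 0.1],
              [0.0, 0.1, 0.9]]
ADAPT = [[1.02, 0.0, 0.0],          # steps 4-6 folded into one made-up
         [0.0,  1.0, 0.0],          # XYZ-to-XYZ adaptation matrix
         [0.0,  0.0, 0.85]]
XYZ_TO_SRGB = [[ 3.2406, -1.5372, -0.4986],   # step 7: commonly quoted
               [-0.9689,  1.8758,  0.0415],   # linear XYZ -> sRGB matrix
               [ 0.0557, -0.2040,  1.0570]]

# One combined matrix taking white-balanced camera rgb straight to sRGB
COMBINED = matmul(XYZ_TO_SRGB, matmul(ADAPT, CAM_TO_XYZ))

rgb = [0.40, 0.50, 0.20]            # step 1: a white-balanced raw triple
step_by_step = apply(XYZ_TO_SRGB, apply(ADAPT, apply(CAM_TO_XYZ, rgb)))
one_go = apply(COMBINED, rgb)       # same result in a single multiply
```

The step-by-step and single-matrix routes agree to floating point precision, which is why converters are free to merge or reorder the linear stages for convenience.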

Why Bradford?

However, back up a little: why use the Bradford matrix to project from XYZ to LMS (step 4), instead of the one used to produce Figure 2 above, M_{xl}?

Well, we did say that the linear transform to LMS is still a bit loose, so depending on the application it may be convenient to stretch it more in one direction than the other.  For the particular case of chromatic adaptation to the viewing dominant illuminant – since the procedure usually includes a round trip (XYZ->LMS->XYZ) to perform what typically is just a simple scaling operation – there is even more latitude in the choice.  The Bradford transform presumes that chromatic adaptation to the dominant illuminant in the Human Visual System does not happen at the detection stage as-is, at the cones, but after some narrowing of the relative responses has taken place.  It is for this reason sometimes referred to as a spectrally ‘sharpened’ transform.
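The round trip of steps 4–6 with the Bradford matrix looks like this.  The matrix values and white points below are the commonly published ones (linear Bradford, D65 and D50 2° whites), but double-check them before use:

```python
# Chromatic adaptation via Bradford: project XYZ to sharpened "LMS",
# scale each channel by the ratio of destination to source white, and
# project back: M_cat = B^-1 * diag(Ld/Ls, Md/Ms, Sd/Ss) * B.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def inv3(M):
    (a, b, c), (d, e, f), (g, h, i) = M
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    return [[(e*i - f*h)/det, (c*h - b*i)/det, (b*f - c*e)/det],
            [(f*g - d*i)/det, (a*i - c*g)/det, (c*d - a*f)/det],
            [(d*h - e*g)/det, (b*g - a*h)/det, (a*e - b*d)/det]]

BRADFORD = [[ 0.8951,  0.2664, -0.1614],   # linear Bradford XYZ -> LMS
            [-0.7502,  1.7135,  0.0367],
            [ 0.0389, -0.0685,  1.0296]]

D65 = [0.95047, 1.00000, 1.08883]          # source white point (XYZ)
D50 = [0.96422, 1.00000, 0.82521]          # destination white point (XYZ)

lms_s = apply(BRADFORD, D65)               # whites in sharpened LMS
lms_d = apply(BRADFORD, D50)
D = [[lms_d[0]/lms_s[0], 0, 0],            # per-channel von Kries gains
     [0, lms_d[1]/lms_s[1], 0],
     [0, 0, lms_d[2]/lms_s[2]]]

M_cat = matmul(inv3(BRADFORD), matmul(D, BRADFORD))
adapted = apply(M_cat, D65)                # should land on the D50 white
```

By construction the source white maps exactly onto the destination white; all other colors ride along on the three per-channel gains.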

It works fairly well and in its linear form it has all but become the standard for Chromatic Adaptation in raw conversion over the past twenty years – though nothing is perfect.[7]

In Summary

  • There is no reason for XYZ to be the standard connection space other than that in 1931 it was cool to have a color space with no negative values in the visible range
  • Any space that is a linear transform of any colorimetric Color Space is just as good as far as Color Science is concerned
  • So we can obtain perfectly valid Compromise Color Matrices from the Spectral Sensitivity Functions of our digital cameras and lenses by minimizing errors compared to Color Matching Functions in any space – including LMS where they are referred to as Cone Fundamentals
  • Physiological cone spectral responses are only loosely related to Stockman & Sharpe (2000) Cone Fundamentals since the latter are just a linear transform of CMFs derived from color matching experiments in some other space
  • Since the coordinates of LMS space have not been totally pinned down, for now we can project into and out of it when and how we deem convenient – as long as we are consistent with the relative matrices
  • So much reference data has been generated in the XYZ Profile Connection Space over the last 100 years or so that, unless there is a really good reason for doing otherwise, we might as well stick to it for now.

 

Notes and References


1. The Matlab/Octave code used to produce Figure 1 can be downloaded by clicking here.
2. The page of the Colour and Vision Laboratory of the Institute of Ophthalmology at UCL is here.
3. Bruce Lindbloom’s site can be reached by clicking here.
4. The Spectral Responses shown in Figure 3 are about 10 years old and reflect the CFA dyes used at the time. More recent dyes, like Fuji’s New Generation Color Mosaic, produce spectra that attempt to eliminate the red channel’s leakage in lower wavelengths, see for instance “Technology of color filter materials for image sensor”, Hiroshi Taguchi, Masashi Enokido, FUJIFILM Electronic Materials Co., Ltd (2011)
5. “Exact Color”, Berthold K. P. Horn, MIT, 1983
6. “Cone Fundamentals is a Misnomer”, James T. Fulton, 2009.
7. “Spectral Sharpening and the Bradford Transform”, Graham D. Finlayson and Sabine Süsstrunk, CIS 2000.
