Tag Archives: color correction

Linear Color: Applying the Forward Matrix

Now that we know how to create a 3×3 linear matrix to convert white-balanced and demosaiced raw data into the XYZ_{D50} connection space – and where to obtain the 3×3 linear matrix to then convert it to a standard output color space like sRGB – we can take a closer look at the matrices and apply them to a real-world capture chosen for its wide range of chromaticities.

Figure 1. Image with color converted using the forward linear matrix discussed in the article.
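
As a rough Octave/Matlab sketch of what that application looks like, the snippet below runs white-balanced, demosaiced data (assumed here to sit in an HxWx3 array called im) through a Forward Matrix and then through an XYZ_{D50}-to-linear-sRGB matrix. The Forward Matrix values are placeholders rather than those of the camera used for Figure 1; the second matrix uses the commonly published Bradford-adapted values.

% Sketch: apply a Forward Matrix to white-balanced, demosaiced raw data
% and project the result to linear sRGB.  'im' is an HxWx3 double array;
% the Forward Matrix below is a placeholder, not the one derived in the
% article.
M_fwd = [0.6988  0.1560  0.1429;
         0.2528  0.7960 -0.0488;
         0.0392 -0.2020  0.9879];        % hypothetical rgb -> XYZ_D50

M_xyz2srgb = [ 3.1339 -1.6169 -0.4906;
              -0.9788  1.9161  0.0335;
               0.0719 -0.2290  1.4052];  % XYZ_D50 -> linear sRGB (Bradford adapted)

rgb  = reshape(im, [], 3)';              % rgb triplets as column vectors, 3xN
XYZ  = M_fwd * rgb;                      % into the XYZ_D50 connection space
sRGB = M_xyz2srgb * XYZ;                 % into linear sRGB
out  = reshape(sRGB', size(im));         % back to HxWx3; gamma still to be applied

Clipping to [0,1] and applying the sRGB tone curve would complete the conversion for display.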

Continue reading Linear Color: Applying the Forward Matrix

Color: Determining a Forward Matrix for Your Camera

We understand from the previous article that rendering color with Adobe DNG raw conversion essentially means mapping raw data in the form of rgb triplets into a standard color space via a Profile Connection Space in a two-step process

    \[ Raw Data \rightarrow  XYZ_{D50} \rightarrow RGB_{standard} \]

The first step white balances and demosaics the raw data, which at that stage we will refer to as rgb, and then converts it to the XYZ_{D50} Profile Connection Space through linear projection by an unknown ‘Forward Matrix’ (as DNG calls it) of the form

(1)   \begin{equation*} \left[ \begin{array}{c} X_{D50} \\ Y_{D50} \\ Z_{D50} \end{array} \right] = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \left[ \begin{array}{c} r \\ g \\ b \end{array} \right] \end{equation*}

with the data as column vectors in a 3×N array.  Determining the nine a coefficients of this matrix M is the main subject of this article[1]; a sketch of one possible estimation approach follows below.

Continue reading Color: Determining a Forward Matrix for Your Camera
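
One simple way to estimate those nine coefficients – not necessarily the procedure the article arrives at, which may add further constraints – is an ordinary least-squares fit between measured patches and their reference values. In the Octave/Matlab sketch below, rgb and xyz_ref are assumed to be 3×N arrays holding the white-balanced raw triplets of N test-chart patches and the corresponding reference XYZ_{D50} values.

% Sketch: estimate the nine coefficients of M by linear least squares.
% 'rgb' and 'xyz_ref' are assumed to be 3xN arrays: white-balanced raw
% triplets of N chart patches and their reference XYZ_D50 values.
M = xyz_ref / rgb;            % solves M * rgb = xyz_ref in the least-squares sense

xyz_fit  = M * rgb;           % sanity check: project the measured patches back
residual = xyz_fit - xyz_ref; % per-patch error in XYZ_D50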

Color: From Object to Eye

How do we translate captured image information into a stimulus that will produce the appropriate perception of color?  It’s actually not that complicated[1].

Recall from the introductory article that a photon absorbed by a cone type (\rho, \gamma or \beta) in the fovea produces the same stimulus to the brain regardless of its wavelength[2].  Take the example of an observer’s eye that focuses onto the retina the image of a uniform object with a spectral photon distribution of 1000 photons/nm in the 400 to 720nm wavelength range and no photons outside of it.

Because the system is linear, cones in the foveola will weight the incoming photons by their relative sensitivity (probability) functions and add the results up to produce a stimulus proportional to the area under the curves.  For instance a \gamma cone may see about 321,000 photons arrive and produce a relative stimulus of about 94,700, the weighted area under the curve:

Figure 1. Light made up of 321k photons of broad spectrum and constant Spectral Photon Distribution between 400 and 720nm is weighted by cone sensitivity to produce a relative stimulus equivalent to 94,700 photons, proportional to the area under the curve.
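
In code the weighting amounts to a single dot product. The Octave/Matlab sketch below assumes a vector gamma_sens holding the \gamma cone’s relative sensitivity sampled every 1 nm from 400 to 720 nm (e.g. taken from published cone fundamentals); the stimulus it produces is only as good as that assumed curve.

% Sketch: relative stimulus of a gamma cone as the sensitivity-weighted
% sum of the incoming spectral photon distribution.  'gamma_sens' is
% assumed to hold the cone's relative sensitivity sampled every 1 nm
% from 400 to 720 nm, same orientation as 'spd'.
lambda = 400:720;                       % wavelength samples, nm
spd    = 1000 * ones(size(lambda));     % flat spectrum, 1000 photons/nm

total_photons = sum(spd);               % 321 samples x 1000 = 321,000 photons
stimulus      = sum(spd .* gamma_sens); % weighted area under the curve (~94,700)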

Continue reading Color: From Object to Eye

How Is a Raw Image Rendered?

What are the basic low-level steps involved in raw file conversion?  In this article I will discuss what happens under the hood of digital camera raw converters in order to turn raw file data into a viewable image, a process sometimes referred to as ‘rendering’.  We will use the following raw capture by a Nikon D610 to show how image information is transformed at every step along the way:

Figure 1. Nikon D610 with AF-S 24-120mm f/4 lens at 24mm f/8 ISO100, minimally rendered from raw by Octave/Matlab following the steps outlined in the article.
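
As a rough idea of what such a minimal rendering can look like in Octave/Matlab, here is a sketch of a typical pipeline. The black level, white level, white balance multipliers, Bayer layout and the combined camera-rgb to linear sRGB matrix M_cam2srgb are all illustrative placeholders, not the D610’s values or the exact steps the article develops.

% Sketch of a minimal raw rendering pipeline.  'raw' is the mosaiced CFA
% data as a double array; black level, white level, white balance
% multipliers, Bayer layout ('rggb') and the camera-rgb -> linear sRGB
% matrix M_cam2srgb are illustrative placeholders.
raw = max(raw - 600, 0) / (16383 - 600);              % 1. subtract black level, scale to 0-1
raw(1:2:end, 1:2:end) = raw(1:2:end, 1:2:end) * 2.1;  % 2. white balance: scale R sites...
raw(2:2:end, 2:2:end) = raw(2:2:end, 2:2:end) * 1.4;  %    ...and B sites
raw = min(raw, 1);                                    %    clip to the white point
rgb = double(demosaic(uint16(raw * 65535), 'rggb')) / 65535;  % 3. demosaic (Image Processing toolbox)
cam = reshape(rgb, [], 3)';                           % 4. color: camera rgb -> linear sRGB
srgb_lin = reshape((M_cam2srgb * cam)', size(rgb));
out = max(min(srgb_lin, 1), 0) .^ (1/2.2);            % 5. approximate display gamma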

Rendering = Raw Conversion + Editing

Continue reading How Is a Raw Image Rendered?