Category Archives: Color

Phase One IQ3 100MP Trichromatic vs Standard Back Linear Color, Part III

Over the last two posts we’ve been exploring some of the differences introduced by tweaks to the Color Filter Array of the Phase One IQ3 100MP Trichromatic Digital Back versus its original incarnation, the Standard Back.  Refer to those for the background.  In this article we will delve into some of these differences quantitatively[1].

Let’s start with the compromise color matrices we derived from David Chew’s captures of a ColorChecker 24 in the shade of a sunny November morning in Ohio[2].  These are the matrices necessary to convert white balanced raw data to the perceptual CIE XYZ color space, where it is said there should be a one-to-one correspondence with colors as perceived by humans, and therefore where most measurements are performed.  They are optimized for each back under the conditions of the captures but they are not perfect, hence the word ‘compromise’ in their name:

Figure 1. Optimized Linear Compromise Color Matrices for the Phase One IQ3 100MP Standard and Trichromatic Backs under approximately D65 light.
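Once reference and matrix-estimated values are both expressed in XYZ, the differences can be summarized as CIELAB color errors.  Here is a minimal Octave/Matlab sketch of the idea, assuming Nx3 arrays xyz_ref and xyz_meas of non-negative, D50-referenced values (the variable names are mine, and \Delta E*76 is just the simplest possible metric):

wp = [0.9642 1.0000 0.8249];                  % ICC PCS D50 white point
f  = @(u) (u >  (6/29)^3) .* u.^(1/3) + ...   % the standard CIELAB
     (u <= (6/29)^3) .* (u/(3*(6/29)^2) + 4/29);   % piecewise function
lab = @(xyz) [116*f(xyz(:,2)/wp(2)) - 16, ...
              500*(f(xyz(:,1)/wp(1)) - f(xyz(:,2)/wp(2))), ...
              200*(f(xyz(:,2)/wp(2)) - f(xyz(:,3)/wp(3)))];
dE = sqrt(sum((lab(xyz_meas) - lab(xyz_ref)).^2, 2));   % DeltaE 1976 per patch
mean(dE)                                      % average error over the target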

Continue reading Phase One IQ3 100MP Trichromatic vs Standard Back Linear Color, Part III

Phase One IQ3 100MP Trichromatic vs Standard Back Linear Color, Part II

We have seen in the last post that Phase One apparently performed a couple of main tweaks to the Color Filter Array of its Medium Format IQ3 100MP back when it introduced the Trichromatic:  it made the shapes of the color filter sensitivities more symmetric by eliminating residual transmittance away from the peaks; and it boosted the peak sensitivity of the red (and possibly blue) filter.  It did this with the objective of obtaining more accurate, less noisy color out of the hardware, requiring less processing, with weaker purple fringing to boot.

Both changes carry the compromises discussed in the last article, so the purpose of this post and the one that follows is to attempt to measure – within the limits of my tests, procedures and understanding[1] – the effect of the CFA changes from similar raw captures by the IQ3 100MP Standard Back and Trichromatic, courtesy of David Chew.  We will concentrate on color accuracy, leaving purple fringing for another time.
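As a concrete reminder of the first step in the procedure, white balancing just scales the raw channels so that a neutral patch reads the same in all three.  A small Octave/Matlab sketch (the patch readings are hypothetical numbers of my choosing, and rgb_raw stands for an HxWx3 demosaiced raw image):

neutral = [0.212 0.447 0.325];                % hypothetical mean raw RGB off a gray patch
mult    = neutral(2) ./ neutral;              % [G/R 1 G/B] channel multipliers
rgb_wb  = rgb_raw .* reshape(mult, 1, 1, 3);  % Octave or Matlab R2016b+ broadcasting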

Figure 1. Phase One IQ3 100MP image rendered linearly via a dedicated color matrix from raw data without any additional processing whatsoever: no color corrections, no tone curve, no sharpening, no nothing. Brightness adjusted to just avoid clipping.  Capture by David Chew.

Continue reading Phase One IQ3 100MP Trichromatic vs Standard Back Linear Color, Part II

Phase One IQ3 100MP Trichromatic vs Standard Back Linear Color, Part I

It is always interesting when innovative companies push the envelope of the state of the art of a single component in their systems because a lot can be learned from before and after comparisons.  I was therefore excited when Phase One introduced a Trichromatic version of their Medium Format IQ3 100MP Digital Back last September, because it allows us to isolate the effects of tweaks to their Bayer Color Filter Array, assuming all else stays the same.

Figure 1. IQ3 100MP Trichromatic (left) vs the rest (right), from PhaseOne.com.   Units are not specified but one would assume that the vertical axis is relative spectral sensitivity and the horizontal axis represents wavelength.

Thanks to two virtually identical captures by David Chew at getDPI, and Erik Kaffehr’s intelligent questions at DPR, in the following articles I will explore the effect on linear color of the new Trichromatic CFA (TC) vs the old one on the Standard Back (SB).  In the process we will discover that – within the limits of my tests, procedures and understanding[1] – the Standard Back produces apparently more ‘accurate’ color while the Trichromatic produces better-looking matrices, potentially resulting in ‘purer’ signals. Continue reading Phase One IQ3 100MP Trichromatic vs Standard Back Linear Color, Part I

Bayer CFA Effect on Sharpness

In this article we shall find that the effect of a Bayer CFA on the spatial frequencies, and hence the ‘sharpness’, captured by a sensor compared to those from a corresponding monochrome imager can range from no effect at all to halving the potentially unaliased range, depending on the chrominance content of the image projected on the sensing plane and the direction in which the spatial frequencies are being stressed.
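As a back-of-the-envelope preview of where the halving comes from, consider the red (or blue) channel of a Bayer sensor, which is sampled every other pixel in each direction (the pitch below is a hypothetical value):

p = 4.88e-3;                % hypothetical pixel pitch, mm
nyq_mono = 1/(2*p)          % ~102 cycles/mm, full monochrome Nyquist
nyq_rb   = 1/(2*2*p)        % ~51 cycles/mm for red/blue along rows and columns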

A Little Sampling Theory

We know from Goodman[1] and previous articles that the sampled image (I_{s} ) captured in the raw data by a typical current digital camera can be represented mathematically as  the continuous image on the sensing plane (I_{c} ) multiplied by a rectangular lattice of Dirac delta functions positioned at the center of each pixel:

(1)   \begin{equation*} I_{s}(x,y) = I_{c}(x,y) \cdot comb(\frac{x}{p}) \cdot comb(\frac{y}{p}) \end{equation*}

with the comb functions representing the two dimensional grid of delta functions, sampling pitch p apart horizontally and vertically.  To keep things simple the sensing plane is considered here to be the imager’s silicon itself, which sits below microlenses and other filters, so the continuous image I_{c} is assumed to incorporate their effects as well as the pixel aperture’s. Continue reading Bayer CFA Effect on Sharpness
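A quick one-dimensional numerical illustration of Equation (1) in Octave/Matlab (the grid size, frequency and pitch are arbitrary choices of mine): multiplying the ‘continuous’ signal by a comb replicates its spectrum at multiples of the sampling frequency 1/p.

N  = 4096;                              % fine grid standing in for the continuous axis
x  = (0:N-1) / N;                       % one unit of length
f0 = 100;                               % signal frequency, cycles per unit length
Ic = cos(2*pi*f0*x);                    % the 'continuous' image I_c
p  = 8;                                 % sampling pitch: 8 fine-grid steps, so 1/p = 512
Is = Ic .* double(mod(0:N-1, p) == 0);  % Equation (1) in one dimension
S  = abs(fft(Is));                      % spectral copies appear at 100,
plot(0:N/2, S(1:N/2+1));                % 512-100, 512+100, 1024-100, ...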

Linear Color: Applying the Forward Matrix

Now that we know how to create a 3×3 linear matrix to convert white balanced and demosaiced raw data into XYZ_{D50}  connection space – and where to obtain the 3×3 linear matrix to then convert it to a standard output color space like sRGB – we can take a closer look at the matrices and apply them to a real world capture chosen for its wide range of chromaticities.
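To fix ideas, here is a compact Octave/Matlab sketch of the whole chain (the Forward Matrix entries below are placeholders, not the ones derived in the previous article; the second matrix is Bruce Lindbloom’s Bradford-adapted XYZ_{D50} to linear sRGB matrix; rgb_wb is assumed to hold the white balanced, demosaiced raw image):

FM = [ 0.6227  0.2849  0.0566;        % placeholder Forward Matrix: rows sum
       0.2769  0.8481 -0.1250;        % to the D50 white point so that white
       0.0625 -0.1865  0.9492];       % balanced white maps to D50 white
XYZD50_to_sRGB = [ 3.1339 -1.6169 -0.4906;
                  -0.9788  1.9161  0.0335;
                   0.0719 -0.2290  1.4052];
[h, w, ~] = size(rgb_wb);
pix  = reshape(rgb_wb, [], 3)';       % 3xN matrix of pixels
srgb = XYZD50_to_sRGB * FM * pix;     % raw -> XYZ(D50) -> linear sRGB
srgb = reshape(srgb', h, w, 3);
srgb = min(max(srgb, 0), 1) .^ (1/2.2);   % clip and approximate sRGB gamma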

Figure 1. Image with color converted using the forward linear matrix discussed in the article.

Continue reading Linear Color: Applying the Forward Matrix

Color: Determining a Forward Matrix for Your Camera

We understand from the previous article that rendering color during raw conversion essentially means mapping raw data represented by RGB triplets into a standard color space via a Profile Connection Space in a two-step process:

    \[ RGB_{raw} \rightarrow  XYZ_{D50} \rightarrow RGB_{standard} \]

The process I will use first white balances and demosaics the raw data, which at that stage we will refer to as RGB_{rwd}, followed by converting it to XYZ_{D50} Profile Connection Space through linear transformation by an unknown ‘Forward Matrix’ (as DNG calls it) of the form

(1)   \begin{equation*} \left[ \begin{array}{c} X_{D50} \\ Y_{D50} \\ Z_{D50} \end{array} \right] = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \times \left[ \begin{array}{c} R_{rwd} \\ G_{rwd} \\ B_{rwd} \end{array} \right] \end{equation*}
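As a preview of one straightforward way to get there: with the reference XYZ_{D50} values and the white balanced, demosaiced raw readings of a 24-patch target arranged as 3×24 arrays, a least-squares estimate is a single line of Octave/Matlab (the variable names are mine):

A = XYZ_ref / RGB_rwd;       % mrdivide: minimizes ||A*RGB_rwd - XYZ_ref||
XYZ_est = A * RGB_rwd;       % reproduced patch values, to check residuals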

Determining the nine a_{ij} coefficients of this matrix is the main subject of this article[1]. Continue reading Color: Determining a Forward Matrix for Your Camera

Color: From Capture to Eye

How do we translate captured image information into a stimulus that will produce the appropriate perception of color?  It’s actually not that complicated[1].

Recall from the introductory article that a photon absorbed by a cone type (\rho, \gamma or \beta) in the fovea produces the same stimulus to the brain regardless of its wavelength[2].  Take the example of an observer’s eye focusing onto the retina the image of a uniform object with a spectral photon distribution of 1000 photons/nm in the 400 to 720nm wavelength range and no photons outside of it.

Because the system is linear, cones in the foveola will weigh the incoming photons by their relative sensitivity (probability) functions and add the result up to produce a stimulus proportional to the area under the curves.  For instance a \gamma cone will see about 321,000 photons arrive and produce a relative stimulus of about 94,700, the weighted area under the curve:

Figure 1. Light made up of 321k photons with a broad, constant Spectral Photon Distribution between 400 and 720nm is weighted by cone sensitivity to produce a relative stimulus equivalent to 94,700 photons, proportional to the area under the curve.
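The arithmetic is easy to reproduce numerically.  In the Octave/Matlab sketch below a Gaussian stands in for the real \gamma cone sensitivity, so the resulting stimulus will not match the 94,700 figure exactly; for real work substitute tabulated cone fundamentals (e.g. Stockman & Sharpe):

wl   = 400:720;                           % wavelength, nm
spd  = 1000 * ones(size(wl));             % 1000 photons/nm, as in the example
sens = exp(-(wl - 545).^2 / (2*45^2));    % hypothetical stand-in sensitivity
total    = trapz(wl, spd)                 % about 320,000 photons arriving
stimulus = trapz(wl, spd .* sens)         % weighted area under the curve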

Continue reading Color: From Capture to Eye

An Introduction to Color in Digital Cameras

This article will set the stage for a discussion on how pleasing color is produced during raw conversion.  The easiest way to understand how a camera captures and processes ‘color’ is to start with an example of how the human visual system does it.

An Example: Green

Light from the sun strikes leaves on a tree.   The foliage of the tree absorbs some of the light and reflects the rest diffusely  towards the eye of a human observer.  The eye focuses the image of the foliage onto the retina at its back.  Near the center of the retina there is a small circular area called the foveola which is dense with light receptors of well defined spectral sensitivities called cones. Information from the cones is pre-processed by neurons and carried by nerve fibers via the optic nerve to the brain where, after some additional psychovisual processing, we recognize the color of the foliage as green[1].

spd-to-cone-quanta3
Figure 1. The human eye absorbs light from an illuminant reflected diffusely by the object it is looking at.

Continue reading An Introduction to Color in Digital Cameras

How does a Raw Image Get Rendered?

What are the basic low level steps involved in raw file conversion?  In this article I will discuss what happens under the hood of digital camera raw converters in order to turn raw file data into a viewable image, a process sometimes referred to as ‘rendering’.  We will use the following raw capture to show how image information is transformed at every step along the way:

Figure 1. Nikon D610 with AF-S 24-120mm f/4 lens at 24mm f/8 ISO100, minimally rendered from raw by Octave/Matlab following the steps outlined in the article.
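In outline, and at the risk of getting ahead of ourselves, the chain looks something like this Octave/Matlab sketch (the file name, black and white levels, multipliers and matrix entries are all hypothetical; the article walks through each step properly):

cfa   = double(imread('raw_mosaic.pgm'));      % hypothetical mosaiced raw data, RGGB
black = 600;  white = 15800;                   % hypothetical levels
cfa   = max(cfa - black, 0) / (white - black); % 1. black subtraction and scaling

[h, w] = size(cfa);
[x, y] = meshgrid(1:w, 1:h);
Rm = mod(y,2)==1 & mod(x,2)==1;                % RGGB channel masks
Gm = mod(x+y,2)==1;
Bm = mod(y,2)==0 & mod(x,2)==0;

wb = [2.0 1.0 1.5];                            % 2. hypothetical white balance
cfa(Rm) = cfa(Rm)*wb(1);  cfa(Bm) = cfa(Bm)*wb(3);

k = [1 2 1; 2 4 2; 1 2 1] / 4;                 % 3. bilinear demosaic by
fill = @(m) conv2(cfa.*m, k, 'same') ./ conv2(double(m), k, 'same');
rgb = cat(3, fill(Rm), fill(Gm), fill(Bm));    %    normalized convolution

M = [1.6 -0.5 -0.1; -0.2 1.4 -0.2; 0.0 -0.6 1.6];  % 4. placeholder color matrix
rgb = reshape(reshape(rgb, [], 3) * M', h, w, 3);

rgb = min(max(rgb, 0), 1) .^ (1/2.2);          % 5. clip and approximate gamma
imshow(rgb);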

Rendering = Raw Conversion + Editing

Continue reading How does a Raw Image Get Rendered?

Linearity in the Frequency Domain

For the purposes of ‘sharpness’ spatial resolution measurement in photography, cameras can be considered shift-invariant, linear systems.

Shift invariant means that the imaging system should respond exactly the same way no matter where light from the scene falls on the sensing medium.  We know that in a strict sense this is not true because, for instance, a pixel has a square area, so by definition its response cannot be isotropic.  However, when using the slanted edge method of linear spatial resolution measurement we can effectively make it shift invariant by careful preparation of the testing setup – for example by slanting the edge only a few degrees off the vertical or horizontal, as sketched below. Continue reading Linearity in the Frequency Domain
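To make that concrete, here is a small synthetic Octave/Matlab sketch of the slanted edge trick (the angle, blur and bin width are arbitrary choices of mine): because of the slant, each row crosses the edge at a slightly different sub-pixel phase, so pooling rows builds a finely oversampled edge profile.

theta = 5 * pi/180;                      % ~5 degree slant
[x, y] = meshgrid(1:100, 1:100);
d   = (x - 50)*cos(theta) - (y - 50)*sin(theta);   % signed distance to the edge
img = 0.2 + 0.6 ./ (1 + exp(-2*d));      % synthetic blurred edge
keep = abs(d) < 20;                      % stay near the transition
esf  = accumarray(round(4*d(keep)) + 81, img(keep), [161 1], @mean);
lsf  = diff(esf);                        % 4x oversampled line spread function
mtf  = abs(fft(lsf));  mtf = mtf/mtf(1); % bin k is k/40 cycles/pixel here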

Nikon CFA Spectral Power Distribution

I measured the Spectral Power Distribution of the three CFA filters of a Nikon D610 in ‘Daylight’ conditions with a cheap spectrometer.  Taking a cue from this post I pointed it at light from the sun reflected off a gray card  and took a raw capture of the spectrum it produced.


An ImageJ plot did the rest.  I took a dozen pictures at slightly different angles in order to catch the clearest spectrum.  Shown are the three spectral curves averaged over the two best opposing captures.  The Photopic Eye Luminous Efficiency Function (2 degree, Sharpe et al 2005) is also shown for reference, scaled to the same maximum as the green curve. Continue reading Nikon CFA Spectral Power Distribution
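For anyone wanting to reproduce the extraction, a sketch of the averaging step in Octave/Matlab (assuming the raw capture has been developed to a linear RGB image with the spectrum running horizontally; the file name, row range and wavelength calibration endpoints are all hypothetical):

img  = double(imread('spectrum_linear.tif'));  % hypothetical developed capture
band = img(200:240, :, :);                     % rows crossing the spectrum
curves = squeeze(mean(band, 1));               % width x 3, one column per channel
wl = linspace(380, 720, size(curves, 1));      % hypothetical calibration
plot(wl, curves ./ max(curves(:)));
xlabel('wavelength (nm)');  ylabel('relative response');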