Now that we know how to create a 3×3 linear matrix to convert white-balanced and demosaiced raw data into the Profile Connection Space – and where to obtain the 3×3 linear matrix that then converts it to a standard output color space like sRGB – we can take a closer look at the matrices and apply them to a real-world capture chosen for its wide range of chromaticities.

# Tag Archives: spectral power distribution

# Color: Determining a Forward Matrix for Your Camera

We understand from the previous article that rendering color during raw conversion essentially means mapping raw data triplets into a standard color space via a Profile Connection Space in a two-step process.

The first step white balances and demosaics the raw data; the second converts the resulting triplets to the Profile Connection Space through linear projection by an unknown ‘Forward Matrix’ (as DNG calls it) of the form

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} f_{11} & f_{12} & f_{13} \\ f_{21} & f_{22} & f_{23} \\ f_{31} & f_{32} & f_{33} \end{bmatrix} \begin{bmatrix} r \\ g \\ b \end{bmatrix} \tag{1}$$

Determining the nine coefficients of this matrix is the main subject of this article^{[1]}.
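The projection itself is just a 3×3 matrix multiplication. A minimal sketch in Python (the matrix values below are purely illustrative placeholders, not coefficients for any real camera):

```python
import numpy as np

# Hypothetical Forward Matrix: the nine coefficients the article sets out
# to determine. These values are illustrative only.
forward_matrix = np.array([
    [0.6, 0.3, 0.1],
    [0.2, 0.7, 0.1],
    [0.0, 0.1, 0.9],
])

def raw_to_pcs(rgb_wb: np.ndarray) -> np.ndarray:
    """Project a white-balanced, demosaiced raw triplet into the
    Profile Connection Space (XYZ) by a 3x3 linear matrix."""
    return forward_matrix @ rgb_wb

xyz = raw_to_pcs(np.array([0.5, 0.4, 0.3]))
```

Each output channel is a weighted sum of the three raw channels, which is all that ‘linear projection’ means here.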

# Color: From Capture to Eye

How do we translate captured image information into a stimulus that will produce the appropriate perception of color? It’s actually not that complicated^{[1]}.

Recall from the introductory article that a photon absorbed by a given cone type in the fovea produces the same stimulus to the brain regardless of its wavelength^{[2]}. Take the example of an observer whose eye focuses onto the retina the image of a uniform object with a spectral photon distribution of 1000 photons/nm in the 400 to 720nm wavelength range, and no photons outside of it.

Because the system is linear, cones in the foveola will weigh the incoming photons by their relative sensitivity (probability) functions and add the result up to produce a stimulus proportional to the area under the curves. For instance, one cone type will see about 321,000 photons arrive and produce a relative stimulus of about 94,700, the weighted area under the curve.
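The weighted-area computation can be sketched in a few lines of Python. The Gaussian sensitivity curve below is a toy stand-in for a real cone fundamental (which would come from measured data such as Stockman & Sharpe), so the numbers it produces will not match the article's figures:

```python
import numpy as np

# Wavelength grid in nm, matching the example's 400-720 nm range
wl = np.arange(400, 721)
spd = np.full(wl.shape, 1000.0)   # 1000 photons/nm, uniform object

# Toy cone sensitivity: a Gaussian peaking at 560 nm stands in for a
# measured cone fundamental (assumed shape, for illustration only)
sens = np.exp(-0.5 * ((wl - 560) / 50.0) ** 2)

# Stimulus = photon spectrum weighted by sensitivity, summed over wavelength
stimulus = float(np.sum(spd * sens))
total_photons = float(np.sum(spd))
```

The stimulus is necessarily smaller than the total photon count, since the sensitivity function is at most 1 at every wavelength.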

# An Introduction to Color in Digital Cameras

This article will set the stage for a discussion on how pleasing color is produced during raw conversion. The easiest way to understand how a camera captures and processes ‘color’ is to start with an example of how the human visual system does it.

#### An Example: Green

Light from the sun strikes leaves on a tree. The foliage of the tree absorbs some of the light and reflects the rest diffusely towards the eye of a human observer. The eye focuses the image of the foliage onto the retina at its back. Near the center of the retina there is a small circular area called the foveola which is dense with light receptors of well defined spectral sensitivities called cones. Information from the cones is pre-processed by neurons and carried by nerve fibers via the optic nerve to the brain where, after some additional psychovisual processing, we recognize the color of the foliage as green^{[1]}.


# A Simple Model for Sharpness in Digital Cameras – II

Now that we know from the introductory article that the spatial frequency response of a typical perfect digital camera and lens can be modeled simply as the product of the Modulation Transfer Functions of the lens and of the pixel aperture, convolved with a Dirac delta grid at cycles-per-pixel spacing

$$\widehat{MTF}_{sys}(s) = \left[\widehat{MTF}_{lens}(s) \cdot \widehat{MTF}_{pixel}(s)\right] \ast \text{III}(s) \tag{1}$$

we can take a closer look at each of those components (the hat here indicating normalization). I used Matlab to generate the examples below but you can easily do the same in a spreadsheet. Here is the code if you wish to follow along.
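For readers without Matlab, the two pre-sampling components can be sketched in Python. The diffraction MTF formula and the sinc for a 100% fill-factor square pixel are standard; the cutoff frequency `fc` in cycles/pixel is an assumed value, not taken from the article:

```python
import numpy as np

# Spatial frequency axis in cycles/pixel
f = np.linspace(0, 2.0, 401)

# Diffraction-limited lens MTF with assumed cutoff fc (cycles/pixel)
fc = 1.5
x = np.clip(f / fc, 0, 1)
mtf_lens = (2 / np.pi) * (np.arccos(x) - x * np.sqrt(1 - x * x))

# Pixel-aperture MTF of a 100% fill-factor square pixel: |sinc(f)|
# (np.sinc is the normalized sinc, sin(pi f)/(pi f))
mtf_pixel = np.abs(np.sinc(f))

# Product of the two, before convolution with the sampling comb
mtf_system = mtf_lens * mtf_pixel
```

Convolving `mtf_system` with a Dirac comb at one cycle-per-pixel spacing would then replicate it around the sampling frequency, producing the aliased copies discussed in the article.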

# What is the Effective Quantum Efficiency of my Sensor?

Now that we know how to determine how many photons impinge on a sensor, we can estimate its Effective Quantum Efficiency (eQE), that is, the efficiency with which it turns such a photon flux into photoelectrons, which will then be converted to raw data and stored in the capture’s raw file:

$$\text{eQE} = \frac{\text{photoelectrons produced}}{\text{photons incident on the sensing plane}} \tag{1}$$

I call it ‘effective’ because it represents the probability that a photon arriving on the sensing plane from the scene will be converted to a photoelectron by a typical digital camera sensor. It therefore includes the effect of microlenses, fill factor, CFA and other filters on top of the silicon in the pixel. It is usually expressed as a percentage. For instance, if an average of 100 photons per pixel within the sensor’s passband were incident on a uniformly lit spot of the sensor, and on average each pixel produced a signal of 20 photoelectrons, we would say that the Effective Quantum Efficiency of the sensor is 20%. Clearly the higher the eQE the better for Image Quality parameters such as SNR and DR.
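The worked example above amounts to a single ratio, shown here as a minimal Python sketch:

```python
def effective_qe(photoelectrons: float, incident_photons: float) -> float:
    """Effective Quantum Efficiency: photoelectrons out per photon in,
    averaged over the sensor's passband."""
    return photoelectrons / incident_photons

# The article's example: 100 incident photons/pixel, 20 e-/pixel -> 20%
eqe = effective_qe(20, 100)
```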

# How Many Photons on a Pixel at a Given Exposure

How many photons impinge on a pixel illuminated by a known light source during exposure? To answer this question in a photographic context we need to know the area of the pixel, the Spectral Power Distribution of the illuminant and the relative Exposure.

We know the pixel’s area, and we know that the Spectral Power Distribution of a common class of light sources called blackbody radiators at temperature T is described by Planck’s law for Spectral Radiant Exitance – so all we need to determine is the Exposure that this irradiance corresponds to in order to obtain the answer.
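Planck's law for spectral radiant exitance is standard physics and easy to evaluate directly; a short sketch (the 5500 K 'daylight-like' temperature in the usage line is an assumed example value):

```python
import math

h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s
kB = 1.380649e-23    # Boltzmann constant, J/K

def spectral_exitance(wavelength_m: float, T: float) -> float:
    """Blackbody spectral radiant exitance M(lambda, T), W/m^2 per metre
    of wavelength, from Planck's law."""
    a = 2 * math.pi * h * c**2 / wavelength_m**5
    return a / math.expm1(h * c / (wavelength_m * kB * T))

# e.g. an assumed 5500 K blackbody evaluated at 550 nm
m = spectral_exitance(550e-9, 5500)
```

By Wien's displacement law a 5500 K blackbody peaks near 527 nm, which is a handy sanity check on the implementation.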


# Photons Emitted by Light Source

How many photons are emitted by a light source? To answer this question we need to evaluate the following simple formula at every wavelength in the spectral range we are interested in and add the values up:

$$\text{photons/s at } \lambda = \frac{\text{power at } \lambda}{\text{energy per photon at } \lambda} = \frac{P(\lambda)\,\lambda}{h\,c} \tag{1}$$

The astute reader will have realized that the units above are simply photons per second. Written more formally:

$$N = \int_{\lambda_1}^{\lambda_2} \frac{P(\lambda)\,\lambda}{h\,c}\, d\lambda \tag{2}$$
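Evaluating the formula at every wavelength and adding the values up translates directly into a sum over a sampled spectrum. A sketch, using an assumed toy source of 1 W spread uniformly over 400-700 nm:

```python
import numpy as np

h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s

def photons_per_second(wl_nm: np.ndarray, power_w_per_nm: np.ndarray) -> float:
    """Sum power(lambda) divided by photon energy h*c/lambda over the
    sampled spectral range (1 nm sampling assumed)."""
    wl_m = wl_nm * 1e-9
    return float(np.sum(power_w_per_nm * wl_m / (h * c)))

# Toy example: 1 W spread uniformly over 400-700 nm, 1 nm steps
wl = np.arange(400, 701)
p = np.full(wl.shape, 1.0 / 301)   # W per nm
n = photons_per_second(wl, p)
```

As a sanity check, a visible photon carries roughly 3.6e-19 J, so 1 W of visible light should come to a few times 10^18 photons per second.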

# How Many Photons on a Pixel

How many visible photons hit a pixel on my sensor? The answer depends on the Exposure, the Spectral Power Distribution of the arriving light and the pixel’s area. With a few simplifying assumptions it is not too difficult to calculate that with a typical Daylight illuminant the number is roughly 11,850 photons per lx-s per square micron of pixel area. Without the simplifying assumptions* it reduces to about 11,260.
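Once that per-unit figure is known, scaling it to a given pixel and exposure is a simple multiplication. A sketch using the article's simplified Daylight figure (the 6 μm pitch and 0.5 lx-s exposure in the usage line are assumed example values):

```python
# Article's simplified Daylight figure: photons per lx-s per square micron
PHOTONS_PER_LXS_PER_UM2 = 11_850

def photons_on_pixel(pixel_pitch_um: float, exposure_lxs: float) -> float:
    """Rough photon count on a square pixel for a Daylight illuminant,
    under the article's simplifying assumptions."""
    area_um2 = pixel_pitch_um ** 2
    return PHOTONS_PER_LXS_PER_UM2 * area_um2 * exposure_lxs

# Assumed example: a 6 um pixel exposed at 0.5 lx-s
n = photons_on_pixel(6.0, 0.5)
```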

# Nikon CFA Spectral Power Distribution

I measured the Spectral Power Distribution of the three CFA filters of a Nikon D610 in ‘Daylight’ conditions with a cheap spectrometer. Taking a cue from this post I pointed it at light from the sun reflected off a gray card and took a raw capture of the spectrum it produced.

An ImageJ plot did the rest. I took a dozen pictures at slightly different angles in order to capture the clearest spectrum. Shown are the three spectral curves averaged over the two best opposing captures. The Photopic Eye Luminous Efficiency Function (2 degree, Sharpe et al 2005) is also shown for reference, scaled to the same maximum as the green curve.