Now that we know how to create a 3×3 linear matrix to convert white balanced and demosaiced raw data into connection space – and where to obtain the 3×3 linear matrix to then convert it to a standard output color space like sRGB – we can take a closer look at the matrices and apply them to a real world capture chosen for its wide range of chromaticities.

# Tag Archives: digital camera

# Color: Determining a Forward Matrix for Your Camera

We understand from the previous article that rendering color during raw conversion essentially means mapping raw data in the form of triplets into a standard color space via a Profile Connection Space, in a two-step process.

The first step white balances and demosaics the raw data; the resulting triplets are then converted to the Profile Connection Space through a linear projection by an unknown ‘Forward Matrix’ (as DNG calls it) of the form

(1)   $$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \begin{bmatrix} r \\ g \\ b \end{bmatrix}$$

Determining the nine coefficients of this matrix is the main subject of this article^{[1]}. Continue reading Color: Determining a Forward Matrix for Your Camera
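The linear projection above can be sketched in a few lines of code. The coefficients below are purely illustrative, chosen only so that a white balanced neutral triplet lands on the D50 white point (which is what DNG Forward Matrices are designed to do); determining real coefficients for your camera is the subject of the article.

```python
# Applying a hypothetical 3x3 Forward Matrix to a white balanced,
# demosaiced raw triplet (r, g, b), yielding CIE XYZ coordinates in the
# Profile Connection Space. These coefficients are made up for
# illustration; each row sums to the corresponding D50 white coordinate.
FORWARD_MATRIX = [
    [0.6506, 0.1914, 0.1222],   # X row (sums to 0.9642)
    [0.2657, 0.7446, -0.0103],  # Y row (sums to 1.0000)
    [0.0026, 0.0704, 0.7521],   # Z row (sums to 0.8251)
]

def raw_to_xyz(rgb, m=FORWARD_MATRIX):
    """Linear projection: [X, Y, Z] = M . [r, g, b]."""
    return [sum(m[i][k] * rgb[k] for k in range(3)) for i in range(3)]

# A neutral (white balanced) triplet should land on the D50 white point:
xyz = raw_to_xyz([1.0, 1.0, 1.0])
```

From there a second, known 3×3 matrix takes XYZ to the output color space, e.g. sRGB.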

# Color: From Capture to Eye

How do we translate captured image information into a stimulus that will produce the appropriate perception of color? It’s actually not that complicated^{[1]}.

Recall from the introductory article that a photon absorbed by a cone type (L, M or S) in the fovea produces the same stimulus to the brain regardless of its wavelength^{[2]}. Consider an observer whose eye focuses on the retina the image of a uniform object with a spectral photon distribution of 1000 photons/nm over the 400 to 720nm wavelength range and no photons outside of it.

Because the system is linear, cones in the foveola will weigh the incoming photons by their relative sensitivity (probability) functions and add the result up to produce a stimulus proportional to the area under the curves. For instance a cone of a given type will see about 321,000 photons arrive and produce a relative stimulus of about 94,700, the weighted area under its sensitivity curve.
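The weighting just described can be sketched numerically. The Gaussian sensitivity below is a stand-in assumption for the real cone fundamentals (e.g. Stockman & Sharpe), used only to show the mechanics of the calculation:

```python
import math

# Toy cone sensitivity curve: a Gaussian is an assumption standing in
# for the real cone fundamentals, peak wavelength and width made up.
def sensitivity(wl, peak=540.0, width=45.0):
    return math.exp(-0.5 * ((wl - peak) / width) ** 2)

# Uniform spectrum: 1000 photons/nm between 400 and 720 nm, as in the text.
photons_per_nm = 1000.0
wavelengths = range(400, 721)  # 1 nm steps

arriving = sum(photons_per_nm for _ in wavelengths)               # photons seen
stimulus = sum(photons_per_nm * sensitivity(wl) for wl in wavelengths)
```

With 1 nm steps the total arriving count comes out at about 321,000 photons, matching the figure in the text; the stimulus is the sensitivity-weighted fraction of that area.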

# An Introduction to Color in Digital Cameras

This article will set the stage for a discussion on how pleasing color is produced during raw conversion. The easiest way to understand how a camera captures and processes ‘color’ is to start with an example of how the human visual system does it.

#### An Example: Green

Light from the sun strikes leaves on a tree. The foliage of the tree absorbs some of the light and reflects the rest diffusely towards the eye of a human observer. The eye focuses the image of the foliage onto the retina at its back. Near the center of the retina there is a small circular area called the foveola which is dense with light receptors of well defined spectral sensitivities called cones. Information from the cones is pre-processed by neurons and carried by nerve fibers via the optic nerve to the brain where, after some additional psychovisual processing, we recognize the color of the foliage as green^{[1]}.

Continue reading An Introduction to Color in Digital Cameras

# How does a Raw Image Get Rendered?

What are the basic low level steps involved in raw file conversion? In this article I will discuss what happens under the hood of digital camera raw converters in order to turn raw file data into a viewable image, a process sometimes referred to as ‘rendering’. We will use the following raw capture to show how image information is transformed at every step along the way:

#### Rendering = Raw Conversion + Editing

# Taking the Sharpness Model for a Spin – II

This post will continue looking at the spatial frequency response measured by MTF Mapper off slanted edges in DPReview.com raw captures and the corresponding fits by the ‘sharpness’ model discussed in the last few articles. The model takes the physical parameters of the digital camera and lens as inputs and produces theoretical directional system MTF curves comparable to measured data. As we will see the model seems to be able to simulate these systems well – at least within this limited set of parameters.

The following fits refer to the green channel of a number of interchangeable lens digital camera systems with different lenses, pixel sizes and formats – from the current Medium Format 100MP champ to the 1/2.3″ 18MP sensor size also sometimes found in the best smartphones. Here is the roster with the cameras as set up:

# Taking the Sharpness Model for a Spin

The series of articles starting here outlines a model of how the various physical components of a digital camera and lens can affect the ‘sharpness’ – that is the spatial resolution – of the images captured in the raw data. In this one we will pit the model against MTF curves obtained through the slanted edge method^{[1]} from real world raw captures both with and without an anti-aliasing filter.

With a few simplifying assumptions, which include ignoring aliasing and phase, the spatial frequency response (SFR or MTF) of a photographic digital imaging system near the center can be expressed as the product of the Modulation Transfer Function of each component in it. For a current digital camera these would typically be the main ones:

(1)   $$\mathrm{MTF}_{system} = \mathrm{MTF}_{diffraction} \cdot \mathrm{MTF}_{AA} \cdot \mathrm{MTF}_{pixel}$$

all in two dimensions. Continue reading Taking the Sharpness Model for a Spin
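As a rough sketch of the idea, here is the product of just two of those component MTFs in one dimension – diffraction for a perfect lens with a circular aperture, and a square pixel aperture – with example parameters (f/5.6, 550nm, 4.2μm pitch) that are not tied to any of the tested cameras:

```python
import math

# Sketch: near the center, system MTF is approximately the product of
# component MTFs. Here just diffraction x square pixel aperture, 1D.
def mtf_diffraction(f, wavelength_mm=0.00055, fnumber=5.6):
    """Diffraction MTF of a perfect lens, circular aperture; f in cycles/mm."""
    fc = 1.0 / (wavelength_mm * fnumber)      # diffraction cutoff frequency
    s = f / fc
    if s >= 1.0:
        return 0.0
    return (2.0 / math.pi) * (math.acos(s) - s * math.sqrt(1.0 - s * s))

def mtf_pixel(f, pitch_mm=0.0042):
    """MTF of a square pixel aperture (sinc); f in cycles/mm."""
    x = math.pi * f * pitch_mm
    return 1.0 if x == 0 else abs(math.sin(x) / x)

def mtf_system(f):
    return mtf_diffraction(f) * mtf_pixel(f)
```

The full model multiplies in the other components (AA filter, defocus, aberrations) the same way, in two dimensions.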

# A Simple Model for Sharpness in Digital Cameras – Polychromatic Light

We now know how to calculate the two dimensional Modulation Transfer Function of a perfect lens affected by diffraction, defocus and third order Spherical Aberration – under monochromatic light at the given wavelength and f-number. In digital photography however we almost never deal with light of a single wavelength. So what effect does an illuminant with a wide spectral power distribution, filtered by one of the color filters of a typical digital camera’s CFA before reaching the sensor, have on the spatial frequency responses discussed thus far?

#### Monochrome vs Polychromatic Light

Not much, it turns out. Continue reading A Simple Model for Sharpness in Digital Cameras – Polychromatic Light
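The way a wide spectrum enters the calculation can be sketched as a weighted average of monochromatic MTFs, the weights being the product of illuminant SPD and CFA transmission at each wavelength. The coarse weights below are hypothetical, not measured curves:

```python
import math

def mtf_diffraction(f, wavelength_nm, fnumber=5.6):
    """Monochromatic diffraction MTF; f in cycles/mm, wavelength in nm."""
    fc = 1.0 / (wavelength_nm * 1e-6 * fnumber)   # cutoff in cycles/mm
    s = f / fc
    if s >= 1.0:
        return 0.0
    return (2.0 / math.pi) * (math.acos(s) - s * math.sqrt(1.0 - s * s))

# Hypothetical spectral weights: illuminant SPD x green CFA transmission,
# sampled coarsely for illustration only.
weights = {500: 0.35, 530: 1.00, 560: 0.80, 590: 0.30}

def mtf_polychromatic(f):
    """Weighted average of monochromatic MTFs: in incoherent light the
    intensity PSFs at each wavelength add linearly, so their transforms do too."""
    total = sum(weights.values())
    return sum(w * mtf_diffraction(f, wl) for wl, w in weights.items()) / total
```

Because the monochromatic curves at nearby visible wavelengths are so similar, the weighted average barely moves – which is the article’s point.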

# A Simple Model for Sharpness in Digital Cameras – Spherical Aberrations

Spherical Aberration (SA) is one key component missing from our MTF toolkit for modeling an ideal imaging system’s ‘sharpness’ in the center of the field of view in the frequency domain. In this article formulas will be presented to compute the two dimensional Point Spread and Modulation Transfer Functions of the combination of diffraction, defocus and third order Spherical Aberration for an otherwise perfect lens with a circular aperture.

Spherical Aberrations result because most photographic lenses are designed with quasi spherical surfaces that do not necessarily behave ideally in all situations. For instance, they may focus light on slightly different planes depending on whether the respective ray goes through the exit pupil closer or farther from the optical axis, as shown below:

Continue reading A Simple Model for Sharpness in Digital Cameras – Spherical Aberrations

# A Simple Model for Sharpness in Digital Cameras – Defocus

This series of articles has dealt with modeling an ideal imaging system’s ‘sharpness’ in the frequency domain. We looked at the effects of the hardware on spatial resolution: diffraction, sampling interval, sampling aperture (e.g. a squarish pixel), anti-aliasing (OLPF) filters. The next two posts will deal with modeling typical simple imperfections in the system: defocus and spherical aberrations.

#### Defocus = OOF

Defocus means that the sensing plane is not exactly where it needs to be for image formation in our ideal imaging system: the image is therefore out of focus (OOF). Said another way, light from a distant star would go through the lens but converge either behind or in front of the sensing plane, as shown in the following diagram, for a lens with a circular aperture:

Continue reading A Simple Model for Sharpness in Digital Cameras – Defocus

# A Simple Model for Sharpness in Digital Cameras – AA

This article will discuss a simple frequency domain model for an AntiAliasing (or Optical Low Pass) Filter, a hardware component sometimes found in a digital imaging system^{[1]}. The filter typically sits right on top of the sensing plane and its objective is to block as much as possible of the aliasing and moiré creating energy above the Nyquist spatial frequency, while letting through as much as possible of the real image forming energy below it – hence the low-pass designation.

In consumer digital cameras it is often implemented by introducing one or two birefringent plates in the sensor’s filter stack. This is how Nikon shows it for one of its DSLRs:

Continue reading A Simple Model for Sharpness in Digital Cameras – AA
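A single birefringent plate splits each ray into two displaced spots a distance d apart; that two-point PSF has MTF |cos(π·f·d)|, so a full one-pixel split places the first null exactly at the Nyquist frequency. A minimal sketch, with the split distance as a free parameter:

```python
import math

# MTF of a one-plate (two spot) birefringent AA filter. A two-point PSF,
# spots separated by d pixels, transforms to |cos(pi * f * d)|; with
# d = 1 pixel the first zero lands at Nyquist (0.5 cycles/pixel).
def mtf_olpf(f_cycles_per_pixel, split_pixels=1.0):
    return abs(math.cos(math.pi * f_cycles_per_pixel * split_pixels))
```

A second plate oriented at 90° applies the same factor in the other direction, giving the common four-spot filter.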

# A Simple Model for Sharpness in Digital Cameras – Aliasing

Having shown that our simple two dimensional MTF model is able to predict the performance of the combination of a perfect lens and square monochrome pixel, we now turn to the effect of the sampling interval on spatial resolution according to the guiding formula:

(1)   $$\mathrm{MTF}_{system} = \left(\widehat{\mathrm{PSF}}_{lens} \cdot \widehat{\mathrm{PIX}}\right) \ast\ast\ \widehat{\mathrm{III}}$$

The hats in this case mean the Fourier Transform of the relative component normalized to 1 at the origin (zero spatial frequency), that is the individual MTFs of the perfect lens PSF, the perfect square pixel and the delta grid.

#### Sampling in the Spatial and Frequency Domains

Sampling is expressed mathematically as a Dirac delta function at the center of each pixel (the red dots below).

Continue reading A Simple Model for Sharpness in Digital Cameras – Aliasing
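A quick numerical illustration of why the delta grid matters: sampling replicates the spectrum at multiples of the sampling frequency, so a sinusoid above Nyquist produces exactly the same samples as its alias below Nyquist:

```python
import math

# Sampling with a Dirac comb replicates the spectrum at multiples of the
# sampling frequency (1 cycle/pixel here). Consequence: a cosine at
# 0.8 cycles/pixel yields the very same samples as one at
# |0.8 - 1.0| = 0.2 cycles/pixel -- the two are indistinguishable.
def sample(freq_cycles_per_pixel, n=16):
    return [math.cos(2 * math.pi * freq_cycles_per_pixel * k)
            for k in range(n)]

high = sample(0.8)    # above Nyquist (0.5 cycles/pixel)
alias = sample(0.2)   # its alias below Nyquist
```

Once captured, no amount of processing can tell which of the two frequencies was actually in the scene – that is aliasing.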

# A Simple Model for Sharpness in Digital Cameras – II

Now that we know from the introductory article that the spatial frequency response of a typical perfect digital camera and lens can be modeled simply as the product of the Modulation Transfer Function of the lens and pixel area, convolved with a Dirac delta grid at cycles-per-pixel spacing

(1)   $$\mathrm{MTF}_{system} = \left(\widehat{\mathrm{PSF}}_{lens} \cdot \widehat{\mathrm{PIX}}\right) \ast\ast\ \widehat{\mathrm{III}}$$

we can take a closer look at each of those components (the hat here indicating normalization). I used Matlab to generate the examples below, but you can easily do the same in a spreadsheet. Here is the code if you wish to follow along. Continue reading A Simple Model for Sharpness in Digital Cameras – II

# A Longitudinal CA Metric for Photographers

While perusing Jim Kasson’s excellent Longitudinal Chromatic Aberration tests^{[1]} I was impressed by the quantity and quality of the information the resulting data provides. Longitudinal, or Axial, CA is a form of defocus and as such it cannot be effectively corrected during raw conversion, so having a lens well compensated for it will provide a real and tangible improvement in the sharpness of final images. How much of an improvement?

In this article I suggest one such metric for the Longitudinal Chromatic Aberrations (LoCA) of a photographic imaging system: Continue reading A Longitudinal CA Metric for Photographers

# Combining Bayer CFA MTF Curves – II

This is a vast and complex subject for which I do not have formal training. In this and the previous article I present my thoughts on how MTF50 results – obtained via the slanted edge method from the raw data of the four Bayer CFA channels of a uniformly illuminated neutral target captured with a typical digital camera – can be combined to provide a meaningful composite MTF50 for the imaging system as a whole^{[1]}. Corrections, suggestions and challenges are welcome. Continue reading Combining Bayer CFA MTF Curves – II

# Sub LSB Quantization

This article is a little esoteric so one may want to skip it unless one is interested in the underlying mechanisms that cause quantization error as photographic signal and noise approach the darkest levels of acceptable dynamic range in our digital cameras: one least significant bit in the raw data. We will use our simplified camera model and deal with Poissonian Signal and Gaussian Read Noise separately – then attempt to bring them together.
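A small Monte Carlo sketch of the effect, with made-up numbers: a mean signal of 0.3 LSB survives quantization on average when one LSB of Gaussian read noise dithers it, and is lost entirely when the noise is much smaller than the quantization step:

```python
import random

# Monte Carlo sketch of quantization near one LSB. A sub-LSB mean signal
# is recoverable on average when Gaussian read noise dithers it across
# the quantization step, but lost when the noise is much smaller.
random.seed(42)

def quantized_mean(signal, read_noise, n=200_000):
    """Average of round(signal + gaussian noise) over n trials, in LSB."""
    return sum(round(signal + random.gauss(0.0, read_noise))
               for _ in range(n)) / n

with_dither = quantized_mean(0.3, read_noise=1.0)    # close to 0.3
without_dither = quantized_mean(0.3, read_noise=0.01)  # collapses to 0
```

This is why a little read noise, counterintuitively, protects deep shadow tonality from posterization.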

# Photographic Sensor Simulation

Physicists and mathematicians over the last few centuries have spent a lot of their time studying light and electrons, the key ingredients of digital photography. In so doing they have left us with a wealth of theories to explain their behavior in nature and in our equipment. In this article I will describe how to simulate the information generated by a uniformly illuminated imaging system using open source Octave (or equivalently Matlab), utilizing some of these theories. Since, as you will see, the simulations are incredibly (to me) accurate, understanding how the simulator works goes a long way toward explaining the inner workings of a digital sensor at its lowest levels; and simulated data can be used to further our understanding of photographic science without having to run down the shutter count of our favorite SLRs. This approach is usually referred to as Monte Carlo simulation.
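The core of such a simulator fits in a few lines. The sketch below – in plain Python rather than the article’s Octave/Matlab, with example parameters rather than a measured camera – draws Poisson photoelectrons, adds Gaussian read noise and quantizes to data numbers; its sample statistics land where theory says they should (variance ≈ (shot + read²) × gain², plus a small quantization term):

```python
import math
import random

# Minimal Monte Carlo pixel: Poisson shot noise + Gaussian read noise,
# then gain and quantization to DN. All parameter values are examples.
random.seed(7)

def poisson(mean):
    """Knuth's method; fine for small means like the one used here."""
    limit, k, p = math.exp(-mean), 0, 1.0
    while p > limit:
        p *= random.random()
        k += 1
    return k - 1

def simulate_pixel(mean_electrons=20.0, read_noise_e=2.0, gain_dn_per_e=0.5):
    e = poisson(mean_electrons) + random.gauss(0.0, read_noise_e)
    return max(0, round(e * gain_dn_per_e))   # clip at zero, quantize to DN

samples = [simulate_pixel() for _ in range(100_000)]
mean_dn = sum(samples) / len(samples)
var_dn = sum((s - mean_dn) ** 2 for s in samples) / len(samples)
```

With a mean of 20 electrons, read noise of 2 electrons and a gain of 0.5 DN/e⁻, the simulated mean comes out near 10 DN and the variance near (20 + 4) × 0.25 ≈ 6 DN².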

# Smooth Gradients and the Weber-Fechner Fraction

Whether the human visual system perceives a displayed slow changing gradient of tones, such as a vast expanse of sky, as smooth or posterized depends mainly on two well known variables: the Weber-Fechner Fraction of the ‘steps’ in the reflected/produced light intensity (the subject of this article); and spatial dithering of the light intensity as a result of noise (the subject of a future one).

Continue reading Smooth Gradients and the Weber-Fechner Fraction
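As a concrete sketch: for an 8-bit ramp through a display transfer curve, the Weber-Fechner fraction of adjacent codes is much larger in the shadows than in the highlights, which is why banding shows up there first. The gamma 2.2 curve is an assumption standing in for a real display profile:

```python
# Weber-Fechner fraction dL/L between adjacent 8-bit codes sent through
# an assumed gamma 2.2 display transfer curve (illustrative, not from
# the article). Larger fractions mean more visible steps.
def weber_fraction(code, bits=8, gamma=2.2):
    levels = 2 ** bits - 1
    l0 = (code / levels) ** gamma       # relative luminance of this code
    l1 = ((code + 1) / levels) ** gamma  # ...and of the next code up
    return (l1 - l0) / l0

shadow_step = weber_fraction(25)      # ~9%: at risk of posterization
highlight_step = weber_fraction(240)  # ~0.9%: smooth
```

Whether a given step is visible then depends on the threshold fraction for the viewing conditions and on how much noise dithers the gradient – the subjects of this article and the next.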

# Image Quality: Raising ISO vs Pushing in Conversion

In the last few posts I have made the case that Image Quality in a digital camera is **entirely** dependent on the light Information collected at a sensor’s photosites during Exposure. Any subsequent processing – whether analog amplification and conversion to digital in-camera and/or further processing in-computer – effectively applies a set of Information Transfer Functions to the signal that when multiplied together result in the data from which the final photograph is produced. Each step of the way can at best maintain the original Information Quality (IQ) but in most cases it will degrade it somewhat.

#### IQ: Only as Good as at Photosites’ Output

This point is key: in a well designed imaging system the final image IQ is only as good as the scene information collected at the sensor’s photosites, independently of how this information is stored in the working data along the processing chain, on its way to being transformed into a pleasing photograph. **As long as scene information is properly encoded by the system early on, before being written to the raw file – and information transfer is maintained in the data throughout the imaging and processing chain – final photograph IQ will be virtually the same independently of how its data’s histogram looks along the way.**

Continue reading Image Quality: Raising ISO vs Pushing in Conversion

# Sensor IQ’s Simple Model

Imperfections in an imaging system’s capture process manifest themselves in the form of deviations from the expected signal. We call these imperfections ‘noise’. The fewer the imperfections, the lower the noise, the higher the image quality. However, because the Human Visual System is adaptive within its working range, it’s not the absolute amount of noise that matters to perceived IQ as much as the amount of noise relative to the signal. That’s why to characterize the performance of a sensor in addition to noise we also need to determine its sensitivity and the maximum signal it can detect.

In this series of articles I will describe how to use the Photon Transfer method and a spreadsheet to determine basic IQ performance metrics of a digital camera sensor. It is pretty easy if we keep in mind the simple model of how light information is converted into raw data by digital cameras:
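That simple model boils down to one formula for the signal-to-noise ratio: with S photoelectrons of signal (carrying Poisson shot noise √S) and r electrons of read noise, SNR = S/√(S + r²). A sketch with illustrative numbers, not taken from any specific camera:

```python
import math

# SNR from the simple sensor model: Poisson shot noise sqrt(S) and
# Gaussian read noise r add in quadrature. Values are examples.
def snr(signal_e, read_noise_e):
    return signal_e / math.sqrt(signal_e + read_noise_e ** 2)

deep_shadow = snr(25, 4)     # read-noise limited region
mid_tone = snr(10_000, 4)    # shot-noise limited: SNR ~ sqrt(S)
```

The Photon Transfer method works this logic in reverse: from measured signal and noise at several exposures it recovers gain, read noise and full well capacity.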

# Why Raw Sharpness IQ Measurements Are Better

**Why Raw**? The question is whether one is interested in measuring the objective, quantitative spatial resolution capabilities of the hardware or whether instead one would prefer to measure the arbitrary, qualitatively perceived sharpening prowess of (in-camera or in-computer) processing software as it turns the capture into a pleasing final image. Either is of course fine.

My take on this is that the better the IQ captured, the better the final image will be after post processing. In other words, I am typically more interested in measuring the spatial resolution information produced by the hardware, comfortable in the knowledge that if I’ve got good quality data to start with, its appearance will only be improved in post by the judicious use of software. By IQ here I mean objective, reproducible, measurable physical quantities representing the quality of the information captured by the hardware, ideally in scientific units.

Can we do that off a file rendered by a raw converter or, heaven forbid, a Jpeg? Not quite, especially if the objective is measuring IQ. Continue reading Why Raw Sharpness IQ Measurements Are Better

# How Many Photons on a Pixel

How many visible photons hit a pixel on my sensor? The answer depends on Exposure, the spectral power distribution of the arriving light and pixel area. With a few simplifying assumptions it is not too difficult to calculate that with a typical Daylight illuminant the number is roughly 11,850 photons per lx-s per square micron of pixel area. Without the simplifying assumptions it reduces to about 11,260. Continue reading How Many Photons on a Pixel
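Using the article’s figure, the arithmetic is a one-liner; the exposure and pixel pitch below are hypothetical examples, not measurements:

```python
# Back-of-envelope count from the article's figure: about 11,260 visible
# photons per lx-s per square micron under a Daylight illuminant
# (the value without the simplifying assumptions).
PHOTONS_PER_LXS_PER_UM2 = 11_260

def photons_on_pixel(exposure_lxs, pixel_pitch_um):
    """Photons arriving on a square pixel of the given pitch."""
    return PHOTONS_PER_LXS_PER_UM2 * exposure_lxs * pixel_pitch_um ** 2

# e.g. a hypothetical 0.1 lx-s exposure on a 4.2 um pixel:
n = photons_on_pixel(0.1, 4.2)   # roughly twenty thousand photons
```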

# Nikon CFA Spectral Power Distribution

I measured the Spectral Power Distribution of the three CFA filters of a Nikon D610 in ‘Daylight’ conditions with a cheap spectrometer. Taking a cue from this post I pointed it at light from the sun reflected off a gray card and took a raw capture of the spectrum it produced.

An ImageJ plot did the rest. I took a dozen pictures at slightly different angles to capture the clearest spectrum. Shown are the three spectral curves averaged over the two best opposing captures. The Photopic Eye Luminous Efficiency Function (2 degree, Sharpe et al 2005) is also shown for reference, scaled to the same maximum as the green curve. Continue reading Nikon CFA Spectral Power Distribution

# MTF50 and Perceived Sharpness

Is MTF50 a good proxy for perceived sharpness? It turns out that the spatial frequencies that are most closely related to our perception of sharpness vary with the size and viewing distance of the displayed image.

For instance if an image captured by a Full Frame camera is viewed at ‘standard’ distance (that is a distance equal to its diagonal) the portion of the MTF curve most representative of perceived sharpness appears to be around MTF90. Continue reading MTF50 and Perceived Sharpness