Category Archives: Exposure

Angles and the Camera Equation

Imagine a bucolic scene on a clear sunny day at the equator: sand warmed by the tropical sun, with a typical irradiance (E) of about 1000 watts per square meter.  As discussed earlier, we could express this quantity as illuminance in lumens per square meter (lx) – or as a certain number of photons per second (\Phi) over an area of interest (\mathcal{A}).

(1)   \begin{equation*} E = \frac{\Phi}{\mathcal{A}}  \; (W, lm, photons/s) / m^2 \end{equation*}

How many photons/s per unit area can we expect on the camera’s image plane (irradiance E_i )?

Figure 1.  Irradiation transfer from scene to sensor.

In answering this question we will discover the Camera Equation as a function of opening angles – and set the stage for the next article on lens pupils.  Note that all quantities in this article depend on wavelength; that dependence will be left implicit in the formulas to keep them readable.
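Before getting to the derivation, a rough order-of-magnitude check is easy to run: irradiance converts to a photon flux density once a wavelength is chosen.  The sketch below treats the light as monochromatic at 555 nm purely for illustration, which is a simplification of the sunlit-sand figure above:

```python
# Rough sketch: express irradiance E (W/m^2) as a photon flux density,
# treating the light as monochromatic at 555 nm purely for illustration.
h = 6.626e-34   # Planck's constant, J*s
c = 2.998e8     # speed of light, m/s

def photon_flux_density(E_w_per_m2, wavelength_m=555e-9):
    """Photons per second per square meter carried by irradiance E."""
    energy_per_photon = h * c / wavelength_m      # joules/photon
    return E_w_per_m2 / energy_per_photon

# Sunlit sand at ~1000 W/m^2 lands in the 10^21 photons/s/m^2 range:
flux = photon_flux_density(1000)
print(f"{flux:.3g} photons/s/m^2")
```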

Continue reading Angles and the Camera Equation

Connecting Photographic Raw Data to Tristimulus Color Science

Absolute Raw Data

In the previous article we determined that the three r_{_L}g_{_L}b_{_L} values recorded by a digital camera and lens in the raw data at the center of the image plane – in units of Data Numbers per pixel, as a function of absolute spectral radiance L(\lambda) at the lens – can be estimated as follows:

(1)   \begin{equation*} r_{_L}g_{_L}b_{_L} =\frac{\pi p^2 t}{4N^2} \int\limits_{380}^{780}L(\lambda) \odot SSF_{rgb}(\lambda)  d\lambda \end{equation*}

with subscript _L indicating absolute-referred units and SSF_{rgb} the three system Spectral Sensitivity Functions.   In this series of articles \odot is wavelength by wavelength multiplication (what happens to the spectrum of light as it progresses through the imaging system) and the integral just means the area under each of the three resulting curves (integration is what the pixels do during exposure).  Together they represent an inner or dot product.  All variables in front of the integral were previously described and can be considered constant for a given photographic setup. Continue reading Connecting Photographic Raw Data to Tristimulus Color Science
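A minimal numerical sketch of Equation (1) might look as follows.  The flat radiance and Gaussian Spectral Sensitivity Functions below are made-up placeholders, and the integral is approximated by a simple rectangle-rule sum:

```python
import numpy as np

wl = np.arange(380.0, 781.0, 5.0)                 # wavelength samples, nm
L = np.full_like(wl, 0.01)                        # flat spectral radiance (placeholder)

def bump(center, width):
    """Gaussian stand-in for one Spectral Sensitivity Function."""
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

SSF = np.stack([bump(600, 40), bump(540, 40), bump(460, 40)])   # r, g, b

p, t, N = 4.0e-6, 1 / 100, 8.0                    # pixel pitch (m), time (s), f-number
k = np.pi * p**2 * t / (4 * N**2)                 # constants in front of the integral

# Wavelength-by-wavelength multiplication, then area under each curve:
rgb = k * (L * SSF).sum(axis=1) * (wl[1] - wl[0])
```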

The Physical Units of Raw Data

In the previous article we (I) learned that the Spectral Sensitivity Functions of a given digital camera and lens are the result of the interaction of light from the scene with all of the spectrally varied components that make up the imaging system: mainly the lens, the ultraviolet/infrared hot mirror, the Color Filter Array and other filters and, finally, the photoelectric layer of the sensor, which is normally silicon in consumer kit.

Figure 1. The journey of light from source to sensor.  Cone Ω will play a starring role in the narrative that follows.

In this one we will put the process on a more formal theoretical footing, setting the stage for the next few on the role of white balance.

Continue reading The Physical Units of Raw Data

The Spectral Response of Digital Cameras

Photography works because visible light from one or more sources reaches the scene and is reflected in the direction of the camera, which then captures a signal proportional to it.  The journey of light can be described in integrated units of power all the way to the sensor, for instance so many watts per square meter. However, ever since Newton we have known that such total power is in fact the weighted sum of contributions from every frequency that makes up the light – what he called its spectrum.

Our ability to see and record color depends on knowing the distribution of the power contained within a subset of these frequencies and how it interacts with the various objects in its path.  This article is about how a typical digital camera for photographers interacts with such a spectrum from the scene: we will dissect what is sometimes referred to as the system’s Spectral Response or Sensitivity.

Figure 1. Spectral Sensitivity Functions of an arbitrary imaging system, resulting from combining the responses of the various components described in the article.

Continue reading The Spectral Response of Digital Cameras

Linear Color: Applying the Forward Matrix

Now that we know how to create a 3×3 linear matrix to convert white balanced and demosaiced raw data into XYZ_{D50}  connection space – and where to obtain the 3×3 linear matrix to then convert it to a standard output color space like sRGB – we can take a closer look at the matrices and apply them to a real world capture chosen for its wide range of chromaticities.

Figure 1. Image with color converted using the forward linear matrix discussed in the article.

Continue reading Linear Color: Applying the Forward Matrix

An Introduction to Color in Digital Cameras

This article will set the stage for a discussion on how pleasing color is produced during raw conversion.  The easiest way to understand how a camera captures and processes ‘color’ is to start with an example of how the human visual system does it.

An Example: Green

Light from the sun strikes leaves on a tree.   The foliage of the tree absorbs some of the light and reflects the rest diffusely towards the eye of a human observer.  The eye focuses the image of the foliage onto the retina at its back.  Near the center of the retina there is a small circular area called the fovea centralis, which is dense with light receptors of well defined spectral sensitivities called cones. Information from the cones is pre-processed by neurons and carried by nerve fibers via the optic nerve to the brain where, after some additional psychovisual processing, we recognize the color of the foliage as green[1].

Figure 1. The human eye absorbs light from an illuminant reflected diffusely by the object it is looking at.

Continue reading An Introduction to Color in Digital Cameras

Information Transfer – The ISO Invariant Case

We know that the best Information Quality possible collected from the scene by a digital camera is available right at the output of the sensor and it will only be degraded from there.  This article will discuss what happens to this information as it is transferred through the imaging system and stored in the raw data.  It will use the simple language outlined in the last post to explain how and why the strategy for Capturing the best Information or Image Quality (IQ) possible from the scene in the raw data involves only two simple steps:

1) Maximizing the collected Signal given artistic and technical constraints; and
2) Choosing what part of the Signal to store in the raw data and what part to leave behind.

The second step is only necessary if your camera is incapable of storing the entire Signal at once (that is, if it is not ISO invariant) and will be discussed in a future article.  In this post we will assume an ISOless imaging system.

Continue reading Information Transfer – The ISO Invariant Case

How to Measure the SNR Performance of Your Digital Camera

Determining the Signal to Noise Ratio (SNR) curves of your digital camera at various ISOs and extracting from them the underlying IQ metrics of its sensor can help answer a number of questions useful to photography.  For instance: whether and when to raise ISO; what its dynamic range is; how noisy its output could be in various conditions; or how well it is likely to perform compared to other Digital Still Cameras.  As it turns out, obtaining the relevant data is a little time consuming but not that hard.  All you need is your camera, a suitable target, a neutral density filter, dcraw or libraw or similar software to access the linear raw data – and a spreadsheet.
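The core of the measurement can be sketched in a few lines.  The frames below are synthetic Poisson stand-ins for two identical raw captures; averaging them gives the signal while differencing them isolates temporal noise (the usual two-frame trick, which cancels fixed-pattern noise):

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 1000.0                                   # mean signal in e-, hypothetical
frame_a = rng.poisson(signal, (200, 200)).astype(float)
frame_b = rng.poisson(signal, (200, 200)).astype(float)

mean_signal = (frame_a + frame_b).mean() / 2      # signal from the sum
noise = (frame_a - frame_b).std() / np.sqrt(2)    # difference doubles the variance
snr = mean_signal / noise                         # ~sqrt(1000) for pure shot noise
```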

Continue reading How to Measure the SNR Performance of Your Digital Camera

Comparing Sensor SNR

We’ve seen how SNR curves can help us analyze digital camera IQ:


In this post we will use them to help us compare digital cameras, independently of format size. Continue reading Comparing Sensor SNR

Equivalence and Equivalent Image Quality: Signal

One of the fairest ways to compare the performance of two cameras of different physical characteristics and specifications is to ask a simple question: which photograph would look better if the cameras were set up side by side, captured identical scene content and their output were then displayed and viewed at the same size?

Achieving this setup and answering the question is anything but intuitive because many of the variables involved, like depth of field and sensor size, are not those we are used to dealing with when taking photographs.  In this post I would like to attack this problem by first estimating the output signal of different cameras when set up to capture Equivalent images.

It’s a bit long so I will give you the punch line first:  digital cameras of the same generation set up equivalently will typically generate more or less the same signal in e^- independently of format.  Ignoring noise, lenses and aspect ratio for a moment and assuming the same camera gain and number of pixels, they will produce identical raw files. Continue reading Equivalence and Equivalent Image Quality: Signal

What is the Effective Quantum Efficiency of my Sensor?

Now that we know how to determine how many photons impinge on a sensor we can estimate its Effective Quantum Efficiency, that is the efficiency with which it turns such a photon flux (n_{ph}) into photoelectrons (n_{e^-} ), which will then be converted to raw data to be stored in the capture’s raw file:

(1)   \begin{equation*} EQE = \frac{n_{e^-} \text{ produced by average pixel}}{n_{ph} \text{ incident on average pixel}} \end{equation*}

I call it ‘Effective’, as opposed to ‘Absolute’, because it represents the probability that a photon arriving on the sensing plane from the scene will be converted to a photoelectron by a given pixel in a digital camera sensor.  It therefore includes the effect of microlenses, fill factor, CFA and other filters on top of silicon in the pixel.  Whether Effective or Absolute, QE is usually expressed as a percentage, as seen below in the specification sheet of the KAF-8300 by On Semiconductor, without IR/UV filters:

For instance if  an average of 100 photons per pixel were incident on a uniformly lit spot on the sensor and on average each pixel produced a signal of 20 photoelectrons we would say that the Effective Quantum Efficiency of the sensor is 20%.  Clearly the higher the EQE the better for Image Quality parameters such as SNR. Continue reading What is the Effective Quantum Efficiency of my Sensor?
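The worked example above reduces to a one-line calculation:

```python
def effective_qe(electrons_per_pixel, photons_per_pixel):
    """Effective Quantum Efficiency per Equation (1)."""
    return electrons_per_pixel / photons_per_pixel

# 100 incident photons, 20 photoelectrons produced:
print(f"EQE = {effective_qe(20, 100):.0%}")       # EQE = 20%
```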

How Many Photons on a Pixel at a Given Exposure

How many photons impinge on a pixel illuminated by a known light source during exposure?  To answer this question in a photographic context under daylight we need to know the effective area of the pixel, the Spectral Power Distribution of the illuminant and the relevant Exposure.

We can typically estimate the pixel’s effective area and the Spectral Power Distribution of the illuminant – so to obtain the answer all we need to determine is what Exposure the resulting irradiance corresponds to.

Continue reading How Many Photons on a Pixel at a Given Exposure

Photons Emitted by Light Source

How many photons are emitted by a light source? To answer this question we need to evaluate the following simple formula at every wavelength in the spectral range of interest and add the values up:

(1)   \begin{equation*} \frac{\text{Power of Light in }W/m^2}{\text{Energy of Average Photon in }J/photon} \end{equation*}

The Power of Light emitted in W/m^2 is called Spectral Exitance, with the symbol M_e(\lambda) when referred to  units of energy.  The energy of one photon at a given wavelength is

(2)   \begin{equation*} e_{ph}(\lambda) = \frac{hc}{\lambda}\text{    joules/photon} \end{equation*}

with \lambda the wavelength of light in meters, and h and c Planck’s constant and the speed of light in the chosen medium respectively.  Since watts are joules per second, the units of (1) are photons/m^2/s.  Writing it more formally:

(3)   \begin{equation*} M_{ph} = \int\limits_{\lambda_1}^{\lambda_2} \frac{M_e(\lambda)\cdot \lambda \cdot d\lambda}{hc} \text{  $\frac{photons}{m^2\cdot s}$} \end{equation*}
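Equation (3) is straightforward to evaluate numerically.  The flat spectral exitance below is an illustrative placeholder (about 1 W/m^2 spread evenly across the visible range), integrated with a simple rectangle-rule sum:

```python
import numpy as np

h, c = 6.626e-34, 2.998e8                         # J*s, m/s

wl = np.linspace(380e-9, 780e-9, 401)             # wavelengths, m
Me = np.full_like(wl, 1.0 / 400e-9)               # ~1 W/m^2 spread evenly (placeholder)

# M_ph per Equation (3), rectangle-rule approximation of the integral:
M_ph = ((Me * wl / (h * c)) * (wl[1] - wl[0])).sum()
print(f"{M_ph:.3g} photons/m^2/s")                # ~3e18 for 1 W/m^2 of visible light
```

Sanity check: at a mid-visible wavelength of ~580 nm one photon carries hc/λ ≈ 3.4e-19 J, so 1 W should indeed correspond to roughly 3e18 photons per second.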

Continue reading Photons Emitted by Light Source

Converting Radiometric to Photometric Units

When first approaching photographic science a photographer is often confused by the unfamiliar units used.  In high school we were taught energy and power in radiometric units like watts ($W$) – while in photography the same concepts are dealt with in photometric units like lumens ($lm$).

Once one realizes that both sets of units refer to the exact same physical process – energy transfer – just fine-tuned for two slightly different purposes, it becomes a lot easier to interpret the science behind photography through the theory one already knows.

It all boils down to one simple notion: lumens are watts as perceived by the Human Visual System.
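That notion can be written down directly: luminous flux is radiant flux weighted by the photopic luminosity function V(λ) and scaled by 683 lm/W.  In the sketch below the Gaussian is a crude stand-in for the real CIE V(λ) curve:

```python
import numpy as np

wl = np.linspace(380.0, 780.0, 401)               # nm, 1 nm spacing
V = np.exp(-0.5 * ((wl - 555) / 42) ** 2)         # rough stand-in for CIE V(lambda)

def lumens(spd_w_per_nm):
    """Luminous flux of a spectral power distribution sampled on wl."""
    return 683 * (spd_w_per_nm * V).sum() * (wl[1] - wl[0])

# 1 W concentrated at 555 nm comes out at the familiar 683 lm:
narrow = np.where(np.abs(wl - 555) < 0.5, 1.0, 0.0)
print(f"{lumens(narrow):.0f} lm")                 # 683 lm
```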

Continue reading Converting Radiometric to Photometric Units

How Many Photons on a Pixel

How many visible photons hit a pixel on my sensor?  The answer depends on Exposure, the Spectral Power Distribution of the arriving light and the effective pixel area.  With a few simplifying assumptions it is not difficult to calculate that with a typical Daylight illuminant the number is roughly 11,760 photons per lx-s per \mu m^2.  Without the simplifying assumptions* it reduces to about 11,000. Continue reading How Many Photons on a Pixel
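A quick monochromatic sanity check is easy to run: at 555 nm, where 1 lx is by definition 1/683 W/m^2, the count works out to roughly 4,100 photons per lx-s per μm^2.  That is lower than the broadband figures above because daylight also carries power at wavelengths where the eye is less sensitive, so more photons are needed per lumen:

```python
h, c = 6.626e-34, 2.998e8                         # J*s, m/s

wavelength = 555e-9                               # m; peak of photopic vision
w_per_m2_per_lx = 1 / 683                         # by definition at 555 nm
photons = w_per_m2_per_lx / (h * c / wavelength)  # photons per m^2 per lx-s
photons_per_um2 = photons * 1e-12                 # per square micron
print(f"{photons_per_um2:.0f} photons / (lx*s*um^2)")
```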

Exposure and ISO

The in-camera ISO dial is a ballpark milkshake of an indicator to help choose parameters that will result in a ‘good’ perceived picture. Key ingredients to obtain a ‘good’ perceived picture are 1) ‘good’ Exposure and 2) ‘good’ in-camera or in-computer processing. It’s easier to think about them as independent processes and that comes naturally to you because you shoot raw in manual mode and you like to PP, right? Continue reading Exposure and ISO

What Is Exposure

When capturing a typical photograph, light from one or more sources is reflected from the scene, reaches the lens, goes through it and eventually hits the sensing plane.

In photography Exposure is the quantity of visible light per unit area incident on the image plane during the time that it is exposed to the scene.  Exposure is intuitively proportional to Luminance from the scene $L$ and exposure time $t$.  It is inversely proportional to the square of the lens f-number $N$ because the f-number determines the relative size of the cone of light captured from the scene.  You can read more about the theory in the article on angles and the Camera Equation.
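Putting those proportionalities together gives the familiar form of the camera equation for photometric exposure, H = πLt/(4N²), valid for a distant subject and ignoring transmission losses, vignetting and cos⁴ falloff.  A minimal sketch:

```python
import math

def exposure(L_cd_per_m2, t_s, N):
    """Photometric exposure H = pi*L*t/(4*N^2), in lux-seconds."""
    return math.pi * L_cd_per_m2 * t_s / (4 * N**2)

# 1000 cd/m^2 scene luminance, 1/100 s at f/8:
H = exposure(1000, 1 / 100, 8)
print(f"H = {H:.3f} lx*s")                        # ~0.123 lx*s
```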

Continue reading What Is Exposure