Imagine a bucolic scene on a clear sunny day at the equator, sand warmed by the tropical sun with a typical irradiance ($E$) of about 1000 watts per square meter. As discussed earlier, we could express this quantity as illuminance in lumens per square meter (lux) – or as a certain number of photons per second over an area of interest.
$N = \displaystyle\int \frac{\lambda\, E_\lambda}{h\,c}\, d\lambda \qquad \left[\frac{\text{photons}}{\text{s}\cdot\text{m}^2}\right] \qquad (1)$
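To get a feel for the numbers, here is a minimal Python sketch of equation (1), assuming for simplicity that all of the light is monochromatic at 555 nm; the function name and constants are ours, for illustration only (real sunlight would require integrating over its spectrum):

```python
# Photons per second per square meter from a given irradiance, per equation (1).
# Minimal sketch: treats the light as monochromatic at 555 nm.

H = 6.62607015e-34   # Planck's constant, J*s
C = 2.99792458e8     # speed of light in vacuum, m/s

def photon_flux(irradiance_w_m2, wavelength_m):
    """Photon flux in photons / (s * m^2) for monochromatic light."""
    photon_energy = H * C / wavelength_m   # joules per photon
    return irradiance_w_m2 / photon_energy

print(f"{photon_flux(1000.0, 555e-9):.2e} photons/s/m^2")   # ~2.8e21
```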
How many photons per second per unit area can we expect on the camera’s image plane (its irradiance)?
In answering this question we will discover the Camera Equation as a function of opening angles – and set the stage for the next article, on lens pupils. By the way, all quantities in this article depend on wavelength; that dependence will be left implicit in the formulas to make them more readable.
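As a preview of where we are headed, textbook radiometry for a lossless, aberration-free lens (a standard result, stated here with our own symbols) relates image plane irradiance $E_i$ to scene radiance $L$ through the half-angle $\theta'$ that the Exit Pupil subtends at the image point:

$E_i = \pi\, L\, \sin^2 \theta'$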
In the previous article we determined that the three values recorded in the raw data at the center of the image plane – in units of Data Numbers per pixel, by a digital camera and lens, as a function of absolute spectral radiance at the lens – can be estimated as follows:
$DN_{rgb} = k \displaystyle\int L_e(\lambda)\; SSF_{rgb}(\lambda)\; d\lambda \qquad (1)$
with subscript $e$ indicating absolute-referred units and $SSF_{rgb}$ the three system Spectral Sensitivity Functions. In this series of articles, multiplication is wavelength by wavelength (what happens to the spectrum of light as it progresses through the imaging system) and the integral just means the area under each of the three resulting curves (integration is what the pixels do during exposure). Together they represent an inner or dot product. All variables in front of the integral (collected into the constant $k$ above) were previously described and can be considered constant for a given photographic setup.
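As a sketch of how equation (1) can be evaluated numerically – with made-up placeholder curves standing in for the actual spectral radiance and SSFs, and the single constant $k$ for the factors in front of the integral:

```python
import numpy as np

# Structure of equation (1): wavelength-by-wavelength product of spectral
# radiance and each SSF, then the area under the resulting curves.
# All curves and the constant k are placeholders, not measured data.

wl = np.arange(400.0, 721.0, 5.0)      # wavelengths, nm
dl = wl[1] - wl[0]                     # integration step, nm

L_e = np.full(wl.size, 0.01)           # flat spectral radiance, placeholder
ssf = {                                # hypothetical Gaussian-shaped SSFs
    "r": np.exp(-0.5 * ((wl - 600.0) / 40.0) ** 2),
    "g": np.exp(-0.5 * ((wl - 530.0) / 40.0) ** 2),
    "b": np.exp(-0.5 * ((wl - 460.0) / 40.0) ** 2),
}
k = 1.0e4                              # stands in for the constants up front

dn = {ch: k * np.sum(L_e * s) * dl for ch, s in ssf.items()}
print(dn)                              # three raw values, Data Numbers per pixel
```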
In the previous article we learned that the Spectral Sensitivity Functions of a given digital camera and lens are the result of the interaction of light from the scene with all of the spectrally varied components that make up the imaging system: mainly the lens, the ultraviolet/infrared hot mirror, the Color Filter Array and other filters and, finally, the photoelectric layer of the sensor, which is normally silicon in consumer kit.
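A toy numerical illustration of that chain of interactions follows; every curve in it is a crude stand-in for the real, measured one – only the wavelength-by-wavelength multiplication of components reflects the text:

```python
import numpy as np

# The system's green SSF as the product of component spectral curves.
# All component shapes below are rough placeholders.

wl = np.arange(380.0, 781.0, 5.0)                       # nm

lens       = np.full(wl.size, 0.92)                     # nearly flat transmission
hot_mirror = 1.0 / (1.0 + np.exp((wl - 680.0) / 10.0))  # cuts infrared
uv_cut     = 1.0 / (1.0 + np.exp((400.0 - wl) / 10.0))  # cuts ultraviolet
cfa_green  = np.exp(-0.5 * ((wl - 530.0) / 45.0) ** 2)  # green CFA dye
silicon_qe = np.clip(1.2 - np.abs(wl - 600.0) / 500.0, 0.0, 1.0)  # rough QE

ssf_green = lens * hot_mirror * uv_cut * cfa_green * silicon_qe
print(f"peak {ssf_green.max():.2f} at {wl[ssf_green.argmax()]:.0f} nm")
```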
In this one we will put the process on a more formal theoretical footing, setting the stage for the next few articles on the role of white balance.
Photography works because visible light from one or more sources reaches the scene and is reflected in the direction of the camera, which then captures a signal proportional to it. The journey of light can be described in integrated units of power all the way to the sensor, for instance so many watts per square meter. However, ever since Newton we have known that such total power is in fact the result of the weighted sum of contributions by every frequency that makes up the light, which he called its spectrum.
Our ability to see and record color depends on knowing the distribution of the power contained within a subset of these frequencies and how it interacts with the various objects in its path. This article is about how a typical digital camera for photographers interacts with such a spectrum from the scene: we will dissect what is sometimes referred to as the system’s Spectral Response or Sensitivity.
Goodman, in his excellent Introduction to Fourier Optics[1], describes how an image is formed on a camera sensing plane starting from first principles, that is, electromagnetic propagation according to Maxwell’s wave equation. If you want the play-by-play account I highly recommend his math-intensive book. But for the budding photographer it is sufficient to know what happens at the Exit Pupil of the lens, because after that the transformations to Point Spread and Modulation Transfer Functions are straightforward, as we will show in this article.
The following diagram exemplifies the last few millimeters of the journey that light from the scene has to travel in order to be absorbed by a camera’s sensing medium. Light from the scene in the form of field $U$ arrives at the front of the lens. It goes through the lens, being partly blocked and distorted by it, as it arrives at the lens’s virtual back end, the Exit Pupil; we’ll call this blocking/distorting function $P$. Other than in very simple cases, the Exit Pupil does not necessarily coincide with a specific physical element or Principal surface.[iv] It is a convenient mathematical construct which condenses all of the light transforming properties of a lens into a single plane.
The complex light field at the Exit Pupil’s two-dimensional plane is then as shown below (not to scale; the product of the two arrays is element-by-element):
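In code, Goodman’s recipe from the Exit Pupil onward might be sketched as follows, with an idealized flat incoming field and an unaberrated circular pupil as our own simplifying assumptions:

```python
import numpy as np

# From the Exit Pupil to PSF and MTF: multiply the incoming field by the
# pupil function element-by-element, Fourier transform, and take the squared
# magnitude for the intensity PSF; the normalized modulus of the PSF's
# transform is then the MTF.

N = 512
y, x = np.mgrid[-N // 2: N // 2, -N // 2: N // 2]

U = np.ones((N, N), dtype=complex)            # idealized incoming field
P = (np.hypot(x, y) <= N // 8).astype(float)  # circular, unaberrated pupil

U_exit = U * P                                # element-by-element product
psf = np.abs(np.fft.fftshift(np.fft.fft2(U_exit))) ** 2
psf /= psf.sum()                              # normalize energy to 1

mtf = np.abs(np.fft.fft2(psf))                # modulus of the PSF's transform
mtf /= mtf[0, 0]                              # MTF = 1 at zero frequency
```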
How do we translate captured image information into a stimulus that will produce the appropriate perception of color? It’s actually not that complicated[1].
Recall from the introductory article that a photon absorbed by a given cone type (L, M or S) in the fovea produces the same stimulus to the brain regardless of its wavelength[2]. Take the example of an observer whose eye focuses on the retina the image of a uniform object with a spectral photon distribution of 1000 photons/nm in the 400 to 720 nm wavelength range and no photons outside of it.
Because the system is linear, cones in the foveola will weigh the incoming photons by their relative sensitivity (probability) functions and add the result up, producing a stimulus proportional to the area under the relevant curve. For instance a cone of a given type may see about 321,000 photons arrive and produce a relative stimulus of about 94,700, the weighted area under its curve:
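A short sketch of this weighting, using the flat 1000 photons/nm spectrum from the example and a hypothetical Gaussian sensitivity curve in place of a real cone fundamental (so the stimulus figure will not match the 94,700 above):

```python
import numpy as np

# Flat spectrum from the example: 1000 photons/nm from 400 to 720 nm.
# The Gaussian below is a made-up cone sensitivity, not a real fundamental.

wl = np.arange(400, 721)                         # 1 nm steps, inclusive
spectrum = np.full(wl.size, 1000.0)              # photons per nm

cone = np.exp(-0.5 * ((wl - 530.0) / 55.0) ** 2) # relative sensitivity, 0..1

arriving = spectrum.sum()                        # 321,000 photons in the band
stimulus = (spectrum * cone).sum()               # weighted area under the curve
print(f"{arriving:.0f} photons -> relative stimulus {stimulus:.0f}")
```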
This article will set the stage for a discussion on how pleasing color is produced during raw conversion. The easiest way to understand how a camera captures and processes ‘color’ is to start with an example of how the human visual system does it.
An Example: Green
Light from the sun strikes leaves on a tree. The foliage of the tree absorbs some of the light and reflects the rest diffusely towards the eye of a human observer. The eye focuses the image of the foliage onto the retina at its back. Near the center of the retina there is a small circular area called fovea centralis which is dense with light receptors of well defined spectral sensitivities called cones. Information from the cones is pre-processed by neurons and carried by nerve fibers via the optic nerve to the brain where, after some additional psychovisual processing, we recognize the color of the foliage as green[1].
How many photons impinge on a pixel illuminated by a known light source during exposure? To answer this question in a photographic context under daylight we need to know the effective area of the pixel, the Spectral Power Distribution of the illuminant and the relative Exposure.
We can typically estimate the pixel’s effective area and the Spectral Power Distribution of the illuminant – so all we need to determine is what Exposure the relative irradiance corresponds to in order to obtain the answer.
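As a back-of-the-envelope sketch of the calculation – with assumed values for pixel pitch, sensing-plane irradiance, exposure time and wavelength, none of which come from the article:

```python
# Photons collected by one pixel during the exposure: photon irradiance at the
# sensing plane times effective pixel area times exposure time. Every number
# below is assumed for illustration; light treated as monochromatic green.

H = 6.62607015e-34           # Planck's constant, J*s
C = 2.99792458e8             # speed of light, m/s

pixel_pitch   = 4.5e-6       # m, assumed pixel side (100% fill factor)
exposure_time = 1.0 / 250.0  # s
irradiance    = 0.5          # W/m^2 at the sensing plane, assumed
wavelength    = 530e-9       # m

photon_energy = H * C / wavelength                 # J per photon
photons = irradiance * pixel_pitch**2 * exposure_time / photon_energy
print(f"{photons:.2e} photons per pixel")          # ~1e5 with these numbers
```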
How many photons are emitted by a light source? To answer this question we need to evaluate the following simple formula at every wavelength in the spectral range of interest and add the values up:
$N = \displaystyle\sum_{\lambda} \frac{M(\lambda)}{e_{ph}(\lambda)} \qquad (1)$
The Power of Light emitted per unit area in a small wavelength interval is called Spectral Exitance, with the symbol $M$ when referred to units of energy. The energy of one photon at a given wavelength is
$e_{ph}(\lambda) = \dfrac{h\,c}{\lambda} \qquad (2)$
with $\lambda$ the wavelength of light in meters, and $h$ and $c$ Planck’s constant and the speed of light in the chosen medium respectively. Since watts are joules per second, the units of (1) are therefore photons per second per square meter. Writing it more formally:
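Replacing the sum with an integral over the spectral range of interest – a reconstruction consistent with definitions (1) and (2) above – the photon exitance becomes

$N = \displaystyle\int_{\lambda_1}^{\lambda_2} \frac{M(\lambda)}{e_{ph}(\lambda)}\, d\lambda = \int_{\lambda_1}^{\lambda_2} \frac{\lambda\, M(\lambda)}{h\,c}\, d\lambda \qquad \left[\frac{\text{photons}}{\text{s}\cdot\text{m}^2}\right]$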