In this article we shall find that the effect of a Bayer CFA on the spatial frequencies, and hence the ‘sharpness’, captured by a sensor compared to those from a corresponding monochrome imager ranges from no loss at all to a halving of the potentially unaliased range, depending on the chrominance content of the image projected on the sensing plane and on the direction in which the spatial frequencies are being stressed.
A Little Sampling Theory
We know from Goodman and previous articles that the sampled image $I_s(x,y)$ captured in the raw data by a typical current digital camera can be represented mathematically as the continuous image on the sensing plane $I(x,y)$ multiplied by a rectangular lattice of Dirac delta functions positioned at the center of each pixel:

$$I_s(x,y) \;=\; I(x,y)\cdot \mathrm{comb}\!\left(\frac{x}{p}\right)\mathrm{comb}\!\left(\frac{y}{p}\right)$$

with the comb functions representing the two dimensional grid of delta functions, sampling pitch $p$ apart horizontally and vertically. To keep things simple the sensing plane is considered here to be the imager’s silicon itself, which sits below microlenses and other filters, so the continuous image is assumed to incorporate their effects as well as the pixel aperture’s.
In the last article we saw that the Point Spread Function and the Modulation Transfer Function of a lens could be easily obtained numerically by applying Discrete Fourier Transforms to its generalized exit pupil function twice in sequence.
Obtaining the 2D DFTs is easy: simply feed MxN numbers representing the two dimensional complex image of the pupil function in its plane to a Fast Fourier Transform routine and, presto, it produces MxN numbers that represent the amplitude of the PSF on the sensing plane, as shown below for the pupil function of a perfect lens with a circular aperture and MxN = 1024×1024.
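As a minimal numpy sketch of the procedure (the aperture radius in samples is an arbitrary illustrative choice, not a value from the article):

```python
import numpy as np

N = 1024                          # grid size, MxN = 1024x1024 as above
x = np.arange(N) - N // 2         # sample indices centered on the grid
X, Y = np.meshgrid(x, x)

# Pupil function of a perfect lens: 1 inside a circular aperture, 0 outside
radius = 64                       # aperture radius in samples (illustrative)
pupil = (X**2 + Y**2 <= radius**2).astype(float)

# Amplitude of the PSF on the sensing plane: one 2D DFT of the pupil.
# ifftshift puts the pupil center at the origin; fftshift re-centers the result.
amplitude = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))

# Intensity PSF is the squared magnitude, here normalized to a peak of 1
psf = np.abs(amplitude) ** 2
psf /= psf.max()
```

A second DFT of the suitably normalized PSF would then yield the MTF, the ‘twice in sequence’ mentioned above.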
Simple and fast. Wonderful. Below is a slice through the center, the 513th row, zoomed in. Hmm… what are the physical units on the axes of the data produced by the DFT?
Less easy – and the subject of this article as seen from a photographic perspective.
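One way to pin the units down is the Fraunhofer scaling Goodman derives: an N-point DFT maps a pupil plane sampled every $\Delta u$ to a sensing plane sampled every $\lambda z / (N\,\Delta u)$. A sketch under that assumption, with all numeric parameters illustrative rather than taken from the article:

```python
import numpy as np

# Illustrative parameters (assumptions, not from the article):
wavelength = 0.5e-6      # λ: 500 nm, mid-visible light, in meters
z = 0.05                 # exit pupil to sensing plane distance: 50 mm
N = 1024                 # DFT size, as in the 1024x1024 example
du = 1e-4                # pupil-plane sample spacing: 0.1 mm per sample

# Fraunhofer/Fourier-transforming geometry: the DFT output is sampled every
dx = wavelength * z / (N * du)           # meters per PSF sample

# Physical axis for the 1024-sample slice through the PSF center
x_axis_m = (np.arange(N) - N // 2) * dx  # zero at the optical axis
print(dx * 1e6)                          # PSF sample spacing in microns
```

With these numbers each PSF sample spans about a quarter of a micron, which is why the Airy pattern of a fast lens occupies only a handful of samples.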
Goodman, in his excellent Introduction to Fourier Optics, describes how an image is formed on a camera sensing plane starting from first principles, that is, electromagnetic propagation according to Maxwell’s wave equation. If you want the play-by-play account I highly recommend his math-intensive book. But for the budding photographer it is sufficient to know what happens at the exit pupil of the lens, because after that the transformations to Point Spread and Modulation Transfer Functions are straightforward, as we will show in this article.
The following diagram exemplifies the last few millimeters of the journey that light from the scene has to travel in order to smash itself against our camera’s sensing medium. Light from the scene arrives at the front of the lens in the form of an electromagnetic field. It goes through the lens, being partly blocked and distorted by it (we’ll call this blocking/distorting effect the lens’s pupil function), and finally arrives at its back end, the exit pupil. The complex light field at the exit pupil’s two dimensional plane is now as shown below:
Having shown that our simple two dimensional MTF model is able to predict the performance of the combination of a perfect lens and square monochrome pixel we now turn to the effect of the sampling interval on spatial resolution according to the guiding formula:

$$MTF_{Sys} \;=\; \left(\widehat{PSF_{lens}} \cdot \widehat{PIX}\right) \ast \widehat{\delta\delta}$$
The hats in this case mean the Fourier Transform of the respective component normalized to 1 at the origin (zero spatial frequency), that is the individual MTFs of the perfect lens PSF, the perfect square pixel and the delta grid.
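As a sketch of how the first two hatted terms combine numerically, assuming (illustratively) a 500 nm wavelength, an f/5.6 perfect lens and a 4.5 µm square pixel at 100% fill factor:

```python
import numpy as np

# Illustrative parameters (assumptions, not from the article)
wavelength = 0.5e-6          # λ = 500 nm
f_number = 5.6               # perfect lens working at f/5.6
pitch = 4.5e-6               # pixel pitch in meters, 100% fill factor

f = np.linspace(0, 300, 601) * 1e3     # spatial frequency, cycles/m (0-300 cy/mm)

# Diffraction MTF of a perfect (aberration-free) circular aperture
fc = 1 / (wavelength * f_number)       # diffraction cutoff frequency
s = np.clip(f / fc, 0, 1)
mtf_lens = (2 / np.pi) * (np.arccos(s) - s * np.sqrt(1 - s**2))

# MTF of a square pixel aperture: |sinc| of pitch times frequency
mtf_pixel = np.abs(np.sinc(pitch * f))  # np.sinc(x) = sin(pi x)/(pi x)

# Product of the two component MTFs, each equal to 1 at the origin
mtf_sys = mtf_lens * mtf_pixel
```

Sampling then contributes the third term: convolution with the delta grid replicates this combined spectrum at multiples of 1/pitch, the source of aliasing.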
Sampling in the Spatial and Frequency Domains
Sampling is expressed mathematically as a Dirac delta function at the center of each pixel (the red dots below).
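To see why the delta grid matters, here is a small numpy demonstration (the frequencies, in cycles per pitch, are arbitrary illustrative choices): a sinusoid above the Nyquist frequency, sampled only at the pixel centers, produces exactly the same samples as a lower-frequency alias.

```python
import numpy as np

pitch = 1.0                     # sampling pitch (arbitrary units)
nyquist = 1 / (2 * pitch)       # 0.5 cycles per pitch

# Sample a sinusoid above Nyquist at the pixel centers (the delta grid)
f_signal = 0.8                  # cycles per pitch, above Nyquist
n = np.arange(32)               # pixel centers, one pitch apart
samples = np.cos(2 * np.pi * f_signal * n * pitch)

# The samples are indistinguishable from a lower 'aliased' frequency
f_alias = abs(f_signal - 1 / pitch)        # 0.2 cycles per pitch
aliased = np.cos(2 * np.pi * f_alias * n * pitch)
print(np.allclose(samples, aliased))       # → True
```

Once the deltas have done their work, no amount of post-processing can tell the two frequencies apart.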
Ever since Einstein we’ve been able to say that humans ‘see’ because information about the scene is carried to the eyes by photons reflected by it. So when we talk about Information in photography we are referring to information about the energy and distribution of photons arriving from the scene. The more complete this information, the better we ‘see’. No photons = no information = no see; few photons = little information = see poorly = poor IQ; more photons = more information = see better = better IQ.
Sensors in digital cameras work similarly, their output ideally being the energy and location of every photon incident on them during Exposure. That’s the full information ideally required to recreate an exact image of the original scene for the human visual system, no more and no less. In practice however we lose some of this information along the way during sensing, so we need to settle for approximate location and energy, in the form of photoelectron counts collected by pixels of finite area, each often tied to a single color by a color filter array.