Category Archives: Optics

Wavefront to PSF to MTF: Physical Units

In the last article we saw that the Point Spread Function and the Modulation Transfer Function of a lens could be easily obtained numerically by applying Discrete Fourier Transforms to its generalized exit pupil function P twice in sequence.[1]

Obtaining the 2D DFTs is easy: simply feed M×N numbers representing the two dimensional complex image of the pupil function in its uv space to a Fast Fourier Transform routine and, presto, it produces M×N numbers that represent the amplitude of the PSF on the xy sensing plane, as shown below for the pupil function of a perfect lens with a circular aperture and M×N = 1024×1024.

Figure 1. 1a Left: Array of numbers representing a circular aperture (zeros for black and ones for white).  1b Right: Array of numbers representing the PSF of image 1a (contrast slightly boosted).
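For readers who want to try this at home, here is a minimal sketch of the computation in Python/NumPy (the examples in this series were generated in Matlab; the grid size and aperture radius below are arbitrary choices):

```python
import numpy as np

# Circular aperture: ones inside the pupil, zeros outside, as in Figure 1a.
M = N = 1024
radius = 128                              # pupil radius in samples (arbitrary)
u = np.arange(M) - M // 2
U, V = np.meshgrid(u, u)
pupil = ((U**2 + V**2) <= radius**2).astype(float)

# First DFT: the transform of the pupil is the amplitude on the xy sensing
# plane; its squared modulus is the intensity PSF shown in Figure 1b.
amp = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
psf = np.abs(amp)**2
psf /= psf.max()

# Second DFT: the normalized modulus of the PSF's transform is the MTF.
mtf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf))))
mtf /= mtf.max()

# The central slice shown in Figure 2 (row 513, counting from 1).
central_row = psf[M // 2, :]
```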

Simple and fast.  Wonderful.  Below is a slice through the center, the 513th row, zoomed in.  Hmm….  What are the physical units on the axes of the displayed data produced by the DFT?

Figure 2. A slice through the center of the PSF shown in Figure 1b.

Less easy – and the subject of this article as seen from a photographic perspective.
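To anticipate the punch line – the standard scaling for Fraunhofer propagation computed with a DFT, which the article works through from a photographic perspective – if the pupil plane is sampled at pitch \Delta u with M samples across, the resulting PSF samples land on the xy sensing plane at pitch

\[ \Delta x = \frac{\lambda\, z}{M\, \Delta u} \]

with \lambda the wavelength of light and z the distance from the exit pupil to the sensing plane.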

Continue reading Wavefront to PSF to MTF: Physical Units

Aberrated Wave to Image Intensity to MTF

Goodman, in his excellent Introduction to Fourier Optics[1], describes how an image is formed on a camera sensing plane starting from first principles, that is, electromagnetic propagation according to Maxwell’s wave equation.  If you want the play-by-play account I highly recommend his math-intensive book.  But for the budding photographer it is sufficient to know what happens at the exit pupil of the lens, because after that the transformations to Point Spread and Modulation Transfer Functions are straightforward, as we will show in this article.

The following diagram exemplifies the last few millimeters of the journey that light from the scene has to travel in order to smash itself against our camera’s sensing medium.  Light from the scene in the form of a field U arrives at the front of the lens.  It goes through the lens, being partly blocked and distorted by it (we’ll call this blocking/distorting function P), and finally arrives at its back end, the exit pupil.  The complex light field at the exit pupil’s two dimensional uv plane is now U\cdot P, as shown below:

Figure 1. Simplified schematic diagram of the space between the exit of a camera lens and its sensing plane. The space is filled with air.
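From here the two remaining steps are compact enough to state up front, in the notation and _{pu} normalization used throughout this series (a summary of the standard incoherent imaging result that the article derives):

\[ PSF \;\propto\; \left|\mathcal{F}\{U\cdot P\}\right|^{2}, \qquad MTF = \left|\mathcal{F}\{PSF\}\right|_{pu} \]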

Continue reading Aberrated Wave to Image Intensity to MTF

Taking the Sharpness Model for a Spin – II

This post will continue looking at the spatial frequency response measured by MTF Mapper off slanted edges in DPReview.com raw captures, and the corresponding fits by the ‘sharpness’ model discussed in the last few articles.  The model takes the physical parameters of the digital camera and lens as inputs and produces theoretical directional system MTF curves comparable to measured data.  As we will see, the model seems to be able to simulate these systems well – at least within this limited set of parameters.

The following fits refer to the green channel of a number of interchangeable lens digital camera systems with different lenses, pixel sizes and formats – from the current Medium Format 100MP champ to the 1/2.3″ 18MP sensor size also sometimes found in the best smartphones.  Here is the roster with the cameras as set up:

Table 1. The cameras and lenses under test.

Continue reading Taking the Sharpness Model for a Spin – II

Taking the Sharpness Model for a Spin

The series of articles starting here outlines a model of how the various physical components of a digital camera and lens can affect the ‘sharpness’ – that is, the spatial resolution – of the images captured in the raw data.  In this one we will pit the model against MTF curves obtained through the slanted edge method[1] from real world raw captures, both with and without an anti-aliasing filter.

With a few simplifying assumptions, which include ignoring aliasing and phase, the spatial frequency response (SFR or MTF) of a photographic digital imaging system near the center can be expressed as the product of the Modulation Transfer Function of each component in it.  For a current digital camera the main ones would typically be:

(1)   \begin{equation*} MTF_{sys} = MTF_{lens} (\cdot MTF_{AA}) \cdot MTF_{pixel} \end{equation*}

all in two dimensions. Continue reading Taking the Sharpness Model for a Spin
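As a taste of equation (1), here is a hedged one-dimensional sketch using textbook component MTFs; the wavelength, f-number, pixel pitch and AA displacement below are assumed values for illustration, not measurements from the cameras tested here:

```python
import numpy as np

lam = 0.53e-3          # wavelength in mm (green light)
N_f = 8.0              # f-number
pitch = 0.0042         # pixel pitch in mm (assumed 4.2 micron)
aa_split = 0.0042      # AA beam displacement in mm (assumed = 1 pixel)

f = np.linspace(0, 250, 1000)          # spatial frequency, cycles/mm
fc = 1.0 / (lam * N_f)                 # diffraction cutoff frequency

# Diffraction MTF of a perfect lens with a circular aperture.
s = np.clip(f / fc, 0, 1)
mtf_lens = (2 / np.pi) * (np.arccos(s) - s * np.sqrt(1 - s**2))

# Square pixel aperture: |sinc| (np.sinc(x) = sin(pi x)/(pi x)).
mtf_pixel = np.abs(np.sinc(f * pitch))

# Two-dot birefringent AA filter in one direction: |cos|. Drop this
# factor, per the parentheses in (1), for AA-less captures.
mtf_aa = np.abs(np.cos(np.pi * f * aa_split))

mtf_sys = mtf_lens * mtf_aa * mtf_pixel
```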

A Simple Model for Sharpness in Digital Cameras – Polychromatic Light

We now know how to calculate the two dimensional Modulation Transfer Function of a perfect lens affected by diffraction, defocus and third order Spherical Aberration – under monochromatic light at the given wavelength and f-number.  In digital photography, however, we almost never deal with light of a single wavelength.  So what effect does an illuminant with a wide spectral power distribution, passing through one of the color filters of a typical digital camera’s CFA before reaching the sensor, have on the spatial frequency responses discussed thus far?

Monochrome vs Polychromatic Light

Not much, it turns out. Continue reading A Simple Model for Sharpness in Digital Cameras – Polychromatic Light
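To make “not much” concrete, the polychromatic response can be sketched as the weighted average of monochromatic diffraction MTFs, with weights proportional to the light the sensor actually detects through a given CFA filter. The Gaussian weights below are a made-up stand-in for a real SPD × CFA product:

```python
import numpy as np

def mtf_diffraction(f, lam_mm, N):
    """Diffraction MTF of a perfect circular aperture at one wavelength."""
    fc = 1.0 / (lam_mm * N)
    s = np.clip(f / fc, 0.0, 1.0)
    return (2 / np.pi) * (np.arccos(s) - s * np.sqrt(1 - s**2))

f = np.linspace(0, 300, 600)                          # cycles/mm
N = 5.6                                               # assumed f-number
wavelengths = np.arange(0.44e-3, 0.67e-3, 0.01e-3)    # mm, visible band

# Toy green-peaked weighting: placeholder for SPD x CFA transmission.
weights = np.exp(-0.5 * ((wavelengths - 0.53e-3) / 0.04e-3)**2)

mtf_poly = sum(w * mtf_diffraction(f, lam, N)
               for w, lam in zip(weights, wavelengths)) / weights.sum()
```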

A Simple Model for Sharpness in Digital Cameras – Spherical Aberrations

Spherical Aberration (SA) is one key component missing from our MTF toolkit for modeling an ideal imaging system’s ‘sharpness’ in the center of the field of view in the frequency domain.  In this article formulas will be presented to compute the two dimensional Point Spread and Modulation Transfer Functions of the combination of diffraction, defocus and third order Spherical Aberration for an otherwise perfect lens with a circular aperture.

Spherical Aberrations result because most photographic lenses are designed with quasi-spherical surfaces that do not necessarily behave ideally in all situations.  For instance, they may focus light on slightly different planes depending on whether a given ray goes through the exit pupil closer to or farther from the optical axis, as shown below:

Figure 1. Top: an ideal spherical lens focuses all rays on the same focal point. Bottom: a practical lens with Spherical Aberration focuses rays on slightly different points depending on their radial distance from the optical axis at the exit pupil. Image courtesy Andrei Stroe.
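In the pupil-function framework of the previous articles, both defocus and third order SA enter as wavefront error terms across the exit pupil. A hedged NumPy sketch of the recipe (the coefficients, in waves, are arbitrary illustrations, not fitted values):

```python
import numpy as np

M = 512
r_pupil = 64                                   # pupil radius in samples
u = np.arange(M) - M // 2
U, V = np.meshgrid(u, u)
rho2 = (U**2 + V**2) / r_pupil**2              # normalized radius squared
aperture = (rho2 <= 1.0)

# Seidel wavefront error in waves: defocus (W020) plus third order SA (W040).
W020, W040 = 0.25, 0.5                         # assumed coefficients
W = W020 * rho2 + W040 * rho2**2
pupil = aperture * np.exp(1j * 2 * np.pi * W)

# PSF by DFT of the aberrated pupil, then MTF by a second DFT, as before.
psf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil))))**2
psf /= psf.max()
mtf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf))))
mtf /= mtf.max()
```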

Continue reading A Simple Model for Sharpness in Digital Cameras – Spherical Aberrations

A Simple Model for Sharpness in Digital Cameras – Defocus

This series of articles has dealt with modeling an ideal imaging system’s ‘sharpness’ in the frequency domain.  We looked at the effects of the hardware on spatial resolution: diffraction, sampling interval, sampling aperture (e.g. a squarish pixel), anti-aliasing OLPAF filters.  The next two posts will deal with modeling typical simple imperfections in the system: defocus and spherical aberrations.

Defocus = OOF

Defocus means that the sensing plane is not exactly where it needs to be for image formation in our ideal imaging system: the image is therefore out of focus (OOF).  Said another way, light from a distant star would go through the lens but converge either behind or in front of the sensing plane, as shown in the following diagram, for a lens with a circular aperture:

Figure 1. Top to bottom: Back Focus, In Focus, Front Focus.  To the right is how the corresponding PSF would look on the sensing plane.  Image under license courtesy of Brion.
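In the notation of this series, defocus can be written as a quadratic wavefront error across the exit pupil – a standard result, with p the unaberrated pupil transmission, \rho the normalized pupil radius and W_{020} the peak defocus coefficient:

\[ P(u,v) = p(u,v)\, e^{\,i \frac{2\pi}{\lambda} W_{020}\, \rho^2} \]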

Continue reading A Simple Model for Sharpness in Digital Cameras – Defocus

A Simple Model for Sharpness in Digital Cameras – Aliasing

Having shown that our simple two dimensional MTF model is able to predict the performance of the combination of a perfect lens and square monochrome pixel, we now turn to the effect of the sampling interval on spatial resolution according to the guiding formula:

(1)   \begin{equation*} MTF_{Sys2D} = \left|(\widehat{ PSF_{lens} }\cdot \widehat{PIX_{ap} })\ast\ast\: \widehat{\delta\delta_{pitch}}\right|_{pu} \end{equation*}

The hats in this case mean the Fourier Transform of the respective component, normalized to 1 at the origin (the _{pu} subscript): that is, the individual MTFs of the perfect lens PSF, the perfect square pixel and the delta grid.
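To see what that convolution does before diving in, here is a hedged one-dimensional sketch: the pre-sampling MTF (lens × pixel) gets replicated at every multiple of the sampling frequency, and the copies that reach below Nyquist are the aliases. Magnitudes are simply summed, consistent with this series’ phase-ignoring assumption; the numbers are illustrative:

```python
import numpy as np

# Illustrative values, not measurements: green light, f/4, 4.5 micron pitch.
lam, N, pitch = 0.53e-3, 4.0, 0.0045       # mm
f = np.linspace(0, 3 / pitch, 2000)        # cycles/mm, to 3x the sampling frequency

# Pre-sampling MTF: diffraction-limited lens times square pixel aperture.
fc = 1 / (lam * N)
s = np.clip(f / fc, 0, 1)
mtf_pre = (2/np.pi) * (np.arccos(s) - s*np.sqrt(1 - s**2)) * np.abs(np.sinc(f * pitch))

# Convolving with the transform of the delta grid replicates mtf_pre at
# every multiple of 1/pitch; copies folding below Nyquist, 1/(2*pitch),
# are the aliases.
f_signed = np.concatenate([-f[:0:-1], f])
m_signed = np.concatenate([mtf_pre[:0:-1], mtf_pre])
mtf_sampled = sum(np.interp(f, f_signed + k/pitch, m_signed, left=0, right=0)
                  for k in range(-3, 4))
```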

Sampling in the Spatial and Frequency Domains

Sampling is expressed mathematically as a Dirac delta function at the center of each pixel (the red dots below).

Figure 1. Left, 1a: A highly zoomed (3200%) image of the lens PSF, an Airy pattern, projected onto the imaging plane where the sensor sits. Pixels are shown outlined in yellow; a red dot marks the sampling coordinates. Right, 1b: The sampled image zoomed to 16000%, 5x as much, because each pixel is 5 linear units on a side.

Continue reading A Simple Model for Sharpness in Digital Cameras – Aliasing

A Simple Model for Sharpness in Digital Cameras – II

Now that we know from the introductory article that the spatial frequency response of a typical perfect digital camera and lens can be modeled simply as the product of the Modulation Transfer Functions of the lens and pixel aperture, convolved with a Dirac delta grid at cycles-per-pixel spacing

(1)   \begin{equation*} MTF_{Sys2D} = \left|(\widehat{ PSF_{lens} }\cdot \widehat{PIX_{ap} })\ast\ast\: \widehat{\delta\delta_{pitch}}\right|_{pu} \end{equation*}

we can take a closer look at each of those components (pu here indicating normalization).   I used Matlab to generate the examples below but you can easily do the same in a spreadsheet.  Here is the code if you wish to follow along. Continue reading A Simple Model for Sharpness in Digital Cameras – II

A Simple Model for Sharpness in Digital Cameras – I

The next few posts will describe a linear spatial resolution model that can help a photographer better understand the main variables involved in evaluating the ‘sharpness’ of photographic equipment and related captures. I will show numerically that the combined spatial frequency response (MTF) of a perfect AA-less monochrome digital camera and lens in two dimensions can be described as the normalized multiplication of the Fourier Transform (FT) of the lens Point Spread Function by the FT of the (square) pixel footprint, convolved with the FT of a rectangular grid of Dirac delta functions centered at each pixel, as described more fully in the article

    \[ MTF_{2D} = \left|(\widehat{ PSF_{lens} }\cdot \widehat{PIX_{ap} })\ast\ast\: \widehat{\delta\delta_{pitch}}\right|_{pu} \]

With a few simplifying assumptions we will see that the effect of the lens and sensor on the spatial resolution of the continuous image on the sensing plane can be broken down into these simple components.  The overall ‘sharpness’ of the captured digital image can then be estimated by combining the ‘sharpness’ of each of them. Continue reading A Simple Model for Sharpness in Digital Cameras – I

A Longitudinal CA Metric for Photographers

While perusing Jim Kasson’s excellent Longitudinal Chromatic Aberration tests[1] I was impressed by the quantity and quality of the information the resulting data provides.  Longitudinal, or Axial, CA is a form of defocus and as such it cannot be effectively corrected during raw conversion, so having a lens well compensated for it will provide a real and tangible improvement in the sharpness of final images.  How much of an improvement?

In this article I suggest one such metric for the Longitudinal Chromatic Aberrations (LoCA) of a photographic imaging system: Continue reading A Longitudinal CA Metric for Photographers

The Units of Spatial Resolution

Several sites perform spatial resolution ‘sharpness’ testing of imaging systems for photographers (i.e. ‘lens+digital camera’) and publish results online.  You can also measure your own equipment relatively easily to determine how sharp your hardware is.  However, comparing results from site to site and to your own can be difficult and/or misleading, starting from the multiplicity of units used: cycles/pixel, line pairs/mm, line widths/picture height, line pairs/image height, cycles/picture height etc.

This post will address the units involved in spatial resolution measurement using as an example readings from the slanted edge method.
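The conversions between those units ultimately come down to two numbers: the pixel pitch and the picture height in pixels. A couple of hedged helper sketches (the function names and arguments are my own, not a standard API):

```python
# Hedged helpers for converting slanted-edge readings between common units.
def cy_px_to_lp_mm(cy_px, pitch_um):
    """cycles/pixel -> line pairs per mm (1 cycle = 1 line pair)."""
    return cy_px * 1000.0 / pitch_um

def cy_px_to_lw_ph(cy_px, height_px):
    """cycles/pixel -> line widths per picture height (2 line widths per cycle)."""
    return cy_px * 2.0 * height_px

# Example: 0.25 cycles/pixel on a 4.2 micron pitch, 4000 pixel tall sensor
# works out to about 59.5 lp/mm and 2000 lw/ph.
print(cy_px_to_lp_mm(0.25, 4.2), cy_px_to_lw_ph(0.25, 4000))
```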

Continue reading The Units of Spatial Resolution

The Slanted Edge Method

My preferred method for measuring the spatial resolution performance of photographic equipment these days is the slanted edge method.  It requires a minimum of additional effort compared to capturing and simply eye-balling a pinch, Siemens or other chart, but it gives immensely more useful, accurate and absolute information in the language and units that have been used to characterize optical systems for over a century: it produces a good approximation to the Modulation Transfer Function of the two dimensional Point Spread Function of the camera/lens system in the direction perpendicular to the edge.

Much of what there is to know about a system’s spatial resolution performance can be deduced by analyzing such a curve, starting from the perceptually relevant MTF50 metric, discussed a while back.  And all of this simply from capturing the image of a black and white slanted edge, which one can easily produce and print at home.
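For the flavor of it, here is a bare-bones sketch of the computation; real implementations such as MTF Mapper fit the edge location precisely and handle noise and windowing far more carefully, and the simple binning and names below are my own simplifications:

```python
import numpy as np

def slanted_edge_mtf(img, edge_normal_deg, oversample=4):
    """ESF -> LSF -> MTF from a grayscale edge crop; frequencies in cycles/pixel."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]

    # Signed distance of each pixel center from an edge through the crop
    # center, measured along the edge normal: the slant spreads these
    # distances over many sub-pixel values, which is what permits the
    # oversampled Edge Spread Function below.
    t = np.deg2rad(edge_normal_deg)
    d = (x - w / 2) * np.cos(t) + (y - h / 2) * np.sin(t)

    # Bin intensities by distance into a supersampled ESF.
    bins = np.round(d * oversample).astype(int)
    bins -= bins.min()
    counts = np.bincount(bins.ravel())
    sums = np.bincount(bins.ravel(), weights=img.ravel())
    esf = sums / np.maximum(counts, 1)

    # Differentiate to the Line Spread Function, window to tame noise,
    # then one DFT for the MTF, normalized to 1 at the origin.
    lsf = np.diff(esf) * np.hanning(esf.size - 1)
    mtf = np.abs(np.fft.rfft(lsf))
    freq = np.fft.rfftfreq(lsf.size, d=1.0 / oversample)
    return freq, mtf / mtf[0]
```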

Continue reading The Slanted Edge Method