Tag Archives: Fourier Transform

Introduction to Texture MTF

Texture MTF is a method to measure the sharpness of a digital camera and lens by capturing the image of a target of known characteristics.  It purports to better evaluate the perception of fine details in low contrast areas of the image – what is referred to as ‘texture’ – in the presence of noise reduction, sharpening or other non-linear processing performed by the camera before writing data to file.

Figure 1. Image of Dead Leaves low contrast target. Such targets are designed to have controlled scale and direction invariant features with a power law Power Spectrum.

The Modulation Transfer Function (MTF) of an imaging system represents its spatial frequency response, from which many metrics related to perceived sharpness are derived: MTF50, SQF, SQRI, CMT Acutance etc.  In these pages we have in the past used the slanted edge method to good effect to obtain accurate estimates of a system’s MTF curves.[1]

In this article we will explore three proposed methods to determine Texture MTF and/or estimate the Optical Transfer Function of the imaging system under test from a reference power-law Power Spectrum target.  All three rely on variations of the ratio of captured to reference image in the frequency domain: straight Fourier Transforms; Power Spectral Density; and Cross Power Density.  In so doing we will develop some intuitions about their strengths and weaknesses. Continue reading Introduction to Texture MTF
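As a rough numerical aside, here is a minimal sketch of the Power-Spectral-Density variant (in Python rather than the Matlab used elsewhere on this site; function and variable names are made up for illustration, the two images are assumed linear, aligned and the same size, and noise-power correction is omitted):

```python
import numpy as np

def texture_mtf_psd(captured, reference, eps=1e-12):
    """Crude Texture MTF estimate: square root of the ratio of the radially
    averaged Power Spectral Densities of the captured and reference images."""
    psd_cap = np.abs(np.fft.fft2(captured - captured.mean()))**2
    psd_ref = np.abs(np.fft.fft2(reference - reference.mean()))**2

    # radial frequency of every DFT bin, in cycles/pixel
    ny, nx = captured.shape
    fy, fx = np.meshgrid(np.fft.fftfreq(ny), np.fft.fftfreq(nx), indexing='ij')
    r = np.sqrt(fx**2 + fy**2)

    # average the two spectra over thin annuli, then take the ratio
    bins = np.linspace(0, 0.5, 51)
    idx = np.digitize(r.ravel(), bins)
    cap_1d = np.bincount(idx, psd_cap.ravel(), minlength=bins.size + 1)
    ref_1d = np.bincount(idx, psd_ref.ravel(), minlength=bins.size + 1)

    mtf = np.sqrt(cap_1d[1:bins.size] / (ref_1d[1:bins.size] + eps))
    return bins[:-1], mtf / (mtf[0] + eps)   # frequency axis and normalized MTF
```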

Fourier Optics and the Complex Pupil Function

In the last article we learned that a complex lens can be modeled as just an entrance pupil, an exit pupil and a geometrical optics black-box in between.  Goodman[1] suggests that all optical path errors for a given Gaussian point on the image plane can be thought of as being introduced by a custom phase plate at the pupil plane, delaying or advancing the light wavefront locally according to wavefront aberration function \Delta W(u,v) as described there.

The phase plate distorts the forming wavefront, introducing diffraction and aberrations, while otherwise allowing us to treat the rest of the lens as if it followed geometrical optics rules.  It can be associated with either the entrance or the exit pupil.  Photographers are usually concerned with the effects of the lens on the image plane so we will associate it with the adjacent Exit Pupil.

Figure 1.  Aberrations can be fully described by distortions introduced by a fictitious phase plate inserted at the uv exit pupil plane.  The phase error distribution is the same as the path length error described by wavefront aberration function ΔW(u,v), introduced in the previous article.
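To make the construct a little more concrete, here is a minimal numerical sketch of a generalized complex pupil function (Python; the grid size and the quarter wave of defocus used for ΔW(u,v) are arbitrary illustrative choices):

```python
import numpy as np

n = 512                                   # samples across the exit pupil plane (assumed)
lam = 0.53e-6                             # mean wavelength, ~530 nm, in meters
u, v = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
rho2 = u**2 + v**2                        # normalized radial pupil coordinate, squared

A = (rho2 <= 1.0).astype(float)           # clear circular aperture (amplitude)

# hypothetical wavefront aberration function dW(u, v), in meters:
# a quarter wave of defocus, purely for illustration
dW = 0.25 * lam * rho2

# the fictitious phase plate delays the wavefront locally by dW,
# giving the generalized complex pupil function
P = A * np.exp(1j * 2 * np.pi / lam * dW)
```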

Continue reading Fourier Optics and the Complex Pupil Function

Bayer CFA Effect on Sharpness

In this article we shall find that the effect of a Bayer CFA on the spatial frequencies, and hence on the ‘sharpness’ information, captured by a sensor compared to the corresponding monochrome version can range from (almost) nothing to halving the potentially unaliased range – depending on the chrominance content of the image and the direction in which the spatial frequencies are being stressed. Continue reading Bayer CFA Effect on Sharpness

Wavefront to PSF to MTF: Physical Units

In the last article we saw that the intensity Point Spread Function and the Modulation Transfer Function of a lens could be easily approximated numerically by applying Discrete Fourier Transforms to its generalized exit pupil function \mathcal{P} twice in sequence.[1]

Numerical Fourier Optics: amplitude Point Spread Function, intensity PSF and MTF

Obtaining the 2D DFTs is easy: simply feed MxN numbers representing the two dimensional complex image of the Exit Pupil function in its uv space to a Fast Fourier Transform routine and, presto, it produces MxN numbers representing the amplitude of the PSF on the xy sensing plane.  Figure 1a shows a simple case where pupil function \mathcal{P} is a uniform disk representing the circular aperture of a perfect lens with MxN = 1024×1024.  Figure 1b is the resulting intensity PSF.

Figure 1. 1a Left: Array of numbers representing a circular aperture (zeros for black, ones for white), appearing as a white disk on a black background.  1b Right: Array of numbers representing the PSF of image 1a, in the classic shape of an Airy Pattern (contrast slightly boosted).
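For readers who want to reproduce something like Figure 1 themselves, here is a minimal sketch of the two-DFT chain (Python/numpy rather than the Matlab used elsewhere on this site; the 1024×1024 grid matches the text, while the pupil radius of 128 samples is an arbitrary choice that keeps the PSF well sampled):

```python
import numpy as np

n = 1024
u, v = np.meshgrid(np.arange(n) - n // 2, np.arange(n) - n // 2)
P = ((u**2 + v**2) <= 128**2).astype(complex)     # uniform disk: perfect-lens exit pupil

aPSF = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(P)))   # amplitude PSF (complex)
PSF  = np.abs(aPSF)**2                                      # intensity PSF: the Airy pattern
MTF  = np.abs(np.fft.fftshift(np.fft.fft2(PSF)))            # second DFT: modulus of the OTF
MTF /= MTF.max()                                            # normalized to one at the origin
```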

Simple and fast.  Wonderful.  Below is a slice through the center, the 513th row, zoomed in.  Hmm….  What are the physical units on the axes of displayed data produced by the DFT? Continue reading Wavefront to PSF to MTF: Physical Units

Aberrated Wave to Image Intensity to MTF

Goodman, in his excellent Introduction to Fourier Optics[1], describes how an image is formed on a camera sensing plane starting from first principles, that is, electromagnetic propagation according to Maxwell’s wave equation.  If you want the play-by-play account I highly recommend his math-intensive book.  But for the budding photographer it is sufficient to know what happens at the Exit Pupil of the lens, because after that the transformations to Point Spread and Modulation Transfer Functions are straightforward, as we will show in this article.

The following diagram exemplifies the last few millimeters of the journey that light from the scene has to travel in order to be absorbed by a camera’s sensing medium.  Light from the scene in the form of field U arrives at the front of the lens.  It goes through the lens, being partly blocked and distorted by it, and arrives at the lens’s virtual back end, the Exit Pupil; we’ll call this blocking/distorting function P.  Other than in very simple cases, the Exit Pupil does not necessarily coincide with a specific physical element or Principal surface.[iv]  It is a convenient mathematical construct which condenses all of the light transforming properties of a lens into a single plane.

The complex light field at the Exit Pupil’s two dimensional uv plane is then U\cdot P as shown below (not to scale; the product of the two arrays is element-by-element):

Figure 1. Simplified schematic diagram of the space between the exit pupil of a camera lens and its sensing plane. The space is assumed to be filled with air.

Continue reading Aberrated Wave to Image Intensity to MTF

Taking the Sharpness Model for a Spin

The series of articles starting here outlines a model of how the various physical components of a digital camera and lens can affect the ‘sharpness’ – that is, the spatial resolution – of the images captured in the raw data.  In this one we will pit the model against MTF curves obtained through the slanted edge method[1] from real-world raw captures, both with and without an anti-aliasing filter.

With a few simplifying assumptions, which include ignoring aliasing and phase, the spatial frequency response (SFR or MTF) of a photographic digital imaging system near the center can be expressed as the product of the Modulation Transfer Function of each component in it.  For a current digital camera these would typically be the main ones:

(1)   \begin{equation*} MTF_{sys} = MTF_{lens} (\cdot MTF_{AA}) \cdot MTF_{pixel} \end{equation*}

all in two dimensions.  Continue reading Taking the Sharpness Model for a Spin
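As a minimal sketch of how equation (1) can be evaluated (Python; the component curves below are standard textbook approximations rather than the exact ones used in the article, and the wavelength, f-number, pixel pitch and AA-filter spot separation are made-up values):

```python
import numpy as np

f = np.linspace(0, 0.5, 256)              # spatial frequency, cycles/pixel

lam, N, pitch = 0.53e-6, 5.6, 4.5e-6      # assumed wavelength (m), f-number, pixel pitch (m)
fc = pitch / (lam * N)                    # diffraction cutoff, converted to cycles/pixel
s = np.clip(f / fc, 0, 1)

mtf_lens  = 2/np.pi * (np.arccos(s) - s * np.sqrt(1 - s**2))  # diffraction-limited lens
mtf_pixel = np.abs(np.sinc(f))                                # 100% fill-factor square pixel
d = 0.7                                                       # AA beam-splitter shift in pixels (assumed)
mtf_aa    = np.abs(np.cos(np.pi * d * f))                     # simple two-dot birefringent AA filter

mtf_sys = mtf_lens * mtf_aa * mtf_pixel                       # equation (1), on-axis, one direction
```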

A Simple Model for Sharpness in Digital Cameras – Spherical Aberrations

Spherical Aberration (SA) is one key component missing from our MTF toolkit for modeling an ideal imaging system’s ‘sharpness’ in the center of the field of view in the frequency domain.  In this article formulas will be presented to compute the two dimensional Point Spread and Modulation Transfer Functions of the combination of diffraction, defocus and third order Spherical Aberration for an otherwise perfect lens with a circular aperture.

Spherical Aberrations result because most photographic lenses are designed with quasi-spherical surfaces that do not necessarily behave ideally in all situations.  For instance, they may focus light on systematically different planes depending on whether the respective ray goes through the exit pupil closer to or farther from the optical axis, as shown below:

Figure 1. Top: an ideal spherical lens focuses all rays on the same focal point. Bottom: a practical lens with Spherical Aberration focuses rays on different planes depending on their radial distance from the optical axis as they go through the exit pupil. Image courtesy of Andrei Stroe.
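In the same numerical spirit as the earlier pupil-to-PSF examples, here is a minimal sketch of how defocus and third order Spherical Aberration enter the pupil phase (Python; the coefficients, expressed in waves, and the grid size are arbitrary illustrative values):

```python
import numpy as np

n = 1024
u, v = np.meshgrid(np.linspace(-2, 2, n), np.linspace(-2, 2, n))   # pupil padded 2x
rho2 = u**2 + v**2
A = (rho2 <= 1.0).astype(float)                # circular aperture

W020, W040 = 0.5, 0.25                         # defocus and 3rd-order SA coefficients, in waves (assumed)
W = W020 * rho2 + W040 * rho2**2               # wavefront error, in waves

P = A * np.exp(1j * 2 * np.pi * W)             # generalized pupil function
PSF = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(P))))**2   # intensity PSF
MTF = np.abs(np.fft.fft2(PSF))
MTF /= MTF[0, 0]                               # MTF, normalized to one at zero frequency
```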

Continue reading A Simple Model for Sharpness in Digital Cameras – Spherical Aberrations

A Simple Model for Sharpness in Digital Cameras – Defocus

This series of articles has dealt with modeling an ideal imaging system’s ‘sharpness’ in the frequency domain.  We looked at the effects of the hardware on spatial resolution: diffraction, sampling interval, sampling aperture (e.g. a squarish pixel), anti-aliasing OLPAF filters.  The next two posts will deal with modeling typical simple imperfections related to the lens: defocus and spherical aberrations.

Defocus = OOF

Defocus means that the sensing plane is not exactly where it needs to be for image formation in our ideal imaging system: the image is therefore out of focus (OOF).  Said another way, light from a point source would go through the lens but converge either behind or in front of the sensing plane, as shown in the following diagram, for a lens with a circular aperture:

Figure 1. Top to bottom: Back Focus, In Focus, Front Focus.  To the right of each is how the respective PSF would look on the sensing plane.  Image under license courtesy of Brion.

Continue reading A Simple Model for Sharpness in Digital Cameras – Defocus

The Units of Discrete Fourier Transforms

This article is about specifying the units of the Discrete Fourier Transform of an image and the various ways that they can be expressed.  This apparently simple task can be fiendishly unintuitive.

The image we will use as an example is the familiar Airy Disk from the last few posts, at f/16 with light of mean wavelength 530nm.  It is shown zoomed in on the left of Figure 1, and as it looks in its 1024×1024 sample image on the right:

Figure 1. Airy disc image I(x,y). Left, 1a, 3D representation, zoomed in. Right, 1b, as it would appear on the sensing plane (yes, the rings are there but you need to squint to see them).
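As a hint of where the discussion is headed, the physical units come from the sample spacing fed to the DFT; a minimal sketch (Python; the pixel pitch is an assumed value):

```python
import numpy as np

n  = 1024
dx = 4.5e-6                               # sample (pixel) spacing on the sensing plane, meters (assumed)

f_cyc_per_sample = np.fft.fftfreq(n)      # DFT frequency axis in cycles/sample (-0.5 ... +0.5)
f_cyc_per_mm = f_cyc_per_sample / (dx * 1e3)   # same axis expressed in cycles/mm
df = 1.0 / (n * dx)                       # frequency bin width in cycles/m: one over the image extent
```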

Continue reading The Units of Discrete Fourier Transforms

A Simple Model for Sharpness in Digital Cameras – Diffraction and Pixel Aperture

Now that we know from the introductory article that the spatial frequency response of a typical perfect digital camera and lens (its Modulation Transfer Function) can be modeled simply as the product of the Fourier Transforms of the lens Point Spread Function and of the pixel aperture, convolved with a Dirac delta grid at cycles-per-pixel-pitch spacing

(1)   \begin{equation*} MTF_{Sys2D} = \left|\widehat{PSF_{lens}}\cdot \widehat{PIX_{ap}}\right|_{pu}\ast\ast\: \widehat{\delta\delta_{pitch}} \end{equation*}

we can take a closer look at each of those components (pu here indicating normalization to one at the origin).  I used Matlab to generate the examples below, but you can easily do the same with a spreadsheet.   Continue reading A Simple Model for Sharpness in Digital Cameras – Diffraction and Pixel Aperture
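For the curious, a minimal two-dimensional sketch of the unaliased part of the product above (Python instead of Matlab; wavelength, f-number and pixel pitch are placeholder values, and the final convolution with the frequency-domain comb, which simply replicates the result at one cycle-per-pixel intervals, is left out):

```python
import numpy as np

lam, N, pitch = 0.53e-6, 5.6, 4.5e-6                  # assumed wavelength, f-number, pixel pitch
fx = np.fft.fftshift(np.fft.fftfreq(256))             # cycles/pixel along each axis
FX, FY = np.meshgrid(fx, fx)
fr = np.sqrt(FX**2 + FY**2)                           # radial spatial frequency

s = np.clip(fr * lam * N / pitch, 0, 1)               # frequency relative to the diffraction cutoff
mtf_lens = 2/np.pi * (np.arccos(s) - s * np.sqrt(1 - s**2))   # FT of a diffraction-limited lens PSF
mtf_pix  = np.abs(np.sinc(FX) * np.sinc(FY))                  # FT of a 100% fill-factor square pixel

mtf_sys2d = mtf_lens * mtf_pix                        # already one at the origin, i.e. 'pu' normalized
```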

The Units of Spatial Resolution

Several sites for photographers perform spatial resolution ‘sharpness’ testing of a specific lens and digital camera setup by capturing a target.  You can also measure your own equipment relatively easily to determine how sharp your hardware is.  However, comparing results from site to site and to your own can be difficult and/or misleading, starting from the multiplicity of units used: cycles/pixel, line pairs/mm, line widths/picture height, line pairs/image height, cycles/picture height etc.

This post will address the units involved in spatial resolution measurement using as an example readings from the popular slanted edge method, although their applicability is generic.
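The conversions themselves are just scalings by the pixel pitch and the sensor/picture dimensions; a minimal sketch (Python; the camera numbers and the MTF50 reading are made up for illustration):

```python
# assumed camera: 6000 x 4000 pixels with a 6 micron pitch (36 x 24 mm sensor)
pix_h, pitch_mm = 4000, 0.006          # picture height in pixels, pixel pitch in mm

mtf50_cyc_per_pixel = 0.30             # example slanted-edge reading in cycles/pixel

lp_per_mm = mtf50_cyc_per_pixel / pitch_mm   # cycles/mm, i.e. line pairs per mm      -> 50
lp_per_ph = mtf50_cyc_per_pixel * pix_h      # cycles (line pairs) per picture height -> 1200
lw_per_ph = 2 * lp_per_ph                    # line widths per picture height         -> 2400
```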

Continue reading The Units of Spatial Resolution

How to Get MTF Performance Curves for Your Camera and Lens

You have obtained a raw file containing the image of a slanted edge  captured with good technique.  How do you get the Modulation Transfer Function of the camera and lens combination that took it?  Download and feast your eyes on open source MTF Mapper version 0.4.16 by Frans van den Bergh.

[Edit 2023: MTF Mapper has kept improving over the years, making it in my opinion the most accurate slanted edge measuring tool available today, used in applications that range from photography to machine vision to the Mars Rover.   Did I mention that it is open source?

It now sports a Graphical User Interface which can load raw files and allow the arbitrary selection of individual edges by simply pointing and clicking, making this post largely redundant.  The procedure outlined below will still work but there are easier ways to accomplish the same task today: just “File/Open single edge image” raw files from the GUI after having inserted “--bayer green” in the additional string field.  Thanks, Frans.]

The first thing we are going to do is crop the edges and package them into a TIFF file format so that MTF Mapper has an easier time reading them.  Let’s use as an example a Nikon D810 + 85mm f/1.8G ISO 64 studio raw capture by DPReview so that you can follow along if you wish.   Continue reading How to Get MTF Performance Curves for Your Camera and Lens
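If you prefer scripting the cropping/packaging step, here is a rough Python sketch of the same idea (the file name and crop coordinates are placeholders, and the rawpy/tifffile libraries are one possible choice rather than the tools used in the original walk-through; the data is saved undemosaiced so that MTF Mapper can later be pointed at the green channel):

```python
import numpy as np
import rawpy
import tifffile

raw = rawpy.imread('D810_slanted_edge.NEF')     # placeholder file name
cfa = raw.raw_image_visible                     # undemosaiced Bayer data, full resolution

# placeholder crop around one slanted edge (rows, then columns)
edge = cfa[1800:2200, 2400:2800].astype(np.uint16)

tifffile.imwrite('edge_crop.tif', edge)         # 16-bit grayscale TIFF for MTF Mapper
```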

The Slanted Edge Method

My preferred method for measuring the spatial resolution performance of photographic equipment these days is the slanted edge method.  It requires a minimum amount of additional effort compared to capturing and simply eye-balling a pinch, Siemens or other chart, but it gives immensely more useful, accurate, quantitative information in the language and units that have been used to characterize optical systems for over a century: it produces a good approximation to the Modulation Transfer Function of the two dimensional camera/lens system impulse response – at the location of the edge, in the direction perpendicular to it.

Much of what there is to know about an imaging system’s spatial resolution performance can be deduced by analyzing its MTF curve, which represents the system’s ability to capture increasingly fine detail from the scene – starting with perceptually relevant metrics like MTF50, discussed a while back.  In fact the area under the curve, weighted by some approximation of the Contrast Sensitivity Function of the Human Visual System, is the basis for many other, better accepted single-figure ‘sharpness’ metrics with names like Subjective Quality Factor (SQF), Square Root Integral (SQRI), CMT Acutance, etc.   And all this simply from capturing the image of a slanted edge, which one can actually and somewhat easily do at home, as presented in the next article.
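To give a flavor of how such single-figure metrics are constructed, here is a minimal sketch of a CSF-weighted area under an MTF curve (Python; the MTF and the CSF shape are generic placeholders, not the specific weightings used by SQF, SQRI or CMT Acutance):

```python
import numpy as np

def csf_weighted_score(f, mtf, csf):
    """Area under the MTF weighted by a contrast sensitivity function,
    normalized by the area under the CSF itself."""
    return np.trapz(mtf * csf, f) / np.trapz(csf, f)

# example inputs: frequency in cycles/degree at the eye, a made-up system MTF
# and a simple band-pass placeholder for the Human Visual System's CSF
f   = np.linspace(0.1, 60, 600)
mtf = np.exp(-f / 20)                  # placeholder MTF curve
csf = f * np.exp(-0.2 * f)             # placeholder CSF shape
print(csf_weighted_score(f, mtf, csf))
```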

Continue reading The Slanted Edge Method