My preferred method for measuring the spatial resolution performance of photographic equipment these days is the **slanted edge method**. It requires minimal additional effort compared to capturing and simply eyeballing a pinch, Siemens star or other resolution chart, yet it yields immensely more useful, accurate, absolute information in the language and units that have been used to characterize optical systems for over a century: it produces a good approximation to the **Modulation Transfer Function** of the two-dimensional Point Spread Function of the camera/lens system in the direction perpendicular to the edge.

Much of what there is to know about a system’s spatial resolution performance can be deduced by analyzing such a curve, starting from the perceptually relevant **MTF50** metric, discussed a while back. And all of this simply from capturing the image of a black and white slanted edge, which one can easily produce and print at home.

I was first introduced to the slanted edge method by Frans van den Bergh, author of the excellent open source program MTF Mapper, which I use extensively in these pages to produce linear spatial resolution curves from raw data. In fact much of this article draws on Frans’ blog and MTF Mapper’s documentation.

I will assume that you know that the Modulation Transfer Function is the Spatial Frequency Response of a linear, space-invariant imaging system, and that it can be obtained by taking the magnitude (modulus) of the Fourier transform of the system’s impulse response, otherwise known as its Point Spread Function (PSF).

The terms **MTF** (Modulation Transfer Function) and **SFR** (Spatial Frequency Response) are **used interchangeably** in photography. They both refer to the expected Michelson Contrast at the given linear spatial frequency, so in a way in imaging they can also be considered a Contrast Transfer Function. I will mainly use MTF in this article.

#### Point to Line to Edge

In a nutshell, the method is based on the idea that it is difficult to obtain the PSF of a camera/lens system by, say, capturing a single distant star, a **POINT**, against a black sky, because of the relative intensity and size of the target: it’s too small compared to the size of a pixel, too dim and too noisy. Because the imaging system is assumed to be linear, one could build intensity up and reduce noise by capturing a number of closely spaced stars in a row (a straight **LINE**) to obtain the MTF in the direction perpendicular to the line. Even better would be capturing a number of contiguous lines of stars, which at this point could be considered a white **EDGE** against a darker sky. Alas, such constellations are hard to come by, but not to worry, because we can print our own. Here is what one looks like (400×200 pixels):
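If you would rather build such a target digitally than print it, a few lines of NumPy are enough. This is just an illustrative sketch (the function name, the gray levels and the 5-degree slant are my choices):

```python
import numpy as np

def make_slanted_edge(width=400, height=200, angle_deg=5.0, lo=0.15, hi=0.85):
    """Render a synthetic slanted-edge target: dark on the left, bright on
    the right, with the boundary tilted angle_deg off the vertical."""
    y, x = np.mgrid[0:height, 0:width]
    theta = np.radians(angle_deg)
    # Signed distance of each pixel centre from the tilted edge line
    dist = (x - width / 2) * np.cos(theta) - (y - height / 2) * np.sin(theta)
    return np.where(dist < 0.0, lo, hi)

edge = make_slanted_edge()
print(edge.shape)  # (200, 400), matching the example image above
```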

Pretty simple, right?

#### Edge to Line to Modulation Transfer Function

The following picture shows how Frans explains the slanted edge method: the two-dimensional edge is projected onto a one-dimensional line, producing the edge intensity profile (the Edge Spread Function, or ESF) in the direction perpendicular to the edge; the derivative of the ESF is the Line Spread Function, which is then Fourier transformed into the MTF curve that we are after:

Wait, I thought you said that the MTF was obtained from the Point Spread Function, not the Line Spread Function? Read on.

The edge is ideally perfectly straight and of infinite contrast. One of the better such targets is a backlit razor blade, but in practice it can be a bridge in a satellite photo, or it can be created by printing a uniformly black square or rectangle onto uniformly white paper. It needs to be tilted (slanted) ideally between 4 and 6 degrees off the vertical or horizontal and be at least 50 pixels long; 100 or more is better. It gives linear spatial resolution information in one direction only, the one perpendicular to the edge, shown as the green arrow labeled ‘Edge Normal’ in the picture.

#### Reconstituting the Continuous Intensity Profile

The advantage of the method is that it **effectively super-samples the edge by the number of pixels along it**. Assuming the edge is perfectly straight (not a given in some recent camera formats that rely on software to correct for poor lens distortion characteristics), if it is 400 pixels long then the edge profile is oversampled four hundred times, with great benefits in terms of effectively cancelling quantization and reducing the impact of noise and aliasing on the spatial resolution measurement. With the proper setup, this gets around the fact that a digital sensor is typically not space invariant because of its edgy pixels and rectangular sampling grid.

MTF Mapper collects the super-sampled data into bins 1/8th of a pixel long, which gives plenty of latitude in the calculations for photographic purposes.
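A minimal sketch of that binning step, under simplifying assumptions: the edge angle is known and the edge passes through the image centre (MTF Mapper estimates both from the data instead):

```python
import numpy as np

def esf_from_edge(img, angle_deg=5.0, bin_width=0.125):
    """Project every pixel centre onto the edge normal and average the
    intensities in bins 1/8 of a pixel wide. Simplified sketch: assumes
    the edge passes through the image centre at a known angle."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    theta = np.radians(angle_deg)
    # Signed distance of each pixel centre from the edge, in pixels
    dist = (x - w / 2) * np.cos(theta) - (y - h / 2) * np.sin(theta)
    bins = np.round(dist / bin_width).astype(int)
    lo = bins.min()
    counts = np.bincount((bins - lo).ravel())
    sums = np.bincount((bins - lo).ravel(), weights=img.ravel())
    keep = counts > 0                                    # skip any empty bins
    esf = sums[keep] / counts[keep]
    positions = (np.flatnonzero(keep) + lo) * bin_width  # in pixels
    return positions, esf
```

Feeding it an ideal two-level edge image returns a monotonic ESF whose extremes are the two input levels; with a real capture the transition broadens according to the system blur.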

Assuming a perfectly straight edge, no distortion and low noise, the resulting Edge Spread Function is for all intents and purposes a **one-dimensional representation of the profile of the continuous light intensity reflected by the edge** after it has gone through the optics, as detected by a very small square pixel aperture. Since for a well printed and illuminated edge the transition from black to white on paper is theoretically a step function (centered at zero pixels below), any degradation in the ESF from this ideal can be ascribed to loss of sharpness due to the imaging system:

#### From ESF to PSF Intensity Profile to MTF

The derivative of the oversampled ESF is the Line Spread Function, which is directly proportional to the intensity profile of the system’s Point Spread Function in the direction perpendicular to the edge/line:

Yes, that’s approximately the impulse response of the optics in that one direction, aka what **the two-dimensional continuous intensity profile of a star would look like when viewed from the side**. Mathematically, it represents the Radon Transform of the two-dimensional PSF onto the edge normal. Note that the ‘bright’ side of the LSF is noisier than the ‘dark’ side in this base ISO capture, a consequence of shot noise: differentiation amplifies noise.
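The differentiation step itself is one line. Here is a sketch using a synthetic sigmoid-shaped ESF sampled at the 1/8-pixel bin spacing (the sigmoid and its steepness are illustrative assumptions, not measured data):

```python
import numpy as np

bin_width = 0.125                               # 1/8-pixel bins, in pixels
positions = np.arange(-8, 8.001, bin_width)     # distance from the edge
esf = 1.0 / (1.0 + np.exp(-4.0 * positions))    # smooth sigmoid stand-in ESF
lsf = np.gradient(esf, bin_width)               # LSF = derivative of the ESF
# The LSF peaks where the ESF is steepest, i.e. right at the edge; any
# noise present in the ESF is amplified by the differentiation.
print(f"LSF peaks at {positions[np.argmax(lsf)]:+.3f} px")  # -> +0.000 px
```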

By taking the magnitude (modulus) of the Fourier Transform of the one-dimensional LSF so derived we are able to determine fairly accurately* the Modulation Transfer Function of the camera and lens at the position of the edge in a direction perpendicular to it. In other words we have applied the Fourier Slice Theorem to the Radon Transform of the system’s PSF obtaining **a radial slice through the two-dimensional MTF of the two-dimensional PSF**. The resulting curve tells us how good our equipment is at capturing various levels of detail (spatial resolution) from the scene.
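That last step can also be sketched in a few lines. A Gaussian LSF stands in for measured data here (the 0.85-pixel blur is an assumption chosen purely for illustration):

```python
import numpy as np

bin_width = 0.125                                 # 1/8-pixel bins, in pixels
positions = np.arange(-8, 8.001, bin_width)
sigma = 0.85                                      # assumed Gaussian blur (pixels)
lsf = np.exp(-positions**2 / (2 * sigma**2))      # stand-in for a measured LSF
mtf = np.abs(np.fft.rfft(lsf))                    # magnitude of the Fourier transform
mtf /= mtf[0]                                     # normalise so that MTF(0) = 1
freqs = np.fft.rfftfreq(lsf.size, d=bin_width)    # spatial frequency, cycles/pixel
mtf50 = freqs[np.argmax(mtf < 0.5)]               # first frequency below 0.5
print(f"MTF50 ~ {mtf50:.2f} cycles/pixel")
```

Multiplying an MTF50 in cycles/pixel by twice the picture height in pixels converts it to the familiar lw/ph figure.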

Here is MTF Mapper’s output presented as an Excel graph from the data above:

In this case the MTF curve, or Spatial Frequency Response, of the raw green channels of the AAless D810 mounting an 85mm f/1.8G @ f/5.6, as tested by DPReview, indicates that this system is truly excellent in this respect, with good contrast transfer at higher spatial frequencies and a reference MTF50 value of about 2735 lw/ph (or 1368 lp/ph, or 57 lp/mm – see this article for an explanation of how the units used in spatial resolution are related).

On the other hand it hits grayscale Nyquist with a lot of energy still, at an MTF of around 0.2 (aka MTF20), which does not bode well for aliasing in very fine detail. The grayscale Nyquist limit refers to the maximum number of full lines that can be represented accurately by the sensor, in this case 4912 lw/ph since the D810 has that many pixels along its height. Expressed as a frequency in cycles or line pairs per picture height it is half this value.
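The arithmetic tying these numbers together is simple enough to check (the ~24 mm sensor height is my assumption for the full-frame D810):

```python
pixels_ph = 4912                      # D810 picture height, in pixels
sensor_height_mm = 24.0               # approximate full-frame sensor height
nyquist_lw_ph = pixels_ph             # grayscale Nyquist: 4912 lw/ph
nyquist_lp_ph = pixels_ph / 2         # the same limit in line pairs (cycles)
mtf50_lw_ph = 2735                    # reference value quoted in the text
mtf50_lp_ph = mtf50_lw_ph / 2         # 1 line pair = 2 line widths
mtf50_lp_mm = mtf50_lp_ph / sensor_height_mm
print(nyquist_lp_ph, round(mtf50_lp_mm))  # 2456.0 57
```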

#### Super-Sampling Lets us See Far

We are able to measure the imaging system’s frequency response above the Nyquist frequency because of the method’s super-sampling, which allows us to take a much closer look at the continuous PSF profile of the optics on the imaging plane than is possible, for example, with visual charts. For instance, we can see that extinction happens above 8000 lw/ph, or 170 lp/mm.
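To see why super-sampling buys this reach, consider the frequency axis of the FFT of the binned data (the bin count of 512 is arbitrary here):

```python
import numpy as np

samples_per_pixel = 8                 # 1/8-pixel bins = 8 samples per pixel
n = 512                               # number of LSF bins (illustrative)
freqs = np.fft.rfftfreq(n, d=1.0 / samples_per_pixel)   # cycles/pixel
# The axis reaches 8 * 0.5 = 4 cycles/pixel -- eight times the sensor's
# own Nyquist frequency of 0.5 cycles/pixel, ample room to see extinction.
print(freqs.max())  # 4.0
```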

You can compare this MTF curve and reference values directly with those obtained by others with similarly good technique straight from raw green channels – a hard feat though, given the number of variables involved – to make an informed call as to which system is ‘sharper’ or whether an old lens has lost its touch. Keep in mind that in typical situations a difference of 10% in MTF50 values is noticeable, but you’d have a hard time perceiving a difference of 5% even when pixel peeping.

So how do we use MTF Mapper to obtain MTF curves? Next.

*Slanted edge MTF results obtained with excellent technique should be comparable to those obtained with other methods.

**If you would like to learn more about how MTF measurements are derived and used to fine tune the optics in anything from sensors to orbiting satellites off slanted shadows created by bridges you will find this, this and this paper interesting.