# Bayer CFA Effect on Sharpness

In this article we shall find that the effect of a Bayer CFA on the spatial frequencies, and hence the ‘sharpness’, captured by a sensor compared to those from a corresponding monochrome imager can range from negligible to a halving of the potentially unaliased range, depending on the chrominance content of the image projected on the sensing plane and on the direction in which the spatial frequencies are being stressed.
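To make the directional dependence concrete, here is a minimal sketch of the Nyquist limits implied by the sampling geometry of each Bayer channel compared to a monochrome sensor. The 4 µm pixel pitch is an assumed, illustrative value; the spacings follow from the CFA layout.

```python
# Hedged sketch of per-channel sampling limits for a Bayer CFA vs a
# monochrome sensor, with an assumed (illustrative) pixel pitch.

def nyquist_lp_mm(pitch_um, spacing_px):
    """Nyquist limit in line pairs/mm for samples spaced `spacing_px` pixels apart."""
    return 1000.0 / (2.0 * spacing_px * pitch_um)

p = 4.0  # assumed pixel pitch, micrometres

mono = nyquist_lp_mm(p, 1)                 # monochrome: every pixel samples the image
red_blue = nyquist_lp_mm(p, 2)             # red/blue sit on every other row and column
green_diagonal = nyquist_lp_mm(p, 2**0.5)  # green quincunx: nearest neighbors on the diagonal

print(mono, red_blue, green_diagonal)      # 125.0 62.5 ≈88.4
```

Red and blue thus sample at half the monochrome rate in the horizontal and vertical directions, while the green quincunx gives up resolution mainly on the diagonals — hence "from nothing to halving" depending on chrominance and direction.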

#### A Little Sampling Theory

We know from Goodman[1] and previous articles that the sampled image ($I_s$) captured in the raw data by a typical current digital camera can be represented mathematically as the continuous image on the sensing plane ($I$) multiplied by a rectangular lattice of Dirac delta functions positioned at the center of each pixel:

$$I_s(x,y) \;=\; I(x,y)\cdot \mathrm{comb}\!\left(\frac{x}{p}\right)\cdot \mathrm{comb}\!\left(\frac{y}{p}\right) \tag{1}$$

with the comb functions representing the two dimensional grid of delta functions, sampling pitch $p$ apart horizontally and vertically.  To keep things simple the sensing plane is considered here to be the imager’s silicon itself, which sits below the microlenses and other filters, so the continuous image is assumed to incorporate their effects as well as the pixel aperture’s.

# Wavefront to PSF to MTF: Physical Units

In the last article we saw that the Point Spread Function and the Modulation Transfer Function of a lens could be easily obtained numerically by applying Discrete Fourier Transforms to its generalized exit pupil function twice in sequence.[1]

Obtaining the 2D DFTs is easy: simply feed MxN numbers representing the two dimensional complex image of the pupil function in its space to a fast Fourier transform routine and, presto, it produces MxN numbers that represent the amplitude of the PSF on the sensing plane.  Figure 1a shows a simple case where the pupil function is a uniform disk representing the circular aperture of a perfect lens with MxN = 1024×1024.  Figure 1b is the resulting PSF.

Simple and fast.  Wonderful.  Below is a slice through the center, the 513th row, zoomed in.  Hmm….  What are the physical units on the axes of displayed data produced by the DFT?

Less easy – and the subject of this article as seen from a photographic perspective.
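As a rough illustration of the double-DFT recipe itself, here is a minimal Python/NumPy sketch (in place of the Matlab used elsewhere on this blog). The grid size and pupil radius are illustrative choices, not the article's figures.

```python
import numpy as np

N = 256                                   # illustrative grid size
y, x = np.mgrid[-N//2:N//2, -N//2:N//2]
pupil = (x**2 + y**2 <= (N//8)**2).astype(float)   # uniform disk: perfect circular aperture

# First DFT: pupil function -> complex amplitude of the PSF on the sensing plane
amp = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
psf = np.abs(amp)**2                      # intensity PSF (an Airy pattern)

# Second DFT: PSF -> OTF; its modulus, normalized at the origin, is the MTF
otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))
mtf = np.abs(otf) / np.abs(otf).max()

print(mtf[N//2, N//2])                    # 1.0 at zero spatial frequency
```

The `psf` and `mtf` arrays come out in raw sample units — which physical distances and frequencies those samples correspond to is exactly the units question posed above.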

# Aberrated Wave to Image Intensity to MTF

Goodman, in his excellent Introduction to Fourier Optics[1], describes how an image is formed on a camera sensing plane starting from first principles, that is electromagnetic propagation according to Maxwell’s wave equation.  If you want the play-by-play account I highly recommend his math-intensive book.  But for the budding photographer it is sufficient to know what happens at the exit pupil of the lens, because after that the transformations to Point Spread and Modulation Transfer Functions are straightforward, as we will show in this article.

The following diagram exemplifies the last few millimeters of the journey that light from the scene has to travel in order to smash itself against our camera’s sensing medium.  Light from the scene arrives at the front of the lens in the form of a complex field $U$.  It goes through the lens, being partly blocked and distorted by it (we’ll call this blocking/distorting function $P$, the pupil function), and finally arrives at its back end, the exit pupil.   The complex light field at the exit pupil’s two dimensional plane is then the product $U \cdot P$, as shown below:

# Taking the Sharpness Model for a Spin – II

This post will continue looking at the spatial frequency response measured by MTF Mapper off slanted edges in DPReview.com raw captures, and at the corresponding fits by the ‘sharpness’ model discussed in the last few articles.  The model takes the physical parameters of the digital camera and lens as inputs and produces theoretical directional system MTF curves comparable to measured data.  As we will see, the model seems to be able to simulate these systems well – at least within this limited set of parameters.

The following fits refer to the green channel of a number of interchangeable lens digital camera systems with different lenses, pixel sizes and formats – from the current Medium Format 100MP champ to the 1/2.3″ 18MP sensor size also sometimes found in the best smartphones.  Here is the roster with the cameras as set up:

# Taking the Sharpness Model for a Spin

The series of articles starting here outlines a model of how the various physical components of a digital camera and lens can affect the ‘sharpness’ – that is the spatial resolution – of the  images captured in the raw data.  In this one we will pit the model against MTF curves obtained through the slanted edge method[1] from real world raw captures both with and without an anti-aliasing filter.

With a few simplifying assumptions, which include ignoring aliasing and phase, the spatial frequency response (SFR or MTF) of a photographic digital imaging system near the center can be expressed as the product of the Modulation Transfer Function of each component in it.  For a current digital camera these would typically be the main ones:

$$\mathrm{MTF}_{\mathrm{Sys}} \;=\; \mathrm{MTF}_{\mathrm{lens}} \cdot \mathrm{MTF}_{\mathrm{AA}} \cdot \mathrm{MTF}_{\mathrm{pixel}} \tag{1}$$

all in two dimensions.
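As a sketch of how such a product of component MTFs works in practice, here is a hedged example multiplying two of the usual components — diffraction from a perfect circular aperture and a square pixel aperture — with illustrative wavelength, f-number and pitch (not values fitted in the article).

```python
import numpy as np

lam = 0.53e-3    # assumed mean wavelength, mm (530 nm)
N_f = 5.6        # assumed f-number
pitch = 4.0e-3   # assumed pixel pitch, mm

f = np.linspace(0, 250, 501)             # spatial frequency, cycles/mm

# Diffraction MTF of an ideal lens with a circular aperture
fc = 1.0 / (lam * N_f)                   # diffraction cutoff frequency, cycles/mm
s = np.clip(f / fc, 0.0, 1.0)
mtf_lens = (2/np.pi) * (np.arccos(s) - s*np.sqrt(1 - s**2))

# MTF of a square pixel aperture (np.sinc is the normalized sinc, sin(pi x)/(pi x))
mtf_pixel = np.abs(np.sinc(f * pitch))

mtf_system = mtf_lens * mtf_pixel        # the product of component MTFs
print(mtf_system[0])                     # ≈ 1.0 at zero frequency
```

Further components (AA filter, defocus, aberrations) would each contribute one more factor to the product.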

# A Simple Model for Sharpness in Digital Cameras – Polychromatic Light

We now know how to calculate the two dimensional Modulation Transfer Function of a perfect lens affected by diffraction, defocus and third order Spherical Aberration  – under monochromatic light at the given wavelength and f-number.  In digital photography however we almost never deal with light of a single wavelength.  So what effect does an illuminant with a wide spectral power distribution, going through one of the color filters of a typical digital camera’s CFA before reaching the sensor, have on the spatial frequency responses discussed thus far?

#### Monochrome vs Polychromatic Light

Not much, it turns out.

# A Simple Model for Sharpness in Digital Cameras – Spherical Aberrations

Spherical Aberration (SA) is one key component missing from our MTF toolkit for modeling an ideal imaging system’s ‘sharpness’ in the center of the field of view in the frequency domain.  In this article formulas will be presented to compute the two dimensional Point Spread and Modulation Transfer Functions of the combination of diffraction, defocus and third order Spherical Aberration for an otherwise perfect lens with a circular aperture.

Spherical Aberrations result because most photographic lenses are designed with quasi-spherical surfaces that do not necessarily behave ideally in all situations.  For instance, they may focus light on slightly different planes depending on whether the respective ray goes through the exit pupil closer to or farther from the optical axis, as shown below:

# A Simple Model for Sharpness in Digital Cameras – Defocus

This series of articles has dealt with modeling an ideal imaging system’s ‘sharpness’ in the frequency domain.  We looked at the effects of the hardware on spatial resolution: diffraction, sampling interval, sampling aperture (e.g. a squarish pixel), anti-aliasing (OLPF) filters.  The next two posts will deal with modeling typical simple imperfections in the system: defocus and spherical aberrations.

#### Defocus = OOF

Defocus means that the sensing plane is not exactly where it needs to be for image formation in our ideal imaging system: the image is therefore out of focus (OOF).  Said another way, light from a distant star would go through the lens but converge either behind or in front of the sensing plane, as shown in the following diagram, for a lens with a circular aperture:

# A Simple Model for Sharpness in Digital Cameras – AA

This article will discuss a simple frequency domain model for an AntiAliasing (or Optical Low Pass) Filter, a hardware component sometimes found in a digital imaging system[1].  The filter typically sits right on top of the sensing plane and its objective is to block as much as possible of the aliasing and moiré creating energy above the Nyquist spatial frequency while letting through as much as possible of the real image forming energy below it, hence the low-pass designation.

In consumer digital cameras it is often implemented  by introducing one or two birefringent plates in the sensor’s filter stack.  This is how Nikon shows it for one of its DSLRs:
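A common simple model for such a birefringent filter treats the split as two displaced impulses, which multiplies the system MTF by |cos(π·d·f)| for a beam displacement d. The one-pixel split below is an assumption for illustration, not a figure from any particular camera.

```python
import numpy as np

# Simple two-impulse model of a birefringent AA filter's MTF.
pitch = 1.0                        # work in pixel units
d = 1.0 * pitch                    # assumed beam displacement: one pixel
f = np.linspace(0.0, 0.5, 6)       # cycles/pixel, up to the Nyquist frequency

mtf_aa = np.abs(np.cos(np.pi * d * f))
print(mtf_aa[-1])                  # ≈ 0 at Nyquist for a one-pixel split
```

With a one-pixel split the filter nulls out energy exactly at Nyquist — the behavior the low-pass designation is after; weaker (smaller d) splits move the null beyond Nyquist.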

# The Units of Discrete Fourier Transforms

The image we will use as an example is the familiar Airy Disk from the last few posts, at f/16 with light of mean 530nm wavelength. Zoomed in to the left in Figure 1; and as it looks in its 1024×1024 sample image to the right:
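As a quick sanity check of the scale involved at the stated settings, the first zero of an Airy pattern falls at a radius of 1.22λN:

```python
# Sanity check of the Airy pattern's scale at f/16 and 530 nm.
lam_um = 0.53   # mean wavelength, micrometres
N_f = 16        # f-number

first_zero_um = 1.22 * lam_um * N_f   # radius of the first Airy zero
print(round(first_zero_um, 2))        # 10.35 micrometres
```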

# A Simple Model for Sharpness in Digital Cameras – Aliasing

Having shown that our simple two dimensional MTF model is able to predict the performance of the combination of a perfect lens and square monochrome pixel, we now turn to the effect of the sampling interval on spatial resolution, according to the guiding formula:

$$\mathrm{MTF}_{\mathrm{Sys}} \;=\; \left(\widehat{\mathrm{PSF}}_{\mathrm{lens}} \cdot \widehat{\mathrm{PIX}}\right) \ast \widehat{\mathrm{III}} \tag{1}$$

The hats in this case mean the Fourier Transform of the corresponding component normalized to 1 at the origin, that is the individual MTFs of the perfect lens PSF, the perfect square pixel and the delta grid.

#### Sampling in the Spatial and Frequency Domains

Sampling is expressed mathematically as a Dirac delta function at the center of each pixel (the red dots below).
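The consequence of that delta comb is easy to see numerically: in one dimension, multiplying a tone by a comb replicates its energy around multiples of the sampling frequency, which is where aliasing comes from. A hedged sketch, with the tone frequency and pitch chosen for illustration (and to land on exact DFT bins):

```python
import numpy as np

n = 512
x = np.arange(n)
signal = np.cos(2*np.pi*0.0625*x)     # a tone at 0.0625 cycles/sample (an exact DFT bin)

comb = np.zeros(n)
comb[::4] = 1.0                       # delta comb: keep every 4th sample (pitch 4)
sampled = signal * comb

spec = np.abs(np.fft.rfft(sampled))
peaks = np.nonzero(spec > 32)[0] / n  # frequencies holding significant energy
print(peaks)                          # [0.0625 0.1875 0.3125 0.4375]
```

The original tone at 0.0625 cycles/sample reappears mirrored about multiples of the 1/4 cycles/sample sampling frequency — the spectrum replicas that the delta grid convolution in the guiding formula describes.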

# A Simple Model for Sharpness in Digital Cameras – II

Now that we know from the introductory article that the spatial frequency response of a typical perfect digital camera and lens can be modeled simply as the product of the Modulation Transfer Function of the lens and pixel area, convolved with a Dirac delta grid at cycles-per-pixel spacing

$$\mathrm{MTF}_{\mathrm{Sys}} \;=\; \left(\widehat{\mathrm{PSF}}_{\mathrm{lens}} \cdot \widehat{\mathrm{PIX}}\right) \ast \widehat{\mathrm{III}} \tag{1}$$

we can take a closer look at each of those components (the hat here indicating normalization).   I used Matlab to generate the examples below but you can easily do the same in a spreadsheet.  Here is the code if you wish to follow along.

# A Simple Model for Sharpness in Digital Cameras – I

The next few posts will describe a linear spatial resolution model that can help a photographer better understand the main variables involved in evaluating the ‘sharpness’ of photographic equipment and related captures.  I will show numerically that the combined spatial frequency response (MTF) of a perfect AA-less monochrome digital camera and lens in two dimensions can be described as the normalized product of the Fourier Transform (FT) of the lens Point Spread Function and the FT of the (square) pixel footprint, convolved with the FT of a rectangular grid of Dirac delta functions centered at each pixel, as described more fully in the article.

With a few simplifying assumptions we will see that the effect of the lens and sensor on the spatial resolution of the continuous image on the sensing plane can be broken down into these simple components.  The overall ‘sharpness’ of the captured digital image can then be estimated by combining the ‘sharpness’ of each of them.

# A Longitudinal CA Metric for Photographers

While perusing Jim Kasson’s excellent Longitudinal Chromatic Aberration tests[1] I was impressed by the quantity and quality of the information the resulting data provides.  Longitudinal, or Axial, CA is a form of defocus and as such it cannot be effectively corrected during raw conversion, so having a lens well compensated for it will provide a real and tangible improvement in the sharpness of final images.  How much of an improvement?

In this article I suggest one such metric for the Longitudinal Chromatic Aberrations (LoCA) of a photographic imaging system:

# Combining Bayer CFA MTF Curves – II

This is a vast and complex subject for which I do not have formal training.  In this and the previous article I present my thoughts on how MTF50 results obtained through the slanted edge method from the raw data of the four Bayer CFA channels of a uniformly illuminated neutral target captured with a typical digital camera can be combined to provide a meaningful composite MTF50 for the imaging system as a whole[1].  Corrections, suggestions and challenges are welcome.

# Combining Bayer CFA Modulation Transfer Functions – I

This is a vast and complex subject for which I do not have formal training.  In this and the following article I will discuss my thoughts on how MTF50 results obtained through the slanted edge method from the raw data of the four Bayer CFA color channels of a neutral target captured with a typical camera can be combined to provide a meaningful composite MTF50 for the imaging system as a whole.   The scope is limited to neutral slanted edge measurements of Bayer CFA raw data for linear spatial resolution (‘sharpness’) evaluations of photographic hardware.  Corrections, suggestions and challenges are welcome.

# Linearity in the Frequency Domain

For the purposes of ‘sharpness’ spatial resolution measurement in photography, cameras can be considered shift-invariant, linear systems.

Shift invariant means that the imaging system should respond exactly the same way no matter where light from the scene falls on the sensing medium.  We know that in a strict sense this is not true because, for instance, a pixel has a square area so it cannot have an isotropic response by definition.  However when using the slanted edge method of linear spatial resolution measurement we can effectively make it shift invariant by careful preparation of the testing setup.  For example the edges should be slanted no more than this and no less than that.

# Downsizing Algorithms: Effects on Resolution

Most of the photographs captured these days end up being viewed on a display of some sort, with at best 4K (4096×2160) but often no better than HD resolution (1920×1080).  Since the cameras that capture them have typically several times that number of pixels, 6000×4000 being fairly normal today, most images need to be substantially downsized for viewing, even allowing for some cropping.  Resizing algorithms built into browsers or generic image viewers tend to favor expediency over quality, so it behooves the IQ conscious photographer to manage the process, choosing the best image size and downsampling algorithm for the intended file and display medium.

When downsizing, the objective is to maximize the original spatial resolution retained while minimizing the possibility of aliasing and moiré.  In this article we will take a closer look at some common downsizing algorithms and their effect on spatial resolution information in the frequency domain.
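The core trade-off can be sketched in one dimension: naive decimation aliases fine detail into low frequencies, while even a crude box prefilter suppresses much of it. The pattern frequency and sizes below are illustrative.

```python
import numpy as np

# Sketch of why the downsizing algorithm matters: plain decimation
# aliases a fine pattern; a simple 4-sample box average before
# decimation suppresses it. All sizes are illustrative.

n = 400
x = np.arange(n)
fine = np.cos(2*np.pi*0.45*x)              # detail near the original Nyquist

decimated = fine[::4]                      # naive 4x downsize: the tone aliases at full strength
boxed = fine.reshape(-1, 4).mean(axis=1)   # box prefilter + decimate: tone largely attenuated

print(np.abs(decimated).max(), np.abs(boxed).max())
```

A box filter is still a weak low-pass; better resamplers (Lanczos and the like) trade a sharper frequency cutoff against ringing, which is exactly the comparison a frequency domain view makes visible.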

# Raw Converter Sharpening with Sliders at Zero?

I’ve mentioned in the past that I prefer to take spatial resolution measurements directly off the raw information in order to minimize the often unknown subjective variables introduced by demosaicing and rendering algorithms unbeknownst to the operator, even when all relevant sliders are zeroed.  In this post we discover that such hidden processing is indeed applied by ACR/LR process 2010/2012 and by Capture NX-D – while DCRAW appears to be transparent, performing straightforward demosaicing with no additional processing behind the operator’s back.

# Are micro Four Thirds Lenses Typically Twice as ‘Sharp’ as Full Frame’s?

In fact the question is more generic than that.   Smaller format lens designers try to compensate for their imaging system’s geometric resolution penalty (compared to a larger format when viewing final images at the same size) by designing ‘sharper’ lenses specifically for it, rather than recycling larger formats’ designs (feeling guilty, APS-C?) – sometimes to excellent effect.   Are they succeeding?   I will use mFT only as an example here, but input is welcome for all formats, from phones to large format.

# MTF Mapper vs sfrmat3

Over the last couple of years I’ve been using Frans van den Bergh‘s excellent open source MTF Mapper to measure the Modulation Transfer Function of imaging systems off a slanted edge target, as you may have seen in these pages.  As long as one understands how to get the most out of it I find it a solid product that gives reliable results, with MTF50 typically well within 2% of actual in less than ideal real-world situations (see below).  I had little to compare it against other than tests published by gear testing sites: they apparently mostly use a commercial package called Imatest for their slanted edge readings – and MTF Mapper seemed to correlate well with those.

Then recently Jim Kasson pointed out sfrmat3, the Matlab program written by Peter Burns, a slanted edge method expert who worked at Kodak and was a member of the committee responsible for ISO 12233, the resolution and spatial frequency response standard for photography.  sfrmat3 is considered to be a solid implementation of the standard and many, including Imatest, benchmark against it – so I was curious to see how MTF Mapper 0.4.1.6 would compare.  It did well.

# Can MTF50 be Trusted?

A reader suggested that a High-Res Olympus E-M5 Mark II image used in the previous post looked sharper than the equivalent Sony a6000 image, contradicting the relative MTF50 measurements, perhaps showing ‘the limitations of MTF50 as a methodology’.   That would be surprising because MTF50 normally correlates quite well with perceived sharpness, so I decided to check this particular case out.

# Olympus E-M5 II High-Res 64MP Shot Mode

Olympus just announced the E-M5 Mark II, an updated version of its popular micro Four Thirds E-M5 model, with an interesting new feature: its 16MegaPixel sensor, presumably similar to the one in other E-Mx bodies, has a high resolution mode where it gets shifted around by the image stabilization servos during exposure to capture, as they say in their press release

‘resolution that goes beyond full-frame DSLR cameras.  8 images are captured with 16-megapixel image information while moving the sensor by 0.5 pixel steps between each shot. The data from the 8 shots are then combined to produce a single, super-high resolution image, equivalent to the one captured with a 40-megapixel image sensor.’

A great idea that could give a welcome boost to the ‘sharpness’ of this handy system.  This preliminary test shows that the E-M5 mk II 64MP High-Res mode gives in this case a 10-12% advantage in MTF50 linear spatial resolution compared to the Standard Shot 16MP mode.  Plus it apparently virtually eliminates the possibility of  aliasing and moiré.  Great stuff, Olympus.

# Equivalence in Pictures: Sharpness/Spatial Resolution

So, is it true that a Four Thirds lens needs to be about twice as ‘sharp’ as its Full Frame counterpart in order to be able to display an image of spatial resolution equivalent to the larger format’s?

It is, because of the simple geometry I will describe in this article.  In fact with a few provisos one can generalize and say that lenses from any smaller format need to be ‘sharper’ by the ratio of their sensor linear sizes in order to produce the same linear resolution on same-sized final images.

This is one of the reasons why Ansel Adams shot 4×5 and 8×10 – and I would too, were it not for logistical and pecuniary concerns.
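The arithmetic behind that geometry is simple; here is a hedged sketch with nominal sensor heights and a hypothetical lens resolution (the 50 lp/mm figure is an assumption for illustration).

```python
# Worked example of the format geometry: to match a Full Frame lens on
# the same-sized final image, a Four Thirds lens must deliver roughly
# the sensor-height ratio more line pairs per millimetre.

ff_height_mm = 24.0
ft_height_mm = 13.0    # Four Thirds sensor height, nominal

lens_lp_mm = 50.0                          # hypothetical FF lens resolution
lp_ph_ff = lens_lp_mm * ff_height_mm       # 1200 lp per picture height

needed_ft_lp_mm = lp_ph_ff / ft_height_mm  # lp/mm the FT lens must match
print(round(needed_ft_lp_mm, 1))           # 92.3
```

The ratio 24/13 ≈ 1.85 is the "about twice as sharp" of the title; the same reasoning, with 4×5 or 8×10 inch film heights, is why large format is so forgiving of its lenses.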

# The Units of Spatial Resolution

Several sites perform spatial resolution ‘sharpness’ testing of imaging systems for photographers (i.e. ‘lens+digital camera’) and publish results online.  You can also measure your own equipment relatively easily to determine how sharp your hardware is.  However comparing results from site to site and to your own can be difficult and/or misleading, starting from the multiplicity of units used: cycles/pixel, line pairs/mm, line widths/picture height, line pairs/image height, cycles/picture height etc.

This post will address the units involved in spatial resolution measurement using as an example readings from the slanted edge method.
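For a flavor of the conversions involved, here is a sketch assuming a hypothetical camera with 4.88 µm pixels and a 4000-pixel-high sensor; the MTF50 reading is illustrative.

```python
# Conversions between common spatial resolution units, for an assumed
# (hypothetical) camera: 4.88 um pixel pitch, 4000 px picture height.

def cy_px_to_lp_mm(cy_px, pitch_um):
    """cycles/pixel -> line pairs (cycles) per mm on the sensor."""
    return cy_px * 1000.0 / pitch_um

def cy_px_to_lw_ph(cy_px, height_px):
    """cycles/pixel -> line widths per picture height (2 line widths = 1 cycle)."""
    return 2.0 * cy_px * height_px

mtf50 = 0.25   # cycles/pixel, illustrative reading
print(round(cy_px_to_lp_mm(mtf50, 4.88), 2))   # 51.23 lp/mm
print(cy_px_to_lw_ph(mtf50, 4000))             # 2000.0 lw/ph
```

Note that per-mm figures depend only on the pixel pitch while per-picture-height figures depend only on the pixel count, which is why comparisons across sensor formats have to pick units carefully.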

# How to Get MTF Performance Curves for Your Camera and Lens

You have obtained a raw file containing the image of a slanted edge  captured with good technique.  How do you get the MTF curve of the camera and lens combination that took it?  Download and feast your eyes on open source MTF Mapper by Frans van den Bergh.  No installation required, simply store it in its own folder.

The first thing we are going to do is crop the edges and package them into a TIFF file format so that MTF Mapper has an easier time reading them.  Let’s use as an example a Nikon D810 + 85mm f/1.8G ISO 64 studio raw capture by DPReview so that you can follow along if you wish.

# The Slanted Edge Method

My preferred method for measuring the spatial resolution performance of photographic equipment these days is the slanted edge method.  It requires a minimal amount of additional effort compared to capturing and simply eye-balling a pinch, Siemens or other chart, but it gives immensely more useful, accurate, quantitative information in the language and units that have been used to characterize optical systems for over a century: it produces a good approximation to the Modulation Transfer Function of the two dimensional Point Spread Function of the camera/lens system in the direction perpendicular to the edge.

Much of what there is to know about a system’s spatial resolution performance can be deduced by analyzing such a curve, starting from the perceptually relevant MTF50 metric, discussed a while back.  And all of this simply from capturing the image of a black and white slanted edge, which one can easily produce and print at home.
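For the curious, the pipeline can be toy-sketched end to end on a synthetic edge: project each pixel onto the edge normal to build an oversampled edge spread function (ESF), differentiate to the line spread function (LSF), and take its DFT to get the MTF. The logistic edge profile, slant and oversampling factor below are illustrative assumptions — this is not MTF Mapper's actual implementation.

```python
import numpy as np

h, w = 64, 64
slope, sigma = 0.1, 1.0                      # assumed edge slant and blur width
yy, xx = np.mgrid[0:h, 0:w]
dist = (xx - w/2) - slope*(yy - h/2)         # signed distance of each pixel from the edge
edge = 1.0/(1.0 + np.exp(-dist/sigma))       # synthetic blurred edge image

# Project pixels onto the edge normal, binning into a 4x oversampled ESF;
# the slant guarantees the bins sample the edge at sub-pixel phases.
bins = np.round(dist*4).astype(int)
bins -= bins.min()
counts = np.bincount(bins.ravel())
esf = np.bincount(bins.ravel(), weights=edge.ravel()) / counts

lsf = np.diff(esf)                           # ESF -> LSF
lsf = lsf * np.hanning(lsf.size)             # window to tame the ends
mtf = np.abs(np.fft.rfft(lsf))
mtf = mtf / mtf[0]                           # normalize to 1 at zero frequency
freq = np.fft.rfftfreq(lsf.size, d=0.25)     # cycles/pixel (4x oversampling)
```

The sub-pixel phases contributed by the slant are what let the method estimate the MTF well beyond the plain pixel-grid Nyquist frequency.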

# Why Raw Sharpness IQ Measurements Are Better

Why Raw?  The question is whether one is interested in measuring the objective, quantitative spatial resolution capabilities of the hardware or whether instead one would prefer to measure the arbitrary, qualitatively perceived sharpening prowess of (in-camera or in-computer) processing software as it turns the capture into a pleasing final image.  Either is of course fine.

My take on this is that the better the IQ captured the better the final image will be after post processing.  In other words I am typically more interested in measuring the spatial resolution information produced by the hardware comfortable in the knowledge that if I’ve got good quality data to start with its appearance will only be improved in post by the judicious use of software.  By IQ here I mean objective, reproducible, measurable physical quantities representing the quality of the information captured by the hardware, ideally in scientific units.

Can we do that off a file rendered by a raw converter or, heaven forbid, a Jpeg?  Not quite, especially if the objective is measuring IQ.

# How Sharp are my Camera and Lens?

You want to measure how sharp your camera/lens combination is to make sure it lives up to its specs.  Or perhaps you’d like to compare how well one lens captures spatial resolution compared to another  you own.  Or perhaps again you are in the market for new equipment and would like to know what could be expected from the shortlist.  Or an old faithful is not looking right and you’d like to check it out.   So you decide to do some testing.  Where to start?

# MTF50 and Perceived Sharpness

Is MTF50 a good proxy for perceived sharpness?  It turns out that the spatial frequencies that are most closely related to our perception of sharpness vary with the size and viewing distance of the displayed image.

For instance if an image captured by a Full Frame camera is viewed at ‘standard’ distance (that is a distance equal to its diagonal) the portion of the MTF curve most representative of perceived sharpness appears to be around MTF90.

# What is the Best Single Deconvolution PSF to Use for Capture Sharpening 1?

Deconvolution is one of the processes by which we can attempt to undo the blurring introduced by our hardware while capturing an image.  It can be performed in the spatial domain via a kernel or in the frequency domain by dividing the image data by the Fourier transform of one or more Point Spread Functions.  The best single deconvolution PSF to use when Capture Sharpening is the one that resulted in the blurring in the first place: the System PSF.   It is often not easy or practical to determine it.
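A minimal sketch of the frequency domain route follows. The Gaussian PSF, the test signal and the small regularizer `eps` are all illustrative assumptions — the latter a crude stand-in for proper Wiener filtering, which keeps the division from blowing up where the transfer function is near zero.

```python
import numpy as np

# Blur a signal with a known Gaussian PSF, then deconvolve by dividing
# in the frequency domain (regularized). All values are illustrative.

n = 128
x = np.arange(n)
scene = (np.abs(x - n//2) < 10).astype(float)      # a bright bar

sigma = 2.0
psf = np.exp(-0.5*((np.arange(n) - n//2)/sigma)**2)
psf /= psf.sum()                                   # normalize to unit volume

H = np.fft.fft(np.fft.ifftshift(psf))              # PSF transfer function
blurred = np.fft.ifft(np.fft.fft(scene) * H).real  # forward blur

eps = 1e-3                                         # crude regularizer
restored = np.fft.ifft(np.fft.fft(blurred) * np.conj(H)/(np.abs(H)**2 + eps)).real

print(np.sum((restored - scene)**2) < np.sum((blurred - scene)**2))  # True: restoration helps
```

With the wrong PSF the division misallocates energy instead, which is why the System PSF — the one that actually did the blurring — is the right one to use.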

# Point Spread Function and Capture Sharpening

A Point Spread Function is the image projected on the sensing plane when our cameras are pointed at a single, bright, infinitesimally small Point of light, like a distant star on a perfectly dark and clear night.   Ideally, that’s also how it would appear on the sensing material (silicon) of our camera sensors: a singularly small yet bright point of light surrounded by pitch black.  However a PSF can never look like a perfect point because in order to reach silicon it has to travel at least through an imperfect lens (1) of finite aperture (2), various filters (3) and only then finally land typically via a microlens on a squarish photosite of finite dimensions (4).

Each time it passes through one of these elements the Point of light is affected and spreads out a little more in slightly different ways, so that by the time it reaches silicon it is no longer a perfect Point but a slightly blurry one instead: the image that this spread-out Point makes on the sensing material is called the System’s Point Spread Function.  It is what we try to undo through Capture Sharpening.
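That cascade is easy to mimic numerically: convolving the component blurs together yields the System PSF, and for Gaussian stand-ins the variances simply add. The component widths below are illustrative, not measured values.

```python
import numpy as np

# Compounding component blurs into a System PSF with Gaussian stand-ins.

def gaussian(sigma, n=257):
    x = np.arange(n) - n//2
    g = np.exp(-0.5*(x/sigma)**2)
    return g / g.sum()

lens, filters, pixel = gaussian(1.5), gaussian(1.0), gaussian(0.8)

system = np.convolve(np.convolve(lens, filters), pixel)   # System PSF

# Standard deviation of the resulting System PSF
x = np.arange(system.size) - system.size//2
sigma_sys = np.sqrt(np.sum(system * x**2))
print(round(sigma_sys, 2))   # ≈ 1.97, i.e. sqrt(1.5^2 + 1.0^2 + 0.8^2)
```

Real component PSFs (diffraction, AA splits, square pixel apertures) are not Gaussian, but the principle is the same: each stage convolves in another spread.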

# Deconvolution PSF Changes with Aperture

We have seen in the previous post how the radius for deconvolution capture sharpening with a Gaussian PSF can be estimated for a given setup in well behaved and characterized camera systems.  Some parameters like pixel aperture and AA strength should remain stable for a camera/prime lens combination as f-numbers are increased (aperture is decreased) from about f/5.6 on up – the f/stops dear to Full Frame landscape photographers.  But how should the radius for generic Gaussian deconvolution change as the f-number increases from there?

# What Radius to Use for Deconvolution Capture Sharpening

The following approach will work if you know the MTF50 in cycles/pixel of your camera/lens combination as set up at the time that the capture you’d like to sharpen by deconvolution with a Gaussian PSF was taken.

The process by which our hardware captures images and stores them  in the raw data inevitably blurs detail information from the scene.
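Assuming, as above, that the overall blur is modeled as a Gaussian PSF — whose MTF is exp(−2π²σ²f²) — the measured MTF50 pins down the radius σ directly. A sketch with an illustrative MTF50 reading:

```python
import math

# Solve exp(-2*pi^2*sigma^2*f50^2) = 0.5 for sigma, the Gaussian
# deconvolution radius in pixels. The MTF50 value is illustrative.

def gaussian_radius_from_mtf50(mtf50_cy_px):
    """Sigma (pixels) of the Gaussian whose MTF falls to 0.5 at mtf50."""
    return math.sqrt(math.log(2) / 2.0) / (math.pi * mtf50_cy_px)

print(round(gaussian_radius_from_mtf50(0.25), 3))  # ≈ 0.75 pixels
```

So a softer capture (lower MTF50 in cycles/pixel) calls for a proportionally larger radius, which is the knob most raw converters expose for deconvolution-style sharpening.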

# Deconvolution vs USM Capture Sharpening

UnSharp Masking (USM) capture sharpening is somewhat equivalent to taking a black/white marker and drawing along every transition in the picture to make it stand out more – automatically.  Line thickness and darkness is chosen arbitrarily to achieve the desired effect, much like painters do. Continue reading Deconvolution vs USM Capture Sharpening