
The Effect of Sampling on Image Resolution

We understand from the previous article that the process of digitizing an optical image with a photographic sensor can be thought of as two successive operations:

  1. filtering (convolution) of the optical image on the sensing plane by the pixel’s finite effective active area (aka pixel aperture);
  2. point sampling the convolved image at a given fixed rate and position, often corresponding to the center of each pixel.

Both affect resolution, but in different ways: the former can be thought of as continuously modifying the analog optical image, as seen at right in Figure 1 below; the latter as possibly introducing interference (aliasing) into the result.
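As a rough numerical sketch (not the article's own code), the two operations above can be collapsed into one step for the special case of an ideal square pixel with 100% fill factor sampled at its center: box-filtering by the pixel aperture and then point-sampling at each pixel center is then equivalent to averaging non-overlapping `pitch` × `pitch` blocks of the finely sampled optical image. The function name and parameters are illustrative:

```python
import numpy as np

def digitize(optical_image, pitch):
    """Simulate digitization for an ideal square pixel, 100% fill factor:
    convolve with the pixel aperture (a pitch-by-pitch box), then sample
    at pixel centers -- together, equivalent to block averaging."""
    h, w = optical_image.shape
    h, w = h - h % pitch, w - w % pitch   # trim to a whole number of pixels
    blocks = optical_image[:h, :w].reshape(h // pitch, pitch, w // pitch, pitch)
    return blocks.mean(axis=(1, 3))
```

Note that the equivalence holds only because the sampling position coincides with the center of the aperture; a partial fill factor or a shifted sampling grid would require the convolution and sampling to be modeled separately.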

Figure 1. Digitizing an optical image corresponds to convolution with the pixel aperture followed by Dirac delta sampling at the center of each pixel (red dots).  Highly magnified images of two simulated stars separated by the Rayleigh limit: at left, the stars are resolved by the optics alone; at right, they are unresolved after smoothing by an ideal square pixel with 100% fill factor.
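The effect shown in Figure 1 can be sketched in one dimension. The following is a hedged toy model, not the article's simulation: it uses Gaussian profiles as stand-ins for the stars' Airy patterns, with a separation and aperture width chosen only to make the dip between the peaks vanish after box filtering:

```python
import numpy as np

# Two point-source profiles (Gaussian stand-ins for Airy patterns),
# separated just enough that a dip between the peaks resolves them.
x = np.linspace(-8, 8, 1601)            # fine grid, step 0.01
sigma, sep = 1.0, 3.0                   # width and separation, arbitrary units
profile = (np.exp(-(x - sep / 2)**2 / (2 * sigma**2)) +
           np.exp(-(x + sep / 2)**2 / (2 * sigma**2)))

# Smooth with a wide square pixel aperture: a box filter of width 4 units
box = np.ones(400) / 400                # 400 samples * 0.01 = width 4.0
smoothed = np.convolve(profile, box, mode='same')

# Resolution criterion: intensity at the midpoint relative to the peaks.
dip_before = profile[800] / profile.max()    # clear dip: stars resolved
dip_after = smoothed[800] / smoothed.max()   # dip gone: stars unresolved
```

Before filtering the midpoint sits well below the peaks; after filtering the two peaks merge into a single maximum, mirroring the left/right panels of Figure 1.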

In this article I will explore how the act of digitizing that image – the process of sampling – fundamentally alters what we can resolve.  In the next one we will discuss the impact on resolution of the pixel-shift modes available in current mirrorless cameras. Continue reading The Effect of Sampling on Image Resolution

A Simple Model for Sharpness in Digital Cameras – I

The next few posts will describe a linear spatial resolution model that can help a photographer better understand the main variables involved in evaluating the ‘sharpness’ of photographic equipment and related captures.  I will show numerically that the combined Spatial Frequency Response (aka Modulation Transfer Function) of a perfect AA-less (no anti-aliasing filter) monochrome digital camera and lens in two dimensions can be described as the magnitude of the normalized product of the Fourier Transform (FT) of the lens Point Spread Function (PSF) and the FT of the pixel footprint (aperture), convolved with the FT of a square grid of Dirac delta functions centered at each pixel:

    \[ MTF_{2D} = \left|\widehat{PSF_{lens}}\cdot \widehat{PIX_{ap}}\right|_{pu}\ast\ast\: \widehat{\delta\delta_{pitch}} \]

With a few simplifying assumptions we will see that the effect of the lens and sensor on the spatial resolution of the continuous image on the sensing plane can be broken down into these simple components.  The overall ‘sharpness’ of the captured digital image can then be estimated by combining the ‘sharpness’ of each of them.
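Ignoring the sampling term (the convolution with the Dirac delta grid), the factored structure of the formula above can be illustrated numerically. This is a hedged sketch with assumed illustrative values – wavelength 0.55 µm, f-number 5.6, pixel pitch 4.5 µm – using the textbook diffraction MTF of an ideal circular aperture and the sinc MTF of a square pixel with 100% fill factor:

```python
import numpy as np

# Assumed illustrative values (not from the article):
lam, N, pitch = 0.55e-3, 5.6, 4.5e-3    # wavelength, f-number, pitch (mm)
fc = 1.0 / (lam * N)                    # diffraction cutoff, cycles/mm
f = np.linspace(0, fc, 500)             # spatial frequency axis
s = f / fc                              # normalized frequency

# Diffraction MTF of an ideal circular aperture (aberration-free lens)
mtf_lens = (2 / np.pi) * (np.arccos(s) - s * np.sqrt(1 - s**2))

# MTF of a square pixel aperture, 100% fill factor (np.sinc is sin(pi x)/(pi x))
mtf_pixel = np.abs(np.sinc(f * pitch))

# System MTF, per the factored model (sampling/aliasing term omitted)
mtf_system = mtf_lens * mtf_pixel
```

Both factors equal 1 at zero frequency, and the product falls to zero at the diffraction cutoff; the pixel term contributes its first null at 1/pitch cycles/mm.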

The stage will be set in this first installment with a little background and perfect components.  Following articles will deal with the effect of each component on captured sharpness.

We will learn how to measure MTF curves for our equipment, look at numerical methods to model PSFs and MTFs from the wavefront at the pupil of the lens, and review the theory behind them. Continue reading A Simple Model for Sharpness in Digital Cameras – I