Tag Archives: convolution

The Effect of Sampling on Image Resolution

We understand from the previous article that the process of digitizing an optical image with a photographic sensor can be thought of as two successive operations:

  1. filtering (convolution) of the optical image on the sensing plane by the pixel’s finite effective active area (aka pixel aperture);
  2. point sampling the convolved image at a given fixed rate and position, often corresponding to the center of each pixel.

Both affect resolution in different ways: the former can be thought of as continuously modifying the analog optical image, as seen below right; the latter as possibly introducing interference (aliasing) into the result.

Figure 1. Digitizing an optical image corresponds to convolution with the pixel aperture followed by Dirac delta sampling at the center of each pixel (red dots). Highly magnified images of two simulated stars separated by the Rayleigh limit: the stars are resolved by the optics alone (left) and unresolved after smoothing by an ideal square pixel with 100% Fill Factor (right).
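The two operations can be sketched numerically. As a minimal illustration (the function name and grid alignment are my assumptions, not from the article): for an ideal square pixel with 100% fill factor sampled at each pixel center, convolving with the box aperture and then point sampling on an aligned grid reduces to averaging the finely-sampled optical image over each pixel's footprint.

```python
import numpy as np

def digitize(optical, pitch):
    """Digitize a finely-sampled 'continuous' optical image.

    Step 1 (filtering): convolve with the pixel aperture, here an
    ideal square pixel with 100% fill factor (a box one pitch wide).
    Step 2 (sampling): point-sample at the center of each pixel.
    For this aligned-grid case the two steps together reduce to a
    block mean over each pixel's footprint.
    """
    h, w = optical.shape
    h, w = h - h % pitch, w - w % pitch  # trim to a whole number of pixels
    blocks = optical[:h, :w].reshape(h // pitch, pitch, w // pitch, pitch)
    return blocks.mean(axis=(1, 3))
```

A general (non-aligned, arbitrary fill factor) pipeline would keep the convolution and sampling as separate steps; the block-mean shortcut only holds in this idealized case.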

On this page I will explore how the act of digitizing that image – the process of sampling – fundamentally alters what we can resolve. In the next one we will discuss the impact on resolution of the pixel-shift modes available in current mirrorless cameras. Continue reading The Effect of Sampling on Image Resolution

The Richardson-Lucy Algorithm

Deconvolution by the Richardson-Lucy algorithm is achieved by minimizing the convex loss function derived in the last article

(1)   \begin{equation*} J(O) = \sum \bigg (O**PSF - I\cdot \ln(O**PSF) \bigg) \end{equation*}

with

  • J, the scalar quantity to minimize, a function of the ideal image O(x,y)
  • I(x,y), linear captured image intensity laid out in M rows and N columns, corrupted by Poisson noise and blurred by the PSF
  • PSF(x,y), the known two-dimensional Point Spread Function that should be deconvolved out of I
  • O(x,y), the output image resulting from deconvolution, ideally without shot noise and blurring introduced by the PSF
  • **   two-dimensional convolution
  • \cdot   element-wise product
  • ln, element-wise natural logarithm

In what follows indices x and y, running from zero to M-1 and N-1 respectively, are dropped for readability. Articles about algorithms are by definition dry, so continue at your own peril.

So, given captured raw image I blurred by known function PSF, how do we find the minimum value of J yielding the deconvolved image O that we are after?
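Before minimizing J, it helps to be able to evaluate it. Here is a minimal sketch (the function name, FFT-based circular convolution, and the small guard on the logarithm are my assumptions for illustration, not the article's implementation):

```python
import numpy as np

def rl_loss(O, I, psf):
    """Evaluate J(O) = sum( O**PSF - I * ln(O**PSF) ),
    with ** denoting 2D convolution, computed here circularly via the FFT."""
    # O**PSF: multiply spectra, transform back; pad/crop psf to O's shape
    conv = np.real(np.fft.ifft2(np.fft.fft2(O) * np.fft.fft2(psf, s=O.shape)))
    conv = np.maximum(conv, 1e-12)  # guard the logarithm against zeros
    return np.sum(conv - I * np.log(conv))
```

With a delta-function PSF (no blur) and O = I, the convolution returns O unchanged and J collapses to sum(I - I ln I), a useful sanity check.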

Continue reading The Richardson-Lucy Algorithm

Elements of Richardson-Lucy Deconvolution

We have seen that deconvolution by naive division in the frequency domain only works in ideal conditions not typically found in normal photographic settings, in part because of shot noise inherent in light from the scene. Half a century ago William Richardson (1972)[1] and Leon Lucy (1974)[2] independently came up with a better way to deconvolve blurring introduced by an imaging system in the presence of shot noise. Continue reading Elements of Richardson-Lucy Deconvolution

A Simple Model for Sharpness in Digital Cameras – I

The next few posts will describe a linear spatial resolution model that can help a photographer better understand the main variables involved in evaluating the ‘sharpness’ of photographic equipment and related captures. I will show numerically that the combined Spatial Frequency Response (aka Modulation Transfer Function) of a perfect monochrome digital camera without an anti-aliasing filter and its lens can, in two dimensions, be described as the magnitude of the normalized product of the Fourier Transform (FT) of the lens Point Spread Function (PSF) and the FT of the pixel footprint (aperture), convolved with the FT of a square grid of Dirac delta functions centered at each pixel:

    \[ MTF_{2D} = \left|\widehat{ PSF_{lens} }\cdot \widehat{PIX_{ap} }\right|_{pu}\ast\ast\: \widehat{\delta\delta_{pitch}} \]

With a few simplifying assumptions we will see that the effect of the lens and sensor on the spatial resolution of the continuous image on the sensing plane can be broken down into these simple components.  The overall ‘sharpness’ of the captured digital image can then be estimated by combining the ‘sharpness’ of each of them.
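The model is easy to sketch in one dimension. In the following illustration (function names, the 1D simplification, and the magnitude-summed replicas are my assumptions, not the article's), the pre-sampling MTF is the diffraction-limited lens MTF times the pixel aperture's sinc, and convolution with the FT of the sampling grid appears as copies of that product replicated at multiples of the sampling frequency 1/pitch:

```python
import numpy as np

def diffraction_mtf(f, fc):
    """MTF of a diffraction-limited lens with circular pupil
    (incoherent illumination), cutoff frequency fc."""
    s = np.clip(np.abs(f) / fc, 0.0, 1.0)
    return (2 / np.pi) * (np.arccos(s) - s * np.sqrt(1 - s**2))

def pixel_mtf(f, aperture):
    """MTF of a square pixel aperture of the given width.
    np.sinc(x) = sin(pi*x)/(pi*x), so the first zero is at f = 1/aperture."""
    return np.abs(np.sinc(f * aperture))

def sampled_mtf(f, pitch, fc, replicas=2):
    """Pre-sampling MTF replicated at multiples of 1/pitch, the
    frequency-domain effect of the Dirac delta sampling grid.
    Summing magnitudes ignores phase, so this only approximates
    how replicas beyond Nyquist fold back as aliasing."""
    total = np.zeros_like(f, dtype=float)
    for k in range(-replicas, replicas + 1):
        fk = f - k / pitch
        total += diffraction_mtf(fk, fc) * pixel_mtf(fk, pitch)
    return total
```

With a 100% fill factor the aperture width equals the pitch, so the pixel's sinc conveniently nulls the centers of the first replicas; with smaller fill factors more aliased energy survives.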

The stage will be set in this first installment with a little background and perfect components. Following articles will deal with the effect of each component on captured sharpness.

We will learn how to measure MTF curves for our equipment and look at numerical methods to model PSFs and MTFs from the wavefront at the pupil of the lens, along with the theory behind them. Continue reading A Simple Model for Sharpness in Digital Cameras – I