Deconvolution by Naive Division

Now that we have the lens Point Spread Function, it should be straightforward to reverse its blurring effect out of the image, right?

Figure 1. Lens PSF

After all, with a few simplifications typically valid in general photography, we know that the image I(x,y) projected by the lens onto the sensing plane of our digital camera is the continuous image from the scene in object space, projected onto the sensing plane per geometrical optics as O(x,y), convolved with the PSF(x,y) of the lens

(1)   \begin{equation*} I(x,y) = O(x,y) \ast\ast  \, PSF(x,y) \end{equation*}

with \ast\ast representing two-dimensional convolution and (x,y) sensing/image plane coordinates.  We know that convolution in the spatial domain is element-by-element multiplication in the frequency domain, so indicating the Fourier Transform by \mathcal{F}

(2)   \begin{equation*} \mathcal{F}(I(x,y)) = \mathcal{F}(O(x,y)) \times  \, \mathcal{F}(PSF(x,y)) \end{equation*}

why not just divide element-wise the FT of the image on the sensing plane I(x,y) by the FT of PSF(x,y) in order to obtain a restored scene image \widehat{O}(x,y) unblurred by the lens? And isn’t this operation deconvolution, since it undoes convolution?  Yes, yes but…
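
As a quick sanity check of Equation 2, here is a minimal Octave/Matlab sketch with toy arrays (hypothetical names, not the actual image or PSF of this post), verifying that the DFT of a zero-padded convolution equals the element-wise product of the zero-padded DFTs:

  obj = rand(8);                               % toy 8x8 'object' O
  psf = rand(5);                               % toy 5x5 'PSF'
  img = conv2(obj, psf);                       % spatial-domain convolution, 12x12 'image' I
  F   = @(x) fft2(x, 12, 12);                  % DFTs zero-padded to the common 12x12 size
  max(max(abs(fft2(img) - F(obj) .* F(psf))))  % ~1e-15: the two sides of Equation 2 agree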

Deconvolution by Division

We’ll drop image plane coordinates (x,y) for the rest of the article for readability.  In order to restore the unblurred object image O from the actual image I captured on the sensing plane, we divide the lens blur out in the frequency domain, then return to the spatial domain via the inverse FT (\mathcal{F}^{-1})

(3)   \begin{equation*} \widehat{O}_{naive} = \mathcal{F}^{-1} \Bigg(\frac{\mathcal{F}(I)}{\mathcal{F}(PSF) }\Bigg) \end{equation*}

with the division above element by element.  In digital photography the image is sampled in the spatial domain by an M×N sensor, and conversion to and from the frequency domain is performed by the Discrete Fourier Transform, which produces a spectrum of the same size.  Since the division is performed element-wise, frequency by frequency, we need to ensure that both image and PSF are the same size.

Images are typically thousands of pixels on a side, while the PSF we have in Figure 1 is only 63×63 pixels because, as we saw in the previous post, the PSF decays quickly.  Therefore, starting from a few pixels or so away from the center it is made up of truly tiny values.  No problem, we can pad it with zeros to make it the same size as image I.  Or even better, our PSF was the result of a model, so we can calculate it out to whatever radius we want, keeping in mind that its spectrum will still be full of zeros beyond diffraction extinction in the frequency domain.
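
In Octave/Matlab terms the padding might look like the sketch below (variable names are hypothetical; the circular shift that places the PSF center at element (1,1), equivalent to what psf2otf does, is standard practice for the DFT even though it is not spelled out above):

  [M, N] = size(O);                                % image size, 2048 x 2048 in this example
  psfPad = zeros(M, N);
  psfPad(1:63, 1:63) = PSF;                        % drop the 63x63 PSF into one corner
  psfPad = circshift(psfPad, -floor([63 63]/2));   % move the PSF center to element (1,1)
  psfPad = psfPad / sum(psfPad(:));                % unit volume so mean brightness is preserved
  OTF    = fft2(psfPad);                           % lens transfer function used in the divisions below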

Therefore,

  1. Taking a 2048×2048 target object sampled image O;
  2. Convolving it with the PSF of Figure 1 to obtain I (below left);
  3. Dividing the DFT of I by the DFT of the lens PSF in Figure 1; and
  4. Taking the inverse DFT of that per Equation 3,

we should get the unblurred original sampled image O back, as shown below right:

Figure 2. Naive Deconvolution. Left, Image I, the result of convolution of object O with the lens PSF per Equation 1. Right, Object O restored by dividing the FT of I by the FT of the lens PSF. It is identical to the original.

And indeed Figure 2 right shows the restored image resulting from deconvolution by division to be identical to the original image before blurring. The original is not shown because it is truly identical to the restored one; after all, 5 × n / n is still 5, at least with floating point math.
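
In code, steps 1 to 4 above amount to something like this sketch (O and OTF as in the padding snippet earlier; only a schematic stand-in for the script referenced in the notes):

  I     = real(ifft2(fft2(O) .* OTF));   % step 2: blur O with the lens PSF (circular convolution)
  O_hat = real(ifft2(fft2(I) ./ OTF));   % steps 3 and 4: naive deconvolution per Equation 3
  max(abs(O_hat(:) - O(:)))              % essentially zero: the restoration is exact
  % (any exact zeros in OTF would first need guarding to avoid division by zero)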

Only Works in Ideal Conditions

However, any minimal deviation from ideality in the system, say noise or the blurring function not being known precisely, will cause deconvolution by naive division to become unstable.  This is what adding 2% Gaussian noise to convolved image I (Figure 2 left) and deconvolving it with the same lens PSF as earlier does to the restored image (below right):

Figure 3. Deconvolution by naive division in the presence of noise. Left, Image I, the result of convolution of object O with the lens PSF per Equation 1 plus 2% Gaussian noise. Right, Object O restored by dividing the FT of I by the same FT of the lens PSF as in Figure 2.

Although it may be initially surprising that adding a little noise effectively obliterates the ability to restore a blurred image by naive inverse filtering, it is actually easy to understand why this happens.

As we know, each value of a Discrete Fourier Transform is the weighted sum of the intensity of every single pixel of the image being transformed.  The reverse is true for the inverse transform, and this is where the problem arises.  Just a few unreasonably large values resulting from division in the frequency domain can act like a wrench in the otherwise perfect deconvolution machine.

For instance, if we pad with zeros the tiny 63×63 PSF to match the size of the much larger 2048×2048 sample image, the resulting DFT will be made up of mostly tiny numbers which, when divided into the much larger values of the DFT of the image, will generate potentially huge figures.  This is especially true at higher spatial frequencies, where signal energy approaches zero and the spectrum is dominated by noise.  Division by a very small noisy spectrum results in a very large noisy spectrum that contaminates the whole inverse transform, hence the ‘restored’ image in Figure 3.  This problem makes the method unusable other than in ideal conditions not typically found in photographic practice.
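
Numerically the failure mode can be reproduced with a couple of lines (a sketch only; ‘2%’ is assumed here to mean a standard deviation equal to 2% of the image’s range, which may differ from the exact definition used for Figure 3):

  sigma   = 0.02 * (max(I(:)) - min(I(:)));   % assumed meaning of '2% Gaussian noise'
  In      = I + sigma * randn(size(I));       % blurred image plus noise, as in Figure 3 left
  O_noisy = real(ifft2(fft2(In) ./ OTF));     % naive division now amplifies the noise (Figure 3 right)
  max(1 ./ abs(OTF(:)))                       % gain applied where |OTF| is tiny: enormous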

Wiener-Helstrom to the Rescue

There are ways to mitigate this issue, and they tend to take the names of their proponents.  One of the better known was proposed by Wiener, and indeed Wiener filter deconvolution is a well known method for restoration in the presence of noise.  It multiplies the naive division in the frequency domain by the damping factor shown on the left inside the parentheses below

(4)   \begin{equation*} \widehat{O}_{wiener} = \mathcal{F}^{-1} \Bigg(\frac{1}{1+\frac{1}{|\mathcal{F}(PSF)|^2 \cdot SNR}}\cdot\frac{\mathcal{F}(I)}{\mathcal{F}(PSF) }\Bigg) \end{equation*}

with SNR the ratio of the power spectra of the signal and the noise, ideally estimated at every frequency.  This version of deconvolution by division in the frequency domain incorporates what is known as a least square error or minimum mean square error filter[1].

It is clear that in clean portions of the spectrum, where SNR is high, this additional factor approaches one and drives restored image \widehat{O} to values similar to the naive solution of Equation 3; conversely, in noisy portions, where SNR itself approaches zero, it drives the resulting spectrum towards zero.  In the frequency domain this is equivalent to saying that the Wiener factor turns the overall filter increasingly from high-pass to low-pass as noise power increases, thus mitigating the naive division's amplification of high frequency noise.
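
As a sketch, Equation 4 with the blanket SNR of 50 used for Figure 4 below might be implemented like this (In and OTF as in the earlier snippets; a real implementation would ideally estimate SNR at every frequency):

  SNR   = 50;                                     % blanket signal-to-noise power ratio
  damp  = 1 ./ (1 + 1 ./ (abs(OTF).^2 * SNR));    % Wiener factor: ~1 where |OTF| is large, ~0 where tiny
  O_wnr = real(ifft2(damp .* fft2(In) ./ OTF));   % damped deconvolution by division per Equation 4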

Below, the blurred image I to the left is again corrupted by the addition of 2% Gaussian noise, and the restored image \widehat{O} to the right is recovered using Equation 4 with a blanket SNR of 50.  As you can see, deconvolution by division in the frequency domain, damped by the Wiener filter, has indeed been able to restore some of the lost sharpness – but at the cost of unwelcome artifacts.

Figure 4. Left: Original image blurred by the PSF in Figure 1 and with 2% additive Gaussian noise. Right: restored by Wiener deconvolution with SNR parameter equal to 50. Both images were downsized 2:1 with Photoshop bicubic to reduce their size.

I won’t pursue this approach further for now because, for typical photographic images of relatively high IQ, iterative methods tend to produce better results.  Next.

 

Notes and References

1. Digital Image Processing, Third Edition, Gonzalez and Woods, Pearson 2009.  Section 5.8, p. 353.
2. The Matlab/Octave script used to generate images in this post can be found here.
