*The following approach works if you know the MTF50, in cycles/pixel, of your camera/lens combination as it was set up when the capture you would like to sharpen by deconvolution with a Gaussian PSF was taken.*

The process by which our hardware captures images and stores them as raw data inevitably blurs detail from the scene.

Even with the best equipment and technique, diffraction, lens blur, antialiasing filters, pixel aperture, etc. add up (well, multiply out, as we will see) to degrade the ultimate spatial resolution performance of our camera system – even before we start processing and rendering images for display. Attempting to undo some of the blurring inherent in the capture process is the objective of capture sharpening.

The blurring contribution of the main components in a photographic lens/sensor system can be modeled relatively easily in the frequency domain. The graph below shows how the main hardware components individually attenuate spatial frequency information in our images (dashed lines). The parameters used are those of a Nikon D4 coupled with a Nikkor 85mm:1.8G at f/5.6.

In the spatial frequency domain the combined effect of multiple components is obtained by multiplying their individual responses together. Doing so produces the overall lens/camera system spatial frequency response (aka MTF curve) shown as the solid black line below.
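As a sketch of that multiplication, consider just two of the components, diffraction and pixel aperture (the full model in the graph also includes the lens and the AA filter, and the pitch and wavelength values below are assumptions for illustration, not measurements):

```python
import numpy as np

def mtf_diffraction(f, f_cutoff):
    """MTF of an ideal circular aperture (incoherent light), f in cycles/pixel."""
    x = np.clip(f / f_cutoff, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(x) - x * np.sqrt(1.0 - x**2))

def mtf_pixel_aperture(f):
    """MTF of a 100% fill-factor square pixel: |sinc(f)|, f in cycles/pixel."""
    return np.abs(np.sinc(f))           # np.sinc(x) = sin(pi*x)/(pi*x)

# Assumed parameters, roughly a D4 at f/5.6: 7.3 um pitch, 550 nm light
pitch_mm, N, wavelength_mm = 0.0073, 5.6, 0.00055
f_cutoff = pitch_mm / (wavelength_mm * N)    # diffraction cutoff in cycles/pixel

f = np.linspace(0.0, 0.5, 51)                # spatial frequency, cycles/pixel
total_mtf = mtf_diffraction(f, f_cutoff) * mtf_pixel_aperture(f)
print(f"Total modeled MTF at 0.25 cycles/pixel: {total_mtf[25]:.2f}")
```

Each dashed curve in the graph is one such factor; the solid black Total curve is their product at every frequency.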

The solid green line is the actual MTF curve for the D4+85mm:1.8G as measured off the slanted edge in the referenced raw file by MTF Mapper, the excellent open source MTF analyzer by Frans van den Bergh. You can download the raw file from dpreview. As you can see, theory fits practice fairly well in this case.

Ideally, to undo blurring introduced during the capture process by the hardware, all we would need to do is transform our raw image data to the frequency domain and divide out the combined transfer function that produced the Total Modeled MTF curve. **Division in the frequency domain is called deconvolution**.
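In one dimension, with a noise-free signal and a fully known blur, that round trip can be sketched as follows (the signal and kernel here are arbitrary illustrations, not the article's data):

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.random(64)                 # stand-in for one row of raw values
psf = np.array([0.2, 0.6, 0.2])         # a simple, known blur kernel
kernel = np.zeros(64)
kernel[:3] = psf
kernel = np.roll(kernel, -1)            # center the kernel on index 0

H = np.fft.fft(kernel)                  # the kernel's transfer function
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * H))    # blurring = multiply

restored = np.real(np.fft.ifft(np.fft.fft(blurred) / H))  # deconvolve = divide
print(np.allclose(restored, signal))    # True: noise-free and H never hits zero
```

This only recovers the original exactly because the example is noise-free and the chosen transfer function never gets close to zero – which is precisely where real captures diverge from the ideal, as the next paragraph explains.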

Since in this case the two curves are similar, the result of dividing one by the other should be close to 1 throughout the frequency range, which represents full spatial frequency transfer from the scene to the raw file. Hardware blurring undone then, mission accomplished!

Not so fast. For a number of reasons, not the least of which is that division in the frequency domain amplifies noise dramatically wherever the signal's MTF approaches zero, that's easier said than done.

There is a rough shortcut, however. Theory says that the more components contribute to the degradation of the system's spatial frequency response, the more the system's Total MTF curve starts to look like a Gaussian's – regardless of what the individual component MTF curves look like (a consequence of the central limit theorem). So how close is the D4+85mm:1.8G Total MTF curve to a Gaussian's for the given set up? You can see both curves plotted below; the Gaussian is shown as a yellow dotted line and corresponds to a PSF with a standard deviation (radius) of 0.65 pixels.

Not a bad fit for the D4+85mm:1.8G at f/5.6. If we divide (deconvolve) the system's Total Measured MTF curve by the Gaussian's, the resulting image should in theory show the following MTF spatial frequency response:

Recall that if we ideally had full transfer of all spatial frequencies from the scene to the image recorded in the raw data, the Total MTF curve would be a straight line with a value of 1 throughout the range (well, Shannon-Nyquist says that's impossible, so ignore for the moment frequencies much above 0.5 cycles/pixel, the subject of another post). Deconvolution by the Gaussian PSF at the specified radius gives us a fairly decent approximation of just that.

If we had chosen a different radius (standard deviation) for the Gaussian however, things would not have looked as pretty. Here for instance is a radius of 1 pixel (note the change of vertical scales):

Watch the higher spatial frequencies get amplified through the roof as the Gaussian is no longer a good proxy for system blur and the division produces some seriously unreal results. Deconvolution plug-in designers resort to advanced low-pass filters to try to keep the higher frequencies in check.

So how do we **choose the radius for deconvolution with a generic Gaussian PSF** for a given camera/lens if we know its MTF50 spatial resolution as set up?

One way is to notice that the D4's well behaved Total MTF curve looks like a reverse S, just like the Gaussian's. Let's then choose a value for the radius/standard deviation of the Gaussian that will make the two curves intersect in the middle, at MTF50, with half of the Gaussian curve overestimating and the other half underestimating the Total Measured MTF (see Figure 2).

The standard deviation of a Gaussian PSF that will result in MTF50 at spatial frequency s in cycles per pixel is:

σ = √(ln 2) / (√2 · π · s) ≈ 0.187 / s

Figure 1 shows measured MTF50 at s=0.29 cycles per pixel (to convert from different units of spatial resolution see this article) for our example camera/lens combination, which when plugged into the equation results in a corresponding Gaussian PSF radius of 0.646 pixels. If the camera/lens system is not well behaved (and by that I mean that the Measured System MTF curve is not approximately Gaussian) all bets are off.
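The arithmetic is easy to verify: solving the Gaussian MTF, exp(−2π²σ²f²), for MTF = 0.5 at f = s and plugging in the measured s = 0.29 reproduces the 0.646-pixel radius:

```python
import math

def gaussian_sigma_from_mtf50(s):
    """Std dev in pixels of a Gaussian PSF whose MTF reaches 0.5 at s cycles/pixel,
    from MTF(f) = exp(-2 * pi^2 * sigma^2 * f^2) solved for MTF(s) = 0.5."""
    return math.sqrt(math.log(2.0)) / (math.sqrt(2.0) * math.pi * s)

sigma = gaussian_sigma_from_mtf50(0.29)   # the measured MTF50 of the example setup
print(f"sigma = {sigma:.3f} pixels")      # matches the 0.646 pixels quoted above

# sanity check: the Gaussian MTF at s should indeed be one half
assert abs(math.exp(-2.0 * math.pi**2 * sigma**2 * 0.29**2) - 0.5) < 1e-12
```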

That’s one way to estimate the radius for deconvolution by a Gaussian PSF with well behaved cameras such as the D4 when the MTF50 of the set up is known. Applying deconvolution with that radius will very roughly attempt to undo the blurring effects of the hardware during the capture process, which is what Capture Sharpening is all about – keeping in mind that we have not dealt with blurring introduced by the demosaicing process, nor with several practical issues linked to noise and insufficient energy in the measured curve.
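Putting the pieces together, a two-dimensional sketch of such a capture-sharpening pass might look like the following – on synthetic data, with the 0.646-pixel radius derived above and a hypothetical Wiener-style noise term standing in for whatever filtering a real plug-in would apply:

```python
import numpy as np

def gaussian_psf(size, sigma):
    """Normalized 2-D Gaussian PSF on a size x size grid."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

rng = np.random.default_rng(1)
image = rng.random((64, 64))                      # synthetic stand-in for raw data

psf = np.fft.ifftshift(gaussian_psf(64, 0.646))   # the radius derived in the text
H = np.fft.fft2(psf)                              # its transfer function

blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * H))

# Damped (Wiener-style) division so near-zero H values don't explode the noise
nsr = 0.01                                        # hypothetical noise-to-signal ratio
G = np.conj(H) / (np.abs(H)**2 + nsr**2)
restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))

print(np.mean(np.abs(restored - image)) < np.mean(np.abs(blurred - image)))
```

The restored frame ends up measurably closer to the original than the blurred one – the capture-sharpening idea in miniature, minus demosaicing and real-world noise.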

Of course if the set up changes so does the radius, as you can read in the next post.