Point Spread Function and Capture Sharpening

A Point Spread Function is the image projected on the sensing plane when our cameras are pointed at a single, bright, infinitesimally small Point of light, like a distant star on a perfectly dark and clear night.  Ideally, that's also how it would appear on the sensing material (silicon) of our camera sensors: a singularly small yet bright point of light surrounded by pitch black.  However, a PSF can never look like a perfect point, because in order to reach silicon the light has to travel at least through an imperfect lens (1) of finite aperture (2) and various filters (3), and only then does it finally land, typically via a microlens, on a squarish photosite of finite dimensions (4).

Each time it passes through one of these elements the Point of light is affected and spreads out a little more, in slightly different ways, so that by the time it reaches silicon it is no longer a perfect Point but a slightly blurred one: the image that this spread-out Point makes on the sensing material is called the System's Point Spread Function.  It is what we try to undo through Capture Sharpening.

Each one of the four elements above can be looked at in isolation: what PSF does the lens alone have?  What about the AA filter, or the pixel shape (aperture)?  Combining the PSF of each individual stage results in the system's total PSF as the camera and lens were set up at the time of capture.  In the spatial domain the individual PSFs are convolved together to generate the System PSF; in the frequency domain the corresponding transforms (MTFs) are simply multiplied together.
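For the numerically inclined, here is a minimal sketch of that equivalence in Python/NumPy.  The individual PSFs are only illustrative stand-ins (Gaussians for the lens and AA filter blur, a box for the squarish pixel aperture, all with made-up widths), not measured data; the point is simply that convolving them in the spatial domain and multiplying their transforms in the frequency domain describe the same System response.

```python
import numpy as np

# Common 1D spatial axis, in microns (0.1 micron sampling; purely illustrative).
x = np.linspace(-10, 10, 201)

def gaussian_psf(sigma):
    """1D Gaussian used here only as a stand-in for one stage's PSF."""
    p = np.exp(-x**2 / (2 * sigma**2))
    return p / p.sum()                     # normalize to unit area

lens_psf  = gaussian_psf(1.5)              # (1)+(2) lens blur incl. diffraction, made-up width
aa_psf    = gaussian_psf(1.0)              # (3) AA filter, made-up width
pixel_psf = np.where(np.abs(x) <= 2.5, 1.0, 0.0)   # (4) 5 micron square pixel aperture
pixel_psf /= pixel_psf.sum()

# Spatial domain: convolve the individual PSFs to get the System PSF.
system_psf = np.convolve(np.convolve(lens_psf, aa_psf, mode='same'),
                         pixel_psf, mode='same')

# Frequency domain: the magnitudes of the transforms (the MTFs) multiply.
system_mtf = (np.abs(np.fft.rfft(lens_psf)) *
              np.abs(np.fft.rfft(aa_psf)) *
              np.abs(np.fft.rfft(pixel_psf)))

# The two routes agree up to small numerical/truncation error.
diff = np.max(np.abs(np.abs(np.fft.rfft(system_psf)) - system_mtf))
print(f"max difference between spatial and frequency routes: {diff:.1e}")
```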

What does a System PSF look like?  Here is its intensity profile in one dimension (the 1D projection of the 2D PSF onto the sensing plane), obtained through MTF Mapper:

D90 Line Spread Function

For reasons that we will see in later posts it is called a Line, instead of a Point, Spread Function because it was obtained by capturing the image of an edge: it yields information about the PSF in the direction perpendicular to the edge.  If the imaging system were perfect it would simply be a vertical impulse at position zero**.
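To make the edge-to-LSF connection concrete, here is a small sketch with synthetic data (an arbitrary Gaussian standing in for the System PSF, not MTF Mapper's actual slanted-edge machinery): blurring a perfect step with the PSF gives the Edge Spread Function, and differentiating the ESF recovers the Line Spread Function.

```python
import numpy as np

x = np.linspace(-10, 10, 401)                    # positions across the edge, microns (illustrative)
ideal_edge = (x >= 0).astype(float)              # perfect dark-to-bright step

psf = np.exp(-x**2 / (2 * 1.8**2))               # stand-in System PSF (Gaussian, made-up width)
psf /= psf.sum()

esf = np.convolve(ideal_edge, psf, mode='same')  # blurred edge = Edge Spread Function
lsf = np.gradient(esf, x)                        # derivative of the ESF = Line Spread Function

# For a perfect system the LSF would be an impulse at zero; here its peak
# sits near zero and its width mirrors the PSF that blurred the edge.
print(f"LSF peak is at x = {x[np.argmax(lsf)]:+.2f} micron")
```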

Clearly, the better and more ideal the hardware, the smaller the image that the System PSF of a point of light makes on the sensing plane, and the narrower its projection in the figure above.  For instance an excellent, well constructed and well corrected lens (1) would contribute less blurring/spread than a lesser one.

With Capture Sharpening we attempt ‘to restore any sharpness that was lost in the capture process’; in other words, we try to undo the blurring effects of the System PSF.  How do we do that?  See the next post.

 

** However, there is a physical limit to how small the PSF can be, dictated by the physics of the lens aperture (assumed circular here).  Whenever a Point of light passes through a circular aperture it spreads out through a process called diffraction, forming an Airy disk whose first zero has diameter

Airy Disk Diameter = 2.44 * lambda * N, in the same units as lambda

with lambda the wavelength of the point of light and N the f-number of the lens as set up.  For instance, with green light of about 0.550 microns at f/5.6, diffraction alone would produce the image of an Airy disk on the sensing plane with a first zero diameter of about 7.5 microns.
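As a quick sanity check of the numbers above, the formula is easy to evaluate in a couple of lines (the helper name airy_disk_diameter is just an illustrative choice):

```python
def airy_disk_diameter(wavelength_um: float, f_number: float) -> float:
    """First-zero diameter of the diffraction (Airy) pattern, in microns."""
    return 2.44 * wavelength_um * f_number

# Green light of about 0.550 microns at f/5.6:
print(f"{airy_disk_diameter(0.550, 5.6):.2f} microns")   # ~7.52 microns
```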

 
