Imagine a bucolic scene on a clear sunny day at the equator, sand warmed by the tropical sun with a typical irradiance of about 1000 watts per square meter. As discussed earlier, we could express this quantity as illuminance in lumens per square meter – or as a certain number of photons per second over an area of interest.
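As a rough worked example of that last conversion (assuming monochromatic green light at 550 nm as a stand-in for the photopic peak, and an illustrative area of 1 square meter – neither figure is from the scene description), the photon arrival rate follows from the energy carried by each photon, $hc/\lambda$:

```python
h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light in vacuum, m/s
wavelength = 550e-9  # assumed monochromatic green light, m

E = 1000.0           # irradiance on the sand, W/m^2
A = 1.0              # area of interest, m^2 -- assumed for illustration

energy_per_photon = h * c / wavelength          # joules per photon
photons_per_second = E * A / energy_per_photon  # photons arriving per second
```

So on the order of $10^{21}$ photons land on each square meter of sunlit sand every second – a useful sanity check before following them through the lens.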
How many photons per second per unit area can we expect on the camera’s image plane (its irradiance)?
In answering this question we will derive the Camera Equation as a function of opening angles – and set the stage for the next article on lens pupils. Note that all quantities in this article depend on wavelength; that dependence is left implicit in the formulas to keep them readable.
The next few articles will outline the first tiny steps towards achieving perfect capture sharpening, that is, deconvolution of an image by the Point Spread Function (PSF) of the lens used to capture it. This is admittedly a complex subject, fraught with myriad ever-changing variables even in a lab, let alone in the field. But studying it can give a glimpse of the possibilities and insights into the processes involved.
I will explain the steps I followed and show the resulting images and measurements. Jumping the gun, the blue line below represents the starting system Spatial Frequency Response (SFR)[1], the black one unattainable/undesirable perfection and the orange one the result of part of the process outlined in this series.
In the last article we saw that the intensity Point Spread Function and the Modulation Transfer Function of a lens could be easily approximated numerically by applying Discrete Fourier Transforms to its generalized exit pupil function twice in sequence.[1]
Obtaining the 2D DFTs is easy: simply feed M×N numbers representing the two-dimensional complex image of the Exit Pupil function to a Fast Fourier Transform routine and, presto, it produces M×N numbers representing the amplitude of the PSF on the sensing plane. Figure 1a shows a simple case where the pupil function is a uniform disk, representing the circular aperture of a perfect lens, with M×N = 1024×1024. Figure 1b is the resulting intensity PSF.
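A minimal numerical sketch of this pupil-to-PSF-to-MTF pipeline, using NumPy's FFT routines (grid size and normalization choices are mine, not prescribed by the article):

```python
import numpy as np

M = 1024
# Pupil-plane coordinates, with the unit-radius aperture in the
# central portion of the grid so the PSF is adequately sampled
x = np.linspace(-2, 2, M)
X, Y = np.meshgrid(x, x)
pupil = (X**2 + Y**2 <= 1.0).astype(float)  # uniform disk: perfect lens

# Amplitude on the sensing plane: centered DFT of the pupil function
amp = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
psf = np.abs(amp)**2          # intensity PSF (an Airy pattern)
psf /= psf.sum()              # normalize to unit volume

# Second DFT in sequence: MTF is the magnitude, normalized at DC
mtf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf))))
mtf /= mtf.max()
```

The `ifftshift`/`fftshift` bookkeeping simply keeps the origin at the center of each array, so the peak of the PSF and the DC point of the MTF both land at index `M//2`.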
Simple and fast. Wonderful. Below is a slice through the center, the 513th row, zoomed in. Hmm… what are the physical units on the axes of the data produced by the DFT?
Goodman, in his excellent Introduction to Fourier Optics[1], describes how an image is formed on a camera sensing plane starting from first principles, that is, electromagnetic propagation according to Maxwell’s wave equation. If you want the play-by-play account I highly recommend his math-intensive book. But for the budding photographer it is sufficient to know what happens at the Exit Pupil of the lens, because after that the transformations to Point Spread and Modulation Transfer Functions are straightforward, as we will show in this article.
The following diagram exemplifies the last few millimeters of the journey that light from the scene has to travel in order to be absorbed by a camera’s sensing medium. Light from the scene, in the form of an electromagnetic field, arrives at the front of the lens. It goes through the lens, being partly blocked and distorted by it, and arrives at the lens’s virtual back end, the Exit Pupil; we’ll call this blocking/distorting action the pupil function. Other than in very simple cases, the Exit Pupil does not necessarily coincide with a specific physical element or Principal surface.[iv] It is a convenient mathematical construct which condenses all of the light-transforming properties of a lens into a single plane.
The complex light field at the Exit Pupil’s two-dimensional plane is then as shown below (not to scale; the product of the two arrays is element-by-element):
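This element-by-element product can be sketched numerically as follows. The amplitude array is the aperture transmission and the phase array encodes the wavefront error; the quarter-wave of defocus used here is purely an illustrative aberration of my choosing, not one from the article:

```python
import numpy as np

M = 512
x = np.linspace(-1, 1, M)           # pupil coordinates, unit radius
X, Y = np.meshgrid(x, x)
R2 = X**2 + Y**2
inside = R2 <= 1.0

# Amplitude part: uniform transmission inside the circular aperture
A = inside.astype(float)

# Wavefront error W in units of wavelengths -- a quarter-wave of
# defocus (Zernike defocus term), assumed for illustration
W = 0.25 * (2 * R2 - 1) * inside

# Generalized pupil function: element-by-element product of the
# amplitude array and the phase array exp(i * 2*pi * W)
P = A * np.exp(1j * 2 * np.pi * W)
```

Feeding `P` (instead of a plain disk) to the DFT pipeline of the previous article yields the PSF of the aberrated lens.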
The next few posts will describe a linear spatial resolution model that can help a photographer better understand the main variables involved in evaluating the ‘sharpness’ of photographic equipment and related captures. I will show numerically that the combined spatial frequency response (MTF) of a perfect AA-less monochrome digital camera and lens in two dimensions can be described as the magnitude of the normalized product of the Fourier Transform (FT) of the lens Point Spread Function and the FT of the pixel footprint (aperture), convolved with the FT of a rectangular grid of Dirac delta functions centered at each pixel:
With a few simplifying assumptions we will see that the effect of the lens and sensor on the spatial resolution of the continuous image on the sensing plane can be broken down into these simple components. The overall ‘sharpness’ of the captured digital image can then be estimated by combining the ‘sharpness’ of each of them.
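As a one-dimensional sketch of combining component responses (the f-number, wavelength, and pixel pitch below are assumed values for illustration, and sampling by the pixel grid is left out), the pre-sampling system MTF is just the product of the diffraction MTF of a perfect circular aperture and the sinc MTF of a 100% fill-factor square pixel:

```python
import numpy as np

# Spatial frequencies on the sensing plane, cycles/mm
f = np.linspace(0, 300, 601)

# Diffraction MTF of a perfect (aberration-free) circular aperture
lam = 0.55e-3            # wavelength in mm (550 nm green light) -- assumed
N = 5.6                  # lens f-number -- assumed
fc = 1.0 / (lam * N)     # diffraction cutoff frequency, cycles/mm
s = np.clip(f / fc, 0.0, 1.0)
mtf_lens = (2 / np.pi) * (np.arccos(s) - s * np.sqrt(1 - s**2))

# Pixel-aperture MTF: |sinc| of a square pixel with 100% fill factor
pitch = 4.5e-3           # pixel pitch in mm (4.5 um) -- assumed
mtf_pixel = np.abs(np.sinc(f * pitch))   # np.sinc(x) = sin(pi*x)/(pi*x)

# Combined pre-sampling MTF of lens + sensor
mtf_system = mtf_lens * mtf_pixel
```

Multiplying the components in the frequency domain is equivalent to convolving their PSFs on the sensing plane, which is what makes the breakdown so convenient.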
When capturing a typical photograph, light from one or more sources is reflected from the scene, reaches the lens, goes through it and eventually hits the sensing plane.
In photography, Exposure is the quantity of visible light per unit area incident on the image plane during the time that it is exposed to the scene. Exposure is intuitively proportional to Luminance from the scene $L$ and exposure time $t$. It is inversely proportional to the square of the lens f-number $N$, because the f-number determines the relative size of the cone of light captured from the scene. You can read more about the theory in the article on angles and the Camera Equation.
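These proportionalities are commonly collected (for a subject at infinity, ignoring lens transmission losses and off-axis falloff – this is the standard textbook form, not a quote from the article) into the photographic exposure equation:

```latex
H \;=\; \frac{\pi}{4}\,\frac{L\,t}{N^2}
```

where $H$ is the exposure in lux-seconds when $L$ is in candelas per square meter and $t$ is in seconds.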