Tag Archives: aperture

Angles and the Camera Equation

Imagine a bucolic scene on a clear sunny day at the equator, sand warmed by the tropical sun with a typical irradiance (E) of about 1000 watts per square meter.  As discussed earlier, we could express this quantity as illuminance in lumens per square meter (lx) – or as a certain number of photons per second (\Phi) over an area of interest (\mathcal{A}).

(1)   \begin{equation*} E = \frac{\Phi}{\mathcal{A}}  \; (W, lm, photons/s) / m^2 \end{equation*}
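As a rough back-of-the-envelope check (my own illustrative numbers, treating the full 1000 W/m² as if it were monochromatic green light at 550 nm, whereas real sunlight spans a broad spectrum), the photon flux implied by that irradiance can be estimated as:

```python
h = 6.626e-34        # Planck's constant, J*s
c = 2.998e8          # speed of light, m/s
wavelength = 550e-9  # assumed mid-visible wavelength, m
E = 1000.0           # irradiance from the scene, W/m^2

photon_energy = h * c / wavelength  # energy per photon, ~3.6e-19 J
photon_flux = E / photon_energy     # photons per second per square meter
```

That works out to a few times 10^21 photons per second falling on every square meter of sunlit sand.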

How many photons/s per unit area can we expect on the camera’s image plane (irradiance E_i )?

Figure 1.  Irradiation transfer from scene to sensor.

In answering this question we will discover the Camera Equation as a function of opening angles – and set the stage for the next article on lens pupils.  By the way, all quantities in this article depend on wavelength; that dependence will be left implicit in the formulas to keep them readable.

Continue reading Angles and the Camera Equation

Capture Sharpening: Estimating Lens PSF

The next few articles will outline the first few tiny steps towards achieving perfect capture sharpening, that is, deconvolution of an image by the Point Spread Function (PSF) of the lens used to capture it.  This is admittedly a complex subject, fraught with myriad ever-changing variables even in a lab, let alone in the field.  But studying it can give a glimpse of the possibilities and insights into the processes involved.

I will explain the steps I followed and show the resulting images and measurements.  Jumping the gun, the blue line below represents the starting system Spatial Frequency Response (SFR)[1], the black one unattainable/undesirable perfection and the orange one the result of part of the process outlined in this series.

Figure 1. Spatial Frequency Response of the imaging system before and after Richardson-Lucy deconvolution by the PSF of the lens that captured the original image.
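For readers who want to experiment along, the Richardson-Lucy iteration mentioned in the caption can be sketched in a few lines of numpy.  This is a minimal illustrative version, not the actual pipeline behind the measurements above; it assumes the PSF is known, the data are noise-free and the circular boundary conditions of the FFT are acceptable:

```python
import numpy as np

def rl_deconvolve(observed, psf, iterations=50, eps=1e-12):
    """Richardson-Lucy deconvolution using FFT-based circular convolution.

    `psf` must have the same shape as `observed`, centered at (H//2, W//2).
    """
    psf = psf / psf.sum()
    otf = np.fft.rfft2(np.fft.ifftshift(psf))  # optical transfer function
    estimate = np.full_like(observed, observed.mean())
    for _ in range(iterations):
        # Re-blur the current estimate and compare it to the observed data
        blurred = np.fft.irfft2(np.fft.rfft2(estimate) * otf, s=observed.shape)
        ratio = observed / np.maximum(blurred, eps)
        # Convolve the ratio with the flipped PSF (conjugate OTF) and correct
        correction = np.fft.irfft2(np.fft.rfft2(ratio) * np.conj(otf),
                                   s=observed.shape)
        estimate = estimate * correction
    return estimate
```

Each iteration re-blurs the current estimate, compares it to the observed image, and multiplies the estimate by a correction term convolved with the flipped PSF – which in the frequency domain is simply the conjugate of the OTF.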

Continue reading Capture Sharpening: Estimating Lens PSF

Wavefront to PSF to MTF: Physical Units

In the last article we saw that the intensity Point Spread Function and the Modulation Transfer Function of a lens could be easily approximated numerically by applying Discrete Fourier Transforms to its generalized exit pupil function \mathcal{P} twice in sequence.[1]

Numerical Fourier Optics: amplitude Point Spread Function, intensity PSF and MTF

Obtaining the 2D DFTs is easy: simply feed M×N numbers representing the two dimensional complex image of the Exit Pupil function in its uv space to a Fast Fourier Transform routine and, presto, it produces M×N numbers representing the amplitude of the PSF on the xy sensing plane.  Figure 1a shows a simple case where pupil function \mathcal{P} is a uniform disk representing the circular aperture of a perfect lens with M×N = 1024×1024.  Figure 1b is the resulting intensity PSF.

Figure 1. 1a Left: Array of numbers representing a circular aperture (zeros for black and ones for white).  1b Right: Array of numbers representing the PSF of image 1a (contrast slightly boosted).
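The two-DFT recipe is short enough to sketch in numpy.  The pupil radius below is arbitrary – the physical scaling of the axes is exactly the question taken up next:

```python
import numpy as np

N = 1024
x = np.arange(N) - N // 2
X, Y = np.meshgrid(x, x)
# Uniform disk of ones on a field of zeros: the circular aperture of a perfect lens
pupil = (np.hypot(X, Y) <= N // 16).astype(float)

# First DFT: amplitude PSF on the sensing plane (shifts keep the center at N//2)
apsf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
psf = np.abs(apsf) ** 2                      # intensity PSF: the Airy pattern

# Second DFT: magnitude gives the MTF, normalized so that MTF(0) = 1
mtf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf))))
mtf /= mtf.max()
```

The `fftshift`/`ifftshift` pairs only bookkeep where the origin sits in the arrays; the slice `psf[N // 2, :]` is the central row discussed below.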

Simple and fast.  Wonderful.  Below is a slice through the center, the 513th row, zoomed in.  Hmm….  What are the physical units on the axes of displayed data produced by the DFT? Continue reading Wavefront to PSF to MTF: Physical Units

Aberrated Wave to Image Intensity to MTF

Goodman, in his excellent Introduction to Fourier Optics[1], describes how an image is formed on a camera sensing plane starting from first principles, that is electromagnetic propagation according to Maxwell’s wave equation.  If you want the play-by-play account I highly recommend his math-intensive book.  But for the budding photographer it is sufficient to know what happens at the Exit Pupil of the lens, because after that the transformations to Point Spread and Modulation Transfer Functions are straightforward, as we will show in this article.

The following diagram exemplifies the last few millimeters of the journey that light from the scene has to travel in order to be absorbed by a camera’s sensing medium.  Light from the scene, in the form of field U, arrives at the front of the lens.  It travels through the lens, partly blocked and distorted by it, until it reaches the lens’s virtual back end, the Exit Pupil; we’ll call this blocking/distorting function P.  Other than in very simple cases, the Exit Pupil does not necessarily coincide with a specific physical element or Principal surface.[iv]  It is a convenient mathematical construct which condenses all of the light transforming properties of a lens into a single plane.

The complex light field at the Exit Pupil’s two dimensional uv plane is then  U\cdot P as shown below (not to scale, the product of the two arrays is element-by-element):

Figure 1. Simplified schematic diagram of the space between the exit pupil of a camera lens and its sensing plane. The space is assumed to be filled with air.
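A minimal numerical sketch of the generalized pupil function P and the element-by-element product U\cdot P – the defocus-like wavefront term below is purely illustrative, as is the choice of an incoming plane wave for U:

```python
import numpy as np

N = 256
x = (np.arange(N) - N // 2) / (N // 4)   # pupil-plane coordinates; radius 1 = pupil rim
X, Y = np.meshgrid(x, x)
rho2 = X ** 2 + Y ** 2

A = (rho2 <= 1.0).astype(float)          # amplitude: clear circular aperture (the blocking)
W = 0.5 * (2 * rho2 - 1)                 # wavefront error in waves, e.g. defocus (the distorting)
P = A * np.exp(2j * np.pi * W)           # generalized pupil function
U = np.ones((N, N), dtype=complex)       # idealized plane wave arriving from the scene
field = U * P                            # element-by-element product at the Exit Pupil
```

With W = 0 everywhere this reduces to the uniform disk of a perfect lens; any aberration enters only through the phase term.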

Continue reading Aberrated Wave to Image Intensity to MTF

A Simple Model for Sharpness in Digital Cameras – I

The next few posts will describe a linear spatial resolution model that can help a photographer better understand the main variables involved in evaluating the ‘sharpness’ of photographic equipment and related captures.   I will show numerically that the combined spatial frequency response (MTF) of a perfect AA-less monochrome digital camera and lens in two dimensions can be described as the magnitude of the normalized product of the Fourier Transform (FT) of the lens Point Spread Function and the FT of the pixel footprint (aperture), convolved with the FT of a rectangular grid of Dirac delta functions centered at each pixel:

    \[ MTF_{2D} = \left|\widehat{ PSF_{lens} }\cdot \widehat{PIX_{ap} }\right|_{pu}\ast\ast\: \widehat{\delta\delta_{pitch}} \]

With a few simplifying assumptions we will see that the effect of the lens and sensor on the spatial resolution of the continuous image on the sensing plane can be broken down into these simple components.  The overall ‘sharpness’ of the captured digital image can then be estimated by combining the ‘sharpness’ of each of them.
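As a preview of where the series is headed, here is a one-dimensional sketch of that decomposition for perfect components – a diffraction-limited lens MTF multiplied by the |sinc| MTF of a 100% fill-factor square pixel.  The wavelength, f-number and pixel pitch below are my own example values, not measurements from the series:

```python
import numpy as np

wl = 0.55e-3    # assumed wavelength, mm (green light)
fnum = 5.6      # assumed lens f-number
pitch = 4.5e-3  # assumed pixel pitch, mm

f = np.linspace(0, 1 / pitch, 400)  # spatial frequency, cycles/mm

# Diffraction-limited MTF of a lens with a circular aperture
fc = 1 / (wl * fnum)                # diffraction cutoff frequency
s = np.clip(f / fc, 0.0, 1.0)
mtf_lens = (2 / np.pi) * (np.arccos(s) - s * np.sqrt(1 - s ** 2))

# Pixel-aperture MTF: |sinc| of a 100% fill-factor square pixel
mtf_pixel = np.abs(np.sinc(f * pitch))  # np.sinc is the normalized sin(pi*x)/(pi*x)

mtf_system = mtf_lens * mtf_pixel   # combined 'sharpness' of the two components
```

The combined curve starts at 1 at zero frequency and falls monotonically, reaching zero at the pixel aperture’s first null – the sampling grid (the convolution with the deltas) is deliberately left out of this preview.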

The stage will be set in this first installment with a little background and perfect components.  Later additional detail will be provided to take into account pixel aperture and Anti-Aliasing filters.  Then we will look at simple aberrations.  Next we will learn how to measure MTF curves for our equipment, and look at numerical methods to model PSFs and MTFs from the wavefront at the aperture. Continue reading A Simple Model for Sharpness in Digital Cameras – I

What Is Exposure

When capturing a typical photograph, light from one or more sources is reflected from the scene, reaches the lens, goes through it and eventually hits the sensing plane.

In photography, Exposure is the quantity of visible light per unit area incident on the image plane during the time that it is exposed to the scene.  Exposure is intuitively proportional to Luminance from the scene L and exposure time t.  It is inversely proportional to the square of the lens f-number N because the f-number determines the relative size of the cone of light captured from the scene.  You can read more about the theory in the article on angles and the Camera Equation.
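In symbols, with the constant of proportionality omitted, that reads Exposure \propto L\, t / N^2.  A tiny sketch with illustrative values, showing that stopping down two stops while quadrupling the exposure time leaves Exposure unchanged:

```python
def relative_exposure(L, t, N):
    """Exposure on the image plane, up to a lens/geometry constant: L * t / N**2."""
    return L * t / N ** 2

# f/2 -> f/4 is two stops; 4x the exposure time compensates exactly
h1 = relative_exposure(L=1000, t=1 / 100, N=2.0)
h2 = relative_exposure(L=1000, t=1 / 25, N=4.0)
```

Halving the f-number quadruples the area of the cone of light admitted, which is why N enters squared.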

Continue reading What Is Exposure