Tag Archives: f-number

Angles and the Camera Equation

Imagine a bucolic scene on a clear sunny day at the equator, sand warmed by the tropical sun with a typical irradiance (E) of about 1000 watts per square meter.  As discussed earlier, we could express this quantity as illuminance in lumens per square meter (lx) – or as a certain number of photons per second (\Phi) over an area of interest (\mathcal{A}).

(1)   \begin{equation*} E = \frac{\Phi}{\mathcal{A}}  \; (W, lm, photons/s) / m^2 \end{equation*}
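For a rough sense of scale – treating the whole 1000 W/m^2 as if it were green light near 550nm, with an average photon energy of about 3.6\cdot10^{-19} joules (see the Photons Emitted by Light Source article below) – that works out to

\begin{equation*} \frac{\Phi}{\mathcal{A}} \approx \frac{1000 \; W/m^2}{3.6\cdot10^{-19} \; J/photon} \approx 2.8\cdot10^{21} \; photons/(s \cdot m^2) \end{equation*}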

How many photons/s per unit area can we expect on the camera’s image plane (irradiance E_i)?

Figure 1.  Irradiation transfer from scene to sensor.

In answering this question we will discover the Camera Equation as a function of opening angles – and set the stage for the next article on lens pupils.  By the way, all quantities in this article depend on wavelength; that dependence will be left implicit in the formulas to keep them readable.

Continue reading Angles and the Camera Equation

Connecting Photographic Raw Data to Tristimulus Color Science

Absolute Raw Data

In the previous article we determined that the three r_{_L}g_{_L}b_{_L} values recorded in the raw data at the center of the image plane by a digital camera and lens – in units of Data Numbers per pixel, as a function of absolute spectral radiance L(\lambda) at the lens – can be estimated as follows:

(1)   \begin{equation*} r_{_L}g_{_L}b_{_L} =\frac{\pi p^2 t}{4N^2} \int\limits_{380}^{780}L(\lambda) \odot SSF_{rgb}(\lambda)  d\lambda \end{equation*}

with subscript _L indicating absolute-referred units and SSF_{rgb} the three system Spectral Sensitivity Functions.  In this series of articles \odot denotes wavelength-by-wavelength multiplication (what happens to the spectrum of light as it progresses through the imaging system) and the integral is just the area under each of the three resulting curves (integration is what the pixels do during exposure).  Together they represent an inner, or dot, product.  All variables in front of the integral were described previously and can be considered constant for a given photographic setup.
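As a rough illustration of how Equation (1) can be evaluated numerically, here is a Python sketch; the radiance spectrum, Gaussian SSFs and setup constants below are placeholders invented for the example, not measured camera data.

```python
import numpy as np

# Hypothetical spectral data on a 5nm grid (placeholders, not measurements)
wl  = np.arange(380, 781, 5)                # wavelength, nm
L   = np.full(wl.shape, 0.01)               # spectral radiance L(lambda) at the lens
ssf = np.stack([np.exp(-0.5 * ((wl - mu) / 30.0)**2)  # toy Gaussian SSFs for r, g, b
                for mu in (600, 540, 460)])

p = 4.35e-6   # pixel pitch, m
t = 1 / 100   # exposure time, s
N = 8.0       # f-number

# Equation (1): wavelength-by-wavelength product, then the area under each curve
dwl = wl[1] - wl[0]                         # grid spacing for rectangle-rule integration
integrand = L * ssf                         # the odot product, 3 x len(wl)
rgb = np.pi * p**2 * t / (4 * N**2) * integrand.sum(axis=1) * dwl
print(rgb)    # the three absolute-referred values, one per raw channel
```

Continue reading Connecting Photographic Raw Data to Tristimulus Color Science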

The Physical Units of Raw Data

In the previous article we learned that the Spectral Sensitivity Functions of a given digital camera and lens are the result of the interaction of light from the scene with all of the spectrally varied components that make up the imaging system: mainly the lens, the ultraviolet/infrared hot mirror, the Color Filter Array and other filters and, finally, the photoelectric layer of the sensor, which is normally silicon in consumer kit.

Figure 1. The journey of light from source to sensor.  Cone Ω will play a starring role in the narrative that follows.

In this one we will put the process on a more formal theoretical footing, setting the stage for the next few articles on the role of white balance.

Continue reading The Physical Units of Raw Data

Diffracted DOF Aperture Guides: 24-35mm

As a landscape shooter I often wonder whether old rules for DOF still apply to current small pixels and sharp lenses.  I therefore roughly measured the spatial resolution performance of my Z7 with the 24-70mm/4 S in the center to see whether ‘f/8 and be there’ still makes sense today.  The journey and the diffraction- and simple-aberration-aware model were described in the last few posts.  The results are summarized in the Landscape Aperture-Distance charts presented here for the 24, 28 and 35mm focal lengths.

I also present the data in the form of a simplified plot to aid in making the right compromises when the focusing distance is flexible.  This information is valid for the Z7 and kit in the center only, though it probably applies just as well to cameras with similarly spec’d pixels and lenses.

Continue reading Diffracted DOF Aperture Guides: 24-35mm

DOF and Diffraction: 24mm Guidelines

After an exhausting two-and-a-half-hour hike you are finally resting, sitting on a rock at the foot of your destination, a tiny alpine lake, breathing in the thin air and absorbing the majestic scenery.  A cool light breeze suddenly ripples the surface of the water, morphing what has until now been a perfect reflection into an impressionistic interpretation of the impervious mountains in the distance.

The beautiful flowers in the foreground are so close you can touch them, the reflection in the water 10-20m away, the imposing mountains in the background a few hundred meters further out.  You realize you are hungry.  As you search the backpack for the two panini you prepared this morning you begin to ponder how best to capture the scene: subject, composition, Exposure, Depth of Field.

Figure 1. A typical landscape situation: a foreground a few meters away, a mid-ground a few tens of meters out and a background a few hundred meters further out.  Three orders of magnitude.  The focus point was on the running dog, f/16, 1/100s.  Was this a good choice?

Depth of Field.  Where to focus and at what f/stop?  You tip your hat and, just as you look up at the bluest of blue skies, the number 16 starts enveloping your mind, like rays from the warm noon sun.  You dial it in and as you squeeze the trigger that familiar nagging question bubbles up, as it always does in such conditions.  If this were a one-shot deal, was that really the best choice?

In this article we attempt to make explicit some of the trade-offs involved in the choice of Aperture for 24mm landscapes.  The result of the process is a set of guidelines.  The answers are based on the previously introduced diffraction-aware model for sharpness in the center along the depth of field – and a tripod-mounted Nikon Z7 + Nikkor 24-70mm/4 S kit lens at 24mm.
Continue reading DOF and Diffraction: 24mm Guidelines

DOF and Diffraction: Setup

The two-thin-lens model for precision Depth Of Field estimates described in the last two articles is almost ready to be deployed.  In this article we describe the setup used to develop the scenarios outlined in the next one.

The beauty of the hybrid geometrical-Fourier optics approach is that, with an estimate of the field produced at the exit pupil by an on-axis point source, we can generate the image of the resulting Point Spread Function and related Modulation Transfer Function.

Pretend that you are a photon from such a source in front of an f/2.8 lens focused at 10m with about 0.60 microns of third order spherical aberration – and you are about to smash yourself onto the ‘best focus’ observation plane of your camera.  Depending on whether you leave exactly from the in-focus distance of 10 meters or slightly before/after it, the impression you would leave on the sensing plane would look as follows:

Figure 1. PSF of a lens with about 0.6um of third order spherical aberration focused at 10m.

The width of the square above is 30 microns (um), which corresponds to the diameter of the Circle of Confusion used for old-fashioned geometrical DOF calculations with full frame cameras.  The first ring of the in-focus PSF at 10.0m has a diameter of about 2.44\lambda \frac{f}{D} = 3.65 microns.   That’s about the size of the estimated effective square pixel aperture of the Nikon Z7 camera that we are using in these tests.
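For readers who want to experiment, below is a minimal Python/numpy sketch of how PSFs like those in Figure 1 can be generated from a generalized pupil carrying third order spherical aberration and defocus.  The wavelength, sampling and coefficients are illustrative assumptions, not the exact values behind Figure 1.

```python
import numpy as np

lam  = 0.55e-6   # wavelength, m (assumed)
W040 = 0.60e-6   # third order spherical aberration coefficient, m
W020 = 0.0       # defocus coefficient, m; nonzero moves off best focus

n = 1024
x = np.linspace(-2, 2, n)            # pupil plane coordinates, pupil radius = 1
X, Y = np.meshgrid(x, x)
rho2 = X**2 + Y**2                   # squared normalized pupil radius

# Generalized pupil: unit disk carrying wavefront error W020*rho^2 + W040*rho^4
W = W020 * rho2 + W040 * rho2**2
P = np.where(rho2 <= 1.0, np.exp(1j * 2 * np.pi * W / lam), 0)

# Amplitude PSF is the Fourier transform of the pupil; intensity is its modulus squared
apsf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(P)))
psf  = np.abs(apsf)**2
psf /= psf.sum()                     # normalize total energy to 1
```

With both coefficients set to zero the familiar Airy pattern pops out instead, with its first dark ring of diameter 2.44\lambda N as quoted above.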
Continue reading DOF and Diffraction: Setup

Wavefront to PSF to MTF: Physical Units

In the last article we saw that the intensity Point Spread Function and the Modulation Transfer Function of a lens could be easily approximated numerically by applying Discrete Fourier Transforms to its generalized exit pupil function \mathcal{P} twice in sequence.[1]

Numerical Fourier Optics: amplitude Point Spread Function, intensity PSF and MTF

Obtaining the 2D DFTs is easy: simply feed MxN numbers representing the two dimensional complex image of the Exit Pupil function in its uv space to a Fast Fourier Transform routine and, presto, it produces MxN numbers representing the amplitude of the PSF on the xy sensing plane.  Figure 1a shows a simple case where pupil function \mathcal{P} is a uniform disk representing the circular aperture of a perfect lens with MxN = 1024×1024.  Figure 1b is the resulting intensity PSF.

Figure 1. 1a Left: Array of numbers representing a circular aperture (zeros for black and ones for white).  1b Right: Array of numbers representing the PSF of image 1a (contrast slightly boosted).
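The whole recipe fits in a few lines of Python/numpy, shown below with the second DFT added to carry on from intensity PSF to MTF; the 1024×1024 sampling matches Figure 1, everything else is illustrative.

```python
import numpy as np

n = 1024
x = np.linspace(-2, 2, n)
X, Y = np.meshgrid(x, x)
P = (X**2 + Y**2 <= 1.0).astype(float)   # Figure 1a: ones inside the disk, zeros outside

# First DFT: pupil -> amplitude PSF; modulus squared -> intensity PSF (Figure 1b)
apsf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(P)))
psf  = np.abs(apsf)**2

# Second DFT: intensity PSF -> MTF, normalized to one at zero frequency
mtf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf))))
mtf /= mtf.max()

slice_through_center = psf[n // 2, :]    # the row discussed next (513th, 1-indexed)
```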

Simple and fast.  Wonderful.  Below is a slice through the center, the 513th row, zoomed in.  Hmm…  What are the physical units on the axes of the displayed data produced by the DFT?

Continue reading Wavefront to PSF to MTF: Physical Units

Aberrated Wave to Image Intensity to MTF

Goodman, in his excellent Introduction to Fourier Optics[1], describes how an image is formed on a camera sensing plane starting from first principles, that is electromagnetic propagation according to Maxwell’s wave equation.  If you want the play-by-play account I highly recommend his math-intensive book.  But for the budding photographer it is sufficient to know what happens at the Exit Pupil of the lens, because after that the transformations to Point Spread and Modulation Transfer Functions are straightforward, as we will show in this article.

The following diagram exemplifies the last few millimeters of the journey that light from the scene has to travel in order to be absorbed by a camera’s sensing medium.  Light from the scene in the form of field U arrives at the front of the lens.  It goes through the lens, being partly blocked and distorted by it, as it arrives at the lens’s virtual back end, the Exit Pupil; we’ll call this blocking/distorting function P.  Other than in very simple cases, the Exit Pupil does not necessarily coincide with a specific physical element or Principal surface.[iv]  It is a convenient mathematical construct which condenses all of the light transforming properties of a lens into a single plane.

The complex light field at the Exit Pupil’s two dimensional uv plane is then U\cdot P as shown below (not to scale; the product of the two arrays is element-by-element):

Figure 1. Simplified schematic diagram of the space between the exit pupil of a camera lens and its sensing plane. The space is assumed to be filled with air.
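In code the blocking/distorting step is nothing more than an element-by-element product of two complex arrays; a tiny Python sketch with made-up placeholder fields:

```python
import numpy as np

M = N = 1024
U = np.ones((M, N), dtype=complex)      # field U arriving from the scene (placeholder)
Y, X = np.mgrid[-1:1:1j * M, -1:1:1j * N]
P = np.where(X**2 + Y**2 <= 0.25, 1.0 + 0.0j, 0.0)  # toy clear circular pupil

U_exit = U * P   # complex field at the Exit Pupil: element-by-element product
```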

Continue reading Aberrated Wave to Image Intensity to MTF

Taking the Sharpness Model for a Spin

The series of articles starting here outlines a model of how the various physical components of a digital camera and lens can affect the ‘sharpness’ – that is, the spatial resolution – of the images captured in the raw data.  In this one we will pit the model against MTF curves obtained through the slanted edge method[1] from real-world raw captures, both with and without an anti-aliasing filter.

With a few simplifying assumptions, which include ignoring aliasing and phase, the spatial frequency response (SFR or MTF) of a photographic digital imaging system near the center can be expressed as the product of the Modulation Transfer Function of each component in it, all in two dimensions.  For a current digital camera these would typically be the main ones:

(1)   \begin{equation*} MTF_{sys} = MTF_{lens} (\cdot MTF_{AA}) \cdot MTF_{pixel} \end{equation*}
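To make the product concrete, here is a one-dimensional Python sketch of Equation (1) using simple textbook component models – the diffraction MTF of an aberration-free lens with a circular aperture, |sinc| for a square pixel aperture and a cosine for a beam-splitting AA filter.  All parameter values are assumptions for illustration only.

```python
import numpy as np

lam, Nf = 0.55e-6, 5.6                  # wavelength (m) and f-number, assumed
p = 4.35e-6                             # pixel pitch, m
f = np.linspace(0, 250, 501) * 1e3      # spatial frequency, cycles/m (0-250 cy/mm)

# Diffraction MTF of an aberration-free lens with a circular aperture
fc = 1 / (lam * Nf)                     # diffraction cutoff frequency
s = np.clip(f / fc, 0, 1)
mtf_lens = (2 / np.pi) * (np.arccos(s) - s * np.sqrt(1 - s**2))

# Square pixel aperture and (optional) 2-dot beam-splitting AA filter
mtf_pixel = np.abs(np.sinc(f * p))      # np.sinc(x) = sin(pi*x)/(pi*x)
d = 0.5 * p                             # assumed AA displacement
mtf_aa = np.abs(np.cos(np.pi * f * d))

mtf_sys = mtf_lens * mtf_aa * mtf_pixel  # Equation (1), a 1D slice
```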

Continue reading Taking the Sharpness Model for a Spin

Equivalence in Pictures: Focal Length, f-number, diffraction, DOF

Equivalence – as we’ve discussed, one of the fairest ways to compare the performance of two cameras of different physical formats, characteristics and specifications – essentially boils down to two simple realizations for digital photographers:

  1. metrics need to be expressed in units of picture height (or diagonal where the aspect ratio is significantly different) in order to easily compare performance with images displayed at the same size; and
  2. focal length changes proportionally to sensor size in order to capture identical scene content across formats, all other things being equal.

The first realization should be intuitive (see next post).  The second one is the subject of this post: I will deal with it through a couple of geometrical diagrams.
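As a quick numerical illustration of realization 2 – together with the companion f-number scaling discussed elsewhere in this series – here is a trivial Python sketch; the crop factor convention (relative to full frame) is an assumption of the example:

```python
def equivalent_settings(focal_mm, f_number, crop_factor):
    """Scale a full frame focal length and f-number to a smaller format so
    that framing and depth of field stay the same (realization 2 above)."""
    return focal_mm / crop_factor, f_number / crop_factor

# Example: a 50mm f/8 full frame setup on a 2x crop (Micro Four Thirds) body
print(equivalent_settings(50, 8, 2))   # -> (25.0, 4.0)
```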

Continue reading Equivalence in Pictures: Focal Length, f-number, diffraction, DOF

Equivalence and Equivalent Image Quality: Signal

One of the fairest ways to compare the performance of two cameras of different physical characteristics and specifications is to ask a simple question: which photograph would look better if the cameras were set up side by side, captured identical scene content and their output were then displayed and viewed at the same size?

Achieving this setup and answering the question is anything but intuitive because many of the variables involved, like depth of field and sensor size, are not those we are used to dealing with when taking photographs.  In this post I would like to attack this problem by first estimating the output signal of different cameras when set up to capture Equivalent images.

It’s a bit long so I will give you the punch line first: digital cameras of the same generation set up equivalently will typically generate more or less the same signal in e^-, independent of format.  Ignoring noise, lenses and aspect ratio for a moment and assuming the same camera gain and number of pixels, they will produce identical raw files.
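Here is a back-of-the-envelope Python sketch of why: with the same number of pixels, pixel area scales with the square of the format while image plane irradiance scales as 1/N^2, so at equivalent f-numbers the photoelectrons per pixel come out the same.  All numbers below (photon flux, pitches, quantum efficiency) are made up for illustration.

```python
def electrons_per_pixel(photons_per_m2_s, pitch_m, exposure_s, f_number, qe=0.5):
    """Photoelectrons collected by one pixel; sensor irradiance is modeled
    as scene photon flux divided by N^2 (illustrative, constants dropped)."""
    irradiance = photons_per_m2_s / f_number**2
    return irradiance * pitch_m**2 * exposure_s * qe

# Full frame at f/8 vs a 2x crop at the equivalent f/4 with half the pixel pitch
ff   = electrons_per_pixel(1e18, 8.8e-6, 1 / 100, 8)
crop = electrons_per_pixel(1e18, 4.4e-6, 1 / 100, 4)
print(ff, crop)   # same e- per pixel, independent of format
```

Continue reading Equivalence and Equivalent Image Quality: Signal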

Photons Emitted by Light Source

How many photons are emitted by a light source? To answer this question we need to evaluate the following simple formula at every wavelength in the spectral range of interest and add the values up:

(1)   \begin{equation*} \frac{\text{Power of Light in }W/m^2}{\text{Energy of Average Photon in }J/photon} \end{equation*}

The power of light emitted per unit area, in W/m^2 within a small wavelength interval, is called Spectral Exitance, with the symbol M_e(\lambda) when referred to units of energy.  The energy of one photon at a given wavelength is

(2)   \begin{equation*} e_{ph}(\lambda) = \frac{hc}{\lambda}\text{    joules/photon} \end{equation*}

with \lambda the wavelength of light in meters and h and c Planck’s constant and the speed of light in the chosen medium respectively.  Since watts are joules per second, the units of (1) are therefore photons/m^2/s.  Writing it more formally:

(3)   \begin{equation*} M_{ph} = \int\limits_{\lambda_1}^{\lambda_2} \frac{M_e(\lambda)\cdot \lambda \cdot d\lambda}{hc} \text{  $\frac{photons}{m^2\cdot s}$} \end{equation*}
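A numerical Python sketch of Equation (3); the flat spectral exitance below is a placeholder chosen to integrate to 1000 W/m^2 over the visible band – substitute real M_e(\lambda) data in practice.

```python
import numpy as np

h, c = 6.626e-34, 2.998e8                 # Planck's constant (J*s), speed of light (m/s)
wl = np.linspace(380e-9, 780e-9, 401)     # wavelength grid, m

# Placeholder spectral exitance M_e(lambda) in W/m^2 per m of wavelength:
# a flat spectrum integrating to ~1000 W/m^2 over the visible band
M_e = np.full_like(wl, 1000 / (780e-9 - 380e-9))

# Equation (3): photons / m^2 / s, by rectangle-rule integration
dwl = wl[1] - wl[0]
M_ph = np.sum(M_e * wl / (h * c)) * dwl
print(f"{M_ph:.3e} photons/m^2/s")        # ~2.9e21 for this flat spectrum
```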

Continue reading Photons Emitted by Light Source

Deconvolution PSF Changes with Aperture

We have seen in the previous post how the radius for deconvolution capture sharpening with a Gaussian PSF can be estimated for a given setup in well behaved and characterized camera systems.  Some parameters, like pixel aperture and AA strength, should remain stable for a camera/prime lens combination as f-numbers are increased (aperture is decreased) from about f/5.6 on up – the f/stops dear to Full Frame landscape photographers.  But how should the radius for generic Gaussian deconvolution change as the f-number increases from there?
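One hedged way to reason about it, in the spirit of the model used in this series: treat the system blur as the quadrature sum of a diffraction term that grows linearly with f-number and pixel/AA terms that stay put, then read off the equivalent Gaussian sigma.  This is a sketch only – the proportionality constant and component sigmas below are assumptions, not calibrated values.

```python
import numpy as np

def gaussian_radius_px(f_number, lam_um=0.55, pitch_um=4.35,
                       sigma_pix=0.35, sigma_aa=0.0):
    """Rough equivalent Gaussian sigma (in pixels) for capture sharpening.
    The diffraction sigma is taken proportional to lambda*N (assumed k = 0.4);
    pixel aperture and AA sigmas are treated as constant above ~f/5.6."""
    k = 0.4                                          # assumed proportionality constant
    sigma_diff = k * lam_um * f_number / pitch_um    # grows linearly with N
    return np.sqrt(sigma_diff**2 + sigma_pix**2 + sigma_aa**2)

for N in (5.6, 8, 11, 16):
    print(N, round(gaussian_radius_px(N), 2))
```

Continue reading Deconvolution PSF Changes with Aperture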