Over the last two posts we’ve been exploring some of the differences introduced by tweaks to the Color Filter Array of the Phase One IQ3 100MP Trichromatic Digital Back versus its original incarnation, the Standard Back. Refer to those for the background. In this article we will delve into some of these differences quantitatively.
Let’s start with the compromise color matrices we derived from David Chew’s captures of a ColorChecker 24 in the shade of a sunny November morning in Ohio. These are the matrices necessary to convert white balanced raw data to the perceptual CIE XYZ color space, where it is said there should be one-to-one correspondence with colors as perceived by humans, and therefore where most measurements are performed. They are optimized for each back under the conditions of the capture but they are not perfect, hence the word ‘compromise’ in their name:
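Once derived, applying such a matrix is a one-liner: each white balanced raw RGB triplet is multiplied by the 3x3 compromise matrix to yield an XYZ triplet. A minimal Octave/Matlab sketch, with a made-up placeholder matrix rather than the ones derived from David’s captures:

    % Hypothetical 3x3 compromise color matrix (placeholder values only,
    % NOT the matrices derived in this post); rows are X,Y,Z and
    % columns are white balanced raw R,G,B.
    M = [0.6 0.3 0.1;
         0.2 0.7 0.1;
         0.0 0.1 0.9];

    rgb = rand(100, 3);   % stand-in for white balanced raw triplets
    xyz = rgb * M';       % each row is now a CIE XYZ triplet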
We have seen in the last post that Phase One apparently performed a couple of main tweaks to the Color Filter Array of its Medium Format IQ3 100MP back when it introduced the Trichromatic: it made the shapes of the color filter sensitivities more symmetric by eliminating residual transmittance away from the peaks; and it boosted the peak sensitivity of the red (and possibly blue) filter. It did this with the objective of obtaining more accurate, less noisy color out of the hardware, requiring less processing and producing weaker purple fringing to boot.
Both changes carry the compromises discussed in the last article so the purpose of this one and the one that follows is to attempt to measure – within the limits of my tests, procedures and understanding – the effect of the CFA changes from similar raw captures by the IQ3 100MP Standard Back and Trichromatic, courtesy of David Chew. We will concentrate on color accuracy, leaving purple fringing for another time.
It is always interesting when innovative companies push the envelope of the state-of-the-art of a single component in their systems because a lot can be learned from before and after comparisons. I was therefore excited when Phase One introduced a Trichromatic version of their Medium Format IQ3 100MP Digital Back last September because it could allow us to isolate the effects of tweaks to their Bayer Color Filter Array, assuming all else stays the same.
Thanks to two virtually identical captures by David Chew at getDPI, and Erik Kaffehr’s intelligent questions at DPR, in the following articles I will explore the effect on linear color of the new Trichromatic CFA (TC) vs the old one on the Standard Back (SB). In the process we will discover that – within the limits of my tests, procedures and understanding – the Standard Back produces apparently more ‘accurate’ color while the Trichromatic produces better looking matrices, potentially resulting in ‘purer’ signals. Continue reading Phase One IQ3 100MP Trichromatic vs Standard Back Linear Color, Part I→
In the last article we saw that the Point Spread Function and the Modulation Transfer Function of a lens could be easily obtained numerically by applying Discrete Fourier Transforms to its generalized exit pupil function twice in sequence.
Obtaining the 2D DFTs is easy: simply feed MxN numbers representing the two dimensional complex image of the pupil function in its space to a Fast Fourier Transform routine and, presto, it produces MxN numbers that represent the amplitude of the PSF on the sensing plane, as shown below for the pupil function of a perfect lens with a circular aperture and MxN = 1024×1024.
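In Octave/Matlab the whole operation takes a handful of lines. A minimal sketch, with the aperture radius in samples an arbitrary assumption:

    % PSF of a perfect lens with a circular aperture via the 2D DFT
    % of its pupil function.
    N = 1024;
    [x, y] = meshgrid(-N/2 : N/2-1);
    r = 32;                                 % aperture radius, samples (assumed)
    pupil = double(x.^2 + y.^2 <= r^2);     % 1 inside the aperture, 0 outside

    amp = fftshift(fft2(ifftshift(pupil))); % complex amplitude at sensing plane
    psf = abs(amp).^2;                      % intensity PSF (the Airy pattern)
    psf = psf / max(psf(:));                % peak normalized to 1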
Simple and fast. Wonderful. Below is a slice through the center, the 513th row, zoomed in. Hmm…. What are the physical units on the axes of displayed data produced by the DFT?
Less easy – and the subject of this article as seen from a photographic perspective.
Goodman, in his excellent Introduction to Fourier Optics, describes how an image is formed on a camera sensing plane starting from first principles, that is electromagnetic propagation according to Maxwell’s wave equation. If you want the play-by-play account I highly recommend his math-intensive book. But for the budding photographer it is sufficient to know what happens at the exit pupil of the lens, because after that the transformations to Point Spread and Modulation Transfer Functions are straightforward, as we will show in this article.
The following diagram exemplifies the last few millimeters of the journey that light from the scene has to travel in order to smash itself against our camera’s sensing medium. Light from the scene arrives at the front of the lens in the form of an electromagnetic field. It goes through the lens, being partly blocked and distorted by it (we’ll call this blocking/distorting effect the lens’ generalized pupil function) and finally arrives at its back end, the exit pupil. The complex light field at the exit pupil’s two dimensional plane is now as shown below:
This article is about specifying the units of the Discrete Fourier Transform of an image and the various ways that they can be expressed. This apparently simple task can be fiendishly unintuitive.
The image we will use as an example is the familiar Airy Disk from the last few posts, at f/16 with light of mean 530nm wavelength. Zoomed in to the left in Figure 1; and as it looks in its 1024×1024 sample image to the right:
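To anticipate the punch line, the DFT itself only knows sample indices: an N-point transform spans frequencies from -0.5 to just under 0.5 cycles/sample, and physical units follow by dividing by the sampling pitch. A short sketch, with the pitch an assumption for illustration only:

    % Frequency axis of an N-point DFT, expressed in physical units.
    N = 1024;
    f_cs = (-N/2 : N/2-1) / N;    % cycles/sample (i.e. cycles/pixel)
    pitch_mm = 5.3e-3;            % sampling pitch in mm (assumed)
    f_lpmm = f_cs / pitch_mm;     % cycles (line pairs) per mm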
Having shown that our simple two dimensional MTF model is able to predict the performance of the combination of a perfect lens and square monochrome pixel, we now turn to the effect of the sampling interval on spatial resolution according to the guiding formula:

\[ MTF_{Sys2D} = \widehat{PSF_{lens}} \cdot \widehat{PIX} \,\ast\, \widehat{\delta\delta} \]

The hats in this case mean the Fourier Transform of the relative component normalized to 1 at the origin (zero spatial frequency), that is the individual MTFs of the perfect lens PSF, the perfect square pixel and the delta grid.
Sampling in the Spatial and Frequency Domains
Sampling is expressed mathematically as a Dirac delta function at the center of each pixel (the red dots below).
Now that we know from the introductory article that the spatial frequency response of a typical perfect digital camera and lens can be modeled simply as the product of the Modulation Transfer Function of the lens and pixel area, convolved with a Dirac delta grid at cycles-per-pixel spacing:

\[ MTF_{Sys2D} = \widehat{PSF_{lens}} \cdot \widehat{PIX} \,\ast\, \widehat{\delta\delta} \]

we can take a closer look at sampling and its consequences.
The next few posts will describe a linear spatial resolution model that can help a photographer better understand the main variables involved in evaluating the ‘sharpness’ of photographic equipment and related captures. I will show numerically that the combined spatial frequency response (MTF) of a perfect AA-less monochrome digital camera and lens in two dimensions can be described as the normalized multiplication of the Fourier Transform (FT) of the lens Point Spread Function by the FT of the (square) pixel footprint, convolved with the FT of a rectangular grid of Dirac delta functions centered at each pixel, as better described in the article.
With a few simplifying assumptions we will see that the effect of the lens and sensor on the spatial resolution of the continuous image on the sensing plane can be broken down into these simple components. The overall ‘sharpness’ of the captured digital image can then be estimated by combining the ‘sharpness’ of each of them. Continue reading A Simple Model for Sharpness in Digital Cameras – I→
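As a taste of things to come, here is a minimal one dimensional Octave/Matlab sketch of the first two factors of the model, ignoring the sampling grid; the f/16, 530nm and 5.3μm square pixel values are assumptions for illustration only:

    % 1D system MTF = diffraction MTF of a perfect lens x pixel aperture MTF.
    lambda = 0.530e-3;              % wavelength, mm
    Nf     = 16;                    % f-number
    pitch  = 5.3e-3;                % pixel pitch, mm
    fc     = 1 / (lambda * Nf);     % diffraction cutoff, cycles/mm

    f = linspace(0, fc, 512);       % spatial frequency axis, cycles/mm
    s = f / fc;
    mtf_lens = (2/pi) * (acos(s) - s .* sqrt(1 - s.^2)); % circular aperture

    x = pi * f * pitch;             % pixel aperture MTF: |sinc|
    mtf_pix = ones(size(x));
    mtf_pix(x ~= 0) = abs(sin(x(x ~= 0)) ./ x(x ~= 0));

    mtf_sys = mtf_lens .* mtf_pix;  % the model before sampling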
My camera has a 14-bit ADC. Can it accurately record information more than 14 stops below full scale? Can it store sub-LSB signals in the raw data?
With a well designed sensor the answer, unsurprisingly if you’ve followed the last few posts, is yes it can. The key to being able to capture such tiny visual information in the raw data is a well behaved imaging system with a properly dithered ADC. Continue reading Sub Bit Signal→
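A quick Monte Carlo illustration of the principle, with the 0.3 DN mean signal and 1.5 DN read noise picked arbitrarily:

    % A mean signal well below 1 LSB survives quantization because the
    % read noise dithers it across adjacent integer codes.
    n = 1e6;
    signal = 0.3;                                 % mean signal, DN (sub-LSB)
    noise  = 1.5;                                 % read noise sigma, DN
    raw = round(signal + noise * randn(n, 1));    % dithered integer 'raw' values
    mean(raw)                                     % recovers ~0.3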
Physicists and mathematicians over the last few centuries have spent a lot of their time studying light and electrons, the key ingredients of digital photography. In so doing they have left us with a wealth of theories to explain their behavior in nature and in our equipment. In this article I will describe how to simulate the information generated by a uniformly illuminated imaging system using open source Octave (or equivalently Matlab), utilizing some of these theories. Since, as you will see, the simulations are incredibly (to me) accurate, understanding how the simulator works goes a long way in explaining the inner workings of a digital sensor at its lowest levels; and simulated data can be used to further our understanding of photographic science without having to run down the shutter count of our favorite SLRs. This approach is usually referred to as Monte Carlo simulation.
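To give a flavor of it, the core of such a simulator fits in a few lines; the parameter values below are illustrative assumptions, and poissrnd comes from the Octave statistics package (or Matlab’s Statistics Toolbox):

    % Monte Carlo of a uniformly lit sensor patch: Poisson photon
    % arrivals, Gaussian read noise, then quantization to raw DN.
    npix   = 1e6;     % pixels in the simulated uniform patch
    mean_e = 1000;    % mean signal, photoelectrons (assumed)
    read_e = 3;       % read noise sigma, photoelectrons (assumed)
    gain   = 0.5;     % DN per photoelectron (assumed)

    e  = poissrnd(mean_e, npix, 1);     % shot noise
    e  = e + read_e * randn(npix, 1);   % add read noise
    dn = round(gain * e);               % quantize to raw DN

    snr = mean(dn) / std(dn)            % ~sqrt(1000), i.e. shot-noise limited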
We’ve seen how information about a photographic scene is collected in the ISOless/invariant range of a digital camera sensor, amplified, converted to digital data and stored in a raw file. For a given Exposure the best information quality (IQ) about the scene is available right at the photosites, only possibly degrading from there – but a properly designed** fully ISO invariant imaging system is able to store it in its entirety in the raw data. It is able to do so because the information carrying capacity (photographers would call it the dynamic range) of each subsequent stage is equal to or larger than the previous one. Cameras that are considered to be (almost) ISOless from base ISO include the Nikon D7000, D7200 and the Pentax K5. All digital cameras become ISO invariant above a certain ISO, the exact value determined by design compromises.
In this article we’ll look at a class of imagers that are not able to store the whole information available at the photosites in one go in the raw file for a substantial portion of their working ISOs. The photographer can in such a case choose out of the full information available at the photosites what smaller subset of it to store in the raw data by the selection of different in-camera ISOs. Such cameras are sometimes improperly referred to as ISOful. Most Canon DSLRs fall into this category today. As do kings of darkness such as the Sony a7S or Nikon D5.
In the last few posts I have made the case that Image Quality in a digital camera is entirely dependent on the light Information collected at a sensor’s photosites during Exposure. Any subsequent processing – whether analog amplification and conversion to digital in-camera and/or further processing in-computer – effectively applies a set of Information Transfer Functions to the signal that when multiplied together result in the data from which the final photograph is produced. Each step of the way can at best maintain the original Information Quality (IQ) but in most cases it will degrade it somewhat.
IQ: Only as Good as at Photosites’ Output
This point is key: in a well designed imaging system** the final image IQ is only as good as the scene information collected at the sensor’s photosites, independently of how this information is stored in the working data along the processing chain, on its way to being transformed into a pleasing photograph. As long as scene information is properly encoded by the system early on, before being written to the raw file – and information transfer is maintained in the data throughout the imaging and processing chain – final photograph IQ will be virtually the same independently of how its data’s histogram looks along the way.
Ever since Einstein we’ve been able to say that humans ‘see’ because information about the scene is carried to the eyes by photons reflected by it. So when we talk about Information in photography we are referring to information about the energy and distribution of photons arriving from the scene. The more complete this information, the better we ‘see’. No photons = no information = no see; few photons = little information = see poorly = poor IQ; more photons = more information = see better = better IQ.
Sensors in digital cameras work similarly, their output ideally being the energy and location of every photon incident on them during Exposure. That’s the full information ideally required to recreate an exact image of the original scene for the human visual system, no more and no less. In practice however we lose some of this information along the way during sensing, so we need to settle for approximate location and energy – in the form of photoelectron counts by pixels of finite area, often in combination with a color filter array.
When photographers talk about grayscale ‘tones’ they typically refer to the number of distinct gray levels present in a displayed image. They don’t want to see distinct levels in a natural, slowly changing gradient like a dark sky: if it’s smooth they want to perceive it as smooth when looking at their photograph. So they want to make sure that all possible tonal information from the scene has been captured and stored in the raw data by their imaging system.
My camera has an engineering Dynamic Range of 14 stops; how many bits do I need to encode that DR? Well, to encode the whole Dynamic Range 1 bit will suffice. The reason is simple: dynamic range is only concerned with the extremes, not with tones in between:
So in theory we only need 1 bit to encode it: zero for minimum signal and one for maximum signal, like so
Dynamic Range (DR) in Photography usually refers to the working tone range, from darkest to brightest, that the imaging system is capable of capturing and/or displaying. It is expressed as a ratio, in stops:

\[ DR = \log_2 \frac{Signal_{max}}{Signal_{min}} \]
It is a key Image Quality metric because photography is all about contrast, and dynamic range limits the range of recordable/displayable tones. Different components in the imaging system have different working dynamic ranges and the system DR is equal to the dynamic range of the weakest performer in the chain.
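For instance, a back-of-the-envelope calculation with made-up sensor values:

    % Engineering DR from clipping point and read noise, both in e-.
    fwc  = 60000;             % full well count (assumed)
    read = 4;                 % read noise (assumed)
    DR   = log2(fwc / read)   % ~13.9 stops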
There are several ways to extract Sensor IQ metrics like read noise, Full Well Count, PRNU, Dynamic Range and others from mean and standard deviation statistics obtained from a uniform patch in a camera’s raw file. In the last post we saw how to do it by using such parameters to make observed data match the measured SNR curve. In this one we will achieve the same objective by fitting mean and standard deviation data. Since the measured data is identical, if the fit is good so should be the results.
Sensor Metrics from Measured Mean and Standard Deviation in DN
We’ve seen how to model sensors and how to collect signal and noise statistics from the raw data of our digital cameras. In this post I am going to pull both things together allowing us to estimate sensor IQ metrics: input-referred read noise, clipping/saturation/Full Well Count, Dynamic Range, Pixel Response Non-Uniformities and gain/sensitivity.
There are several ways to extract these metrics from signal and noise data obtained from a camera’s raw file. I will show two related ones: via SNR in this post and via total noise N in the next. The procedure is similar and the results are identical.
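To make the idea concrete, here is a sketch of such a fit with made-up mean and standard deviation pairs, using core fminsearch so no extra toolboxes are needed; the total noise model combines read noise, shot noise and PRNU, everything in DN:

    % Fit N(S) = sqrt(read^2 + S/g + (prnu*S)^2) to measured
    % (mean, std) pairs from uniform raw patches.
    S = [10 100 1000 10000];        % patch means, DN (made up)
    N = [2.5 4.9 15.1 67.1];        % patch std devs, DN (made up)

    model = @(p, S) sqrt(p(1).^2 + S./p(2) + (p(3).*S).^2); % p = [read g prnu]
    err   = @(p) sum((model(p, S) - N).^2);
    p = fminsearch(err, [2 5 0.005]);   % initial guesses

    read_DN = p(1), gain_epDN = p(2), prnu = p(3)  % ~2 DN, ~5 e-/DN, ~0.5%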
Imperfections in an imaging system’s capture process manifest themselves in the form of deviations from the expected signal. We call these imperfections ‘noise’. The fewer the imperfections, the lower the noise, the higher the image quality. However, because the Human Visual System is adaptive within its working range, it’s not the absolute amount of noise that matters to perceived IQ as much as the amount of noise relative to the signal. That’s why to characterize the performance of a sensor in addition to noise we also need to determine its sensitivity and the maximum signal it can detect.
In this series of articles I will describe how to use the Photon Transfer method and a spreadsheet to determine basic IQ performance metrics of a digital camera sensor. It is pretty easy if we keep in mind the simple model of how light information is converted into raw data by digital cameras:
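In essence the model says that photoelectrons plus read noise, scaled by the gain, become Data Numbers in the raw file. A minimal sketch of a photon transfer gain measurement consistent with that model; the two uniform patches are simulated here rather than read from raw files, and the gain value is an assumption:

    % Photon transfer gain from two identical uniform exposures:
    % differencing the patches cancels fixed pattern noise, leaving shot noise.
    g = 2.2;                             % e- per DN (assumed)
    a = poissrnd(5000, 1e5, 1) / g;      % patch 1, DN
    b = poissrnd(5000, 1e5, 1) / g;      % patch 2, DN

    m = (mean(a) + mean(b)) / 2;         % mean signal, DN
    v = var(a - b) / 2;                  % shot variance, DN^2
    gain = m / v                         % recovers ~2.2 e-/DN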
Olympus just announced the E-M5 Mark II, an updated version of its popular micro Four Thirds E-M5 model, with an interesting new feature: its 16 megapixel sensor, presumably similar to the one in other E-Mx bodies, has a high resolution mode where it gets shifted around by the image stabilization servos during exposure to capture, as they say in their press release:
‘resolution that goes beyond full-frame DSLR cameras. 8 images are captured with 16-megapixel image information while moving the sensor by 0.5 pixel steps between each shot. The data from the 8 shots are then combined to produce a single, super-high resolution image, equivalent to the one captured with a 40-megapixel image sensor.’
A great idea that could give a welcome boost to the ‘sharpness’ of this handy system. This preliminary test shows that the E-M5 mk II 64MP High-Res mode gives in this case a 10-12% advantage in MTF50 linear spatial resolution compared to the Standard Shot 16MP mode. Plus it apparently virtually eliminates the possibility of aliasing and moiré. Great stuff, Olympus.
So, is it true that a Four Thirds lens needs to be about twice as ‘sharp’ as its Full Frame counterpart in order to be able to display an image of spatial resolution equivalent to the larger format’s?
It is, because of the simple geometry I will describe in this article. In fact with a few provisos one can generalize and say that lenses from any smaller format need to be ‘sharper’ by the ratio of their sensor linear sizes in order to produce the same linear resolution on same-sized final images.
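A quick numerical check, assuming nominal sensor heights of 13mm for Four Thirds and 24mm for Full Frame: to put the same number of line widths on the same picture height,

\[ \frac{lp/mm_{4/3}}{lp/mm_{FF}} = \frac{24\,mm}{13\,mm} \approx 1.85 \]

hence the ‘about twice as sharp’ rule of thumb.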
This is one of the reasons why Ansel Adams shot 4×5 and 8×10 – and I would too, were it not for logistical and pecuniary concerns.
Equivalence – as we’ve discussed, one of the fairest ways to compare the performance of two cameras of different physical formats, characteristics and specifications – essentially boils down to two simple realizations for digital photographers:
metrics need to be expressed in units of picture height (or diagonal where the aspect ratio is significantly different) in order to easily compare performance with images displayed at the same size; and
focal length changes proportionally to sensor size in order to capture identical scene content on a given sensor, all other things being equal.
The first realization should be intuitive (future post). The second one is the subject of this post: I will deal with it through a couple of geometrical diagrams.
Several sites perform spatial resolution ‘sharpness’ testing of imaging systems for photographers (i.e. ‘lens+digital camera’) and publish results online. You can also measure your own equipment relatively easily to determine how sharp your hardware is. However comparing results from site to site and to your own can be difficult and/or misleading, starting from the multiplicity of units used: cycles/pixel, line pairs/mm, line widths/picture height, line pairs/image height, cycles/picture height etc.
This post will address the units involved in spatial resolution measurement using as an example readings from the slanted edge method.
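To make the conversions concrete, a short sketch with assumed hardware values:

    % Re-express a slanted edge MTF50 reading given in cycles/pixel.
    mtf50_cp = 0.30;      % cycles/pixel (assumed reading)
    pitch_um = 4.8;       % pixel pitch, microns (assumed)
    ph_px    = 4000;      % picture height, pixels (assumed)

    lpmm = mtf50_cp / (pitch_um * 1e-3)   % line pairs (cycles) per mm
    lpph = mtf50_cp * ph_px               % line pairs per picture height
    lwph = 2 * lpph                       % line widths per picture height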
Determining the Signal to Noise Ratio (SNR) curves of your digital camera at various ISOs and extracting from them the underlying IQ metrics of its sensor can help answer a number of questions useful to photography. For instance whether/when to raise ISO; what its dynamic range is; how noisy its output could be in various conditions; or how well it is likely to perform compared to other Digital Still Cameras. As it turns out obtaining the relevant data is a little time consuming but not that hard. All you need is your camera, a suitable target, a neutral density filter, dcraw and free ImageJ, Octave or (pay) Matlab.
Effective Quantum Efficiency as I calculate it is an estimate of the probability that a visible photon – from a ‘Daylight’ blackbody radiating source at a temperature of 5300K, impinging on the sensor in question after making it through its IR filter, UV filter, AA low pass filter, microlenses and average Color Filter – will produce a photoelectron upon hitting silicon:

\[ eQE = \frac{\text{photoelectrons produced}}{\text{photons incident on the sensing plane}} \]
One of the fairest ways to compare the performance of two cameras of different physical characteristics and specifications is to ask a simple question: which photograph would look better if the cameras were set up side by side, captured identical scene content and their output were then displayed and viewed at the same size?
Achieving this set up and answering the question is anything but intuitive because many of the variables involved, like depth of field and sensor size, are not those we are used to dealing with when taking photographs. In this post I would like to attack this problem by first estimating the output signal of different cameras when set up to capture Equivalent images.
It’s a bit long so I will give you the punch line first: digital cameras of the same generation set up equivalently will typically generate more or less the same signal independently of format. Ignoring noise, lenses and aspect ratio for a moment and assuming the same camera gain and number of pixels, they will produce identical raw files. Continue reading Equivalence and Equivalent Image Quality: Signal→
Now that we know how to determine how many photons impinge on a sensor we can estimate its Effective Quantum Efficiency, that is the efficiency with which it turns such a photon flux into photoelectrons (\(e^-\)), which will then be converted to raw data to be stored in the capture’s raw file:
I call it ‘effective’ because it represents the probability that a photon arriving on the sensing plane from the scene will be converted to a photoelectron by a typical digital camera sensor. It therefore includes the effect of microlenses, fill factor, CFA and other filters on top of silicon in the pixel. It is usually expressed as a percentage. For instance if an average of 100 photons per pixel within the sensor’s passband were incident on a uniformly lit spot of the sensor and on average each pixel produced a signal of 20 photoelectrons we would say that the Effective Quantum Efficiency of the sensor is 20%. Clearly the higher the eQE the better for Image Quality parameters such as SNR and DR. Continue reading What is the Effective Quantum Efficiency of my Sensor?→
How many photons impinge on a pixel illuminated by a known light source during exposure? To answer this question in a photographic context we need to know the area of the pixel, the Spectral Power Distribution of the illuminant and the corresponding Exposure.
We know the pixel’s area and we know that the Spectral Power Distribution of a common class of light sources called blackbody radiators at temperature T is described by Spectral Radiant Exitance – so all we need to determine is what Exposure this irradiance corresponds to in order to obtain the answer.
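For reference, Planck’s law gives that Spectral Radiant Exitance as

\[ M(\lambda, T) = \frac{2\pi h c^2}{\lambda^5} \cdot \frac{1}{e^{hc/\lambda k T} - 1} \]

with h Planck’s constant, c the speed of light and k Boltzmann’s constant.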
How many visible photons hit a pixel on my sensor? The answer depends on Exposure, Spectral Power Distribution of the arriving light and pixel area. With a few simplifying assumptions it is not too difficult to calculate that with a typical Daylight illuminant the number is roughly 11,850 photons per lx-s per μm². Without the simplifying assumptions* it reduces to about 11,260. Continue reading How Many Photons on a Pixel→
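Taking the figure above at face value, the arithmetic for a given pixel is then trivial; the pitch and Exposure below are assumptions for illustration:

    % Photons collected by one pixel, using the ~11,850
    % photons / (lx-s um^2) Daylight figure quoted above.
    k        = 11850;    % photons per lx-s per um^2
    pitch_um = 5.9;      % pixel pitch, microns (assumed)
    H_lxs    = 0.2;      % Exposure, lx-s (assumed)
    photons  = k * pitch_um^2 * H_lxs   % ~82,500 photons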
I measured the Spectral Power Distribution of the three CFA filters of a Nikon D610 in ‘Daylight’ conditions with a cheap spectrometer. Taking a cue from this post I pointed it at light from the sun reflected off a gray card and took a raw capture of the spectrum it produced.
An ImageJ plot did the rest. I took a dozen pictures at slightly different angles and picked the clearest spectrum. Shown are the three spectral curves averaged over the two best opposing captures. The Photopic Eye Luminous Efficiency Function (2 degree, Sharpe et al 2005) is also shown for reference, scaled to the same maximum as the green curve. Continue reading Nikon CFA Spectral Power Distribution→
The key variable as far as the tolerances required to position the lens for accurate focus are concerned (at least in a simplified ideal situation) is the distance between the desired in-focus plane and the actual in-focus plane (which we are assuming is slightly out of focus). It is a distance in the direction perpendicular to the x-y plane normally used to describe the position of the image on it, hence the designation delta z, or dz in this post. The lens’ allowable focus tolerance is therefore +/- dz, which we will show in this post to vary as the square of the format’s diagonal. Continue reading Focus Tolerance and Format Size→
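As a preview of the geometry, under the classic depth-of-focus approximation the tolerance is roughly the product of the working f-number N and the circle of confusion c:

\[ dz \approx \pm\, N c \]

and since for Equivalent setups both N and c scale with the format diagonal d, dz scales as d².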
The in-camera ISO dial is a ballpark milkshake of an indicator to help choose parameters that will result in a ‘good’ perceived picture. Key ingredients to obtain a ‘good’ perceived picture are 1) ‘good’ Exposure and 2) ‘good’ in-camera or in-computer processing. It’s easier to think about them as independent processes and that comes naturally to you because you shoot raw in manual mode and you like to PP, right? Continue reading Exposure and ISO→