Tag Archives: ILC

Pi HQ Cam Sensor Performance

Now that we know how to open 12-bit raw files captured with the new Raspberry Pi High Quality Camera, we can learn a bit more about the capabilities of its 1/2.3″ Sony IMX477 sensor from a keen photographer’s perspective.  The subject is a bit dry, so I will give you the summary upfront.  These figures were obtained with my HQ module at room temperature and the raspistill --raw (-r) command:

Raspberry Pi HQ Camera | raspistill --raw -ag 1              | Comments
-----------------------|-------------------------------------|----------------------------------------------------------
Black Level            | 256.3 DN                            | 256.0 - 257.3 based on gain
White Level            | 4095                                | Constant throughout
Analog Gain            | 1                                   | Gain range 1 - 16
Read Noise             | 3 e- at gain 1, 1.5 e- at gain 16   | 1.53 DN and 11.50 DN from black frames
Clipping (FWC)         | 8180 e-                             | At base gain, 3400 e-/um^2
Dynamic Range          | 11.15 stops, 11.3 stops             | SNR = 1 to Clipping, Read Noise to Clipping
System Gain            | 0.47 DN/e-                          | At base analog gain
Star Eater Algorithm   | Partly Defeatable                   | All channels - from base gain and from min shutter speed
Low Pass Filter        | Yes                                 | All channels - from base gain and from min shutter speed
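
As a sanity check, these figures hang together; here is a minimal Octave/Matlab sketch of the arithmetic using the rounded values from the table (small differences from the table are down to rounding):

    % Rough consistency check of the HQ Cam figures above (rounded inputs)
    gain  = 0.47;                        % system gain at base analog gain, DN/e-
    rn_DN = 1.53;                        % read noise measured off a black frame, DN
    fwc   = 8180;                        % clipping level, e-

    rn_e  = rn_DN / gain;                % read noise in e-, about 3.3
    dr_rn = log2(fwc / rn_e);            % Read Noise to Clipping, about 11.3 stops

    % signal at which SNR = 1 with shot + read noise: S / sqrt(S + rn_e^2) = 1
    s1    = 0.5 + sqrt(0.25 + rn_e^2);   % about 3.8 e-
    dr_s1 = log2(fwc / s1);              % SNR = 1 to Clipping, about 11.1 stops

    fwc_density = fwc / 1.55^2;          % about 3400 e-/um^2 with the 1.55 um pixel pitch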

Continue reading Pi HQ Cam Sensor Performance

Opening Raspberry Pi High Quality Camera Raw Files

The Raspberry Pi Foundation recently released an interchangeable lens camera module based on the Sony  IMX477, a 1/2.3″ back side illuminated sensor with 3040×4056 pixels of 1.55um pitch.  In this somewhat technical article we will unpack the 12-bit raw still data that it produces and render it in a convenient color space.

Figure 1. 12-bit raw capture by Raspberry Pi High Quality Camera with 16 mm kit lens at f/8, 1/2 s, base ISO. The image was loaded into Matlab and rendered Half Height Nearest Neighbor in the Adobe RGB color space with a touch of local contrast and sharpening.  Click on it to see it in its own tab and view it at 100% magnification. If your browser is not color managed you may not see colors properly.
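
If you would like to experiment along, the Bayer data that raspistill appends to the JPEG is stored 12 bits packed, two pixels in three bytes.  Below is a minimal Octave/Matlab sketch of just the unpacking step, assuming the packed bytes of interest are already in a uint8 vector b; header offsets, row padding and the nibble order shown are assumptions to be verified against your own files (the article covers the details):

    % Hypothetical sketch: unpack 12-bit packed Bayer data (2 pixels in 3 bytes)
    % b is assumed to be a uint8 vector of packed bytes with length divisible by 3.
    % The nibble assignment below is an assumption - swap the masks if the result looks scrambled.
    b  = uint16(b(:)');                                    % work in 16 bits
    b0 = b(1:3:end);  b1 = b(2:3:end);  b2 = b(3:3:end);
    p0 = bitshift(b0, 4) + bitand(b2, 15);                 % first pixel: 8 MSBs plus low nibble
    p1 = bitshift(b1, 4) + bitshift(bitand(b2, 240), -4);  % second pixel: 8 MSBs plus high nibble
    raw = zeros(1, 2*numel(p0), 'uint16');
    raw(1:2:end) = p0;  raw(2:2:end) = p1;                 % interleave back into pixel order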

Continue reading Opening Raspberry Pi High Quality Camera Raw Files

Linear Color: Applying the Forward Matrix

Now that we know how to create a 3×3 linear matrix to convert white balanced and demosaiced raw data into XYZ_{D50}  connection space – and where to obtain the 3×3 linear matrix to then convert it to a standard output color space like sRGB – we can take a closer look at the matrices and apply them to a real world capture chosen for its wide range of chromaticities.

Figure 1. Image with color converted using the forward linear matrix discussed in the article.
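
For reference, in Octave/Matlab the whole chain amounts to two matrix multiplications on the white balanced and demosaiced data.  The sketch below uses a made-up Forward Matrix as a placeholder for the one derived in the previous article, and the commonly published Bradford-adapted XYZ_D50 to sRGB matrix (values rounded):

    % Minimal sketch: forward matrix to XYZ_D50, then XYZ_D50 to sRGB for display
    % rgb is white balanced, demosaiced raw data as a 3xN array (rows r, g, b)
    Mfwd = [0.7 0.2 0.1; 0.3 0.6 0.1; 0.0 0.1 0.9];   % made-up placeholder Forward Matrix
    XYZ  = Mfwd * rgb;                                 % raw rgb -> XYZ_D50 connection space

    Msrgb = [ 3.1339 -1.6169 -0.4906;                  % Bradford-adapted XYZ_D50 -> linear sRGB
             -0.9787  1.9161  0.0335;                  % (commonly published values, rounded)
              0.0719 -0.2290  1.4052];
    RGBlin = min(max(Msrgb * XYZ, 0), 1);              % clip to [0, 1]
    RGB    = RGBlin .^ (1/2.2);                        % approximate sRGB gamma for display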

Continue reading Linear Color: Applying the Forward Matrix

Color: Determining a Forward Matrix for Your Camera

We understand from the previous article that rendering color with Adobe DNG raw conversion essentially means mapping raw data in the form of rgb triplets into a standard color space via a Profile Connection Space in a two step process

    \[ Raw Data \rightarrow  XYZ_{D50} \rightarrow RGB_{standard} \]

The first step white balances and demosaics the raw data, which at that stage we will refer to as rgb, followed by converting it to XYZ_{D50} Profile Connection Space through linear projection by an unknown ‘Forward Matrix’ (as DNG calls it) of the form

(1)   \begin{equation*} \left[ \begin{array}{c} X_{D50} \\ Y_{D50} \\ Z_{D50} \end{array} \right] = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \left[ \begin{array}{c} r \\ g \\ b \end{array} \right] \end{equation*}

with data as column-vectors in a 3xN array.  Determining the nine a coefficients of this matrix M is the main subject of this article[1]. Continue reading Color: Determining a Forward Matrix for Your Camera
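
To anticipate the idea with a sketch (one common approach, not necessarily the exact procedure detailed in the article): capture a target with published XYZ_D50 reference values, white balance and demosaic the raw data, average each patch into an rgb triplet, then solve for the matrix in the least squares sense:

    % Sketch: least squares estimate of the Forward Matrix from N target patches
    % rgb    : 3xN white balanced, demosaiced raw patch averages, one column per patch
    % XYZref : 3xN published XYZ_D50 reference values for the same patches
    M = XYZref * pinv(rgb);         % minimizes ||XYZref - M*rgb||^2 over the nine coefficients

    residuals = XYZref - M * rgb;   % worth inspecting patch by patch, ideally as color differences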

Color: From Object to Eye

How do we translate captured image information into a stimulus that will produce the appropriate perception of color?  It’s actually not that complicated[1].

Recall from the introductory article that a photon absorbed by a cone type (\rho, \gamma or \beta) in the fovea produces the same stimulus to the brain regardless of its wavelength[2].  Take the example of the eye of an observer focusing on its retina the image of a uniform object with a spectral photon distribution of 1000 photons/nm in the 400 to 720nm wavelength range and no photons outside of it.

Because the system is linear, cones in the foveola will weight the incoming photons by their relative sensitivity (probability) functions and sum the results to produce a stimulus proportional to the area under the curves.  For instance a \gamma cone may see about 321,000 photons arrive and produce a relative stimulus of about 94,700, the weighted area under the curve:

Figure 1. Light made up of 321k photons of broad spectrum and constant Spectral Photon Distribution between 400 and 720nm  is weighted by cone sensitivity to produce a relative stimulus equivalent to 94,700 photons, proportional to the area under the curve
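
The weighting itself is a simple sum over wavelength.  A toy Octave/Matlab sketch of the calculation, with a made-up Gaussian standing in for the published \gamma cone sensitivity (so the stimulus number will differ from the one in the figure):

    % Toy sketch: cone stimulus as the sensitivity-weighted photon count
    lambda = 400:720;                          % wavelength, nm
    spd    = 1000 * ones(size(lambda));        % photons/nm, flat spectrum as in the example
    sens   = exp(-((lambda - 540) / 65).^2);   % placeholder gamma cone sensitivity, peak near 540 nm

    photons  = trapz(lambda, spd);             % roughly 320,000 photons arriving
    stimulus = trapz(lambda, spd .* sens);     % relative stimulus: area under the weighted curve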

Continue reading Color: From Object to Eye

An Introduction to Color in Digital Cameras

This article will set the stage for a discussion on how pleasing color is produced during raw conversion.  The easiest way to understand how a camera captures and processes ‘color’ is to start with an example of how the human visual system does it.

An Example: Green

Light from the sun strikes leaves on a tree.   The foliage of the tree absorbs some of the light and reflects the rest diffusely  towards the eye of a human observer.  The eye focuses the image of the foliage onto the retina at its back.  Near the center of the retina there is a small circular area called fovea centralis which is dense with light receptors of well defined spectral sensitivities called cones. Information from the cones is pre-processed by neurons and carried by nerve fibers via the optic nerve to the brain where, after some additional psychovisual processing, we recognize the color of the foliage as green[1].

Figure 1. The human eye absorbs light from an illuminant reflected diffusely by the object it is looking at.

Continue reading An Introduction to Color in Digital Cameras

How Is a Raw Image Rendered?

What are the basic low level steps involved in raw file conversion?  In this article I will discuss what happens under the hood of digital camera raw converters in order to turn raw file data into a viewable image, a process sometimes referred to as ‘rendering’.  We will use the following raw capture by a Nikon D610 to show how image information is transformed at every step along the way:

Figure 1. Nikon D610 with AF-S 24-120mm f/4 lens at 24mm f/8 ISO100, minimally rendered from raw by Octave/Matlab following the steps outlined in the article.
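
To preview the steps, here is a bare-bones Octave/Matlab sketch of such a minimal rendering; the black level, white balance multipliers, CFA layout and color matrix are all placeholders standing in for the values appropriate to the specific capture:

    % Bare-bones raw rendering sketch (placeholder values, half-size nearest neighbor demosaic)
    raw = double(raw) - 600;                    % 1. subtract black level (placeholder 600 DN)
    raw = max(raw, 0) / (16383 - 600);          % 2. normalize to the clipping point (14-bit example)

    r = raw(1:2:end, 1:2:end);                  % 3. split the Bayer planes, assuming an RGGB layout
    g = (raw(1:2:end, 2:2:end) + raw(2:2:end, 1:2:end)) / 2;
    b = raw(2:2:end, 2:2:end);

    rgb = cat(3, 2.1 * r, g, 1.3 * b);          % 4. white balance (placeholder multipliers)

    M   = [0.8 0.1 0.1; 0.2 0.7 0.1; 0.0 0.1 0.9];             % 5. placeholder color matrix to output space
    pix = reshape(rgb, [], 3) * M';
    out = reshape(min(max(pix, 0), 1), size(rgb)) .^ (1/2.2);   % 6. clip and apply display gamma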

Rendering = Raw Conversion + Editing

Continue reading How Is a Raw Image Rendered?

Taking the Sharpness Model for a Spin – II

This post will continue looking at the spatial frequency response measured by MTF Mapper off slanted edges in DPReview.com raw captures, and at the corresponding fits produced by the ‘sharpness’ model discussed in the last few articles.  The model takes the physical parameters of the digital camera and lens as inputs and produces theoretical directional system MTF curves comparable to measured data.  As we will see the model seems to be able to simulate these systems well – at least within this limited set of parameters.

The following fits refer to the green channel of a number of interchangeable lens digital camera systems with different lenses, pixel sizes and formats – from the current Medium Format 100MP champ to the 1/2.3″ 18MP sensor size also sometimes found in the best smartphones.  Here is the roster with the cameras as set up:

Table 1. The cameras and lenses under test.

Continue reading Taking the Sharpness Model for a Spin – II

Taking the Sharpness Model for a Spin

The series of articles starting here outlines a model of how the various physical components of a digital camera and lens can affect the ‘sharpness’ – that is the spatial resolution – of the  images captured in the raw data.  In this one we will pit the model against MTF curves obtained through the slanted edge method[1] from real world raw captures both with and without an anti-aliasing filter.

With a few simplifying assumptions, which include ignoring aliasing and phase, the spatial frequency response (SFR or MTF) of a photographic digital imaging system near the center can be expressed as the product of the Modulation Transfer Function of each component in it.  For a current digital camera these would typically be the main ones:

(1)   \begin{equation*} MTF_{sys} = MTF_{lens} (\cdot MTF_{AA}) \cdot MTF_{pixel} \end{equation*}

all in two dimensions. Continue reading Taking the Sharpness Model for a Spin

A Simple Model for Sharpness in Digital Cameras – Polychromatic Light

We now know how to calculate the two dimensional Modulation Transfer Function of a perfect lens affected by diffraction, defocus and third order Spherical Aberration – under monochromatic light at the given wavelength and f-number.  In digital photography however we almost never deal with light of a single wavelength.  So what effect does an illuminant with a wide spectral power distribution, going through the color filters of a typical digital camera CFA before reaching the sensor, have on the spatial frequency responses discussed thus far?
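
One way to estimate the effect is to compute the monochromatic MTF at a set of wavelengths and average the resulting curves, weighted by the product of the illuminant’s spectral power distribution and the CFA filter’s transmission.  A rough Octave/Matlab sketch with placeholder weights and the closed-form diffraction MTF of a circular aperture:

    % Rough sketch: polychromatic diffraction MTF as a weighted average over wavelength
    N      = 5.6;                                 % f-number
    lambda = (400:10:720)' * 1e-6;                % wavelength, mm
    w      = exp(-((lambda*1e6 - 530) / 60).^2);  % placeholder weights: illuminant SPD x green CFA
    w      = w / sum(w);

    f = 0:250;                                    % spatial frequency, cycles/mm
    MTFpoly = zeros(size(f));
    for k = 1:numel(lambda)
        s    = min(f * lambda(k) * N, 1);                 % normalized to the diffraction cutoff 1/(lambda*N)
        mono = (2/pi) * (acos(s) - s .* sqrt(1 - s.^2));  % monochromatic diffraction MTF, circular aperture
        MTFpoly = MTFpoly + w(k) * mono;                  % accumulate the weighted average
    end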

Monochrome vs Polychromatic Light

Not much, it turns out. Continue reading A Simple Model for Sharpness in Digital Cameras – Polychromatic Light

A Simple Model for Sharpness in Digital Cameras – Spherical Aberrations

Spherical Aberration (SA) is one key component missing from our MTF toolkit for modeling an ideal imaging system’s ‘sharpness’ in the center of the field of view in the frequency domain.  In this article formulas will be presented to compute the two dimensional Point Spread and Modulation Transfer Functions of the combination of diffraction, defocus and third order Spherical Aberration for an otherwise perfect lens with a circular aperture.

Spherical Aberrations result because most photographic lenses are designed with quasi spherical surfaces that do not necessarily behave ideally in all situations.  For instance, they may focus light on systematically different planes depending on whether the respective ray goes through the exit pupil closer or farther from the optical axis, as shown below:

Figure 1. Top: an ideal spherical lens focuses all rays on the same focal point. Bottom: a practical lens with Spherical Aberration focuses rays on different points along the optical axis depending on the radial distance at which they cross the exit pupil. Image courtesy Andrei Stroe.

Continue reading A Simple Model for Sharpness in Digital Cameras – Spherical Aberrations

A Simple Model for Sharpness in Digital Cameras – Diffraction and Pixel Aperture

Now that we know from the introductory article that the spatial frequency response of a typical perfect digital camera and lens (its Modulation Transfer Function) can be modeled simply as the product of the Fourier Transform of the Point Spread Function of the lens and pixel aperture, convolved with a Dirac delta grid at cycles-per-pixel pitch spacing

(1)   \begin{equation*} MTF_{Sys2D} = \left|\widehat{PSF_{lens}} \cdot \widehat{PIX_{ap}}\right|_{pu} \ast\ast\; \widehat{\delta\delta_{pitch}} \end{equation*}

we can take a closer look at each of those components (pu here indicating normalization to one at the origin).   I used Matlab to generate the examples below but you can easily do the same with a spreadsheet.   Continue reading A Simple Model for Sharpness in Digital Cameras – Diffraction and Pixel Aperture
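
As a concrete example of the first two factors, here is a short Octave/Matlab sketch of the MTF, in one direction, of diffraction from a circular aperture multiplied by the MTF of a square pixel aperture with 100% fill factor; the wavelength, f-number and pitch are just example values:

    % Example: system MTF (one direction) = diffraction (circular aperture) x pixel aperture (square pixel)
    lambda = 0.53e-3;                        % wavelength, mm (green light)
    N      = 5.6;                            % f-number
    pitch  = 4.88e-3;                        % pixel pitch, mm, 100% fill factor assumed

    f = linspace(0, 1/pitch, 500);           % cycles/mm, out to twice the Nyquist frequency
    s = min(f * lambda * N, 1);              % frequency normalized to the diffraction cutoff
    MTFdiff = (2/pi) * (acos(s) - s .* sqrt(1 - s.^2));

    x = pi * f * pitch;                      % pixel aperture MTF = |sinc(f * pitch)|
    MTFpix = ones(size(x));
    MTFpix(x ~= 0) = abs(sin(x(x ~= 0)) ./ x(x ~= 0));

    MTFsys = MTFdiff .* MTFpix;
    plot(f, MTFsys), xlabel('cycles/mm'), ylabel('MTF')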

A Longitudinal CA Metric for Photographers

While perusing Jim Kasson’s excellent Longitudinal Chromatic Aberration tests[1] I was impressed by the quantity and quality of the information the resulting data provides.  Longitudinal, or Axial, CA is a form of defocus and as such it cannot be effectively corrected during raw conversion, so having a lens well compensated for it will provide a real and tangible improvement in the sharpness of final images.  How much of an improvement?

In this article I suggest one such metric for the Longitudinal Chromatic Aberrations (LoCA) of a photographic imaging system: Continue reading A Longitudinal CA Metric for Photographers

Combining Bayer CFA MTF Curves – II

In this and the previous article I present my thoughts on how MTF50 results obtained from raw data of the four Bayer CFA channels – off a uniformly illuminated neutral target captured with a typical digital camera through the slanted edge method – can be combined to provide a meaningful composite MTF50 for the imaging system as a whole[1].  Corrections, suggestions and challenges are welcome. Continue reading Combining Bayer CFA MTF Curves – II

Linearity in the Frequency Domain

For the purposes of ‘sharpness’ (spatial resolution) measurement in photography, cameras can be considered shift-invariant, linear systems.

Shift invariant means that the imaging system should respond exactly the same way no matter where light from the scene falls on the sensing medium.  We know that in a strict sense this is not true because, for instance, a pixel has a square area so it cannot have an isotropic response by definition.  However, when using the slanted edge method of linear spatial resolution measurement we can effectively make it shift invariant by careful preparation of the testing setup.  For example the edges should be slanted no more than this and no less than that. Continue reading Linearity in the Frequency Domain

Sub LSB Quantization

This article is a little esoteric so one may want to skip it unless one is interested in the underlying mechanisms that cause quantization error as photographic signal and noise approach the darkest levels of acceptable dynamic range in our digital cameras: one least significant bit in the raw data.  We will use our simplified camera model and deal with Poissonian Signal and Gaussian Read Noise separately – then attempt to bring them together.
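
A small Monte Carlo sketch of the setup (example values; poissrnd needs Octave’s statistics package or Matlab’s Statistics Toolbox) that quantizes a sub-LSB mean signal with Gaussian read noise to whole DN, so the quantized statistics can be compared with the unquantized ones:

    % Monte Carlo sketch: sub-LSB signal plus Gaussian read noise quantized to integer DN
    n      = 1e6;                        % simulated pixels
    mean_e = 0.7;                        % mean signal, e- (below one LSB at a gain of 1 DN/e-)
    rn     = 0.5;                        % Gaussian read noise, e-

    sig    = poissrnd(mean_e, n, 1);     % Poissonian photoelectron counts
    analog = sig + rn * randn(n, 1);     % add read noise
    dn     = round(analog);              % quantize to whole DN, assuming a gain of 1 DN/e-

    [mean(analog) mean(dn); std(analog) std(dn)]   % compare statistics before and after quantization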

Continue reading Sub LSB Quantization

Photographic Sensor Simulation

Physicists and mathematicians over the last few centuries have spent a lot of their time studying light and electrons, the key ingredients of digital photography.  In so doing they have left us with a wealth of theories to explain their behavior in nature and in our equipment.  In this article I will describe how to simulate the information generated by a uniformly illuminated imaging system using open source Octave (or equivalently Matlab) utilizing some of these theories.

Since, as you will see, the simulations are incredibly (to me) accurate, understanding how the simulator works goes a long way in explaining the inner workings of a digital sensor at its lowest levels; and simulated data can be used to further our understanding of photographic science without having to run down the shutter count of our favorite SLRs.  This approach is usually referred to as Monte Carlo simulation.
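
To give a flavor of what is coming, here is a condensed sketch of such a simulation for one uniformly illuminated raw channel; the parameter values are arbitrary examples and poissrnd again needs the statistics package/toolbox:

    % Condensed Monte Carlo sketch of a uniformly illuminated sensor channel (example values)
    npix  = 1e6;       % simulated pixels
    nph   = 20000;     % mean photons per pixel during Exposure
    eqe   = 0.5;       % effective quantum efficiency
    rn    = 3;         % read noise, e-
    gain  = 0.5;       % system gain, DN/e-
    black = 256;       % black level, DN
    white = 4095;      % clipping level, DN

    e  = poissrnd(nph * eqe, npix, 1);           % photoelectrons, shot noise included
    e  = e + rn * randn(npix, 1);                % add Gaussian read noise
    dn = min(round(e * gain + black), white);    % amplify, add offset, quantize, clip

    signal = mean(dn) - black;                   % measured mean signal, DN
    snr    = signal / std(dn);                   % measured SNR off the simulated raw data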

Continue reading Photographic Sensor Simulation

Information Transfer: Non ISO-Invariant Case

We’ve seen how information about a photographic scene is collected in the ISOless/invariant range of a digital camera sensor, amplified, converted to digital data and stored in a raw file.  For a given Exposure the best information quality (IQ) about the scene is available right at the photosites, only possibly degrading from there – but a properly designed** fully ISO invariant imaging system is able to store it in its entirety in the raw data.  It is able to do so because the information carrying capacity (photographers would call it the dynamic range) of each subsequent stage is equal to or larger than the previous one.   Cameras that are considered to be (almost) ISOless from base ISO include the Nikon D7000, D7200 and the Pentax K5.  All digital cameras become ISO invariant above a certain ISO, the exact value determined by design compromises.

Figure 1: Simplified Scene Information Transfer in an ISO Invariant Imaging System at base ISO

In this article we’ll look at a class of imagers that are not able to store in the raw file, in one go, the whole information available at the photosites for a substantial portion of their working ISOs.  In such a case the photographer can choose, by selecting different in-camera ISOs, which smaller subset of the full information available at the photosites to store in the raw data.  Such cameras are sometimes improperly referred to as ISOful.  Most Canon DSLRs fall into this category today, as do kings of darkness such as the Sony a7S or Nikon D5.

Continue reading Information Transfer: Non ISO-Invariant Case

Information Transfer – The ISO Invariant Case

We know that the best Information Quality a digital camera can collect from the scene is available right at the output of the sensor, and that it can only be degraded from there.  This article will discuss what happens to this information as it is transferred through the imaging system and stored in the raw data.  It will use the simple language outlined in the last post to explain how and why the strategy for Capturing the best Information or Image Quality (IQ) possible from the scene in the raw data involves only two simple steps:

1) Maximizing the collected Signal given artistic and technical constraints; and
2) Choosing what part of the Signal to store in the raw data and what part to leave behind.

The second step is only necessary if your camera is incapable of storing the entire Signal at once (that is, if it is not ISO invariant) and will be discussed in a future article.  In this post we will assume an ISOless imaging system.

Continue reading Information Transfer – The ISO Invariant Case

Information Theory for Photographers

Ever since Einstein we’ve been able to say that humans ‘see’ because information about the scene is carried to the eyes by photons reflected by it.  So when we talk about Information in photography we are referring to information about the energy and distribution of photons arriving from the scene.   The more complete this information, the better we ‘see’.  No photons = no information = no see; few photons = little information = see poorly = poor IQ; more photons = more information = see better = better IQ.

Sensors in digital cameras work similarly, their output ideally being the energy and location of every photon incident on them during Exposure. That’s the full information ideally required to recreate an exact image of the original scene for the human visual system, no more and no less. In practice however we lose some of this information along the way during sensing, so we need to settle for approximate location and energy – in the form of photoelectron counts by pixels of finite area, often correlated to a color filter array.

Continue reading Information Theory for Photographers

Comparing Sensor SNR

We’ve seen how SNR curves can help us analyze digital camera IQ:

Figure: SNR photon transfer model curves for a Nikon D610.

In this post we will use them to help us compare digital cameras, independently of format size. Continue reading Comparing Sensor SNR

SNR Curves and IQ in Digital Cameras

In photography the higher the ratio of Signal to Noise, the less grainy the final image normally looks.  The Signal-to-Noise-ratio SNR is therefore a key component of Image Quality.  Let’s take a closer look at it. Continue reading SNR Curves and IQ in Digital Cameras
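
As a preview of how such curves are generated, here is a minimal sketch of the usual model, with shot noise, read noise and PRNU added in quadrature (the parameter values are arbitrary examples):

    % Minimal SNR curve sketch: shot noise + read noise + PRNU in quadrature (example values)
    S    = logspace(0, log10(80000), 200);     % mean signal, e-
    rn   = 5;                                  % read noise, e-
    prnu = 0.005;                              % pixel response non-uniformity, fraction of signal

    noise = sqrt(S + rn^2 + (prnu * S).^2);    % total noise, e-
    SNR   = S ./ noise;

    loglog(S, SNR), xlabel('Mean Signal (e-)'), ylabel('SNR')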

The Difference between Peak and Effective Quantum Efficiency

Effective Quantum Efficiency as I calculate it is an estimate of the probability that a visible photon – from a ‘Daylight’ blackbody source radiating at a temperature of 5300K, impinging on the sensor in question after making it through its IR filter, UV filter, AA low pass filter, microlenses and average Color Filter – will produce a photoelectron upon hitting silicon:

(1)   \begin{equation*} EQE = \frac{n_{e^-} \text{ produced by average pixel}}{n_{ph} \text{ incident on average pixel}} \end{equation*}

with n_{e^-} the signal in photoelectrons and n_{ph} the number of photons incident on the sensor at the given Exposure as shown below. Continue reading The Difference between Peak and Effective Quantum Efficiency
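
A quick worked example of the definition with made-up numbers: if at the given Exposure the average pixel sees 20,000 incident photons and the raw data shows that 8,000 photoelectrons were collected, then

    % Worked example of Equation (1) with made-up numbers
    n_ph = 20000;            % photons incident on the average pixel at the given Exposure
    n_e  = 8000;             % photoelectrons produced, estimated from the raw signal and system gain
    EQE  = n_e / n_ph        % = 0.40, i.e. 40% effective quantum efficiency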

Equivalence and Equivalent Image Quality: Signal

One of the fairest ways to compare the performance of two cameras of different physical characteristics and specifications is to ask a simple question: which photograph would look better if the cameras were set up side by side, captured identical scene content and their output were then displayed and viewed at the same size?

Achieving this setup and answering the question is anything but intuitive because many of the variables involved, like depth of field and sensor size, are not those we are used to dealing with when taking photographs.  In this post I would like to attack this problem by first estimating the output signal of different cameras when set up to capture Equivalent images.

It’s a bit long so I will give you the punch line first:  digital cameras of the same generation set up equivalently will typically generate more or less the same signal in e^- independently of format.  Ignoring noise, lenses and aspect ratio for a moment and assuming the same camera gain and number of pixels, they will produce identical raw files. Continue reading Equivalence and Equivalent Image Quality: Signal
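
A back-of-the-envelope check of the punch line with made-up but representative numbers, ignoring lens transmission, vignetting and aspect ratio as above: scale the f-number by the crop factor, keep the exposure time the same, and the total light collected by the two formats comes out identical:

    % Back-of-the-envelope sketch: light collected by two Equivalent setups (made-up numbers)
    crop    = 2;                       % crop factor of the smaller format relative to Full Frame
    N_FF    = 8;                       % f-number on Full Frame
    N_small = N_FF / crop;             % Equivalent f-number on the smaller format (same DoF, same AoV)
    t       = 1/100;                   % same exposure time, s
    k       = 1e9;                     % arbitrary scene/lens constant, photons per mm^2 per s at f/1

    area_FF    = 36 * 24;              % sensor area, mm^2
    area_small = area_FF / crop^2;

    photons_FF    = k * t / N_FF^2    * area_FF
    photons_small = k * t / N_small^2 * area_small    % equal to photons_FF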

Why Raw Sharpness IQ Measurements Are Better

Why Raw?  The question is whether one is interested in measuring the objective, quantitative spatial resolution capabilities of the hardware or whether instead one would prefer to measure the arbitrary, qualitatively perceived sharpening prowess of (in-camera or in-computer) processing software as it turns the capture into a pleasing final image.  Either is of course fine.

My take on this is that the better the IQ captured the better the final image will be after post processing.  In other words I am typically more interested in measuring the spatial resolution information produced by the hardware, comfortable in the knowledge that if I’ve got good quality data to start with, its appearance will only be improved in post by the judicious use of software.  By IQ here I mean objective, reproducible, measurable physical quantities representing the quality of the information captured by the hardware, ideally in scientific units.

Can we do that off a file rendered by a raw converter or, heaven forbid, a Jpeg?  Not quite, especially if the objective is measuring IQ. Continue reading Why Raw Sharpness IQ Measurements Are Better

Nikon CFA Spectral Power Distribution

I measured the Spectral Photon Distribution of the three CFA filters of a Nikon D610 in ‘Daylight’ conditions with a cheap spectrometer.  Taking a cue from this post I pointed it at light from the sun reflected off a gray card  and took a raw capture of the spectrum it produced.

Figure: CFA spectrum as captured with the spectrometer.

An ImageJ plot did the rest.  I took a dozen captures at slightly different angles to catch the picture of the clearest spectrum.  Shown are the three spectral curves averaged over the two best opposing captures, each proportional to the number of photons let through by the respective Color Filter.  The units on the vertical axis are black-subtracted raw values (DN), therefore proportional to the number of incident photons in each case.  The Photopic Eye Luminous Efficiency Function (2 degree, Sharpe et al 2005) is also shown for reference, scaled to the same maximum as the green curve (although in energy units, my bad). Continue reading Nikon CFA Spectral Power Distribution

MTF50 and Perceived Sharpness

Is MTF50 a good proxy for perceived sharpness?   In this article and those that follow MTF50 indicates the spatial frequency at which the Modulation Transfer Function of an imaging system is half (50%) of what it would be if the system did not degrade detail in the image painted by incoming light.

It makes intuitive sense that the spatial frequencies that are most closely related to our perception of sharpness vary with the size and viewing distance of the displayed image.

For instance if an image captured by a Full Frame camera is viewed at ‘standard’ distance (that is a distance equal to its diagonal), it turns out that the portion of the MTF curve most representative of perceived sharpness appears to be around MTF90.  On the other hand, when pixel peeping, the spatial frequencies around MTF50 look to be a decent, simple to calculate indicator of it with a current imaging system in good working conditions. Continue reading MTF50 and Perceived Sharpness