Image Quality: Raising ISO vs Pushing in Conversion

In the last few posts I have made the case that Image Quality in a digital camera is entirely dependent on the light Information collected at a sensor’s photosites during Exposure.  Any subsequent processing – whether analog amplification and conversion to digital in-camera and/or further processing in-computer – effectively applies a set of Information Transfer Functions to the signal that, when multiplied together, result in the data from which the final photograph is produced.  Each step of the way can at best maintain the original Information Quality (IQ), but in most cases it will degrade it somewhat.

IQ: Only as Good as at Photosites’ Output

This point is key: in a well designed imaging system** the final image IQ is only as good as the scene information collected at the sensor’s photosites, independently of how this information is stored in the working data along the processing chain, on its way to being transformed into a pleasing photograph.  As long as scene information is properly encoded by the system early on, before being written to the raw file – and information transfer is maintained in the data throughout the imaging and processing chain – final photograph IQ will be virtually the same independently of how its data’s histogram looks along the way.

By IQ here I mean the quantitative version, measurable in physical units of photoelectrons, producing figures of merit such as Signal to Noise ratio (SNR), Dynamic Range (DR) or Weber-Fechner Fraction.
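To make those figures of merit concrete, here is a minimal sketch of how they might be computed from photoelectron counts under a simple shot-noise-plus-read-noise model; the full well, read noise and signal figures are illustrative assumptions, not measurements of any particular camera.

    import math

    # Illustrative sensor figures (assumptions, not measured values)
    full_well  = 75000.0   # photoelectron (e-) capacity at saturation
    read_noise = 5.0       # input-referred random read noise, e- RMS
    signal     = 1000.0    # mean e- collected at a photosite

    # Shot noise is Poissonian, so its standard deviation is sqrt(signal)
    total_noise = math.sqrt(signal + read_noise**2)

    snr_db         = 20 * math.log10(signal / total_noise)  # Signal to Noise Ratio
    dr_stops       = math.log2(full_well / read_noise)      # engineering Dynamic Range
    weber_fraction = total_noise / signal                    # relative noise, a proxy for the
                                                             # just-noticeable Weber-Fechner step

    print(f"SNR {snr_db:.1f} dB, DR {dr_stops:.1f} stops, W-F fraction {weber_fraction:.3f}")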

In terms of IQ, the less we do to the Signal out of the photosites, the less chance there is to lose or corrupt image Information.  The most minimalist mode for many current digital cameras is around base ISO, which applies minimum amplification before feeding the Signal into the ADC for conversion to digital and storage in the raw data file.  This is certainly the case for many of today’s near ISOless/invariant DSCs.  I will explain in a future article that this is also true for ‘ISOful’ cameras that are not able to transfer the full information collected at the photosites to the raw data in one go.

What is this Fear of Banding when Pushing Data?

But if it’s true that for many cameras IQ at base ISO is as good as it gets – and scene information is properly encoded by the system as discussed earlier** – then what is all this fear of posterization (also known as contour banding), as long as proper information transfer is maintained in the data throughout the processing chain?

Emil Martinec confirms in his excellent treatise on noise that if the imaging system is properly designed** the raw data is never posterized, including at base ISO.

If, for a given shutter speed and aperture, scene information collected by a well designed camera** is not posterized when minimally processed and saved into the raw data at base ISO, why should it show banding when the data it is stored in is pushed digitally (multiplied linearly) by a power-of-two integer before raw conversion, as exemplified in the previous article?  Scene Information is the same; data bit depth has been expanded but SNR, DR and Weber-Fechner Fractions are unchanged – hence so is IQ.
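The claim that a linear digital push leaves SNR untouched is easy to check numerically: multiplying the data by a constant scales signal and noise together, so the factor cancels in the ratio.  A minimal sketch with synthetic numbers (the shadow signal and read noise values are arbitrary assumptions):

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic deep-shadow patch: ~20 e- mean with shot noise plus 5 e- read noise
    patch  = rng.poisson(20.0, 100_000) + rng.normal(0.0, 5.0, 100_000)
    pushed = patch * 16.0   # digital push by a power-of-two integer, as discussed

    print(patch.mean()  / patch.std())    # SNR before the push
    print(pushed.mean() / pushed.std())   # identical: the factor of 16 cancels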

Conversely, if for the same camera and Exposure scene Information is accidentally (or not) posterized out of the sensor, why would analog amplification (accomplished by raising ISO in-camera) remove the visible banding?  In a well designed imaging system** the ADC is already properly primed by base read noise, and scene information from the photosites is the exact same independently of how it is amplified.  All we are doing by raising ISO is potentially further degrading image information – hence IQ.

Here is a diagram showing the two approaches to linear brightening, with the alternative digital (push, top path) and analog (ISO, bottom path) same amplification under discussion:

The frame is different, but the picture is the same

The data container has changed, but the image information has not.  Different frame, same picture.  So what is all this fear of (contour) banding?

Show Me

The first time I came across such a demonstration (threads at LuLa and DPR) was when the D7000 and K5 came out five years ago.  I couldn’t believe that smaller raw files which produced toothy histograms could result in final images with the same IQ as those with in-camera analog amplification via ISO and a proper, full bell-looking histogram.  So I downloaded the raw files and played with them trying to prove them wrong, stretching them and squeezing them to smithereens.  I couldn’t.  There was just a hint of a difference in the character of the noise of the deepest shadows, and with practice I could tell which was which, but in terms of IQ practically no difference.  Nor did I have a subjective preference between the two.  What I didn’t realize at the time is that the data was indeed different but image information was the same.

For this test I wanted to induce contour banding, so I was looking for a slowly changing dark gradient, such as is sometimes found in subjects with a dark sky.  I found a halogen light, set it up on a slant close to a wall and took two out-of-focus captures of the wall at the same exposure back to back, one at ISO 100 and one at ISO 1600 as shown in the diagram above.  The camera was a Nikon D610 (not ISOless for the first couple of stops) at 1/125s and f/2.8.  It looks like I moved it slightly when I changed ISO (it was just sitting on a table, tsk, tsk) but no matter.

The processing was performed in Matlab and kept minimalist, in order to introduce the fewest possible variables: both captures’ raw data was unpacked by dcraw -D -T -4 and white balanced based on a region of interest, the square area at 4500:1000, 100×100 (RawDigger notation).
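For reference, the equivalent unpacking and white balancing steps might look something like the sketch below in Python/numpy (the original processing was done in Matlab; the file names, the RGGB layout and the x:y reading of the RawDigger coordinates are assumptions to be checked against your own files):

    import numpy as np
    import tifffile  # reads the linear 16-bit TIFFs written by "dcraw -D -T -4"

    # File names are placeholders for the two dcraw outputs
    raw100  = tifffile.imread('ISO100.tiff').astype(np.float64)
    raw1600 = tifffile.imread('ISO1600.tiff').astype(np.float64)

    def white_balance(cfa, y, x, size=100):
        """Scale R and B photosites so their ROI means match green (RGGB assumed)."""
        roi = cfa[y:y + size, x:x + size]
        r = roi[0::2, 0::2].mean()
        g = 0.5 * (roi[0::2, 1::2].mean() + roi[1::2, 0::2].mean())
        b = roi[1::2, 1::2].mean()
        out = cfa.copy()
        out[0::2, 0::2] *= g / r   # red sites
        out[1::2, 1::2] *= g / b   # blue sites
        return out

    # ROI at 4500:1000, 100x100 in RawDigger notation (x:y order assumed)
    wb100  = white_balance(raw100,  y=1000, x=4500)
    wb1600 = white_balance(raw1600, y=1000, x=4500)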

Next the ISO100 white balanced 14-bit raw data was multiplied by 15.7758 (the ratio of the green channel means in the ROI of the two captures) and stored as 16-bit integer data.  The histograms of the white balanced ROIs looked like this at this point:

Histograms of WB Raw Data 100 1600
Note the roughly 16 level gaps in the ISO100 histogram introduced by digital multiplication by 15.7758

The resulting maximum value of the ISO100 data was used to normalize both sets of data (bonus points if you can figure out why it would be higher than the ISO1600’s), which were then demosaiced by Matlab’s built-in function and gamma’d to the tune of 1/2.2 before being saved to 16-bit TIFF files.  A quick check that the means of the ROIs were still virtually equal confirmed that the two images  were directly comparable.
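Continuing the sketch from above (wb100 and wb1600 are the white balanced arrays), the push, normalization, demosaic and gamma steps might look roughly like this; the OpenCV Bayer code and the use of cv2/tifffile stand in for Matlab’s built-in demosaic and TIFF writer and are assumptions, not a reproduction of the original script:

    import numpy as np
    import cv2        # used here only for its bilinear Bayer demosaic
    import tifffile

    PUSH = 15.7758    # ratio of the green channel ROI means of the two captures

    # Push the white balanced ISO100 data digitally, keeping 16-bit integer precision
    pushed100 = np.clip(wb100 * PUSH, 0, 65535).astype(np.uint16)

    # Normalize both sets of data by the maximum of the pushed ISO100 data
    white    = float(pushed100.max())
    norm100  = pushed100 / white
    norm1600 = np.clip(wb1600 / white, 0.0, 1.0)

    def demosaic_gamma(cfa01):
        """Bilinear demosaic (the Bayer code is an assumption) then 1/2.2 gamma."""
        cfa16 = np.round(cfa01 * 65535).astype(np.uint16)
        rgb   = cv2.cvtColor(cfa16, cv2.COLOR_BayerRG2RGB).astype(np.float64) / 65535.0
        return np.round(rgb ** (1 / 2.2) * 65535).astype(np.uint16)

    tifffile.imwrite('ISO100x16.tif', demosaic_gamma(norm100))
    tifffile.imwrite('ISO1600.tif',   demosaic_gamma(norm1600))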

About the only major steps missing compared to a typical raw conversion are the color space transformations and the fine tuning normally specified in ‘Camera Profiles’ (you know: Neutral, Landscape, Portrait etc.).  This is how the resulting untagged TIFFs look when saved as jpegs, ISO100 x 16 pushed data first.  Click on the images to see them at 100% and make sure your browser is not zoomed (ctrl-zero in Chrome), because browsers’ resizing algorithms are known to introduce all sorts of artifacts of their own.

ISO100 x 16 Jpeg, White Balanced, Gamma, Demosaiced

Then the ISO1600 image

ISO1600 Jpeg, White Balanced, Gamma, Demosaiced

Look for differences at 100%.  The first obvious one is that the ISO1600 image is clipped where the beam of light hits the wall, while the ISO 100 x 16 image retains full information there.  But we knew that raising ISO would automatically reduce DR stop for stop and compromise the relative highlights, losing information accordingly.

Next check for additional noise.  I don’t see any looking at the 16-bit TIFFs, but if I measure it, SNR is a smidgen lower at ISO100x16 than at ISO1600 in the deepest shadows.  We knew this too, because the D610 is not ISOless for the first couple of stops above base.
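As an aside, here is roughly how such an SNR check might be scripted on the linear data, before gamma (the ROI coordinates are placeholders; pick a smooth, unclipped patch present in both captures):

    import numpy as np

    def roi_snr(linear_cfa, y, x, size=100):
        """Mean over standard deviation of the green photosites in a uniform ROI."""
        g = linear_cfa[y:y + size, x:x + size][0::2, 1::2]   # G1 sites, RGGB assumed
        return g.mean() / g.std()

    # Digital multiplication cancels in the ratio, so wb100 and the pushed version
    # give the same answer; the coordinates below are placeholders.
    # print(roi_snr(wb100,  y=2000, x=3000))
    # print(roi_snr(wb1600, y=2000, x=3000))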

This subject was chosen to show posterization, and there is definitely a region dark and smoothly changing enough that we should be able to see some if it’s there.  Any visibly obvious banding in the ISO100x16 version, despite the huge gaps in the histogram?  Not really.  Maybe just a hint, but then again I see the exact same shapes in the ISO1600 image when the two are loaded as separate layers in CS5, aligned and switched back and forth.  I am looking at the original 16-bit TIFFs on my carefully maintained video path feeding a real 8-bit monitor, a DELL U2410.  Could the 8-bit video path be playing tricks on me?

Taking the Video Path out of the Equation

How do you exclude the video path when testing for posterization?  I am thinking that one way is to sharpen the heck out of the 16-bit image data: if posterization is introduced or emphasized by the video path there should be no ‘edges’ in the underlying image information for the sharpening to bite on, so the displayed image should not show amplified step-like artifacts; if instead the information in the data has become posterized, then sharpening should amplify its edges, making the contour banding much more obvious.  I figured that high-radius, low-amplitude unsharp masking in Photoshop should do the trick, but when I saw no difference after applying my usual moderate settings I went all out just to see what I could get.  While I was at it I subjected both images to an aggressive curve adjustment as follows:

Curves+USM
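For the curious, this stress test can be approximated outside of Photoshop in a few lines; in the sketch below the radius-to-sigma mapping and the curve shape are rough stand-ins for the USM and curve settings shown above, not a reproduction of them:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def unsharp_mask(img, amount=5.0, radius=250.0):
        """High-radius unsharp mask: img + amount * (img - blurred)."""
        # Photoshop's radius does not map exactly onto a Gaussian sigma;
        # sigma = radius / 3 is a rough assumption.  Last axis is color: no blur there.
        blurred = gaussian_filter(img, sigma=(radius / 3.0, radius / 3.0, 0))
        return np.clip(img + amount * (img - blurred), 0.0, 1.0)

    def steep_curve(img):
        """Crude stand-in for the aggressive curve in the figure above."""
        return np.clip((img - 0.05) / 0.25, 0.0, 1.0)

    # img is one of the 16-bit TIFFs loaded as float in [0, 1], e.g. via tifffile
    # stressed = steep_curve(unsharp_mask(img))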

The resulting images below are distorted and look nothing like the scene I saw.  Jpegs this time are about 14MB each, about ten times bigger than the set above (Why?  Noise amplified by the curve?).  ISO100x16 pushed image first; click on them to view at 100%.

ISO100x16 Jpeg, after USM 500% 250px HiRaHiAm Sharpening and Shown Curve adjustment

and the ISO1600 Jpeg

ISO1600 Jpeg, after USM 500% 250px HiRaHiAm Sharpening and Shown Curve adjustment

The same weird shapes appear in both versions, introduced/emphasized by the ridiculously aggressive sharpening and curves.  Some small differences are to be found near clipping in the ISO1600 image, because the three channels saturate at different levels, with the ISO100x16 version supplying more complete information there.  Noise looks about the same in both to me as well.

Different Data Does Not Imply Different IQ

These are results similar to what I saw 5 years ago from the D7000 and K5 raw files when I stretched them to bits: virtually no difference in IQ.  And remember, the data and the histograms of the two files are very different.  But different data does not necessarily mean better or worse image Information Quality, as better explained in this article.  That’s because the human visual system receives information about the scene from photons (emitted by a monitor or reflected by a print) hitting the eyes: as long as the information is there it does not care whether in some intermediate step it was saved as a 16-bit or 32-bit or floating point format.

You will not be able to get much more than this from these 8-bit jpegs so if you would like the originals for some real investigative work send me an email via the form in the ‘About’ tab top right and I will mail them to you as layers in a Photoshop TIF file (warning, 350MB download).  Let me know if you spot something I am missing.

What Contour Banding Looks Like

Just in case you were curious, it’s pretty easy to induce banding in captures with such smooth gradients when the information is not fully transferred to the data used to display the final photograph (the frame is smaller than the picture, as it were).  For instance, say we dropped the bit depth to 8 bits and then applied the aggressive adjustments above.  At base ISO most DSC sensors collect more than 8 bits of scene information, so storing it in only 8 bits will result in some information loss.  In addition the noise in the system will not be enough to properly dither the Weber-Fechner Fractions to perceptual smoothness.  Here is the ISO100x16 data treated to the adjustments in the earlier figure at 8 bits (view at 100%, ctrl-zero)

ISO100x16 data converted to 8-bits then subjected to USM 250% 10px plus the curve above

That’s what contour banding looks like.  And here is the ISO1600 data subjected to the same treatment

ISO1600 data converted to 8-bits then subjected to USM 250% 10px plus the curve above

Virtually identical visual information, save for the clipping in the ISO1600 version (and the fact that I moved the camera slightly).  Which do you prefer?
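For completeness, the 8-bit drop described above is just a re-quantization before the stress treatment; a sketch, reusing the unsharp_mask and steep_curve helpers from the earlier snippet (amount 2.5 and radius 10 mirroring the USM 250% 10px in the captions):

    import numpy as np

    def quantize_8bit(img):
        """Round a float image in [0, 1] to 256 levels, discarding sub-8-bit detail."""
        return np.round(img * 255.0) / 255.0

    # At base ISO the sensor captures more than 8 bits' worth of levels, and after
    # quantization the remaining noise no longer dithers the steps to smoothness:
    # banded = steep_curve(unsharp_mask(quantize_8bit(img), amount=2.5, radius=10.0))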

No Fear Also when Post Processing Aggressively

As you’ve seen, this reasoning holds just as well for aggressive post processing of data with gappy histograms – say unduly squeezing ‘Levels’, introducing a severe ‘S curve’ or applying high-radius USM sharpening, in PS parlance.  Absent some form of noise reduction downstream, if the imaging system is properly primed** at the source, either you will see (or be able to induce) posterization in equally processed images from both the analog- and the digitally-amplified original data, or you will not in either.  The data may be different, but the image information within it is the same.  If you have examples that suggest otherwise, I would be interested to see them.

So in my humble opinion, if you are working within your camera’s ISOless range, don’t worry about posterization:  choose the exposure and brightening strategy that best suits your subject and style, forgetting about potential gaps in histograms of the working data.  If the information was properly encoded and transferred throughout the system, gaps in the data have nothing to do with IQ.

What does this say about clever compression algorithms like Nikon’s?  We’ll leave that for another time.

**  In a well designed imaging system the encoded bit depth needs to be maintained higher than the base-two logarithm of the ratio between the capacity of the ADC and the noise at its input, both in the same units.
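In other words the condition is: encoded bit depth ≥ log2(ADC input capacity / noise at the ADC input), both in the same units.  A quick check with illustrative numbers (assumptions, not measurements of any specific ADC):

    import math

    adc_capacity = 75000.0   # ADC input capacity, here in e- (illustrative)
    input_noise  = 5.0       # random noise at the ADC input, e- RMS (illustrative)

    required_bits = math.log2(adc_capacity / input_noise)
    print(f"Encoded bit depth should be at least {required_bits:.1f} bits")  # ~13.9

    # A 14-bit raw file would satisfy the condition here: the noise then spans about
    # one LSB or more, dithering the quantization steps so the data is not posterized.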

Only the manufacturer can measure the actual noise level at the input of the ADC.  What we can estimate instead, thanks to Photon Transfer Curves, is the random read noise referred to the output of the photosites in physical units of photoelectrons (e-).  If analog amplification and transfer of the e- to the ADC add little noise, we can assume that the estimated noise out of the photosites is about the same as that at the input of the ADC.  That is not always the case.  This subtle difference can sometimes result in interesting PTC responses near base ISO for overly clean sensors, even with an estimated input-referred read noise larger than 1 LSB (latest Exmors, see for example the base ISO curves here).

2 thoughts on “Image Quality: Raising ISO vs Pushing in Conversion”

  1. Hello, your testing procedure looks incorrect. You should reduce the exposure time so that there is no highlight clipping in the picture – then you will see the difference in the shadows. This is how they do the measurement on dpreview.

    1. Hi Max, the objective of this post was to show the difference, if any, between raising ISO and keeping ISO at base in near ISOless cameras while compensating for the resulting lower brightness during raw conversion, as you can read in the text. It seems to me that the procedure was correct for that.
