Why Raw Sharpness IQ Measurements Are Better

Why Raw?  The question is whether one is interested in measuring the objective, quantitative spatial resolution capabilities of the hardware or whether instead one would prefer to measure the arbitrary, qualitatively perceived sharpening prowess of (in-camera or in-computer) processing software as it turns the capture into a pleasing final image.  Either is of course fine.

My take on this is that the better the IQ captured, the better the final image will be after post processing.  In other words I am typically more interested in measuring the spatial resolution information produced by the hardware, comfortable in the knowledge that if I’ve got good quality data to start with, its appearance can only be improved in post by the judicious use of software.  By IQ here I mean objective, reproducible, measurable physical quantities representing the quality of the information captured by the hardware, ideally in scientific units.

Can we do that off a file rendered by a raw converter or, heaven forbid, a Jpeg?  Not quite, especially if the objective is measuring IQ.

Once information from the scene has gone through the optics and has been captured by the sensor it is either stored linearly as-is as raw data or it is necessarily put through a dizzying array of algorithms and non-linear processing on the way to being rendered and transformed into what the manufacturer (or the raw conversion software producer) thinks is a pleasing image.  Even before we start playing with settings and sliders.

Here is a simplified diagram depicting how spatial resolution information is captured by a digital camera and then modified through processing:

During the capture process light from the scene goes through the lens and forms a continuous image of varying intensity on the sensing plane.  The sensor, sitting behind various filters (Infra Red, Optical Low Pass/AA, Color Filter Array) and microlenses, samples image intensity at each pixel during exposure.  It then amplifies this signal and converts it to digital data, which gets packaged and stored into a raw file.  In advanced digital cameras that’s mostly it.  The process is quantitative, linear and easily described by centuries-old physics in terms of very few variables.  For an image captured with good technique off a sturdy tripod the biggest unknown is lens performance throughout the field of view as seen by the sensor in question.
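The linearity of this chain can be sketched in a few lines of code.  The parameters below (quantum efficiency, gain, black level, bit depth) are hypothetical round numbers chosen only to illustrate the principle – they do not describe any particular camera:

```python
# Hypothetical sensor parameters -- illustrative only, not any specific camera
QE = 0.5                 # quantum efficiency: photons -> photoelectrons
GAIN = 0.25              # ADU recorded per photoelectron
BLACK_LEVEL = 512        # offset added before the ADC, in ADU
FULL_SCALE = 2**14 - 1   # 14-bit ADC

def raw_signal(photons):
    """Mean raw value (ADU) recorded for a given mean photon count.

    Ignoring noise, the chain is strictly linear up to clipping:
    photons -> electrons -> amplified signal -> digital number.
    """
    electrons = QE * photons
    adu = BLACK_LEVEL + GAIN * electrons
    return min(adu, FULL_SCALE)

# Doubling the light exactly doubles the signal above the black level
low = raw_signal(10_000) - BLACK_LEVEL
high = raw_signal(20_000) - BLACK_LEVEL
print(high / low)  # -> 2.0
```

This is why raw data is such a convenient measurement substrate: subtract the black level and the numbers are directly proportional to the light that hit each pixel.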

On the other hand here is a list of the adjustments relevant to spatial resolution that are typically applied to raw data by the in-camera processing engine or by raw conversion software in order to produce a pleasing SOOC (or straight-out-of-raw-converter) image – before a single setting or slider has been touched.  Not necessarily in this order:

  • Black point, White point, White balance
  • Microlens Vignetting Corrections, by color channel
  • Lens Vignetting Correction
  • Lens Distortion Corrections
  • Demosaicing (Upsizing, Filtering)
  • Chromatic Aberration corrections
  • Color corrections
  • Tone Mapping
  • Macro, Micro and Local Contrast adjustments
  • Sharpening (Capture, Creative and Output)
  • Conversion to a Gamma color space
  • Noise Reduction and other filtering
  • Lossy Compression (in the case of a Jpeg)

and more.  Most are subjective, non-linear and – without deep knowledge of the underlying algorithms – irreversible transformations.
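A minimal illustration of why even the tamest of these steps is hard to undo: generic gamma encoding followed by 8-bit quantization – a stand-in here for a rendering pipeline, with the 2.2 exponent and bit depths chosen as typical assumptions rather than any specific camera’s recipe – merges distinct raw values irreversibly:

```python
import numpy as np

# Linear 14-bit raw values: two neighbours in the shadows, two in the highlights
raw = np.array([100, 101, 15000, 15001], dtype=np.float64)
raw_max = 2**14 - 1

# Generic gamma-encode to 8 bits, standing in for a rendering pipeline
encoded = np.round(255 * (raw / raw_max) ** (1 / 2.2)).astype(np.uint8)

# Attempt to invert the transform back to linear raw values
recovered = raw_max * (encoded / 255.0) ** 2.2

print(encoded)    # each pair of neighbours maps to the same 8-bit code...
print(recovered)  # ...so the original distinction is gone for good
```

And that is the most benign, best-documented operation on the list; demosaicing, tone mapping and sharpening discard or reshape far more information, in ways that are rarely published.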

Together these adjustments, whether performed by the in-camera processing engine in order to provide the SOOC jpeg or by raw conversion software, modify the spatial resolution information arbitrarily and often irreversibly, to the point that it becomes disconnected from the actual IQ capabilities of the camera/lens hardware system.  The number of uncontrolled variables is large.  Measuring the spatial resolution off an 8-bit compressed, subjectively processed and sharpened jpeg is equivalent to asking: ‘what do you want the spatial resolution to be?’  That’s your answer.

Various types of contrast adjustments and sharpening are extremely good at muddying the waters through which we are trying to detect the sharpness IQ of our gear.   Even though the spatial information captured in the raw data is exactly the same, those characteristics are changed significantly in the rendered image because of subjective processing controlled by even the most basic camera (or raw converter) settings: say choosing Standard vs Landscape vs Portrait mode.

Here for instance are slanted edge MTF measurements off the same raw file, with the raw spatial frequency response represented by the blue line and the jpeg response – as first opened in ACR with the Adobe Standard profile, all sliders in their default positions – represented by the red line:

Note the much higher MTF values and different shape obtained by the Jpeg.  The difference between the two curves is due entirely to image processing.  When we start adding local contrast, unsharp masking and deconvolution to taste, the sky is the limit.
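The slanted-edge method ultimately boils down to differentiating an edge profile and taking the magnitude of its Fourier transform.  A toy one-dimensional version makes the effect above easy to reproduce: the synthetic Gaussian-blurred edge, the boxcar unsharp mask standing in for a converter’s sharpening, and the Hann window are all illustrative assumptions, not a metrology-grade implementation:

```python
import numpy as np
from math import erf

N = 256
x = np.arange(N) - N / 2

# Synthetic edge spread function: a step blurred by a Gaussian (sigma in pixels)
sigma = 2.0
esf = np.array([0.5 * (1 + erf(xi / (sigma * np.sqrt(2)))) for xi in x])

# Crude unsharp mask standing in for a rendering pipeline's sharpening
blurred = np.convolve(esf, np.ones(5) / 5, mode="same")
esf_sharp = esf + 1.0 * (esf - blurred)

def mtf(esf):
    lsf = np.diff(esf)                # line spread function
    lsf = lsf * np.hanning(lsf.size)  # window to tame edge effects
    m = np.abs(np.fft.rfft(lsf))
    return m / m[0]                   # normalize so MTF(0) = 1

mtf_raw, mtf_jpeg = mtf(esf), mtf(esf_sharp)
# At mid spatial frequencies the "processed" edge measures sharper than
# the capture, even though no new scene information was added
print(mtf_raw[20], mtf_jpeg[20])
```

Run it and the sharpened curve sits above the unprocessed one across the mid frequencies – the same behaviour as the red jpeg curve versus the blue raw curve, manufactured entirely in software.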

Instead, the best place to measure the full, untouched spatial resolution information captured by a camera and lens is straight from the raw file as it rolls off the sensor and electronics.  No demosaicing, no processing, no sharpening  (well, assuming no subliminal tricks by manufacturers before writing data to the raw file) – just what the imaging system’s hardware saw objectively according to the laws of physics, in order to judge its capabilities and IQ.
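In practice that means analysing one CFA colour plane at a time, straight off the mosaic.  Here is a sketch of separating the four Bayer planes; the array is synthetic and the RGGB layout an assumption (with a real file, a library such as rawpy exposes the equivalent mosaic through its raw_image and raw_colors attributes):

```python
import numpy as np

def bayer_planes(mosaic, pattern="RGGB"):
    """Split an undemosaiced Bayer mosaic into its four colour planes.

    Each plane is a half-resolution image of one CFA channel, untouched
    by demosaicing, white balance or any other processing.
    """
    offsets = {"RGGB": ((0, 0), (0, 1), (1, 0), (1, 1))}
    r, g1, g2, b = offsets[pattern]
    pick = lambda dy, dx: mosaic[dy::2, dx::2]
    return {"R": pick(*r), "G1": pick(*g1), "G2": pick(*g2), "B": pick(*b)}

# Synthetic 4x4 RGGB mosaic standing in for sensor data (ADU values)
mosaic = np.array([
    [1000, 2000, 1000, 2000],
    [2000,  500, 2000,  500],
    [1000, 2000, 1000, 2000],
    [2000,  500, 2000,  500],
])
planes = bayer_planes(mosaic)
print(planes["G1"])  # slanted-edge analysis would then run on one such plane
```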

There are lots of sharpness tests on the internet, but very few follow this simple precept.  Lenstip.com is one of the few.

By this I don’t mean to imply that there is no value in measuring sharpness once the image has been rendered, processed and sharpened – there is, especially qualitatively – but I personally would not want to make a call on a defective lens or a buying decision based on qualitative factors alone.  The results are arbitrary and difficult to replicate or compare to other sources.  Heck, LR changes its parameters from camera to camera and lens to lens, let alone from manufacturer to manufacturer.

On the other hand I could easily make such decisions based on quantitative, repeatable, objective, comparable spatial resolution IQ measurements off the raw data alone.  They may not be perfect, but if the hardware’s IQ is good to start with I know I can then typically process the image into a more pleasing final result.  The opposite is not true.

Therefore measure the sharpness (spatial resolution) IQ of hardware off the raw data as-is, no demosaicing, no processing.  In the next post we will see how to do just that.

 
