Why Raw? The question is whether one is interested in measuring the objective, quantitative spatial resolution capabilities of the hardware or whether instead one would prefer to measure the arbitrary, qualitatively perceived sharpening prowess of (in-camera or in-computer) processing software as it turns the capture into a pleasing final image. Either is of course fine.
My take on this is that the better the IQ captured, the better the final image will be after post-processing. In other words I am typically more interested in measuring the spatial resolution information produced by the hardware, comfortable in the knowledge that if I’ve got good quality data to start with its appearance will only be improved in post by the judicious use of software. By IQ here I mean objective, reproducible, measurable physical quantities representing the quality of the information captured by the hardware, ideally in scientific units.
Can we do that off a file rendered by a raw converter or, heaven forbid, a JPEG? Not quite, especially if the objective is measuring IQ.
Once information from the scene has gone through the optics and has been captured by the sensor, it is either stored as-is as raw data or it is necessarily put through a dizzying array of algorithms and processing on the way to being rendered and transformed into what the manufacturer (or the raw conversion software producer) thinks is a pleasing image – even before we start playing with settings and sliders.
Here is a simplified diagram depicting how spatial resolution information is captured by a digital camera and then modified through processing:
During the capture process light from the scene goes through the lens and forms a continuous image of varying intensity on the sensing plane. The sensor, sitting behind various filters (infrared, optical low-pass/AA, Color Filter Array) and microlenses, samples the image intensity at each pixel during exposure. It then amplifies this signal and converts it to digital data, which gets packaged and stored in a raw file. That’s mostly it. The process is quantitative, linear and easily described by centuries-old physics in terms of very few variables. For an image captured with good technique off a sturdy tripod the biggest unknown is lens performance throughout the field of view as seen by the sensor in question.
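The linearity of this chain can be illustrated with a toy numpy sketch. All the numbers below (quantum efficiency, gain, black level, bit depth) are hypothetical, chosen only to show the shape of the model, not taken from any real camera:

```python
import numpy as np

# Toy linear sensor model: photons -> photoelectrons -> analog gain ->
# ADC quantization -> raw Digital Numbers (DN). Illustrative values only.
rng = np.random.default_rng(0)

qe = 0.5            # hypothetical quantum efficiency (electrons per photon)
gain = 0.8          # hypothetical DN per electron at this ISO
black_level = 512   # offset added before writing the raw file
full_scale = 16383  # 14-bit ADC

photons = rng.poisson(lam=1000.0, size=(4, 4))           # photon shot noise
electrons = photons * qe
read_noise = rng.normal(0.0, 3.0, size=electrons.shape)  # electrons RMS

dn = np.clip(np.round((electrons + read_noise) * gain) + black_level,
             0, full_scale).astype(np.uint16)

# The chain is linear in the signal:
# mean DN ≈ gain * qe * mean photons + black level
print(dn.mean(), gain * qe * 1000 + black_level)
```

Because every stage is a known linear operation plus well-characterized noise, the raw DN can be traced back to physical quantities at the sensing plane – exactly the property the processing pipeline below destroys.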
On the other hand, here is a list of the adjustments relevant to spatial resolution that are typically applied to raw data by the in-camera processing engine or by the raw conversion software in order to produce a pleasing SOOC (or straight-out-of-raw-converter) image – before a single setting or slider has been touched. Not necessarily in this order:
- Moiré and other filtering
- Noise Reduction
- Local Chromatic Aberration corrections
- Distortion Corrections
- Demosaicing (Upsizing, Weighted Averaging)
- Conversion to a Gamma color space
- Black point, White point
- Contrast adjustments
- Local Contrast adjustments
- Micro Contrast adjustments
- Sharpening (Capture, Creative and Output)
- Lossy Compression (in some cases only)
and more. Most are subjective, non-linear and irreversible transformations.
Every single one of these adjustments, whether performed by the in-camera processing engine in order to provide the SOOC image or by raw conversion software, modifies the spatial resolution information arbitrarily and often irreversibly to the point that it becomes disconnected from the actual IQ capabilities of the camera/lens hardware system. The number of uncontrolled variables is large. Measuring the spatial resolution off an 8-bit compressed, processed and sharpened image is equivalent to asking: ‘what do you want the spatial resolution to be?’ That’s your answer.
Contrast adjustments and sharpening are extremely good at muddying the waters through which we are trying to detect the sharpness IQ of our gear, as discussed in an earlier post. Rendered and processed images are typically not a good place to look if you want to know how the hardware performs.
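To see why, here is a synthetic sketch of the effect sharpening has on a measured MTF. The edge profile, blur kernel and unsharp-mask amount are all made up for illustration; the point is only that the same "capture" yields a different resolution figure after sharpening:

```python
import numpy as np

# A blurred edge standing in for what a lens + sensor might record.
n = 256
x = np.arange(n)
edge = 1.0 / (1.0 + np.exp(-(x - n / 2) / 3.0))

def mtf(profile):
    """Toy MTF: differentiate the edge into a line spread function,
    take its Fourier magnitude, normalize DC to 1."""
    lsf = np.diff(profile)
    m = np.abs(np.fft.rfft(lsf))
    return m / m[0]

# Unsharp mask: original + amount * (original - blurred copy).
kernel = np.ones(5) / 5.0
blurred = np.convolve(np.pad(edge, 2, mode="edge"), kernel, mode="valid")
sharpened = edge + 1.5 * (edge - blurred)

# Same scene, same hardware – higher apparent MTF after sharpening.
print(mtf(edge)[10], mtf(sharpened)[10])
```

Dial the amount up or down and the "measured" resolution follows, which is exactly the "what do you want the spatial resolution to be?" problem.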
Instead, the best place to measure the full, untouched spatial resolution information captured by a camera and lens is straight from the raw file as it rolls off the sensor and electronics. No demosaicing, no processing, no sharpening (well, assuming no subliminal tricks by manufacturers before writing data to the raw file) – just what the hardware saw according to the laws of physics, in order to judge its capabilities and IQ.
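In practice this means working on individual CFA color planes pulled straight off the mosaic, with no interpolation between them. A sketch with a tiny synthetic mosaic and an assumed RGGB Bayer layout (with a real raw file, a library such as rawpy exposes the same 2-D array via its raw_image attribute):

```python
import numpy as np

# Synthetic 4x4 mosaic standing in for raw sensor data (values arbitrary).
mosaic = np.array([[10, 20, 11, 21],
                   [30, 40, 31, 41],
                   [12, 22, 13, 23],
                   [32, 42, 33, 43]], dtype=np.uint16)

# Assumed RGGB Bayer layout: each channel is every other pixel in each
# direction. No demosaicing - just subsampling the untouched data.
red     = mosaic[0::2, 0::2]
green_1 = mosaic[0::2, 1::2]
green_2 = mosaic[1::2, 0::2]
blue    = mosaic[1::2, 1::2]

print(red)   # the raw red samples, exactly as the sensor recorded them
```

Each plane is a sparser sampling of the scene, but every value in it is a genuine, unprocessed sensor measurement – which is what makes quantitative analysis possible.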
There are lots of sharpness tests on the internet, but very few follow this simple precept. Lenstip.com is one of the few.
By this I don’t mean to imply that there is no value in measuring sharpness once the image has been rendered, processed and sharpened – there is, especially qualitatively – but I personally would not want to make a call on a defective lens or make a buying decision based on qualitative factors alone. The results are arbitrary and difficult to replicate or to compare to other sources. Heck, LR changes its parameters from camera to camera, let alone from manufacturer to manufacturer.
On the other hand I could easily make such decisions based on quantitative, repeatable, comparable spatial resolution IQ measurements off the raw data alone. They may not be perfect, but if the hardware’s IQ is good to start with I know I can then typically process the image into a more pleasing final result. The opposite is not true.
Therefore, measure the sharpness (spatial resolution) IQ of hardware off the raw data as-is: no demosaicing, no processing. In the next post we will see how to do just that.