Raw Converter Sharpening with Sliders at Zero?

I’ve mentioned in the past that I prefer to take spatial resolution measurements directly off the raw data in order to minimize the often unknown variables that demosaicing and rendering algorithms introduce without the operator’s knowledge, even when all relevant sliders are zeroed. In this post we discover that such hidden processing is indeed present in ACR/LR process 2010/2012 and in Capture NX-D – while DCRAW appears to be transparent, performing straight demosaicing with no additional processing.

An easy way to demonstrate processing beyond straight demosaicing and rendering of a raw image is to take a close look at the profile of a sharp transition, such as a clean black and white edge projected onto the sensor. If we normalize the black to white transition, with black around zero intensity and white near one, and take readings across it (perpendicular to it), ideally we would read zeros in all the pixels to the left of the step and ones in all the pixels to the right of it. In practice, because of imperfections in the lens and imaging system, the edge is spread out over a few pixels as follows:

[Figure: D810 ESF, Green channel, Horizontal edge]

The blue line above is an actual edge intensity profile (Edge Spread Function, ESF) as measured directly from the unprocessed raw data by the excellent open source MTF Mapper by Frans van den Bergh. Thanks to super-sampling along the slanted edge, it is able to measure the intensity profile across the edge to 1/8th of a pixel resolution.
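The gist of that super-sampling step can be sketched in a few lines of Python (a simplified illustration of the slanted-edge idea, not MTF Mapper’s actual code; the geometry helper and the synthetic edge below are mine):

    import numpy as np

    def supersampled_esf(img, angle_deg, bins_per_pixel=8):
        # Because the edge is slightly slanted relative to the pixel grid, the
        # projected distances of pixel centers from the edge fall at many
        # different sub-pixel phases; binning them 8 per pixel yields an ESF
        # sampled at 1/8 px, as MTF Mapper does.
        h, w = img.shape
        y, x = np.mgrid[0:h, 0:w]
        theta = np.deg2rad(angle_deg)
        # Signed distance of each pixel center from a line through the crop center
        d = (x - w / 2) * np.cos(theta) + (y - h / 2) * np.sin(theta)
        bins = np.round(d * bins_per_pixel).astype(int)
        bins -= bins.min()
        sums = np.bincount(bins.ravel(), weights=img.ravel())
        counts = np.bincount(bins.ravel())
        return sums / np.maximum(counts, 1)  # empty bins (far corners) stay at 0

    # Synthetic example: a vertical edge slanted by ~5 degrees, blurred by the system
    h, w = 150, 400
    yy, xx = np.mgrid[0:h, 0:w]
    slope = np.tan(np.deg2rad(5))
    edge = 0.5 * (1 + np.tanh((xx - w / 2 + slope * yy) / 2.0))
    esf = supersampled_esf(edge, angle_deg=5)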

An algorithm that performs demosaicing and rendering only – without additional processing – would track this curve relatively closely, without any of the overshoots, undershoots or ringing typical of subsequent sharpening, as shown in an earlier article. Open source DCRAW by David Coffin is often considered to be such a minimalist converter, so let’s see how the edges it creates compare to those from the raw data alone.
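For reference, conversions like the ones used below can be reproduced with dcraw’s documented flags (-q selects the demosaicing algorithm; see also the comment thread at the end). A minimal sketch via Python’s subprocess, assuming dcraw is installed on the PATH:

    import os
    import subprocess

    # White balance with the camera's as-shot multipliers (-w) and write 16-bit
    # sRGB TIFFs (-6 -T -o 1) using two demosaicing algorithms:
    # -q 1 = VNG, -q 3 = AHD (reportedly LMMSE in recent builds; see the note below).
    for quality, name in ((1, "vng"), (3, "ahd")):
        subprocess.run(
            ["dcraw", "-w", "-o", "1", "-6", "-T", "-q", str(quality), "DSC_7553.NEF"],
            check=True,
        )
        # dcraw always writes DSC_7553.tiff, so rename the output between runs
        os.rename("DSC_7553.tiff", f"DSC_7553_{name}.tiff")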

I used a 400×150 pixel area of the edge below the center of file DSC_7553.NEF, a D810+85mm:1.8G f/5.6 capture at base ISO by dpreview.com. This is what it looks like, just the raw data after CFA normalization (shown in the following graphs as WB Raw):

[Figure: Edge crop]

The NEF was also white balanced and demosaiced to sRGB by dcraw with the AHD* and VNG algorithms, cropped as above and fed to MTF Mapper, which produced the following intensity profiles (ESFs):

[Figure: Edge Profile]

The edge profiles look pretty similar, with no signs of additional processing, squeezing or acutance stretching.  This is what the top quadrant of that graph looks like up close:

[Figure: Well Behaved Converters]

The main differences around the bends are explained by a number of standard adjustments required for rendering, including gamma and contrast. If not abused (as in a relatively neutral rendering), those tend not to degrade MTF performance much. It’s notable, however, that there are no tell-tale signs of additional processing.
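A pure tone curve is, after all, a monotonic point operation: it bends the toe and shoulder of the ESF but cannot create overshoot or ringing. A minimal sketch applying the standard sRGB encoding formula to a made-up linear edge:

    import numpy as np

    def srgb_encode(linear):
        # Standard sRGB transfer curve: linear light in [0, 1] -> encoded value
        linear = np.clip(linear, 0.0, 1.0)
        return np.where(linear <= 0.0031308,
                        12.92 * linear,
                        1.055 * np.power(linear, 1 / 2.4) - 0.055)

    # A synthetic linear ESF and its gamma-encoded version: the bends change,
    # but the curve stays monotonic -- no overshoot, undershoot or ringing.
    x = np.linspace(-4, 4, 65)                  # distance from edge, 1/8 px steps
    esf_linear = 0.5 * (1 + np.tanh(x / 1.2))   # made-up blurred step
    esf_srgb = srgb_encode(esf_linear)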

Next the NEF was white balanced and demosaiced to sRGB by ACR using both the 2010 and 2012 process versions, with the Adobe Standard profile and all sliders (including those shown below) set to zero – as well as by the 2012 process with everything at default:

[Figure: ACR 2012 sliders]

I also threw in new-as-of-last-year Capture NX-D, because the images it renders with all its relevant sliders at zero appear sharper to me than those produced with the same setup by the now-left-in-the-lurch Capture NX2, which never performed any ‘subliminal’ processing. Here are their ESFs, same crops as above:

[Figure: Sharpened Converters]

And that’s what additional processing looks like: overshoots (undershoots at the other end) and ringing. Sure enough, CNX-D is right in the thick of it, guilty as charged; though I would arguably call it the best compromise out there. And so is ACR, even with all sliders at zero. This is not necessarily bad when rendering an image for display, but it does introduce a number of uncontrolled variables when attempting to measure the performance of different hardware.
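These artifacts are easy to quantify from a measured ESF by comparing its extremes against the plateau levels far from the transition. A hypothetical helper (the plateau fraction is an arbitrary choice):

    import numpy as np

    def edge_overshoot(esf, plateau_frac=0.2):
        # Estimate the plateaus from the outer fifth of samples on each side,
        # then express the ESF extremes as a percentage of the edge height
        esf = np.asarray(esf, dtype=float)
        k = max(1, int(len(esf) * plateau_frac))
        black, white = np.median(esf[:k]), np.median(esf[-k:])
        span = white - black
        overshoot = (esf.max() - white) / span * 100.0
        undershoot = (black - esf.min()) / span * 100.0
        return overshoot, undershoot

A well-behaved (unsharpened) ESF returns values near zero; a sharpened one shows clearly positive percentages at both ends.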

ACR process 2012 with all its sliders at default supposedly sharpens the image with a blend of unsharp masking and deconvolution, the relative strength of the two effects controlled by the Detail slider, in this case at 25%. Even though users often consider it insufficient for capture sharpening, that’s quite a potent mix, with sometimes amusing results from online testing sites which use those settings to render the images they test. Especially noteworthy are those that show more line widths per picture height at MTF50 than the number of physical rows the sensor has. With an AA filter! How many more phantom rows will the AA-less version of that same sensor in the current generation’s body have, I wonder? I’ve noticed some of those results have been pulled since the last time I looked. Otus, Otus, why don’t they test you? I think I know.
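A toy sketch of such a blend follows. This is not Adobe’s actual pipeline: the Gaussian PSF, iteration count and blending rule are all my own assumptions for illustration.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def usm(img, radius=1.0, amount=1.0):
        # Classic unsharp mask: add back the detail lost to a Gaussian blur
        return img + amount * (img - gaussian_filter(img, radius))

    def rl_deconvolve(img, radius=1.0, iterations=10):
        # Minimal Richardson-Lucy deconvolution with a symmetric Gaussian PSF;
        # expects a float image scaled to (0, 1]
        estimate = img.copy()
        for _ in range(iterations):
            blurred = gaussian_filter(estimate, radius)
            estimate *= gaussian_filter(img / (blurred + 1e-9), radius)
        return estimate

    def blended_sharpen(img, detail=0.25, radius=1.0):
        # detail = 0 -> pure USM, detail = 1 -> pure deconvolution
        return (1.0 - detail) * usm(img, radius) + detail * rl_deconvolve(img, radius)

Deconvolution tries to undo a modeled blur and so rewards an accurate PSF; USM simply exaggerates local contrast, which is where the overshoots come from.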

But kidding aside, and ignoring ACR at default for a moment, MTF50 readings from the algorithms with minimized sliders were fairly close, ranging from 0.284 to 0.301 cy/px for the edge at hand – with the highest score obtained by the unprocessed, normalized raw file. This makes sense, because demosaicing entails something akin to low pass filtering: data from neighbouring pixels is necessarily mixed in order to gather color information where there was none. Nevertheless I have seen in the past that good demosaicing algorithms are able to preserve linear spatial resolution information during the rendering process and come close to results from unprocessed raw data. Today dcraw AHD* came in a very close second, practically matching WB Raw. I am impressed:

[Figure: SFR of Demosaicing Algorithms]

The chart shows clearly which algorithms perform additional processing. DCraw’s renderings are well behaved, close to the raw-data benchmark. ACR 2012 with all sliders at zero holds up well up to MTF50 but loses steam above that, lingering in the aliased upper frequencies. That’s probably due to some additional filtering somewhere in the chain (see also the same kink in the default rendition) – not very desirable behavior: as far as retaining linear spatial resolution information goes, either of the two DCraw algorithms would appear to be preferable. On the other hand, ACR 2012 at default hits MTF50 at about a 1/3 higher frequency than the others, 0.416 cycles per pixel – with the unwelcome side effect of also amplifying higher frequency false detail (aliasing), moiré and noise. Alas, the perfect sharpener has not been invented yet.
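For the record, the measurement itself is conceptually simple once a super-sampled ESF is in hand: differentiate it to obtain the Line Spread Function (LSF), take the magnitude of its Fourier transform, normalize, and read off where the curve crosses 0.5. A bare-bones sketch, without the windowing and noise handling a real tool like MTF Mapper applies:

    import numpy as np

    def mtf50_from_esf(esf, samples_per_pixel=8):
        # Line Spread Function: derivative of the ESF
        lsf = np.gradient(np.asarray(esf, dtype=float))
        # MTF: magnitude of the Fourier transform, normalized so MTF(0) = 1
        mtf = np.abs(np.fft.rfft(lsf))
        mtf /= mtf[0]
        # Frequency axis in cycles/pixel (samples are 1/8 px apart)
        freqs = np.fft.rfftfreq(len(lsf), d=1.0 / samples_per_pixel)
        idx = np.argmax(mtf < 0.5)      # first bin below 0.5
        if idx == 0:
            return freqs[-1]            # never dropped below 0.5
        # Linear interpolation between the straddling bins
        f0, f1 = freqs[idx - 1], freqs[idx]
        m0, m1 = mtf[idx - 1], mtf[idx]
        return f0 + (0.5 - m0) * (f1 - f0) / (m1 - m0)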

Just about every converter other than DCraw has its own secret sauce thrown in, whether perceptual and/or physical. Some, like ACR/LR, are also known to use different parameters for different cameras. This variability is the reason I prefer to stick to unprocessed raw measurements when investigating the linear spatial resolution performance of different hardware. Fewer, better controlled variables make for a more even playing field and better results.


* Note that Jean Pierre suggests in the comments below that what is called AHD in DCraw is actually implemented by the LMMSE algorithm.

Comments

  1. How about converting the raw to DNG using Adobe tools and changing the “BaselineSharpness” tag in the DNG file, to see if you can switch the extra sauce in ACR/LR off?

  2. Interesting idea, although not quite in line with the target audience of the article. I’ve never done it though, so if you’d like me to look into it I will need detailed instructions via an email. You can send one to me by pressing the ‘About’ tab above.

    Jack

    1. It is a simple numeric EXIF tag in the DNG, so the well-known exiftool utility (or any GUI front end for exiftool) should be able to change it. It certainly affects the sharpening that ACR/LR make available to users in the UI (“Detail” tab), but whether it also controls the behind-the-scenes demosaicing (which, as you noted, is possibly combined with some non-user-accessible sharpening) I do not know. Never mind if that is of no interest; Adobe does not provide an easy way to change it anyway (unlike baseline-exposure-related settings, which you can put in a DCP profile).
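    A sketch of that edit with exiftool (hypothetical file name; whether ACR/LR then skip their hidden sharpening remains exactly the open question above):

        import subprocess

        # Zero the DNG's BaselineSharpness tag (exiftool keeps a backup copy
        # named converted.dng_original), then read it back to verify
        subprocess.run(["exiftool", "-BaselineSharpness=0", "converted.dng"], check=True)
        subprocess.run(["exiftool", "-BaselineSharpness", "converted.dng"], check=True)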

  3. Hi Jack,
    Many thanks for your test. IMO it depends on the picture whether AHD or VNG (DCRaw) is better; I try both and take the best-“looking” one. DCRaw is very powerful and much better than ACR or other raw converters. A 16-bit TIFF in ProPhoto RGB works fine with Photoshop or other image software.

  4. Well, in the Mac terminal with DCRaw (version 9.24) you can only use:
    -q 0: dcraw’s default high-speed, low-quality bilinear interpolation
    -q 1: VNG (Variable Number of Gradients) interpolation
    -q 2: PPG (Patterned Pixel Grouping) interpolation
    -q 3: ought to be AHD (Adaptive Homogeneity-Directed), but is really LMMSE
    The others no longer work (David has taken them all away), so you can only use the interpolations above.

    But if you use RawTherapee you can choose between:
    amaze, igv, lmmse, eahd, hphd, vng4, dcb, ahd, fast, mono and none.
    For Sony Bayer sensors “lmmse” works great in RawTherapee. But if your Bayer sensor is from another manufacturer, such as Toshiba, then you have to try another!

    And darktable has 3 options: PPG, amaze and vng4. For Sony Bayer sensors “amaze” works best!

    And lastly I want to mention RPP (http://www.raw-photo-processor.com/RPP/Overview.html). This raw converter is powerful because it can output a “raw-mode-untagged” 32-bit TIFF, quasi raw in TIFF form! But you can only choose VCDMF, AHDMF or Half!

    And only darktable can export 32-bit TIFF! All the others export only 16-bit TIFF.

    And you have to know that not all raw converter software can open a 32-bit TIFF file. Photoshop can, and you will see a difference between the 32-bit TIFF and the raw file opened with ACR!

    For FF digicams (which have 14 bits) I would suggest using darktable or RPP and exporting 32-bit!
    For all other digicams (which have 12 bits) the tonal range does not differ much between 16- and 32-bit TIFF!

    I have made a great many tests over the last 3 months, and whenever I found a better way with one raw converter, another one was updated in the meantime! Oh my god, I could begin the tests again…

    At the moment I can highly recommend darktable or RPP! But these run only on Linux or Mac.
    For Windows users I recommend RawTherapee.

    I want to point out:
    This is really “pixel-peeping”, squeezing the maximum out of a RAW file for maximum tonal range and dynamic range!!
    If someone wants a quick result from his raw files, he can go with Lightroom or other image software and develop with default settings! That is not bad, but not the best!!

  5. Thanks Jack,
    There is no final word here, because so much is in play. Which sensor (Bayer, X-Trans) is used in the digicam? Which manufacturer built it? Which Bayer grid is used? Which pattern?
    Which lens is used – prime or zoom? High quality or lower quality?
    Which algorithm is used in the digicam’s processor? Is the file format 12- or 14-bit, losslessly compressed, compressed, or uncompressed?
    Which demosaicing algorithms do the different raw converters use? Which algorithms for sharpening, denoising and so on?

    And last but not least… with each model renewal the manufacturers try to improve the algorithms in the processor for better image quality and less noise!
    You see, it is impossible to have ONE developing workflow that remains valid for the next few years!!! No, you have to test with raw files from the new model! For example, a NEF file from a D5000 is not the same as a NEF file from a D5500 (using the same lens)!!
    For this example alone you will need two different workflows if you want to get the maximum out of the NEF files!
