It is always interesting when innovative companies push the state of the art of a single component in their systems, because a lot can be learned from before-and-after comparisons. I was therefore excited when Phase One introduced a Trichromatic version of their Medium Format IQ3 100MP Digital Back last September, because it allows us to isolate the effects of tweaks to their Bayer Color Filter Array, assuming all else stays the same.
Thanks to two virtually identical captures by David Chew at getDPI, and Erik Kaffehr’s intelligent questions at DPR, in the following articles I will explore the effect on linear color of the new Trichromatic CFA (TC) versus the old one on the Standard Back (SB). In the process we will discover that – within the limits of my tests, procedures and understanding – the Standard Back produces apparently more ‘accurate’ color, while the Trichromatic produces better looking matrices, potentially resulting in ‘purer’ signals. Continue reading Phase One IQ3 100MP Trichromatic vs Standard Back Linear Color, Part I→
Now that we know how to create a 3×3 linear matrix to convert white balanced and demosaiced raw data into connection space – and where to obtain the 3×3 linear matrix to then convert it to a standard output color space like sRGB – we can take a closer look at the matrices and apply them to a real world capture chosen for its wide range of chromaticities.
We understand from the previous article that rendering color during raw conversion essentially means mapping raw data represented by RGB triplets into a standard color space via a Profile Connection Space in a two-step process:
The process I will use first white balances and demosaics the raw data, which at that stage we will refer to as $rgb$, followed by converting it to Profile Connection Space XYZ through linear transformation by an unknown ‘Forward Matrix’ (as DNG calls it) of the form

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \begin{bmatrix} r \\ g \\ b \end{bmatrix}$$
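To make the two-step pipeline concrete, here is a minimal numpy sketch under stated assumptions: the forward matrix below is a placeholder (a real one would be estimated from a test-target capture), the XYZ→linear sRGB matrix is the standard published one for D65, and white-point adaptation between the PCS and sRGB is ignored for simplicity.

```python
import numpy as np

# Hypothetical Forward Matrix (white-balanced camera rgb -> PCS XYZ).
# A real one is estimated from a target capture; these numbers are
# placeholders for illustration only.
FORWARD = np.array([[0.60, 0.25, 0.10],
                    [0.25, 0.70, 0.05],
                    [0.00, 0.05, 0.75]])

# Standard linear XYZ (D65) -> sRGB matrix (IEC 61966-2-1).
XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

def rgb_to_linear_srgb(rgb):
    """rgb: HxWx3 white-balanced, demosaiced raw data scaled 0-1.
    Returns linear (pre-gamma) sRGB, ignoring the D50/D65 white-point
    adaptation between the PCS and sRGB for simplicity."""
    xyz = rgb @ FORWARD.T        # step 1: camera rgb -> PCS XYZ
    return xyz @ XYZ_TO_SRGB.T   # step 2: XYZ -> linear sRGB
```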
How do we translate captured image information into a stimulus that will produce the appropriate perception of color? It’s actually not that complicated.
Recall from the introductory article that a photon absorbed by a cone type (L, M or S) in the fovea produces the same stimulus to the brain regardless of its wavelength. Take the example of an observer’s eye focusing on the retina the image of a uniform object with a spectral photon distribution of 1000 photons/nm over the 400 to 720nm wavelength range and no photons outside of it.
Because the system is linear, cones in the foveola will weigh the incoming photons by their relative sensitivity (probability) functions and add the result up to produce a stimulus proportional to the area under the curves. For instance, one cone type will see about 321,000 photons arrive and produce a relative stimulus of about 94,700, the weighted area under its sensitivity curve.
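As a sketch of that weighting, here is the arithmetic in Python. The cone sensitivity curve below is a made-up Gaussian placeholder, not a real cone fundamental (which would come from tabulated data such as the Stockman–Sharpe functions), so its stimulus number will not match the article’s; the point is only the weighted-sum mechanics.

```python
import numpy as np

wl = np.arange(400, 721)            # wavelength, nm
spd = np.full(wl.size, 1000.0)      # 1000 photons/nm, as in the example

# Placeholder cone sensitivity: a Gaussian standing in for a real cone
# fundamental (probability of absorption per incident photon).
sensitivity = np.exp(-0.5 * ((wl - 555) / 40.0) ** 2)

photons_arriving = np.trapz(spd, wl)         # ~320,000 photons over the band
stimulus = np.trapz(spd * sensitivity, wl)   # weighted area under the curve

print(photons_arriving, stimulus)
```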
While perusing Jim Kasson’s excellent Longitudinal Chromatic Aberration tests I was impressed by the quantity and quality of the information the resulting data provides. Longitudinal, or Axial, CA is a form of defocus and as such it cannot be effectively corrected during raw conversion, so having a lens well compensated for it will provide a real and tangible improvement in the sharpness of final images. How much of an improvement?
For the purposes of ‘sharpness’ (spatial resolution) measurement in photography, cameras can be considered shift-invariant, linear systems.
Shift invariant means that the imaging system should respond exactly the same way no matter where light from the scene falls on the sensing medium. We know that in a strict sense this is not true because, for instance, a pixel has a square area so it cannot have an isotropic response by definition. However, when using the slanted edge method of linear spatial resolution measurement we can effectively make it shift invariant by careful preparation of the testing setup: for example, the edge should be slanted only a few degrees off the vertical or horizontal (around 4–6° is typical), enough to sample the edge at many sub-pixel phases but not so much that the projection geometry breaks down. Continue reading Linearity in the Frequency Domain→
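As a quick illustration of what shift invariance buys us, the toy 1-D sketch below (my addition, not part of the article) models the system as convolution with a fixed point spread function and checks that shifting the input simply shifts the output:

```python
import numpy as np

psf = np.array([0.25, 0.5, 0.25])      # a fixed, toy point spread function

def system(scene):
    # Linear, shift-invariant model: output = scene convolved with the PSF.
    return np.convolve(scene, psf, mode='same')

scene = np.zeros(32)
scene[10] = 1.0                        # a point source
shifted = np.roll(scene, 5)            # the same source, moved 5 pixels

# Shift invariance: the response to a shifted input equals the shifted response.
assert np.allclose(system(shifted), np.roll(system(scene), 5))
```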
Dynamic Range (DR) in Photography usually refers to the working tone range, from darkest to brightest, that the imaging system is capable of capturing and/or displaying. It is expressed as a ratio, in stops:

$$DR = \log_2 \left( \frac{\text{brightest usable tone}}{\text{darkest usable tone}} \right) \ \text{stops}$$
It is a key Image Quality metric because photography is all about contrast, and dynamic range limits the range of recordable/displayable tones. Different components in the imaging system have different working dynamic ranges and the system DR is equal to the dynamic range of the weakest performer in the chain.
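For instance, a common engineering definition of a sensor’s DR takes full well capacity over read noise; the sketch below uses that definition (my assumption, not necessarily the article’s) and takes the system DR as the minimum over the chain, as described above. All numbers are hypothetical.

```python
import math

def sensor_dr_stops(full_well_e, read_noise_e):
    # Engineering dynamic range: largest to smallest usable signal, in stops.
    return math.log2(full_well_e / read_noise_e)

components = {
    'sensor':  sensor_dr_stops(full_well_e=80000, read_noise_e=3.0),  # ~14.7 stops
    'display': 10.0,  # hypothetical display DR, in stops
}

# The system is only as good as its weakest link.
system_dr = min(components.values())
print(f"system DR = {system_dr:.1f} stops")
```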
Several sites perform spatial resolution ‘sharpness’ testing of imaging systems for photographers (i.e. ‘lens + digital camera’) and publish results online. You can also measure your own equipment relatively easily to determine how sharp your hardware is. However, comparing results from site to site and to your own can be difficult and/or misleading, starting from the multiplicity of units used: cycles/pixel, line pairs/mm, line widths/picture height, line pairs/image height, cycles/picture height, etc.
This post will address the units involved in spatial resolution measurement using as an example readings from the slanted edge method.
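To make the unit zoo less confusing, here is a small conversion sketch based on the standard relationships (one line pair or cycle equals two line widths; frequencies per mm scale by the pixel pitch). The pitch and height in the example are hypothetical.

```python
def convert(cycles_per_pixel, pitch_um, height_px):
    """Convert a spatial frequency from cycles/pixel to other common units."""
    pitch_mm = pitch_um / 1000.0
    return {
        'cy/px': cycles_per_pixel,
        'lp/mm': cycles_per_pixel / pitch_mm,       # cycles and lp are the same count
        'lp/ph': cycles_per_pixel * height_px,      # line pairs per picture height
        'lw/ph': cycles_per_pixel * height_px * 2,  # line widths: 2 per cycle
    }

# e.g. MTF50 of 0.30 cy/px on a hypothetical 4.2 um pitch, 4000-pixel-high sensor
print(convert(0.30, pitch_um=4.2, height_px=4000))
```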
You have obtained a raw file containing the image of a slanted edge captured with good technique. How do you get the MTF curve of the camera and lens combination that took it? Download and feast your eyes on open source MTF Mapper by Frans van den Bergh. No installation required, simply store it in its own folder.
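For the curious, the core idea behind the slanted edge method can be sketched in a few lines of Python: estimate the edge, build an oversampled edge spread function, differentiate to the line spread function, then FFT to the MTF. This is a bare-bones illustration of the principle, not MTF Mapper’s far more robust implementation.

```python
import numpy as np

def slanted_edge_mtf(roi, oversample=4):
    """roi: 2-D array containing a near-vertical dark-to-light edge."""
    rows = np.arange(roi.shape[0])
    cols = np.arange(roi.shape[1] - 1)
    # 1) per-row edge location: centroid of the horizontal gradient
    grad = np.abs(np.diff(roi, axis=1))
    centers = (grad * cols).sum(axis=1) / grad.sum(axis=1)
    # 2) fit a straight line through the per-row edge locations
    slope, intercept = np.polyfit(rows, centers, 1)
    # 3) distance of every pixel from the edge, binned into an
    #    oversampled edge spread function (ESF)
    dist = np.arange(roi.shape[1])[None, :] - (slope * rows + intercept)[:, None]
    bins = np.round(dist * oversample).astype(int)
    bins -= bins.min()
    sums = np.bincount(bins.ravel(), weights=roi.ravel())
    counts = np.bincount(bins.ravel())
    esf = sums / np.maximum(counts, 1)
    # 4) differentiate to the line spread function (LSF), window, FFT
    lsf = np.diff(esf) * np.hamming(esf.size - 1)
    mtf = np.abs(np.fft.rfft(lsf))
    freq = np.fft.rfftfreq(lsf.size, d=1.0 / oversample)  # cycles/pixel
    return freq, mtf / mtf[0]
```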
Now that we know how to determine how many photons impinge on a sensor we can estimate its Effective Quantum Efficiency, that is the efficiency with which it turns such a photon flux into photoelectrons ($e^-$), which will then be converted to raw data to be stored in the capture’s raw file:

$$\text{eQE} = \frac{\text{photoelectrons produced}}{\text{photons incident on the sensing plane}}$$
I call it ‘effective’ because it represents the probability that a photon arriving on the sensing plane from the scene will be converted to a photoelectron by a typical digital camera sensor. It therefore includes the effect of microlenses, fill factor, CFA and other filters on top of silicon in the pixel. It is usually expressed as a percentage. For instance if an average of 100 photons per pixel within the sensor’s passband were incident on a uniformly lit spot of the sensor and on average each pixel produced a signal of 20 photoelectrons we would say that the Effective Quantum Efficiency of the sensor is 20%. Clearly the higher the eQE the better for Image Quality parameters such as SNR and DR. Continue reading What is the Effective Quantum Efficiency of my Sensor?→
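In practice the photoelectron count is usually recovered from the raw values via the sensor gain (electrons = (DN − black level) × gain in e-/DN); the sketch below shows the arithmetic, with the gain and black level as hypothetical example numbers chosen to reproduce the 20% figure above.

```python
def effective_qe(mean_raw_dn, black_level_dn, gain_e_per_dn, mean_photons):
    """eQE = photoelectrons produced / photons incident, per pixel."""
    electrons = (mean_raw_dn - black_level_dn) * gain_e_per_dn
    return electrons / mean_photons

# The article's example: 100 incident photons producing ~20 e- -> 20%.
# Hypothetical raw numbers chosen so that (550 - 512) * 0.53 ≈ 20 e-.
print(effective_qe(mean_raw_dn=550, black_level_dn=512,
                   gain_e_per_dn=0.53, mean_photons=100))  # ≈ 0.20
```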
The in-camera ISO dial is a ballpark milkshake of an indicator to help choose parameters that will result in a ‘good’ perceived picture. Key ingredients to obtain a ‘good’ perceived picture are 1) ‘good’ Exposure and 2) ‘good’ in-camera or in-computer processing. It’s easier to think about them as independent processes and that comes naturally to you because you shoot raw in manual mode and you like to PP, right? Continue reading Exposure and ISO→