For the purpose of image processing, Dynamic Range describes the difference between the smallest and largest discernible quantity of light. Anything that is darker is hidden in the shadow and anything that is lighter is blown out white – out of range.
Bernhard Vogl recently defined High Dynamic Range (HDR) in relation to Low Dynamic Range (LDR), where LDR is defined as “the range, your camera can capture with one shot”.
Our inherently flawed capture and display methods (as Mark Banas calls them in a private email exchange) improve over time. Consequently, Bernhard’s definitions of LDR and HDR are a moving target.
Dynamic Range (DR) has a fixed relationship with the Contrast Ratio (CR), the ratio of the luminance of the brightest white to the luminance of the darkest black. They are both derived from the two extreme light quantities in the scene. They both define the boundaries of the range, but they still don’t say anything about what’s inside the range: its quality.
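The relationship between DR and CR can be made concrete: photographers count dynamic range in stops, and each stop is a doubling of light, so the dynamic range in stops is just the base-2 logarithm of the contrast ratio. A minimal sketch (the luminance values are arbitrary illustrations):

```python
import math

def dynamic_range_stops(l_white, l_black):
    """Dynamic range in stops (doublings of light) between two luminances."""
    return math.log2(l_white / l_black)

# A 1000:1 contrast ratio corresponds to roughly 10 stops of dynamic range.
print(dynamic_range_stops(1000.0, 1.0))
```

Note that the two numbers say nothing about how many distinguishable levels lie between the extremes, which is the point made above.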
The stripes below represent what’s inside the range for the same, arbitrarily chosen DR, between the same white and the same black. They also have the same CR from white to black.
The last stripe, at 8 bits, most likely looks like a continuous gradient from white to black. A few modern displays have higher DR nowadays. Even if you are lucky enough to view this page on one of those 50,000 USD displays, very few eyes are trained to discern such small contrasts. At fewer bits, banding occurs when the contrast between two adjacent steps on the scale becomes apparent to the average human eye.
What changes between the different stripes are the gradient steps, a characteristic of the primitive digital workflow, where the number of steps is discrete and depends on the number of bits. At 1 bit there are 2 steps and at n bits there are 2^n steps. The quality of the DR is good enough when the contrast between two adjacent steps is small enough not to be discernible as banding.
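The step count and the quantization behind the stripes can be sketched in a few lines. This is an illustration, not how any particular display pipeline works:

```python
def gradient_steps(bits):
    """Number of discrete steps available at a given bit depth: 2**bits."""
    return 2 ** bits

def quantize(value, bits):
    """Snap a continuous value in [0, 1] to the nearest of 2**bits levels."""
    levels = 2 ** bits - 1
    return round(value * levels) / levels

print(gradient_steps(1))   # 2 steps at 1 bit
print(gradient_steps(8))   # 256 steps at 8 bits
# At low bit depths the quantized value lands far from the original,
# and the large jumps between adjacent levels are what we see as banding.
print(quantize(0.3, 3))
```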
So what is high dynamic range? It is a push beyond LDR in two dimensions. First and foremost, to cover the scene’s complete Natural Dynamic Range and avoid clipping out-of-range luminance values. And then it is about the improvement of the quality inside the range.
Bernhard goes on to say that “you will certainly use [LDR] most of your times”. Similarly, Mark stated in the mentioned private exchange that before resorting to techniques to expand the captured dynamic range, proper exposure should be used to make good use of the dynamic range available from the capturing device.
That’s the starting point indeed. Beyond that, there are cases for extra exposures, which can be grouped into two categories:
1. Increased Dynamic Range (IDR) is what photographers like me do when they add a couple of exposures at the capture stage, usually with the in-camera Auto Exposure Bracket (AEB) function, to improve the detail in the image.
There is a strong case for using this technique also with scenes that would fit in the default dynamic range of the capture device. The capture device’s response is not constant across the whole range. The extra exposures can improve the signal-to-noise ratio, particularly at the darker end:
And they can recover more detail in the highlights:
Averaging pixels across an exposure stack reduces sensor noise: the uncorrelated part of the noise averages down roughly with the square root of the number of frames, though it never vanishes entirely.
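A minimal simulation of the effect, using made-up numbers for the true pixel value and the noise level, assuming the noise is uncorrelated between frames:

```python
import random

random.seed(0)
TRUE_VALUE = 0.5   # hypothetical true pixel luminance
NOISE = 0.1        # standard deviation of the simulated sensor noise
N = 16             # number of frames in the exposure stack

# Simulate N noisy readings of the same pixel.
readings = [TRUE_VALUE + random.gauss(0, NOISE) for _ in range(N)]

# The stack average lands much closer to the true value than a single
# frame does; the residual error shrinks roughly by sqrt(N).
average = sum(readings) / N
print(abs(average - TRUE_VALUE))
```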
The technique is not always worthwhile, since it costs extra effort and also has drawbacks, such as ghosting when there is movement between the exposures.
2. “True” High Dynamic Range (HDR) is what graphic artists do when they make computer-generated images (CGI) of virtual worlds using ray tracing software. When they use photographic input as image-based lighting (IBL), they try to capture the whole Natural Dynamic Range. This often means capturing more exposures than the AEB function of most prosumer and many professional cameras allows, often ten or more, either manually, taking care not to move the camera when changing the shutter speed between exposures, or with computer-controlled cameras via USB or serial cable.
One could argue that at least theoretically there is an absolute natural dynamic range, marked at one end by the complete absence of photons and at the other end by the physically highest possible concentration of photons in space/time. But what is that highest concentration? Nuclear fusion, like in the Sun? And how does the distance of the capturing sensor from the source of light affect the measured luminosity?
Both IDR and HDR processing require extra steps along the way, and tone mapping at the end:
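One common choice for that final step is a global operator such as Reinhard’s, which compresses an unbounded radiance range into displayable values instead of clipping it. A minimal sketch:

```python
def reinhard(luminance):
    """Global Reinhard operator: maps [0, inf) smoothly into [0, 1)."""
    return luminance / (1.0 + luminance)

# Out-of-range HDR values are compressed rather than blown out white:
for lum in (0.25, 1.0, 4.0, 100.0):
    print(lum, "->", round(reinhard(lum), 3))
```

Bright values approach but never reach pure white, which is exactly the behavior that lets the whole captured range survive onto an LDR display.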