This is a crop of a common picture; for now, disregard the grayscale bar added underneath. It was taken with the camera in automatic exposure mode, which selects an average exposure time for the whole picture. Parts of the windows are blown out and their detail is not visible. The same goes for the detail of the statues in the shadow.
Below is the same picture, overexposed. Or: exposed for the statues. The windows are blown out even further, but at least we can clearly see the faces of the statues.
Next is the same picture, underexposed. Again, the automatic exposure has been overridden and the picture has been exposed for the windows. There is even more shadow than in the average exposure (and it was accentuated during RAW development), but most of the details in the windows are visible.
Modern dSLR cameras have a built-in Auto Exposure Bracketing (AEB) function that makes it relatively easy to obtain the three pictures automatically. A tripod and a steady subject help. But if it is so easy, why doesn’t the camera capture all of the details in the shadow and in the highlight right away?
Because the camera’s sensor has a limited dynamic range, which in many cases is smaller than the dynamic range of the scene being depicted. This is particularly true for panoramas.
For the purpose of image processing, dynamic range describes the difference between the smallest and largest discernible quantity of light. Anything darker is hidden in the shadow, and anything lighter is blown out to white.
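Photographers usually express this difference in stops, where each stop is a doubling of the amount of light. A minimal sketch of the arithmetic (the luminance values below are hypothetical, chosen only for illustration):

```python
import math

def dynamic_range_stops(l_min, l_max):
    """Dynamic range in photographic stops: each stop doubles the light."""
    return math.log2(l_max / l_min)

# Hypothetical luminances for a sunlit scene, in cd/m^2:
# deep shadow around 0.05, bright highlights around 3200.
print(dynamic_range_stops(0.05, 3200))  # about 16 stops
```

A typical camera sensor records far fewer stops than that, which is exactly why a single exposure cannot hold both the windows and the statues.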
The depicted scene has a given dynamic range. Any input device such as a camera sensor responds to a dynamic range which often is a subset of what is present in the scene. And any output device such as a display or a printer is capable of reproducing a dynamic range that often is a subset of what is available in the recording. At any given step of the process, the dynamic range of the input is mapped to the dynamic range of the output.
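The capture step can be sketched as a clip followed by a rescale. This is a simplified linear model (real sensors have a nonlinear response near their limits), and the luminance values are hypothetical:

```python
import numpy as np

def capture(scene_luminance, sensor_min, sensor_max):
    """Simulate a sensor recording a subset of the scene's dynamic range."""
    # Light below sensor_min is lost in the shadow; light above
    # sensor_max blows out. Both clip to the sensor's limits.
    clipped = np.clip(scene_luminance, sensor_min, sensor_max)
    # The captured subset is then mapped onto the output range [0, 1].
    return (clipped - sensor_min) / (sensor_max - sensor_min)

# Hypothetical scene luminances, from deep shadow to bright highlight:
scene = np.array([0.001, 0.05, 1.0, 20.0, 500.0])
print(capture(scene, sensor_min=0.05, sensor_max=20.0))
```

Everything at or below the sensor's floor comes out 0 (pure black) and everything at or above its ceiling comes out 1 (pure white), which is the clipping the grayscale bars below the pictures illustrate.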
The grayscale bars under the pictures above approximately map the dynamic range of the sensor in relation to the dynamic range of the scene. The black and white areas are out of range: light in the original scene that falls outside this range is not discernible in the individual exposure.
In this case it took three exposures to record the most relevant parts of the dynamic range in the scene. Sometimes it takes more, as in this example I shot earlier this year. These exposures need to be merged back into a single image that reconstructs the original scene in as much detail as possible, and that single image needs to be mapped to the dynamic range of the output device. There are different techniques to achieve this, which can be summarized into two groups: exposure blending and HDR/tonemapping.
Below is an example of exposure blending, using state-of-the-art exposure fusion. Tom Mertens, Jan Kautz and Frank Van Reeth have devised a mathematical way, based on simple quality measures like saturation and contrast, to mix the best-exposed pixels from every picture and fuse them into a single, visually pleasant and nearly “realistic” result. Andrew Mihal programmed Enfuse based on their algorithm. Enfuse, available for download from this site, can do it for you, out of the box.
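The core idea can be sketched in a few lines of NumPy. This is a deliberately naive reading of the Mertens et al. quality measures (contrast, saturation, and well-exposedness), not Enfuse itself: the real algorithm blends the weighted exposures in a Laplacian pyramid to avoid visible seams, which this per-pixel average omits.

```python
import numpy as np

def fusion_weights(img):
    """Per-pixel quality weights for one exposure (simplified Mertens et al.).
    img: float RGB array in [0, 1], shape (H, W, 3)."""
    gray = img.mean(axis=2)
    # Contrast: magnitude of a simple Laplacian filter response.
    contrast = np.abs(
        -4 * gray
        + np.roll(gray, 1, 0) + np.roll(gray, -1, 0)
        + np.roll(gray, 1, 1) + np.roll(gray, -1, 1)
    )
    # Saturation: standard deviation across the color channels.
    saturation = img.std(axis=2)
    # Well-exposedness: a Gaussian centered on mid-gray, so pixels
    # near pure black or pure white get little weight.
    well_exposed = np.exp(-0.5 * ((gray - 0.5) / 0.2) ** 2)
    return contrast * saturation * well_exposed + 1e-12  # avoid all-zero weights

def fuse(exposures):
    """Blend a bracketed series by its normalized per-pixel weights."""
    weights = np.stack([fusion_weights(e) for e in exposures])
    weights /= weights.sum(axis=0, keepdims=True)
    return (weights[..., None] * np.stack(exposures)).sum(axis=0)
```

Fed the three bracketed shots above, `fuse` would favor the underexposed frame in the windows and the overexposed frame on the statues; Enfuse does the same, just with a multiresolution blend and tunable weights.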
Next time you go out shooting, set your camera to AEB. A little bit of post-production magic may reveal some unexpected detail in your images.