
Releases, Releases.

Somehow the end of the Gregorian calendar year seems to be a popular time for software releases. There have been a number of interesting releases in the past two weeks in the area of Free photography in general and panorama making in particular. Too many to mention them all.

The one that caught my attention is PTStitcherNG 3.0b by Prof. Dr. Helmut Dersch. Helmut has been at the forefront of panorama making software for more than a decade now, and his new tool optimizes for speed and memory footprint: a 1.5 gigapixel panorama will stitch in less than 100 MB of RAM. This is very convenient for me: since I upgraded my photo gear, the resulting images are too big to stitch on my ailing notebook, even maxed out on RAM.

Unfortunately, for the time being it is free but not Free. Binaries are freely available for Windows, OSX, and OpenSuse 11.1 (x86_64). Users of Linux distributions other than OpenSuse will need to run the Windows binary through Wine and take a 30% performance hit. The currently distributed binaries only work on SSE3 processors. Helmut says that a source code release is pending the scientific publication of his results. For the Pentium M / Ubuntu 9.04 notebook whose life I'm trying to extend beyond its shelf date, I'll have to wait for SSE2 binaries or for the source code. I worked around it by rsyncing the source images to my Kubuntu 9.10 office workstation and running PTStitcherNG there, remotely.
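
For the curious, here is a minimal sketch of that remote-stitching workaround in Python. The hostname, paths, and the exact PTStitcherNG invocation are assumptions for illustration; adapt them to your own setup.

    import subprocess

    REMOTE = "user@workstation"        # hypothetical office workstation
    REMOTE_DIR = "stitch/job01/"       # hypothetical working directory on the workstation
    LOCAL_IMAGES = "photos/job01/"     # source images on the notebook

    def run(cmd):
        print(" ".join(cmd))
        subprocess.check_call(cmd)

    # 1. Copy the source images (and the PTStitcherNG script) to the workstation.
    run(["rsync", "-avz", LOCAL_IMAGES, REMOTE + ":" + REMOTE_DIR])

    # 2. Stitch remotely; the PTStitcherNG flags shown here are placeholders.
    run(["ssh", REMOTE, "cd " + REMOTE_DIR + " && PTStitcherNG -o pano.tif project.txt"])

    # 3. Fetch the finished panorama back to the notebook.
    run(["rsync", "-avz", REMOTE + ":" + REMOTE_DIR + "pano.tif", "."])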

Another notable release is Enblend-Enfuse 4.0, which has been overhauled by Christoph Spiel and has spurred releases of newer versions of related GUIs: ImageFuser 0.7.4 by Harry van der Wolf and EnfuseGUI 2.1 by Ingemar Bergmark.

Last but not least, Bruno released Hugin 2009.4.0, completing the work I left unfinished, and started the release cycle for 2010.0.0. James topped the holiday present by merging into trunk the much-awaited new layout branch, with his Google Summer of Code project and Pablo's XYZ parameters for image position.

Google Summer of Code 2009 Wrap Up

Another summer is over. The maples in my neighborhood are changing color and we have just finished our third Google Summer of Code (GSoC) participation, the best ever. Like the previous two editions, GSoC has been a catalyst beyond the five project slots generously allocated to us by Google. GSoC has been more than money, more than an opportunity to attract fresh contributors and get things done. It has been first and foremost a transformational opportunity, turning an inherently insular organization into an open and outward-looking one.

Allocation

Google generously allocated five slots to us, and this year we decided to allocate one of them to a smaller Open Source project. It seems to me that there is an ideal mentoring organization size (or at least a minimum size) for GSoC participation. Smaller mentoring organizations require a similar amount of administrative effort as larger ones. Sharing is a way of enabling smaller projects to participate while limiting the burden on Google's Open Source Programs Office.

There are many small Open Source projects in our related field, and in talks I had with them most expressed interest. I decided to work with Professor Sébastien Roy’s Vision 3D Lab at Université de Montréal and his LightTwist project for a few reasons:

  • I needed to make a case to our community that sharing is in our interest. Indeed there was a very short discussion about LightTwist’s relevance to the Hugin community.
  • Meetings and contacts between members of the Hugin community and of the Vision 3D Lab go back to Libre Graphics Meeting (LGM) 2007. We share a similar culture / attitude.
  • Last but not least, I was looking for more academic contacts to promote GSoC recruiting efforts.

The experience worked better than expected. We even organized a joint display of Hugin artwork projected with LightTwist at LGM 2009. I warmly recommend that other organizations do the same: next year, give a lift to a smaller project!

Recruiting

This was my biggest disappointment this year. I put a lot of effort in upfront, contacting professors at relevant faculties in my region and beyond. The turnout, at least for us, was zero. Recruiting is indeed our bottleneck. This year again we had more mentors than allocated students, so we rotated mentoring responsibilities. Our recruiting needs improvement.

Students

One of the biggest questions was whether to take on new students or returning ones. I am thankful to my fellow mentors for changing my mind on this one, and I hope our faithful returning students will forgive me. Initially I had even convinced one of last year's students to participate as a mentor; then I asked him to submit a project proposal instead. Thank you, Tim, for your flexibility! Student life is, IMO, the most beautiful period of one's lifetime; enjoy it while it lasts!

Selection

Three months is a short time. To make the most of it, candidates must be up to speed from day one. Following the lead of other mentoring organizations, we introduced qualification tasks. Despite some controversy amongst a few vocal students, this has worked well for us. Our goal was to determine basic proficiency with the tools. Other organizations, notably x264, have put more effort into their qualifying tasks so that they also determine whether the candidate has the skill to complete the project. It may be more work, but it is definitely best practice IMO, and we should consider putting in this upfront effort.

Meetings

Remote collaboration works better after a physical handshake. I interviewed all serious candidates by phone and tried to organize as many face-to-face meetings with community members as possible. More by lucky circumstance than by design, we managed to meet all students this year: two at LGM in Canada and three at a dedicated meeting in the UK. These meetings have been very productive. Even face-to-face meetings did not prevent us from failing one student, but they are useful and we should try to bridge the physical distance in the future as well.

Projects

Our two returning students did not have to get up to speed with the code base and could take on more ambitious projects. And indeed they did, beyond my dreams and expectations.

James Legg, mentored by Bruno Postle, refactored code deep in the core to implement his new layout model. The model works; minor quirks in specific situations will be fixed. This project corrects a design weakness, an unintended legacy of the pre-HDR era.

Tim Hugent, mentored by Tom Sharpless, produced an automatic lens calibration tool. The calibrated lens model still needs a lot of validation work through practical use in the field. The potential benefits are improved precision and, beyond panorama making (and Hugin), better images from less-than-perfect lenses. This project was the first step in an exciting new direction.

Lukáš Jirkovský, mentored by Andrew Mihal, brought deghosting (the removal of artifacts caused by subject movement across multiple exposures) to the next level and made it available to enfuse, the preferred choice for photo-realistic exposure bracketing. This project, too, will continue after the end of GSoC.

Yulia Kotseruba, mentored by Sébastien Roy, added new blending algorithms to LightTwist, making it possible to extend the projection surface vertically and opening up new fields of use such as dome projection or, given enough wall space, a low-cost, high-resolution IMAX replacement. Work continues and will be published in a scientific journal.

Last but not least, a word about Dev Ghosh, mentored by Daniel German. Unfortunately we had to fail Dev because he had not achieved the agreed (and revised) milestone. To his credit, he got the mathematics right: his MATLAB implementation works. Chances are that the code will make it into our code base sooner or later.

A big THANK YOU to all the students for their effort and dedication!

Update: The only reason to fail Dev was quantity. The quality of his work was excellent. Daniel has implemented it in Libpano and it may become the first chunk of 2009 code to be officially released, with the upcoming libpano 2.9.15.

Integration

We found the right recipe for this last year and we are repeating it: during GSoC, students focus on implementation; they integrate their code after GSoC. As an encouragement, each will be sponsored with a panoramic head from Nodal Ninja, an industry leader that sees the benefits of supporting Open Source software that drives sales of its hardware. At the time of writing, the first project has already been integrated into trunk. Thank you, Bill Bailey, marketing director for Fanotec.

Community Development

On this year's mentoring organization application form there was an interesting question about the "criteria used to select community members". This led to some introspection, following which I drafted this community charter. Thank you, Google Open Source Programs Office, for giving us food for thought.

Speed

GSoC projects have accelerated the pace of development. Our sequential development process, with trunk freezes and long release cycles, was no longer adequate. I've taken it upon myself to re-engineer it, and at the time of writing we're getting ready to release for the first time under the new, parallel process. I hope we can integrate and release all the code developed for GSoC 2009 before Christmas, and that our project team will increase its capacity to absorb and release code changes.

Generations

After graduating from university I was offered a job at a company that I still admire (I accepted another job because it gave me the opportunity to expatriate). That company had a three-year cycle: the first year you are hired as a junior and trained on the job. If you do it well, the second year your boss moves on and you put your imprint on the job, adding to it what you have learned. In the third year, you train a junior to replace you, and if everything went well you are promoted to the next job. This helped the company stay nimble and up to date. We need a similar attitude in our project. People come and go, and we need to groom the next generation of committers before the current generation fades out, and to document our processes and know-how for the benefit of the next generation. GSoC has been a great recruiting ground, and our next project leader may be amongst this year's GSoC students.

Conclusion

Before our GSoC participations, joining Hugin meant a steep learning curve. Still today we get consistent feedback from GSoC candidates and students that getting up to speed with the code base is the biggest hurdle they face. Things have improved: in 2007 with an SDK for Windows developers; in 2008 with the equivalent for OSX developers and with build documentation for different platforms. This year it was the release process, and there is more on the plate for the years to come, much of it thanks to GSoC. Thank you, Google!

Auto-Exposure

The general recommendation (which still stands) is to keep exposure and white balance fixed across shots when shooting for a panorama. This is not always possible, e.g. with cell phone cameras. Hugin has had photometric adjustments and other functionality to deal with exposure and white balance variations for a while. Now, in the upcoming 2009.2.0 release, this is all accessible through a single tick on the Stitcher tab: Blended and fused panorama.

[Screenshot: the "Blended and fused panorama" option on the Stitcher tab]

Bruno Postle has published a tutorial about stitching auto-exposed panoramas.
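
For those who prefer the command line, the following Python sketch roughly approximates what that option does, as I understand it: remap with nona, blend each exposure layer with enblend, then fuse the blended layers with enfuse. The file names and the grouping of remapped images into exposure layers are assumptions for illustration.

    import subprocess

    PROJECT = "project.pto"  # hypothetical hugin project file

    # Remap every source image into panorama space (one TIFF per input image).
    subprocess.check_call(["nona", "-m", "TIFF_m", "-o", "remapped_", PROJECT])

    # Hypothetical grouping: two positions, bracketed at three exposures each.
    layers = {
        "under":  ["remapped_0000.tif", "remapped_0003.tif"],
        "normal": ["remapped_0001.tif", "remapped_0004.tif"],
        "over":   ["remapped_0002.tif", "remapped_0005.tif"],
    }

    # Blend each exposure layer into a seamless panorama of its own.
    for name, files in layers.items():
        subprocess.check_call(["enblend", "-o", "layer_%s.tif" % name] + files)

    # Fuse the blended exposure layers into the final image.
    subprocess.check_call(["enfuse", "-o", "blended_and_fused.tif"]
                          + ["layer_%s.tif" % name for name in layers])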

Hugin-0.8.0 for Gentoo Linux

Gentoo Linux is a "special flavor of Linux that can be automatically optimized and customized for just about any application or need. Extreme performance, configurability and a top-notch user and developer community are all hallmarks of the Gentoo experience". At the core of Gentoo is Portage, a software distribution system. Unlike most Linux distributions, Portage does not distribute binaries. Instead, like the FreeBSD ports collection, it distributes scripts that automatically get the appropriate source code and build it, customized for the user's specific needs.

Thomas Pani has updated both the release and the snapshots available to Gentoo users via Portage. Gentoo is the first Linux distribution known to me that gives all of its users access to Hugin-0.8.0. That's the equivalent of a binary distribution on other platforms. If you want to stay on top of Hugin and related tools, Gentoo is an excellent choice.

Give a Man a Fish

So the Hugin project team released 0.8.0 three weeks ago. What are the expectations, and what does a release actually mean?

Some users, particularly in the Windows world, expect the release of a shrink-wrapped product. They expect to click, download, install, use. And of course everything for free.

Easy? Not so quick. What was actually released is nothing more than a tarball, an archive of the source code in a state known to build and work on the major supported platforms. So it could be used to produce executable binaries.

How are those binaries produced? What do they depend upon? What is to be produced? Who does it? How are they distributed? What is missing?

How

The user community has prepared platform-specific instructions. They may not be perfect but do the job.

Dependencies

There are some dependencies that need to be fulfilled for the recipes to work. Besides the obvious ones of enough disk space, a supported operating system, and installation of the build tools, Hugin 0.8.0 depends on other software packages, both at build time and at run time.

The Ubuntu package description clearly lists the dependencies. They are roughly the same for all platforms. They include enblend, enfuse, and libpano.

What

On most Linux based systems a package manager takes care of the dependencies. This means that the packager only has to build Hugin-0.8.0 itself. The package manager will download and install the appropriate dependencies.

On OSX and Windows there is no package manager. The Hugin-0.8.0 binaries alone are useless. The packager has to do extra work.

Who

Anybody with access to a computer can act as a packager: use the instructions and the tarball to produce executable binaries and do whatever they want with them. It's free.

Distribution

Anybody can put up the binaries for download anywhere on the internet. In the download section of this blog there are links. On top of that there are two distribution channels with special meaning: the package manager and the official Hugin download page.

The package manager is the responsibility of the Linux distributors. They carefully test and approve each package that goes into their distribution. And they don't backport often. For example, Ubuntu still supports its 8.04 LTS distribution until April 2011 for desktops and April 2013 for servers, as well as 8.10, 9.04 and the upcoming 9.10. As of today, the Hugin binary distributed by the package manager for Ubuntu 8.04 LTS is Hugin-0.7.0_beta4. Users of Ubuntu 8.10 are still served Hugin-0.7.0_beta5. Users of the most recent Ubuntu, 9.04, have access to Hugin-0.7.0 binaries. And for the upcoming 9.10, Ubuntu's plan seems to be to still ship Hugin-0.7.0. We hope this will change, but it is not up to us.

Dependencies Revisited

Specifically, there are three dependencies that currently cause pain. In order of importance:

  1. Enblend and enfuse are mandatory. Without them, Hugin-0.8.0 does not work. The current enblend and enfuse packages have serious problems, causing them to crash. They are a regression compared to previous CVS snapshots, in particular to those versions delivered with the old Windows SDK and with the Hugin-0.7.0 installer for Windows. There are fixes in the staging branch of enblend-enfuse, and Harry had been using that branch to build the OSX Hugin-0.8.0 bundle until his Mac died.
    Unfortunately the staging branch does not build on Windows yet. I researched the problem and fixed part of it (accepted as revision 347 by Christoph Spiel on August 3). Thomas Modes had a look at it before going on holiday and added some more pieces to the solution. But it still does not build, because of Microsoft's Visual C++ quirks.
  2. Libpano is mandatory. Until recently libpano had a minor but annoying problem: it reverted the language of the user interface to English. Daniel M. German and Thomas Modes solved it in the repository.
  3. A control point generator is critical. Until recently the autopano-SIFT-C package had serious problems, causing memory leaks and crashes, failing where the previous version bundled with the 0.7.0 OSX bundle and Windows installer succeeded.

On systems with package managers (i.e. Linux) the packager can simply publish Hugin-0.8.0 and rely on the package manager to solve the issues listed above. On systems without a package manager, the packager must add them to the bundle/installer or to the SDK used to build it.

Distribution Revisited

Distributing, as an official release, a binary that is known to have a regression over previous versions is bad practice. It not only affects the project's reputation, it ruins a working system for those users who overwrite the working version with the newer one.

The OSX Hugin users community fixed the problem and an official OSX binary is available from the Hugin site. There is no solution yet in sight for Windows, so currently Windows users are directed to the 0.7.0 installer. Maybe we should direct them to the tarball instead?

Conclusion

Give a man a fish, and you feed him for a day. Teach a man to fish, and you feed him for a lifetime. This works with most of our users on most supported platforms.

Unfortunately for some Windows users it does not seem to work. Give them free fish and they will come for more. Give them the freedom to feed themselves and they will ignore it.

Revisiting Old Places

I'm currently at my parents' in southern Switzerland. Usually this is a wonderful place with plenty of sunshine (there is an institute for solar research just around the corner). For the past three days it has been under torrential rain (more than 150 liters per square meter in 12 hours). The rain has not stopped Fulvio Senore and Alessandro Ugazio from giving us a warm welcome during a train stop in Domodossola, Italy (a restaurant panorama is in the making; the RAW files have been converted with RAWstudio; Hugin has stitched them; I'm struggling with the masks and layers in GIMP). My parents are enjoying their grandchild.

I had plans to meet old school friends and to take some pictures in old and familiar places. No pictures under the rain. Instead, I revisited a different kind of old and familiar place. It has been about a year since I last looked back at the effort I initiated to document the Hugin build process. My hope for that page (and for the build and release process) was that it would take care of itself over time. While there is now a thriving community of builders and testers, some parts of that effort have not materialized as I hoped. During the last LGM I discussed some of the lessons learned with Pablo and we'll start implementing them after the 0.8.0 release. In the meantime, I started updating the page.

That page, with the pages linked from it, represents an important step to me: the stepping stone from user to contributor. When I joined Hugin, I was just a user. Actually, I was a pain-in-the-neck user, asking for stuff all the time. I wish I could have contributed stuff, but I lacked the necessary knowledge and the learning curve looked dauntingly steep. So I contributed my chutzpah and organized our first participation in the Google Summer of Code in 2007. One piece of feedback we got from the first crop of GSoC students was that the learning curve is dauntingly steep. So I pushed and pulled around the community, asking those with the knowledge to document it into that bare structure I had started. With the help of many community members, and support from the core developers, we put together documentation self-explanatory enough to get more users over that stepping stone. It did not take long until it was picked up, and we now have a thriving community of builders and testers. Moreover, the instructions helped new developers (such as the 2008 and 2009 Google Summer of Code students) get over the learning curve and become productive faster.

The build process is now self-sufficient. When dependencies change or features are added, the feedback loop between developers and builders is fast and efficient. The documentation still needs updating now and then. I look forward to the 0.8.0 release so that we can move on to improve the release cycle (and the closely related debugging cycle) along similar lines. But first, I look forward to the weather clearing, and to spending some quality time in the place of my youth.

LGM Day 3, Day 4, Follow Up

The days went by so quickly, so here is a summary.

Day three was intense. The presentations were very interesting, particularly those of Andrew Mihal about Enblend-Enfuse and about GPU stitching. In the evening we had the conference supper, a very pleasant social event at which I learned about Øyvind's Now By Then installation and got a chance to purchase one of the last "Architecture Fivers" that Stani brought along for his talk. He also patiently gave me an insight into the thinking patterns of curators, lifting my morale from the previous day.

On day four I had a near-death experience. Since Tom Sharpless could not attend, I picked up his slides and hosted the talk. In the morning I prepared my notebook in dual display mode, so that on one display I could run the demonstrations of Panini, MathMap and Hugin, while on the other display I had my notes, which I kept editing until shortly before my talk. Without saving. Disaster struck when I connected the notebook to the projector: my notebook's native resolution is 1400×1050, too much for the projector to handle. I had to restart X, and lost all presets and notes! I froze on the spot. I had to cut a few gimmicks, such as recording the talk with the catadioptric lens or shooting a stitched panorama during the talk (I made the move with the camera around the tripod, but had no available brain cycles to even think of setting the exposure or pressing the real button). I felt like a zombie and was disappointed at my performance, which I felt was terrible, although a look at its recording comforted me: it was not that bad after all. Even if I forgot to say half the things I wanted to say, the live demo and the interaction with the public were not that catastrophic.

The good news from day four was that Sébastien and Vincent debugged the flat display in extremis, so that people walking out for the lunch break passed by and could admire Guillaume's stunning Boulevard Bancel (we'll gladly admit that once this was on screen and working, we pulled the cables on the slide show so that nothing could go wrong. A proper slide show next time).

Since the cafeteria was closed on Sunday, we had to order pizzas for lunch, which was great as it inspired more exchanges before the last presentations. In the end, I even got the honor of shooting the official group picture, and of offering some of the teams present a panorama inside a panorama in the Cyclorama, while our team helped fold up the exhibit canvases.

After protracted goodbyes, I drove eastwards with Alexandre and Pablo. Alexandre only joined us for a tour of Québec City. Pablo continued with us to Boréalie before I drove him back to Montréal for his flight on Wednesday. The rest of the week, and the weekend, I had to catch up with pent-up business. The weekend in particular was difficult, with a few difficulties upgrading servers remotely from FreeBSD 6.3 to FreeBSD 7.2 (the worst thing is that all manipulations worked well on the guinea-pig server in the office, which is pretty much an exact mirror of the servers to be upgraded).

I still made time on Sunday evening for another quick hop to Montréal, and I am happy I did. Joergen Geerds was visiting for the weekend with his girlfriend. Interesting people.


Exposure Stacks

Stacking…

An exposure stack is a series of completely overlapping images at different exposures to capture a wider dynamic range beyond the limit of the sampling sensor.

Ideally they are fully overlapping (e.g. shot from a tripod) and the subject is static. For less than ideal conditions, hugin’s align_image_stack can correct for small perspective changes (e.g. hand held shots) and deghosting software such as the experimental hugin_hdrmerge from Jing Jin’s Google Summer of Code 2007 project can partially correct for scene movement.

There are a number of ways to merge a stack into a single image:

  • Exposure Blending: many photographers load the stacked images into layers and use masks to manually reveal details from the appropriate layer, in Photoshop, Gimp, or any image editor that supports layers. This is often used for manual deghosting as well.
  • Exposure Fusion: for almost a year now photographers have had access to completely automated exposure fusion with Andrew Mihal's Enfuse.
  • HDR/Tonemapping with tools like Qtpfsgui or Photomatix has been around for some years. While merging an HDR image under ideal conditions is straightforward, the tone mapping step can be computationally intensive and requires human judgment/intervention, which makes the process more time consuming than exposure fusion.

… and Stitching…

Stitching is the two-step process of aligning partially overlapping images in space and blending them into a single, bigger picture.

Combined!

The combination of stitching and stacking is now done inside hugin automatically. Select the desired type of output (enfused panorama for exposure fusion, or HDR panorama) and let the magic happen. Sometimes the artist wants more control over the process, e.g. to mask out which of two overlapping images will show in the stitched panorama.

There are basically two ways to manually control the rendering of a stacked panorama: stack and stitch, or stitch and stack. The result is nearly the same:

Stitching and Stacking…

In the past, the general recommendation was to stitch first. This was before Pablo d'Angelo implemented photometric adjustment in hugin and before Andrew Mihal gave us, with enfuse, an easy exposure blending tool.

The rationale:

  • keep the color balance constant
  • optimize alignment
  • set the stage for manual deghosting across blending seams

Keeping the color balance constant is no longer an issue, thanks to hugin's photometric adjustment and end-to-end HDR stitching. Enfuse is not affected anyway, as its choice of pixel weights is independent of color balance. And for optimal alignment, in most cases the results of align_image_stack are better than the alternatives. Masking / deghosting can indeed be easier on a stitched panorama.

Stitching first presents the artist with a few pitfalls / disadvantages:

  • The stitched layers must be aligned. The classic way to achieve this is to stitch a single layer and then use it as a template to stitch the rest.
  • Blending seams within each layer may be different, resulting in visible artefacts.
  • A stitching before stacking process is generally slower than a stacking before stitching process.
  • Not all tools can handle seams in the stack: specifically, enfuse and enblend do not handle nadir and zenith seams well (and the top and bottom edges of an equirectangular image are just that). Qtpfsgui does not handle the 360° seam well (the left and right edges of a 360° image).

Or Stacking and Stitching?

So we can stack and stitch. Running the images through the stacking process can be automated, for example with Erik Krause's droplets (Windows only) that pass an entire folder or a selection of images through hugin_align_image_stack and enfuse, or with hugin_hdrmerge if we want to stack HDR.
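
For illustration, here is a minimal Python sketch of that kind of batch automation, calling align_image_stack and enfuse per bracket. The folder layout, file names and the bracket size of three are assumptions; adapt them to your own shooting pattern.

    import glob
    import subprocess

    BRACKET = 3  # assumed: three exposures per stack
    images = sorted(glob.glob("bracket_*.tif"))  # assumed naming of the source images

    for i in range(0, len(images), BRACKET):
        stack = images[i:i + BRACKET]
        n = i // BRACKET
        prefix = "aligned_%02d_" % n

        # Align the (possibly hand-held) exposures of this stack.
        subprocess.check_call(["align_image_stack", "-a", prefix] + stack)

        # Fuse the aligned exposures into a single image, ready for stitching.
        aligned = sorted(glob.glob(prefix + "*.tif"))
        subprocess.check_call(["enfuse", "-o", "stack_%02d.tif" % n] + aligned)

    # The resulting stack_NN.tif images can then be loaded into hugin and stitched.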

In many cases, stacking before stitching will yield acceptable results, even if the smaller image surface allows for fewer levels while blending / fusing. It will surely yield a shorter processing time.

Conclusion

As a VR artist, I have a place for both processes in my toolbox. I use stacking and stitching when I know I want simple, photorealistic results: I stack the images with enfuse (align_image_stack optional) before stitching. When I want to play with the tone mapping, I either stitch first and then compose to HDR, or I let Hugin compose the HDR right from the input images in a single step. Deghosting and masking can be applied to the input images, to the stacks, and to the stitches, as needed.

Contribution

It is always great to see a Google Summer of Code student blend well into the community. Fahim Mannan is such a student. Last spring, after he was selected for GSoC, I met him in Montréal, Canada, where he is currently enrolled in a Master's in Computer Science at McGill University. His areas of interest are Computer Vision and Robotics.

Fahim is naturally reserved, but get him started talking about Computer Vision and you’ll see the sparks of passion in his eyes. You’ll hear a motivated and knowledgeable student who knows where he is going.

The Google Summer of Code is his first serious experience contributing to Open Source, and it is turning out to be a good one. Fahim's summer project adds masking capabilities to Hugin's GUI. He is quite independent and self-motivated, and can count on expert mentorship from Daniel M. German, a former libpano maintainer and regular community contributor. Fahim posts his progress reports on his project's blog.

Even with student life and his own GSoC project, Fahim occasionally finds the time to chime in and help out in the community when his expert skills are needed.

Recently, a change introduced in the enblend-enfuse code disrupted the building process in Windows. Fahim was quick to provide a working solution.

The above panorama of our meeting is enfused and has a little bit of ghosting. Unfortunately the deghosting code that was written for hugin during last year’s Summer of Code is only applied to HDR stacks. When will it be ported as an option to enfuse? And when will we see code to extract multiple exposures from an HDR file and enfuse them together?

Dynamic Range

This is a crop of a common picture. For now, disregard the grayscale bar that was added underneath. It was taken with the camera on automatic exposure mode. The automatic exposure selects an average exposure time for the whole picture. Parts of the windows are blown out and their detail is not visible. The same goes for the detail of the statues in the shadow.

Below is the same picture, overexposed. Or: exposed for the statues. The windows are blown out even further, but at least we can clearly see the faces of the statues.

Next is the same picture, underexposed. Again, the automatic exposure has been overridden and the picture has been exposed for the windows. There is even more shadow than in the average exposure (and it was accentuated during RAW development), but most of the details in the windows are visible.

Modern dSLR cameras have a built-in Auto Exposure Bracketing (AEB) function that makes it relatively easy to obtain the three pictures automatically. A tripod and a steady subject help. But if it is so easy, why doesn’t the camera capture all of the details in the shadow and in the highlight right away?

Because the camera’s sensor has a limited dynamic range, which in many cases is smaller than the dynamic range of the scene being depicted. This is particularly true for panoramas.

For the purpose of image processing, dynamic range describes the difference between the smallest and largest discernible quantity of light. Anything that is smaller is hidden in the shadow and anything that is lighter is blown out white.

The depicted scene has a given dynamic range. Any input device such as a camera sensor responds to a dynamic range which often is a subset of what is present in the scene. And any output device such as a display or a printer is capable of reproducing a dynamic range that often is a subset of what is available in the recording. At any given step of the process, the dynamic range of the input is mapped to the dynamic range of the output.
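
To put numbers on this, here is a small back-of-the-envelope calculation in Python. The luminance range of the scene and the sensor capability used below are illustrative assumptions, not measurements of the scene shown above.

    import math

    # Illustrative numbers, not measurements: a sunlit scene with deep shadows
    # might span roughly 1 to 100,000 in relative luminance, while a typical
    # sensor resolves on the order of 10-12 stops.
    scene_max, scene_min = 100000.0, 1.0
    scene_stops = math.log2(scene_max / scene_min)   # about 16.6 stops
    sensor_stops = 11.0                              # assumed sensor capability

    print("scene:   %.1f stops" % scene_stops)
    print("sensor:  %.1f stops" % sensor_stops)
    print("missing: %.1f stops -> bracket exposures" % (scene_stops - sensor_stops))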

The grayscale bars under the pictures above map approximately the dynamic range of the sensor in relationship to the dynamic range of the scene. The black and white areas are out of the range. Light in the original scene that falls out of the range is not discernible in the individual exposure.

In this case it took three exposures to record the most relevant parts of the dynamic range in the scene. Sometimes it takes more, like in this example I shot earlier this year. These exposures need to be merged back into a single image, reconstructing the original scene in as much detail as possible. And that single image needs to be mapped to the dynamic range of the output device. There are different techniques to achieve this. They can be summarized into two groups: exposure blending and HDR/tonemapping.

Below is an example of exposure blending, using state-of-the-art exposure fusion. Tom Mertens, Jan Kautz and Frank Van Reeth have devised a mathematical way, based on simple quality measures like saturation and contrast, to mix the best-exposed pixels from each picture and fuse them into a single, visually pleasant and nearly "realistic" result. Andrew Mihal programmed Enfuse based on their algorithm. Enfuse, available for download from this site, can do it for you, out of the box.
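
For readers curious about the mechanics, here is a rough numpy sketch of that kind of per-pixel quality weighting (contrast, saturation, well-exposedness). It is only a naive illustration of the idea: the actual Enfuse implementation also blends the weighted images across a multi-resolution pyramid, which is omitted here.

    import numpy as np

    def fusion_weights(img, sigma=0.2):
        # img: float array of shape (height, width, 3), values in [0, 1].
        gray = img.mean(axis=2)

        # Contrast: magnitude of a simple Laplacian response.
        lap = (np.roll(gray, 1, 0) + np.roll(gray, -1, 0)
               + np.roll(gray, 1, 1) + np.roll(gray, -1, 1) - 4 * gray)
        contrast = np.abs(lap)

        # Saturation: standard deviation across the color channels.
        saturation = img.std(axis=2)

        # Well-exposedness: prefer mid-tone pixels in every channel.
        well_exposed = np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2)).prod(axis=2)

        return contrast * saturation * well_exposed + 1e-12

    def naive_fuse(stack):
        # stack: list of aligned exposures, each shaped (height, width, 3).
        weights = np.stack([fusion_weights(img) for img in stack])
        weights = weights / weights.sum(axis=0)
        return sum(w[..., None] * img for w, img in zip(weights, stack))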

Next time you go out photoshooting, set your camera to AEB. A little bit of post-production magic may reveal some unexpected detail in your images.