The Quest is Over

The biggest user misconception about Hugin concerns control point (CP) generators. Contrary to popular belief, until last week Hugin did not have a CP generator of its own.

Because CP generation is critical to panorama creation, and because Hugin is an interface to the panorama creation process, users mistake Hugin for the whole.  Every time CP generation fails they perceive it as a failure of Hugin.

Packagers (including myself when I was a Windows packager) who ship Hugin for platforms without proper package management (notably Windows and OSX) have in one way or another bundled CP generators in the past to help make the downloaded package useful.  Despite its advanced controls for CP placement, many users find Hugin without a CP generator to be of limited use.  The packages did not help to lift the misconception, and brought a whole set of problems with them.

Other functions of the panorama production process in Hugin are also performed by third-party tools (e.g. Enblend-Enfuse for blending), but the interface to CP generators is a particularly tricky one: there are (too) many third-party CP generators; they take different arguments on the command line; none of them is absolutely superior; and last but not least they are encumbered by patents, limiting what they can be used for and where and how they can be distributed.

So what can be recommended to packagers/distributors and users?

My overview of available CP generators, now more than two years old, is outdated.  None of them is properly maintained.  For example, Autopano-SIFT-C version 2.5.2 (which has not even been released) is broken, and has been for more than a year.  The recommendation is to use 2.5.1 until the problems with 2.5.2 are fixed.  No active work has been done on fixing 2.5.2, and the command-line interface (and thus the string that goes into Hugin’s preferences) has been a moving target as well.

Users rightly complain about the difficulties of configuring a CP generator, and indeed it should not be that difficult.  But changes in a CP generator need to be coordinated with changes to the parameters in Hugin.  Hugin ships with pre-configured parameters, but it cannot determine whether the installed CP generator matches the version those parameters were written for.  To make matters worse, packagers may be packing older versions of the CP generators with newer (incompatible) versions of Hugin.

Even the new installer that downloads the CP generators from their original locations is not free of these troubles.  How does it know what version to download and whether there is already a previously installed instance that interferes with the new one?

Ever since Hugin was accepted as a Google Summer of Code mentoring organization, the number one priority has been a patent-free CP generator that can be shipped with Hugin.

In 2007, during our first Summer of Code participation, Zoran Mesec implemented a feature matcher with the mentoring help of Herbert Bay (inventor of the SURF algorithm that is implemented in Panomatic).  Paired with existing autopano code, this yielded matchpoint.  In February 2008 Bruno Postle built match’n’shift on top of it, using an intermediary projection to improve the quality of the generated CPs.

In 2008, Onur Küçüktunç built a feature descriptor, mentored by Alexandre Jenny of Autopano Pro.  The project provided experience and insights, but performance was not as hoped.  The branch still exists in Hugin’s repository but has been superseded.

In 2009 students picked up other projects, and the idea of the patent-free CP generator seemed set to skip another year.  But over Christmas Pablo d’Angelo had some time and inspiration, and so in early 2010 the missing piece of the puzzle was created.

Still, it took another Summer of Code in 2010, the determination of Antoine Deleforge, and the mentoring of Thomas Modes to complete the work started almost four years ago.  cpfind, the patent-free control point finder, has been integrated into Hugin’s default branch and is in the pipeline for a release after the current 2010.2 release.  Maybe before the end of the year?

cpfind will hopefully solve the majority of the problems for the average user.  Until then:

  • Avoid installing too many CP generators.  Choose one that works for you and stick with it.
  • Before using a CP generator from Hugin, try it from the command line.  Start a command prompt, run the CP generator you want to test with no arguments and read its online help and version number.
  • Try running the CP generator from the command line on a few pictures (see the sketch after this list).  Load the resulting file in Hugin and check visually on the Control Points tab whether the resulting CPs are good or garbage.
  • Once you are sure that the CPs are good, set the proper preferences in Hugin.  If you have difficulties, ask on the Hugin mailing list, mentioning clearly what version of what CP generator you are dealing with.
  • Alternatively, try entering the CPs manually.  With Hugin’s sophisticated entry system, you don’t even have to click on the exact spot, just make sure it is within the square cursor.  With a pre-calibrated lens, three to four CPs are enough to obtain an excellent result.
  • If you are sure that the CPs in your project are good, but you still don’t achieve a proper alignment, the issue is either with the input images, or how the optimizer is operated.  But that’s material for another article.
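
To make the command-line test concrete, here is a minimal sketch using autopano-sift-c; the option names match the ones quoted elsewhere on this blog, but the image names and the number of matches are made up for illustration:

# running with no arguments should print the usage text and version number
autopano-sift-c

# generate control points for two test pictures into a small project file
autopano-sift-c --maxmatches 20 test.pto img_0001.jpg img_0002.jpg

# open the result in Hugin and inspect the Control Points tab
hugin test.pto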

The Most Welcoming Bed and Breakfast in London, Ontario

Londinium was established as a town by the Romans almost 2000 years ago. It was common practice for Romans to adopt and romanize native names for new settlements. Not so the Brits: when they settled North America, they brought their names with them. So there are a dozen Londons in North America, one of them in Canada.

We’ve been to the original London many times, and we visited London, Ontario for the first time last summer. We were lucky to find the most welcoming bed and breakfast in London, Just For You. Owned and operated by Ron and Gerard, two Dutch expatriates, this bed and breakfast combines the coziness and warmth of a B&B with a level of service that competes with the best five-star hotels of the world. It was an inspiring place to recharge our energy before and after our busy days in the city. Gerard served us healthy and creative breakfasts. One day he treated us to Poffertjes, a sweet complement to the varied and well-presented fruit salad, accompanied by original Dutch Hagelslag and real cumin Gouda.

The room was so welcoming that I felt inspired to shoot an HDR panorama and to test whether recent developments have made it easier for artists to create HDR panoramas. The bottom line is: things will soon become easier. For this panorama I had to work through some issues in the tool chain.

[HDR panorama: 090809bb4u01triplane]

A New Approach to High Dynamic Range (HDR) Panoramas

Disclaimer: the author has been contracting with Photomatix.

Stacking and stitching, or stitching and stacking, exposure stacks has been discussed in the past as a manually controlled workflow. It has been automated in Hugin 2009.2.0. As a reminder: stacking and stitching is generally more efficient but had a drawback for full spherical panoramas. Last time I tried, it suffered from vortex-like artifacts at nadir and zenith. The upcoming enblend-enfuse 4.0 introduced new algorithms that may lessen the problem. Time to try again.

The process in a nutshell:

  1. Shoot the brackets for a full spherical panorama using a tripod (i.e. perfectly aligned stacks).
  2. Merge the stacks to individual HDR pictures.
  3. Stitch the pictures as if they were LDR pictures (the current Stitcher tab in Hugin really needs some explanation; details below).

This is how I did it, in detail.

1. Shot bracketed RAWs. Fed them to Photomatix Pro and batch-processed them into HDR. Optional: in the same batch Photomatix Pro can also generate an initial tonemapped version (needed to set up the stitching project).

There are many software packages that merge RAWs into HDR, some of them Open Source. The reason why I use Photomatix is the automated batch processing that includes fixing chromatic aberration and noise. This automates the deterministic aspects of the process and lets me focus on the aspects where human creativity makes a difference.

Photomatix’ Manual recommends using a third-party RAW converter and gives detailed instructions for white balance, basic settings, and curves. I did not find any drawback in using Photomatix’ integrated RAW converter.
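
For this step the tools that ship with Hugin offer an Open Source alternative on the command line. A minimal sketch, assuming the brackets are already converted to TIFF and assuming hugin_hdrmerge’s -o (output) and -m (merge mode) options; the file names are made up:

# one perfectly aligned bracket (shot on a tripod), merged to a single HDR file
hugin_hdrmerge -m avg -o stack01.exr stack01_ev-2.tif stack01_ev0.tif stack01_ev+2.tif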

2. Identified control points between the images.

None of the control point detectors known to me supports HDR files (out of Photomatix) as input at this time. Hence the need for the initially tonemapped images. For this specific panorama, taken with an 8mm fisheye, there were not that many images to link, so I opened the HDR images in Hugin’s Control Points tab and clicked myself through them. A pass through that tab is mandatory anyway to establish vertical control lines.
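
For larger projects, one possible shortcut is to run a CP generator on the tonemapped versions and then point the resulting project back at the HDR originals. A sketch, assuming (hypothetically) that the tonemapped files differ from the HDR files only by their suffix:

# control points were generated on the tonemapped JPEGs;
# swap the image names so the stitch uses the HDR originals
sed 's/_tonemapped\.jpg/.hdr/g' tonemapped.pto > hdr.pto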

3. Stitch in “Normal” output mode.

This is the confusing part. Currently Hugin’s Stitcher tab has three main output modes, each with its variations: Normal, Exposure Fusion, and HDR merging. Intuitively, this is HDR, right? But it’s not merging: the images were already merged into HDR in step 1. We (the Hugin team) need to do our homework and improve the interface.

Another confusing point is the output format selection for the “Normal Output”. The only options available are TIFF, JPEG and PNG. Don’t worry: choose TIFF, and the result can be loaded and tonemapped in most HDR software, including Qtpfsgui (soon to be renamed Luminance) and Photomatix (correction: with Photomatix Pro 3.2.6 I had to open it with Photoshop first and save as EXR). It would be nice if we also had EXR there, like for the merging process, and Radiance HDR too. Or at least an indication that the TIFF output will be 32-bit.
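
For reference, the Normal output of the Stitcher tab corresponds roughly to a nona-plus-enblend run on the command line. A minimal sketch, assuming a hypothetical project file called project.pto whose image lines already point at the HDR files:

# remap each (already HDR) input image to its own TIFF layer
nona -m TIFF_m -o pano_ project.pto

# blend the layers into one equirectangular
enblend -o pano_hdr.tif pano_*.tif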

Hiccups and Fixes

The resulting HDR equirectangular had an artefact at the zenith. The Enblend 4.0 pre-release, with default settings, produced a dark speckle instead of a vortex. Smaller, but still disturbing.

[Images: pitchedcp, pitched_3.2, pitched_4, pitched_4noop]

To study this, I pitched the panorama down 90°, bringing the zenith to the middle of the equirectangular, on the equator (and consequently the nadir onto the 360° seam at the equator). And to make it visible, I used plain color images. The four images on the right are: a simple, unblended output of the six layers on top of each other; the blend with Enblend 3.2; the blend with Enblend 4.0; and last the working workaround, the blend with the --no-optimize option suggested by Christoph Spiel, release manager for Enblend 4.0. The unoptimized version enabled me to produce a usable HDR equirectangular and continue the process. At Christoph’s request, I dug deeper into the code to isolate where the artefact is introduced. With his patient guidance and a few experiments it seems narrowed down to the seam-line vectorization or de-vectorization code.
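
In practice the workaround is a single extra option on the enblend call. A sketch, reusing the hypothetical layer names from above:

# turn off seam-line optimization to avoid the speckle at the zenith
enblend --no-optimize -o pano_hdr.tif pano_*.tif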

Conclusion

The resulting HDR equirectangular is technically correct. Enblend 4.0 (pre-release) is an improvement, though not yet perfect. As an added bonus, on multi-core CPUs it is also significantly faster than previous versions.

The other obstacle left is tonemapping: many tonemappers, notably Qtpfsgui (soon to be renamed Luminance), don’t deal properly with zenith and nadir, limiting the user’s choices. The latest Photomatix has been tested to deal properly with the 360° seam and the zenith. In the meantime this is just a post-processing inconvenience that can be solved with a brush in Photoshop or GIMP.

That was much longer processing than expected for a single panorama, but it was worth it. I hope it helps advance Enblend 4.0 toward release.

Autopano-SIFT-C 2.5.1 released

Bruno Postle released a source code tarball for autopano-SIFT-C.

Autopano-SIFT-C is one of the control point generators that can be plugged into Hugin to help align images.

To use it with Hugin, install it on your computer and set the following parameters:

Program: autopano-sift-c
Parameters: --maxmatches %p %o %s

or:

Program: autopano-sift-c
Parameters: --maxmatches %p --projection %f,%v %o %i

To install it you will most likely have to build it from source. Major Linux distributions such as Fedora and Debian do not carry binaries of Autopano-SIFT-C because it is tainted by a patent in some jurisdictions.
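
A minimal build-from-source sketch, assuming the 2.5.1 tarball and its CMake build system (paths and required development packages may differ on your system):

tar xzf autopano-sift-C-2.5.1.tar.gz
cd autopano-sift-C-2.5.1
mkdir build && cd build
cmake ..          # configure; requires CMake and a C compiler
make
sudo make install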

Hugin does not yet have its own, patent-free control points generator. Some building blocks have been contributed during Google Summer of Code 2007 and 2008, but more work needs to be done.

What Goes Around Comes Around

Zoran Mesec joined the hugin community in 2007. He applied to our Google Summer of Code initiative and went on to successfully complete matchpoint. The code is still experimental and is distributed with hugin.

In 2008, Zoran gave back a little of the mentoring and love that he received from the community. He mentored Marko Kuder, a fellow university student, on the batch processor project. And he joined other community activities as well, using the Mrotator U panoramic head that Agnos Engineering sponsored for his Summer of Code participation to contribute to the World Wide Panorama for the first time.

Zoran, together with long-time panotools maintainer and community contributor Bruno Postle, will represent our team at the Google Summer of Code mentor summit.

I enjoyed the experience a lot last year. I hope the panorama from last year will be repeated this year and become a tradition.

Inching Toward Release

Bruno Postle officially released autopano-sift-C-2.5.0 and panoglview-0.2.2. He intends to release a hugin tarball release candidate very soon. This will pave the way for inclusion of hugin in coming Linux distributions. Binaries for the different platforms will follow.

Stay tuned!

Mid Term Evaluations

Nodal Ninja in action

Time flies, and it is already time for the mid-term evaluation of our Google Summer of Code students. All projects are on track and we might achieve our first zero-attrition year. The main reason is that we significantly improved our selection process with a simple skill test: applicants were required to provide a patch to the code. The goal was to show basic command of the build chain so that they could get down to the code from day one. This was possible thanks to a documentation effort that made the build process almost dummy-proof.

Another improvement this year is the interaction with the community. Students have been directed from the start to the mailing list where they introduced themselves and their projects. They got plenty of positive feedback, feature requests and test reports.

The one extra thing I have organized, both last year and this year, was a donation of pano gear. This year it is a Nodal Ninja 3 MkII, and because Bill Bailey is so generous, it comes with plenty of extra goodies. While I do not think that our students or mentors need extra motivation, I believe this will help produce better code because understanding the gear means understanding the user for whom the code is being written.

Happy coding!

When the dust settles

It’s a few weeks before official coding starts. The students are bonding nicely with the community. Ideas are flowing, and some patches too. I can’t wait to see what they will do when they are officially on Google’s payroll and assigned to work full time on hugin.

While Pablo and Alexandre P. are at LGM, it is time to properly introduce this year’s team that will participate in the Google Summer of Code. There is a lot to be written, and more may come over the next few months as the summer unfolds. For now, here is an overview of the six projects we’re working on, of the people, and of exciting things to come.

The Projects

There are six projects this year in our portfolio, even though only five are listed on the Hugin/panotools entry at Google. The sixth project is a joint effort with VideoLAN on their leading cross-platform media player. It is listed on their page at Google.

OpenGL hugin preview

The preview window is central to the panorama making process. It is here that the author looks at his composition before rendering it. It is here that framing decisions are made, interactively. Right now the interaction is slow.  James Alastair Legg, mentored by Pablo d’Angelo, will improve the panorama preview window by giving it the speed of OpenGL. We expect near real-time interaction for the author when composing his panorama.

Automatic Feature Matching for Panoramic Images

Critically important to stitching panoramas is to identify overlapping features in two images and align them in space. In the past, this was tedious, manual work. Then control point detectors came along. Those available to hugin users are still tainted by patents. It’s a two-step process: detecting features and matching them. In Google Summer of Code 2007 Zoran Mesec wrote the detector, MatchPoint. This year Onur Küçüktunç will write the matcher, mentored by Alexandre Jenny, author of the original autopano and of Autopano Pro. A beautiful example of cooperation and co-existence between the proprietary and open source worlds. We expect to have a fully patent-free control point detection process.

Masking in GUI

Overlapping regions between images are the nature of stitched panoramas. Since the world is in motion, such overlaps often present content that won’t match. The current solution is to render each image on a separate layer, and then manually mask out one of the two images so as to display only one frozen instance of the moving object. This can often be painful work at the single-pixel level. Fahim Mannan, mentored by Daniel M. German, will introduce a simpler workflow: just mark the moving object with a couple of approximate brush strokes and his code will work out the exact object boundaries automatically. We expect an improvement for those using hugin to stitch action panoramas.

Batch Processing

Photographers often come back from the field with tons of photographs to stitch. A lot of this could be automated, even more so with the up-and-coming pano-videography. Marko Kuder, mentored by Zoran Mesec, will improve hugin’s batching abilities. We expect to be able to process repetitive tasks without human intervention.

Machine-based Sky Identification

Some areas of photographs are better suited for control points than others. The sky, with its moving clouds, is definitely not, as good control points don’t move between images. Timothy Nugent, mentored by Yuval Levy, will train a support vector machine (SVM) to identify clouds in the sky as a bad area for control points, so that it can be masked out before triggering the control point detection. Once working, the method can be extended to other features as well, such as foliage and water.

Panorama Viewing in VideoLAN

19 years after Tim Berners-Lee invented the web there is still no universal format to view panoramas on the web. Apple’s QuickTimeVR, the original technology to display full spherical panoramas, is not available on platforms other than Windows and the Macintosh, and it is no longer developed by Apple. A lot of good things have happened with Flash panoramas in the last year. Nevertheless, a lot of legacy content out there is captive of the QuickTime format, like the World Wide Panorama. In Google Summer of Code 2007 Leon Moctezuma added functionality to FreePV. This year Michael Ploujnikov mentored by Yuval Levy will integrate panorama viewing in VLC, a leading cross-platform media player. We expect Linux users and users of other alternative platforms to have access to the majority of QTVR content soon.

The team

I’m happy to pass the admin role to Alexandre Prokoudine this year. We had more available mentors, student applications and project ideas than we could follow through on, but resources are limited even for large corporations and Google is already very generous with us. We would have loved to see students working on leading-edge image processing under the supervision of Andrew Mihal of enblend/enfuse fame, or John Cupitt of VIPS. Maybe next year. Other mentors who registered with our organization on the Google Code page and are left without a student are Bruno Postle, Jim Watters and Ken Turkowski. We are a team, and like last year I expect a lot of help and community support for the six lucky students.

Partners

Cooperation is a topic I particularly care about for this edition of the Google Summer of Code. We are leveraging the Summer of Code to reach beyond our small world. I am proud that we found an ally in the VideoLAN team, a larger mentoring organization. Granted, we are natural allies: hugin/panotools is used to create media, and VLC is used to play it. Nevertheless, this cross-collaboration, whose idea is the result of a meeting with Jean-Baptiste Kempf at the 2007 Google Summer of Code Mentor Summit, is IMO a demonstration that the whole is worth more than the sum of the parts, and that an initiative like the Google Summer of Code adds much more value to the world of FOSS than what can be stated in (highly appreciated) numbers: 175 mentoring organizations, 1125 students, 90 countries.

And in our small world we’re working on a partnership deal to further motivate and propel the hugin/panotools team, similar to what we did last year. Stay tuned for an announcement.

April Installer

After a hiatus from Windows in March, here is the latest hugin installer. Before heading to the download page, be aware that it still has bugs, some of them recently discovered. There are workarounds too:

For this bug, go into the images tab, select the images for which you need control points and click on the button to generate.

For this bug, just click inside the panorama preview to move the panorama slightly and then go back to the optimizer and click optimize.

Hugin is becoming better, now with a wide variety of control point generators!

Control Points Generators, the more the merrier.

There is a big choice of control point generators in the latest hugin installer. Many of them are experimental. Some work better in some situations than others. Variety is positive, as there is likely to be a solution for most situations. But it is also daunting. Which one to choose, and how? Here are a few pointers. Moreover, the default preference settings as set by the installer are written down, so you don’t have to be afraid of changing them and not knowing how to change them back.

Match-n-Shift

Match-n-Shift is Bruno Postle’s idea. Images are remapped to a conformal projection before the identification process. Despite the added step, match-n-shift is not significantly slower. For fisheye lenses, the quality of control points is better than with non-remapping generators. Currently it only works with the good old autopano and it is subject to the same patent constraints, but efforts are underway to make it work with Matchpoint.

Default preference settings: -f %f -v %v -c -p %p -o %o %i

Autopano-SIFT Perl

Autopano-SIFT Perl is the latest incarnation of the classic SIFT-based control point generator that hugin users have relied on for years; its use is limited by the SIFT patent in some countries, notably the United States. The shell wrapper script has been rewritten in Perl by Bruno Postle so that it can be compiled for and run on Windows (where Perl is optional). Linux and OSX users have had access to this tool for a long time.

Default preference settings: --noransac --points %p --output %o %i

Autopano-SIFT-C

Autopano-SIFT-C is Tom K. Sharpless’s reengineering of Autopano-SIFT Perl. Tom is passionate about precision and efficiency. He initially used the full-scale image to achieve the best possible precision, but there is a cost to that: double the maximum dimension and it quadruples the time needed to process that single picture. This gets compounded as the pictures are matched against each other. Autopano-SIFT-C’s current default is 1600 pixels, but in my installer I have set it to 800 pixels, the value used by match-n-shift and by autopano-SIFT Perl, which yields excellent results.

Autopano-SIFT-C is more efficient and thus slightly faster, though I could not notice the difference with my average project. It also does some remapping, but I found match-n-shift much more elegant. The main difference is that Autopano-SIFT-C has a monolithic approach. Everything happens within a single program: scaling, reprojection, detection, matching; so it is less flexible to combine and cooperate with other tools.

Last but not least, the algorithm is still SIFT, so patents may apply.

Default preference settings: --maxdim 800 --maxmatches %p %o %i
--maxdim (default 1600) => set to 800 to compare with a-c-c
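
For those curious about what Hugin actually runs, the expandos translate into an ordinary command line when the generator is invoked. A hedged example with made-up file names, where %p becomes the configured number of points per pair, %o the output project and %i the input images:

autopano-sift-c --maxdim 800 --maxmatches 25 project.pto img_01.jpg img_02.jpg img_03.jpg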

Edit: I’ve been made aware of a recently introduced expando in hugin that I had forgotten.  --maxmatches %p %o %s will use a conformal projection.  A practical demonstration of how right Agos’ comment below is!

Matchpoint

Matchpoint was Zoran Mesec’s 2007 Google Summer of Code project. He wrote it as a drop-in replacement for the generatekeys program that came with autopano, so it was easy for Bruno Postle to adapt the Autopano-SIFT-C wrapper to it. Unfortunately Matchpoint still has a problem dealing with transparency masks, so it can’t be used with match-n-shift.

Matchpoint is the only feature detector in this article that is free of patents. It is still experimental but yields useful results. In the upcoming Google Summer of Code participation Onur Küçüktunç will develop a feature matcher to go with it, mentored by Alexandre Jenny, the author of Autopano Pro.

Default preference settings: --noransac --points %p --output %o %i

Pan-o-matic

Panomatic is the newest kid on the block. It is an implementation of the SURF algorithm (which is also patent-protected in some jurisdictions) by Anael Orlinski. It is built to extract maximum power from modern multi-core processors and comes in two flavors. The standard Pan-o-matic runs on modern CPUs, that is, Pentium 4 and newer or AMD Athlon XP and newer. The NOSSE version also runs on older CPUs.

Default preferences settings: -o %o %i

A few interesting options for those who want to experiment (a combined example follows the list):

  • -n<number> sets the number of cores to use (default: autodetect)
  • --fullscale uses full-scale images to detect keypoints
  • --ptgui activates compatibility with PTGui
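
Put together, a hedged example of a Pan-o-matic run combining these options; it assumes the binary is called panomatic and the image names are made up:

# use 2 cores and full-scale detection, write a project that Hugin can open
panomatic -n2 --fullscale -o project.pto img_01.jpg img_02.jpg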

So which one should I choose?

The default is Autopano-SIFT-C. It is robust and yields good results. But if you are using fisheye lenses, Match-n-Shift is probably a better option. If you have a modern powerful multi-core CPU, you may want to give pan-o-matic a try.

All of them, though, are subject to patents, either SIFT or SURF. It is your responsibility to make sure that you are not violating a patent in your jurisdiction of residence.

Matchpoint solves the patent issue for feature detection, and hopefully by the end of the Google Summer of Code we’ll have a patent-free solution also for the feature matching.

New Snapshot and Google Rumors

A bunch of bug fixes. SVN2904 is available in the downloads section.

Windows users can now try the latest, improved match-n-shift – CP detection for fisheye images has never been better!

(Ubuntu) Linux users have easy access to match-n-shift, as well as to the experimental matchpoint code.

For OSX, Harry built SVN2897 yesterday.

Last but not least: there is a wind of love coming from Google’s Open Source Program Office. It starts to feel like summer. We’ll have a release before the spring ;-)