The Quest is Over

The biggest user misconception about Hugin concerns control point (CP) generators. Contrary to popular belief, until last week Hugin did not have a CP generator of its own.

Because CP generation is critical to panorama creation, and because Hugin is an interface to that process, users mistake Hugin for the whole. Every time CP generation fails, they perceive it as a failure of Hugin.

Packagers (including myself when I was a Windows packager) who ship Hugin for platforms without proper package management (notably Windows and OSX) have in one way or another bundled CP generators in the past to help make the downloaded package useful. Despite Hugin’s advanced controls for CP placement, many users find it of limited use without a CP generator. The bundled packages did not lift the misconception, and they brought a whole set of problems of their own.

Other functions of the panorama production process in Hugin are also performed by third-party tools (e.g. Enblend-Enfuse for blending), but the interface to CP generators is a particularly tricky one: there are (too) many third-party CP generators; they take different command-line arguments; none of them is absolutely superior; and, last but not least, they are encumbered by patents that limit what they can be used for and where and how they can be distributed.

So what can be recommended to packagers/distributors and users?

My overview of available CP generators, now more than two years old, is outdated. None of them is properly maintained. For example, Autopano-SIFT-C version 2.5.2 (which has not even been released) is broken, and has been for more than a year. The recommendation is to use 2.5.1 until the problems with 2.5.2 are fixed. No active work has been done on fixing 2.5.2, and the command-line interface (and thus the string that goes into Hugin’s preferences) has been a moving target as well.

Users rightly complain about the difficulty of configuring a CP generator, and indeed it should not be that difficult. But changes in a CP generator need to be coordinated with changes to the parameters in Hugin. Hugin ships with pre-configured parameters, but it cannot determine whether the installed CP generator matches the version those parameters were written for. To make matters worse, packagers may be bundling older versions of the CP generators with newer (incompatible) versions of Hugin.

Even the new installer that downloads the CP generators from their original locations is not free of these troubles. How does it know which version to download, and whether a previously installed instance will interfere with the new one?

Ever since Hugin was accepted as a Google Summer of Code mentoring organization, the number-one priority has been a patent-free CP generator that can be shipped with it.

In 2007, during our first Summer of Code participation, Zoran Mesec implemented a feature matcher with the mentoring help of Herbert Bay (inventor of the SURF algorithm that is implemented in Panomatic). Paired with existing autopano code, this yielded matchpoint. In February 2008 Bruno Postle built match’n’shift on top of it, using an intermediary projection to improve the quality of the generated CPs.

In 2008, Onur Küçüktunç built a feature descriptor, mentored by Alexandre Jenny of Autopano Pro. The project provided experience and insights, but performance was not as hoped. The branch still exists in Hugin’s repository but has been superseded.

In 2009, students picked up other projects, and the patent-free CP generator seemed set to skip another year. But over Christmas Pablo d’Angelo had some time and inspiration, and so in early 2010 the missing piece of the puzzle was created.

Still, it took another Summer of Code in 2010, the determination of Antoine Deleforge, and the mentoring of Thomas Modes to complete the work started almost four years ago. cpfind, the patent-free control point finder, has been integrated into Hugin’s default branch and is in the pipeline for a release after the current 2010.2 release. Maybe before the end of the year?
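
For the curious, here is a minimal sketch of how cpfind is driven from the command line; the project file names are placeholders. Unlike the third-party generators, cpfind works on an existing Hugin project rather than on a list of image files, and writes the found CPs into a new project.

    # Add control points to an existing Hugin project, writing a new project file.
    cpfind -o project_with_cp.pto project.pto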

cpfind will hopefully solve the majority of the problems for the average user.  Until then:

  • Avoid installing too many CP generators.  Choose one that works for you and stick with it.
  • Before using a CP generator from Hugin, try it from the command line. Start a command prompt, run the CP generator you want to test with no arguments, and read its built-in help and version number.
  • Try running the CP generator from the command line on a few pictures. Load the resulting file in Hugin and check visually on the Control Points tab whether the resulting CPs are good or garbage (see the sketch after this list).
  • Once you are sure that the CPs are good, set the proper preferences in Hugin.  If you have difficulties, ask on the Hugin mailing list, mentioning clearly what version of what CP generator you are dealing with.
  • Alternatively, try entering the CPs manually. With Hugin’s sophisticated entry system, you don’t even have to click on the exact spot; just make sure it is within the square cursor. With a pre-calibrated lens, three to four CPs are enough to obtain an excellent result.
  • If you are sure that the CPs in your project are good, but you still don’t achieve a proper alignment, the issue lies either with the input images or with how the optimizer is operated. But that’s material for another article.
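
As a concrete sketch of the command-line test suggested in the list above: the generator, image, and project names below are placeholders, and the exact arguments differ from generator to generator, so read the built-in help first.

    # Run the generator with no arguments to see its built-in help and version.
    autopano-sift-c

    # Generate CPs for a few test images into a Hugin-readable project file
    # (Autopano-SIFT-C expects the output project before the input images;
    # other generators use different argument orders and flags).
    autopano-sift-c test.pto img_0001.jpg img_0002.jpg img_0003.jpg

    # Open the result in Hugin and inspect the Control Points tab.
    hugin test.pto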

To Roll Or Not To Roll? – Part II

In this second part we look at how moving artefacts influence seam placement. Here again, a different set of input images: the cars at the traffic light have moved between adjacent shots.

Before looking at how much each image contributes to the final panorama, let’s see where enblend puts the seams. This is possible with the option -l 1, which reduces the number of blending levels to 1. I am not aware of a similar function in the other blenders. Interestingly, seam placement in enblend is not very regular, and the weight of the magenta picture, maybe affected by the moving artefacts, is strongly reduced.
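
A rough sketch of the invocation, with placeholder names for the remapped TIFF files (such as those written out by Hugin’s nona):

    # One blending level only, so the raw seam lines remain visible.
    enblend -l 1 -o seam_preview.tif prefix0000.tif prefix0001.tif prefix0002.tif

    # The usual multi-level blend of the same remapped images, for comparison.
    enblend -o blended.tif prefix0000.tif prefix0001.tif prefix0002.tif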

And here is the resulting blend:

Obviously seam placement is influenced by the moving artefacts, though it seems impossible to predict it while shooting the scene. If it were, I could choose a convenient starting angle on the 360° around me instead of starting at a random point, or at a specific cardinal point to use for orienting the panorama in a geographical context. In this specific case, as in many cases with moving objects, seam placement by enblend is not satisfactory and will require further manual masking.

PTgui 8beta5 does a better job at seam placement; the panorama could be used as-is. Still, where there is such an overlap there are always two options, and which of the two to show is often a subjective artistic decision. In this specific case, I might have wanted to show the empty street and preferred the image without the cars.

Last but not least, two Smartblend tests: once with version 1.2.5 and once with the version integrated in Autopano Pro. Again the result is different. Smartblend deals well with the moving artefacts, but not with the stitching error at the zenith.

As already shown by Michel Thoby, the blending process is unpredictable, so it does not make sense to tweak camera orientation or starting angle in pursuit of a potentially better blend. The slanted camera positioning does not introduce any disadvantage, and it retains its advantages. Conclusion: I’ll keep rolling my camera. YMMV.

Interact with the finished panorama here. It was my entry to this summer edition of the World Wide Panorama.

To Roll Or Not To Roll? – Part I

Above is how the footprint of the 8mm Sigma Fisheye relates to different sensor sizes. Below is my home-brew no-parallax-point (NPP) adapter. It aligns the rotation axis of the rotator with the NPP of the camera.

Ever since I started using an APS-C sensor dSLR camera with an 8mm fisheye lens to create full spherical panoramas, I’ve used a slanted pano head.

There are several advantages to a slanted pano head compared to a traditional one. The key advantage for me was a bigger overlap, which allowed me either to produce a sphere with only three shots, or to quickly take six shots around and easily mask out moving objects at the seams.

Moreover, the edges of fisheye lenses are softer than the center, and such a setup should use more of the center and less of the edge. Or does it? A discussion triggered my curiosity.

I needed a method to visualize the contribution of each shot to the final panorama without influencing the seam placement. Bruno Postle suggested desaturating the images and colorizing them; this does not change the artefacts that determine seam placement. I used the method once on a still scene with constant artefacts and once on a moving scene. In this article I present the results for the still scene.
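
One possible way to prepare such inputs is with ImageMagick; the colours, tint amounts, and file names below are placeholders rather than the exact values used for these tests.

    # Desaturate each source image and give it its own tint so that its
    # contribution to the blended panorama can be told apart by colour.
    convert img_0001.jpg -modulate 100,0 -fill red     -tint 60 tinted_0001.tif
    convert img_0002.jpg -modulate 100,0 -fill green   -tint 60 tinted_0002.tif
    convert img_0003.jpg -modulate 100,0 -fill magenta -tint 60 tinted_0003.tif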

Below are the six input pictures that were fed to the different contenders for warping and blending:

Enblend cut quite regular slices out of the images, taking mostly their central part, where the lens quality is at its best.

PTgui 8beta5 has an interesting, more complex and less predictable blending pattern. It seems that all available images contributed to the zenith, and that even around the horizon the boundaries are not at regular intervals. A more in-depth analysis would be required to understand the pattern.

Smartblend 1.2.5 in PTgui yielded a result similar to Enblend, again slicing the images more or less regularly.

Interestingly, Smartblend inside Autopano Pro displays a slightly different placement of the seams while still keeping them regularly distributed.

For still scenes, unless the blender used is the newest PTgui one, the logic seems to hold: only the central, sharpest part of each image is used.

And here is the finished full spherical panorama, interactive and in its original color.