
Spherical Panoramas for Nature Photographers

It is easy to get into a rut. Each year I get out my telephoto lenses and photograph seasonal birds, or I switch to wide angle lenses and capture scenes at familiar places. But often I reject all of the recent images because none of them are exciting, and they are no better than those I got in previous years. I get the feeling of “been there and done that,” and I need a change. Nature always provides a great experience, but maybe I should capture it a different way this year. There are a lot of possibilities I hope to explore. My “to do” list contains 3D, time-lapse, video, and extreme panoramas. At present I am concerned with the last of these, in the form of virtual reality (VR) or immersive photography. This is not a new method and there are many commercial applications, but it is not a common technique for the capture and display of nature photographs.

So what is VR photography all about? Basically we experience the world as if we were living at the center of a sphere. This visual perception can be recreated to some extent by generating a sphere of continuous images that can be viewed from the center. Spherical panoramic photography is now possible for amateur photographers with readily available equipment and software. I am new to the VR game, but I am learning a lot and having fun – but I repeat myself. Here is what I have learned thus far.

Capturing the image: Photographs must be made in all directions with sufficient overlap to permit the stitching of a complete spherical image. It is possible to capture the necessary images with a handheld camera or a camera on a homemade rotation mount, but I find that a commercial panoramic head on a sturdy tripod is essential. A search of the internet reveals dozens of possibilities ranging in cost from $200 upward. I am currently using a Nodal Ninja 3. For capturing the images I use a Sigma 10mm fisheye lens on a Canon DSLR (XTi or 7D). With this setup I can cover all angles with eight shots. First I tilt the camera up from level (the horizontal plane) by 30°, and then I rotate the camera around the vertical axis to make exposures every 90°. For the remaining four exposures, I tilt the camera down to 30° below level, shift by 45° around the vertical axis, and again make exposures every 90°. This scheme does not leave much margin for error, and it might be safer to make exposures at 72° rather than 90° intervals. Another, more robust procedure is to make six photographs with the camera level (tilt at 0°) and to supplement the six shots with one straight up and one straight down. In a previous blog I discussed checklists for making sure everything is ready for a panoramic sequence. Here I will just emphasize that the panoramic head must be level, as indicated by the built-in bubble level, at the beginning and end of the sequence, and that the rotations must be about the no-parallax point (NPP) of the lens.
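The two-ring scheme above can be written out as a shot list. This is only an illustrative sketch; the function name and the (yaw, pitch) convention are my own and are not part of any panorama software:

```python
# Sketch of the eight-shot scheme described above: an upper ring of four
# shots tilted 30 degrees above level, and a lower ring of four shots
# tilted 30 degrees below level and offset 45 degrees in yaw.

def shot_list():
    shots = [(yaw, 30) for yaw in range(0, 360, 90)]     # upper ring
    shots += [(yaw, -30) for yaw in range(45, 360, 90)]  # lower ring
    return shots

for yaw, pitch in shot_list():
    print(f"yaw {yaw:3d} deg, pitch {pitch:+d} deg")
```

Printing the list makes a handy field checklist to work through at the tripod.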

Processing the image: At this point I will have eight images, or 24 when HDR (three-shot) bracketing capture is used. I apply global corrections for exposure, clarity, vibrance, sharpness, and noise reduction in Photoshop Lightroom 3; then, if necessary, I export the images to Photomatix Pro or HDR Expose for HDR tone mapping. After tone mapping, eight files are ready to be exported in TIFF format for stitching with Autopano Pro. I have had good experience with this program, where stitching is completely automatic. PTGui is another good choice for stitching spherical panoramas, and there are many others, including the free download Hugin. The output I want from the stitching program is an equirectangular projection.

As an illustration, the first figure shows an equirectangular projection of a spherical panorama obtained in Duke Gardens on an overcast day in the spring of 2010. This is simply a mapping of pixels into a rectangle so that the angles of rotation from -180° to +180° around the vertical axis are plotted on the horizontal axis (abscissa, x) and the corresponding angles away from the horizontal plane from -90° to +90° are plotted in the vertical direction (ordinate, y). Certainly no lens could make this image in a single shot. A hypothetical rectilinear wide angle lens with decreasing focal length would give images that become rather abstract at fields of view above 120° and at 179° degenerate into lines radiating from a perspective point in the center of the image. A super fisheye lens that could see everything would produce a circular image whose circumference would represent the single point 180° from the center in any direction. In contrast, the equirectangular projection spreads the north pole of the virtual sphere along the top border (x = -180° to +180°, y = 90°) and the south pole along the bottom border (x = -180° to +180°, y = -90°).
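The pixel-to-angle mapping just described is simple enough to write down directly. A minimal sketch (the function name is mine, not from any stitching package):

```python
def pixel_to_angles(px, py, width, height):
    """Map an equirectangular pixel to (yaw, pitch) in degrees.
    Yaw runs from -180 at the left edge to +180 at the right;
    pitch runs from +90 at the top (north pole) to -90 at the
    bottom (south pole), matching the axes described in the text."""
    yaw = (px / width) * 360.0 - 180.0
    pitch = 90.0 - (py / height) * 180.0
    return yaw, pitch

print(pixel_to_angles(0, 0, 3600, 1800))       # top-left corner
print(pixel_to_angles(1800, 900, 3600, 1800))  # image center, straight ahead
```

Note how every pixel along the top row maps to pitch = +90°, which is exactly the "pole smeared along the border" behavior described above.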

Strange as the equirectangular image is, it provides a convenient starting point for interactive VR displays and numerous other mappings. For example, one can select any direction for the center and have a rectangular image mapped with any desired field of view up to about 179°. Also, a mirror ball located at the NPP of the camera lens can be mapped. Of course, this is rather strange, because one wonders where the observer is in the reflection. Is this the view from a camera in a spy microbot the size of a gnat, or from a specter? Similar questions arise in viewing Renaissance art, where detailed representations of convex mirrors appear in the scenes but seldom show the artist. For example, see The Arnolfini Portrait (1434) by Jan van Eyck or Heinrich von Werl and St. John the Baptist (1438) by Robert Campin. These and other examples are discussed in Jonathan Miller's fine book, On Reflection. (Google to find these images online.)
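To give a feel for how a rectilinear view is pulled out of the equirectangular data, here is a sketch of the geometry for a view centered on the horizon (yaw = 0, pitch = 0). It computes, for each output pixel, which direction on the sphere that pixel looks toward; a real remapper would then sample and interpolate the equirectangular source at those angles, which I omit. The function name and conventions are mine:

```python
import math

def view_ray_angles(px, py, out_w, out_h, fov_deg):
    """For a rectilinear output image centered on yaw = 0, pitch = 0,
    return the (yaw, pitch) in degrees that output pixel (px, py)
    looks at; these angles index into the equirectangular source."""
    # Focal length in pixels from the desired horizontal field of view.
    focal = (out_w / 2) / math.tan(math.radians(fov_deg) / 2)
    x = px - out_w / 2   # horizontal offset from the image center
    y = out_h / 2 - py   # vertical offset (positive = up)
    yaw = math.degrees(math.atan2(x, focal))
    pitch = math.degrees(math.atan2(y, math.hypot(x, focal)))
    return yaw, pitch

# The center pixel looks straight ahead; with a 90-degree field of view
# the right-hand edge sits 45 degrees around in yaw.
print(view_ray_angles(500, 500, 1000, 1000, 90.0))
print(view_ray_angles(1000, 500, 1000, 1000, 90.0))
```

The same focal-length formula also shows why the mapping blows up near 179°: tan(FOV/2) diverges as the field of view approaches 180°.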

How does one take advantage of these wonderful possibilities? Obviously one needs the right software, and there are several possibilities. After some online research, I decided to try Pano2VR. This package was better than I expected, and it has permitted me to generate interactive VR and a host of other mappings. For VR I initially chose QuickTime VR, but that failed because 64-bit QuickTime does not support VR. I then switched to Flash for VR, and that worked just fine. This is another reason to keep using Flash, at least for the time being. I have also enjoyed Pano2VR-generated mirror ball images and printouts of the six faces of the VR cubes, which can be displayed in 3.5” photo cubes.

The second figure illustrates a mirror ball and a rectilinear image mapped from the data shown in the equirectangular image of Duke Gardens.

Here the background is a rectilinear image with a field of view of 179°, corresponding to a focal length of less than 0.15 mm for an APS-C sensor; the mirror ball was mapped, resized, and pasted onto the background.
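That focal-length figure can be checked with the standard rectilinear relation f = (w/2)/tan(FOV/2). Assuming a Canon APS-C sensor width of about 22.2 mm (my assumption for the bodies mentioned above, not a value from the original discussion):

```python
import math

sensor_width_mm = 22.2   # assumed Canon APS-C sensor width
fov_deg = 179.0          # horizontal field of view of the background image

# Rectilinear projection: focal length = (width / 2) / tan(FOV / 2)
focal_mm = (sensor_width_mm / 2) / math.tan(math.radians(fov_deg) / 2)
print(f"required focal length: {focal_mm:.3f} mm")
```

The result comes out near 0.1 mm, consistent with the "less than 0.15 mm" figure; no physical rectilinear lens comes anywhere close, which is why such views can only be synthesized from a stitched panorama.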

Finally, the most impressive thing is the interactive VR display. This reveals the amount of information recorded and the detail in any direction. A little browsing of the internet will uncover numerous sites with VR displays. I recommend the Virtual Tour of Oxford, with a link to a panorama tutorial by Dr. Karl Harrison. I wrap up this discussion with a small VR display of the Duke Gardens image. Use your cursor in the image to rotate and zoom.

[Embedded Shockwave Flash (SWF) VR display, 320x240 pixels.]
