
Why Moore's law will continue to apply to digital photography

Recently, at Luminous-landscape.com (LL), Ray Maxwell expounded on his view that Moore's law is no longer relevant to digital photography. This was based on his observation that we have all the pixels in our digital sensors that most of us need. I thought that was a very narrow view, indeed, and was planning a rebuttal when Nathan Myhrvold's rebuttal was published on LL. Myhrvold explained that we can still use more pixels in many situations, and that he expects to buy sensors in the future with many more pixels than are now available. I agree with everything he says, but I think he misses the point.

It is fair to say that at some point in the near future digital sensors will out-resolve all camera lenses at all apertures, whether because of lens aberrations or diffraction broadening. At that time the standard CCD and CMOS pixel arrays now in use will not benefit from more pixels, but our cameras will benefit more and more from higher densities of circuit elements and gates in their encoders and processors in the years to come. We have now reached the era of computational photography (CP).
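As a back-of-the-envelope illustration of that diffraction limit, here is a short Python sketch comparing the Airy disk diameter (roughly 2.44 λN for wavelength λ and f-number N) with the pixel pitch. The 550 nm wavelength and the "Airy disk spans two pixels" criterion are my own assumptions for the example, not hard thresholds, and real lenses add aberrations on top of diffraction.

    # Rough check of when diffraction, rather than the pixel grid, limits resolution.
    # ASSUMPTIONS for illustration only: green light at 550 nm and a two-pixel criterion.

    def airy_disk_diameter_um(f_number, wavelength_nm=550):
        """Approximate Airy disk diameter (to the first dark ring), in micrometers."""
        return 2.44 * wavelength_nm * f_number / 1000.0

    def diffraction_limited(pixel_pitch_um, f_number):
        """True once the Airy disk spans roughly two pixels."""
        return airy_disk_diameter_um(f_number) >= 2.0 * pixel_pitch_um

    for pitch_um in (6.0, 4.0, 2.0, 1.4):      # typical DSLR down to compact-camera pitches
        for n in (2.8, 5.6, 11, 22):
            label = "diffraction-limited" if diffraction_limited(pitch_um, n) else "pixel-limited"
            print(f"pitch {pitch_um:>3.1f} um at f/{n:<4}: {label}")

Running it shows what we already know qualitatively: the smaller the pixels and the smaller the aperture, the sooner diffraction, not pixel count, sets the limit.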


"A computational camera embodies the convergence of the camera and the computer." (S. K. Nayar [1])


Researchers in the field of CP are largely responsible for the expanding feature sets we find in commercially available cameras. For example, face detection, smile detection, wink detection, in-camera panorama creation, high-dynamic-range imaging, and high-resolution video modes are available even in inexpensive cameras. More important, CP scientists are designing fundamentally different computational cameras for the future. We are in phase one of the digital camera revolution, in which film has been replaced by digital sensors and there is some post-processing of images in cameras and external computers [2]. In phase two we expect to see major changes in the way cameras capture information about the light field. Images will be encoded in ways that greatly expand the possibilities for post-processing: for example, we may be able to change the plane of focus after the fact (plenoptic cameras), effectively remove motion blur (flutter shutter), and manipulate lighting after exposure. These and many more amazing advances are on the horizon. Phase three goes beyond image encoding to explore artistic expression and higher-level image processing similar to that found in the human brain.
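To make the plenoptic idea a bit more concrete, here is a toy Python/NumPy sketch of shift-and-add refocusing: each sub-aperture view is shifted in proportion to its position on the aperture, and the shifted views are averaged, so sweeping the shift scale moves the plane of focus after capture. The function names and the whole-pixel shifting are my own simplifications for illustration; real plenoptic pipelines resample the full 4D light field rather than rolling entire images.

    import numpy as np

    def refocus(subaperture_views, aperture_offsets, alpha):
        """Toy shift-and-add refocusing.

        subaperture_views: list of HxW arrays, one image per point on the aperture
        aperture_offsets:  list of (du, dv) aperture coordinates for each view
        alpha:             refocusing parameter; sweeping it moves the focal plane
        """
        acc = np.zeros_like(subaperture_views[0], dtype=float)
        for view, (du, dv) in zip(subaperture_views, aperture_offsets):
            dy = int(round(alpha * dv))
            dx = int(round(alpha * du))
            # Shift each view in proportion to its aperture position, then average.
            acc += np.roll(view, shift=(dy, dx), axis=(0, 1))
        return acc / len(subaperture_views)

The point of the sketch is simply that nothing about the focal plane has to be decided at exposure time; it becomes a parameter you choose afterward.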


I intend to explore the field of CP this year and to write about what it will mean for serious photographers. That will take some time, however, and will be a much more extensive project than simply responding to the notion that Moore's law is not important for digital photography. The literature of CP is already extensive, many CP courses are being offered in computer science departments, and textbooks are beginning to appear [3]. I will report on my progress here. Stay tuned.


P.S. Some of you may recall that Nathan Myhrvold and I debated the usefulness of more pixels on the LL site a couple of years ago. I argued for the benefits of increased pixel density even in the presence of diffraction broadening because of the advantages of over-sampling. I think that Nathan inadvertently agreed with me by advocating the super-resolution technique, in which increased resolution is obtained by combining multiple images to simulate over-sampling. My final rebuttal was not published on LL but can be found here in my posting of Sunday, March 4, 2007 on the next page, with links to images and a relevant reference.
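For readers who want to see what "combining multiple images to simulate over-sampling" can look like in its crudest form, here is a toy Python/NumPy shift-and-add sketch. It assumes the sub-pixel shifts between frames are already known (in practice they have to be estimated by registration), so treat it as an illustration of the over-sampling idea rather than anyone's actual method.

    import numpy as np

    def super_resolve(frames, shifts, scale=2):
        """Toy shift-and-add multi-frame super-resolution.

        frames: list of HxW low-resolution arrays
        shifts: list of (dy, dx) known sub-pixel shifts per frame, in low-res pixels
        scale:  factor by which the output grid is finer than the input grid
        """
        h, w = frames[0].shape
        acc = np.zeros((h * scale, w * scale), dtype=float)
        hits = np.zeros_like(acc)
        ys, xs = np.mgrid[0:h, 0:w]
        for frame, (dy, dx) in zip(frames, shifts):
            # Place each low-res sample at its (rounded) position on the fine grid.
            hy = np.clip(np.round((ys + dy) * scale).astype(int), 0, h * scale - 1)
            hx = np.clip(np.round((xs + dx) * scale).astype(int), 0, w * scale - 1)
            np.add.at(acc, (hy, hx), frame)
            np.add.at(hits, (hy, hx), 1)
        hits[hits == 0] = 1   # leave unsampled cells at zero instead of dividing by zero
        return acc / hits

With enough slightly shifted frames the fine grid fills in, which is exactly the over-sampling benefit I was arguing for.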


1. S. K. Nayar, "Computational Cameras: Redefining the Image," IEEE Computer Magazine, Special Issue on Computational Photography, pp. 30-38, Aug. 2006.


2. R. Raskar, MAS.963, MIT Lectures, Fall 2008.


3. R. Raskar and J. Tumblin, Computational Photography (A.K. Peters, Ltd., 2009).

Comments:

It's indeed a fascinating field. However, when we get to the point where we manipulate lighting after the fact, I am not sure we'll still be doing "photography" in the etymological sense of the term. We'll probably end up playing with 4-dimensional data sets (conventional 2D + multiple focal planes + time) out of which we will be able to extract/create an almost infinite number of different representations. It will of course be possible to produce 2D prints or displays of one of those representations, and some could be masterpieces. But, 30 years from now, let's hope we'll have something like dynamic "photographic" quality holograms that capture and are able to render much more of the information we perceive... The field needs a catchier name...

Posted by Pierre Vandevenne on July 29, 2009 at 09:25 PM EDT #

Yes indeed! Mixing in time as a factor offers many possibilities. Cohen and Szeliski of Microsoft Research have written about the "moment camera" as opposed to cameras that capture an instant in time. Nayar and others are working on relighting objects in images, and so on. It is a brave new world.

Posted by Charles on July 29, 2009 at 09:42 PM EDT #

