Appreciation through Understanding
Recently, at Luminous-landscape.com (LL), Ray Maxwell expounded on his view that:

"It is fair to say that at some point in the near future digital sensors will out-resolve all camera lenses at all apertures, either because of lens aberrations or diffraction broadening. At that time the standard CCD and CMOS pixel arrays now in use will not benefit from more pixels, but our cameras will benefit more and more from higher density of circuit elements/gates in their encoders and computers in the years to come. We have now reached the era of computational photography (CP)."
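A quick back-of-the-envelope check (my numbers, not Ray's) makes the diffraction side of this claim concrete. The diameter of the diffraction-limited Airy disk is roughly 2.44 times the wavelength times the f-number, which is easy to compare against a sensor's pixel pitch:

```python
# Rough diffraction arithmetic (illustrative values, not from the quoted text):
# Airy-disk diameter ~ 2.44 * wavelength * f-number.
wavelength_um = 0.55                      # mid-green light, in micrometres
for f_number in (2.8, 8.0, 16.0):
    airy_um = 2.44 * wavelength_um * f_number
    print(f"f/{f_number:g}: Airy disk ~ {airy_um:.1f} um across")
```

At f/8 the disk is already about 10.7 µm across, several times the pixel pitch of current small-sensor compacts, which is exactly the regime the quote describes.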
"A computational camera embodies the convergence of the camera and the computer." (S. K. Nayar [1])
Researchers in the field of CP are largely responsible for the expanding feature sets we find in commercially available cameras. For example, face detection, smile detection, wink detection, in-camera panorama creation, high-dynamic-range imaging, and high-resolution video modes are available even in inexpensive cameras. More important, CP scientists are designing fundamentally different computational cameras for the future. We are in phase one of the digital camera revolution, in which film has been replaced by digital sensors and there is some post-processing of images in cameras and external computers [2]. In phase two we expect to see major changes in the way cameras capture information about the light field: images will be encoded in ways that greatly enhance the possibilities for post-processing. For example, we may be able to change the plane of focus after the fact (plenoptic cameras), effectively remove motion blur (the flutter shutter), and manipulate lighting after capture. These and many more remarkable advances are on the horizon. Phase three goes beyond image encoding to explore artistic expression and higher-level image processing similar to that found in the human brain.
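As a one-dimensional toy illustration of the flutter-shutter idea (my own sketch, not any camera's actual pipeline): if the shutter opens and closes in a pseudo-random binary pattern during the exposure, the motion-blur kernel becomes broadband and invertible, so the blur can be divided out in the frequency domain.

```python
import numpy as np

# Toy 1-D sketch of coded exposure ("flutter shutter"): a pseudo-random
# open/close pattern makes the motion-blur kernel invertible, whereas a
# conventional box blur destroys information.
rng = np.random.default_rng(0)

signal = rng.random(64)                      # stand-in for one scan line of the scene
code = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1], float)

blurred = np.convolve(signal, code)          # motion blur under the coded shutter

# Recover the scan line by dividing out the code in the frequency domain.
n = blurred.size
est = np.fft.ifft(np.fft.fft(blurred) / np.fft.fft(code, n)).real[:signal.size]

print(np.allclose(est, signal, atol=1e-6))   # True: the coded blur is invertible
```

A plain box-shutter kernel (all ones) has near-zeros in its spectrum, so the same division would blow up; the coded pattern is what keeps the inverse well behaved.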
I intend to explore the field of CP this year and to write about what it will mean for serious photographers. However, that will take me some time and will be a much more extensive project than simply responding to the notion quoted above.
P.S. Some of you may recall that Nathan Myhrvold and I debated the usefulness of more pixels on the LL site a couple of years ago. I argued for the benefits of increased pixel density, even in the presence of diffraction broadening, because of the advantages of over-sampling. I think that Nathan inadvertently agreed with me by advocating the super-resolution technique, in which increased resolution is obtained by combining multiple images to simulate over-sampling. My final rebuttal was not published on LL but can be found here in my posting of Sunday, March 4, 2007, on the next page, with links to images and a relevant reference.
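The over-sampling argument can be sketched with a toy model (my own illustration, not an example from the debate): combine several low-resolution captures, each offset by a sub-pixel shift, and the fine detail that any single capture misses becomes recoverable.

```python
import numpy as np

# Toy multi-frame super-resolution: four low-resolution "exposures", each
# shifted by a quarter of a coarse pixel, interleave back into one signal at
# 4x the sampling density. Real algorithms must estimate the shifts and solve
# a regularized inverse problem; this only shows why extra shifted samples
# carry extra resolution.
scene = np.sin(np.linspace(0, 4 * np.pi, 32))      # "true" high-res scan line

frames = [scene[k::4] for k in range(4)]           # 4 shifted low-res captures

recovered = np.empty_like(scene)
for k, frame in enumerate(frames):
    recovered[k::4] = frame                        # interleave the samples

print(np.allclose(recovered, scene))               # prints True
```

Each frame alone has only a quarter of the samples and would alias the sine wave; together they reconstruct it exactly, which is the essence of gaining resolution by over-sampling across exposures.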
1. S. K. Nayar, "Computational Cameras: Redefining the Image," IEEE Computer Magazine, Special Issue on Computational Photography, pp. 30–38, August 2006.
2. R. Raskar, MAS.963, MIT Lectures, Fall 2008.
3. R. Raskar and J. Tumblin, Computational Photography (A.K. Peters, Ltd., 2009).
charles in General
12:05PM Jul 29, 2009
Comments [2]
Tags: computational, photography, pixels, sensors
This is one entry in the weblog Photophys.com: The Science of Photography.