Another rant: Why is everybody bashing Canon for developing the 120 megapixel APS-H sensor?

The engineers and scientists at Canon have made another contribution, albeit a small one, to the evolution of digital photography by developing the 120 megapixel APS-H sensor. This is a worthwhile development, and not just for increased resolution! Yet all I see and hear on the internet is derision. It runs something like this: Canon is in a mad pixel race, and there will be no actual gain in resolution because of diffraction. Hogwash!

First, let’s get the resolution question out of the way. The point spread function, basically the broadening of detail in our images, has contributions from lens aberrations, diffraction, pixel size, and other factors; since these contributions are roughly independent, they combine approximately in quadrature. As pixel size decreases, the pixel contribution to the loss of resolution vanishes, but we are not there yet. Film grain is much smaller than pixels, yet even with film some of us preferred the fine-grained variety.
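
As a rough illustration of that quadrature rule of thumb, here is a minimal Python sketch; the wavelength and the example blur numbers are illustrative assumptions, not measurements.

    import math

    def total_blur_um(aberration_um, f_number, pixel_pitch_um, wavelength_um=0.55):
        # Rule of thumb: independent blur contributions add in quadrature.
        diffraction_um = 2.44 * wavelength_um * f_number  # Airy disk diameter
        return math.sqrt(aberration_um**2 + diffraction_um**2 + pixel_pitch_um**2)

    # Illustrative numbers only: 2 um aberration blur, f/4, 2 um pixel pitch.
    print(round(total_blur_um(2.0, 4.0, 2.0), 2), "micrometers")  # ~6.07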

Note that measurements reported on dpreview.com reveal that the Canon G10, with a pixel pitch of 1.7 micrometers, out-resolves the Canon G11, with a pixel pitch of 2.0 micrometers. Some people prefer the G11 because of its slightly higher S/N ratio, but others like the G10 because, when there is enough light, it gives better resolution. The very popular Panasonic LX-3 has a pixel pitch of about 2.2 micrometers, which is very close to that of the Canon 120 megapixel sensor. There is always a trade-off between resolution and the signal-to-noise ratio.
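
For a sense of scale, the sensor’s Nyquist limit follows directly from the pixel pitch, since one line pair spans two pixels. A quick Python check, treating pitch as the only variable:

    # Nyquist limit: one line pair spans two pixels, so lp/mm = 1000 / (2 * pitch_um).
    for camera, pitch_um in [("G10", 1.7), ("G11", 2.0), ("LX-3", 2.2)]:
        print(f"{camera}: {1000 / (2 * pitch_um):.0f} lp/mm")  # 294, 250, 227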

And what about diffraction? Come on, people, diffraction is not new. It was in the physics and photography books I studied in the 1950s, e.g., Lenses in Photography by R. Kingslake. The diffraction spot size is proportional to the F-number. It is an absolute size, independent of the sensor, so it is more significant in smaller sensors. In order to maintain the relative diffraction contribution, for example to hold the spot size at 1/1500 of the sensor width, we must decrease the F-number so that it tracks the sensor size. What else is new?
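
To make that concrete, here is a small Python sketch using the usual Airy-disk estimate (spot diameter of roughly 2.44 times the wavelength times the F-number) to find the largest F-number that keeps the spot at 1/1500 of the sensor width; the sensor widths are approximate.

    # Airy disk diameter: d = 2.44 * wavelength * N. Holding d at width/1500
    # forces the F-number to scale with the sensor width.
    WAVELENGTH_MM = 0.00055  # green light, 550 nm

    for sensor, width_mm in [("full frame", 36.0), ("APS-H", 28.1), ("1/1.7-inch", 7.6)]:
        spot_mm = width_mm / 1500
        n_max = spot_mm / (2.44 * WAVELENGTH_MM)
        print(f"{sensor}: spot {spot_mm * 1000:.1f} um -> max F-number ~ f/{n_max:.1f}")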

But resolution is not the point! A 30” by 40” print at 300 pixels/inch (with no “up-rezzing”) requires 108 megapixels, and that is beyond what most professionals need, at least for now. So what can be done with the extra pixels? A look at developments in computational photography indicates that quite a lot can be done. First there is the need to characterize the light field and not just the pattern of intensities on the sensor surface. That requires determining the direction of the light rays arriving at each pixel, not just the intensity of the light.
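
As a quick check of the arithmetic behind that 108 megapixel figure:

    # Pixels needed for a 30" x 40" print at 300 pixels/inch, no up-rezzing.
    width_px, height_px = 30 * 300, 40 * 300         # 9000 x 12000
    print(width_px * height_px / 1e6, "megapixels")  # 108.0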

Consider the plenoptic camera. In one version that has been demonstrated, each pixel in the sensor plane is replaced with a microlens that images the aperture of the camera lens onto a second array of pixels. The layer of light-detecting pixels lies directly behind the layer of lenses and provides a group of pixels behind each microlens. The pixel array contains perhaps 25 times as many elements as the lens array. This arrangement provides information that permits neat things to be done in post-processing. For example, one can refocus an image, produce a stereo image, or generate super-resolution!
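
As a sketch of how such data permits refocusing, here is the basic shift-and-add idea in Python: shift the sub-aperture views against one another and sum. The 5 by 5 grid of views (25 pixels per microlens, as above) and the shift value are illustrative assumptions, not a description of any particular camera’s processing.

    import numpy as np

    def refocus(light_field, shift):
        # light_field has shape (U, V, H, W): one H x W sub-aperture image per
        # (u, v) position in the lens aperture. Choosing the shift (pixels of
        # disparity per aperture step) selects the plane that ends up in focus.
        U, V, H, W = light_field.shape
        out = np.zeros((H, W))
        for u in range(U):
            for v in range(V):
                du = int(round((u - U // 2) * shift))
                dv = int(round((v - V // 2) * shift))
                out += np.roll(light_field[u, v], (du, dv), axis=(0, 1))
        return out / (U * V)

    # Illustrative usage, with random data standing in for a captured light field.
    lf = np.random.rand(5, 5, 64, 64)
    image = refocus(lf, shift=1.0)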

Another possibility is to modify the sensor to capture an HDR image in one shot by covering the sensor with a pattern of neutral density (ND) filters. No current or expected sensor has a high enough dynamic range to capture the contrast ratios in all possible scenes. (Recall that the dynamic range of nature can reach perhaps 20 stops, and a bright day with radiant clouds and dark shadows might present 15 stops.) The built-in filters, or some variation on that theme, would provide enough information for the construction of an HDR image from a single shot.
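
A minimal sketch of the reconstruction idea, assuming a repeating 2 by 2 mosaic of ND filters with known transmittances: undo each pixel’s attenuation, discard clipped samples, and fill the gaps from valid neighbors. The mosaic layout and filter values here are assumptions for illustration.

    import numpy as np

    # Assumed 2 x 2 repeating pattern of ND filter transmittances (1 = clear):
    # 0, 2, 4, and 6 stops of attenuation.
    PATTERN = np.array([[1.0, 0.25],
                        [0.0625, 0.015625]])

    def radiance_estimate(raw, saturation=1.0):
        # Recover scene radiance from a spatially varying exposure capture.
        h, w = raw.shape
        t = np.tile(PATTERN, (h // 2, w // 2))       # per-pixel transmittance
        valid = raw < saturation                     # drop clipped pixels
        return np.where(valid, raw / t, np.nan)      # NaNs: fill from neighbors

    # Illustrative usage on synthetic data.
    scene = np.random.rand(4, 4) * 10                # high-dynamic-range scene
    raw = np.clip(scene * np.tile(PATTERN, (2, 2)), 0, 1)
    print(radiance_estimate(raw))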

This is not all that sensors making use of high-density pixels will be able to do in the near future, but I hope it makes the point. Imaginative new ways of encoding images in space and time are already appearing in the feature sets of new digital cameras. With digital photography, we haven’t seen anything yet.

Additional Reading:

C.S. Johnson, Jr., Science for the Curious Photographer (A.K. Peters, 2010), Chaps. 16 and 17.

C. Bloch, The HDRI Handbook (Rocky Nook, 2007), Chap. 3.

E.H. Adelson and J.Y.A. Wang, “Single Lens Stereo with a Plenoptic Camera,” IEEE Trans. Pattern Anal. Machine Intelligence 14, 99-106 (1992).

R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light Field Photography with a Hand-held Plenoptic Camera,” Stanford Tech. Rept. CTSR 2005-02, 1-11 (2005).

A. Lumsdaine and T. Georgiev, “Full Resolution Light Field Rendering,” Adobe Tech. Rept., Adobe Systems, Inc., Jan. 2008.
