Scientists at Duke University have built an experimental camera that allows the user—after a photo is taken—to zoom in on portions of the image… The Wall Street Journal has the scoop on this billion-pixel camera:
The Duke device, called Aware-2, is a long way from being a product. The current version needs lots of space to house and cool its electronic boards; it weighs 100 pounds and is about the size of two stacked microwave ovens. It also takes about 18 seconds to shoot a frame and record the data on a disk.
The $25 million project is funded by the Defense Advanced Research Projects Agency, part of the U.S. Department of Defense. The military is interested in high-resolution cameras as tools for aerial or land-based surveillance.
How does the camera work?
The secret of the Duke device is a spherical lens, a design first proposed in the late 19th century. Although very effective spherical lenses exist naturally—the human eye, for example—researchers have long found it tricky to accurately focus images using lab-made versions. The Duke group overcame the challenge by installing nearly 100 microcameras, each with a 14-megapixel sensor, on the outside of a small sphere about the size of a football. The setup yields nearly 100 separate—but accurately focused—images. A computer connected to the sphere then stitches them together to create a composite whole.
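The numbers check out, too: nearly 100 sensors at 14 megapixels each comes to roughly 1.4 billion pixels. As for the stitching step, it can be sketched in miniature. The toy below (plain NumPy, a hypothetical 3×3 grid of microcamera tiles, with none of the overlap handling or lens correction a real system like Aware-2 would need) just shows the basic idea of assembling many small, individually focused images into one composite.

```python
import numpy as np

def stitch_tiles(tiles, grid_shape):
    """Assemble a grid of equally sized image tiles into one composite.

    tiles: list of 2-D arrays in row-major order; grid_shape: (rows, cols).
    A real gigapixel pipeline must also register overlapping tiles and
    correct per-lens distortion; this sketch assumes perfectly aligned,
    non-overlapping tiles.
    """
    rows, cols = grid_shape
    # Group the flat tile list into rows, then let NumPy block-assemble them.
    tile_rows = [tiles[r * cols:(r + 1) * cols] for r in range(rows)]
    return np.block(tile_rows)

# Nine 100x100 "microcamera" frames -> one 300x300 composite.
tiles = [np.full((100, 100), i, dtype=np.uint8) for i in range(9)]
composite = stitch_tiles(tiles, (3, 3))
print(composite.shape)  # (300, 300)
```

Scale the same idea up to ~100 tiles of 14 megapixels apiece and you get the gigapixel composite described above, though the hard part in practice is the alignment this sketch assumes away.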
The current limitations? Besides its weight, it shoots only in black and white.
Personally, I don’t think this is the future of photography. The project is technically interesting and will have implications for photographers who shoot large-scale panoramas. But for the average photographer, this is a step in the wrong direction: changing the image after the fact is counterintuitive to how photographers should work, namely framing the image in camera.