Researchers at Stanford University are developing a multi-aperture image sensor that groups the pixels into 16×16 arrays, then puts a tiny lens on each group. Built this way, their 3-Mpixel image sensor includes a total of 12,616 lenses, compared to the shabby single lens commonly found in cameras. The benefits are plentiful. The simpler electronic design means the pixels can be 0.7µm, much smaller than Kodak's 1.4µm pixels that I posted about earlier. Camera modules incorporating this technique can be made even smaller, cheaper, and more robust, and, most importantly, can capture better pictures. Instead of taking a single snapshot, the camera actually takes 12,616 small pictures, which can be combined with digital image processing techniques to capture 3D image data and to accurately control depth of field, focus, and more. With enough image processing power available in the camera, this opens up a whole world of new possibilities.
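To give a feel for how 3D data falls out of all those sub-images: neighboring lens groups see the scene from slightly shifted viewpoints, and the shift (disparity) between their sub-images is inversely related to depth. Here is a minimal toy sketch of that idea in Python, using simple sum-of-absolute-differences matching on synthetic data. This is my own illustration of depth-from-parallax, not the Stanford processing pipeline; the function names and the block-matching approach are assumptions.

```python
import numpy as np

def estimate_disparity(left, right, max_disp=8):
    """Estimate the integer horizontal shift between two sub-aperture
    images by minimizing the mean absolute difference over the overlap.
    The recovered disparity is inversely proportional to scene depth
    (depth ~ focal_length * baseline / disparity)."""
    h, w = left.shape
    costs = []
    for d in range(max_disp + 1):
        a = left[:, d:].astype(float)       # columns d..w-1 of the left view
        b = right[:, :w - d].astype(float)  # overlapping columns of the right view
        costs.append(np.abs(a - b).mean())  # matching cost at this shift
    return int(np.argmin(costs))

# Synthetic example: two "sub-aperture" views of the same row of pixels,
# offset by a known 3-pixel parallax.
rng = np.random.default_rng(0)
base = rng.random((16, 24))
true_disparity = 3
left = base[:, :16]
right = base[:, true_disparity:true_disparity + 16]

print(estimate_disparity(left, right))  # recovers 3
```

A real system would do this per image block rather than globally, handle sub-pixel shifts, and fuse many lens pairs, but the core principle is the same: once the disparity map is known, the camera can synthesize refocused images or a depth map after the fact.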

A high-level overview of the work can be found here, and their technical ISSCC paper can be found here.