Very very small pixels

Researchers at Stanford University are developing a multi-aperture image sensor that groups the pixels into 16×16 arrays, then puts a tiny lens on each group. Built this way, their 3Mpixel image sensor includes a total of 12,616 lenses, compared to the single, shabby lens commonly found in cameras. The benefits are plentiful. The simpler electronic design means the pixels can be 0.7µm, much smaller than Kodak’s 1.4µm pixels that I posted about earlier. Camera modules incorporating this technique can be made even smaller, cheaper, and more robust, and, most importantly, they grab better pictures. Instead of taking a single snapshot, the camera actually takes 12,616 pictures, which can be combined with digital image processing techniques to capture 3D image data, accurately control depth of field, focus, and so on. With enough image processing power available in the camera, this opens up a whole world of new possibilities.

A high level overview of the work can be found here and their technical ISSCC paper can be found here.

Pixels better than real life?

According to this survey by Motorola, Americans would rather watch the Super Bowl on an HDTV than in person. “The survey results really speak to the popularity of high-definition programming,” said Doug Means from Motorola.

That’s a lame study and a lame statement. The results of the survey don’t say anything about the quality of HD video and how close it gets to being there. Yes, quite a few people would rather sit in their homes than take a plane and sit on a plastic seat for hours watching the game. Yes, a big-screen TV presents a much better picture than an old Philco Predicta. But no, nothing compares to being there. And I can say that without ever having been to a Super Bowl game.

Pixel compression over time

Here’s an interesting graph from Harmonic that I sometimes use in presentations. I often misplace it, so I figured I’d stick it here on this blog. That way I can always find it. The graph shows that video compressors are not all the same. They can be improved over time. This is an important fact for chip makers, since developing a complex chip these days often takes well over a year, and is then sold in the market for a year or so after. The longer you can keep your chip in the market, the more you will sell! If your chip includes a software programmable video subsystem, you can still take advantage of algorithmic improvements, just like Harmonic did, and deliver better video quality.

Very small pixels

A few weeks ago, Kodak announced their new 5Mpixel image sensor at the Mobile World Congress. The sensor has a 1.4 micron pixel size (1.4 by 1.4 micron), which means the sensor can fit in a 4x4mm camera module. That is about the size of a regular black ant; I am sure a bigger ant could carry such a camera. The Kodak sensor has some novelties. There is a new color filter pattern, which adds a “white” (panchromatic) pixel instead of just measuring the amounts of red, green and blue. That will require quite some changes to the image processing algorithms. Another novelty is that the sensor measures darkness instead of light, which apparently can be implemented more accurately in silicon. Like most new sensor introductions, Kodak promises higher quality images than anyone else.

Micron just announced that it spun its image sensor business out into a new company called Aptina. The business will be run by Micron’s Bob Gove, who was previously at VLIW processor company Equator. Micron says they have already sampled an even smaller 1.2 micron pixel, which in the same 4x4mm tiny camera module would yield a 7Mpixel sensor.
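The pixel arithmetic behind these announcements is easy to sanity-check: for a fixed optical area, resolution scales with the inverse square of the pixel pitch. A quick sketch (the square active area is my simplifying assumption; the press releases don’t give exact sensor dimensions):

```python
# Sanity-check the pixel-pitch arithmetic from the Kodak and Micron
# announcements. Assumption: a square active area, since the exact
# sensor dimensions were not given.

def active_area_side_mm(megapixels, pixel_pitch_um):
    """Side length of a square sensor holding the given pixel count."""
    pixels_per_side = (megapixels * 1e6) ** 0.5
    return pixels_per_side * pixel_pitch_um / 1000

# Kodak: 5Mpixel at 1.4 micron needs a ~3.1mm square active area,
# which indeed fits inside a 4x4mm camera module.
print(f"Kodak active area: {active_area_side_mm(5, 1.4):.2f} mm per side")

# Micron/Aptina: same area, 1.2 micron pixels.
# Resolution scales with (old_pitch / new_pitch)^2.
print(f"Scaled resolution: {5 * (1.4 / 1.2) ** 2:.1f} Mpixel")
```

The second print comes out at roughly 6.8Mpixel, which matches Micron’s rounded-up 7Mpixel claim for the same module size.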


$/pixel article in Taiwanese magazine

Indexed pixels

Pixel etymology

Did you know the word pixel is derived from “picture element”? Here’s a long video that details a search for the history of the pixel, by Richard Lyon. Lots of well known names in the field of video and graphics are mentioned. To skip over the introduction go to 2:20.

Perfect pixel patent

As early as 1929, Ray Davis Kell described a form of video compression and was granted a patent for it. He wrote, “It has been customary in the past to transmit successive complete images of the transmitted picture. [...] In accordance with this invention, this difficulty is avoided by transmitting only the difference between successive images of the object.” Although it would be many years before this technique would actually be used in practice, it is still a cornerstone of many video compression standards today. It’s the reason why video using MPEG can be compressed roughly a factor of 10 better than JPEG-compressed still images.
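Kell’s idea is simple enough to sketch: send the first frame whole, then send only the per-pixel difference for each frame after it. A minimal toy version in Python (frames are flat pixel lists here; real codecs work on 2D blocks and add motion compensation, transforms and entropy coding on top):

```python
# Toy illustration of Kell's idea: transmit frame differences, not frames.

def encode(frames):
    """First frame is sent as-is; each later frame as a difference."""
    stream = [frames[0]]
    for prev, cur in zip(frames, frames[1:]):
        stream.append([c - p for p, c in zip(prev, cur)])
    return stream

def decode(stream):
    """Rebuild the frames by accumulating the differences."""
    frames = [stream[0]]
    for diff in stream[1:]:
        frames.append([p + d for p, d in zip(frames[-1], diff)])
    return frames

frames = [[10, 10, 10], [10, 12, 10], [10, 12, 11]]
assert decode(encode(frames)) == frames
# The transmitted differences are mostly zeros — and long runs of
# zeros are exactly what compresses well.
```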

What technique can provide another magnitude of improvement in video compression?

My prediction is that we need to shift focus from optimizing for the best peak signal-to-noise ratio (PSNR) to optimizing for psycho-visual perception: asking “how good do the compressed images look?” instead of minimizing the mathematical difference between the original and compressed imagery.
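For reference, PSNR — the metric I’m arguing we should move beyond — is a purely pixel-wise mathematical difference. A minimal sketch for 8-bit images (flat pixel lists, for simplicity), which also shows why it’s a poor stand-in for perception:

```python
import math

# PSNR compares images purely by pixel-wise squared error;
# it says nothing about how the errors are perceived by a viewer.

def psnr(original, compressed, max_value=255):
    """Peak signal-to-noise ratio in dB for 8-bit pixel lists."""
    mse = sum((o - c) ** 2 for o, c in zip(original, compressed)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_value ** 2 / mse)

# Two distortions with identical PSNR can look very different:
# a barely visible 1-level shift everywhere, versus one pixel
# that is off by 10 levels. Both give MSE = 1 over 100 pixels.
flat = [100] * 100
print(psnr(flat, [101] * 100))
print(psnr(flat, [100] * 99 + [110]))
```

Both prints report the same ~48dB, even though one error is invisible and the other is a visible blemish — exactly the kind of gap a psycho-visual metric should capture.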