Do you ever wonder whom we have to thank for selfies, grumpy cat memes, and a look at Kim K’s butt? (eew) Have you ever been curious about how a digital camera actually takes a picture? For answers to these provocative questions, read on.
On October 17, 1969, a couple of really smart guys named George Smith and Willard Boyle sat down at Bell Labs and, in less than an hour, sketched out a plan for a charge-coupled device (CCD). Although they were primarily interested in making a new kind of semiconductor memory device for computers, they realized that CCD technology was applicable to imagery as well as memory, and our lives changed forever. For inventing the imaging semiconductor circuit, they shared the 2009 Nobel Prize in Physics.
The primary difference between the original film camera and today’s digital camera is that the film itself has been replaced by a digital image sensor. Steven Sasson, an engineer at Eastman Kodak, built the first camera using a digital image sensor (CCD) in 1975. Today, most cameras use a complementary metal-oxide-semiconductor (CMOS) image sensor because the CMOS is cheaper to make and uses less power. However, both CCD and CMOS image sensors capture light in basically the same way.
Just one more “small” detail before we figure out how all this works. Thanks to the marketing wars of camera manufacturers, everyone has heard the term megapixel (often abbreviated as MP as in a 12MP camera). A megapixel (MP) is a million pixels and a pixel (short for a picture element) is the smallest element of a digital image. Just look at your computer screen with a magnifying glass and you can see the individual pixels. For each pixel, there is a corresponding photosite on your image sensor that records the brightness and color of that pixel.
If your head has not yet exploded from all this techie-physics, please continue reading for the really good stuff. The image sensor on a 12MP camera has approximately 2,848 rows, each containing 4,256 photosites etched on its surface. Think of each photosite as a miniature solar collector. When you press the shutter release on your camera, the image passes through the lens and is projected on the surface of the image sensor. Each photosite collects the photons of light at its specific location and converts the photons into electricity. The image sensor measures the amount of electricity and then converts the resulting measurement to binary code. In the 8-bit world, that measurement can distinguish 256 (that is, 2⁸) shades of gray.
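If you like seeing the idea in code, here is a minimal sketch of that photons-in, 8-bit-number-out conversion. The full-well capacity (how many photons a photosite can hold before it saturates) is an invented example figure, not the spec of any real sensor.

```python
FULL_WELL_CAPACITY = 20_000  # max photons one photosite can hold (assumed figure)
BIT_DEPTH = 8                # 8 bits -> 2**8 = 256 distinct brightness levels

def photons_to_level(photons: int) -> int:
    """Convert a photon count into one of 256 brightness levels."""
    photons = min(photons, FULL_WELL_CAPACITY)  # clip at saturation (pure white)
    return round(photons / FULL_WELL_CAPACITY * (2**BIT_DEPTH - 1))

# The sensor laid out as in the text: 2,848 rows x 4,256 photosites.
print(2_848 * 4_256)             # 12,121,088 photosites -- about 12.1 MP
print(photons_to_level(0))       # 0   (pure black)
print(photons_to_level(20_000))  # 255 (pure white)
```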
But you may have noticed that your pictures are in color, and you find yourself asking, “How does the image sensor record color?” Patience you must have, my young Padawan. To answer this question we must go back in time to 1861, when a physicist named James Clerk Maxwell demonstrated that color pictures could be made using red, green, and blue filters with black-and-white film. He asked a photographer named Thomas Sutton to take three pictures of a tartan ribbon, each time using a different colored filter. He then projected the three black-and-white pictures onto a screen with three projectors using the same filters. When aligned, the projected image formed a full-color picture, and thus color photography was born.
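Maxwell's trick boils down to something very simple: three black-and-white pictures are just grids of brightness values, and aligning them through the three filters amounts to zipping the grids together pixel by pixel. The tiny 2×2 grids below are invented sample data, not Sutton's actual exposures.

```python
# Three black-and-white exposures of the same scene, one per filter:
red   = [[200, 40], [180, 30]]
green = [[ 60, 50], [ 70, 40]]
blue  = [[ 50, 45], [ 60, 35]]

# "Project and align" = combine the three brightness grids, pixel by pixel.
color_image = [
    [(r, g, b) for r, g, b in zip(red_row, green_row, blue_row)]
    for red_row, green_row, blue_row in zip(red, green, blue)
]
print(color_image[0][0])  # (200, 60, 50) -- a reddish pixel of the ribbon
```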
Travel forward in time to the present and you will discover that those same filters are being used on your current image sensor (I guess those old laws of physics still apply today). Each photosite on the image sensor sits under a red, green, or blue filter, which lets it measure 256 tones of that one color. Combine the three channels and you get 256 × 256 × 256 — roughly 16.7 million — possible colors.
Another smart person working for Kodak, named Bryce Bayer, invented the Bayer filter in 1974. Because the human eye is more sensitive to the color green, his filter employs a checkerboard mosaic pattern that is 50% green, 25% red, and 25% blue, yielding sharper images and truer colors. Roughly 95% of digital cameras today use the Bayer mosaic filter. To create a color picture, the image processor uses a process called interpolation: it computes the full color of each pixel by combining the single measurement from that pixel's own photosite with the measurements of the photosites surrounding it.
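Here is a deliberately naive sketch of that interpolation (the camera folks call it demosaicing): each photosite recorded only one color, so the missing two are averaged from the nearest neighbors that did record them. Real image processors use far more sophisticated algorithms; the 4×4 raw values below are invented sample data.

```python
# RGGB Bayer pattern: even rows alternate R,G; odd rows alternate G,B.
def bayer_color(row, col):
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

def demosaic(raw):
    """Fill in the two missing colors at each photosite by neighbor averaging."""
    h, w = len(raw), len(raw[0])

    def avg_of(row, col, want):
        # Average the photosites in the 3x3 neighborhood that recorded `want`.
        vals = [raw[r][c]
                for r in range(max(0, row - 1), min(h, row + 2))
                for c in range(max(0, col - 1), min(w, col + 2))
                if bayer_color(r, c) == want]
        return sum(vals) // len(vals)

    return [[tuple(raw[r][c] if bayer_color(r, c) == ch else avg_of(r, c, ch)
                   for ch in "RGB")
             for c in range(w)]
            for r in range(h)]

raw = [  # one brightness measurement per photosite
    [120,  60, 118,  62],
    [ 58,  40,  61,  42],
    [122,  63, 119,  60],
    [ 59,  41,  62,  43],
]
print(demosaic(raw)[0][0])  # full (R, G, B) for the top-left pixel
```

Notice that the top-left photosite only measured red; its green and blue values are borrowed from its neighbors, which is exactly the trade-off the Bayer design makes in exchange for a single sensor.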
This is a necessarily simplified explanation of how a digital camera takes a picture. When you snap that selfie with your cell phone (which has an image sensor measuring only 4.54 x 3.42 mm), the sensor captures the color and brightness data of your image and passes that information to your image processor, which interpolates, compresses, stores, and displays the data as an image that you can see. To do this, the image processor performs millions of calculations faster than you can retract that “duck face.” I find that amazing.
If you find all this digital stuff amazing too, please leave a comment or question in the comments section below.