
What is pixel binning, and what does it mean for your mobile photography?

Five years ago, smartphone cameras were very different. Back in 2017, the Pixel 2, Galaxy Note8, and iPhone 8 all had 12(ish)-megapixel sensors powering their rear cameras. Fast forward to today: iPhones still have 12-megapixel cameras (for now), but both Pixel 6 phones and the Galaxy S22 and S22+ have 50-megapixel primary sensors, and the S22 Ultra features a 108-megapixel shooter. We’ve even heard Motorola has plans for a phone with a 200-megapixel camera.

Megapixel counts on the best Android phone cameras have ballooned into the hundreds, but if you’ve used any of these high-megapixel cameras, you may have noticed that they don’t actually kick out 50- or 108-megapixel images by default; the Pixel 6 doesn’t even have the option to save full-resolution shots. So where are all those pixels going?


This is a phenomenon known as “binning.” In data processing, binning is a process that, in a nutshell, sorts data points into groups (or “bins”). In digital photography in particular, the data points being binned are individual pixels. Depending on the full resolution of the sensor in your phone’s camera, pixels are binned into groups of either four or nine (you might see this described as “tetra-binning” or “nona-binning,” respectively). The Galaxy S22 Ultra, like several high-end Samsung phones before it, bins groups of nine pixels, using its 108-megapixel sensor to capture 12-megapixel images (the math checks out: 108 ÷ 9 = 12). The Pixel 6 and 6 Pro, meanwhile, each bin sets of four pixels to create 12.5-megapixel photos (50 ÷ 4 = 12.5).
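If you want to sanity-check that arithmetic, a couple of lines of Python will do it. This is just the division spelled out; the function name is for illustration only, not anything a camera app actually exposes:

```python
# Binned resolution is simply full resolution divided by the number of pixels per bin.
def binned_megapixels(full_mp: float, pixels_per_bin: int) -> float:
    """Effective megapixel count after binning."""
    return full_mp / pixels_per_bin

print(binned_megapixels(108, 9))  # Galaxy S22 Ultra, nona-binning -> 12.0
print(binned_megapixels(50, 4))   # Pixel 6 / 6 Pro, tetra-binning -> 12.5
```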


But why do this at all? I put the question to Judd Heape, Qualcomm’s vice president of product management for camera and computer vision. The answer largely comes down to two things: light sensitivity and space constraints.

On their surfaces, camera sensors have millions of pixels — discrete units that sense light. As the resolution of smartphone cameras increases, so too does the number of pixels on those sensors’ surfaces. But while cramming more pixels into the same physical area makes your phone’s camera more capable of seeing fine detail, it simultaneously limits how well that camera can function in dim settings.

“Small pixels can’t capture as much light,” Heape explains. “It’s basic physics.” And modern smartphone pixels are small; it’s not uncommon to see pixel sizes around 1 μm — a single micrometer (or micron). To put that into context, an average strand of human hair is something like 80 μm thick. Pixel size matters because the smaller a pixel is, the less surface area it has to collect incoming light; all else being equal, a sensor with 0.8-μm pixels will take a dimmer picture than a sensor with 1.2-μm pixels.
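As a rough back-of-the-envelope check — assuming light-gathering ability simply tracks a pixel’s surface area and ignoring everything else that goes into sensor design — the gap between those two sizes is bigger than it looks:

```python
# Surface area scales with the square of pixel size, so a modest bump in pixel
# size buys a surprisingly large increase in light-gathering area.
small_pixel, large_pixel = 0.8, 1.2  # pixel sizes in micrometers, as cited above

area_ratio = (large_pixel / small_pixel) ** 2
print(f"{large_pixel} um pixels collect ~{area_ratio:.2f}x the light of {small_pixel} um pixels")
# -> roughly 2.25x, all else being equal
```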


There are a few things manufacturers can do to combat this. Many smartphone cameras combine information from multiple frames, using software to create a single image that actually contains data from several photos. There’s also the option to use a physically larger sensor, allowing each pixel more surface area to collect light. Google used a comparatively huge 1/1.31″ sensor for the primary camera in the Pixel 6 series, which afforded it relatively large pixels and a high megapixel count. But this approach requires devoting more internal space to camera hardware, which means you end up with either less room for other parts — like the battery — or a unique camera bump, like the one on the newest Pixel phones.
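The multi-frame idea is worth a quick illustration. This is only a toy sketch of simple frame averaging — real pipelines do far more, including alignment, merging in the raw domain, and tone mapping — but it shows why stacking helps: averaging several noisy captures of the same scene knocks down random noise.

```python
import numpy as np

# Toy sketch: average several noisy captures of the same (flat) scene.
rng = np.random.default_rng(0)
scene = np.full((64, 64), 100.0)                                      # the "true" brightness
frames = [scene + rng.normal(0, 10, scene.shape) for _ in range(8)]   # noisy captures

stacked = np.mean(frames, axis=0)
print(np.std(frames[0] - scene))  # noise in a single frame (~10)
print(np.std(stacked - scene))    # noise after stacking 8 frames (~3.5)
```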


Pixel binning, on the other hand, combines adjacent pixels to create artificially large “superpixels” that are more sensitive to light than their constituent pixels are on their own. In most digital cameras, each pixel on an image sensor filters light to collect only certain wavelengths — broadly speaking, 25 percent of the pixels are tuned to red light, 25 percent are tuned to blue light, and 50 percent are tuned to green light (green gets extra representation because the human eye is more sensitive to green light than it is to other colors). When a phone bins pixels, its image signal processor (ISP) averages the input from sets of four (or nine, in the case of nona-binning) neighboring like-colored pixels to generate image data. The result, Heape says, is a trade-off: “Resolution goes down, light sensitivity goes up.”
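Here’s a minimal sketch of what that averaging step looks like, assuming a Quad Bayer-style layout in which each 2×2 block of pixels sits under the same color filter. Real ISPs do this in dedicated hardware alongside remosaicing and noise reduction, so treat this purely as an illustration of the idea:

```python
import numpy as np

def tetra_bin(raw: np.ndarray) -> np.ndarray:
    """Average each 2x2 block of like-colored pixels into one 'superpixel'."""
    h, w = raw.shape
    blocks = raw.reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3))

# A toy 4x4 "sensor readout" becomes a 2x2 binned result at a quarter of the resolution.
raw = np.arange(16, dtype=float).reshape(4, 4)
print(tetra_bin(raw))
```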


You can’t quite replicate the low-light performance of physically large pixels by combining smaller pixels; as Heape put it, “The distance between them isn’t infinitely small,” so binning pixels can cause additional artifacting. Remember we’re dealing with fractions of a literal hair’s breadth, though; distances between pixels are microscopic, and software’s gotten very good at filling in tiny data gaps introduced by techniques like pixel binning.

Nona-binning.

Because pixel binning can largely make up for the low-light deficiencies inherent to sensors with small pixels, it also means that features that depend on high-megapixel resolutions don’t have to be exclusive to phones with physically enormous camera sensors. The S22 Ultra, for example, supports 8K video recording — which is physically impossible on a more traditional 12-megapixel camera sensor. All those megapixels are also great for punching in without using a dedicated telephoto lens, as Heape explains: “In adequate lighting conditions, the high-resolution capabilities of the sensor can be leveraged to achieve excellent quality digital zoom.” But because the S22 Ultra is also able to bin groups of nine pixels together to kick out 12-megapixel stills, its low-light performance is considerably better than what you’d see out of a lower-resolution camera with pixels of the same 0.8-μm size.
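The 8K point is easy to verify with quick arithmetic — this is just the frame dimensions, with no claims about Samsung’s actual video pipeline:

```python
# An 8K UHD frame is 7680 x 4320 pixels -- about 33 megapixels per frame,
# which is why a 12-megapixel sensor can't capture it natively.
width, height = 7680, 4320
print(width * height / 1e6)  # -> 33.2 (megapixels)
```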


Long story short, pixel binning is a creative workaround to the physical limitations imposed by ever-increasing megapixel counts on sensors that have to remain tiny to fit inside our phones. It’s rapidly becoming the industry standard, and it’s not hard to see why: It helps us get visually accurate photos in lighting that would otherwise require noise-inducing high ISO or blur-prone long exposure times. It’s not quite magic, but it is very clever engineering — and really, aren’t they pretty much the same thing?
