
What does pixel binning mean for smartphone photography, and what does it do?

Smartphone cameras looked different just a few years ago. In 2017, the rear cameras of the Google Pixel 2, Samsung Galaxy Note8, and Apple iPhone 8 all used 12(ish)-megapixel sensors. Today's flagships, from the Google Pixel 7 Pro to the Samsung Galaxy S22 and Galaxy S22+, typically have 50-megapixel primary sensors, and the Galaxy S22 Ultra goes further with a 108-megapixel camera. Even Apple has gotten wise, giving its newest iPhone 14 Pro a 48-megapixel primary camera.

Megapixel counts on Android phone cameras have skyrocketed into the hundreds. But if you've used any of these high-megapixel cameras, you might have noticed that they don't produce 50- or 108-megapixel photographs by default. The Google Pixel 7 doesn't even offer an option to save photos at full resolution. So where are all those pixels going?

Pixel binning – what is it?

Pixel binning is the process of combining several adjacent pixels into a single "superpixel." Binning is a technique from data processing that groups data points into buckets (or bins). In digital photography, the data points being binned are individual pixels, also known as photosites.

Pixels are divided into bins of four or nine, depending on the full resolution of the image sensor in your phone's camera (you might see this described as "tetra-binning" or "nona-binning," respectively). The Galaxy S22 Ultra, like several other high-end Samsung phones before it, uses pixel binning to combine groups of nine neighboring pixels, producing 12-megapixel photographs from its 108-megapixel image sensor (the arithmetic works out to 108 ÷ 9 = 12). On the Pixel 7 and Pixel 7 Pro, each bin holds four pixels, yielding 12.5-megapixel images (50 ÷ 4 = 12.5).
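For the arithmetic-minded, here's a tiny Python sketch (purely illustrative, not anything a phone actually runs) of how the binning factor maps a sensor's full resolution to its output resolution:

```python
def binned_megapixels(full_mp: float, pixels_per_bin: int) -> float:
    """Each bin of N pixels collapses into one superpixel."""
    return full_mp / pixels_per_bin

# Nona-binning, Galaxy S22 Ultra style: 108 / 9 = 12
print(binned_megapixels(108, 9))   # 12.0
# Tetra-binning, Pixel 7 style: 50 / 4 = 12.5
print(binned_megapixels(50, 4))    # 12.5
```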

What is the purpose of pixel binning?

But why do this at all? We put that question to Judd Heape, Qualcomm's vice president of product management for camera, computer vision, and video. His answer comes down to two factors: space constraints and light sensitivity.

Camera sensors are covered with countless pixels, the individual light-sensing elements. As smartphone camera sensors increase in resolution, the number of pixels on their surfaces grows. Packing more pixels into the same physical space increases your phone camera's ability to resolve fine detail, but it also hurts the camera's low-light performance and reduces the dynamic range of its photographs.

"Smaller pixels can't collect as much light," Heape says. It's fundamental physics. And modern smartphone pixels are tiny: sizes around 1 μm (a single micrometer, or micron) are not unusual. To put that into perspective, a typical strand of human hair is around 80 micrometers thick.

Larger pixels are generally associated with better image quality, especially in low light. Pixel size matters because the smaller a pixel is, the less surface area it has to collect incoming light. All other things being equal, a sensor with 0.8 μm pixels takes a darker picture than a sensor with 1.2 μm pixels; the larger pixel has (1.2 / 0.8)² = 2.25 times the light-gathering area.

Manufacturers have a few ways to address this. Many smartphone cameras rely on computational photography, capturing several frames in quick succession and using software to merge them into a single image that combines information from all of them. Another option is a physically larger sensor, which gives each pixel more surface area to capture light.
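To illustrate why merging frames helps, here's a toy Python sketch of the simplest possible multi-frame merge: plain averaging of simulated noisy exposures. Real computational-photography pipelines also align frames and reject motion, so treat this only as a demonstration of the underlying noise math:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Toy "scene": a flat gray patch with true brightness 100.
scene = np.full((64, 64), 100.0)

def capture(noise_sigma=10.0):
    """Simulate one noisy exposure (sensor noise modeled as additive Gaussian)."""
    return scene + rng.normal(0.0, noise_sigma, scene.shape)

# Averaging N frames shrinks the noise by roughly a factor of sqrt(N).
frames = [capture() for _ in range(8)]
merged = np.mean(frames, axis=0)

print(f"single-frame noise: {np.std(frames[0] - scene):.2f}")  # ~10
print(f"merged-frame noise: {np.std(merged - scene):.2f}")     # ~3.5
```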

For the main camera of the Pixel 7 series, Google chose a comparatively enormous 1/1.31-inch sensor, giving it both a relatively large pixel size and a high megapixel count. This approach, however, means dedicating more internal space to camera hardware, which leaves either less room for other components, like the battery, or a distinctive camera bump, like the one on Google's most recent Pixel phones.

How does pixel binning work?

Combining neighboring pixels produces artificially large "superpixels" that are more sensitive to light than their individual constituent photosites. In most digital cameras, each pixel on the image sensor sits behind a color filter and captures only certain wavelengths of light: 50 percent of the pixels are tuned to green light, 25 percent to blue, and 25 percent to red (green gets extra representation because the human eye is more sensitive to green light than to other colors).

When a phone employs pixel binning, its image signal processor (ISP) creates superpixels by averaging the input from groups of four (or nine, in the case of nona-binning) nearby like-colored pixels. The outcome is a trade-off, according to Heape: "Resolution goes down, light sensitivity goes up."
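Here's a minimal Python/NumPy sketch of that averaging step for tetra-binning. It assumes a quad-Bayer sensor layout, where all four pixels in each 2 × 2 block share one color filter, so averaging a block means averaging like-colored photosites; a real ISP does considerably more than this:

```python
import numpy as np

def tetra_bin(raw: np.ndarray) -> np.ndarray:
    """Average each 2x2 block of a quad-Bayer mosaic into one superpixel.

    Assumes a quad-Bayer layout: all four pixels in a 2x2 block sit
    under the same color filter, so averaging the block is averaging
    like-colored photosites.
    """
    h, w = raw.shape
    assert h % 2 == 0 and w % 2 == 0, "dimensions must be even"
    # Group pixels into 2x2 blocks, then average within each block.
    blocks = raw.reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3))

# Toy 4x4 "sensor readout" becomes a 2x2 binned output:
raw = np.arange(16, dtype=np.float64).reshape(4, 4)
print(tetra_bin(raw))  # resolution drops by a factor of 4
```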

Combining smaller pixels approximates the low-light performance of physically large pixels, but doesn't quite match it. Because the distance between the binned pixels isn't infinitely small, as Heape described it, pixel binning can result in more artifacting. But keep in mind that we're talking about fractions of the width of a hair: pixel distances are tiny, and software has gotten better at filling in the minuscule data gaps created by methods like pixel binning.

It also means that features that depend on high megapixel counts aren't restricted to phones with physically gigantic camera sensors, since pixel binning can largely compensate for the low-light shortcomings of small-pixel sensors. The S22 Ultra, for instance, can shoot 8K video; an 8K frame is 7680 × 4320 pixels, or about 33 megapixels, which is physically impossible to capture with a 12-megapixel camera sensor.

All those megapixels also let you get a lot done without a dedicated telephoto lens. In sufficient lighting, the sensor's high-resolution capabilities can be used to produce great-quality digital zoom, according to Heape. And because the S22 Ultra can combine groups of nine pixels to produce 12-megapixel stills, its low-light performance remains better than such tiny pixels would otherwise allow.
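As a sketch of that zoom trick: in good light, a phone can skip binning and crop the center of the full-resolution readout instead. The function below is hypothetical (not Samsung's pipeline), but the arithmetic is real; a 2× crop of a roughly 50-megapixel frame still holds about 12.5 megapixels of genuine detail:

```python
import numpy as np

def crop_zoom(full_res: np.ndarray, factor: int = 2) -> np.ndarray:
    """Digital zoom by center-cropping the full-resolution readout.

    Cropping by `factor` keeps 1/factor**2 of the pixels, so a 2x zoom
    on a ~50 MP frame still yields ~12.5 MP of real detail.
    """
    h, w = full_res.shape[:2]
    ch, cw = h // factor, w // factor
    top, left = (h - ch) // 2, (w - cw) // 2
    return full_res[top:top + ch, left:left + cw]

# Toy full-resolution frame: 8192 x 6144 is about 50 MP.
frame = np.zeros((6144, 8192), dtype=np.uint16)
zoomed = crop_zoom(frame, factor=2)
print(zoomed.shape, zoomed.size / 1e6)  # (3072, 4096), ~12.6 MP
```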

Pixel binning is essentially magic.

Pixel binning creatively sidesteps the physical constraints that ever-increasing megapixel counts impose on image sensors, which must stay small enough to fit inside our phones. It's easy to understand why it's quickly becoming the industry norm. It produces accurate-looking photographs in lighting conditions that would otherwise call for noise-inducing high ISO settings or blur-prone long exposure times. It's not exactly magic, just deft engineering. And aren't those essentially the same thing, after all?

Take expert-level pictures

In a photographic mood? Check out our camera comparison of the Google Pixel 7 and Google Pixel 7 Pro. We also have advice on how to shoot and edit RAW photos on Android, as well as how to edit your photos in Google Photos.
