As mobile phones have developed over the years, so has technology in general. Having a camera on your phone used to be a novelty, but it has since become a non-negotiable in the golden age of social media. What really constitutes a good camera, anyway? We, as both reviewers and consumers, used to believe in just the numbers, but then we stepped back upon realizing there was more to it. Now, it appears we’re back in the numbers game of the smartphone megapixel race. What happened? There’s a lot of context to this, so grab some popcorn and bear with us!
The first thing your average consumer will look at when purchasing a camera-equipped device is the megapixel count. This makes perfect sense. For the uninformed, photos are made up of tiny specks called pixels. Each pixel contains data that, when added up, forms the photo. One megapixel is simply one million pixels.
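To make the arithmetic concrete, here’s a quick sketch in Python. The resolutions below are illustrative examples, not the specs of any particular phone:

```python
# A photo's megapixel count is just its resolution divided by one million.
def megapixels(width: int, height: int) -> float:
    """Return the megapixel count for a photo of the given resolution."""
    return width * height / 1_000_000

print(megapixels(4000, 3000))  # a typical 12 MP photo -> 12.0
print(megapixels(9248, 6936))  # a 64 MP-class resolution -> ~64.1
```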
As such, the more megapixels crammed into a photo, the more detail is captured and the less recognizable the individual pixels become. This concept became the basis of the megapixel race among smartphones and other devices. People began to treat the megapixel count as the determinant of photo quality. Conversely, cameras with lower megapixel counts were written off as inferior before even being tried out.
The world isn’t so simple, though. While megapixels are important, they aren’t the be-all and end-all of image quality. You also have to consider sensor size, software optimization, and modern technologies such as pixel binning. Not every pixel is built the same.
The Game Changer
Optics enthusiasts may have been in the know all along, but the one to open the mainstream smartphone tech world’s eyes to this concept was HTC. They launched the flagship HTC One in 2013, which was, by all means, a beautiful device. It had all the makings of a flagship, but its camera spec made you do a double take: 4 megapixels in 2013? Keep in mind that the class-leading Samsung Galaxy S4 came with a 13-megapixel shooter.
HTC then introduced the concept of the UltraPixel. Instead of upping its megapixel numbers, the company made each individual pixel as large as possible in order to absorb more light. The device destroyed its contemporaries in low-light performance and ushered in a new era. Even though the lack of detail became apparent as you zoomed in, one thing was clear: pixels come in different sizes, and size shapes the final image. In essence, the UltraPixel defied the perception that higher megapixels equate to better quality.
Light Is Might
Image sensors, by their very definition, sense images through light. They are filled with photosites, each corresponding to a pixel in the digital image. Larger sensors can accommodate correspondingly larger photosites, increasing receptiveness to light. This means brighter and more detailed photos, which is especially apparent in dimly lit locations.
Keep in mind that each photosite corresponds to a pixel. For example, a 12-megapixel image would be generated from 12 million photosites on the sensor. Larger sensors can fit more photosites, but a choice remains: pack in more of these little photosites, or use fewer but larger ones?
The former leads to a higher-megapixel sensor with more detail in ideal lighting, while the latter creates a lower-megapixel sensor with slightly less detail but greater consistency across conditions; because each photosite captures more light, images come out much less noisy.
Manufacturers do their best to find the ideal middle ground in these scenarios. Is the goal to pad the stat sheet with astounding counts, or to find a setup that is ultra-versatile but might falter a bit when zoomed in? Will the photos be used in professional contexts, or is this just for the ‘Gram? Numerous factors are weighed here.
Aperture also comes into play. This is the size of the opening that lets light in, and bigger generally means more light. However, as the aperture grows, so do optical complications and manufacturing costs. While companies like Samsung dabbled in variable-aperture setups, as on the Galaxy S9 (you could watch it open and close), fixed apertures have been more or less the norm for a while now.
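To get a feel for why a bigger opening matters, here’s a back-of-the-envelope sketch: light gathered scales roughly with the square of the aperture diameter, and f-numbers shrink as apertures grow. The f/2.4 reference and the f/1.7 comparison are our own illustrative picks, not any phone’s specs:

```python
# Rule of thumb: light gathered scales with the square of the aperture
# diameter, so a smaller f-number lets in disproportionately more light.
def relative_light(f_number: float, reference: float = 2.4) -> float:
    """How much more light an aperture gathers versus an f/2.4 reference."""
    return (reference / f_number) ** 2

print(round(relative_light(1.7), 2))  # f/1.7 gathers roughly 2x the light of f/2.4
```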
As with any piece of technology, the hardware and software work hand-in-hand to deliver the experience to the user.
A popular technique used by many manufacturers nowadays is called “pixel binning.” In layman’s terms, it merges data from adjacent photosites. Say a camera’s sensor has 64 million photosites; common sense would suggest the output photos should be 64 megapixels as well.
Instead, manufacturers merge data from (for example) four adjacent photosites to produce a 16-megapixel photo that is brighter and clearer than ever. This strikes a workable balance between resolution and light. It also averages out the noise from light-starved photosites, reducing the grainy parts of your photos. While the photos this method produces are relatively large, they’re still nothing compared to a DSLR’s, for example.
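Here’s a minimal sketch of the idea in Python, averaging each 2×2 block of photosite readings into one output pixel. Real sensors bin on-chip (often by summing charge), so this is only an illustration:

```python
import numpy as np

# 4-to-1 pixel binning: four adjacent photosite readings are combined into
# one output pixel, trading resolution for light and lower noise.
def bin_pixels(sensor: np.ndarray, factor: int = 2) -> np.ndarray:
    """Average each factor x factor block of photosite values into one pixel."""
    h, w = sensor.shape
    return sensor.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# A toy 4x4 "sensor" readout, purely for demonstration.
readout = np.arange(16, dtype=float).reshape(4, 4)
binned = bin_pixels(readout)
print(binned.shape)  # (2, 2): a quarter of the pixels, each fed by four photosites
```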
Equally important is the post-processing technology each smartphone vendor applies to enhance the image output. You’ll often see a bevy of shooting modes and presets in your device’s camera app, indicative of this optimization.
The Trend Falls
Before long, the world’s leading smartphone manufacturers settled on 12- and 16-megapixel configurations, with improvements focused on sensors and software. The truth was out: not all megapixels are equal, which brings us back to the UltraPixel’s lesson that higher megapixels do not necessarily translate to better quality. Bigger pixels lead to better photos, even at the expense of count. The goal was now to maximize each pixel to deliver the best image possible.
Those who continued to relentlessly charge forward in the smartphone megapixel race were left behind, their images inferior to those of optimized rivals. Consumers no longer used megapixel counts as their only rubric for camera performance, hence the emergence of sample photos and side-by-side comparisons. Benchmarks like DXOMARK also rose to fame as premier evaluators of smartphone image quality. It eventually became a race to build the best system of software, hardware, and further innovation.
As with all things, there were some notable exceptions, one of them being the legendary Nokia Lumia 1020, released back in 2013. It had a 41-megapixel sensor and one of the largest camera bumps at a time when most devices had only tiny lenses. That noticeable bulge on its rear was the price to pay for its stellar optics, and it became the eventual motivation for devices like the Samsung Galaxy S4 Zoom.
While not necessarily becoming sales successes, they were testaments to the creativity of manufacturers.
So, Why Exactly Are We Back In The Smartphone Megapixel Race?
Before, we had to sacrifice either resolution or quality of light capture when building a camera system. In 2022, we can just add more camera sensors to cater to different purposes.
It has become aesthetically acceptable to have more than one camera sensor equipped on the rear of a smartphone, regardless of the real estate it takes up. As such, one of the cameras can be an all-out resolution hog to be used in clear sunny skies, while another lower-megapixel sensor serves as the one to use in iffy lighting conditions. Basically, one has a whole army of photosites while the other has larger ones.
This approach kills two birds with one stone as it also helps with zoom. When a user zooms in while taking a picture on a smartphone, it just presents a digitally-cropped version of the original image. This is the reality of digital zoom and how it falls behind optical zoom methods you can physically perform on a DSLR camera. As such, the quality falls drastically and details are lost to the nether realm.
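A rough sketch of what 2x digital zoom actually does, assuming a grayscale frame and simple nearest-neighbour upscaling (real camera pipelines use far fancier interpolation, but the principle is the same):

```python
import numpy as np

# Digital zoom in a nutshell: crop the centre of the frame, then stretch the
# crop back to the original size. No new detail is captured, which is why
# quality drops the further you zoom.
def digital_zoom(image: np.ndarray, zoom: float) -> np.ndarray:
    h, w = image.shape[:2]
    ch, cw = int(h / zoom), int(w / zoom)   # size of the cropped region
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = image[top:top + ch, left:left + cw]
    rows = np.arange(h) * ch // h           # map output rows back to crop rows
    cols = np.arange(w) * cw // w
    return crop[rows][:, cols]              # stretched back to the h x w canvas

frame = np.random.randint(0, 256, (12, 12), dtype=np.uint8)
zoomed = digital_zoom(frame, 2.0)
print(zoomed.shape)  # still (12, 12), but built from a quarter of the original data
```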
With multi-sensor setups, manufacturers assign the main sensor to handle digital zoom up to a certain extent, then a sensor built for optical zoom kicks in once you’ve crossed the threshold. For example, the heralded Samsung Galaxy S22 Ultra loses detail as you push from 6x toward 9.9x zoom, but at 10x it switches over to a different camera and things are crystal clear again.
So the answer to why we’re back in the pixel race is simple: we can add more sensors now, and both software and hardware have improved to the point where it’s possible.
What Is The Future For Smartphone Cameras?
Based on what we’ve seen so far, we can imagine that the multi-sensor setups currently on smartphones are here to stay. In fact, there are currently eight types of smartphone camera sensors, and more will likely be introduced as technology progresses.
As of now, Xiaomi, Infinix, and Samsung are among the smartphone brands slated to release devices with 200-megapixel sensors, which we assume are composed of a whopping 200 million photosites each. Either the overall sensor will be enormous, or the photosites themselves will have to be crafted as tiny as possible.
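As a back-of-the-envelope check, here’s how tiny those photosites would need to be. The sensor width (~9.8 mm, roughly a 1/1.22-inch type) and the 16,384-pixel row are our own assumptions for illustration, not confirmed specs of any announced device:

```python
# Rough photosite size for a hypothetical 200 MP sensor.
def photosite_pitch_um(sensor_width_mm: float, horizontal_pixels: int) -> float:
    """Approximate width of one photosite in micrometres."""
    return sensor_width_mm * 1000 / horizontal_pixels

# ~16,384 x 12,288 pixels comes out to roughly 200 million photosites.
print(round(photosite_pitch_um(9.8, 16384), 2))  # ~0.6 um per photosite
```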
We can imagine that most users won’t be using the 200-megapixel mode on a daily basis, though. The intelligent move would be to take advantage of pixel binning, merging 16 photosites into each pixel for the crispiest, crunchiest, and most stunning 12.5-megapixel smartphone photo you have ever seen in your entire life. Either way, we are excited to see what’s next for smartphone optical technology.