
On Samsung’s 0.0768 megapixels and other lies

Somehow, Samsung customers believe in megapixels and don’t believe in diffraction. Well, you know, there’s a sucker born every minute. Those of us who aren’t math-challenged know that megapixel counts by themselves say very little (linear resolution only grows with their square root) and that diffraction spoils everything anyway. But it’s a lot worse than that.

Smartphone manufacturers invest enormous amounts of money in fighting an uphill battle they can’t win: the battle against physics. They made some progress by enlarging the sensor (but much more isn’t possible unless smartphones get as large as cameras). All kinds of software tricks, mostly noise reduction and sharpening, also helped – somewhat. Yet in the end you get a 12 MP picture that is still worlds apart from a decent real picture. Those smartphone pictures are great – for smartphone pictures. Once you start seeing the noise reduction artifacts, the made-up image details and the smearing, you realize those pictures are simply not usable on anything larger than a smartphone screen. Great for memories, but not for photography. And then we haven’t even talked about the lack of lens choice and bokeh (because artificial bokeh is not high quality and only comparable to f/4).

First lie: 10 = 500

Now smartphone manufacturers know that, so they concentrate on something easier than fighting the laws of physics: they just invest in marketing. Huawei even got caught with their pants down when they used pictures made with a DSLR in their marketing, claiming they had been made with a smartphone. Hey, a man’s gotta do what a man’s gotta do, right?

Samsung’s Head of Sensor Business Team, Yongin Park, came up with a few new lies. The first is that the resolution of the human eye is around 500 megapixels. Bullshit.

When I measure it for my eyes, it’s about 10 megapixels. Depending on your criteria and your acuity, it’s somewhere between 2.5 and 35 megapixels for your eyes, but never 500. How do I measure it? Quite simply, by looking at the same test charts I use to test lenses. I look at the test wall I use to test a 50 mm lens, from the same distance, and I get a reading comparable with about 50 lp/mm on a full-frame camera.
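
In code, that conversion is just a couple of lines. A minimal sketch (assuming the usual Nyquist factor of two pixels per line pair):

```python
# Converting a measured ~50 lp/mm on a full-frame (36 x 24 mm) sensor
# into a megapixel equivalent, assuming 2 pixels per line pair (Nyquist).
lp_per_mm = 50
px_per_mm = 2 * lp_per_mm               # 2 pixels per line pair
width_px = 36 * px_per_mm               # 3600 px
height_px = 24 * px_per_mm              # 2400 px
print(width_px * height_px / 1e6)       # 8.64, i.e. roughly 10 MP
```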

The eye consists of one small spot, the fovea, with a relatively high resolution, and a much larger area with a much lower resolution, so much lower that, in terms of megapixels, you can forget it. Now the fovea only has a field of view of between 1.5 and 2 degrees; this corresponds to a very long telephoto lens of 1200 to 1600 mm (see the Wikipedia article on peripheral vision: https://en.wikipedia.org/wiki/Peripheral_vision).

But since the movements of your eye are partially involuntary, you get the impression that you can see sharply over a larger viewing angle, as if the fovea were larger. This is not strange: when we talk about a lens that matches your natural view of the world, we usually mention a 50 mm lens. Such a lens has a field of view of 40 degrees horizontally (47 diagonally). In practice it works as follows: when you look at a detail, you point your eye at it and use the fovea. While reading, for example, you jump with your eyes from word to word. The rest of your field of view you use only to get a general impression and to see movement.
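
Both the 1200–1600 mm figure and the 50 mm field of view are plain trigonometry. A minimal sketch, assuming the standard full-frame dimensions (36 mm wide, 43.3 mm diagonal):

```python
import math

def focal_length_for_fov(fov_deg, frame_dim_mm):
    # Focal length whose field of view across frame_dim_mm equals fov_deg
    return frame_dim_mm / (2 * math.tan(math.radians(fov_deg) / 2))

def fov_for_focal_length(f_mm, frame_dim_mm):
    return math.degrees(2 * math.atan(frame_dim_mm / (2 * f_mm)))

# Fovea: 1.5-2 degrees over the 43.3 mm full-frame diagonal
print(focal_length_for_fov(2.0, 43.3))   # ~1240 mm
print(focal_length_for_fov(1.5, 43.3))   # ~1654 mm

# A 50 mm lens on full frame
print(fov_for_focal_length(50, 36.0))    # ~40 degrees horizontally
print(fov_for_focal_length(50, 43.3))    # ~47 degrees diagonally
```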

Now if you take the visual acuity figures from Wikipedia (https://en.wikipedia.org/wiki/Visual_acuity) and calculate them back to the situation where you look at a distance of 5 m, you get 27 lp/mm. For good eyes it’s about double that, and that’s the value I get for my bionic eyes after my cataract operation.
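
For what it’s worth, here is that calculation as a sketch, translating an acuity figure in cycles per degree into sensor-referred lp/mm behind a 50 mm lens. The cycles-per-degree values are my assumptions; the exact figure depends on which acuity criterion you adopt:

```python
import math

def eye_lpmm_on_sensor(cycles_per_degree, focal_mm=50):
    # One degree of the scene spans ~0.87 mm on the sensor behind a 50 mm lens
    mm_per_degree = 2 * focal_mm * math.tan(math.radians(0.5))
    return cycles_per_degree / mm_per_degree

print(eye_lpmm_on_sensor(24))   # ~27 lp/mm for a "standard" eye (24 cpd assumed)
print(eye_lpmm_on_sensor(48))   # ~55 lp/mm for very good eyes (48 cpd assumed)
```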

That means that no matter whether you measure it or calculate it, you end up with a value of around 10 MP.

But I can understand why Park lied about the resolution of the eye. His PR department urgently needed some tale around their new, non-existent 600 MP sensor. (That’s not a real lie, but it’s very strange to brag about a thing you haven’t done yet. It’s like me telling you that I’ll win the Nobel Prize next year and then continuing to brag about how smart Nobel Prize winners are. First see, then believe.)

Second lie: 0.0768 = 108

Now back to the real lies. Why did Park need new lies? As I said, smartphones are all about marketing. So Samsung makes a very expensive phone with 108 megapixels, which takes pictures with … 12 megapixels. As you can see in https://www.techradar.com/reviews/samsung-galaxy-s20-ultra-full-review, the iPhone 11 Pro with only 12 MP takes better pictures than the Samsung with 108 MP. So why do they use this technique? Simple: to lie to the customer. The customer, in turn, can lie to his friends that he has a 108 MP smartphone.
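
For context: Samsung gets from 108 MP to 12 MP by 3x3 pixel binning (“nona-binning”), averaging nine tiny sensor pixels into one output pixel. A toy-scale sketch of that arithmetic (the array is scaled down from the real 12000 x 9000 raw grid):

```python
import numpy as np

# Toy-scale stand-in for the 12000 x 9000 (108 MP) raw sensor readout
raw = np.random.rand(900, 1200)

h, w = raw.shape
# 3x3 binning: average each 3x3 block into one output pixel
binned = raw.reshape(h // 3, 3, w // 3, 3).mean(axis=(1, 3))

print(raw.shape, "->", binned.shape)  # (900, 1200) -> (300, 400)
# At full scale: 108 MP / 9 = 12 MP, which is why the files are 12 MP.
```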

But it’s even worse. In fact, this 108-megapixel camera only has 0.0768 megapixels. Just do the math with me. The Samsung has 100x zoom, according to Samsung. Another thing to brag about, isn’t it? Or just another lie. We start with the 48 megapixels of the tele camera (which don’t look better than 10 megapixels, but OK, let’s give poor Samsung a break; they are in big trouble, as we are about to see, and they can’t count, which is quite a serious issue if you’re in the tech business). The tele lens itself delivers 4x optical zoom, so to get to 100x zoom you have to crop the remaining 25x. That means there will be 25² = 625 times fewer megapixels than we started with. 48 / 625 = 0.0768 megapixels, to be precise.
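
Spelled out as a sketch (the 4x optical figure for the tele module is the assumption here; the rest is arithmetic):

```python
# The "100x Space Zoom" arithmetic from above
base_mp = 48          # megapixels of the tele camera's sensor
optical_zoom = 4      # assumed optical reach of the tele module
total_zoom = 100      # Samsung's advertised zoom

digital_crop = total_zoom / optical_zoom     # 25x must come from cropping
effective_mp = base_mp / digital_crop ** 2   # pixel count falls with the square
print(effective_mp)                          # 0.0768
```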

Lies on top of other lies

But there’s another reason Park urgently needed some lies about eyes and cameras. Samsung’s Vice Chairman, Jay Y. Lee, just got arrested. You want to know what he did? Lying. No, not about megapixels, but about stock prices, audit rules, and bribery. And the guy came out of prison only a year or so ago, for similar reasons!

I guess in marketing, if a thing like that happens, you have to create some other lies to make up for it. So, OK, the human eye has 500 megapixels, Samsung has a 600-megapixel sensor, and its Galaxy Ultra takes 108-megapixel photos and has a 100x zoom.

Now the truth

The truth is much more interesting and revealing than lies, as those among us who aren’t caught in a marketing bubble know. Let’s go back to the first lie, about the resolution of the human eye. If our eye has about the same resolution as a smartphone camera, why is what I see so much more detailed than what I see in a smartphone picture?

Just look at this detail from an iPhone XR picture. Yes, you do see small details. But if you look closely, you see that they don’t look natural. That’s because we only see the high-contrast details; the low-contrast details are lacking. That’s what you get, of course, if you start tricking with information. If you’ve ever seen MTF graphs, you know that the smaller details get, the lower their contrast gets. If you use sharpening, you just make the contrast higher, especially at the border between two lines. That helps with high-contrast details and even with coarse lower-contrast details, but the smaller low-contrast details, which just haven’t been recorded, will not come back. That’s why a smartphone picture looks OK-ish until you look at it on a larger screen. And this example is even from an iPhone XR; Apple is careful not to meddle too much with details, at least compared to other smartphone makers like Samsung.
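
You can demonstrate this with a one-dimensional toy signal. A minimal sketch (assuming NumPy and SciPy; the Gaussian blur is just a stand-in for the real lens/sensor MTF, and the unsharp-mask amount is arbitrary): a coarse high-contrast wave survives the blur and gets its contrast back with sharpening, while a fine low-contrast wave is destroyed by the blur and stays destroyed, however hard you sharpen.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

x = np.arange(1000)
coarse = 0.50 * np.sin(2 * np.pi * x / 100)   # coarse, high-contrast detail
fine = 0.05 * np.sin(2 * np.pi * x / 8)       # fine, low-contrast detail
scene = coarse + fine

recorded = gaussian_filter1d(scene, sigma=4)  # stand-in for the lens/sensor MTF

# Unsharp masking: add back the difference with a blurred copy
sharpened = recorded + 1.5 * (recorded - gaussian_filter1d(recorded, sigma=4))

def amplitude(signal, period):
    # Amplitude at one spatial frequency, read from the matching DFT bin
    spectrum = 2 * np.abs(np.fft.rfft(signal)) / len(signal)
    return spectrum[len(signal) // period]

for name, sig in (("scene", scene), ("recorded", recorded), ("sharpened", sharpened)):
    print(f"{name:9s} coarse={amplitude(sig, 100):.3f} fine={amplitude(sig, 8):.4f}")
# The coarse amplitude comes back with sharpening; the fine one never does.
```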

Samsung is bragging about a sensor they don’t have yet. But just like Samsung, I can tell you something about this non-existent sensor: it won’t make much difference. As you can see at https://www.tomsguide.com/reviews/galaxy-s20-ultra, a sensor with native 12 megapixels takes better pictures than a 108-megapixel Samsung sensor. Why? Because of the laws of physics. The more megapixels, the smaller the pixel pitch, and the more sharpness you’ll lose to diffraction. As a rule of thumb, you can say that the highest f-number you can use before resolution suffers is about 1.5x the pixel pitch in micrometers. Apple uses a pixel pitch of 1.4 micrometers, so with f/1.8 you’re about right.
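
In numbers (a sketch assuming 550 nm green light, with the Airy disk diameter 2.44 · λ · N shown for reference):

```python
WAVELENGTH_UM = 0.55  # green light, assumed

def airy_diameter_um(f_number):
    # Diameter of the Airy disk: 2.44 * wavelength * f-number
    return 2.44 * WAVELENGTH_UM * f_number

def max_f_number(pixel_pitch_um):
    # Rule of thumb from the text: ~1.5x the pixel pitch in micrometers
    return 1.5 * pixel_pitch_um

for pitch in (1.4, 0.8):  # iPhone-class pitch vs a 108 MP sensor's 0.8 um pitch
    n = max_f_number(pitch)
    print(f"{pitch} um pitch -> ~f/{n:.1f} (Airy disk {airy_diameter_um(n):.2f} um)")
```

In both cases the Airy disk then spans about two pixels, which is roughly where stopping down further starts to cost resolution; a 0.8 µm pitch already demands f/1.2 or faster.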

Diffraction limitations

For all practical purposes, this is also the limit. Using a smaller pixel pitch means that diffraction will make your picture less sharp. So from that point on, what you win by adding pixels, you lose by making them smaller.

That is also why Apple iPhones with 12 megapixels take pictures at least as sharp as those from 48 or even 108 megapixel smartphones from Samsung and the like.

Now you could argue: what if I make the lens with a larger aperture? After all, diffraction is determined by the aperture (and by the wavelength of the light, which you can’t change). That’s also the reason why smartphones don’t have an adjustable aperture: they already operate at the diffraction limit. So you could use a larger aperture, f/0.6 instead of f/1.8…
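
The f/0.6 figure follows from the same rule of thumb: nine times more pixels on the same sensor area means three times finer pitch, so the lens has to be about three times faster to keep diffraction at bay. A sketch (pixel pitch assumed to scale down from Apple’s 1.4 µm):

```python
import math

mp_ratio = 108 / 12                 # 9x more pixels on the same sensor area
scale = math.sqrt(mp_ratio)         # 3x finer sampling per side
pitch_um = 1.4 / scale              # ~0.47 um instead of 1.4 um
print(f"~{pitch_um:.2f} um pitch -> ~f/{1.5 * pitch_um:.1f}")  # ~f/0.7
```

The 1.5x-pitch rule lands at roughly f/0.7; f/0.6 is simply f/1.8 divided by the same factor of three. Either way, it is an absurd requirement, as the next paragraphs show.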

Yes, you could – in theory. You should understand, though, that current smartphone lenses already make an enormous effort to be diffraction limited. These small lenses use extreme aspherical surfaces on all elements; what more can you do? Since most lens aberrations grow with higher powers of the aperture, you won’t be able to solve this problem by simply opening up.

Well, OK, there is a solution. If you make a lens more complex, if you add elements to provide extra aberration correction, then you can create a diffraction-limited lens with a very large aperture. But the problem with smartphones is that you don’t have the room for lenses like that.

It sounds a little ironic, but if you want to use very small pixel pitches, you’d better use a large camera with interchangeable lenses, something like a mirrorless full-frame camera. And that’s where the snake bites its own tail.

As I told you: smartphone manufacturers are fighting an uphill battle against physics that they can’t win. In the long run, even PR lies won’t change that.