A Review of iPhone 11 Pro's Camera

Taken during blue hour with the iPhone 11 Pro's wide-angle lens.

Taken in Vermont about two hours before sunset. The Smart HDR has come a long way.

It wasn’t until the iPhone 4 that Apple started to take the camera on the phone seriously. Released in 2010, the iPhone 4 came with an f/2.4 aperture, which in photography parlance is a “fast” lens. The aperture determines how much light a lens lets in. I think of it as the eyes of the camera and how wide they can open. The more light a lens lets in, the more versatile, and therefore better, it is. The best zoom lenses Canon makes have an f/2.8 aperture (the lower the number, the “wider” the eyes open). Canon prime lenses can go as wide as f/1.4. The iPhone 4 claimed it had f/2.4.
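
To put rough numbers on what those f-stops mean, here’s a quick back-of-the-envelope sketch (my own arithmetic, not anything from Apple or Canon): the light a lens gathers scales with the area of its aperture, which goes as one over the f-number squared.

```python
# Back-of-the-envelope math, not a spec sheet: light gathered scales with
# the aperture area, which is proportional to 1 / f_number**2.

def relative_light(f_number: float, reference: float = 2.8) -> float:
    """How much light a lens at f_number gathers compared to an f/2.8 lens."""
    return (reference / f_number) ** 2

for f in (1.4, 2.4, 2.8):
    print(f"f/{f}: {relative_light(f):.1f}x the light of f/2.8")
# f/1.4 gathers about 4x the light of f/2.8; f/2.4 gathers about 1.4x.
```

So going from f/2.8 to f/1.4 quadruples the light hitting the sensor, which is why that one number matters so much to photographers.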

One striking difference, though, is that the lens on the iPhone 4, and on every model after it, is a fraction of the size of my professional lenses. With digital photography, the size of the lens and the size of the sensor matter. The cameras I use for work are that big and heavy for a reason: physics dictates that there needs to be that much space between the light entering the lens and the sensor it strikes in order to produce high-quality images. So even though Apple used terminology that was familiar to me, it’s like comparing the horsepower of a smart car to that of my Honda CRV. The measurements are the same, but the context is very different. A smart car may be fun to drive around town, but I’d never drive one across the country.

But with every model after the iPhone 4, Apple kept iterating on the lens and the sensor. Every year, Phil Schiller proclaimed, “This is the best camera we have ever made on an iPhone!” Even though it was getting better, I kept wondering, “How are they going to defy the laws of physics? They keep making the iPhones thinner, and even with a camera bump, those lenses and sensors cannot physically do what my DSLR can do.”

The answer wasn’t to violate the laws of physics, but to mimic them with computation and machine learning. It started when Apple introduced Portrait Mode on the iPhone 7 Plus, which had two lenses on the back. When shooting in that mode, both lenses were used to compute the distance of the subject from its background and digitally blur the background, like a 50mm f/1.4 lens on a DSLR would.

Apple touted how the software calculates the intensity of the blur just as the laws of physics would dictate: elements farther away from the subject are blurred more than those that are closer. Portrait Mode launched as a beta feature, but it has gotten better since then and is now a flagship feature of the phone. There have been photos I took with my phone that looked awfully close to what I would’ve taken with my DSLR. And with each successive iPhone, the gap of “awfully close” has gotten smaller.
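
Here’s a minimal sketch of the idea as I understand it, not Apple’s actual pipeline: given a per-pixel depth estimate from the two lenses, the farther a pixel sits from the subject’s focal plane, the more blur it gets. All the function names and numbers below are mine, purely for illustration.

```python
import numpy as np

def blur_radius_map(depth_m: np.ndarray, subject_depth_m: float,
                    max_radius_px: float = 12.0, falloff_m: float = 3.0) -> np.ndarray:
    """Map each pixel's estimated depth to a blur radius that grows with
    distance from the subject plane (a crude stand-in for optical defocus).
    The parameter names and values are illustrative, not Apple's."""
    distance = np.abs(depth_m - subject_depth_m)
    return max_radius_px * np.clip(distance / falloff_m, 0.0, 1.0)

# Toy depth map: subject at ~1.5 m, a wall in the background at ~5 m.
depth = np.array([[1.5, 1.6, 4.0, 5.0]])
print(blur_radius_map(depth, subject_depth_m=1.5))
# The subject's pixels get almost no blur; the far wall gets the most.
```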

And this year with the iPhone 11, there are some features where the gap has been closed.

Three examples of Portrait Mode on the iPhone 11 Pro.

My favorite iPhone reviewer is Matthew Panzarino. He is the editor-in-chief of TechCrunch (meaning he is the tech nerd of tech nerds), and in his previous life he was a professional photographer and ran a print lab. That makes him my kind of nerd: someone who knows the technical details and also knows how to take a decent photo.

Here’s how he explains the camera technology in the iPhone 11:

In addition to the ISP (image signal processor) that normally takes on the computational tasks associated with color correction and making images look presentable from the raw material the sensor produces, Apple has added the Neural Engine’s machine learning expertise to the pipeline, and it’s doing a bunch of things in various modes.

This is what makes the camera augmented on the iPhone 11, and what delivers the most impressive gains of this generation; not new glass, not the new sensors — a processor specially made to perform machine learning tasks.

What we’re seeing in the iPhone 11 is a blended apparatus that happens to include three imaging sensors, three lenses, a scattering of motion sensors, an ISP, a machine learning-tuned chip and a CPU all working in concert to produce one image. This is a machine learning camera. But as far as the software that runs iPhone is concerned, it has one camera. In fact, it’s not really a camera at all, it’s a collection of devices and bits of software that work together toward a singular goal: producing an image.

Here is what happens after I press the shutter button on my camera:

  1. The mirror lifts and the shutter opens, letting light that has passed through the lens strike a digital sensor.

  2. The sensor translates that light into raw digital information in the form of a file and stores it on a memory card.

  3. I transfer the file to my computer and manipulate it in Lightroom or Photoshop to realize its full potential and produce an image.

The iPhone 11 does steps one and two, and most of step three, in a fraction of a second without any input from me. I use an app called Darkroom to make a few edits, but that takes 30 seconds at most. And with each photo I take, it learns how to do all of this better and better. With my Canon, I go out shopping for ingredients and cook my own meal. My iPhone, it seems, is a private chef, and all I have to do is set the table.
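
To make the cooking analogy concrete, here’s a toy sketch of the two workflows. Every function here is a made-up stub for illustration; none of it corresponds to real camera firmware or any Apple API.

```python
# Toy stubs only: pretend "light" is a single brightness number.

def sensor_capture(light: float) -> float:
    """Steps 1 and 2: light passes the lens, hits the sensor, becomes a RAW value."""
    return light

def edit_on_computer(raw: float) -> float:
    """Step 3: the part I still do myself, later, in Lightroom or Photoshop."""
    return raw * 1.2  # pretend this is an exposure tweak

def dslr_workflow(light: float) -> float:
    # Shopping for the ingredients, then cooking the meal myself.
    return edit_on_computer(sensor_capture(light))

def iphone_workflow(light: float) -> float:
    # The private chef: bracketed captures fused on-device, in a fraction
    # of a second, before I ever see the photo.
    frames = [sensor_capture(light * f) for f in (0.5, 1.0, 2.0)]
    return sum(frames) / len(frames)

print(dslr_workflow(100.0), iphone_workflow(100.0))
```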

I've been loving the 13mm wide-angle lens.

Taken around the same blue hour as the first photo.

Intense backlight from the sun in the sky and reflected off the reservoir; the camera handled it admirably.

There are two examples of the iPhone’s “cooking abilities” that I wanted to share. The first is Night Mode, which I’ve been playing around with since I got my phone last week. The other is a feature called Deep Fusion, which is due to be released in the fall.

Here’s Panzarino’s description of Night Mode:

On a technical level, Night Mode is a function of the camera system that strongly resembles HDR. It does several things when it senses that the light levels have fallen below a certain threshold:

1. It decides on a variable number of frames to capture based on the light level, the steadiness of the camera according to the accelerometer and other signals.
2. The ISP then grabs these bracketed shots, some longer, some shorter exposure.
3. The Neural Engine is relatively orthogonal to Night Mode working, but it’s still involved because it is used for semantic rendering across all HDR imaging in iPhone 11.

The ISP then works to fuse those shots based on foreground and background exposure and whatever masking the Neural Engine delivers.

“Number of frames” and “bracketed shots” mean that the iPhone takes multiple photos, or to keep with our analogy, goes shopping for ingredients. “Semantic rendering,” “fusing,” and “masking” mean the iPhone is cooking the meal for me, rather than me doing those things myself on the computer. And the iPhone does it in a fraction of the time it would take me.
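
To make the “fusing” part concrete, here’s a minimal sketch of the bracketing-and-averaging idea, assuming a stack of frames that are already aligned. This is my own simplification, not Apple’s code; the real pipeline also aligns the frames and applies the Neural Engine’s masks.

```python
import numpy as np

def fuse_brackets(frames: np.ndarray, exposures_s: np.ndarray) -> np.ndarray:
    """Fuse a stack of bracketed frames into one low-noise image.

    frames: (n, h, w) pixel values from exposures of different lengths.
    exposures_s: (n,) exposure time of each frame, in seconds.
    Toy approach: normalize each frame by its exposure time so they all
    describe the same scene brightness, then average to cut down the noise.
    """
    normalized = frames / exposures_s[:, None, None]
    return normalized.mean(axis=0)

# Three toy 2x2 frames: longer exposures collect proportionally more signal.
exposures = np.array([1 / 8, 1 / 4, 1 / 2])
scene = np.array([[40.0, 80.0], [120.0, 200.0]])  # the "true" brightness
rng = np.random.default_rng(0)
frames = scene * exposures[:, None, None] + rng.normal(0, 2, (3, 2, 2))
print(fuse_brackets(frames, exposures))  # lands close to the true scene
```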

This was around 9 at night, with only streetlights lighting the basketball court. The camera took in more light than my eyes could see. Until now, I would've only been able to get this shot with my DSLR.

I've taken this photo of my courtyard many times, but the scene has never looked like this. Notice the uplighting on the trees and the buildings.

Taken at dusk. The iPhone 11 Pro managed to balance the intensity of three different light sources: the fading blue sky, the light from inside the trailer, and the dimmer light falling on the person in the white hoodie.

Night Mode blew me away when I first used it last week. It wasn’t only that it captured a dark scene as well as a professional camera; it’s that it did so with both artistic taste and fidelity.

But the one feature that I think will be the best manifestation of this machine learning, artificially intelligent camera system is Deep Fusion. Here’s Panzarino’s tweet about the feature during the Apple keynote on September 10:

Deep Fusion shoots 9 images, it pre shoots 4 long and 4 short exposure images into a buffer. Then when you press the shutter button it takes a longer exposure. Then the neural engine and ISP combine these on a pixel by pixel basis into your image.
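
Here’s a rough sketch of what that buffer-plus-combination flow could look like. The per-pixel rule I use below (pick the buffered value closest to the long exposure, then blend the two) is my own stand-in, not Apple’s actual method.

```python
from collections import deque

import numpy as np

class DeepFusionSketch:
    """Toy model of the flow Panzarino describes: keep a rolling buffer of
    pre-shot frames, then combine them with one final long exposure, pixel
    by pixel. The combination rule here is purely illustrative."""

    def __init__(self, buffer_size: int = 8):
        self.buffer = deque(maxlen=buffer_size)  # pre-shot short/long frames

    def pre_capture(self, frame: np.ndarray) -> None:
        """Called continuously, before the shutter is ever pressed."""
        self.buffer.append(frame)

    def press_shutter(self, long_exposure: np.ndarray) -> np.ndarray:
        stack = np.stack(self.buffer)                 # (n, h, w)
        diff = np.abs(stack - long_exposure)          # per-pixel distance
        best = np.argmin(diff, axis=0)                # frame to pull detail from
        detail = np.take_along_axis(stack, best[None], axis=0)[0]
        return (detail + long_exposure) / 2           # blend detail and tone

rng = np.random.default_rng(1)
cam = DeepFusionSketch()
for _ in range(8):                                    # fill the pre-shot buffer
    cam.pre_capture(rng.normal(100, 5, size=(2, 2)))
print(cam.press_shutter(rng.normal(100, 1, size=(2, 2))))
```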

During the presentation, Phil Schiller called it “computational photography mad science.” Cheesy, but true. The iPhone is the most popular camera in the world, and it’s going to get better, not by being a better camera, but by becoming a smarter computer.

Does this mean the laws of physics don’t matter? Will this replace the “pro” cameras and lenses I own? I don’t think so. I still stand by the smart car vs. CRV analogy, though with the iPhone 11, that smart car is slowly becoming a Tesla. I’m sure the day will come when big cameras and heavy lenses are no longer needed, but even then, photography will be less about the camera and more about the person taking the photo. The chef still matters.

Technology may make things easier, but it won’t replace how I see the world, and what stories I want to tell. It’s really fun to geek out on it though.