The Pixel 6’s selfie portrait mode can see individual strands of hair, and here’s how it works
Google’s Pixels have always taken incredible photos. Even if they once lagged behind in camera hardware, Google knew how to stretch every pixel to its limit. That’s the science of computational photography, bringing us benefits like the Pixel 6’s improved selfie portrait mode, which can even pick out individual strands of hair using the front-facing camera. According to Google, quite a lot of work went into it.
Portrait modes like these have understandably always been a little messy. They’re trying to emulate the effect of a fast lens and a big sensor using limited hardware. But Google’s had a few tricks over the years, from AI-powered “people pixel” detection to parsing “semantic” and “defocus” cues to augment phase-detection depth data. To further improve selfie portrait mode with the Pixel 6, Google had to train new models that worked in new ways, and it had the improved AI workload performance of Tensor to help.
If you can’t find it, build it
AI models are only as good as the data used to train them. That usually means it’s good to diversify your data sources as widely as possible, but that can be a logistical challenge. To start, Google needed a massive set of photos of people from all possible angles in all environments, paired with perfectly accurate masking. The easiest way is to composite photos of perfectly masked-out people into existing backgrounds, but that introduces its own issues, as lighting for the individual and the scene would vary, among other problems. So Google generated its own data sets in the same sort of way that it implemented its portrait lighting effects on the Pixel 5.
Remember this?

Google busted out the Light Stage geodesic camera sphere it keeps in storage, and thanks to the hundreds of LEDs, an array of cameras, and custom high-resolution depth sensors, researchers were able to use it to capture examples with immaculately precise masking, perfectly separating people from the background.
That might seem like it’s enough to kick off the computational magic, but this is just the first stage. Google used these high-quality images to create separate sets of photos for training the on-device model, compositing the people it captured under the sphere with other backgrounds, relighting the portraits to match each scene with the help of all that extra depth data, ray tracing, and even a simulation of the optical distortion effects of a virtual camera.
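To make that compositing step concrete, here’s a minimal sketch of the alpha blending involved, assuming the relighting, ray tracing, and lens simulation have already been applied to the portrait. The function and argument names are placeholders, not Google’s actual pipeline.

```python
import numpy as np

def composite_training_example(portrait, alpha_mask, background):
    """Composite a relit Light Stage portrait onto a new background.

    portrait:    HxWx3 float array, the subject already relit to match
                 the target scene (the relighting itself isn't shown here).
    alpha_mask:  HxWx1 float array in [0, 1], the ground-truth mask
                 captured under the camera sphere.
    background:  HxWx3 float array, the target scene.
    """
    # Standard alpha compositing: keep the subject where the mask is 1,
    # show the background where it's 0, and blend along soft edges
    # like individual strands of hair.
    return alpha_mask * portrait + (1.0 - alpha_mask) * background
```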

All that extra work means the model is even more prepared, not just to recognize the foreground, but also a wide variety of backgrounds and lighting conditions. Google arguably put more effort into creating the data sets used to train the Pixel 6’s portrait mode model than actually went into training it.
This was further seasoned with some “real” photos taken using Pixels in the wild, with a high-accuracy model extracting similar masking data and a visual inspection approving only the highest-quality examples. Ultimately, Google used both data sets to train the model on a wide variety of scenes, poses, and people.
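Google hasn’t said how the synthetic and real examples are balanced during training, but conceptually the mixing looks something like this sketch, where the 50/50 split is purely illustrative.

```python
import random

def sample_training_batch(synthetic_examples, real_examples,
                          batch_size=32, synthetic_fraction=0.5):
    """Draw a training batch mixing Light Stage composites with curated
    real-world Pixel photos. The split is a made-up example, not a
    published figure."""
    n_synthetic = int(batch_size * synthetic_fraction)
    batch = random.sample(synthetic_examples, n_synthetic)
    batch += random.sample(real_examples, batch_size - n_synthetic)
    random.shuffle(batch)
    return batch
```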

High-res, low-res, high-res
That model is only part of how the new Pixel 6 portrait mode works, though, and Google has a few clever tricks up its sleeves to save time and resources.
To start, your Pixel 6 takes that selfie and computes a coarse mask for the foreground — seemingly similar to the default approach most other portrait modes take. While many smartphone cameras stop there, Google then feeds both the coarse mask and the photo itself into the model we just discussed, and the output of that model is a more finely detailed but lower-resolution mask.
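Google hasn’t published the networks themselves, so the sketch below is just a rough illustration of that two-stage flow; `coarse_segmenter` and `refinement_model` are hypothetical stand-ins for the on-device models.

```python
def estimate_portrait_mask(photo, coarse_segmenter, refinement_model):
    """Two-stage mask estimation, as described above.

    Both model arguments are placeholders: callables that take image
    arrays and return mask arrays in [0, 1].
    """
    # Stage 1: a quick, rough foreground/background segmentation,
    # roughly where most portrait modes stop.
    coarse_mask = coarse_segmenter(photo)

    # Stage 2: the refinement network sees both the original photo and
    # the coarse mask, and returns a more detailed (but lower-resolution)
    # mask that captures fine structure like hair.
    refined_low_res_mask = refinement_model(photo, coarse_mask)
    return coarse_mask, refined_low_res_mask
```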
Further models refine that low-resolution mask into a high-resolution version, recursively referring back to both the original photo and the coarse mask. The steps sound sort of like the push-pull processing Google uses for Google Photos' denoising tool, but it works a little differently. In the end, a very fine, high-resolution mask is generated. And, using it, the portrait mode effect is selectively applied, preserving the foreground while the background is blurred. (Google even tries to approximate bokeh with it, though it’s not perfect.)
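Once that final high-resolution mask exists, applying the effect is conceptually simple: blur the whole frame, then blend the sharp original back in wherever the mask marks the subject. The sketch below uses a plain Gaussian blur as a stand-in for the bokeh approximation Google actually uses.

```python
import cv2

def apply_portrait_blur(photo, high_res_mask, kernel_size=25):
    """Blur the background while preserving the masked foreground.

    photo:          HxWx3 float array.
    high_res_mask:  HxWx1 float array in [0, 1], from the refinement
                    stages described above.
    kernel_size:    odd Gaussian kernel size; a stand-in for a real
                    bokeh simulation.
    """
    blurred = cv2.GaussianBlur(photo, (kernel_size, kernel_size), 0)
    # Keep the subject sharp, blur everything else, and blend smoothly
    # along soft mask edges like hair.
    return high_res_mask * photo + (1.0 - high_res_mask) * blurred
```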
Better portrait mode selfies on Pixel 6
Ultimately, this new model and pipeline have some big advantages. For one, the mask it can create is much sharper and more detailed, which means better and more accurate blurring around areas of intricate texture, like hair. Even semi-transparent details visible through a fringe of frizzy hair are less confusing for this new model. It’s not just sharper, it’s smoother.
The wider range of data this new portrait mode is trained on also makes it better suited for a wider variety of skin tones and hairstyles, further affirming Google’s Real Tone commitment to inclusion and equity. And, ultimately, it just means better and more accurate blurry bits in portrait mode selfies for all of us. It’s still not perfect — even some examples Google provided show a few problem areas, and the gradient of the blur isn’t as tack-sharp as a real DSLR with a fast lens would be — but it’s a lot closer to the ideal.
At one time, the camera attached to your smartphone was an afterthought — a useful convenience and probably not something you’d make art with. But, as many photographers say, “the best camera is the one you have with you,” and our phones are always with us, causing us to demand more and more from them. It’s only thanks to the science of computational photography that smartphones are able to take such stellar photos with the tiny sensors they’ve got, and Google is on the bleeding edge of these developments — something every Pixel 6 owner can appreciate.