The Pixel 6 focuses on camera hardware, not just the software side

Google revealed the sixth generation of Pixel phones several days ago, after many leaks and teasers. There is no doubt that the new Pixel 6 phones are the best yet in terms of specifications and design compared to previous generations.

The Pixel 6 phones bring a complete design overhaul, high-refresh-rate screens, and the company's own Tensor processor. For many, however, the most notable change is the new camera system.

Pixel phones have long been known for delivering the best possible imaging and image-processing performance, even though they kept using the same 12-megapixel Sony IMX363 sensor from the Pixel 3 all the way through the Pixel 5.

Pixel 6 camera specifications

The Pixel 6 phones now offer a 50-megapixel main sensor, which merges each 2×2 group of 4 pixels into one to produce a 12.5-megapixel image. According to Google itself, the new sensor is capable of capturing 150% more light than the Pixel 5.
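The 50-to-12.5-megapixel merge described above is pixel binning. As a minimal sketch of the idea, the snippet below averages each 2×2 block of a raw sensor array into one output pixel, quartering the resolution; the real sensor combines charge at the hardware level on a quad-Bayer layout, so this is only an illustration of the arithmetic, not Google's pipeline.

```python
import numpy as np

def bin_2x2(raw):
    """Average each 2x2 block of pixels into one output pixel,
    halving both dimensions (50 MP -> 12.5 MP in the Pixel 6's case)."""
    h, w = raw.shape
    # Crop to even dimensions, then group rows/cols into 2x2 blocks.
    blocks = raw[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3))

# Toy 4x4 "sensor" bins down to 2x2.
sensor = np.arange(16, dtype=float).reshape(4, 4)
binned = bin_2x2(sensor)
print(binned.shape)  # (2, 2)
print(binned[0, 0])  # 2.5, the mean of the top-left block 0, 1, 4, 5
```

Averaging four photosites trades resolution for light sensitivity, which is where the claimed low-light gain comes from.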

The phone also offers a 12-megapixel ultrawide sensor. The Pixel 6 and Pixel 6 Pro share the two cameras mentioned above, while the "Pro" version adds a third: a 48-megapixel telephoto zoom camera. As for the front camera, it is 8 megapixels on the regular version and 11.1 megapixels on the Pro.

Google’s focus is on the software side

There is no doubt that the Pixel 6 cameras are far superior to previous versions, at least on paper. However, Google has never cared much about the numbers, because its camera advantage is mainly software.

Google equips the phone with powerful computational-photography capabilities; previous Pixel phones were able to produce better images than smartphone cameras with sensors of more than 100 megapixels.

According to Google, its focus will now extend to the hardware side, but it will continue to develop the software features on which its smartphone cameras depend.

Google has completely re-engineered Portrait mode. In addition, the phone's cameras will better distinguish between shades of color, especially human skin tones. The company relies on a large number of algorithms in its new phone cameras, including ones that correct undesirable lighting or improve the appearance of skin and faces.

The company also offers a feature known as Face Unblur, which uses a group of photos of the same person to enhance the face and reduce any blurring in the image. Pixel phones generally begin capturing frames before the shutter is pressed, so several shots of the same scene are available for processing in the background.
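Google has not published the internals of Face Unblur, but the core of any multi-frame deblurring scheme is ranking frames by sharpness. A common focus measure is the variance of the image Laplacian, sketched below as an assumption about how a sharpest-face frame could be chosen from a burst; the function names are illustrative, not Google's.

```python
import numpy as np

def laplacian_variance(gray):
    """Variance of a discrete 4-neighbor Laplacian.
    Higher values mean more high-frequency detail, i.e. a sharper image."""
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return lap.var()

def sharpest_frame(frames):
    """From a burst of grayscale frames (e.g. cropped face regions),
    pick the one with the most detail."""
    return max(frames, key=laplacian_variance)
```

In a real pipeline this score would be computed only on the detected face region, and the sharp face would then be merged back into the chosen base frame.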

The phone also offers a mode known as Motion Mode for capturing images of moving objects. As you might expect, the company relies on machine learning and artificial intelligence to improve its cameras' performance. These techniques are perhaps most visible in the Magic Eraser feature, which effectively removes unwanted elements from an image.
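The long-exposure look in Motion Mode comes from combining a burst of frames. As a toy sketch of that idea only (Google's actual pipeline additionally aligns frames and keeps the subject sharp with ML segmentation), averaging a stack of frames smears anything that moves into a streak while static pixels stay crisp:

```python
import numpy as np

def long_exposure(frames):
    """Simulate a long exposure by averaging a burst of frames.
    Pixels that stay constant remain sharp; a moving subject is
    smeared across every position it occupied during the burst."""
    return np.stack(frames).mean(axis=0)

# A bright pixel moving left to right across three 1x3 frames
# becomes a uniform streak after averaging.
burst = [np.array([[1.0, 0.0, 0.0]]),
         np.array([[0.0, 1.0, 0.0]]),
         np.array([[0.0, 0.0, 1.0]])]
streak = long_exposure(burst)
print(streak)  # [[0.333... 0.333... 0.333...]]
```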
