Google first released its custom chipset with the Pixel 6, and following the positive response, it has developed an updated version. The Tensor chips are made in collaboration with Samsung and offer strong performance. They also enable AI and ML use cases, which have become a differentiating factor for Google.
This year, Google has debuted the Tensor G2, which it describes as an evolution of last year's chip. The fundamentals are reportedly the same, but Google has added a custom AI module and an imaging engine. These allow the Pixel 7 and the Pixel 7 Pro to take full advantage of their cameras and capture high-quality pictures.
Google's strategy seems to be building on the technology it has already shipped while debuting features that are unique to the Pixel phones. The chip is built on the same 5nm node as last year's, and the CPU core configuration remains unchanged.
Google has switched its GPU, moving from the Mali-G78 to the Mali-G710 in a seven-core configuration. The older G78 used 20 cores, but the reduced core count should not be much of an issue because the G710 uses a newer architecture.
Machine learning is one of the most important features of the Tensor G2. When Google announced the G2, it claimed gains of up to 60% on select ML tasks.
Continuing to use older CPU cores is an interesting move because it puts Google at a disadvantage: the new chip cannot realize the efficiency gains of newer core designs, and peak performance will be limited. However, Google does not seem to be playing the numbers game.
Instead, Google's intentions seem focused on providing its users with unique features driven by machine learning. These include Pixel Call Assist, which relies on AI to display a business's phone-menu options on screen so the caller does not have to sit through the recorded menu.
The chip also powers many camera features, such as the new Night Sight algorithms, which allow the user to take excellent photos even in the dark.