Last month, Google introduced its newest smartphone, the Pixel 2, and boasted about how Lens builds on the company's advancements in computer vision and machine learning. Now Google is bringing its artificial intelligence-powered Lens tool to all Pixel phones in the coming weeks as part of an update to Google Assistant.
Google Lens is a computer vision system that lets users point their Pixel phone's camera at an object and get information in real time, as the AI-powered algorithm is capable of recognizing real-world items. Lens was first introduced back in May and made available within Google Photos last month as part of the Pixel 2 launch. The feature can also be used on photos or screenshots you have already taken.
The company says the feature is rolling out gradually and will reach all Pixel phones in the US, UK, Australia, Canada, India and Singapore. Once it goes live, users will see the Google Lens logo in the bottom right corner of the Assistant screen. Until now, the only way to use Lens on a Pixel phone was through Google Photos.
Google says Lens can save contact details from business cards, follow URLs, dial phone numbers, recognize addresses and landmarks, and scan barcodes. It can also pull up information about movies and books just by looking at a poster or a book's spine.
The company says Lens will only improve as it learns more about our surroundings and becomes more adept at identifying people, objects, and all manner of other things in the real world.