Born out of Google’s obsession with artificial intelligence, Google Lens is still a fairly new feature. Google announced a bunch of new capabilities for it, including a revamped UI, real-time object recognition, and smart text selection, at I/O 2018 earlier this month. It now appears that these features have started rolling out to compatible devices.
Nothing has changed about how you activate Google Lens — it still launches from the Google Assistant pop-up — but you will notice that the old overlay UI has been replaced with a more rounded, Material-style white interface. Google Lens now displays a prompt at the bottom of the screen asking you to ‘tap on objects and text’ to begin recognition. You can pull up on this prompt to see what Google Lens is capable of recognizing: text, products, books & media, places, and barcodes.
As for the real-time recognition feature, you no longer need to point at and tap on things to get Google Lens to identify them. Instead, Google Lens now continuously analyzes what it sees and shows colored dots in the viewfinder over whatever it has recognized. You can tap on these dots to see more info about what it has found among the things you were pointing at.
The updated feature is capable of recognizing more than one object, which is why you might see several colored dots at once in the viewfinder. It also highlights text on the screen via the smart text selection feature that has been around for quite some time.
But, as always, the feature is still hit-or-miss and fails to recognize many of the things it sees. I tried the new Google Lens on a couple of devices here at the Beebom office, and it was still as finicky as it has been from the very start. I hope Google irons out the kinks, as Google Lens could be a great feature if it works smoothly.