There have been numerous occasions when we try to look for something but just can’t come up with the right words to type into the Google Search bar. To address this common problem, Google has introduced a new Multisearch feature in Lens. This ability, initially announced last year, lets you search with both images and text. Here’s how it works.
Google Lens Multisearch Feature Introduced
Google Lens’ Multisearch feature lets you search for something you see by uploading a picture of it along with an accompanying text query, so you can find an answer even when you can’t put the question into words.
This will come in handy when you are searching for a dress you just saw or a decor item you want for your house. Google says you can snap a picture of an object in front of you and “refine” your search by any attribute of that object.
To use it, open the Google app on your Android or iOS device -> tap the Google Lens icon next to the search bar and upload the image -> swipe up and tap the “+ Add to your search” button to type in your text query, and you are good to go. Here’s a look at the process in action.
The company also mentions use cases spanning fashion and home decor, and suggests that the feature works “best” with shopping searches. In another use case, you can attach an image of an object and get an answer to a related query; Google’s example involves a picture of a rosemary plant paired with a query about how to take care of it.
This feature is a result of advancements in AI, although it isn’t currently based on the Multitask Unified Model (MUM). For those who don’t know, MUM is Google’s AI model that can understand information across different formats, including text and images, enabling enhanced searches from an image of an object. Google has detailed this as well and suggested that MUM-powered improvements will be introduced to users soon.
The new Multisearch feature in Google Lens has been introduced as a beta on both Android and iOS and is currently available in English in the US. We expect it to reach more regions and languages soon.