Google I/O 2023 was all about generative AI. The event showcased how Google plans to take the AI game to the next level with fluid product integrations and an improved end-user experience. Google promised these implementations would arrive soon, and now some of those "next-gen" AI features are rolling out to Search and other Google products. Read along to know more.
Google Products Just Got Smarter With Generative AI
The first update comes to Google Search. If you remember, Google opened the Search Generative Experience (SGE) to Search Labs subscribers last month. Those lucky users gained access to generative search features like AI-generated snapshots powered by PaLM 2, SGE's conversational capabilities, and much more.
With the recent update, Google is further expanding its AI-generated snapshots feature. Now, you will be able to look for hotels, restaurants, travel destinations, and much more using generative AI. For your convenience, key data pertaining to your query will be pulled from multiple sources and summarized for you. You can expand on a single query or even throw follow-up questions at Google, which carries your earlier context forward via SGE's conversational mode. A rough sketch of the idea follows below.
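To make the concept concrete, here is a minimal, purely illustrative Python sketch of the snapshot idea: merge key facts from several sources into one summary, and keep earlier queries around as context for follow-ups. The SnapshotSession class and its toy data are our own inventions for illustration; Google's actual SGE pipeline, built on PaLM 2, is of course far more sophisticated.

```python
from dataclasses import dataclass, field

@dataclass
class SnapshotSession:
    history: list = field(default_factory=list)  # earlier queries act as context

    def snapshot(self, query: str, sources: dict) -> str:
        """Merge one key fact per source into a single summary."""
        self.history.append(query)
        facts = [f"- {name}: {fact}" for name, fact in sources.items()]
        return f"Snapshot for '{query}':\n" + "\n".join(facts)

    def follow_up(self, question: str) -> str:
        """Follow-ups reuse earlier queries as context (the conversational part)."""
        context = " -> ".join(self.history)
        self.history.append(question)
        return f"[context: {context}] {question}"

session = SnapshotSession()
print(session.snapshot(
    "family-friendly hotels in Maui",
    {
        "travel blog": "three resorts have dedicated kids' pools",
        "review site": "average family rating 4.6/5",
    },
))
print(session.follow_up("which of those allow pets?"))
```

The point of the sketch is the second call: because the session remembers the first query, a vague follow-up like "which of those allow pets?" still makes sense.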
The next update comes to Google Bard. Google's generative search abilities are built on the foundation of Bard, the Pathways Language Model 2 (PaLM 2), and the Multitask Unified Model (MUM). Bard first opened up for early access in March and expanded further at Google I/O 2023. Since then, we have seen glimpses of what Google truly brings to the AI table.
Now, Bard is getting smarter than ever, with the ability to generate results based on "image prompts." Early SGE adopters have already seen this feature in action. It lets you get helpful responses from Bard using visual cues. For example, you can ask Bard to suggest popular tourist attractions in Mexico, and it will summarize the data for you, complete with pictorial depictions. You can even upload your own image and have Bard generate text for it; for example, feed Bard a picture of two dogs and ask it to write a funny caption. A rough sketch of what such an image-plus-text prompt could look like is below.
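Bard is a web product, not a public API, so the shape of these prompts is not documented anywhere. Still, as a rough mental model, an image prompt is simply an image paired with a text instruction. The hypothetical build_image_prompt helper below shows one way such a payload could be bundled; none of it reflects Bard's actual internals.

```python
import base64
import json

def build_image_prompt(image_path: str, instruction: str) -> str:
    """Bundle an image and a text instruction into one request payload."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    return json.dumps({
        "instruction": instruction,
        "image": {"mime_type": "image/jpeg", "data": image_b64},
    })

# "two_dogs.jpg" is a placeholder path, not a real asset.
payload = build_image_prompt("two_dogs.jpg", "Write a funny caption for this photo.")
```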
If you are planning to do some shopping using Google Search, AI will be there to assist you. The first implementation is a virtual try-on tool. Starting today, you will be able to see how a garment actually looks before you buy it, shown on real models with different skin tones, body types, and more. Google says the feature is supposed to replicate "how it would drape, fold, cling, stretch, and form wrinkles and shadows on a diverse set of real models in various poses."
Another update adds more granular refinement options while shopping. Google is calling this feature Guided Refinement, and it is meant to make narrowing down products, by inputs like color, style, and pattern, feel as intuitive as browsing the racks of a real store. For now, the feature is available in the U.S. only.
With AI in Google Lens, you will now be able to search for skin conditions. This is made possible via Lens' new Find My Skin feature. All you need to do is take a picture and search with it; you will be greeted with visual matches for your condition. The feature works on other body parts, like nails and hair, as well. In essence, this is a visual similarity search, roughly sketched below.
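Under the hood, features like this generally boil down to visual similarity search: represent the query photo as a vector and find the closest reference images. The toy sketch below, with made-up three-dimensional "embeddings," illustrates only the matching step; it is not Lens' actual model or data.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Standard cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Tiny made-up "embeddings" of reference photos (real systems use
# high-dimensional vectors produced by a trained vision model).
reference_images = {
    "condition A": [0.9, 0.1, 0.3],
    "condition B": [0.2, 0.8, 0.5],
    "condition C": [0.4, 0.4, 0.9],
}
query_photo = [0.85, 0.15, 0.35]  # embedding of the user's picture

best_match = max(
    reference_images,
    key=lambda name: cosine_similarity(reference_images[name], query_photo),
)
print(f"Closest visual match: {best_match}")
```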
Next up, AI in Google Maps will now let you 'time travel.' Well, sort of. Google Maps is getting a new AI-based feature and two new updates. With Immersive View, you will now be able to see a 3D-mapped visualization of your destinations and routes. For example, if you want to travel from New York to San Francisco, you will be able to map out a 3D preview of your entire journey, down to small details along the route: the trees, which restaurants are available, which side of the road has a gas station, and much more.
As for the time travel part, it is built into the Immersive View update itself. Google will now tell you the time, temperature, weather conditions, and traffic conditions at your destination well in advance. For example, if you are expected to arrive in San Francisco at 12 pm, you can get a detailed weather and traffic forecast for that time as early as 9 am. The predictive data is stitched together from pre-existing (historical) data using AI; a simplified sketch of the idea follows.
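As a simplified illustration of how such a forecast could be stitched together, the sketch below looks up made-up historical averages at the predicted arrival hour. The forecast_at_arrival helper and all figures are hypothetical; Google's actual models are far more elaborate than a table lookup.

```python
from datetime import datetime, timedelta

# Hour of day -> (typical temperature in °F, typical traffic). Made-up values.
historical = {
    9:  (58, "light"),
    12: (64, "heavy"),
    15: (66, "moderate"),
}

def forecast_at_arrival(departure: datetime, travel_hours: float):
    """Estimate conditions at the ETA from historical hourly averages."""
    eta = departure + timedelta(hours=travel_hours)
    # Snap to the nearest hour we have data for.
    hour = min(historical, key=lambda h: abs(h - eta.hour))
    temp, traffic = historical[hour]
    return eta, temp, traffic

# Depart at 9 am, drive three hours, arrive around noon.
eta, temp, traffic = forecast_at_arrival(datetime(2023, 6, 15, 9, 0), 3.0)
print(f"ETA {eta:%H:%M}: ~{temp}°F, {traffic} traffic expected")
```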
So what do you think of these new AI implementations? Do you think these will improve your experience with Google products? Do let us know in the comments below.