Google Unveils Gemini 2.0 and ‘Deep Research’ For Advanced Users

Google releases the Gemini 2.0 Flash model
Image Credit: Google
In Short
  • Google has released the Gemini 2.0 Flash model, and it's available to all users in the Gemini web app.
  • Google says Gemini 2.0 Flash outperforms its larger Gemini 1.5 Pro model on key benchmarks.
  • It supports native multimodal output, including image generation and multilingual audio output.

After months of anticipation, Google has finally released its next-generation Gemini 2.0 model. Google is starting with the Gemini 2.0 Flash model, which, according to the company, outperforms its flagship Gemini 1.5 Pro model on key benchmarks. Google says it’s twice as fast and more efficient than its larger models.

Beyond that, it supports native multimodal output, such as native image generation and multilingual text-to-speech audio. It can also natively call Google Search and execute code. As for benchmarks, Gemini 2.0 Flash scores 62.1% on GPQA (Diamond), 76.4% on MMLU-Pro, and 92.9% on Natural2Code.

Gemini 2.0 Flash benchmark comparison with Gemini 1.5 Pro
Image Credit: Google

Gemini 2.0 Flash is now available on the web version of Gemini to all users, both free and paid. To use it, click the model drop-down menu and select “Gemini 2.0 Flash Experimental”. Google says the Gemini 2.0 Flash model will soon be added to the Gemini app on Android and iOS.
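Beyond the web app, Google also exposes Gemini models to developers through its Generative Language REST API. The snippet below is a minimal, hedged sketch of what a request might look like: the model id `gemini-2.0-flash-exp` is assumed from the “Experimental” label in the web app, the endpoint shape follows Google’s public API conventions, and `GEMINI_API_KEY` is an assumed environment variable. Verify all of these against current documentation before relying on them.

```python
import json
import os
import urllib.request

# Assumed model id, matching the "Gemini 2.0 Flash Experimental" label
# in the web app; check Google's docs for the current identifier.
MODEL = "gemini-2.0-flash-exp"
ENDPOINT = (
    "https://generativelanguage.googleapis.com/v1beta/models/"
    f"{MODEL}:generateContent"
)


def build_request(prompt: str) -> dict:
    """Build the JSON body for a single-turn text request."""
    return {"contents": [{"parts": [{"text": prompt}]}]}


def generate(prompt: str, api_key: str) -> str:
    """Send the prompt to the API and return the first candidate's text."""
    body = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{ENDPOINT}?key={api_key}",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["candidates"][0]["content"]["parts"][0]["text"]


if __name__ == "__main__":
    # Only attempt a network call when an API key is actually configured.
    key = os.environ.get("GEMINI_API_KEY")
    if key:
        print(generate("Summarize Gemini 2.0 Flash in one sentence.", key))
```

Note that the same `generateContent` call, with different request parts, is how the API handles multimodal input; the text-only body above is just the simplest case.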

Testing the Gemini 2.0 Flash model

As for paid Gemini Advanced users, they get access to a new feature called “Deep Research”, which uses advanced reasoning to tackle complex queries and can compile detailed reports for you. In effect, Gemini Advanced users get something akin to OpenAI’s o1 reasoning models, which “think” step-by-step and reason through harder questions.

Google says Gemini 2.0 is coming to AI Overviews early next year, enabling them to handle complex queries, advanced math equations, coding questions, and multimodal input. Notably, the search giant says the Gemini 2.0 model was trained entirely on its custom TPUs, such as Trillium, and inference also runs on its in-house TPUs.
