Google Introduces Gemini 1.5 Pro with a Massive 1 Million Context Window

Gemini 1.5 Pro
Image Courtesy: Google
In Short
  • Google has introduced its next-generation Gemini 1.5 Pro model built on the MoE architecture.
  • The Gemini 1.5 Pro model offers a context window of up to 1 million tokens. It performs similarly to the Gemini 1.0 Ultra model despite being a mid-size model.
  • This model is currently in preview and developers can test the new model via AI Studio and Vertex AI.

Just a week after launching Gemini 1.0 Ultra alongside the Bard rebrand, Google is back with a new model to compete with GPT-4: Gemini 1.5 Pro, the successor to Gemini 1.0 Pro, which currently powers the free version of Gemini (formerly Bard).

While the Gemini 1.0 family of models has a context window of up to 32K tokens, the 1.5 Pro model raises the standard context length to 128K tokens. On top of that, it supports a massive context window of up to 1 million tokens, far beyond GPT-4 Turbo’s 128K and Claude 2.1’s 200K tokens.

Gemini 1.5 Pro Built on Mixture-of-Experts (MoE) Architecture

Google says Gemini 1.5 Pro is a mid-size model, yet it performs nearly on par with Gemini 1.0 Ultra while using less compute. This is possible because the 1.5 Pro model is built on the Mixture-of-Experts (MoE) architecture, which OpenAI’s GPT-4 is also widely reported to use. It is the first Gemini model Google has built on MoE in place of a single dense model.

In case you are unfamiliar with the MoE architecture, it consists of several smaller expert networks, and a routing (gating) function activates only the experts relevant to the input at hand. Running specialized experts for specific tasks, rather than the entire model, delivers better and more efficient results.
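The routing idea can be sketched in a few lines. This is a minimal, illustrative toy (scalar inputs, made-up experts and gate weights) and not Google's actual architecture, but it shows the key point: a gate scores the experts and only the top-scoring one(s) run for a given input.

```python
import math

def softmax(scores):
    """Convert raw gate scores into a probability distribution."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(x, experts, gate_weights, top_k=1):
    """Route input x to the top_k experts chosen by the gate and
    return their probability-weighted combined output. Only the
    selected experts are ever executed."""
    probs = softmax([w * x for w in gate_weights])
    ranked = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)
    chosen = ranked[:top_k]
    norm = sum(probs[i] for i in chosen)
    return sum(probs[i] / norm * experts[i](x) for i in chosen)

# Toy "experts": each is just a different scalar function.
experts = [lambda x: 2 * x, lambda x: x + 10, lambda x: -x]
gate_weights = [1.0, 0.5, -1.0]

# With top_k=1, only the highest-scoring expert (here the first) runs.
print(moe_forward(3.0, experts, gate_weights, top_k=1))  # prints 6.0
```

The efficiency win is that compute per input scales with `top_k` experts, not with the total number of experts, so a model can hold many parameters while activating only a fraction of them per token.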

Gemini 1.5 Pro context window compared to GPT-4 Turbo and Claude 2.1
Image Courtesy: Google

Coming to the large context window of Gemini 1.5 Pro, it can ingest vast amounts of data in one go. Google says the 1 million-token context window can hold about 700,000 words, 1 hour of video, 11 hours of audio, or codebases with over 30,000 lines of code.
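A quick back-of-envelope check shows how those figures relate. The conversion rates below are derived from the article's own numbers (roughly 1.43 tokens per word, roughly 33 tokens per line of code) and are assumptions, not official tokenizer figures.

```python
# Rough estimate of what fits in a 1M-token context window.
# Ratios are derived from the figures Google quotes (700,000 words
# and ~30,000 lines of code per 1M tokens); treat them as approximations.
CONTEXT_TOKENS = 1_000_000
TOKENS_PER_WORD = CONTEXT_TOKENS / 700_000       # ~1.43 tokens per word
TOKENS_PER_CODE_LINE = CONTEXT_TOKENS / 30_000   # ~33 tokens per line

def fits_in_context(words=0, code_lines=0):
    """Return True if the estimated token count fits in the window."""
    used = words * TOKENS_PER_WORD + code_lines * TOKENS_PER_CODE_LINE
    return used <= CONTEXT_TOKENS

print(fits_in_context(words=500_000))                      # True
print(fits_in_context(words=500_000, code_lines=20_000))   # False
```

For comparison, the same 500,000-word document alone would overflow a 128K- or 200K-token window several times over.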

To test Gemini 1.5 Pro’s retrieval capability across such a large context window, Google ran the Needle In A Haystack test, and according to the company, the model recalled the needle (a planted text statement) 99% of the time.

In our comparison between Gemini 1.0 Ultra and GPT-4, we ran the same test, and Gemini 1.0 Ultra simply failed to retrieve the statement. We will definitely put the new Gemini 1.5 Pro model through the same test and share the results.

To be clear, the 1.5 Pro model is currently in preview, and only developers and enterprise customers can test it via AI Studio and Vertex AI. You can join the waitlist to request access, and access to the model will be free during the testing period.

VIA Google Blog