- Google is expanding the context window of its Gemini 1.5 Pro model from 1 million to 2 million tokens.
- The Gemini 1.5 Pro model is now available on Gemini Advanced with a context window of 1 million tokens.
- The 2M context window will be available to developers in a private preview.
At the Google I/O 2024 event, the search giant announced that it's scaling its powerful Gemini 1.5 Pro model up to 2 million tokens of context, double the previous 1 million-token limit. The expanded window will be available to developers in a private preview. Google says its ultimate aim is to unlock infinite context.
Apart from that, Google has improved Gemini 1.5 Pro across several areas, including translation, dialogue, coding, reasoning, and writing. The improved model, with its 1 million-token context window, is now available to all developers globally.
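For developers, the model is reached through the Gemini API. As a rough sketch (not part of Google's announcement), here is how a request with a long document might look using the `google-generativeai` Python SDK; the model ID `gemini-1.5-pro` and the file name are assumptions for illustration and may differ from the exact identifiers in your SDK version.

```python
import google.generativeai as genai

# Configure the SDK with your API key (placeholder value).
genai.configure(api_key="YOUR_API_KEY")

# Model ID is an assumption; check the model listing for the exact name.
model = genai.GenerativeModel("gemini-1.5-pro")

# Upload a large document via the File API so it can fill the long context window.
# The file name is hypothetical.
doc = genai.upload_file("long_report.pdf")

# Ask the model to reason over the uploaded document plus a text prompt.
response = model.generate_content([doc, "Summarize the key findings in this report."])
print(response.text)
```

Uploading the file separately keeps the request itself small while still letting the model read the full document within its long context window.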
As for consumers, Google has finally brought the improved Gemini 1.5 Pro model with a 1 million-token context window to Gemini Advanced. It's out of preview, and Gemini Advanced subscribers can take advantage of the large context window and the model's native multimodal capabilities. It's currently available in 35 languages.
I tried Gemini 1.5 Pro a few months back, and it performed remarkably well. This is the first AI model to make such a huge context window available to consumers. You can upload images, videos, audio clips, files, documents, code repositories, and more, and Gemini 1.5 Pro can process all of them without any hiccups.