- Google will launch the Gemma 2 open-source model in June.
- It has 27 billion parameters, making it much larger than the Gemma 2B and 7B models.
- Google claims Gemma 2 can outperform some models more than twice its size.
Google released a family of open-source models, Gemma 2B and 7B, in February 2024. Now, at the Google I/O 2024 event, the search giant announced that it is launching Gemma 2 in June. The new Gemma 2 model will also be open-source, and this time it will have 27B parameters.
Google says that developers and researchers have been asking for a larger open model that is still easy to run and use, and Gemma 2 is the culmination of that effort. Google further says that the Gemma 2 27B model will outperform “some models that are more than twice its size”.
In addition, the company says Gemma 2 will be lightweight enough to run efficiently on GPUs or a single TPU hosted in Vertex AI. So far, Meta has been leading the AI race in the open-source space with its Llama 3 models. We have seen Llama 3 8B and 70B perform remarkably well for their sizes.
Now, we will have to wait and see how well the upcoming Gemma 2 model performs. There are plenty of open-source models from Mistral, Meta, Microsoft, and other AI companies competing in this space. Let us know if you are excited about the Gemma 2 open-source model in the comments below.