Google DeepMind released a new, compact, open-source AI model, Gemma 2 2B, on Thursday. With just 2.6 billion parameters, the language model surpasses rivals such as OpenAI's GPT-3.5 and Mistral AI's Mixtral 8x7B, both of which are far larger.
The model is also deliberately lightweight, allowing it to run on a wider range of devices, including smartphones, while offering performance comparable to GPT-3.5.
LMSYS, an independent AI research organisation that tested Gemma 2 2B, said the model achieved a score of 1130 in its evaluation, placing it slightly ahead of GPT-3.5 Turbo-0613 and Mixtral 8x7B, both of which have more than ten times as many parameters.
Google's Developer Blog stated that Gemma 2 2B was built using distillation techniques, which reduce computational requirements by transferring knowledge from larger models into smaller ones.
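To illustrate the general idea behind distillation (this is a minimal, generic sketch, not Google's actual training recipe), a smaller "student" model can be trained to match the softened output distribution of a larger "teacher" model. The function name and temperature value below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften both distributions with a temperature, then penalise the
    # divergence between the student's and the teacher's predictions.
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable to a hard-label loss.
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * temperature ** 2

# Toy example: a batch of 4 predictions over a 10-token vocabulary.
teacher_logits = torch.randn(4, 10)
student_logits = torch.randn(4, 10, requires_grad=True)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()
print(loss.item())
```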
Earlier, in June, Google DeepMind had announced the Gemma 2 9B and 27B models, but it has since moved to building even smaller and more efficient models as the market for mobile and edge-based AI is expected to grow.
Developers can access the model on the Hugging Face platform and implement it with PyTorch or TensorFlow.
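As a rough sketch of what that access looks like in practice (assuming the Hugging Face transformers library, the instruction-tuned model ID "google/gemma-2-2b-it", and that the user has accepted the model's licence on Hugging Face), loading and prompting the model might look like this:

```python
# Illustrative sketch: loading Gemma 2 2B from Hugging Face with transformers.
# The model ID and generation settings are assumptions, not from the article.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-2b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the small parameter count fits on modest hardware
    device_map="auto",
)

prompt = "Explain knowledge distillation in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```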