Google released Gemma 2 to researchers and developers worldwide
Credit: Google

Google's newly released Gemma 2 offers state-of-the-art performance among open AI models, combining improved efficiency, broad hardware and framework compatibility, and a focus on responsible AI development, with the aim of empowering researchers and developers worldwide.

by Ellie Ramirez-Camara

Google recently announced the official release of Gemma 2, the next iteration of Gemma, its lightweight, state-of-the-art open model family first introduced earlier this year. Gemma 2 comes in 9 billion and 27 billion parameter sizes, and it delivers better performance and higher inference efficiency than the first generation. The 27 billion parameter model stands out for offering capabilities comparable to models more than twice its size while running efficiently and cost-effectively on a single NVIDIA H100 Tensor Core GPU or TPU host. Similarly, the smaller 9 billion parameter model outperforms other models in its size category, including Llama 3 8B.

In addition to best-in-class performance and a design that allows for efficient, affordable deployments without performance losses, Gemma 2 is optimized to run on a flexible range of hardware, from gaming laptops and high-end desktops to cloud-based deployments. Gemma 2 is available in Google AI Studio, as a quantized version to run locally, or via Hugging Face Transformers on home computers with NVIDIA RTX or GeForce RTX GPUs. Gemma 2 is compatible with popular frameworks such as JAX, PyTorch, and TensorFlow, and is also optimized to run as an NVIDIA NIM microservice, with NVIDIA NeMo optimization to follow.
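For readers who want to try the models through Hugging Face Transformers, the minimal sketch below shows one way to load and prompt a Gemma 2 checkpoint. The model identifier google/gemma-2-9b-it and the generation settings are assumptions for illustration; consult the official Gemma 2 model cards for the exact names and recommended usage, and note that access to the checkpoints typically has to be requested on Hugging Face first.

```python
# Minimal sketch (not an official example): loading a Gemma 2 checkpoint
# with Hugging Face Transformers. The model ID and settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-9b-it"  # assumed instruction-tuned 9B checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision helps fit consumer GPUs
    device_map="auto",           # place weights on the available device(s)
)

prompt = "Summarize what an open-weight language model is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```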

Like its predecessor, Gemma 2 is available under the commercially friendly Gemma License and is released with an emphasis on responsible deployment. To ensure that the resources needed to build responsibly are available, Google recently open-sourced the LLM Comparator, a tool that provides in-depth evaluations of language models. The company is also working to bring its SynthID text watermarking technology to Gemma models. Future plans include the release of a 2.6 billion parameter Gemma 2 model and the availability of Gemma 2 in the Vertex AI Model Garden.

