Google's A3 supercomputer VMs

by Soham Sharma

Google Cloud has launched the new A3 supercomputer virtual machine (VM) at the Google I/O event. It's exciting to see how this new technology can improve the performance, speed, and cost-effectiveness of machine learning models and generative AI. These machines are designed specifically for resource-hungry use cases like large language models (LLMs) and generative AI, pairing modern CPUs and next-gen Nvidia GPUs with improved host memory and major network upgrades.

The A3 VMs come with Nvidia’s H100 GPUs, 4th Gen Intel Xeon Scalable processors, 2 TB of host memory, and 3.6 TB/s of bisectional bandwidth between the GPUs. These machines can deliver up to 26 exaFlops of AI performance, which can reduce the time and cost of training larger machine learning models. Workloads on these VMs run on Google’s specialized Jupiter data center networking fabric, which scales to 26,000 highly interconnected GPUs.

Google is offering the A3 in two ways: customers can run it themselves, or Google can handle most of the heavy lifting through a managed service. The do-it-yourself approach involves running the A3 VMs on Google Kubernetes Engine (GKE) and Google Compute Engine (GCE), while the managed service runs the A3 VMs on Vertex AI, the company’s managed machine learning platform.
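For the do-it-yourself route, provisioning would look roughly like any other Compute Engine VM creation. The sketch below uses the gcloud CLI; the machine type name, zone, and image choices are assumptions for illustration (A3 is in preview, so check Google Cloud's documentation for the actual machine type names and regional availability):

```shell
# Hypothetical sketch: creating an A3 VM on Google Compute Engine.
# The machine type and zone below are assumptions, not confirmed values.
gcloud compute instances create my-a3-instance \
    --zone=us-central1-a \
    --machine-type=a3-highgpu-8g \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --maintenance-policy=TERMINATE
```

GPU-backed instances generally cannot live-migrate, hence the `TERMINATE` maintenance policy; on GKE the equivalent step would be adding a node pool with the same machine type.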

The A3 VMs can be a cost-effective solution for customers who require an enormous amount of power to train more demanding workloads, whether that involves complex machine learning models or LLMs running generative AI applications. By using the A3 VMs, customers can benefit from Google’s specialized data center networking fabric and the ability to adjust the topology on demand.

Google is currently accepting sign-ups for a preview waitlist, and I can't wait to see how quickly the market adopts these new VMs. This is a thrilling time for high-performance computing, and it's reassuring to see companies invest in new and innovative technologies that can improve our lives. With LLMs and generative AI gaining popularity, it is clear that companies need to invest in high-performance computing solutions to stay ahead of the competition.

