Ampere is collaborating with Qualcomm on an Arm-based AI server
Ampere and Qualcomm have partnered on an AI inferencing server that pairs Ampere's efficient Arm-based CPUs with Qualcomm's Cloud AI 100 Ultra accelerators, delivering a best-of-breed platform for running large AI models at scale while maximizing performance and efficiency.
Ampere has announced it will combine its technological strengths with Qualcomm's to offer a powerful AI inferencing server solution. Both companies specialize in Arm-based chips, but AI is not Ampere's strongest suit, so it makes sense for Ampere to partner with a company that shares its Arm expertise and has also successfully broken into the AI market. The AI-focused server will pair Ampere's CPUs with Qualcomm's Cloud AI 100 Ultra chips to provide a complete model-serving solution.
As Ampere Chief Product Officer Jeff Wittich describes the motivation for the partnership, the company can already point to successful cases of standalone Ampere CPUs running AI inferencing. However, that is not enough at scale, since bigger models bring different requirements. Combining an efficient Ampere CPU that handles most general-purpose tasks with Qualcomm's specialized AI inferencing accelerators therefore presents an ideal solution for data centers looking to maximize performance and efficiency.
The Qualcomm partnership is part of Ampere's annual roadmap update, which also includes the unveiling of a new 256-core AmpereOne chip built on a modern 3nm process. The chips are nearly ready and are expected to roll out later this year. Notably, this generation of AmpereOne chips features 12-channel DDR5 RAM, granting data centers fine-grained control of memory access for diverse workloads. In addition to being performant, the new chips will be cheaper and more energy efficient, positioning them as a suitable alternative to NVIDIA's A10 GPUs. It should be noted that Ampere will not be retiring any of its previous-generation chips from the market.
The partnerships announced with Qualcomm, SuperMicro, and NETINT are the latest step toward delivering performance that meets the demands of modern AI workloads, in a cost-effective and energy-efficient package.