NVIDIA to Acquire GPU Orchestration Software Provider Run:ai
NVIDIA recently entered into a definitive acquisition agreement with Run:ai, the Kubernetes-based workload management and orchestration software provider, to help its customers use their AI computing resources more efficiently. AI workloads have become increasingly complex. Generative AI models, recommender systems, and search engines require sophisticated scheduling to optimize performance at the system and infrastructure levels. Run:ai's open platform, built on Kubernetes, enables enterprises to manage and optimize their compute infrastructure across on-premises, cloud, and hybrid environments.
Run:ai built an open, Kubernetes-based platform that supports all major Kubernetes variants and integrates with third-party AI tools and frameworks. Other key features include a centralized interface for managing shared compute resources, tools for managing users and teams, the ability to pool GPUs and share capacity across workloads, and efficient utilization of GPU clusters.
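To make the Kubernetes foundation concrete, the sketch below builds a plain Kubernetes Pod manifest that requests NVIDIA GPUs via the standard `nvidia.com/gpu` resource name exposed by the NVIDIA device plugin. This is generic Kubernetes, not Run:ai's own API; orchestration layers like Run:ai add scheduling, pooling, and quota logic on top of requests like this. The pod name and container image are illustrative placeholders.

```python
import json


def gpu_pod_manifest(name: str, image: str, gpus: int = 1) -> dict:
    """Build a Kubernetes Pod manifest that requests NVIDIA GPUs.

    `nvidia.com/gpu` is the extended resource advertised by the NVIDIA
    device plugin; the Kubernetes scheduler will only place the pod on a
    node with at least `gpus` free GPUs.
    """
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [
                {
                    "name": name,
                    "image": image,
                    # GPUs are requested as limits; requests and limits
                    # must match for extended resources.
                    "resources": {"limits": {"nvidia.com/gpu": gpus}},
                }
            ],
            "restartPolicy": "Never",
        },
    }


# Hypothetical workload: a 2-GPU inference pod (image name is an example).
manifest = gpu_pod_manifest("llm-inference", "nvcr.io/nvidia/pytorch:24.03-py3", gpus=2)
print(json.dumps(manifest, indent=2))
```

An orchestrator managing a shared cluster decides when and where manifests like this run, which is how pooled GPU capacity gets divided fairly across teams and tasks.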
In the near future, NVIDIA plans to continue offering Run:ai's products under the same business model and to invest in the Run:ai product roadmap. This includes enabling Run:ai on NVIDIA DGX Cloud, NVIDIA's platform for enterprise developers, which offers an integrated full-stack service for generative AI. Run:ai will be available to NVIDIA HGX, DGX, and DGX Cloud customers for their AI workloads, especially large language model deployments. Run:ai's products are already integrated with several NVIDIA offerings. The combined platforms will continue to give customers choice and flexibility in their deployments, enabling "a single fabric that accesses GPU solutions anywhere".