Runway catches up in the video generation race with Gen-3 Alpha

Last week saw the public release of not one but two video generation models: Kling, developed by Kuaishou Technology, the company behind a popular TikTok competitor in China; and Luma AI's Dream Machine. Understandably, these releases have heated up the video generation race, putting OpenAI as well as firms like Runway and Pika under pressure. Fortunately, Runway had already been working on its next generation of models, the Gen-3 family. The company recently previewed some details of the upcoming Gen-3 Alpha, the first model in the family.

Gen-3 Alpha will power Runway's generative AI services, including the Text to Video, Image to Video, and Text to Image tools; the Motion Brush, Advanced Camera Controls, and Director Mode control modes; and upcoming tools that will give users more fine-grained control over aspects of a video such as structure, style, and motion. Gen-3 Alpha also ships with new guardrails, including an improved moderation system and support for the C2PA provenance standard.

Moreover, Gen-3 Alpha's training contributes to its strong performance. The model was trained by a multidisciplinary team of research scientists, engineers, and artists, who ensured it understands cinematic and stylistic terminology. Gen-3 Alpha was also trained on a proprietary dataset featuring highly descriptive prompts for each complete video and for the temporal transitions within it. These techniques yield a highly capable model, able to handle everything from complex transitions to photorealistic humans.

Gen-3 Alpha will power the experiences in Runway's free and paid subscription tiers. Additionally, the company announced partnerships with entertainment and media organizations to deliver custom versions of the Gen-3 models. Runway is currently taking requests for information about this new customization service on its website.