Movie Gen is Meta's latest contribution to AI-powered media generation research
Meta has unveiled Movie Gen, a comprehensive AI system that can generate, edit, and personalize videos with matching audio using simple text prompts, marking a significant advancement in AI-powered content creation tools.
Continuing its practice of sharing fundamental AI research advances, Meta recently previewed Movie Gen, a suite of generative AI models that can generate videos from text prompts, make localized edits to existing videos, create personalized videos from a text prompt combined with a reference image, and produce synchronized audio for a video, including ambient sound and music.
Movie Gen's main component is a 30B-parameter transformer model capable of producing high-definition videos up to 16 seconds long at 16 frames per second, or up to 10 seconds long at 24 fps. A separate 13B-parameter model takes a video and an optional text prompt as input and generates up to 45 seconds of matching audio, including ambient sound, sound effects (Foley), and instrumental background music.
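For a rough sense of scale, the quoted duration and frame-rate combinations imply the following frame counts per generated clip. This is a purely illustrative calculation based on the figures above; Movie Gen itself has not been released as a library or API.

```python
# Illustrative arithmetic only: frame counts implied by the video model's
# quoted settings (from the announcement above); Movie Gen is not publicly
# available, so nothing here calls the model itself.
configs = {
    "16-second clip at 16 fps": {"duration_s": 16, "fps": 16},
    "10-second clip at 24 fps": {"duration_s": 10, "fps": 24},
}

for name, cfg in configs.items():
    frames = cfg["duration_s"] * cfg["fps"]
    print(f"{name}: {cfg['duration_s']} s x {cfg['fps']} fps = {frames} frames")

# Output:
# 16-second clip at 16 fps: 16 s x 16 fps = 256 frames
# 10-second clip at 24 fps: 10 s x 24 fps = 240 frames
```

Both settings work out to roughly 250 frames per clip, which gives a sense of the sequence lengths the 30B-parameter video model must handle in a single generation.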
Movie Gen's full technical and evaluation details are available in the accompanying research paper. While Meta acknowledges current limitations in processing time and output quality, the company envisions a potential future release that would let creators of all skill levels bring their artistic visions to life. As part of the preparation for such a release, Meta plans to collaborate with filmmakers and creators to refine the technology based on their feedback.