
Adobe is planning to bring generative AI into Premiere Pro workflows

Adobe doesn't want to get caught unprepared in the AI-powered video generation and editing race. The company recently shared its plans to launch the Firefly Video Model, add AI-powered features to Premiere Pro, and integrate third-party video generation solutions to power these tools.

by Ellie Ramirez-Camara
AI-generated diamonds | Credit: Adobe via TechCrunch

Adobe reports that its Premiere Pro team has collaborated with the professional video community to find productive applications for generative AI in video editing workflows. They've found that to be truly helpful, any generative AI integration has to satisfy three key requirements: it must integrate into everyday workflow tools like Premiere Pro, allow the editing of existing content (object addition/removal and clip extension) without sacrificing control, and enable users to choose the model best suited for the task at hand. With this in mind, Adobe has announced that it will bring a proprietary video generation model to its collection of Firefly generative AI models. The Firefly Video Model will join the existing Image, Vector, Design, and Text Effects Models.

The company is also looking for ways to integrate third-party models and tools such as OpenAI's Sora, RunwayML, and Pika into the Premiere Pro workflow. This is especially important since these third-party solutions will reportedly join the Firefly Video Model in powering three generative AI features coming to Premiere Pro:

  • Generative Extend will make existing clips longer by adding extra footage at the beginning or end of a clip, which is useful for fine-tuning an edit, holding on a shot a bit longer, or covering a complicated transition.
  • Object Addition/Removal, a rather self-explanatory feature, will leverage AI-assisted masking and tracking tools to select objects as they move over time so they can be replaced or deleted. Areas where an object needs to be added can be tracked in the same way, so the generated content remains consistent across frames.
  • Generative B-roll will let users generate short clips from text prompts, producing anything from placeholders to B-roll footage that would be complicated to shoot or that features concepts too fantastical or impractical to recreate.

As other sources point out, Adobe's announcement is quite abstract, and details of how these features may pan out are scarce. For instance, there is virtually no information on how the Firefly Video Model will come to be, or at least what kind of data it will be trained on. This is a sensitive topic, given that the current landscape is fraught with lawsuits and debates over whether fair use really grants OpenAI and others the right to train models on video data scraped from the internet. In cases like these, Adobe usually obtains training data from its vast stock library, and when that proves insufficient, the company often turns to submitted contributions. Copyright concerns are sensitive enough that, even without details on the data the Firefly Video Model will be trained on, Adobe has already announced that the AI video generation features will be enriched with Content Credentials, so generated content gets appropriately labeled as such.

Bloomberg recently reported that it was granted access to documents detailing how Adobe is offering its network of contributors $120 in exchange for about 100 short video clips featuring simple anatomy shots of hands and feet, as well as people engaging in everyday actions, interacting with everyday objects like smartphones, and expressing emotions. The pay per minute of video can range from about $2.62 to $7.25 for higher-quality submissions. But even if the mystery of where the training data will come from has been solved, the path ahead is still fraught with obstacles, including the fact that previous Firefly models have failed to live up to the hype and that there is still no clarity on how Adobe will negotiate the third-party integrations it is already advertising.

