
Video Gen AI battles begin, as Adobe releases Firefly Video model into the world


Miami, Florida: The year started off with a glimpse at the next chapter of generative artificial intelligence (AI). Video generation, to be precise. OpenAI announced Sora in February but shied away from releasing it to anyone except ‘red teamers’ assessing risks and accuracy. In October, Meta talked about Movie Gen, its AI video generator, but that too is not available to the public. Adobe has overtaken them all, hopefully balancing accuracy and user safety, with an announcement-to-release transition that lasted all of 30 days. HT had reported in September that Adobe wanted to make sure the Firefly Video Model was safe for users before releasing it. That time, the company insists, has come.

The basic premise of Firefly Video model is text to video and image to video. (Official image)

“We understand that technology can have unintended consequences. It’s why we took on the responsibility of understanding the implications of this technology, on all of us. We focused on designing a commercially safe solution to make sure that we respect creator and intellectual property rights,” Shantanu Narayen, CEO of Adobe, made clear the company’s vision for AI at the Adobe MAX keynote. A vision in which Firefly plays a pivotal role.

The basic premise of the Firefly Video model is text to video and image to video, that is, using generative AI to create videos in the same way it has regaled us with generated photos over the past couple of years. Generation starts either with a prompt, or with suggestions from a shared media file. But it isn’t restricted to that. Within Adobe’s ecosystem of apps and editing platforms, it is designed to do a lot more.

“Video is hard. We’ve been working on it for a while, and we weren’t happy with the quality. But our research department has done wonders to handle the 100 times more pixels, 100 times more data, using thousands and thousands of GPUs (or graphics processing units, a class of computing hardware) to make sure we master this research process,” says Alexandru Costin, Vice President of Generative AI at Adobe, detailing the challenges they faced, in a briefing of which HT was a part.

For example, Generative Extend in Premiere Pro, the video editing suite, uses the Firefly Video model. The idea is to use AI to create footage that can fill gaps in b-roll that’s being edited, smooth transitions, or even allow editors to hold a frame for longer to help with more precise editing.

There are three main ways of using the Firefly Video model directly. The first is text-to-video prompts, with an option to choose from a wide variety of camera controls (angle, motion and zoom specifics), aspect ratio and frames-per-second selections, as well as reference images to guide the footage. This will also be the method to generate atmospheric elements like fire, water, light leaks and smoke. These elements will be generated on a black or green background, giving editors the flexibility to layer them over existing video footage using blend modes in Adobe’s Premiere Pro or After Effects software.
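To make those controls concrete, here is a minimal sketch of what a text-to-video request could look like if driven programmatically. Adobe has not published a public Firefly Video API at the time of writing; the endpoint, field names and token below are hypothetical placeholders that simply mirror the controls described above (prompt, camera angle/motion/zoom, aspect ratio, frames per second).

```python
import requests

# Hypothetical endpoint and credential: Adobe has not documented a public
# Firefly Video API; these names are placeholders for illustration only.
API_URL = "https://firefly.example.adobe.com/v1/video/generate"
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"

payload = {
    "prompt": "smoke drifting slowly across a black background",
    "cameraControls": {          # the angle, motion and zoom specifics
        "angle": "low",
        "motion": "slow pan right",
        "zoom": "none",
    },
    "aspectRatio": "16:9",
    "framesPerSecond": 24,
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=payload,
    timeout=120,
)
response.raise_for_status()

# Assume the service returns a URL to the rendered clip.
print(response.json().get("videoUrl"))
```

A prompt like the one above, rendered on a plain black background, is what makes the atmospheric-element workflow possible: the clip can be dropped onto a higher track and composited with a screen or lighten blend mode.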

“We’ve heard from many of you that filling gaps in your timeline where visual effects shots are planned to be added later helps to gain creative buy-in as you craft your story. No more ‘insert shot here’ placeholders needed,” says Meagan Keane, Principal Product Marketing Manager for Adobe Pro Video.

“Visualizing creative intent for difficult-to-capture or expensive shots is challenging for teams working on tight budgets and quick turnarounds. Using Adobe Firefly to visualize and plan these shots before going into VFX or back to set for pick-up shots helps to streamline communication between production and post-production,” adds Keane, detailing relevance during the film editing process.

The second method is image to video, which means you can share a reference image alongside a text prompt. This adds the flexibility of generating what can be considered complementary shots for content you may already have, or of creating a video from a series of images. Adobe confirms that it is possible to change the original motion or intent of the shot in some cases. This would be helpful for visualising how a piece of video would look if reshot, or if generated.
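Extending the same hypothetical sketch, an image-to-video request would carry the reference image alongside the prompt. As before, the endpoint and field names are assumptions for illustration, not a documented Adobe API; only the base64 encoding step reflects a common way to place binary media in a JSON payload.

```python
import base64
import requests

# Same hypothetical endpoint and credential as the earlier sketch.
API_URL = "https://firefly.example.adobe.com/v1/video/generate"
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"

# Encode a local reference image so it can travel inside a JSON payload.
with open("reference_shot.jpg", "rb") as f:
    reference_b64 = base64.b64encode(f.read()).decode("ascii")

payload = {
    "prompt": "the same scene at dusk, camera dollying slowly forward",
    "referenceImage": reference_b64,  # hypothetical field name
    "aspectRatio": "16:9",
    "framesPerSecond": 24,
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=payload,
    timeout=120,
)
response.raise_for_status()
print(response.json().get("videoUrl"))
```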

The third is integration within apps such as Premiere Pro, as Generative Extend. A prompt to extend a clip you already have will generate video and audio, including background sound, that match the original footage.

Adobe insists this is designed with failsafes for commercial use. “We only train the Firefly model on Adobe’s Stock data and we don’t train on customer data,” details Costin, before adding, “we don’t train Firefly on data straight from the internet.” That’s one advantage Adobe has.

When the Content Authenticity web app was announced a few days ago, Adobe confirmed to HT that, as part of the privacy measures in place, if a creator selects the opt-out option, their content will not be submitted to the Adobe Stock library. Only content that is part of Adobe Stock, be it images or other media, is used to train Firefly models. Content Credentials, the identifiers that record the author, creation specifics and editing history of every image or generation, will be attached to every generation from the Firefly Video model.
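Content Credentials follow the open C2PA standard, so the provenance manifest embedded in a generated file can in principle be inspected with the open-source c2patool utility. The snippet below is a minimal sketch assuming c2patool is installed and on the PATH, that the downloaded clip carries an embedded manifest, and that the tool's default output is JSON; the file name is a placeholder.

```python
import json
import subprocess

# Inspect the C2PA manifest embedded in a generated clip.
# Assumes the open-source `c2patool` CLI is installed and on PATH,
# and that it prints the manifest report to stdout as JSON.
result = subprocess.run(
    ["c2patool", "generated_clip.mp4"],  # placeholder file name
    capture_output=True,
    text=True,
    check=True,
)

manifest = json.loads(result.stdout)
# The manifest records the author, creation specifics and editing history.
print(json.dumps(manifest, indent=2))
```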

“We respect our responsible AI principles: accountability, responsibility and transparency. On the topic of accountability, we have a mechanism where we get feedback from our customers and we act on that feedback. This is what helps us design and improve the guardrails that minimise bias and harm, and the potential of deepfakes from our model,” Costin explains. “Our models are trustable and usable in real workflows,” he adds.

The Adobe MAX 2024 keynote becomes a pivotal bookmark as the generative AI video chapter is being written. By releasing the Firefly Video model before OpenAI and Meta could release their own video models, and with Runway’s Gen-3 Alpha, released over the summer, in its sights, Adobe insists speed can be balanced with safeguards.
