Adobe previews AI video tools that arrive later this year


On Wednesday, Adobe previewed Firefly AI video creation tools that will arrive in beta later this year. Like many things related to artificial intelligence, the examples are equal parts fascinating and terrifying, as the company slowly integrates tools built to automate much of the creative work. In AI salesmanship reflecting that found elsewhere in the technology industry, Adobe touts all of this as technology that “helps eliminate post-production fatigue.”

Adobe describes its new Firefly-powered Text-to-Video, Generative Extend (which will be available in Premiere Pro), and Image-to-Video AI tools as helping editors with tasks such as “filling gaps in footage, removing unwanted objects from a scene, smoothing jump cut transitions, and finding the perfect b-roll.” The company says the tools will give video editors “more time to explore new creative ideas, the part of the job they love.” (To take Adobe at face value, you have to believe that once the industry fully embraces these AI tools, employers won’t simply increase their output demands from editors. Or pay less. Or hire fewer people. But I digress.)

Firefly Text-to-Video lets you create AI-generated videos from—you guessed it—text prompts. But it also includes tools for controlling camera angle, movement, and zoom. It can take a shot with gaps in its timeline and fill them in. It can even use a still reference image and turn it into a convincing AI video. Adobe says its video models excel at “videos of the natural world” and help you create quick establishing shots or b-roll without needing a big budget.

For an example of how convincing the technology looks, check out the Adobe samples in the promotional video:

While these are examples chosen by a company trying to sell you on its products, their quality is undeniable. Detailed text prompts produce specific shots: a flaming volcano, a dog relaxing in a field of wildflowers, or miniature woolly wolves having a dance party (demonstrating that the tools can handle the fantastical, too). If these results are emblematic of the tools’ typical output (hardly a guarantee), then TV, film, and commercial production will soon have some powerful shortcuts at their disposal—for better or for worse.

Meanwhile, Adobe’s Image-to-Video example starts with an uploaded galaxy image. A text prompt instructs it to make a video that pulls away from the star system to reveal the inside of a human eye. The company’s Generative Extend demo shows a pair of people walking along a forest stream; an AI-generated segment fills a gap in the footage. (It was so believable that I couldn’t tell which part of the clip was generated by the AI.)

Still from an Adobe video showing a text prompt creating a moody shot of a man on a rainy street.

Adobe

Reuters reports the tool will only create five-second clips, at least initially. To Adobe’s credit, it says its Firefly Video Model is designed to be commercially safe and only trains on content the company has permission to use. “We only train them on the Adobe Stock database of content that contains 400 million images, illustrations, and videos that are curated to not contain intellectual property, trademarks or recognizable characters,” Alexandru Costin, Adobe’s vice president of generative AI, told Reuters. The company also stressed that it never trains on users’ work. However, whether it puts its users out of work is another matter entirely.

Adobe says its new video models will arrive in beta later this year. You can sign up for the waitlist to test them.



