Meta’s Movie Gen looks like a huge leap forward for AI video (but you can’t use it yet)


At this point, you probably either love the idea of making realistic videos with generative AI, or you think it’s a morally bankrupt endeavor that devalues artists and ushers in a disastrous deepfake spiral from which we’ll never escape. It’s hard to find a middle ground. Meta isn’t going to change your mind with Movie Gen, its latest AI video generation model, but regardless of what you think of AI media creation, it could be a significant milestone for the industry.

Movie Gen can create realistic videos with music and sound effects at up to 1080p at 16 fps or 24 fps (upscaled from 768 x 768 pixels). If you upload a photo, it can also create personalized videos featuring you, and best of all, it appears to make editing videos with simple text commands easy. Notably, it can edit ordinary, non-AI videos this way too. It’s easy to imagine how that could be useful for cleaning up something you’ve shot on your phone for Instagram. Movie Gen is purely a research project right now — Meta won’t be making it public — so we’ve got some time to think about what it all means.

The company describes Movie Gen as the “third wave” of its generative AI research, following the launch of media creation tools like Make-A-Scene and, more recently, its Llama foundation models. It’s powered by a 30-billion-parameter transformer model that can generate 16-second videos at 16 fps or 10-second videos at 24 fps. It also has a 13-billion-parameter audio model that can generate 45 seconds of 48kHz content such as “ambient audio, sound effects (Foley) and instrumental background music” that syncs with the video. The model doesn’t yet support synchronized speech “due to our design choices,” the Movie Gen team wrote in its research paper.
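To put those clip specs in concrete terms, here is a quick back-of-the-envelope sketch in Python using the numbers quoted above (16-second clips at 16 fps, 10-second clips at 24 fps, 45 seconds of 48kHz audio). It’s purely illustrative arithmetic, not code or figures from Meta’s paper.

```python
# Rough scale of a single Movie Gen clip, using the specs quoted above.
# Illustrative arithmetic only -- not taken from Meta's research paper.

def video_frames(seconds: float, fps: int) -> int:
    """Total frames needed for one clip of the given length and frame rate."""
    return int(seconds * fps)

def audio_samples(seconds: float, sample_rate_hz: int) -> int:
    """Raw audio samples in a clip at the given sample rate."""
    return int(seconds * sample_rate_hz)

if __name__ == "__main__":
    print(video_frames(16, 16))       # 256 frames for a 16-second clip at 16 fps
    print(video_frames(10, 24))       # 240 frames for a 10-second clip at 24 fps
    print(audio_samples(45, 48_000))  # 2,160,000 samples for 45 s of 48 kHz audio
```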


Meta says Movie Gen was initially trained on “a combination of licensed and publicly available datasets,” including about 100 million videos, one billion images and one million hours of audio. The company’s language is vague when it comes to the sources — Meta has already admitted to training its AI models on data from the account of every Australian user, and it’s even less clear what the company uses from outside its own products.

As for the actual videos, Movie Gen certainly looks impressive at first glance. Meta says that in its A/B testing, people preferred its results over those from OpenAI’s Sora and Runway’s Gen-3 models. Movie Gen’s AI-generated people look surprisingly realistic, without many of the telltale signs of AI video (especially the uncanny eyes and fingers).


“While there are many interesting use cases for these foundational models, it’s important to note that generative AI is not replacing the work of artists and animators,” the Movie Gen team wrote in a blog post. “We’re sharing this research because we believe in the power of this technology to help people express themselves in new ways and provide opportunities for people who might not otherwise have them.”

It’s still unclear what mainstream users will do with generative AI video. Will we fill our feeds with AI video instead of taking our own photos and videos? Or will Movie Gen evolve into custom tools that can help us sharpen our own content? While we can already easily remove objects from the background of photos on smartphones and computers, more sophisticated AI video editing seems like the next logical step.


