How Make-A-Video works: Meta's new AI system for generating videos from text that everyone is talking about

Make-A-Video creates a series of images and places them one after another, like a GIF, building on Meta's advances in artificial intelligence (AI) and machine learning.

The tool generates still images and gives them movement in response to a text prompt. It works through a combination of two technologies.

First, the AI uses image diffusion: a technique that creates images by progressively removing noise from random input, guided by the text prompt. Through this method, the tool learns what realistic images look like.
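The core idea of diffusion, adding noise to an image and then learning to remove it, can be sketched in a few lines. This is a toy illustration, not Meta's actual code; the noise schedule and the "perfect" noise predictor are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": a 4x4 array of pixel values in [0, 1].
x0 = rng.random((4, 4))

# Noise schedule: alpha_bar[t] shrinks toward 0 as t grows,
# so more noise is mixed in at later timesteps.
T = 100
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)

def add_noise(x0, t, noise):
    """Forward diffusion: mix the clean image with Gaussian noise."""
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise

def estimate_x0(xt, t, predicted_noise):
    """Reverse step: recover an estimate of the clean image from a
    noisy one, given a model's prediction of the noise it contains."""
    return (xt - np.sqrt(1.0 - alpha_bar[t]) * predicted_noise) / np.sqrt(alpha_bar[t])

t = 50
noise = rng.standard_normal(x0.shape)
xt = add_noise(x0, t, noise)

# With a perfect noise predictor, the clean image is recovered exactly;
# a real diffusion model learns to approximate this prediction.
x0_hat = estimate_x0(xt, t, noise)
print(np.allclose(x0_hat, x0))  # True
```

In a real system, `predicted_noise` comes from a neural network conditioned on the text prompt, and the reverse process runs over many small steps rather than one.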


Second, unsupervised training lets the model learn from video footage without human labels; this is what allows it to generate a coherent sequence of frames.
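Turning a handful of keyframes into a smooth sequence can be illustrated with simple linear blending. Make-A-Video's frame network is far more sophisticated (it predicts motion rather than blending pixels), so treat this only as a sketch of the idea of filling in frames between keyframes; all names here are illustrative.

```python
import numpy as np

def interpolate_frames(frame_a, frame_b, n_between):
    """Produce intermediate frames by linear blending of two keyframes.
    Returns the full sequence: frame_a, the in-between frames, frame_b."""
    weights = np.linspace(0.0, 1.0, n_between + 2)
    return [(1 - w) * frame_a + w * frame_b for w in weights]

# Two toy 4x4 keyframes: all-black and all-white.
a = np.zeros((4, 4))
b = np.ones((4, 4))

sequence = interpolate_frames(a, b, n_between=3)
print(len(sequence))  # 5: two keyframes plus three in-between frames
```

The middle frame of this sequence is a uniform 0.5 gray; a learned model would instead predict how objects actually move between the two keyframes.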

Make-A-Video can also take a pre-existing image or video and generate different variations of it.

The tool generates 16 frames at a resolution of 64 x 64 pixels, and a second AI model upscales them to 768 x 768 pixels. The two models work together, trained jointly to bring the content up to a reasonable resolution.
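The size arithmetic of that upscaling step is easy to check: 768 / 64 = 12, so each pixel becomes a 12 x 12 block. The sketch below uses nearest-neighbour repetition, whereas a real super-resolution model predicts the missing detail; the shapes, however, are the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# 16 low-resolution frames of 64 x 64 pixels, as in the article.
frames = rng.random((16, 64, 64))

def upscale_nearest(frames, factor):
    """Nearest-neighbour upscaling: repeat each pixel `factor` times
    along both spatial axes of every frame."""
    return np.repeat(np.repeat(frames, factor, axis=1), factor, axis=2)

factor = 768 // 64  # each pixel becomes a 12 x 12 block
high_res = upscale_nearest(frames, factor)
print(high_res.shape)  # (16, 768, 768)
```

The jump from 64 x 64 to 768 x 768 multiplies the pixel count per frame by 144, which is why this step is handled by a separate model rather than generating at full resolution directly.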



TechCrunch notes that the results are impressive, but the generated videos can look uncanny: objects sometimes move strangely, and bodies merge into one another.

The service is not yet public, and an official release date has not been announced.
