Runway Unveils Gen-3 Alpha AI Model
Runway has unveiled the alpha version of its Gen-3 AI model, which generates videos from text prompts and static images.
According to the announcement, the neural network excels at creating expressive human characters with a wide range of movements, gestures, and emotions. Gen-3 Alpha is also trained to accurately identify key frames in a video and create smooth transitions between them.
“Gen-3 Alpha is the first model in an upcoming series, trained on new infrastructure designed for large-scale multimodal learning. It represents a significant improvement in accuracy, consistency, and motion compared to Gen-2, and is a step towards creating ‘World Models’,” stated Runway.
Gen-3 Alpha can create high-resolution videos lasting five or ten seconds, with generation times of 45 and 90 seconds, respectively. The company’s co-founder and CTO, Anastasis Germanidis, shared this in an interview with TechCrunch.
There is no exact timeline for Gen-3’s public release yet. The alpha version “will soon be available in Runway’s product lineup, supporting all existing modes (text-to-video, image-to-video, and video-to-video) and some new ones,” noted Germanidis.
In February, OpenAI introduced the generative AI model Sora for converting text into video. In May, screenwriter and director Paul Trillo generated a music video using it.
Meanwhile, Google DeepMind is developing AI technology that generates soundtracks for videos.