AI startup Runway has launched its new video model, Gen-4.5, which outperforms competing models in independent testing.
Introducing our new frontier video model, Runway Gen-4.5. Previously known as Whisper Thunder (aka David).
Gen-4.5 is state-of-the-art and sets a new standard for video generation motion quality, prompt adherence and visual fidelity.
Learn more below. pic.twitter.com/2J945S4bQt
— Runway (@runwayml) December 1, 2025
The tool generates high-definition videos from text prompts describing the desired movement and action. The company emphasized that the model has a strong grasp of physics, human and camera movement, and cause-and-effect relationships.
“Gen-4.5 is a cutting-edge technology that sets a new standard for motion quality in video generation, prompt adherence, and visual fidelity,” the developers stated.
Gen-4.5 secured the top spot in the Video Arena leaderboard maintained by the independent firm Artificial Analysis. The ranking is built from blind comparisons: people view the outputs of two models and vote for the better one, without knowing which company is behind each AI.
Google’s Veo 3 took second place in Video Arena, while OpenAI’s Sora 2 Pro ranked seventh.
Runway Gen-4.5 represents significant advancements in both pre-training data efficiency and post-training techniques for video models and serves as our new foundation model for world modeling.
Gen-4.5 scored 1,247 Elo points in the Artificial Analysis Text to Video leaderboard,… pic.twitter.com/PikKIIulT8
— Runway (@runwayml) December 1, 2025
“We managed to surpass trillion-dollar corporations with a team of 100 people,” emphasized Runway CEO Cristóbal Valenzuela.
The model “excels” at understanding and executing complex sequential instructions. Users can specify detailed camera choreography, intricate scene compositions, precise timing of events, and subtle atmospheric changes, all within a single prompt.
The model excels at understanding and executing complex, sequenced instructions. You can specify detailed camera choreography, intricate scene compositions, precise timing of events and subtle atmospheric changes all within a single prompt.
We remain committed to making highly… pic.twitter.com/CPQeeth3YJ
— Runway (@runwayml) December 1, 2025
“Gen-4.5 achieves an unprecedented level of physical and visual accuracy. Objects move with realistic mass and momentum. Surfaces behave as they do in the real world. The laws of physics can be followed or broken, depending on your intent,” the developers highlighted.
The neural network is not without its flaws. There can be issues with cause-and-effect relationships and object consistency. The team is working to resolve these.
The model is being gradually rolled out and will be fully available by the end of the week through the company’s platform, API, and some partners.
Nvidia Cards
Gen-4.5 was developed on Nvidia graphics cards, which were used at every stage from initial research to pre-training. The company “closely collaborated” with the American chipmaker to “push the boundaries of what is possible in optimization, training efficiency, and inference speed.”
The model also runs on Nvidia’s Hopper and Blackwell series GPUs, “ensuring optimized performance without compromising quality.”
In October, OpenAI and tech giant Google introduced updated versions of their own AI models for video generation.
