
Nvidia Unveils AI Model for Autonomous Driving
Nvidia unveils Alpamayo-R1, an AI model for autonomous driving, at NeurIPS AI conference.
At the NeurIPS AI conference in San Diego, California, Nvidia announced Alpamayo-R1, an open reasoning vision-language model designed for autonomous driving.
Such neural networks can process text and images, enabling vehicles to “see” their surroundings and make decisions based on the information received.
The new tool builds on Cosmos-Reason, Nvidia's "reasoning" model. The company released the Cosmos model family in January and introduced additional models in August.
“Previous versions of autonomous driving models struggled in complex situations — at intersections with multiple crosswalks, ahead of lane closures, or around a car double-parked in a bike lane. Reasoning gives autonomous vehicles the common sense needed to drive at a human level,” the company noted.
Technologies like Alpamayo-R1 are crucial for companies aiming to reach Level 4 autonomous driving, according to Nvidia’s blog.
The model considers all possible trajectories and scenarios, then uses contextual data to choose the optimal route.
The company hopes the new tool will provide autonomous vehicles with “common sense,” allowing them to make complex driving decisions more effectively.
The model is available on GitHub and Hugging Face. Alongside it, the company has added step-by-step guides, resources for inference, and post-training workflows. The entire toolkit is called the Cosmos Cookbook.
The materials are intended to help developers better utilize and train neural networks for specific tasks.
Cosmos-Based Solutions
Nvidia reported “virtually limitless possibilities” for Cosmos-based applications. Among recent examples, the company mentioned:
- LidarGen: the world’s first model for generating lidar data in autonomous vehicle simulations;
- Cosmos Policy: a framework for converting large pre-trained video models into reliable robot policies, the sets of rules that determine a robot’s behavior;
- ProtoMotions3: a solution for training robots on realistic scenarios.
Nvidia is promoting physical artificial intelligence as a new direction for its AI processors. The company’s CEO, Jensen Huang, has repeatedly emphasized that this field will be the next wave of AI development.
The chipmaker is betting on the robotics sector. In August, it released the Jetson AGX Thor module for $3,499, a processor the company calls the “brain of a robot.”
In October, Huang stated that artificial intelligence has reached a “success spiral.” According to him, significant improvements in neural networks lead to increased investment in the technology, further “boosting” the field.
In the third quarter, Nvidia’s revenue reached $57 billion, a 62% increase compared to the same period last year.