{"id":91682,"date":"2025-12-02T17:57:08","date_gmt":"2025-12-02T14:57:08","guid":{"rendered":"https:\/\/forklog.com\/en\/?p=91682"},"modified":"2025-12-02T18:00:16","modified_gmt":"2025-12-02T15:00:16","slug":"nvidia-unveils-ai-model-for-autonomous-driving","status":"publish","type":"post","link":"https:\/\/forklog.com\/en\/nvidia-unveils-ai-model-for-autonomous-driving\/","title":{"rendered":"Nvidia Unveils AI Model for Autonomous Driving"},"content":{"rendered":"<p>At the NeurIPS AI conference in San Diego, California, Nvidia <a href=\"https:\/\/blogs.nvidia.com\/blog\/neurips-open-source-digital-physical-ai\/\">announced<\/a> Alpamayo-R1, an open visual reasoning language model designed for autonomous driving.<\/p>\n<p>Such neural networks can process both text and images, enabling vehicles to &#8220;see&#8221; their surroundings and make decisions based on what they perceive.<\/p>\n<p>The new tool is built on the Cosmos-Reason reasoning model. Nvidia released the Cosmos model family in January and <a href=\"https:\/\/forklog.com\/en\/news\/nvidia-unveils-ai-solutions-suite-for-robotics\">introduced<\/a> additional solutions in August.<\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>&#8220;Previous versions of autonomous driving models struggled in complex situations\u2014at intersections with multiple crosswalks, ahead of lane closures, or around a car double-parked in a bike lane. 
Reasoning gives autonomous vehicles the common sense needed to drive at a human level,&#8221; the company noted.<\/p>\n<\/blockquote>\n<p>Technologies like Alpamayo-R1 are crucial for companies aiming to achieve Level 4 autonomous driving, according to Nvidia&#8217;s blog.<\/p>\n<p>The model considers possible trajectories and scenarios, then uses contextual data to choose the optimal route.<\/p>\n<p>The company hopes the new tool will provide autonomous vehicles with &#8220;common sense,&#8221; allowing them to make complex driving decisions more effectively.<\/p>\n<p>The model is available on GitHub and Hugging Face. Alongside it, the company has added step-by-step guides, resources for <span data-descr=\"the process of generating answers after training a model\" class=\"old_tooltip\">inference<\/span>, and post-training workflows. The entire toolkit is called the Cosmos Cookbook.<\/p>\n<p>The materials are intended to help developers train and apply neural networks to specific tasks more effectively.<\/p>\n<h2 class=\"wp-block-heading\">Cosmos-Based Solutions<\/h2>\n<p>Nvidia reported &#8220;virtually limitless possibilities&#8221; for Cosmos-based applications. Among recent examples, the company mentioned:<\/p>\n<ul class=\"wp-block-list\">\n<li>LidarGen\u2014the world&#8217;s first model for generating lidar data in autonomous vehicle simulations;<\/li>\n<li>Cosmos Policy\u2014a framework for converting large pre-trained video models into reliable robot policies (sets of rules that determine a robot&#8217;s behavior);<\/li>\n<li>ProtoMotions3\u2014a solution for training robots using realistic scenarios.<\/li>\n<\/ul>\n<p>Nvidia is promoting physical artificial intelligence as a new direction for its AI processors. The company&#8217;s CEO, Jensen Huang, has repeatedly emphasized that this field will be the next wave of AI development.<\/p>\n<p>The chipmaker is betting on the robotics sector. 
In August, it <a href=\"https:\/\/forklog.com\/en\/news\/nvidia-launches-3499-brain-for-robots\">released<\/a> the new Jetson AGX Thor module for $3,499. The company calls the processor the &#8220;brain of a robot.&#8221;<\/p>\n<p>In October, Huang <a href=\"https:\/\/forklog.com\/en\/news\/ai-enters-a-virtuous-cycle-says-nvidia-ceo\">stated<\/a> that artificial intelligence has entered a &#8220;virtuous cycle.&#8221; According to him, significant improvements in neural networks lead to increased investment in the technology, further boosting the field.<\/p>\n<p>In the third quarter, Nvidia&#8217;s revenue <a href=\"https:\/\/forklog.com\/en\/news\/nvidias-quarterly-results-dispel-talk-of-an-ai-bubble\">reached<\/a> $57 billion, a 62% increase compared to the same period last year.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>At the NeurIPS AI conference in San Diego, California, Nvidia announced Alpamayo-R1, an open visual reasoning language model.<\/p>\n","protected":false},"author":1,"featured_media":91683,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"select":"1","news_style_id":"1","cryptorium_level":"","_short_excerpt_text":"Nvidia unveils Alpamayo-R1, an AI model for autonomous driving, at NeurIPS AI conference.","creation_source":"","_metatest_mainpost_news_update":false,"footnotes":""},"categories":[3],"tags":[438,1294],"class_list":["post-91682","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news-and-analysis","tag-artificial-intelligence","tag-nvidia"],"aioseo_notices":[],"amp_enabled":true,"views":"160","promo_type":"1","layout_type":"1","short_excerpt":"Nvidia unveils Alpamayo-R1, an AI model for autonomous driving, at NeurIPS AI 
conference.","is_update":"","_links":{"self":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/91682","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/comments?post=91682"}],"version-history":[{"count":1,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/91682\/revisions"}],"predecessor-version":[{"id":91684,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/91682\/revisions\/91684"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/media\/91683"}],"wp:attachment":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/media?parent=91682"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/categories?post=91682"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/tags?post=91682"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}