{"id":11736,"date":"2024-03-19T12:39:16","date_gmt":"2024-03-19T10:39:16","guid":{"rendered":"https:\/\/forklog.com\/en\/nvidia-unveils-new-technologies-at-gtc-2024\/"},"modified":"2024-03-19T12:39:16","modified_gmt":"2024-03-19T10:39:16","slug":"nvidia-unveils-new-technologies-at-gtc-2024","status":"publish","type":"post","link":"https:\/\/forklog.com\/en\/nvidia-unveils-new-technologies-at-gtc-2024\/","title":{"rendered":"Nvidia Unveils New Technologies at GTC 2024"},"content":{"rendered":"<p>Nvidia has announced a new generation of AI chips and software designed for large language models. The <a href=\"https:\/\/www.youtube.com\/live\/Y2F8yisiS6E?si=w8Dm2T7Us-veeHw_\">presentation<\/a> took place at the GTC 2024 conference.<\/p>\n<p><!--more--><\/p>\n<p>According to the company&#8217;s CEO, Jensen Huang, Nvidia updates its <span data-descr=\"graphics processing unit\" class=\"old_tooltip\">GPU<\/span> architecture every two years to enhance performance. Many previously released AI models have been trained on the Hopper architecture, which is used in chips like the H100, announced in 2022.<\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p>\n<cite>\u201cHopper is fantastic, but we need more powerful GPUs,\u201d stated the company&#8217;s CEO.<\/cite><\/p><\/blockquote>\n<p>The new generation of AI graphics processors is named Blackwell. The first model, GB200, will be released later this year.<\/p>\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" src=\"https:\/\/forklog.com\/wp-content\/uploads\/image3-62.webp\" alt=\"image3-62\" class=\"wp-image-228643\"\/><figcaption class=\"wp-element-caption\">Blackwell chip. Source: Nvidia.<\/figcaption><\/figure>\n<p>Nvidia claims the new processors offer a significant performance boost for companies working with neural networks\u201420 petaflops with the new chips compared to 4 petaflops with the H100. 
According to Huang, the additional computational power will enable companies to train larger and more complex models.<\/p>\n<p>For instance, training a GPT-4-scale model in 90 days required 8,000 Hopper GPUs drawing 15 MW of power; according to Nvidia, the same run on Blackwell would require only 2,000 GPUs and 4 MW.<\/p>\n<p>It was also revealed that the GB200 will have 192 GB of memory with a bandwidth of 8 TB\/s.<\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p>\n<cite>\u201cBlackwell is not just a chip, it&#8217;s a whole platform,\u201d noted Huang.<\/cite><\/p><\/blockquote>\n<p>The graphics processor combines two dies, manufactured separately by TSMC, into a single chip containing 208 billion transistors in total.<\/p>\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"590\" src=\"https:\/\/forklog.com\/wp-content\/uploads\/image2-90-1024x590.jpg\" alt=\"image2-90\" class=\"wp-image-228644\" srcset=\"https:\/\/forklog.com\/wp-content\/uploads\/image2-90-1024x590.jpg 1024w, https:\/\/forklog.com\/wp-content\/uploads\/image2-90-300x173.jpg 300w, https:\/\/forklog.com\/wp-content\/uploads\/image2-90-768x443.jpg 768w, https:\/\/forklog.com\/wp-content\/uploads\/image2-90-1536x885.jpg 1536w, https:\/\/forklog.com\/wp-content\/uploads\/image2-90.jpg 1999w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption class=\"wp-element-caption\">Nvidia Grace Blackwell Superchip. Source: Nvidia.<\/figcaption><\/figure>\n<p>The new GPUs will also be available as a server called GB200 NVLink 2, which combines 72 Blackwell GPUs and offers 30 TB of total memory.<\/p>\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" src=\"https:\/\/forklog.com\/wp-content\/uploads\/image1-138.webp\" alt=\"image1-138\" class=\"wp-image-228648\"\/><figcaption class=\"wp-element-caption\">GB200 NVLink 2 server. 
Source: Nvidia.<\/figcaption><\/figure>\n<p>The company will sell access to the GB200 through cloud services. Nvidia announced that Amazon Web Services will build a server cluster containing 20,000 GB200 chips.<\/p>\n<p>Huang also introduced a new product, Nvidia Inference Microservice (NIM), added to the company&#8217;s enterprise software subscription. NIM is designed to simplify running AI inference on older Nvidia GPUs.<\/p>\n<p>Back in January 2024, Mark Zuckerberg <a href=\"https:\/\/forklog.com\/en\/news\/meta-to-acquire-350000-nvidia-chips-for-ai-expansion\">promised<\/a> to purchase Nvidia chips for Meta, as the company&#8217;s \u201cfuture roadmap\u201d in artificial intelligence requires a \u201cvast computational infrastructure.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Nvidia has announced a new generation of AI chips and software designed for large language models. The presentation took place at the GTC 2024 conference.<\/p>\n","protected":false},"author":1,"featured_media":11735,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"select":"","news_style_id":"","cryptorium_level":"","_short_excerpt_text":"","creation_source":"","_metatest_mainpost_news_update":false,"footnotes":""},"categories":[3],"tags":[438,1295,1294],"class_list":["post-11736","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news-and-analysis","tag-artificial-intelligence","tag-chips","tag-nvidia"],"aioseo_notices":[],"amp_enabled":true,"views":"38","promo_type":"","layout_type":"","short_excerpt":"","is_update":"","_links":{"self":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/11736","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[
{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/comments?post=11736"}],"version-history":[{"count":0,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/11736\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/media\/11735"}],"wp:attachment":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/media?parent=11736"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/categories?post=11736"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/tags?post=11736"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}