{"id":69878,"date":"2022-11-10T16:00:01","date_gmt":"2022-11-10T14:00:01","guid":{"rendered":"https:\/\/forklog.com\/en\/?p=69878"},"modified":"2025-09-07T22:38:18","modified_gmt":"2025-09-07T19:38:18","slug":"intel-unveils-max-series-ai-chips-for-supercomputers","status":"publish","type":"post","link":"https:\/\/forklog.com\/en\/intel-unveils-max-series-ai-chips-for-supercomputers\/","title":{"rendered":"Intel unveils Max-series AI chips for supercomputers"},"content":{"rendered":"<p>In the run-up to Supercomputing 22, Intel <a href=\\\"https:\/\/www.intel.com\/content\/www\/us\/en\/newsroom\/news\/introducing-intel-max-series-product-family.html\\\" target=\\\"_blank\\\" rel=\\\"noreferrer noopener nofollow\\\">unveiled<\/a> Xeon CPU Max server processors and Data Center GPU Max accelerators for high-performance computing and AI workloads.<\/p>\n<figure class=\\\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\\\">\n<div class=\\\"wp-block-embed__wrapper\\\">\n<iframe loading=\\\"lazy\\\" title=\\\"Intel Supercomputing 2022 Keynote: Maximize Possibilities for High Performance Computing &#038; AI\\\" width=\\\"500\\\" height=\\\"281\\\" src=\\\"https:\/\/www.youtube.com\/embed\/vdOWpCaUxHs?feature=oembed\\\" frameborder=\\\"0\\\" allow=\\\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\\\" referrerpolicy=\\\"strict-origin-when-cross-origin\\\" allowfullscreen><\/iframe>\n<\/div>\n<\/figure>\n<p>The company will integrate the new devices into the Aurora supercomputer, due to launch in 2022. The system will help scientists at <span data-descr=\\\"the U.S. 
Department of Energy's national laboratory\\\" class=\\\"old_tooltip\\\">Argonne National Laboratory<\/span> conduct research in low-carbon technologies, subatomic particles, medicine and cosmology.<\/p>\n<p>The Xeon CPU Max chip is known by the codename Sapphire Rapids HBM. According to the company, it is the first and only x86-based CPU equipped with high-bandwidth HBM2e memory on the package.<\/p>\n<p>The processor includes up to 56 compute cores with support for 112 virtual threads and has a <span data-descr=\\\"thermal design power\\\" class=\\\"old_tooltip\\\">TDP<\/span> of 350 W. It is intended for high-performance server systems.<\/p>\n<p>Intel Xeon CPU Max uses EMIB (embedded multi-die interconnect bridge) packaging technology. It is equipped with 64 GB of on-package high-bandwidth memory and supports PCIe 5.0 and CXL 1.1. The chip\u2019s total bandwidth is around 1 TB\/s.<\/p>\n<p>The processor provides more than 1 GB of HBM2e memory per core, enough for most common HPC workloads.<\/p>\n<p>The company said that Xeon Max consumes 68% less power than AMD Milan-X at the same performance.<\/p>\n<p>According to the company, in certain workloads Xeon Max delivers up to 3.5 times the performance of the Intel Xeon 8380 and AMD EPYC 7773X.<\/p>\n<p>The processor was also compared with Nvidia&#8217;s A100 chip in MLPerf DeepCAM, a deep learning benchmark for scientific modeling on AI-powered supercomputers. 
The new chip proved 1.2 times faster than its competitor.<\/p>\n<p>Intel Xeon CPU Max will hit the market in January 2023.<\/p>\n<figure class=\\\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-1 is-layout-flex wp-block-gallery-is-layout-flex\\\">\n<figure class=\\\"wp-block-image size-large\\\"><img loading=\\\"lazy\\\" decoding=\\\"async\\\" width=\\\"750\\\" height=\\\"422\\\" data-id=\\\"190451\\\" src=\\\"https:\/\/forklog.com\/wp-content\/uploads\/Intel-SC-22-Press1.jpg\\\" alt=\\\"Intel unveils Max-series AI chips for supercomputers\\\" class=\\\"wp-image-190451\\\" srcset=\\\"https:\/\/forklog.com\/wp-content\/uploads\/Intel-SC-22-Press1.jpg 750w, https:\/\/forklog.com\/wp-content\/uploads\/Intel-SC-22-Press1-300x169.jpg 300w\\\" sizes=\\\"auto, (max-width: 750px) 100vw, 750px\\\" \/><\/figure>\n<figure class=\\\"wp-block-image size-large\\\"><img loading=\\\"lazy\\\" decoding=\\\"async\\\" width=\\\"1024\\\" height=\\\"576\\\" data-id=\\\"190455\\\" src=\\\"https:\/\/forklog.com\/wp-content\/uploads\/Intel-SC-22-Press3-1024x576.jpg\\\" alt=\\\"Intel unveils Max-series AI chips for supercomputers\\\" class=\\\"wp-image-190455\\\" srcset=\\\"https:\/\/forklog.com\/wp-content\/uploads\/Intel-SC-22-Press3-1024x576.jpg 1024w, https:\/\/forklog.com\/wp-content\/uploads\/Intel-SC-22-Press3-300x169.jpg 300w, https:\/\/forklog.com\/wp-content\/uploads\/Intel-SC-22-Press3-768x432.jpg 768w, https:\/\/forklog.com\/wp-content\/uploads\/Intel-SC-22-Press3.jpg 1456w\\\" sizes=\\\"auto, (max-width: 1024px) 100vw, 1024px\\\" \/><\/figure>\n<figure class=\\\"wp-block-image size-large\\\"><img loading=\\\"lazy\\\" decoding=\\\"async\\\" width=\\\"1024\\\" height=\\\"576\\\" data-id=\\\"190454\\\" src=\\\"https:\/\/forklog.com\/wp-content\/uploads\/Intel-SC-22-Press8-1024x576.jpg\\\" alt=\\\"Intel unveils Max-series AI chips for supercomputers\\\" class=\\\"wp-image-190454\\\" 
srcset=\\\"https:\/\/forklog.com\/wp-content\/uploads\/Intel-SC-22-Press8-1024x576.jpg 1024w, https:\/\/forklog.com\/wp-content\/uploads\/Intel-SC-22-Press8-300x169.jpg 300w, https:\/\/forklog.com\/wp-content\/uploads\/Intel-SC-22-Press8-768x432.jpg 768w, https:\/\/forklog.com\/wp-content\/uploads\/Intel-SC-22-Press8.jpg 1456w\\\" sizes=\\\"auto, (max-width: 1024px) 100vw, 1024px\\\" \/><\/figure><figcaption class=\\\"blocks-gallery-caption\\\">Intel Xeon CPU Max Series chip. Data: Intel.<\/figcaption><\/figure>\n<p>The Data Center GPU Max graphics processor is known by the codename Ponte Vecchio.<\/p>\n<figure class=\\\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\\\">\n<div class=\\\"wp-block-embed__wrapper\\\">\n<iframe loading=\\\"lazy\\\" title=\\\"Introducing: Intel Data Center GPU Max Series\\\" width=\\\"500\\\" height=\\\"281\\\" src=\\\"https:\/\/www.youtube.com\/embed\/PGx55C0h6gQ?feature=oembed\\\" frameborder=\\\"0\\\" allow=\\\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\\\" referrerpolicy=\\\"strict-origin-when-cross-origin\\\" allowfullscreen><\/iframe>\n<\/div>\n<\/figure>\n<p>It includes 128 Xe cores and 128 RT cores, making it the only server accelerator with native hardware-accelerated ray tracing support.<\/p>\n<figure class=\\\"wp-block-image size-full\\\"><img loading=\\\"lazy\\\" decoding=\\\"async\\\" width=\\\"750\\\" height=\\\"422\\\" src=\\\"https:\/\/forklog.com\/wp-content\/uploads\/sm.Intel-SC-22-Press-deck-final2.750.jpg\\\" alt=\\\"Intel unveils Max-series AI chips for supercomputers\\\" class=\\\"wp-image-190457\\\" srcset=\\\"https:\/\/forklog.com\/wp-content\/uploads\/sm.Intel-SC-22-Press-deck-final2.750.jpg 750w, https:\/\/forklog.com\/wp-content\/uploads\/sm.Intel-SC-22-Press-deck-final2.750-300x169.jpg 300w\\\" sizes=\\\"auto, (max-width: 750px) 100vw, 750px\\\" \/><figcaption>Intel Data Center GPU 
Max Series. Data: Intel.<\/figcaption><\/figure>\n<p>The processor has up to 408 MB of L2 cache and up to 64 MB of L1 cache.<\/p>\n<p>The system comprises more than 100 billion transistors across 47 chiplets built using multiple processes, including Intel 7 and TSMC N5. They are interconnected with EMIB and Foveros packaging technology.<\/p>\n<p>The company will release the chips in several form factors tailored to different tasks.<\/p>\n<p>Data Center GPU Max will also hit the market in January 2023.<\/p>\n<p>In October, Intel announced new 13th-generation processors and released the AI upscaler XeSS for gaming on Nvidia and AMD GPUs.<\/p>\n<p>Subscribe to ForkLog news on Telegram: <a href=\\\"https:\/\/t.me\/forklogAI\\\" target=\\\"_blank\\\" rel=\\\"noreferrer noopener nofollow\\\">ForkLog AI<\/a> \u2014 all the news from the AI world!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Intel unveiled Xeon CPU Max server processors and Data Center GPU Max accelerators for high-performance computing and AI 
workloads.<\/p>\n","protected":false},"author":1,"featured_media":69879,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"select":"1","news_style_id":"1","cryptorium_level":"","_short_excerpt_text":"","creation_source":"","_metatest_mainpost_news_update":false,"footnotes":""},"categories":[3],"tags":[438,1295,736,634],"class_list":["post-69878","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news-and-analysis","tag-artificial-intelligence","tag-chips","tag-intel","tag-supercomputers"],"aioseo_notices":[],"amp_enabled":true,"views":"14","promo_type":"1","layout_type":"1","short_excerpt":"","is_update":"","_links":{"self":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/69878","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/comments?post=69878"}],"version-history":[{"count":1,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/69878\/revisions"}],"predecessor-version":[{"id":69880,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/69878\/revisions\/69880"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/media\/69879"}],"wp:attachment":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/media?parent=69878"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/categories?post=69878"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/tags?post=69878"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}