Intel unveils Max-series AI chips for supercomputers

In the run-up to the Supercomputing 2022 (SC22) conference, Intel unveiled Xeon CPU Max server processors and Data Center GPU Max accelerators for high-performance computing (HPC) and AI workloads.

The company will integrate the new devices into the Aurora supercomputer, due to come online in 2023. The system will help scientists at Argonne National Laboratory conduct research into low-carbon technologies, subatomic particles, medicine and cosmology.

The Xeon CPU Max chip is known under the codename Sapphire Rapids HBM. According to the company, it is the first and only x86-based CPU with high-bandwidth HBM2e memory integrated on the package.

The processor includes up to 56 compute cores with support for 112 hardware threads and has a TDP of 350 W. It is aimed at high-performance server systems.
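
The relationship between those two figures comes from Hyper-Threading: two hardware threads per physical core. The sketch below is a minimal illustration of that arithmetic, assuming 2 threads per core as implied by the quoted 56-core/112-thread specs; the os.cpu_count() call simply reports logical CPUs on whatever machine runs it.

```python
import os

# Illustrative only: with Hyper-Threading enabled, the OS exposes two
# logical CPUs per physical core, so a 56-core Xeon CPU Max part would be
# reported as 112 logical CPUs (2 threads per core is an assumption
# implied by the quoted 56-core / 112-thread figures).
PHYSICAL_CORES = 56
THREADS_PER_CORE = 2

expected_logical_cpus = PHYSICAL_CORES * THREADS_PER_CORE  # 112
print("Expected logical CPUs:", expected_logical_cpus)
print("Logical CPUs on this machine:", os.cpu_count())
```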

Intel Xeon CPU Max uses EMIB packaging technology. It carries 64 GB of on-package HBM2e memory and supports PCIe 5.0 and CXL 1.1. The chip’s total bandwidth is around 1 TB/s.

The processor provides more than 1 GB of HBM2e memory per core, enough for most common HPC workloads.
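
As a quick sanity check on that figure, dividing the 64 GB of on-package HBM2e quoted above by the 56 cores gives roughly 1.14 GB per core. The sketch below is illustrative arithmetic only, based on the numbers cited in this article.

```python
# Back-of-the-envelope check of the "more than 1 GB of HBM2e per core" claim,
# using the figures quoted in this article (64 GB of HBM2e, 56 cores).
hbm_total_gb = 64
core_count = 56

hbm_per_core_gb = hbm_total_gb / core_count
print(f"HBM2e per core: {hbm_per_core_gb:.2f} GB")  # ~1.14 GB, i.e. more than 1 GB
```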

The company said that Xeon Max consumes 68% less power than AMD Milan-X at the same performance.
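
One way to read that claim: at equal performance, 68% lower power corresponds to roughly a threefold gain in performance per watt. The sketch below shows the conversion; treating the quoted percentage this way is an assumption made for illustration, not an Intel-published efficiency metric.

```python
# Convert "68% less power at the same performance" into a relative
# performance-per-watt figure. Assumption: performance is held constant,
# so efficiency scales as the inverse of the power ratio.
power_reduction = 0.68                  # quoted: 68% less power than Milan-X
relative_power = 1.0 - power_reduction  # Xeon Max power as a share of Milan-X power

perf_per_watt_gain = 1.0 / relative_power
print(f"Relative performance per watt: ~{perf_per_watt_gain:.1f}x")  # ~3.1x
```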

According to the company, in certain workloads Xeon Max delivers up to 3.5 times the performance of the Intel Xeon 8380 and AMD EPYC 7773X.

The processor was also compared with Nvidia’s A100 chip in the MLPerf DeepCAM benchmark, which covers accelerating and augmenting modeling on AI-powered supercomputers. The new device proved 1.2 times faster than its competitor.

Intel Xeon CPU Max will hit the market in January 2023.

The Data Center GPU Max graphics processor is known by the codename Ponte Vecchio.

It includes 128 Xe cores and 128 RT cores, making it the only server accelerator with native hardware-accelerated ray tracing support.

\"Intel
Chip Intel Data Center GPU Max Series. Data: Intel.

The processor has up to 408 MB of L2 cache and up to 64 MB of L1 cache.

The system comprises more than 100 billion transistors across 47 chiplets built on multiple process nodes, including Intel 7 and TSMC N5. The chiplets are interconnected with EMIB and Foveros packaging technologies.

The company will release the chips in several form factors tailored to different tasks.

Data Center GPU Max will also hit the market in January 2023.

In October, Intel announced new 13th-generation processors and released the AI upscaler XeSS for gaming on Nvidia and AMD GPUs.
