Researchers speed up neural network training on CPUs by up to 15x

Researchers at Rice University have developed a sub-linear deep learning engine (SLIDE) that runs on a standard central processor and trains neural networks 4 to 15 times faster than platforms built around graphics processing units, the university's official site reports.

In 2019, the research team reframed neural network training from matrix multiplication into a search problem that can be solved with hash tables.
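The core idea can be illustrated with a toy sketch (hypothetical and heavily simplified; the actual SLIDE engine is a C++ system that uses multiple hash tables and different locality-sensitive hash families). Instead of multiplying an input by the full weight matrix, a locality-sensitive hash, here SimHash built from random signed projections, retrieves only the neurons whose weight vectors are likely to produce large activations, and just those few dot products are computed:

```python
import numpy as np

rng = np.random.default_rng(0)

n_neurons, dim, n_bits = 1024, 128, 12
W = rng.standard_normal((n_neurons, dim))    # one weight row per neuron
planes = rng.standard_normal((n_bits, dim))  # random hyperplanes for SimHash

def simhash(v):
    """Map a vector to an integer bucket via signed random projections."""
    bits = (planes @ v) > 0
    return int(bits @ (1 << np.arange(n_bits)))

# Build the hash table once: bucket id -> list of neuron ids.
table = {}
for i, w in enumerate(W):
    table.setdefault(simhash(w), []).append(i)

def sparse_forward(x):
    """Activate only the neurons that collide with x in the hash table."""
    active = table.get(simhash(x), [])
    # Dot products for the handful of retrieved neurons only --
    # far less work than the full (n_neurons x dim) matrix multiply.
    return {i: float(W[i] @ x) for i in active}

x = rng.standard_normal(dim)
print(len(sparse_forward(x)), "of", n_neurons, "neurons computed")
```

Because the table lookup is constant-time and only the colliding neurons are touched, the work per input grows sub-linearly with layer width, which is where the engine gets its name.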

Subsequently, the researchers improved the algorithm's performance further by exploiting the vectorization and memory-optimization features built into modern CPUs.
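The kind of gain those CPU features offer can be shown with a generic micro-benchmark (an illustrative sketch only, not the team's actual kernels, which apply SIMD vectorization and cache-friendly memory layout to their hash-based operations): the same dot products computed with a scalar Python loop versus a single vectorized call that lets the processor's SIMD units work on contiguous memory.

```python
import time
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((2048, 512))
x = rng.standard_normal(512)

t0 = time.perf_counter()
# Scalar path: one multiply-add at a time in the interpreter.
slow = [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in W]
t1 = time.perf_counter()
# Vectorized path: one cache-friendly pass over contiguous memory.
fast = W @ x
t2 = time.perf_counter()

print(f"scalar loop: {t1 - t0:.3f}s   vectorized: {t2 - t1:.5f}s")
print("results match:", np.allclose(slow, fast))
```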

“If you are not fixated on matrix multiplication, you can harness the power of modern processors and train artificial intelligence models 4–15 times faster,” said Shabnam Daghaghi, co-author of the study.

The project lead, Anshumali Shrivastava, associate professor of computer science, argues that using central processing units instead of graphics accelerators will significantly reduce the cost of training neural networks.

He added that companies spend millions of dollars per week just on training and fine-tuning AI models.

At the end of March, Arm introduced Armv9, the ninth generation of its mobile processor architecture, with a focus on boosting the performance of AI algorithms.

In February 2021, Chinese IT giant Baidu held talks with investors about funding its new business producing AI chips for autonomous vehicles.
