Researchers at Google Research used reinforcement learning to cut chip design time from months to six hours. They say the AI matches or even surpasses the work of experienced engineers.
Last year we shared how #ReinforcementLearning could hasten the design of accelerator chips (https://t.co/7KqG2pqTqh). Today we’re publishing improved methods, which we’ve used in production to design the next generation of Google TPUs. Read more in Nature https://t.co/gOQVmbWdeV
— Google AI (@GoogleAI) June 9, 2021
To achieve this, the researchers framed chip design as a game: the processor's components acted as the playing pieces, and the chip canvas on which they were placed served as the game board.
Layout quality was scored with metrics derived from a dataset of 10,000 chip placements; the placement with the best score won the game, Google said.
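To make the game analogy concrete, here is a minimal sketch in Python: components are placed one at a time on free cells of a grid canvas, and the final score is a proxy metric (negative total wire length between connected components). The environment, grid size, and net list below are illustrative assumptions only, not Google's actual system, which trains a policy network with reinforcement learning rather than sampling placements at random.

```python
import random

# Hypothetical sketch of the "chip design as a game" framing.
# The canvas is the game board, components are the pieces, and the
# reward is a proxy metric (negative total wire length).
GRID = 8  # an 8x8 placement canvas (the game board)

# Each "net" connects two components whose wire length we want to minimize.
NETS = [(0, 1), (1, 2), (2, 3), (0, 3)]
NUM_COMPONENTS = 4

def wirelength(placement):
    """Proxy cost: total Manhattan distance over all connected pairs."""
    total = 0
    for a, b in NETS:
        (ax, ay), (bx, by) = placement[a], placement[b]
        total += abs(ax - bx) + abs(ay - by)
    return total

def random_episode():
    """Play one 'game': place components one by one on free cells."""
    free = [(x, y) for x in range(GRID) for y in range(GRID)]
    placement = []
    for _ in range(NUM_COMPONENTS):
        cell = random.choice(free)
        free.remove(cell)
        placement.append(cell)
    return placement, -wirelength(placement)  # higher reward = shorter wires

# The "winning" layout is the one with the best score. An RL agent would
# learn a policy that chooses cells to maximize this reward instead of
# guessing randomly, as this toy baseline does.
best = max((random_episode() for _ in range(1000)), key=lambda e: e[1])
print("best layout:", best[0], "reward:", best[1])
```

In the real system the reward combines several such proxy metrics, and the agent's experience across many chips is what lets it produce strong layouts for a new chip in hours.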
They found that within six hours the algorithm designed chip layouts that matched or even surpassed samples human engineers had produced over months.
“Our method generates chip floorplans in under six hours, whereas the strongest baseline requires months of intensive effort by human experts,” the study said.
The developers stressed that their approach not only saves time but also improves the quality of the designs.
They added that the method had already been used to design the TPUv4 tensor processor, unveiled in May 2021.
Google first disclosed that it was experimenting with AI for chip design in 2020.
Earlier in May, scientists presented the Perlmutter AI supercomputer, which will help build the largest-ever 3D map of the observable universe.
Also in May, researchers developed an AI that models the universe on a single GPU 1,000 times faster than existing methods.
In April, scientists introduced a deep-learning method that runs on CPUs and trains neural networks 15 times faster than GPU-accelerated platforms.
