ForkLog

Google’s AI Masters Open-World Video Games

Google DeepMind’s SIMA model has learned to play nine open-world video games.

“As part of our research project SIMA, a game-playing AI agent has been created. It can perform a wide range of tasks in virtual worlds, becoming more adaptable!” said DeepMind co-founder Shane Legg.

No Man’s Sky is an example of an open-world video game that the neural network has been trained to play. This skill could be a step towards creating AGI that functions successfully in the real world.

Video games have long served as benchmarks for progress in artificial intelligence. Google DeepMind has already demonstrated superhuman play in chess and Go. However, those games have well-defined rules and win conditions, which makes training neural networks on them comparatively straightforward.

Open-world games, with their loosely defined goals and abundance of irrelevant information, pose a harder challenge for AI systems. The sheer variety of possible actions makes them somewhat closer to real life.

The games that the AI model has already learned to play include Minecraft, Teardown, and Goat Simulator 3.

SIMA can navigate spaces, use objects, and interact with interfaces. It can also perform more complex actions, such as piloting spacecraft or extracting resources.

The developers used existing video and image recognition models to interpret game footage, then trained SIMA to associate on-screen events with specific tasks.

To gather this data, researchers had participants play in pairs: one watched the screen and instructed the other on which moves to make.

The developers also asked players to review recordings of their own sessions and describe the mouse and keyboard actions they used to complete tasks. This allowed SIMA to learn how players’ inputs map to solving in-game objectives.

“It is important to remember that for companies like DeepMind, these studies are not really about games, but about robotics. Navigation in a 3D environment is a means to an end. Companies aim to create AI models capable of perceiving and acting in the world,” commented Michael Cook, professor at King’s College London.

After training the neural network on eight games, researchers found it could also play a ninth game it had never seen before, though its performance fell short of human level.

In February, DeepMind released the AI model Genie, which creates games from prompts.
