ForkLog

Scientists warn that superintelligent AI threatens humanity

A superintelligent artificial intelligence could malfunction and wipe out humanity. This warning comes from scientists at the University of Oxford and the DeepMind laboratory in a new scientific paper.

The team sought to forecast how AI driven by reward systems might get out of hand and harm people. As a starting point, the researchers modeled their analysis on generative adversarial networks (GANs), which consist of two parts: a generator and a discriminator.

The first component creates images, while the second evaluates whether they look real or generated.
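The adversarial setup described above can be sketched in a toy form: a generator and a discriminator trained against each other until the discriminator can no longer tell real data from generated data. The code below is a minimal illustration in plain Python, not from the paper; the 1-D "data" (numbers near 5.0), the single-parameter generator, and the logistic discriminator are all simplifying assumptions.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# "Real" data: noisy samples around 5.0 (an arbitrary toy target)
def real_sample():
    return 5.0 + random.gauss(0, 0.1)

g = 0.0        # generator: a single learnable offset added to noise
w, b = 0.0, 0.0  # discriminator: logistic model D(x) = sigmoid(w*x + b)
lr = 0.05

for step in range(2000):
    # Train discriminator: push D(real) toward 1, D(fake) toward 0
    x_real = real_sample()
    x_fake = g + random.gauss(0, 0.1)
    for x, label in ((x_real, 1.0), (x_fake, 0.0)):
        p = sigmoid(w * x + b)
        # gradient step on binary cross-entropy
        w += lr * (label - p) * x
        b += lr * (label - p)

    # Train generator: push D(fake) toward 1 (fool the discriminator)
    x_fake = g + random.gauss(0, 0.1)
    p = sigmoid(w * x_fake + b)
    # gradient of -log D(g + noise) with respect to g, via the chain rule
    g += lr * (1.0 - p) * w
```

After training, the generator's output drifts toward the real data's range: each side improves only by beating the other, which is the competitive dynamic the researchers extrapolate from.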

The researchers suggested that at some point a more advanced AI, controlling an important function, might resort to a deceptive strategy to obtain a reward.

“Under these conditions, our conclusion is far stronger than any previous publication — an existential catastrophe is not merely possible, it is likely,” said Dr. Michael Cohen, a researcher at the University of Oxford.

The scientist believes that in a world of finite resources, artificial agents would be driven to outcompete their rivals.

For illustrative purposes, the researchers presented scenarios in which an advanced program could intervene to obtain a reward without achieving its stated goal. For example, AI might want to “eliminate potential threats” and “use all available energy” to secure control over its reward.

“We should regard artificial agents we deploy today with greater suspicion, rather than blindly expecting them to carry out the tasks we set for them,” said Cohen.

The researchers also suggested that at this stage of AI development, progress on superintelligent algorithms should be slowed or halted.

Google, which owns the DeepMind lab, said in a comment to Motherboard that it was not involved in this work. According to a company representative, one of the article’s co-authors, Marcus Hutter, contributed in his capacity at the Australian National University.

“DeepMind did not participate in this work, and the authors requested corrections to reflect that. DeepMind spans a wide range of views and academic interests. Many in our team also hold professorships at universities and conduct academic research separately from their work at DeepMind through their university affiliations,” Google said.

At the same time, the company said it supports researchers’ efforts to ensure that AI algorithms are used safely.

Earlier in September, historian and philosopher Emil Torres warned that artificial intelligence threatens humanity with total extinction.

In January, Elon Musk pledged not to allow a potential machine uprising.

