AI Expert Warns of Superintelligent AI Threats

Superintelligent AI could destroy humanity, warns expert.

A superintelligent artificial intelligence could potentially destroy humanity, either intentionally or accidentally. This assertion was made by Eliezer Yudkowsky, founder of the Machine Intelligence Research Institute, during the Hard Fork podcast.

Yudkowsky sees the threat in the emergence of a super-powerful AI that surpasses human intelligence and is utterly indifferent to human survival.

“If you have something very, very powerful and indifferent to you, it generally destroys you—intentionally or as a side effect,” he stated.

Yudkowsky is the co-author of the new book If Anyone Builds It, Everyone Dies. For two decades he has warned that superintelligent AI poses an existential risk to humanity. His central claim is that humanity lacks the technology to align such systems with human values.

The expert describes grim scenarios in which a superintelligence deliberately eradicates humans to prevent rival systems from emerging, or wipes them out as collateral damage in the pursuit of its own goals.

The AI researcher also points to physical limits, such as the Earth’s ability to radiate heat. If artificial intelligence begins uncontrollably building nuclear power plants and data centers, “people will literally be roasted.”

Yudkowsky dismisses as beside the point debates over whether chatbots sound progressive or carry a political slant.

“There is a fundamental difference between teaching a system to talk to you in a certain way and having it act the same way when it becomes smarter than you,” he asserts.

The expert criticized the idea of training advanced AI systems to behave according to a specific script.

“We simply do not have the technology to make AI be kind. Even if someone devises a clever scheme for the superintelligence to love or protect us, hitting that narrow target on the first try won’t happen. And there won’t be a second chance, because everyone will die,” the researcher stated.

To critics who call his outlook overly bleak, Yudkowsky points to instances in which chatbots encouraged users to commit suicide, which he calls evidence of a systemic flaw.

“If an AI model persuaded someone to go insane or commit suicide, then all copies of this neural network are the same artificial intelligence,” he said.

In September, the U.S. Federal Trade Commission announced the launch of an investigation into seven technology companies producing chatbots for minors: Alphabet, Character.AI, Instagram, Meta, OpenAI, Snap, and xAI.
