
Experts compare future AI to pandemics and nuclear war
Industry leaders have warned that artificial intelligence could pose an “existential threat” to humanity and should be treated as a global risk on a par with pandemics and nuclear war.
The 22-word open letter from the nonprofit research organization Center for AI Safety was signed by more than 350 leaders, scientists and engineers working in AI.
According to the statement, industry experts, journalists, policymakers and the public are increasingly discussing a wide range of important and urgent risks associated with the technology.
“Sometimes it can be difficult to voice concerns about some of the most serious threats posed by advanced AI. The letter is intended to overcome this obstacle and open up the discussion,” the statement says.
Among the signatories are the CEOs of three leading AI companies: Sam Altman (OpenAI), Demis Hassabis (Google DeepMind) and Dario Amodei (Anthropic).
Joining them were the ‘godfathers of AI’ Geoffrey Hinton and Yoshua Bengio. The letter was also signed by Meta’s vice-president and leading AI specialist Yann LeCun.
In March, more than 1,000 AI experts called for a pause on training language models more powerful than GPT-4.
Subsequently, other AI specialists criticised this idea.