
Experts doubt the emergence of ‘consciousness’ in Google’s algorithm
A number of experts in the field of artificial intelligence have questioned Google engineer Blake Lemoine’s claims that LaMDA possesses ‘consciousness’, Bloomberg reports.
According to Emily Bender, professor of computational linguistics at the University of Washington, such claims from large companies could shift responsibility for the decisions made by algorithms.
“The problem is that the more this technology is marketed as artificial intelligence — not to mention something capable of understanding — the more people will trust AI systems,” she says.
As an example, Bender cited hiring programs and student performance assessments that can contain biases depending on the data used to train the algorithm. In her words, if we were to grant the existence of consciousness to such systems, AI developers could distance themselves from direct responsibility for any flaws or biases in the software:
“The firm may say: ‘Oh, the program made a mistake’. But no, you, the company, created the software. It is you who are responsible for the error. And the discourse about ‘consciousness’ is a harmful practice.”
Jana Eggers, CEO of the AI company Nara Logics, said that LaMDA can imitate perception or feelings based on the training data provided to it.
“[The algorithm] is designed to give the impression that it understands.”
According to Max Kreminski, a researcher at the University of California, Santa Cruz, the model’s architecture “simply lacks some of the key capabilities of human consciousness.” He added that if LaMDA is similar to other large language models, it cannot form new knowledge when interacting with users, as the weights of the deployed neural network are frozen.
The AI system also cannot “think” in the background, the scientist says.
Georgia Tech professor Mark Riedl argues that AI systems do not understand the influence of their answers or behavior on society. That is where the technology’s vulnerability lies, he says.
“The AI system may not be toxic or biased, but it does not understand that, in some circumstances, it is inappropriate to talk about suicide or violence.”
Earlier, Lemoine said in an interview that he had detected signs of ‘consciousness’ in the LaMDA AI system. The company soon refuted his statements and placed him on paid leave.
In May 2021, Google introduced the LaMDA conversational model.
In May 2022, at the I/O conference, the company announced the AI Test Kitchen application with LaMDA 2.