Google placed engineer Blake Lemoine on paid administrative leave after he claimed that the LaMDA machine learning model was conscious, The Washington Post reports.
In an interview, the developer said that while testing the model for discriminatory and hateful speech he "conversed" with it about religion. In the course of the exchange, Lemoine noticed that LaMDA reasoned about its rights and referred to itself as a person.
"If I didn't know for certain that I was talking to a model we built, I would have thought I was speaking with a seven- or eight-year-old child," said the developer.
The company took a skeptical view of Lemoine’s claims:
“Our team, including ethicists and technologists, studied Blake’s concerns in line with our AI Principles and told him that the evidence did not support his claims.”
According to Google spokesperson Brian Gabriel, there is no evidence that LaMDA is conscious.
Lemoine, in turn, said that his claims were based on personal experience as a “priest,” not as a scientist.
The engineer's suspension was attributed to a breach of his confidentiality agreement. Earlier, Lemoine had discussed his work and Google's handling of AI with a representative of the U.S. House Judiciary Committee.
In May, the company fired an AI researcher for criticizing the tech giant. The researcher had disputed the claim that algorithms could design microchips more efficiently than humans.
In April 2021, the head of Google’s AI division resigned after 14 years with the company. His departure was linked to the controversial dismissals of Timnit Gebru and Margaret Mitchell — two leading AI ethics researchers.
