Google has fired engineer Blake Lemoine after he claimed that the LaMDA artificial intelligence had achieved consciousness, The New York Times reported.
Google sharply rejected Lemoine's claims, noting that LaMDA underwent 11 separate reviews, and published a research paper detailing its work on responsible AI development. According to the company, it carefully evaluates employee concerns in line with its principles of responsible AI.
"We found Blake's assertions that LaMDA possesses consciousness to be completely unfounded, and we worked with him for many months to clarify this," said Chris Pappas, a spokesperson for the tech giant.
He added that, despite this lengthy engagement, Lemoine chose to violate clear employment and data-security policies, including the requirement to safeguard product information.
“We will continue our careful development of language models and wish Blake all the best,” added Pappas.
In June 2022, Lemoine said in an interview with The Washington Post that one of the company's AI projects had allegedly gained consciousness. The engineer also openly questioned whether LaMDA has a soul. In response, Google placed Lemoine on paid leave, accusing him of violating his nondisclosure agreement.
Many in the research community disagreed with the engineer's claims. Former Google employee Margaret Mitchell wrote on Twitter that systems like LaMDA do not develop intentions; they model human communicative capabilities. Scientist and entrepreneur Gary Marcus called Lemoine's claim "nonsense."
Earlier, in May, Google fired an AI researcher who had criticized the tech giant: the engineer challenged the claim that algorithms can design microchips more efficiently than humans.
In April 2021, the head of one of Google's AI divisions left the company after 14 years. His departure was linked to the controversial exits of Timnit Gebru and Margaret Mitchell, two leading AI ethics researchers.
Subscribe to ForkLog AI news on Telegram: ForkLog AI — all the news from the AI world!
