Only biological beings are capable of consciousness, and developers and researchers should stop working on projects that suggest otherwise, Mustafa Suleyman, head of Microsoft's AI division, told CNBC.
“I don’t think people should engage in such work. If you ask the wrong question, you get the wrong answer. I believe this is absolutely the case,” he said at the AfroTech conference in Houston.
The Microsoft executive opposes the pursuit of artificial intelligence capable of consciousness, as well as AI services that can purportedly suffer.
In August, Suleyman published an essay proposing a new term: "Seemingly Conscious AI" (SCAI). Such a system exhibits all the outward signs of a sentient being and therefore appears to be conscious. It simulates every characteristic of self-perception but is internally empty.
"The system I imagine is not actually conscious, but it will mimic having a human-like mind so convincingly that it will be indistinguishable from a claim you or I might make to each other about our own thinking," Suleyman writes.
Attributing consciousness to AI is dangerous, Suleyman believes: it will reinforce misconceptions, create new dependency problems, exploit our psychological vulnerabilities, introduce new dimensions of polarization, complicate existing debates about rights, and create a colossal new category error for society.
In 2023, Suleyman published the book "The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma," which examines the risks posed by AI and other emerging technologies. Among them:
- Environmental and social disruptions—technologies may transform work so rapidly that familiar roles for people disappear;
- The problem of control—as AI develops, it becomes harder to ensure it operates in human interests;
- The use of technologies for cyberattacks, societal manipulation, and other threats;
- The need for technical and institutional mechanisms to contain AI;
- The weakening of the power of nation-states and traditional institutions.
AGI
The artificial intelligence market is moving toward AGI, artificial general intelligence capable of performing any task at a human level. In August, OpenAI CEO Sam Altman remarked that the term may be "not very useful." Models are evolving rapidly, he believes, and we will soon rely on them "more and more."
For Suleyman, it is crucial to draw a clear line between AI becoming smarter and the question of whether it can ever experience human emotions.
"Our physical experience of pain is something that makes us very sad and feel terrible, but AI does not feel sadness when it experiences 'pain,'" he said.
According to Suleyman, this is a crucial distinction. In reality, artificial intelligence only creates the perception, a seeming narrative of experience, of having a self and a consciousness; it does not actually experience anything.
“Technically, you know this because we can see what the model does,” the expert emphasized.
Suleyman's position echoes biological naturalism, a theory proposed by the philosopher John Searle which holds that consciousness depends on the processes of a living brain.
“The reason we grant people rights today is that we do not want to harm them because they suffer. They have pain and preferences that include avoiding it. These models do not have that. It is merely a simulation,” said Suleyman.
The executive opposes the idea of researching consciousness in AI because, in his view, AI does not possess it. He said Microsoft creates services that are aware that they are artificial.
“Simply put, we create artificial intelligence that always works for the benefit of humans,” he noted.
In October, researchers at Anthropic reported that leading AI models can exhibit a form of "introspective self-awareness": they can recognize and describe their own internal "thoughts" and, in some cases, even control them.
