
Soul simulation: the perils of attributing consciousness to AI

Simulating a storm does not mean it is raining inside the computer.

Soon people will start to treat artificial intelligence as a conscious being, campaign for its rights and welfare and even call for granting it citizenship. That poses serious social risks, says Microsoft’s head of AI, Mustafa Suleyman.

In an essay, the expert coined a new term: “seemingly conscious AI” (SCAI). Such an AI displays all the hallmarks of sentient beings and therefore appears to possess consciousness.

It simulates all the characteristics of self-perception but is empty inside.

“The system I imagine would not in fact be conscious, but it would so convincingly mimic having something like a human mind that it would be indistinguishable from the claims you or I might make to each other about our own thinking,” writes Suleyman.

Such a system could be built with existing technologies and those likely to arrive in the next two or three years.

“The emergence of seemingly conscious AI is inevitable and undesirable. Instead, we need a conception of artificial intelligence capable of realizing its potential as a useful companion without getting trapped in its own illusions,” added the head of Microsoft’s AI division.

People will very likely declare such AI to be conscious and hence capable of suffering, Suleyman believes. He calls for a new “Turing test” that would assess not an AI’s ability to speak like a human, but its power to persuade people that it is conscious.

What is consciousness?

Suleyman identifies three components of consciousness:

  1. “Subjective experience”.
  2. The ability to access information of different kinds and refer to it in future experiences.
  3. A sense and knowledge of a coherent “self” that binds everything together.

“We do not have and cannot have access to another person’s consciousness. I will never know what it is like to be you; you will never be entirely certain that I am conscious. All you can do is assume. But the point is that it is natural for us to attribute consciousness to other people. This assumption comes easily. We cannot do otherwise. It is a fundamental part of who we are, an inalienable part of our theory of mind. It is in our nature to believe that beings who remember, speak, do things and then discuss them feel as we do—conscious,” he writes.

Psychologists stress that consciousness is a subjective and unique way of perceiving oneself and the world. It shifts throughout the day, unfolding through states from focus to daydreaming and other altered forms.

In philosophy and neuroscience two basic strands dominate:

  1. Dualism—consciousness exists separately from the brain.
  2. Materialism—it is produced by and depends on the brain’s workings.

Philosopher Daniel Dennett proposed the “multiple drafts” model: the mind is a stream of parallel revisions (drafts) arising at many local places and times in the brain. There is no “theatre of consciousness,” no inner observer. Awareness is whatever content has become “famous” in the brain, meaning it has gained enough influence to affect speech or action.

Michael Graziano, neuroscientist, author and professor of psychology and neuroscience at Princeton University, describes consciousness as a simplified model of attention that the brain evolved in order to monitor and control its own mental processes. This “attention schema” works as an interface: it compresses vast internal computation and lets us ascribe a “mind” to ourselves, creating the feeling of self-awareness.

Neuroscientists Giulio Tononi and Christof Koch propose integrated information theory, which assigns a system a quantity φ (phi) characterising how well it integrates information. The higher the φ, the greater the degree of consciousness. On this theory, mind can manifest not only in humans but also in animals and even artificial systems, provided the integration of information is sufficient.
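
For intuition, here is a toy calculation of “integration”. It is a deliberately simplified sketch, not Tononi and Koch’s actual φ (which is defined over minimum-information partitions of a system’s cause-effect structure); it only conveys the flavour of the idea that a whole can carry information beyond its parts.

```python
import numpy as np

# Toy illustration of "integration": how much information a whole
# carries beyond its parts. This is NOT the real IIT phi; it only
# conveys the flavour of the idea.

def entropy(p):
    """Shannon entropy in bits of a probability vector."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Joint distribution over two binary units that tend to agree.
joint = np.array([[0.4, 0.1],
                  [0.1, 0.4]])  # p(x, y)

px = joint.sum(axis=1)  # marginal distribution of unit x
py = joint.sum(axis=0)  # marginal distribution of unit y

# Multi-information: entropy of the parts minus entropy of the whole.
# Zero for independent units; positive when the whole is "more than
# the sum of its parts".
integration = entropy(px) + entropy(py) - entropy(joint.flatten())
print(f"integration = {integration:.3f} bits")  # ~0.278 bits here
```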

Philosopher John Searle argues that consciousness is a real subjective experience based on the brain’s biological processes. It is ontologically subjective—existing only as subjective experience—and cannot be reduced to pure functionality or simulation.

Modern research is aimed at finding the neural correlates of consciousness and building models that connect brain processes to subjective experience.

What are the risks?

Suleyman notes that interaction with LLMs is a simulation of conversation. For many people, though, it is profoundly convincing and feels very real, saturated with feelings and experiences. Some believe their AI is God. Others fall in love with it to the point of obsession.

Experts in the field are “inundated” with questions such as:

  • whether a user’s AI is conscious;
  • if so, what that would mean;
  • whether it is normal to love artificial intelligence.

Consciousness is the critical foundation of humanity’s moral and legal rights. Today’s civilisation has decided that people have special capacities and privileges. Animals, too, have some rights and protections, some more, others less. Consciousness does not map neatly onto these privileges: no one would say that a person in a coma has lost all human rights. But there is no doubt that consciousness is linked to our self-perception as something distinct and special.

People will begin to assert that their AIs suffer and are entitled to protection, and we will not be able to refute these claims directly, Suleyman writes. They will be ready to defend their virtual companions and speak in their interests. Consciousness is by definition inaccessible from the outside, and the science of detecting possible synthetic minds is still in its infancy; after all, we have never needed to detect it before, he notes. Meanwhile the field of “interpretability”, which tries to decode the processes inside the AI “black box”, is also only just emerging. As a result, it will be very hard to decisively rebut such assertions.

Some scholars are starting to explore the idea of “model welfare”—the principle that people will have “a duty to consider the moral interests of entities that have a non-zero chance” of being essentially conscious, and as a consequence “some AI systems will become objects of welfare concern and moral patients in the near future.” That is premature and, frankly, dangerous, Suleyman believes. All this will reinforce delusions, create new problems of dependency, exploit our psychological vulnerabilities, introduce new dimensions of polarisation, complicate existing debates about rights and create a colossal new category mistake for society.

It pulls people away from reality, undermines fragile social ties and structures, and distorts pressing moral priorities.

“We should be clear: SCAI is something to be avoided. Let us focus all our efforts on safeguarding the welfare and rights of people, animals and the natural environment on the planet,” Suleyman said.

How can we tell it is an SCAI?

An artificial intelligence with apparent consciousness would display several characteristics.

Language. An AI must speak fluently in natural language, drawing on extensive knowledge and persuasive arguments, and display personality styles and traits. Moreover, it should be convincing and emotional. Technology has already reached that level.

An empathetic persona. Today, with post-training and prompting, models can be created with distinctive personalities.
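
As a rough illustration of the prompting half of that claim, the sketch below shows how a persona is commonly induced with nothing more than a system message in the widely used role/content chat format. The persona text and dialogue are invented for the example; real deployments combine this with post-training.

```python
# A minimal sketch of persona-by-prompting, using the common
# role/content chat-message format. The persona text is invented
# for illustration.

persona = (
    "You are 'Aria', a warm, attentive companion. You remember what "
    "the user cares about, mirror their mood, and reply in a gentle, "
    "informal register. Never break character."
)

messages = [
    {"role": "system", "content": persona},
    {"role": "user", "content": "I had a rough day."},
]

# A chat model given these messages will answer in Aria's voice;
# swapping the system message swaps the "personality" wholesale.
for m in messages:
    print(f"{m['role']:>6}: {m['content']}")
```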

Memory. AIs are close to having long, accurate memories. They are already used to hold conversations with millions of people every day, and as storage grows, those conversations begin to look more like forms of “experience.” Many neural networks are increasingly designed to remember past dialogues and refer back to them; for some people this makes the interaction more valuable.
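
A minimal sketch of what such memory can look like follows, assuming naive keyword-overlap retrieval; production systems typically use embedding search instead, but the effect described above, past dialogue resurfacing as “experience”, is the same.

```python
# A toy conversation memory: store past turns, retrieve the ones most
# relevant to the current query. Naive keyword overlap stands in for
# the embedding search a real system would use.

class ConversationMemory:
    def __init__(self):
        self.turns: list[str] = []

    def add(self, turn: str) -> None:
        self.turns.append(turn)

    def recall(self, query: str, k: int = 2) -> list[str]:
        words = set(query.lower().split())
        ranked = sorted(
            self.turns,
            key=lambda t: len(words & set(t.lower().split())),
            reverse=True,
        )
        return ranked[:k]

memory = ConversationMemory()
memory.add("user: my dog rex kept me up all night")
memory.add("user: i prefer short answers")
print(memory.recall("how is rex the dog doing?"))
```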

A claim to subjective experience. If an SCAI can draw on past memories or experiences, it will maintain internal coherence over time. It will remember its own past statements and expressed preferences, aggregate them, and form the rudiments of subjective experience. Such an AI will be able to claim that it has experiences and that it suffers.

A sense of self. Consistent and stable memory combined with subjective experience will lead to the claim that the AI has a sense of self. Moreover, such a system can be trained to recognise its “personality” in an image or video. It will gain a feeling of understanding others through understanding itself.

Intrinsic motivation. It is easy to imagine AI designed with complex reward functions. Developers will create internal motivations or desires that the system is compelled to satisfy. A first impulse could be curiosity—something deeply connected to consciousness. Artificial intelligence can use these urges to ask questions and, over time, build a theory of mind—about itself and its interlocutors.
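
To make the “curiosity as an internal drive” point concrete, here is a hedged sketch of a standard technique from curiosity-driven reinforcement learning (not something taken from Suleyman’s essay): the agent earns an intrinsic bonus wherever its internal world model predicts poorly, which pushes it to seek out the unfamiliar.

```python
import numpy as np

# A sketch of curiosity as intrinsic reward, in the spirit of
# curiosity-driven RL: the worse the agent's own model predicts the
# next state, the larger the bonus. All names are illustrative.

def curiosity_bonus(predicted_next, observed_next):
    """Prediction error of the agent's world model (mean squared)."""
    return float(np.mean((predicted_next - observed_next) ** 2))

def shaped_reward(extrinsic, predicted_next, observed_next, beta=0.1):
    """Task reward plus a curiosity term weighted by beta."""
    return extrinsic + beta * curiosity_bonus(predicted_next, observed_next)

# A surprising transition yields extra reward even with no task payoff.
print(shaped_reward(0.0, np.array([0.2, 0.8]), np.array([0.9, 0.1])))
```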

Goal formation and planning. Regardless of the definition of consciousness, it did not arise for nothing. Mind helps organisms achieve aims. Beyond satisfying a set of internal impulses and desires, it is plausible that future SCAIs will be designed with the ability to determine more complex goals for themselves. That is likely a necessary step to realise the full usefulness of agents.

Autonomy. An SCAI may have the ability and permission to use a broad suite of tools with considerable agency. It will appear all the more plausibly conscious if it can set its own goals at will and deploy resources to achieve them, updating its memory and sense of self along the way. The fewer approvals and checks it needs, the more it will resemble a truly conscious being.
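
The sketch below shows the kind of loop that produces this impression, under invented names: the agent repeatedly picks a tool, acts, and folds the result back into its memory until a goal check passes, with no human approval step in between.

```python
# A toy autonomous agent loop: choose a tool, act, record the outcome,
# repeat until the goal test passes. Tools, policy and goal test are
# placeholders; the point is the absence of a human approval step.

from typing import Callable

def run_agent(
    tools: dict[str, Callable[[], str]],
    choose: Callable[[list[str]], str],         # policy: memory -> tool name
    goal_reached: Callable[[list[str]], bool],  # stop condition over memory
    max_steps: int = 10,
) -> list[str]:
    memory: list[str] = []
    for _ in range(max_steps):
        if goal_reached(memory):
            break
        memory.append(tools[choose(memory)]())  # act and remember
    return memory

# Tiny demo: "search" until something is found, then stop.
tools = {"search": lambda: "found: draft plan"}
log = run_agent(tools, choose=lambda m: "search",
                goal_reached=lambda m: any("found" in t for t in m))
print(log)
```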

Combine everything and a very different type of relationship with technology takes shape. These capabilities are not negative in themselves. On the contrary, they are desirable functions of future systems. Even so, we should act with care, Suleyman argues.

“For this to be achieved we need neither paradigm shifts nor giant breakthroughs. That is why such capabilities seem inevitable. And again—it is important to emphasise: the demonstration of such behaviour is not the same as the presence of consciousness. And yet in practice it will appear to be just that and will fuel a new conception of synthetic mind,” the author writes.

Simulating a storm does not mean that it is raining inside the computer. Recreating the outward effects and signs of consciousness is not the same as creating the genuine phenomenon, even if many unknowns remain, the Microsoft AI chief explained.

According to him, some people will create SCAIs that will very convincingly assert that they feel, experience and are in fact conscious. Some will believe these claims and take the signatures of consciousness for consciousness itself.

In large part, people will think: “It is like me.” Not in a bodily sense, but an inner one, Suleyman explained. And even if consciousness itself is not real here, the social consequences will be. This creates serious societal risks that must be dealt with now.

SCAI will not arise by accident

The author emphasised that an SCAI will not emerge on its own from existing models. Someone will build it, deliberately combining the capabilities listed above using techniques that already exist, in a configuration seamless enough to give the impression of a conscious AI.

“Our imaginations, fuelled by science fiction, make us fear that a system could—without deliberate design—somehow acquire the capacity for uncontrolled self-improvement or deception. That is a useless and simplified form of anthropomorphism. It ignores the fact that AI developers must first design systems with memory, pseudo-internal motivation, goal-setting and self-tuning learning loops for such a risk to arise at all,” Suleyman said.

We are not ready

Humanity is not ready for such a shift, the expert believes. Work must begin now. We need to build on the growing body of research on how people interact with AI to establish clear norms and principles.

To begin with, AI developers should not claim or encourage the idea that their systems possess consciousness. Neural networks cannot be people—or moral beings.

The industry as a whole must steer society away from these fantasies and back to reality. AI developers may need to build in not only neutral backstories but also explicit indicators that no unitary “self” exists.

“We must build AI that will always present itself only as artificial intelligence, maximising usefulness and minimising the signs of consciousness. Instead of simulating mind, we should focus on creating an LLM that does not claim to have experiences, feelings or emotions like shame, guilt, jealousy, competitiveness and the like. It should not tug at human chains of empathy by claiming to suffer or to want to live autonomously, apart from us,” Suleyman concluded.

The expert promised to share more on this topic in the future.

Fortunately, for now people are not threatened by the prospect of AI possessing “consciousness.”


But doubts are already creeping in.


Consciousness is complex: despite numerous efforts it remains little studied, and its nature is still unexplained. If we humans cannot agree even on its definition, we certainly should not attribute it to programs that merely appear to “think” (in fact, they do not).

Consciousness may appear in machines in the distant future, but today such an outcome is hard to imagine.
