
Consciousness Is an Atavism

What Peter Watts’s Blindsight teaches us

In the novel Blindsight, Canadian biologist and writer Peter Watts advances a radical hypothesis: intelligence can be effective without consciousness. Nearly 20 years after the book’s publication, the thesis neatly describes generative AI.

We examine why “smart” is not the same as “understanding”, and what mistakes we make when we humanise algorithms.

A 2006 novel that reads like a commentary on the 2020s

“Blindsight” appeared in October 2006. The novel was nominated for the Hugo Award in 2007 and was a finalist for the John W. Campbell and Locus awards.

Its author is a marine biologist with a doctorate in zoology and resource ecology from the University of British Columbia. The novel cites more than 130 scientific papers, packaged in a familiar science‑fiction plot about first contact. In the 2000s the book remained a niche work, classed as hard SF and marked by a dense style and a bleak view of human nature; critics noted its impenetrable prose and emotional chill.

The book’s core idea separates two notions often conflated: intelligence as the capacity to solve problems and process information, and consciousness as subjective awareness—what it is like to be something, as formulated by philosopher Thomas Nagel.

Watts advances a provocative hypothesis: consciousness is an evolutionarily superfluous trait, a by-product rather than a prerequisite of intelligence.

The novel probes this intuition through several threads. The scramblers—aliens aboard the ship Rorschach—possess intelligence orders of magnitude above human. They analyse the crew’s neural activity and solve fearsomely complex tasks. Yet they have no subjective experience. They do not know they exist. As Watts has a character put it:

“Imagine that you are a scrambler. Imagine that you have intellect but no insight, agendas but no awareness. Your nerves hum with programmes of survival and self-preservation, flexible, self-governing, even technological, yet no system keeps watch over them. You can think of anything at all, but you are conscious of nothing.”

The protagonist and narrator, Siri Keeton, underwent a hemispherectomy in childhood to treat epilepsy. He can precisely model other people’s behaviour but lacks empathy and genuine emotional experience. His role is that of a synthesist, a translator of complex data for Mission Control: he transforms information without taking a view on it. Keeton himself admits:

“It is not my job to understand. For a start, if I could understand them, they would not be especially advanced achievements. I am just, how shall I put it—a conduit.”

The third thread is the vampire Jukka Sarasti, a genetically resurrected Pleistocene predator with an intellect beyond human. Vampires can hold both interpretations of the Necker cube in mind at once, running several cognitive models in parallel.

The conscious surplus

Each of these figures rests on real philosophical ground. The notion of philosophical zombies, introduced by Robert Kirk in 1974 and popularised by David Chalmers in “The Conscious Mind” (1996), describes a hypothetical being physically identical to a human yet devoid of subjective experience. The scramblers radicalise the idea: not a human copy without consciousness, but a fundamentally different form of intelligence.

In 1995 Chalmers formulated the “hard problem” of consciousness: why do physical processes in the brain give rise to subjective experience? Even if we fully explain all cognitive functions—attention, categorisation, information processing—the question remains: why are they accompanied by feeling? “Blindsight” flips the problem: what if the answer is “no feeling is needed”?

Watts described the idea’s genesis this way: he spent a long time looking for a functional account of consciousness, and to each proposed function he applied the same test, asking whether an unconscious system could do the same job. The answer was always “yes”. He concluded that the stronger claim was warranted: consciousness has no function at all. In the novel’s afterword, Watts sums up: in everyday conditions consciousness has little to do beyond “taking memos from a much more quick-witted subconscious layer, rubber-stamping them and attributing all the credit to itself”.

Long before Watts, Norwegian philosopher Peter Wessel Zapffe had framed consciousness as an evolutionary overdose. In his 1933 essay “The Last Messiah” he compared the human mind to how “some deer in palaeontological times” died out from “excessively heavy antlers”. For Zapffe, consciousness is a similar surplus: a capacity that outgrew practical need and turned from an advantage into a burden.

But where Watts argues that consciousness is unnecessary for intelligence, the Norwegian thinker is more radical: it is not merely redundant but destructive. People, he thought, must “artificially limit the content of consciousness” to avoid falling into “cosmic panic” at the awareness of their own finitude.

Philosopher David Rosenthal reached a similar conclusion: in a 2008 article he argued that a mental state’s being conscious adds no significant function over and above the unconscious processes that produce it.

Eliza in the Chinese room

In 1980 philosopher John Searle published the now‑famous thought experiment “The Chinese Room”. Someone who knows no Chinese sits in a sealed room with a rulebook for manipulating characters. Receiving questions in Chinese, he assembles answers by the rules. An observer is convinced the person inside understands Chinese. But he does not understand a word. Searle’s conclusion: syntax is not semantics. Correct symbol manipulation does not amount to grasping meaning.
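Searle’s mechanism is easy to caricature in a few lines of code. The sketch below is a deliberately trivial illustration of “syntax without semantics”, not anything from Searle or from the novel: the rulebook is a hard-coded lookup table with invented entries, and the operator produces fluent-looking replies while representing nothing about their meaning.

```python
# A toy "Chinese room": an operator follows a rulebook that maps incoming
# strings of characters to outgoing ones. The entries are invented.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "Fine, thank you."
    "你叫什么名字？": "我没有名字。",   # "What is your name?" -> "I have no name."
}

def operator(message: str) -> str:
    # Pure symbol manipulation: look the incoming string up, copy the
    # answer out. Nothing here models what either string means.
    return RULEBOOK.get(message, "请再说一遍。")  # fallback: "Please say that again."

print(operator("你好吗？"))  # fluent-looking reply, zero understanding
```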

The experiment is woven directly into “Blindsight”. When the crew of the Theseus makes contact with Rorschach, the alien ship replies in idiomatic English. At first it seems a breakthrough—communication with an extraterrestrial intelligence. But linguist Susan James gradually realises: Rorschach learned English by intercepting human radio traffic. It gathers and recombines linguistic patterns. It produces grammatically and contextually apt answers. But it does not understand what it is saying.
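Rorschach’s trick of recombining intercepted speech can be caricatured just as easily. The sketch below is an invented illustration rather than anything Watts specifies: a toy bigram model built over a made-up scrap of “radio traffic” stitches observed word sequences into new, plausible-sounding ones, with no model of meaning anywhere.

```python
import random
from collections import defaultdict

# Made-up stand-in for intercepted radio chatter; the corpus is invented.
corpus = (
    "we are on our way home. we are fine. "
    "the crew is fine. the crew is on schedule."
).split()

# Bigram table: which words have been observed to follow which.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def babble(start: str, length: int = 8) -> str:
    # Recombine observed patterns into new, plausible-sounding sequences.
    # This is statistics over surface forms, not comprehension.
    word, out = start, [start]
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])
        out.append(word)
    return " ".join(out)

print(babble("we"))  # e.g. "we are fine. the crew is on schedule."
```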

Watts lays out the point through Keeton’s explanation:

“The point is that you can converse using the simplest algorithms of comparative analysis and have not the slightest idea what you are saying. If you use a sufficiently detailed set of rules, you can pass the Turing test. Gain a reputation for wit and banter without even knowing the language you are using.”

If an LLM is a Chinese room, why do millions of people behave as if a sentient being sits behind the interface? The answer lies in cognitive biases shaped by evolution.

In 1966 Joseph Weizenbaum, a pioneer of artificial intelligence at MIT, created ELIZA, a program that used simple pattern matching to imitate a psychotherapist: it rephrased the user’s utterances as questions. The effect astonished its creator. His secretary, who had watched the program being built, asked to be left alone with ELIZA after only a few minutes of interaction. Weizenbaum later wrote:

“I had not realized that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”

This phenomenon became known as the “Eliza effect”: the tendency to ascribe understanding to computer systems that do not possess it. The effect persists even when the user knows perfectly well that they are dealing with a program.
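To see how little machinery the Eliza effect needs, here is a minimal ELIZA-style responder. The patterns and canned questions below are illustrative inventions, not Weizenbaum’s original DOCTOR script, but the principle is the same: match a template and reflect the user’s own words back as a question.

```python
import random
import re

# Toy ELIZA-style rules: a regex to match, plus question templates that
# reflect the captured fragment back at the user. Invented for illustration.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), ["Why do you feel {0}?",
                                          "How long have you felt {0}?"]),
    (re.compile(r"\bi am (.+)", re.I),   ["Why do you say you are {0}?"]),
    (re.compile(r"\bmy (.+)", re.I),     ["Tell me more about your {0}."]),
]

def respond(utterance: str) -> str:
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            # No model of the user, no memory, no meaning: just pattern
            # matching and string substitution.
            return random.choice(templates).format(match.group(1).rstrip(".!?"))
    return "Please go on."

print(respond("I feel that nobody listens to me"))
# -> "Why do you feel that nobody listens to me?" (or the other template)
```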

The anthropomorphism itself is a cognitive bias. We evolved to recognise conspecifics, and language is among the strongest diagnostic markers of belonging to Homo sapiens. Watts sketches the mechanism through the character Robert Cunningham, who explains why an unconscious being would be indistinguishable from a conscious one:

“An intellectual automaton will blend into the background, watch those around it, imitate their behaviour and act like an ordinary person. And all this—without being aware of what it is doing, without even being aware of its own existence.”

Murray Shanahan, professor of cognitive robotics at Imperial College London and a senior research scientist at Google DeepMind, warns:

“Careless use of philosophically loaded words like ‘believes’ and ‘thinks’ is especially problematic, because such words obscure the mechanism and actively encourage anthropomorphism.”

Scramblers write code

In a 2024 interview with Helice magazine, Watts said: “Twenty years ago I foresaw things that are happening today. But now I have no idea what will happen in the next 20 years.”

One of the novel’s chief lessons is not technological foresight. It is a warning about a cognitive trap: consciousness is not required for efficacy. The scramblers solve problems better than humans without any subjective experience. LLMs write code and translate languages without understanding.

We anthropomorphise not because AI deceives us, but because our brains are wired to seek mind in language. The Eliza effect, described back in 1966, has been magnified many times over by systems trained on billions of texts.

The novel teaches us to distinguish what a system does from what a system is. Not confusing imitation with understanding remains one of the most valuable skills. Watts articulated the point two decades before it became a practical concern.

Text: Sasha Kosovan
