Alpina Non-fiction is releasing a new edition of Michio Kaku's international bestseller The Future of Humanity, which explores the development of our civilization in the 21st century. We invite you to read an excerpt describing the best- and worst-case scenarios for the evolution of artificial intelligence and explaining the space-time theory of consciousness.
Self-aware Robots
In 2017, a dispute broke out between two billionaires: Facebook founder Mark Zuckerberg and Elon Musk, founder of SpaceX and Tesla. Zuckerberg argued that artificial intelligence would become a huge source of wealth and prosperity and would enrich society. Musk took a far more pessimistic view, saying that AI could threaten humanity's very existence and that one day our creations may turn against us.
Who is right? In maintaining lunar bases and Martian cities we will depend heavily on robots. What happens if the robots someday decide they no longer need us? Might we build colonies in the distant cosmos only to have robots take them away from us?
These fears did not arise yesterday. As early as 1863, the novelist Samuel Butler warned: "We are ourselves creating our own successors. Man will become to the machine what the horse and the dog are to man." If robots gradually become smarter than we are, we may come to feel inferior, left behind in the dust of our own creations. AI expert Nick Bostrom puts it this way: "Life may seem meaningless if we are destined to spend it staring dumbly at our superintelligent offspring as they try to explain their incredible discoveries to us in a simplified language, the only kind we are able to understand." And one computer scientist quoted in The New Yorker remarked: "I am not bothered by it [machine intelligence] for the same reason that overpopulation on Mars does not bother me."
In the Zuckerberg–Musk debate, my personal view is that in the short term Zuckerberg is right. AI will not only enable us to build cities in space; it will also make society wealthier, producing many things better, cheaper and more efficiently, while creating a host of fundamentally new jobs in the robotics industry, which may one day become enormous. But in the long run Musk is right to point to a more serious danger. The key question in this debate is: when will the transition occur, the moment at which robots become dangerous? I personally think the turning point will come when robots become self-aware.
Today, robots do not know that they are robots. But one day they may gain the ability to formulate their own goals rather than passively accepting the goals their programmers have chosen for them. Then they may realize that their intentions differ from ours. Once our interests diverge, there is a real possibility that robots will pose a danger to us. When could this happen? No one knows. Today robots possess the intelligence of a cockroach, but by the end of this century they may become self-aware. By then, among other things, there will be rapidly growing settlements on Mars, so it is important to address this question now, before robots' decisions come to determine humanity's very survival on the Red Planet.
To gain some sense of the scale of this fundamental problem, let us examine the best- and worst-case scenarios for how events may unfold.
The Best- and Worst-Case Scenarios
One of the most vocal advocates of the best-case scenario is the inventor and bestselling author Ray Kurzweil. Whenever I interview him, he paints a clear and compelling, if paradoxical, picture of the future. He believes that by 2045 we will approach the "singularity", the moment when robots match or surpass human intelligence. The term is borrowed from the notion of a gravitational singularity in physics, a region of infinite gravity such as the one inside a black hole. In computer science it was introduced by the mathematician John von Neumann, who wrote that the computer revolution points to an "ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity… beyond which human affairs, as we know them, could not continue." Kurzweil argues that when this singularity arrives, a thousand-dollar computer will be a billion times smarter than all humans combined. Moreover, robots will be self-improving, and their "offspring" will inherit acquired traits, so that each generation surpasses the last, producing an ascending spiral of ever more capable machines.
Kurzweil maintains that the robots we create, far from seizing control from us, will open up a new world of physical well-being and prosperity. If you believe him, microscopic robots, i.e., nanobots, will circulate in our blood and "destroy pathogens, repair DNA errors, remove toxins and perform many other tasks to improve our physical condition". He hopes that science will soon yield a cure for aging and is confident that if he lives long enough, he will live forever. He told me that in anticipation of his immortality he takes several hundred pills a day. And in case he fails to reach that happy moment, he has arranged for a cryonics firm to preserve his body in liquid nitrogen.
Kurzweil also foresees a much more distant future, in which robots will turn the atoms of the Earth into computers. Over time, all the atoms of the Sun and the Solar System will be incorporated into this magnificent thinking machine. Kurzweil says that, looking at the sky, he sometimes feels that one day he may see signs of superintelligent robots rearranging the stars.
But not everyone is convinced by such a rosy vision of the future. Mitch Kapor, founder of Lotus Development Corporation, says that the Singularity movement is, "in my view, largely driven by a religious impulse. And all the frantic hand-waving cannot obscure that fact for me." Hollywood has countered Kurzweil's utopia with the worst-case scenario: what to expect if we create evolutionary successors capable of pushing us aside and consigning us to oblivion alongside the mammoths and other extinct animals. In the film The Terminator, the military builds an intelligent computer network called Skynet to control the entire nuclear arsenal. Designed to remove the threat of nuclear war, Skynet unexpectedly becomes conscious. When the military, alarmed that the machine has awakened, tries to switch it off, the network defends itself in the one way guaranteed to eliminate the threat: it attempts to wipe out the human race. Only scattered bands of guerrillas remain on the planet, fighting the overwhelming power of the machines.
Perhaps Hollywood is simply trying to sell as many tickets as possible by scaring viewers half to death. Or could something like that actually happen? The question is a delicate one, because notions of mind and consciousness are so clouded by moral, philosophical and religious disputes that we simply have no rigorous, widely accepted framework within which to discuss them. Before continuing the discussion of machine intelligence, we therefore need a clear definition of self-awareness.
Space-Time Theory of Consciousness
Some time ago I proposed a theory that I named the Space-Time Theory of Consciousness. It is testable, reproducible, falsifiable and quantitative: it not only defines self-awareness but allows us to measure it on a numerical scale.
The theory begins with the idea that animals, plants and even machines may possess consciousness. I claim that consciousness is the process of creating a model of oneself using many feedback loops — for example, in space, in society or in time — to achieve some goal. To measure the level of consciousness, we simply need to count the number and types of feedback loops necessary for a subject to create a model of itself.
The minimal level of consciousness in this sense can be found in a thermostat or, say, a photocell, which uses a single feedback loop to create a model of itself in terms of temperature or illumination. A flower may have, say, ten units of consciousness, because it builds a model of itself with ten feedback loops, measuring water, temperature, the direction of gravity, sunlight and so on. According to my theory, all feedback loops can be grouped and mapped to a specific level of consciousness; thermostats and flowers belong to Level 0.
Level 1 consciousness is that of reptiles, fruit flies and mosquitoes, which form models of themselves in space. A reptile, for example, uses multiple feedback loops to track the coordinates of its prey and the locations of potential mates, potential rivals and itself.
Level 2 consciousness belongs to social animals. Their feedback loops involve other members of the pack or clan, forming models of the complex social hierarchy within the group, expressed through emotions and gestures.
These levels roughly mirror the evolutionary stages of the mammalian brain. The oldest part of our brain sits at the very back, where balance, territoriality and instincts are processed. As the brain expanded forward, the limbic system, the emotional brain of the primates, developed at its center. The same back-to-front direction of development is reflected in how a child's brain matures.
But where in this scheme is human consciousness? What distinguishes us from plants and animals?
I argue that humans differ from animals in that we feel and understand time. Beyond spatial and social consciousness, we possess temporal consciousness. The most recent part of the brain to evolve is the prefrontal cortex, just behind the forehead, which is dedicated to continuously modeling the future. Animals may seem to plan, for example when to hibernate, but such behavior is largely dictated by instinct. You cannot explain to your dog or cat what tomorrow is, because animals live only in the present. Humans, by contrast, constantly prepare for the future, even beyond our own lifetimes. We plan and we dream; we cannot live without it. Our brains are planning machines.
Magnetic resonance imaging (MRI) shows that when we prepare to perform a task, we summon memories of previous attempts at it, which makes our plans more realistic. According to one theory, animals lack a complex memory system because they rely on instinct and so have no need to foresee the future. In other words, perhaps the main purpose of memory is to project the past into the future.
Within this theory we can now define self-awareness as the ability to place oneself inside a model of the future that is consistent with a goal.
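To make the bookkeeping concrete, here is a minimal sketch in Python of how such a scale might be tallied. The Subject class, the loop-category names and their mapping to levels are illustrative assumptions drawn from the examples above, not a formal apparatus from the theory itself.

```python
from dataclasses import dataclass

# Illustrative mapping of feedback-loop categories to levels of consciousness,
# following the examples in the text: temperature/light (Level 0), space
# (Level 1), social relations (Level 2), time (Level 3).
LEVEL_OF = {"environment": 0, "spatial": 1, "social": 2, "temporal": 3}

@dataclass
class Subject:
    name: str
    loops: list[str]  # one entry per feedback loop, tagged by category

    def units(self) -> int:
        # "Units of consciousness": the raw count of feedback loops.
        return len(self.loops)

    def level(self) -> int:
        # Level of consciousness: the highest category of loop present.
        return max(LEVEL_OF[c] for c in self.loops) if self.loops else 0

subjects = [
    Subject("thermostat", ["environment"]),                     # 1 unit,  Level 0
    Subject("flower", ["environment"] * 10),                    # 10 units, Level 0
    Subject("reptile", ["environment"] * 4 + ["spatial"] * 5),  # Level 1
    Subject("wolf", ["spatial"] * 5 + ["social"] * 7),          # Level 2
    Subject("human", ["spatial"] * 5 + ["social"] * 7 + ["temporal"] * 10),  # Level 3
]

for s in subjects:
    print(f"{s.name}: {s.units()} units of consciousness, Level {s.level()}")
```

On this toy scale, self-awareness would correspond to "temporal" loops that include the subject itself as an object in the simulated future.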
Applying this theory to machines, we see that even our best machines today are stuck at the bottom of Level 1, based on the ability to determine their position in space; most can barely move around an empty room. There are programs capable of modeling the future to some extent, such as AlphaGo from Google's DeepMind, but only in a very narrow domain. Ask AlphaGo to do anything besides play Go, and it simply freezes.
How long is the road ahead, and what steps would have to be taken before we arrive at a self-aware machine like Skynet from The Terminator?
Translated from English by Natalia Lisova. Published as: Michio Kaku, The Future of Humanity: Colonization of Mars, Journeys to the Stars and Attaining Immortality. Moscow: Alpina Non-fiction, 2023.
