In 2019 the French philosopher and essayist Gaspard Koenig undertook, as he himself puts it, “a round-the-world journey to the land of artificial intelligence.” He headed west: from Cambridge to Oxford, then to Boston, New York, Washington, San Francisco, Los Angeles, Shanghai, Beijing, Tel Aviv, Copenhagen and, finally, Paris. Along the way he conducted 125 interviews with researchers, entrepreneurs, investors, professors, officials and artists who work with AI, refine it and develop it.
The ambition of the undertaking, Koenig explains, was to determine whether those who see AI as a threat to free will are right (above all the fashionable philosopher Yuval Noah Harari), and to work out what that threat means for liberty, and with it for the free market, democracy, privacy and the other things dear to Koenig, a liberal by conviction. The result is the book “Конец индивидуума” (“The End of the Individual”), whose Russian edition was released at the end of August by Individuum.
Some will object: four years, by the standards of modern technological progress, is an entire epoch; what of importance can remain in a book about AI written in 2019? Yet such arrogance, which confuses the growth of computing power with a change in everything, feeds precisely the myths around technology that the philosopher dispels with rare persuasiveness, including fears of a malevolent superintelligence and of the total disappearance of jobs. The picture he paints is no more optimistic for that, but the real grounds for concern come through with striking clarity.
At ForkLog’s request, Ivan Napreenko, sociologist and editor at the books site Горький, visited Koenig in his Paris office. To the barely audible sound of drone ambient, the philosopher explained why he believes technology itself is neutral, why we should all be looking closely at China, and whether decentralisation can save free will.
— Your book depicts a future in which AI prevails. It is a world of freedom without democracy, art without artists, dating without a chance to meet someone compatible, and so on. The picture is unbearably oppressive, even as you offer advice on how to avoid all this. How optimistic are you about the prospects for preserving freedom in a world of total AI domination?
— In the book “Democracy in America” Tocqueville writes about how democracy threatens freedom and, at the end, invites readers to look to the future with “that salutary fear of the future which makes men keep watch and fight for freedom, and not that faint and idle terror which depresses and enervates the heart.” In other words, the question is not whether to be pessimistic or optimistic, nor whether to ban technology; Tocqueville, after all, did not propose returning from democracy to aristocracy. I see my role as explaining what these societal changes mean for a person who loves personal freedom.
People are obsessed with technology, and my job is to offer guidance; whether people heed it or not is not my concern. If we talk about personal practices, I would offer what one might call Stoicism 2.0. What does that mean? The problem of attention capture has always existed. Montaigne wrote about the “greedy and sharp craving for news,” which compels us to disregard civility and interrupt a conversation to peel away the seal on a newly delivered letter. In other words, the problem of unsolicited messages existed then as well. Montaigne, aware of his own fondness for news, the “passion for novelty,” opposed it with discipline. In this he followed the ancient Stoics, who sought to avoid painful passions. Neither they nor I speak of abstractions; we are talking about very concrete practices of “disconnection.”
For example, on my phone I have disabled all notifications, and in the mornings, while I write, I switch it off altogether. I do not use geolocation; instead, before going to an unfamiliar place, I study the route on Google Maps and try to memorize it. In small things I strive to preserve what I call free will: independence, autonomy. It is I who decide, not the machine. If you look a little deeper, technology can even help support freedom. For instance, you can anonymise searches so that the results are not tailored to your previous queries. In other words, the threat is not in AI as such.
— Here many philosophers would disagree.
— Unlike the representatives of the two opposing camps, techno-optimists and techno-pessimists, I do not believe that technology is a carrier of political ideology or power. That is what Elon Musk and Jacques Ellul both argue, from opposite positions. I, by contrast, think technology is neutral and that human beings simply use it.
David Graeber and David Wengrow, in their wonderful book The Dawn of Everything, draw on archaeology and anthropology to show that with the emergence of the first agricultural tools some communities became sedentary while others remained nomadic, and that the variability among communities was immense. This means, the researchers argue, that even early societies used technology in line with different political projects, and not the other way around. In short, anthropologically we are capable of deciding what our values will be and of using technology instrumentally. I think the situation today is exactly the same.
From an anthropological point of view our society is mature enough to discuss such things publicly and decide. There are technologies that societies have rejected: human cloning, for example, on which there is broad consensus. These are political, consciously articulated decisions that reflect chosen values. I believe that in France we can do this, and at the European level the prospects are encouraging.
— You argue that technologies are neutral, yet the technological history of AI is full of forks with clearly ideological overtones, as Matteo Pasquinelli’s research shows. And these ideological forks, at the very least, create a predisposition toward particular uses of a technology.
— It is not the technologies themselves but their specific commercial uses that can carry ideological undertones. Jaron Lanier, who coined the term “virtual reality,” has interesting things to say about this. In the 1990s, tech circles dreamt of a libertarian internet built on the principles of openness and free access. Then came advertising, marketing, targeting and personalisation, and we ended up where we are: in a world of hyper-manipulative junk.
What does this imply? That the internet can be used in very different ways. Data can be handled in very different ways. We can make very different choices. For example, I use AI to identify plants from their leaves. Unlike Jacques Ellul, I do not feel my phone bending me toward particular preferences; I think I am simply enjoying the fruits of human genius.
But there are other situations, when AI, for commercial reasons, tries to influence what I watch, read or love. Say I bought a projector to connect to my computer and watch films. It turns out the device offers only Netflix, Google Play and YouTube, with dedicated buttons on the remote, and I must sign in through Gmail to access any of it. It is a total nuisance! A tool meant to give me the freedom to watch what I want becomes a doorway into Google, where an AI identifies me and immediately starts pushing a deluge of content at me. When I see that on screen, it feels like an assault.
— And do you not notice a philosophical irony here: AI began as an attempt to emulate the individual human brain, was used by proponents of individualism to push their ideas, and has ended up as a kind of general intelligence, a collective mechanical consciousness, which develops most successfully in China?
— Peter Thiel says that “artificial intelligence is communist,” and I think he is right. How a technology is used depends on the society. The choice AI offers me is not the best choice for me alone but the best choice for me and the rest of society together; in other words, utilitarian logic is at work. Think of it as a ladder with ten rungs. A group of five people stands on rung five. It is decided that four of them will move up to rung seven while one drops to rung three. From the AI’s standpoint this is a good decision, because it raises the group’s average rung. In the same way, the routes a GPS navigator generates are not necessarily the best for you personally; they improve traffic overall.
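To make the utilitarian arithmetic of the ladder example explicit (the rung numbers are Koenig’s own; the calculation is spelled out here only for clarity):

$$
\bar{r}_{\text{before}} = 5,
\qquad
\bar{r}_{\text{after}} = \frac{4 \cdot 7 + 1 \cdot 3}{5} = \frac{31}{5} = 6.2,
$$

so the group’s average rung rises even though one member falls from rung 5 to rung 3: a gain for the aggregate, a loss for that individual.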
For China this approach is entirely natural, even morally virtuous, since in a Confucian society virtue lies in integration into society. In the West the philosophy is different: we are more focused on the self. Our thinking goes back to Socrates, who did not venerate the gods of the polis, who taught people to think for themselves, and who opposed authority. Confucius, a contemporary of Socrates, never opposed authority: he served as an official and advised rulers on how to govern for harmony and balance.
That is to say, the answer to your question depends on the point of view. For Western individualist philosophy the current state of affairs is indeed ironic; from the vantage of Chinese philosophy there is no irony, and all is well. AI in fact suits China perfectly: it helps counter the United States geopolitically and economically, and it aligns with the Chinese social model. When Alibaba engineers share data with the police, that is perfectly normal there: if it helps the police catch criminals, why not help? In the West, privacy is valued, and the police in such a scenario are the bad guys.
— So do you regret not having interviewed more Chinese people for the book?
— It is easy to be charmed by them. They are confident, relentlessly optimistic, and it is contagious. They have no taboos; they go all the way. You get the sense that you are the one missing out on something. It reminds me a little of France at the end of the 19th century: entrepreneurs everywhere, people working hard, living in the expectation that tomorrow will be better than today. Nothing like the melancholy that surrounds us Europeans today. We feel a sense of lost identity; they feel they are conquering the universe and that it will belong to them. I have seen Europeans in China adopt the Chinese view of certain things and begin to criticise Europe’s approach to regulating technology. They say: it is very cool to have facial-recognition cameras in schools, and the personalisation of education is excellent.
Tocqueville went to the United States to understand how the democracy of tomorrow would work. Today one must go to China to understand how a society governed by AI will function. Is that the future we want? China is rushing ahead; Europe is not rushing, but cracks are appearing here too. France, for example, has approved smart cameras for the sake of the Olympic Games. That is why we need China as a counterexample: to have a well-argued answer to the question of how not to proceed, and why.
— In your book you systematically debunk myths about AI. Is there anything in this technology that still remains mysterious to you, or does it no longer inspire awe?
— I am still astonished that the human mind can create such powerful and efficient technologies. But once you start to understand how it all works, it becomes fairly simple and, in a sense, calming. The plus and the minus of being a journalist is that talking to the experts who animate these technologies quickly dispels the myths, but it also leaves fewer topics to discuss, because people enjoy talking about what they do not understand well.
Take Elon Musk, who backed the idea of a moratorium on AI. He later recanted, but that is not the point. If you look at the list of signatories to the call for a pause, there are relatively few AI experts on it: mostly astrophysicists, philosophers and sociologists. In other words, such ideas are often advanced by people who do not truly understand how the technology works. But if you peer inside the machine, pop the hood, you see that it is not so complicated. The principle, at least, is not hard to grasp.
What lies before us is a product of human endeavour. People arrive at results through causal reasoning. A machine does not think; strictly speaking, it has no intelligence. It imitates thinking by correlating the results of human labour. Millions of people label images of cats for the machine; the machine then draws millions of correlations and eventually recognises the image of a cat better than a human does, without performing anything like human reasoning. Yet a child who sees a cat once will recognise cats for life. The processes behind these two recognitions are entirely different: human thought is rooted in continuous bodily processes, in homeostasis and evolution, while machine intelligence is disembodied and discrete. “Intelligence” here is as much a metaphor as “neural network,” which contains no neurons at all.
As for jobs: machines have long since beaten humans at chess, yet chess is more popular than ever, because it matters little what machines can or cannot do. A car accelerates to 100 km/h in a matter of seconds, but we still compete in running. The only jobs that will disappear are those that are already effectively automated: what Graeber called bullshit jobs, which we ourselves created. And the problem there is not the machine; it is that we have built a world in which such work is nonsense, yet people perform it anyway.
— And do you use ChatGPT yourself?
— At first it did not impress me, but I gave it a second chance while working on a book about philosophers and the earth. I was looking for philosophical remarks about compost, a very philosophical object if you think about it. I asked friends, I googled, and found nothing. I asked ChatGPT and immediately got a quotation from Rousseau, from the fourth book of the treatise Emile, or On Education. Wow, I thought, what a great quotation, perfectly on topic. I decided to check it, read the whole book, and found nothing of the kind. I went back to ChatGPT, and it wrote to me: “Pardon.” Not because it had checked the original and seen its mistake, but simply because “pardon” is the word that, in its training data, most probably follows a message pointing out an error.
It is rather monstrous: ChatGPT is the anti-Wikipedia. Where Wikipedia is obsessively referenced, ChatGPT is knowledge without sources, knowledge that can be neither verified nor refuted. If such unchecked “knowledge” circulates, it is a catastrophe: a fog descends in which illusions and truths become indistinguishable, a classic post-truth scenario. Yet self-confident tech elites, driven by the fear of foregone profits, promote the mass adoption of ChatGPT and similar technologies with almost reckless enthusiasm. That is the real problem posed by AI, not the spectre of a malevolent superintelligence or mass unemployment.
— You also write that in a world ruled by AI there will be a theology without God. Have you yourself met people who think AI could serve as a materialist deity?
— In the United States there is a whole church of AI, but I chose not to engage with it. So no, I have not met such people. Yet there is a genuinely mystical, even religious side to this technocratic universe. It has an infantile, regressive character that undermines the gains of the Enlightenment. I mean a kind of animism that ascribes a soul to objects: devices are spoken of as if they had human agency. Amazon asks children to say “please” to Alexa; I detest the idea of treating a technological object as anything other than what it is. Or take the idea of digital immortality, as if digitised, discrete data could contain more than a tiny fraction of what a living, continuous body can do. Or the metaphor of the “cloud,” as if data existed in the heavens, when it is simply a matter of electricity: no energy, no data. That is why I still print everything, including photos. The more striking and troubling fact is that many of these metaphors and regressive notions are shared by people whom one would hardly suspect of ignorance or obscurantism.
— For this book you conducted 125 interviews and circled the globe. Was it difficult to stitch the material into a storyline?
— By and large, this is a rather narrow corner of the universe, and there were few adventures. People are locked into their own views and give you a minimum of time; books on the subject are dull; analytic philosophy reigns everywhere. I do not find this world particularly human: it lacks richness of relationships. And it is very secretive. To reach people at Google, for instance, you must navigate labyrinths of approvals. There is a culture of secrecy there that is absolutely disconcerting. We have a right to know what goes on inside a company that supplies us with search results; that is a question of democracy.
The storyline took shape easily: I began with Harari’s ideas about free will and its disappearance, which are compelling precisely because they are contestable and topical; I tried to rethink them and then see what follows for public policy and technology. All in all, I spent a few months immersing myself in the subject, and then the work accelerated.
— AI is a story of hyper-centralisation. Do decentralised technologies, in your view, work to the benefit of free will?
— Yes, certainly. And here we return to the idea that technology carries no ideology: everything depends on how we use it. We can see clearly that AI and other technologies can be deployed in the direction opposite to the one Silicon Valley desires, toward privacy. When Austria’s chancellor proposes enshrining the use of cash in the constitution in order to preserve individual freedom, he is absolutely right. And with the same technologies one can build electronic payment systems that are fully anonymous, an alternative to the terrifying state digital currencies. Again, this is a political question. The European Central Bank has a chance to restore some freedom to citizens; let us hope we land on the right side.
— You have tried your hand at practical politics. Have you had occasion to gauge how popular ideas of digital anonymity and independence are in France?
— Yes, I spent a full year running a campaign and learning what people in different parts of the country think. Workers and the middle class have a healthy instinct of distrust toward central authority and a strong sense of self-respect. They understand that all this liking, favouriting and clicking runs counter to dignity. Politicians and economic elites press them: this will improve your life, it will simplify this and that, it is perfect for you, your children, your business. People feel irritation, but they do not yet have a political framework for it. I hope that will change; for my part, I can only persuade and act as a proponent of my ideas.
— Among your practical recommendations for Europe you propose legally restricting technologies, but that would inevitably reduce economic competitiveness. Does that imply that restricting AI requires a different economic paradigm, where efficiency and competitiveness matter less?
— First, what I propose would reduce efficiency, not competition. In the industrial paradigm, more competition meant more efficiency. In the AI paradigm, more competition means less efficiency, which is why China builds colossal monopolies and pools information in a “data lake” from which those monopolies derive knowledge about the entire population. The natural relation between a market economy and efficiency has shifted. In the United States a major antitrust case against Google has just begun; if the company loses, its efficiency will decline. People will have to decide what they value more: efficiency or freedom. For me, competition is an aspect of personal freedom.
Secondly, on the economic paradigm: having a market economy does not mean we may buy and sell people. The economy is governed by political parameters that rest, in turn, on values. Those values, for instance private property in personal data, must be the subject of public discussion. If we reach a common understanding on this question, we can take decisions, grounded in those values, that reduce the economy’s efficiency while still leaving room for market mechanisms, the best mechanisms we have yet found.
— Finally, a question. You dislike those who call for a moratorium on AI development, yet a recurring figure in your book is the Silicon Valley startup founder who forbids his own children to use what he develops. How do these fit together?
— I do not like moratoria declared for invented reasons. And what you describe is pure hypocrisy. Remember Steve Jobs, who forbade his own children to use the iPad. These people preach that technology will save everyone, but at home they make their children read and exercise and keep them away from gadgets: they know all too well the problems technology creates. Meanwhile those hit hardest by the tech onslaught, the working class, have smartphones from the age of two and video games without restriction, and they literally become less intelligent as a result. This cognitive inequality is well understood in China, where children are not allowed to use TikTok for more than an hour a day. In other countries the Chinese impose no such limits.
I proposed a bill to ban social networks for those under 16; in any case, until 15 the question should be decided by parents. Some steps in this direction have already been taken: mobile phones, for example, are banned in French middle schools. Everyone is happy; no one suffers. Such rules can and should be advanced through public policy, in the French manner.
Unlike some, I do not fear that humanity will lose its wits and its intellect: a cognitive elite, those who read books and receive an education, will always exist. The problem is that technology dramatically deepens the initial inequality between those who study and those who are manipulated by algorithms from childhood.
