Why is the “memory” of AIBO, Japan’s robotic dogs, stored in the cloud, and how do marketers make us feel empathy for inanimate objects? Read about this in an excerpt from “Hacking Everything: How the Powerful Exploit System Vulnerabilities for Their Interests” by Bruce Schneier, a pioneer of modern cryptography.
During the 2016 US elections, about one-fifth of all political tweets were posted by bots. During the UK’s Brexit referendum that same year, the share was one-third. A 2019 report from the Oxford Internet Institute found evidence of bots being used to spread propaganda in 50 countries. These were mostly simple programs mindlessly repeating slogans. After the murder of Jamal Khashoggi in 2018, for example, bots posted about a quarter of a million pro-Saudi tweets reading “We all trust [the crown prince] Mohammed bin Salman.”
In 2017, the Federal Communications Commission opened an online public comment period on its plan to repeal net neutrality. It received an astonishing 22 million comments, many of which, perhaps as many as half, were submitted using stolen identities. The fakes were crude: 1.3 million of them were clearly generated from the same template, with a few words changed to make each one look unique. This was obvious to the naked eye. Such attempts will only grow more sophisticated. For years, AI programs have written sports and financial stories for real news organizations such as the Associated Press. The constrained nature of most reporting on those topics made it easy to adapt to AI. Today AI is used to write more general stories. Modern text-generation systems, such as OpenAI’s GPT-3, can write believable stories when fed facts, and they can churn out fake news just as easily when fed lies.
It does not take much imagination to see how AI will degrade political discourse. Already, AI-driven personas can write letters to newspapers and elected officials, leave coherent comments on news sites and message boards, and argue about politics on social media. As these systems grow more detailed and imitate personality more convincingly, they become harder to distinguish from real people. A tactic that was once easy to spot is no longer easy to recognize.
In a recent experiment, researchers used a text-generation program to submit 1,000 comments in response to a government request for public input on a Medicaid issue. Each comment looked unique, as if written by a real person advocating a political position. The administrators of Medicaid.gov did not even question their authenticity and accepted them at face value. The researchers later flagged the comments and asked for them to be removed so as not to skew the policy debate. Not everyone will be so ethical.
These techniques are already being used in the real world to influence politics. An online propaganda campaign used AI-generated face photos to create fake journalists. China disseminated AI-generated text messages designed to influence Taiwan’s 2020 elections. Deepfake technology, which uses AI to create realistic videos of fabricated events, often featuring real people appearing to say things they never said, is already being used for political ends in countries such as Malaysia, Belgium, and the United States.
One extension of this technology is the persona bot: an AI that passes itself off as a human on social media. Persona bots have their own backstory, character, and communication style. They do not engage in overt propaganda. Instead, they infiltrate groups organized around ordinary interests: gardening, knitting, model trains, anything. They behave like regular members of those communities, posting, commenting, and discussing. Systems like GPT-3 let them easily mine previous conversations and relevant online content to appear knowledgeable about a given topic. Then, every once in a while, a persona bot may post something political, for example an article about a healthcare worker’s allergic reaction to the COVID-19 vaccine, along with worried commentary. Or it might promote its developer’s views on a recent election, racial justice, or any other polarizing topic. One persona bot cannot shift public opinion, but what if there are thousands of them? Or millions?
This phenomenon already has a name: computational propaganda. It will fundamentally change how we think about communication, because AI can make the spread of misinformation effectively limitless. And it may change the nature of communication itself.

In 2012, robot ethicist Kate Darling ran an experiment with Pleo, an animatronic plastic dinosaur toy that responds to touch. After conference participants had played with Pleo, she tried in various ways to persuade them to hurt the toy. Even a brief interaction produced such strong empathy that people refused, although the toy obviously felt no pain. This is a fundamental human reaction. Rationally we understand that Pleo is just a green plastic dinosaur, but the big head paired with a small body makes us perceive it as a baby. Moreover, the toy has a name, and it responds to our touch! Suddenly we treat it as a living being and feel obliged to protect it from harm. And although such a reaction seems harmless, what happens when this cute little robot looks at its owners with big sad eyes and asks them to buy it a software update?
Because we humans tend to blur these categories and treat robots as living beings with feelings and desires of their own, we are vulnerable to being manipulated by them. Robots can persuade us to do things we would not otherwise do, and they can scare us out of doing things we otherwise might. In one experiment, a robot successfully used peer pressure to push participants into taking greater risks. Ask yourself: how long before a sex robot starts suggesting in-app purchases at the most critical moment?
AI will only get better at this kind of persuasion. Researchers are already building AI systems that detect emotions by analyzing our writing, reading our facial expressions, or tracking our breathing and pulse. They still make mistakes, but as the technology improves, so will they. At some point AI will surpass human abilities here, and that will allow it to manipulate us more precisely. But there is one caveat: our adaptability, mentioned earlier, has not gone anywhere.
AIBO is a robotic dog that Sony introduced in 1999. The company released new and improved models every year until 2005, then gradually dropped support for the older AIBOs over the following years. Even by the computing standards of the time, AIBO was fairly primitive, but that did not stop people from forming emotional attachments to their “pets.” In Japan, there even arose a tradition of holding funerals for “dead” AIBOs.
In 2018, Sony began selling a new generation of AIBO. It includes many software enhancements that make it more pet-like, but what is more interesting is that the new AIBO requires cloud data storage to function. This means that, unlike previous versions, Sony can remotely modify or even “kill” any AIBO. The first three years of cloud storage are free; after that, the company charges $300 per year. The bet is that over those three years owners will form an emotional attachment to their pets. You could call this tactic “emotional capture.”
As AI and autonomous robots take on more real-world tasks, the erosion of trust in such systems will have dangerous and costly consequences. But let us not forget that people run AI systems. All of them are developed and funded by people who want to manipulate other people in a particular way and toward a particular goal.
Corporations like Sony, and other powerful actors, have long and persistently tried to hack our emotions for power and profit, and they invest substantial resources in research and technology to do it. Without an active effort to establish norms and rules that curb such hacking, we will soon find the literally inhuman capabilities of AI systems turned against ordinary people in the interests of their powerful masters.
Translated from English by Mikhail Belogolovsky. Published in: Bruce Schneier. Hacking Everything: How the Powerful Exploit System Vulnerabilities for Their Interests. Moscow: Alpina Publisher, 2024.
