
God or golem? Why AI inspires religious awe

Individuum has released a Russian translation of the American writer Meghan O’Gieblyn’s book God, Human, Animal, Machine: Seeking Meaning in a Disenchanted World. With the publisher’s permission, we present an excerpt explaining how a seemingly odd religious observation by Norbert Wiener, the father of cybernetics, has been clarified by the rise of modern artificial intelligence.

Image: Individuum.

In 1964 Norbert Wiener published a slim and decidedly odd book, God and Golem, Inc., about the way new technologies raise questions that are, in essence, religious. One arose in connection with machine learning—artificial intelligence that, in Wiener’s words, “learns, or appears to learn,” from its own experience. He was writing when computers were only just beginning to play games against their human creators. Most played checkers; some had been taught chess, though they were not very good at it. Even then, Wiener saw glimmers of what such machines might become. One checkers-playing computer, known as Samuel’s machine, learned to beat its creator. The engineer eventually worked out the strategy the machine was using and regained the upper hand, but this raised the question of whether a machine could outwit the engineer who made it. For Wiener, the question was fundamentally theological. The Judeo-Christian tradition assumes that a creator is always more complex and powerful than his creation.

Yet, Wiener noted, there are exceptions—and one appears in the Book of Job. He had in mind the opening chapter, in which God makes a wager with Satan over Job’s soul. “According to orthodox Jewish and Christian beliefs, the devil, too, is God’s creation, like every living thing,” Wiener wrote. “Such a game at first appears to be an unequal contest in which one player holds an overwhelming advantage. To play against an omnipotent and omniscient God is simply foolish.” He argued, though, that Satan’s defeat was neither inevitable nor foreordained.

The conflict between God and the devil turns out to be a real conflict, in which God figures as something less than absolutely omnipotent. Yes, he truly enters into conflict with his creation—and may well lose. Yet that creation was made by him of his own free will and has apparently acquired its capacity to act from God himself. Can God play a full-fledged game with his own creation? Can any creator, even a limited one, play a full-fledged game with his own creation?

For Wiener the answer was yes—at least when it came to machine learning. These machines were not programmed to search for optimal moves; they knew only the object and rules of the game. The machine moved without prolonged deliberation, but the outcome of each move was stored in memory. Each past decision was weighted according to its usefulness—whether it contributed to a win—and this “experience” was used to refine strategy continually. In other words, the machine learned as humans usually do: by trial and error. In some cases, Wiener observed, the machine’s decisions seemed intuitive and even displayed “supernatural cunning”. It became clear that the machine had mastered abilities not built into it at the outset. “In designing machines with which he plays games, the inventor assumes the functions of a limited creator,” Wiener wrote, “whatever the nature of the game devices he is creating. This is especially true of game machines that learn from their experience.” Alan Turing said much the same in 1951: “It seems likely that once the method of machine thinking begins to work, it will not take long to outstrip our feeble powers.”
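What Wiener describes is, in modern terms, learning from self-play: try moves, record the outcomes, and shift weight toward the moves that led to wins. The sketch below is only an illustration of that idea, not Samuel’s actual checkers program; the game (a simple take-away game), the random opponent and all the parameters are invented for the example.

```python
import random
from collections import defaultdict

# A toy version of learning a game purely by trial and error, in the spirit of the
# passage above (not Samuel's actual program). The game: 15 stones on the table,
# players alternately take 1-3, and whoever takes the last stone wins.

values = defaultdict(float)   # learned weight of each (stones_left, move) pair
ALPHA = 0.1                   # how strongly one game's outcome adjusts the weights
EPSILON = 0.2                 # fraction of exploratory (random) moves while training

def choose_move(stones):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < EPSILON:
        return random.choice(moves)                       # an occasional trial move
    return max(moves, key=lambda m: values[(stones, m)])  # otherwise the best-weighted move

def play_one_game(stones=15):
    """Learner vs. a random opponent; returns the learner's moves and the result."""
    history = []
    while True:
        move = choose_move(stones)
        history.append((stones, move))
        stones -= move
        if stones == 0:
            return history, 1.0                           # learner took the last stone: a win
        stones -= random.choice([m for m in (1, 2, 3) if m <= stones])
        if stones == 0:
            return history, 0.0                           # opponent took the last stone: a loss

# "Each past decision was weighted according to its usefulness."
for _ in range(20000):
    history, result = play_one_game()
    for state_and_move in history:
        values[state_and_move] += ALPHA * (result - values[state_and_move])

# The learned weights now encode a strategy nobody wrote down explicitly.
print({move: round(values[(15, move)], 2) for move in (1, 2, 3)})
```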

In 1994 a computer program beat the world checkers champion. Three years later Deep Blue defeated Garry Kasparov at chess. In 2011 IBM’s Watson won the quiz show Jeopardy!, trouncing two long-standing champions. Yet even after these triumphs, few were prepared for what happened in 2016 in Seoul. That spring hundreds gathered on the sixth floor of the Four Seasons Hotel to watch Lee Sedol, one of the world’s leading Go players, face AlphaGo, an algorithm created by Google’s DeepMind. Go is an ancient Chinese board game far more complex than chess; the number of possible board positions exceeds the number of atoms in the universe. Midway through the match, AlphaGo made a move so strange that everyone assumed it was an error. “This is not a human move,” said a former champion. “I have never seen a person make such a move.” It proved decisive. The computer won that game, then the next, and took the five-game match.

AlphaGo is based on deep learning, an especially potent form of machine learning that has become the tool of choice for making predictions from vast troves of raw data. It helps credit underwriters decide whether to approve a loan. The CIA uses it to forecast social unrest. Deep-learning systems are deployed by airport security to match faces against scanned passport photos, by hospitals to detect signs of cancer, and by Instagram to warn users that a post may be offensive. Much in life, it turns out, can be gamified—reduced to a set of simple rules that let these machines build their own models of the world—and those models can be eerily accurate. The years after AlphaGo’s victory were marked by seemingly boundless enthusiasm for the machine-learning revolution, and for deep learning in particular, lauded for its “unreasonable effectiveness”. By 2017 these algorithms had surpassed radiologists at detecting lung cancer, learned to recognise images faster and more efficiently than humans, and composed baroque chorales so convincing that professional musicians mistakenly attributed them to Bach.

But the technologies have also provoked misgivings. Many forms of machine learning are technological black boxes—we cannot determine what, exactly, goes on inside. A deep network is built from many layers of artificial neurons, and it is impossible to know what model of reality it constructs from its experience. During training, internal nodes form that correspond to abstract features or relations the system has discovered, but these map onto no concepts in human language. (Even algorithms with a preternatural knack for recognising, say, dogs in photos have no idea what a dog is; they merely pick up patterns in data.) If you tried to document everything a neural network does between input and output, you would get billions of arithmetic operations—an “explanation” impossible to comprehend. When AlphaGo won in Seoul, even its creator, David Silver, could not explain the logic of the algorithm’s unexpected move. “It discovered this for itself,” Silver said, “in the course of reflection and analysis.” Specialists have often tried to decode such processes after the fact, but efforts to reduce them to simple, unambiguous explanations seem as futile as attempting the same with human thought (a professor at Uber’s artificial-intelligence lab called such efforts “artificial neurobiology”). More than any other technology, these algorithms answer Anderson’s call to abandon the quest for understanding in favour of objective knowledge. To access the higher knowledge possessed by machines, we must relinquish the desire to know why and accept the results as pure revelation.
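To make the point concrete, here is a deliberately tiny illustration of what sits between input and output: nothing but layered arithmetic on intermediate numbers that correspond to no human concept. The network below is randomly initialised and its dimensions are arbitrary; production systems differ mainly in having millions or billions of such weights.

```python
import numpy as np

# A toy "black box": a small, randomly initialised network. Tracing its work from
# input to output yields only arithmetic, not an explanation.

rng = np.random.default_rng(0)
layer_sizes = [64, 32, 16, 1]          # arbitrary sizes chosen for the illustration
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]

x = rng.normal(size=layer_sizes[0])    # the input: an image, a loan file, a case record
operations = 0
for i, w in enumerate(weights):
    x = x @ w                          # one layer of multiplications and additions
    operations += w.size
    if i < len(weights) - 1:
        x = np.maximum(x, 0.0)         # threshold the intermediate values
        print("internal activations:", np.round(x[:5], 2), "...")  # meaningful to no one

print("output score:", round(float(x[0]), 3))
print("arithmetic operations behind that single number:", operations)
```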

Wiener sensed that there was something essentially religious about machine learning, but perhaps miscast the roles in Job’s drama. Machine-learning algorithms are not an ingenious devil outwitting their creator. Instead they have become absolute sovereigns demanding blind submission. As these technologies are ever more integrated into public life, many people find themselves in Job’s position: denied the right to know why they were refused a loan, dismissed from a job or predicted to be at risk of cancer. The comparison with divine judgement is hard to avoid, especially given that the justice system has become a live laboratory for machine learning. Though statistical analysis has been used in policing since the mid-1990s, many law-enforcement agencies now rely on predictive algorithms to identify crime hotspots. One such system, PredPol, claims to forecast crime locations twice as accurately as human analysts. Drawing on historical crime data, it draws red boxes on a map around the neighbourhoods or city blocks it deems high-risk. As Jackie Wang notes in her book Carceral Capitalism, if you believe PredPol’s marketing, the system appears almost clairvoyant. The materials include stories of officers going to high-risk areas and catching offenders in the act.
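The core of such hotspot mapping can be caricatured in a few lines: bin past incidents into map cells and flag the cells with the most history. PredPol’s actual model is proprietary and considerably more elaborate; the coordinates, cell size and threshold below are invented for the illustration.

```python
from collections import Counter

CELL = 0.005                                   # cell size in degrees, roughly a few city blocks

def to_cell(lat, lon):
    """Snap a coordinate to a grid cell."""
    return (round(lat / CELL), round(lon / CELL))

past_incidents = [                             # invented (latitude, longitude) pairs
    (39.9526, -75.1652), (39.9530, -75.1648), (39.9528, -75.1650),
    (39.9601, -75.1710), (39.9604, -75.1712), (39.9812, -75.1550),
]

counts = Counter(to_cell(lat, lon) for lat, lon in past_incidents)
high_risk_cells = [cell for cell, n in counts.most_common(2)]   # the "red boxes" on the map
print(high_risk_cells)
```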

Proponents deny any resemblance to Minority Report (2002), in which police coerce people with extrasensory abilities to predict crimes so they can arrest would-be offenders before they break the law. One PredPol PR strategist said the program’s operation is not science fiction but “scientific fact”, stressing that the algorithms are entirely neutral and objective.

The technology is often touted as a way to reduce the impact of racist prejudice on policing. One of PredPol’s creators argued that because the algorithms focus on “objective” factors such as the time, place and date of possible crimes, rather than on demographic characteristics of individual offenders, the software “potentially reduces the role of any racial or socioeconomic biases among officers”.

Similar algorithms are used in courts to assess the danger a defendant poses and the risk of flight before trial. These systems analyse a defendant’s file, including the alleged crime, the place of arrest and the number of prior detentions (some models also consider employment, associates and credit score). They then compare this profile with hundreds or thousands of past cases and assign a score. On that basis, the court decides whether the defendant will await trial at home or in jail. Such algorithms first made national headlines in 2017, during the case of Eric Loomis, a 34-year-old from Wisconsin. His six-year prison sentence—for fleeing from a police officer—was partly based on an assessment by COMPAS, a predictive model of recidivism. During the proceedings the judge told Loomis that COMPAS had classified him as high risk. Naturally, Loomis wanted to know the criteria behind the decision, but he was informed that he could not challenge the algorithm’s judgement. The case reached the Wisconsin Supreme Court, which ruled against him.
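Since COMPAS and its competitors are proprietary, the following sketch only mirrors the general logic described above: compare a defendant’s file with past cases and output a score from 1 to 10. The features, the similarity measure and the data are all invented for the example.

```python
from dataclasses import dataclass
import math

@dataclass
class Case:
    age: int
    prior_detentions: int
    employed: int          # 1 = yes, 0 = no
    reoffended: int = 0    # known outcome for past cases; unused for the new defendant

def similarity(a: Case, b: Case) -> float:
    """A crude inverse-distance measure: more similar files count for more."""
    distance = math.sqrt((a.age - b.age) ** 2
                         + (a.prior_detentions - b.prior_detentions) ** 2
                         + (a.employed - b.employed) ** 2)
    return 1.0 / (1.0 + distance)

def risk_score(defendant: Case, history: list[Case], k: int = 5) -> int:
    """Score 1-10 based on how the k most similar past cases turned out."""
    nearest = sorted(history, key=lambda c: similarity(defendant, c), reverse=True)[:k]
    rate = sum(c.reoffended for c in nearest) / k
    return round(1 + 9 * rate)

history = [Case(23, 4, 0, 1), Case(45, 0, 1, 0), Case(31, 2, 1, 0),
           Case(19, 6, 0, 1), Case(52, 1, 1, 0), Case(27, 3, 0, 1)]

# The defendant receives a number; the reasoning behind it stays inside the machine.
print(risk_score(Case(age=34, prior_detentions=2, employed=0), history))
```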

The state attorney general said Loomis had exactly the same information about the case as the court (judges’ understanding of the algorithm, he noted, was just as hazy), and declared that the defendant was “free to question this assessment and point out its possible shortcomings”. That is a rather loose reading of freedom. Loomis was as free to doubt the algorithm as Job was to question Jehovah’s justice.

Another, more recent, case involved Darnell Gates, a Philadelphia resident who was on parole in 2020 after serving a prison term. He noticed that in some months he had to report to his parole officer more often than in others, but no one told him why: as it turned out, an algorithm was constantly revising his risk level. He might never have known had a New York Times journalist covering technology not told him. Gates was plainly troubled by the revelation. Judging by his interview with the Times, he felt how blurred the line is between omniscience and predestination—and how futile it is to try to win a game where the odds are stacked against you. “How can the algorithm understand me if it is also deciding what I am going to do?” Gates said. “How do you beat a computer that is designed to stop you? How do you stop something that is predetermining your destiny?”

Published edition: Meghan O’Gieblyn. God, Human, Animal, Machine: Seeking Meaning in a Disenchanted World. Moscow: Individuum, 2024. Translated from English by Maria Slavorosova.
