
‘The market is a computer’: how Friedrich von Hayek reached that conclusion

The publisher Individuum has released the Russian translation of philosopher of science Matteo Pasquinelli's Measure and Impose: A Social History of Artificial Intelligence. ForkLog publishes an excerpt on Friedrich von Hayek's connectionism and the debates over the role of computing machines in a market economy.

Image: Individuum.

Neural networks as a model of the mind

How does a set of different stimuli come to be associated with the same class, that is, recognised as a recurring pattern? What neural process makes classification possible? Hayek's connectionism offered an empirical account of the relation between perception and cognition. Under the influence of Warren McCulloch and Walter Pitts's neural-network ideas, Hayek reduced cognition to simple decision-making (whereas the Gestalt school linked cognition to intuition, Einsicht). In the McCulloch–Pitts model, a cascade of layers of nodes (each consisting of several neurons, or switches) filters a large volume of input down to a single binary output (one neuron, or switch) that decides whether a group of incoming stimuli belongs to a given class. The solution is elegant in its economy: many inputs, one verdict of "yes" or "no". As in supervised machine learning, the final node is assigned to its class by convention (for example, given the label "apple"). The model is non-isomorphic, meaning that no part of it resembles the knowledge it interprets: there is no localised region of the network that stores, say, the general shape of an apple in recognisable proportions. Correct classification of stimuli depends on the overall behaviour of the computational structure.
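To make the scheme concrete, here is a minimal sketch in Python of such a McCulloch–Pitts cascade; the features, weights and thresholds are invented for illustration and are not taken from the book:

```python
# A minimal sketch of the McCulloch-Pitts scheme described above.
# All features, weights and thresholds here are invented for illustration.

def mp_neuron(inputs, weights, threshold):
    """A McCulloch-Pitts neuron (a 'switch'): fires iff the weighted sum
    of its binary inputs reaches the threshold. Weight -1 acts as inhibition."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def classify(stimulus):
    """Cascade two layers of switches into a single binary verdict.

    stimulus: four binary features, e.g.
    [round_shape, red_or_green, has_stem, larger_than_fist]
    """
    # Hidden layer: each node filters a different combination of features.
    h1 = mp_neuron(stimulus, [1, 1, 0, 0], threshold=2)   # round AND coloured
    h2 = mp_neuron(stimulus, [0, 0, 1, -1], threshold=1)  # stem AND NOT oversized
    # Output layer: one neuron renders the yes/no verdict; the label "apple"
    # is attached to this final node purely by convention.
    return mp_neuron([h1, h2], [1, 1], threshold=2)

print(classify([1, 1, 1, 0]))  # 1 -> stimuli grouped into the class "apple"
print(classify([1, 1, 0, 1]))  # 0 -> stimuli rejected
```

Note that the verdict emerges only from the wiring of the whole cascade: no single node or weight "contains" an apple, which is precisely the non-isomorphism described above.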

Yet Hayek’s connectionism—which may be called Gestalt-connectionism to distinguish it from McCulloch and Pitts’s logical connectionism and Rosenblatt’s statistical connectionism—was not a defence of a computational theory of mind. Hayek argued that the mind (which, in his view, is nothing other than a mental order and a self-organising network of entities, such as neurons) can create only a model, not a representation of the world (the sensory order formed by relations among qualia). Hayek wrote: “Thus what we call mind is a particular order of a set of events taking place in some organism and in some manner related to the physical order of events in the environment but not identical with it.” In 1945 the cyberneticians Arturo Rosenblueth and Norbert Wiener described model-building in much the same way:

“Partial models, however imperfect, represent the only means devised by science to comprehend the Universe. This statement implies not a defeatist position but the recognition that the basic instrument of science is the human mind and that the human mind is finite.”

Building a model means realising one environment within the internal parameters and constraints of another. In this translation, some elements are inevitably scattered, rounded off or distorted. Hayek, too, acknowledged that mental order is a partial, often false interpretation of reality:

“We have seen that the classification of stimuli effected by the sense organs is based on a system of acquired connexions which reproduce only partially and imperfectly the relations existing between the respective physical stimuli. The ‘model’ of the physical world formed in this manner will reproduce the relations of the world only in a very distorted fashion; and the classification of these events by our senses will often be false, that is, will generate expectations which will not be confirmed by the events.”

Tellingly, after Charles Babbage, it was again a political economist who stood at a watershed in the history of computing. Babbage proposed deploying computation to automate mental labour in industrial processes; Hayek contended that the centralised calculation of market transactions was impossible and, in any case, would harm the autonomy of the market. The theoretical differences and the historical gap between Babbage and Hayek mirror the split between symbolic and connectionist AI, between a representational idea of cognition and a modelling one. Babbage's project of automating mental work on the model of manual calculation unfolded into the Turing machine and the deductive algorithms of symbolic AI: numerical manipulation became symbol manipulation, leaving little room for meaning and adaptation. If Babbage's computing was born of a quest for the accuracy required to purge errors from logarithmic tables, connectionism (including Hayek's strand) offered a flexible, adaptive epistemology. Following Hayek and John von Neumann, Frank Rosenblatt stressed that his perceptron neural network was a simplification and exaggeration of certain features of the human mind and did not claim to be a definitive paradigm of intelligence.

The market as a model of neural networks

Beyond a theory of pattern recognition, Hayek is known for introducing a technical definition of information (before the term was in common use) in his 1945 essay "The Use of Knowledge in Society", which anticipated Shannon's mathematical theory of communication of 1948. His working definition concerned units of communication, more precisely "price signals". Hayek is also known for describing the market as a computer or, in the language of the time, as a distinctive distributed telegraph network, "a kind of machine for registering changes or a system of telecommunications" (it should be noted that electronic computers were not yet a widespread technology in those years):

“We must look at the price system as such a mechanism for communicating information if we want to understand its real function — a function which, of course, it performs less perfectly as prices grow more rigid. […] The most significant fact about this system is the economy of knowledge with which it operates, or how little the individual participants need to know in order to be able to take the right action. In abbreviated form, by a kind of symbol, only the most essential information is passed on, and passed on only to those concerned. It is more than a metaphor to describe the price system as a kind of machinery for registering changes, or a system of telecommunications which enables individual producers to watch merely the movement of a few pointers, as an engineer might watch the hands of a few dials, in order to adjust their activities to changes of which they may never know more than is reflected in the price movement.”

Where cyberneticians aspired, with some hubris, to full automation, Hayek argued that the complexity of markets would exceed the hardware limits of any device for computation and equation-solving. Two decades later, from the other camp in the planning debate, the economist Oskar Lange, arguing for the use of new powerful electronic computers to tackle the mathematical problems of economics, retorted that innovation had overcome those limits: “So what is the problem? Let us put the simultaneous equations into the electronic computer — and we shall obtain the solution in less than a second.” For Lange the computer was a new instrument of cognition enabling a new view of the economy, since “the electronic computer performs a function which the market will never be able to perform.” Indirectly, Lange proposed using the computer as a technical mediator to resolve the tensions between centralised planning and market spontaneity. This was the insight later embraced by left-accelerationist rhetoric, which argues for public algorithmic planning against corporate planning in the age of big data; Fredric Jameson, for instance, advocates nationalising the computing power of global logistics giants such as Walmart and Amazon. But what kind of computation did Lange have in mind? In the next, often overlooked, part of his argument he refers not to deterministic computation but to something akin to the learning process of artificial neural networks:

“The market mechanism and the trial-and-error method which I proposed in my essay did indeed play the role of a computing device for solving a system of simultaneous equations. The solution was obtained by presumably convergent iterations. The iterations were based on the principle of feedback which gradually eliminated deviations from equilibrium. The process was expected to operate like a servomechanism which, by means of feedback, automatically eliminates disturbances… The same process can be carried out by means of an electronic analog machine which simulates the iterative process implied by the tâtonnements of the market mechanism. Such an electronic analog (servomechanism) simulates the working of the market. This statement, however, can be turned around: the market simulates an electronic analog computer. In other words, the market can be regarded as a peculiar calculating machine which serves to solve a system of simultaneous equations. It operates like an analog machine: a servomechanism based on the principle of feedback. The market can be regarded as one of the oldest historical devices for solving simultaneous equations. What is interesting is that the mechanism of solution operates through social rather than physical processes. It turns out that social processes can also serve as the basis of feedback devices which solve equations through iterations.”

In Hayek’s connectionist tradition, Lange portrayed the market as a social machine that solves simultaneous equations through stepwise approximations (tâtonnements), akin to training an algorithm that adjusts its parameters by trial and error. This use of approximate methods to solve market equations has nothing to do with centralised socialist economics; it is closer to modern training algorithms for artificial neural networks (notably backpropagation and gradient descent). As the passages from Hayek and Lange show, in twentieth-century economic debates models of market and computation sometimes traded places, but the real stakes were the agency and autonomy of the underlying social processes.
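The analogy can be made concrete with a toy sketch of Lange's tâtonnement as a feedback loop. The supply and demand functions below are invented for the example, and the update rule stands in for Lange's "servomechanism" rather than for any actual planning algorithm:

```python
import numpy as np

# A toy tâtonnement: prices are adjusted by feedback until excess demand
# vanishes, i.e. until the simultaneous equations demand(p) = supply(p)
# are solved by iteration rather than by inverting the system directly.
# The demand and supply functions are invented for this illustration.

def excess_demand(p):
    demand = np.array([10.0, 8.0]) - np.array([[1.0, 0.2], [0.3, 1.0]]) @ p
    supply = np.array([[0.8, 0.0], [0.0, 0.6]]) @ p
    return demand - supply

p = np.array([1.0, 1.0])   # arbitrary starting prices for two goods
eta = 0.1                  # step size of the feedback correction

for step in range(1000):
    z = excess_demand(p)
    if np.max(np.abs(z)) < 1e-9:   # equilibrium: all markets clear
        break
    p += eta * z   # raise prices where demand exceeds supply, lower otherwise

print(f"equilibrium prices after {step} iterations: {p}")
```

The update rule p += eta * z is the feedback Lange describes: each iteration pushes the deviation from equilibrium back into the prices, formally the same move as gradient descent pushing an error signal back into a network's parameters.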

Excerpted from: Matteo Pasquinelli. Measure and Impose: A Social History of Artificial Intelligence. Moscow: Individuum, 2024. Translated from English by Ivan Napreenko.
