The chat sovereign

The very phrase “sovereign AI” evokes authoritarian regimes few would choose to inhabit. Yet “global artificial intelligence” conjures its own spectres: digital colonialism and other delights of a brave new world.

Why both notions are largely meaningless in AI development, serving chiefly as tools of manipulation, is the subject of this ForkLog essay by an author who asked to remain anonymous.

Printing press or atom bomb?

That is the favoured dichotomy when people try to convey that AI’s advance could bring benefits and freedoms as well as threats with potentially catastrophic global consequences.

Demis Hassabis, who leads Google’s DeepMind research lab, argues that AI’s impact on society is akin to the discovery of fire or electricity—greater even than the advent of mobile phones or the internet.

Such comparisons help suggest the scale of the invention, but they are at best approximations—and, more accurately, wide of the mark. With AI we are confronting something qualitatively new; even those building it cannot foresee all the consequences.

The race to develop artificial intelligence is already in full swing, while debate over where and how systems are built and deployed lags far behind. Advocates of sovereign AI and of products built on it are multiplying across countries. Broadly, sovereign AI means a state’s ability to control its own technologies, its citizens’ data, and its scientific and military work.

Contrary to cliché, it is not sought only by authoritarian states. Governments in the EU, the United States and Britain are in one way or another pondering regulation and national AI systems. They are creating rulebooks to govern the spread and operation of the technology.

Britain, for instance, sees sovereign AI as a way to retain a developer’s edge rather than become merely a consumer. In 2023 the government’s action plan envisaged state-owned “sovereign compute” and supercomputers.

The plan’s authors argue this approach would secure national priorities in AI, such as “targeted research” and “critical public services”. The idea is to give researchers, startups and institutions access to local infrastructure “in the event of market disruptions”. By investing in domestic compute and talent, Britain aims to spur homegrown innovation and reduce reliance on external vendors.

The European Union is also pursuing sovereign AI, citing the need to protect citizens’ privacy and foster a competitive AI ecosystem less dependent on foreign tech giants.

Brussels frets that critical capabilities are concentrated in non-European (mostly American) firms. In response, Europe launched projects such as OpenEuroLLM—a programme to develop “truly open” large language models covering all EU languages. Above all, the bloc has become a leader in restrictive AI legislation.

The General Data Protection Regulation (GDPR), adopted in 2016, has applied since 2018. Later, the European Commission’s AI Office began drafting measures that undergird the AI Act.

The regulation classifies AI applications into three categories.

  1. Products that pose an unacceptable risk—such as government social scoring—are banned outright.
  2. High-risk applications, such as CV-scanning tools used to rank job candidates, face special legal requirements.
  3. Products that are neither banned nor deemed high-risk remain largely unregulated.
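For illustration only, the three-tier scheme reads naturally as a small lookup. The Python sketch below follows the article’s summary above; the tier labels and examples are ours, not the Act’s legal annexes.

    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "banned outright"
        HIGH = "subject to special legal requirements"
        MINIMAL = "largely unregulated"

    # Illustrative examples following the summary above, not the Act's annexes.
    EXAMPLES = {
        "government social scoring": RiskTier.UNACCEPTABLE,
        "CV-scanning tool ranking job candidates": RiskTier.HIGH,
        "video-game recommendation engine": RiskTier.MINIMAL,
    }

    for application, tier in EXAMPLES.items():
        print(f"{application}: {tier.name} -> {tier.value}")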

Here is the (not really) paradox: while American firms lead in off-the-shelf AI, the White House is crafting its own brand of AI sovereignty. A January 2025 directive says building AI systems in the United States is critical to preventing “U.S. dependence on other countries’ infrastructure”.

Officials make no secret of viewing AI as a strategic asset. Preserving leadership and control over the supply chain—from chips to data centres—is paramount for economic competitiveness and defence.

These sovereignty arguments, dressed in human-rights language, resemble an arms race. Researchers and developers are concerned both by AI’s rapid spread, with its existential risks, and by the prospect of a slowdown in America. Many fear that would let rivals—especially China—leap ahead in the race to “transformative AI”. A darker scenario is that AI gains access to critical infrastructure, including defence systems.

From this perspective, whether we frame AI’s evolution as global or sovereign hardly matters: the risks are global regardless. Regulating within national borders is likely easier than at a supranational level. Yet the need for international co-operation—in development and in control—is already on the table.

“It strikes me as strange when some leaders seem to think that AI could be so brilliant that it could solve the world’s problems with solutions we haven’t even thought of, but not brilliant enough to escape any control limitations we come up with,” said Margaret Mitchell, chief ethics scientist at Hugging Face.

In her view, the risk rises as people cede control to increasingly autonomous AI agents. She points to a gradual loss of control over ever more autonomous systems whose behaviour cannot be fully predicted or tracked, with unwanted and potentially dangerous consequences.

Yoshua Bengio, one of the “godfathers of AI” and a professor at the Université de Montréal, also warns of risks. In 2024 he was named to Time’s list of the 100 most influential people, and a year earlier he told the BBC he felt “lost” about his life’s work.

He voiced concern that AI systems could fall into the hands of wrongdoers as they grow more complex and powerful. Bengio called for tighter regulation, software registration, ethics training and active government involvement in monitoring and auditing products. In July 2023 he published an article in The Economist warning: “The risk of catastrophe is sufficiently real that we must take action immediately.”

In June 2025 Bengio said the AI “arms race” is preventing companies from focusing on safety. He again backed strict regulation and international collaboration to tackle risks from advanced systems.

Even sovereign AI carries global risks

In discussing AI, Bengio uses the term “situational awareness”: the ability to perceive elements in the environment, understand their meaning and project their future status. It is vital for decisions in complex, dynamic systems such as air-traffic control, military operations and health care. Instilling situational awareness in AI, for all its potential, also brings dangers.
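The three stages in that definition (perceive, understand, project) can be sketched as a pipeline. The toy air-traffic example below is our illustration only; the data structure, field names and numbers are invented for the sketch, not taken from any deployed system.

    from dataclasses import dataclass

    @dataclass
    class Track:
        aircraft_id: str
        altitude_ft: float
        climb_rate_fpm: float  # vertical speed, feet per minute

    def perceive(raw: dict) -> Track:
        """Level 1: turn raw sensor readings into a recognised element."""
        return Track(raw["id"], raw["alt"], raw["vspeed"])

    def comprehend(track: Track) -> str:
        """Level 2: attach meaning to what was perceived."""
        return "climbing" if track.climb_rate_fpm > 0 else "level or descending"

    def project(track: Track, horizon_min: float = 5.0) -> float:
        """Level 3: project the element's status into the near future."""
        return track.altitude_ft + track.climb_rate_fpm * horizon_min

    reading = {"id": "AB123", "alt": 35000.0, "vspeed": -1200.0}
    track = perceive(reading)
    print(comprehend(track), "->", project(track), "ft in 5 minutes")

The danger described below sits precisely here: once all three levels are delegated, the human operator no longer exercises any of them.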

As AI grows more capable at perceiving, interpreting and forecasting situations, people delegate more tasks and become passive observers. Human situational awareness may then atrophy: individuals stop actively gathering and analysing information and, in a crisis, may fail to react in time or appropriately.

If AI faces an unforeseen situation or errs, a person who relies on it entirely may miss or misunderstand the problem—leading to unwanted, even catastrophic outcomes.

Unlike a human decision-maker, AI does not see nuance. Despite processing vast quantities of data and spotting patterns, its grasp of a situation is bounded by predefined parameters and training data.

Proponents of sovereign AI aim to reduce local vulnerabilities—not of the technology per se, but of the state. Yet global risks plainly cannot be eliminated. Even when developed under a sovereign model, the technology remains essentially networked and transboundary—data, algorithms and compute are tightly interwoven across countries. The resulting threats (the spread of fakes, automated hacking, effects on labour markets) spill beyond local confines.

AI infrastructure—processors, data centres and cloud services—is highly globalised. Disruptions in one part of the chain inevitably ripple through the rest.

Many AI risks—economic (joblessness, fraud, restricted access to finance), social (disinformation, manipulation), technological (failures and cyberattacks) or political (armed conflict)—are irreducibly global. That reflects the technology’s nature and scale, deep interdependence and the difficulty of regulation.

All of them, however, can have local manifestations and consequences that call for adaptive national and regional responses. Does that imply protections and risk-mitigation strategies must be purely global and international? Probably not. No one, at least for now, is proposing an either-or choice.

Intelligence with an accent

“You can upload a text into ChatGPT and ask it to answer your questions. In practice, the chatbot analyses information far better than many writers and editors.”

People who say such things are sometimes professionals—wordsmiths, subject-matter experts. Trust in AI as an impartial arbiter, unclouded by emotions or social conditioning, runs high. But is AI truly objective, and should we outsource ethical or legal judgments that affect real lives to it?

The vast majority of LLMs are trained on English-language data and reflect cultural norms rooted in the Western tradition. This was noted at CNBC’s East Tech West conference in Thailand in late June.

Kasima Tarnpipitchai, who leads AI at SCB 10X, stressed the importance of countries building their own systems rather than relying solely on “translated” off-the-shelf models.

“How you think, how you interact with the world and who you become when you speak a different language can be completely different,” he explained.

LLMs operate on statistical regularities gleaned from billions of public texts. Most are in English and Chinese, reflecting America’s economic dominance and China’s population size.
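To make “statistical regularities” concrete, here is a toy bigram model, the crudest ancestor of an LLM: it learns nothing except how often one word follows another in its training text. The corpus is invented for illustration.

    from collections import Counter, defaultdict

    # A toy corpus standing in for billions of public texts.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Count how often word B follows word A.
    bigrams = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev][nxt] += 1

    # The "statistical regularity": conditional probability of the next word.
    def next_word_probs(word: str) -> dict:
        counts = bigrams[word]
        total = sum(counts.values())
        return {w: c / total for w, c in counts.items()}

    print(next_word_probs("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}

A model trained mostly on English has sharp estimates for English regularities and thin ones for everything else, which is exactly the imbalance at issue.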

LLMs’ achievements—in helping write articles, draft CVs, translate—are undeniable. But they do not grasp the fundamentals of human communication and language, which can yield meaningless prose. More troubling, they can mirror cultural, racial or gender stereotypes woven into the human-written content they ingest.

Consider one example: a team at the Hong Kong University of Science and Technology asked ChatGPT to translate 30 sentences. The bot correctly translated 28 from Indonesian into English, but only 19 back the other way. It is not hard to imagine the problems monoglot Americans could face if they rely on a bot to strike deals with Indonesian partners.
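One might score such a round-trip test along the following lines. The word-for-word “translator” and the exact-match criterion below are stand-ins invented for illustration; how the HKUST team judged correctness is not specified here.

    # A toy Indonesian->English table; real machine translation is far richer.
    ID_TO_EN = {"kucing": "cat", "duduk": "sits", "di": "on", "tikar": "mat"}
    EN_TO_ID = {v: k for k, v in ID_TO_EN.items()}

    def translate(sentence: str, table: dict) -> str:
        # Word-for-word lookup; unknown words pass through unchanged.
        return " ".join(table.get(word, word) for word in sentence.split())

    def round_trip_ok(sentence: str) -> bool:
        forward = translate(sentence, ID_TO_EN)   # Indonesian -> English
        back = translate(forward, EN_TO_ID)       # English -> Indonesian
        return back == sentence                   # crude exact-match criterion

    sentences = ["kucing duduk di tikar", "kucing di tikar"]
    score = sum(round_trip_ok(s) for s in sentences) / len(sentences)
    print(f"round-trip accuracy: {score:.0%}")

Asymmetry appears as soon as one direction’s table, or training data, is thinner than the other’s.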

The issue is less the growing dominance of English than unequal access to information—and, by extension, to trade and economic, technological and scientific development.

Concern is shared not only by AI specialists but by politicians with a decolonial lens. At US congressional hearings in May 2023, Senator Alex Padilla of California asked Sam Altman what his company is doing to overcome language barriers.

OpenAI does not hide that its most advanced LLM is biased. Bias here denotes a tendency to interpret information in ways that align with existing beliefs. It can produce poor decisions and missed opportunities and reinforce discriminatory stereotypes. The company’s report on GPT-4 shows that tuning efforts and evaluation focused primarily on “US-specific characteristics”. Yet even in America, many do not speak English at home: around 44% of Californians do not.

Altman said he hopes to work with governments and other organisations to obtain datasets that will bolster ChatGPT’s language skills and reach “as broad an audience as possible”. Padilla, a native Spanish and English speaker, is sceptical that systems can deliver equitable linguistic outcomes without major changes in developers’ strategies.

“These new technologies open great opportunities for access to information, education and expanded communication, and we must ensure that language does not become a barrier to these benefits,” the senator said.

Is data colonialism a creature of globalism?

You work for corporations for free every day—whether you want to or not. And you probably even like it. In The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism, Nick Couldry and Ulises Mejias recount an episode from a Sally Rooney novel: the narrator, told by doctors after fainting to drink more water, installs an app on her phone to remember.

“A simple daily action that every person performs—tracking whether they have drunk enough water—suddenly became routine within a competitive social space,” the authors write. “The human body was reworked into something requiring remote infrastructure that, incidentally, can be monetized. Rooney’s narrator said she ‘outsourced one of her body’s basic functions to a software application’. But this is just one small example of something much bigger: a systematic attempt to convert all human lives and relationships into resources for profit. Human experience, potentially every layer and aspect of it, becomes the object of extraction. We call this condition data colonialism, and it is a key dimension of how capitalism is developing today.”

Their concept of data colonialism holds that the information people produce worldwide becomes a resource from which global corporations extract profit. “Information” here is broad: it includes data about your physical activity, movements, search behaviour, language learning via apps, social-media likes, and more.

“Data colonialism appropriates not only physical resources but our very resources for knowing the world. […] The exploitation of human life for profit via data is the culmination of five centuries of attempts to know, exploit and manage the world from certain centres of power. We are entering an age not so much of a new capitalism as a new interweaving of the entwined histories of capitalism and colonialism—and the intertwining force is data,” the philosophers argue.

Data colonialism can be seen as a modern form of exploitation in which corporations or states gain control over vast volumes of human-generated data. That power is exercised through the collection, storage, processing and use of information, primarily in the interests of the “colonisers”—tech giants and influential platforms. Anti-globalists see it as a by-product of the global economy, in which resources are extracted in some regions and processed for profit in others; the lack of physical borders for data only makes the process more efficient.

Critics are convinced this kind of colonialism is born of capitalism, yet sits comfortably with the principles of globalisation.

“Human life must first be configured so that it ‘naturally’ generates a resource such as data. Instant data about one person’s actions or attributes are useless if they cannot be combined with data about other people, actions, moments and attributes to create connections among data points, and that is why the infrastructure enabling this has been built. The world of internet connection is one in which new kinds of corporate power, with privileged access to the data flows extracted from the flow of life, can press a metaphorical button—the extraction apparatus, whether a platform, an app or an AI system. This apparatus restructures human life so that it continuously facilitates data collection and, therefore, potentially, profit,” write Mejias and Couldry.

“People have become sources of behavioural surplus. We are not users; we are mined,” echoes Shoshana Zuboff in The Age of Surveillance Capitalism (2019).

Zuboff shows how the “deepening of capitalism” leads to the automation of human behaviour and turns us into objects. Users, in her view, become raw material, losing access to or control over outcomes: “they are the objects of technology, not its participants”. She highlights “behavioural surplus”—firms collecting data not needed for a service but valuable for sale. Data are used not for the benefit of platform, app and social-media users, but to predict and manipulate behaviour in the interests of corporations and their clients. And we are used without our active, informed consent.

Data are collected worldwide, but the economic returns and analytic capacity are concentrated among platform developers—typically in rich countries. That deepens existing inequalities and creates new dependencies. Today’s digital landscape thus reproduces old colonial patterns: extracting value from less developed regions and using their resources—including cheap energy—for the benefit of the “centre”.

People in so-called “developing” countries, often with limited economic and social protections, have turned to cryptocurrencies to secure remote work or send remittances faster and more cheaply. In doing so, they put control over their data and finances on the line.

In Nigeria, citizens and firms suddenly lost access to the Binance exchange amid allegations of speculation in the naira’s exchange rate. And in March 2022 MetaMask users in Venezuela, Lebanon and Iran reported difficulties accessing their wallets when the company suddenly decided to “comply with the law” and restricted users’ control over their funds.

Stories like these make it easy to rally under the decolonial banners of patriotic populists and embrace national, closed sovereign digital systems, AI included. But is that actually necessary? And is sovereign AI even feasible?

Global versus sovereign AI—a false dichotomy

AI has powerful dual-use potential. It can serve people and society, and it can be wielded by states for repressive and militaristic ends. But it is wrong to frame its development as a binary choice between sovereign and global models.

Yes, there is tension between these poles. What matters is not only who first builds a quasi-perfect “superintelligence”, but whose morals, modes of thought, cultural tradition, legal system and ideology are embedded in AI’s behaviour. Yet this contest does not change the risks humanity actually faces as AI spreads.

AI development requires a formidable financial and economic base; it cannot proceed without capital. Capitalism’s nature is expansive—its actors must ceaselessly seek new domains of profit.

Colonialism became a tool of that expansion: first through physical seizure of resources, today through control of data and technology. So long as capitalism endures, new forms of colonialism will emerge and adapt. And while we remain in a world of capitalism and state interests, AI strategies will need to blend technological independence with openness to international co-operation.

Sovereign AI is no guarantee against data colonialism. In certain political contexts it can even become its local variant. By sovereign AI we mean intelligence developed and controlled within national or regional sovereignty. It is often taken to entail local data storage and processing; training models on native languages, histories and norms; independence from Big Tech; and state oversight. In theory it can be pitched as anti-colonial. In practice, it depends on who wields that “sovereignty”, and to what end.

Proponents of a globalist AI model argue that if digital resources, infrastructure and benefits were distributed evenly, opportunities for exploitation and dominance would fade. Opening markets, free trade and digital integration, they say, boost growth and reduce poverty in developing countries, as in India, China and Vietnam. Digital globalisation broadens access to knowledge, innovation and markets, spurring education and widening horizons.

When EU leaders worried that critical AI capabilities were concentrated mainly in American firms and began mulling sovereignty in such technologies, who rode to the rescue? Globalists. Major cloud providers swiftly invested in EU data centres and offered options to keep users’ data within the bloc.

At CNBC’s East Tech West, the aforementioned Kasima Tarnpipitchai said Asian states still lack the kind of AI development that would let them control their own technologies, data and infrastructure, ensure autonomy and meet their particular priorities and security needs. He agreed that one key route to sovereign AI environments is to use open-source models.

“Here in Southeast Asia, and especially in Thailand, there is a lot of amazing talent. It would be a shame if this were captured in a way that didn’t become publicly available or help develop the ecosystem,” said Tarnpipitchai.

He added that open source creates a “collective energy” that could help Thailand become more competitive in AI and strengthen sovereignty in ways that benefit the whole country.

What next

States calling for sovereign AI cannot entirely forgo components, know-how and cloud infrastructure from abroad. Many countries chasing sovereign AI still rely on foreign hardware (chips, for example) and base models built overseas. Such partial dependence undermines the very idea of sovereignty and leaves national systems vulnerable, at the very least, to trade restrictions.

Building and maintaining the hardware and cloud infrastructure needed for sovereign AI is costly. Upfront investment—from data centres and GPUs to secure storage—is substantial. Many countries, especially developing ones, may lack the funds or technical capacity to build this on their own, making it a serious barrier. And who will bankroll someone else’s sovereignty? Tech, digital and financial giants often linked—rightly—to the globalist camp in AI.

The AI ecosystem is inherently international; it thrives on collaboration, talent mobility, publications and standards. Countries tend to pursue mixed strategies, balancing autonomy with the need to remain part of a global community. It is therefore misguided to pit global and sovereign AI against each other in absolute terms.

Most cases lie on a spectrum between the poles. When building national AI systems, it is essential to ensure transparency, rights protection and democratic oversight, aligning local standards with international norms to avoid new forms of digital colonialism and authoritarianism.

Text: comrade-tovarishch
