
How the EU AI Act will affect industry and people: a briefing

On 13 March, the European Parliament adopted the EU Artificial Intelligence Act. The measure became a legislative saga: for years, opponents of regulating the AI industry vowed to block it, yet a compromise was ultimately reached. ForkLog asked participants of Privacy Day 2024 how the act will affect the AI industry overall, why France and Germany objected, and whether the experience can be used beyond the EU. 

In brief

ForkLog: Why has the law been criticised from the standpoint of technological development and human rights?

Sarkis Darbinyan, cyber-lawyer, head of Roskomsvoboda’s legal practice: It took many years to draft and agree the law. At last, on 8 December last year, EU institutions reached agreement on the AI Act after three days of intensive talks and almost two and a half years since the first proposal in April 2021. Yet it still has critics on both sides. From a technology perspective, the law restricts the development, deployment and use of certain technologies classified as high-risk AI systems. These include systems that pose a threat to health, safety and fundamental human rights, as well as those that damage the environment.

The category covers systems that influence voters’ opinions during political campaigns; recommender algorithms used by social networks with more than 45 million users; real-time facial recognition in public places; emotion-recognition systems for law-enforcement agencies (including border control), as well as employers and educational institutions. The document separately bans so-called predictive-policing systems — mechanisms that use AI to analyse big data and identify potential offenders or victims. 

As for neural networks such as ChatGPT, their output must be labelled, and operators must not allow their AI to generate illegal content. In addition, services such as Midjourney or Stable Diffusion will be required to disclose the materials used to train and refine their models. Rightsholders of such materials may sue if they discover that a developer trained a model on their content unlawfully.

Human-rights advocates criticise the law too. After painfully negotiated trilogues, it suddenly emerged that the act would include a highly questionable list of exceptions to the general prohibitions. As a result, broad criteria appeared that let developers escape obligations to protect human rights, confirming that the AI Act puts industry interests above the fundamental right to privacy. 

For example, exceptional cases were introduced that allow the use of facial recognition in law enforcement and in migration contexts, demonstrating a two-tier approach to fundamental rights in which migrants and already marginalised people are deemed less worthy of protection. 

ForkLog: Which provisions might be transplanted into the laws of Russia, Ukraine, Armenia, Kazakhstan and other countries?

Sarkis Darbinyan: It seemed the most contentious provisions on biometric identification in public places had been agreed thanks to substantial work by a coalition of rights groups in Europe and the US, but last-minute amendments wiped out what had taken so long to negotiate. They introduce retrospective recognition, that is, applying facial recognition not in real time but to previously recorded footage, without a court order. Rights advocates who worked on the text believe this makes a mockery of earlier statements by the Commission and Parliament that facial recognition would be subject to strict safeguards and allowed only in narrow, exceptional circumstances and solely by court order.

Russian lawmakers will likely copy these provisions if they pass their own AI law. For now, facial recognition in Russia operates in a complete grey zone. Not only is no court order required; there is not even basic oversight of the technology’s use by the competent authority. And that authority, lacking any independence, is unlikely to do anything.

It is impossible to imagine Roskomnadzor arriving with inspections and fines at the Interior Ministry or Moscow’s Department of Information Technology (DIT) to see how biometric identification of citizens is run. Russian, Kazakh and Armenian law has no requirements at all for reference images, grounds or procedures for using such technologies. And of course there are no legal guarantees for citizens whatsoever.

Even with all the criticism of the final text, such a broad list of prohibitions on state and corporate uses is unlikely to be adopted in Russia or Kazakhstan. A more “Chinese” path is likely — where essentially everything is permitted for everyone, but with mandatory algorithm escrow so the state can control any creation and deployment of AI-based technologies. 

In detail

ForkLog: The EU AI Act has spawned many fears and myths. Opponents cite a “risk ranking” devised by the drafters. What is this threat classification?

Alexander Tyulkanov, associate researcher at the Centre for International Intellectual Property Studies in Strasbourg: Yes, that is a common misconception. It began when, in 2021, the European Commission poorly presented the categories of legal regimes in the first draft. A “risk pyramid” was shown in a presentation — but it does not appear in the text. Instead there are different categories, any of which an AI system may fall under. 

In the final text, two basic categories are key: AI models and AI systems. The authors propose treating them separately. 

Systems are closer to an end product that can deliver value. But a system typically runs on a model. Take the large language model GPT-4. In the act’s terms, that is an AI model. Its developers may face obligations, but only if their project exceeds a threshold of capabilities defined by the European Commission and competent bodies. Each case will be treated individually. People often cite various numbers — FLOPS and the like. That is not the point. What matters is the assessment of so‑called systemic risks in each specific case. 

The main question in such an assessment is how serious the risks are that a given model could create in critical areas. If analysis shows a model has such potential, it may be classified as a model with systemic risk.

There are expected to be few such models, if any. The most likely candidates are the leading products from American developers such as OpenAI or Anthropic. Lately we have also heard about the French firm Mistral. These top models might be recognised as bearing systemic risks, but even that is not certain.

Even if an AI development is recognised as high-risk, disclosure requirements cover only information that does not harm developers’ interests. The goal is not to reveal trade secrets or strip developers of competitive advantages. It is about a necessary minimum of information — for example, what data the model was trained on.

ForkLog: Models are clear enough; what is meant by systems?

Alexander Tyulkanov: AI systems are straightforward. Most systems on the market — that is, finished AI products — will not fall under the act. ChatGPT itself is not regulated, because the manufacturer does not envisage its use in high-risk tasks. That does not mean someone cannot maliciously or mistakenly use it for such purposes, but OpenAI has never stated its product is intended for disease diagnosis or court decisions. Such systems can in principle be used for high-risk tasks. But since that is not claimed by the developer, such tools are not regulated. 

ForkLog: How do European lawmakers intend to distinguish high risks from low ones?

Alexander Tyulkanov: There is a list of fields that in the EU and beyond are subject to special regulation — for example, medicine. Every country has sectoral laws on medical devices requiring clinical trials before a device is placed on the market. The AI Act envisages similar requirements: developers must provide technical documentation that meets industry standards. If existing sectoral rules already detail the development and operation of a specific type of AI system, the act does not create a significant additional regulatory burden.

But there are areas without such sectoral rules, and those are set out in Annex III to the AI Act. Suppose a company uses a system to evaluate staff performance at work, or an educational institution uses a system to grade students, and employment or expulsion depends on the system’s output. The potential consequences are clearly substantial. This is what counts as a high-risk system. If the system is of poor quality, its use can be catastrophic for a person who loses a job or education.

These potential problems are addressed in Annex III: if AI can plausibly cause harm, a set of requirements applies to the system’s developers and operators. I repeat: regulation covers not only the system but also the actors involved in its creation and operation. 

ForkLog: What are those requirements?

Alexander Tyulkanov: They are the same as in traditional risk management and quality-control systems used at any large enterprise — adjusted for AI specifics. Naturally, AI projects must meet the same consumer-protection requirements that apply to other products. The standards are not that hard to follow. But some sensitive areas require transparency for societal safety. 

For AI, the most obvious examples are systems that generate deepfakes or interact directly with people — for instance, by phone. As a matter of basic ethics, a person should know when they are talking to a robot. In such cases the act requires disclosure. The same applies to systems used for emotion analysis or categorisation by biometric traits: a person must be warned that AI is collecting such data. 
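
To make the disclosure requirement concrete, here is a minimal, purely illustrative sketch of how a chat or phone assistant might announce that the user is talking to a machine before answering. The function names and the wording of the notice are hypothetical: the act prescribes the obligation to disclose, not any particular implementation.

```python
# Hypothetical illustration only: the AI Act requires disclosure,
# not any specific wording or code structure.

AI_DISCLOSURE = (
    "You are interacting with an automated AI assistant, not a human operator."
)

def generate_reply(prompt: str) -> str:
    # Placeholder for the actual model call (e.g., a hosted LLM API).
    return f"Echo: {prompt}"

def start_session(user_greeting: str) -> list[str]:
    """Open a conversation with the disclosure shown before any other reply."""
    messages = [AI_DISCLOSURE]              # disclose machine interaction up front
    messages.append(generate_reply(user_greeting))
    return messages

if __name__ == "__main__":
    for line in start_session("Hello, I want to change my tariff."):
        print(line)
```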

ForkLog: Debates about data collection often raise the question of who holds copyright — since datasets are formed from any available information, in theory anyone could claim royalties. Is this addressed in the act?

Alexander Tyulkanov: The document describes requirements for transparency about dataset provenance. There is a list, in broad terms, of what must be checked and what information developers must then disclose, including for compliance with intellectual-property law. The aim is not for the act to answer the issue exhaustively: in the coming years, additional guidance will be issued, including on how far disclosure about a dataset’s composition and origin should go.

At the same time, the act does not prescribe how data may be collected — that is governed by other laws. In particular, Europe’s general personal-data regulation — the GDPR — applies. There is no need to duplicate it or the copyright directives; the act simply indicates the degree to which information must be disclosed on a developer’s compliance with existing law. 

Note that scientific research and commercial data collection are subject to different requirements. In the latter case, if there is a complaint about copyright infringement during product creation, the competent authority may investigate and request evidence that the rights holder’s wishes were respected when compiling the training dataset.
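
As an illustration of what such evidence might look like in practice, below is a minimal sketch of a provenance record a developer could keep for each training-data source. The structure and field names are assumptions, since the act leaves the exact format of such disclosures to future guidance.

```python
# Hypothetical sketch of a provenance log for training-data sources.
# The fields below are an assumption, not a format defined by the regulation.

from dataclasses import dataclass, asdict
import json

@dataclass
class DatasetSource:
    name: str                # human-readable name of the source
    url: str                 # where the data was obtained
    licence: str             # licence or legal basis for use
    opt_out_respected: bool  # whether rights-holder opt-outs were honoured
    collected_for: str       # "commercial" or "scientific research"

sources = [
    DatasetSource(
        name="Public-domain book corpus",
        url="https://example.org/books",
        licence="public domain",
        opt_out_respected=True,
        collected_for="commercial",
    ),
]

# A summary like this could be produced on request from a competent authority.
print(json.dumps([asdict(s) for s in sources], indent=2))
```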

ForkLog: The act was criticised from both “right” and “left”. France’s president, Emmanuel Macron, said excessive regulation could hurt AI’s development in the EU. Others claim the document regulates nothing at all and that big firms can always evade it by using new legal entities and delegating research to proxy startups. How do you see these two lines of criticism?

Alexander Tyulkanov: I do not find these arguments sufficiently substantiated, though I understand them. There is a widespread view that the document does not regulate enough and that Annex III should be expanded because it does not cover all potential societal dangers. Critics also demand a list of prohibited AI products incompatible with the EU’s fundamental values. In this context, much was said about real-time biometric identification that could be used, for example, against participants in civil protests. Such fears will always exist.

People fear ending up in the same position as those in jurisdictions where human rights are less protected. National-security bodies see things differently; they would like to use these technologies to the maximum. The position of law-enforcement agencies is also understandable, but if they are not constrained in time, the coercive apparatus that exists in any state can intrude deeply into private life, violating fundamental rights.

It is also worth noting that the final text includes many caveats allowing a system that formally meets high-risk criteria not to be treated as dangerous. For instance, if it is used not to assess staff performance but to check how adequately a manager carried out that assessment. In that case AI does not evaluate people itself; it helps a human verify a human’s evaluation. This provokes protests: critics believe that if something is potentially dangerous, it should be banned outright. 

It is clear why France and Germany opposed overly strict rules, fearing European projects would lose the ability to compete with American ones. France has Mistral, whose models are comparable to some of OpenAI’s; Germany’s Aleph Alpha has also done well. Why Italy protested is less clear: it has no firms at that level, yet still argued regulation would harm its developers. It is therefore not obvious what was actually discussed behind closed doors, what compromises were struck and why some governments periodically advanced odd arguments. The main thing is that the Germany–Italy–France alliance that could have blocked the text did not materialise, and these countries backed the compromise version, which was adopted unanimously.

ForkLog: Are similar regulations being considered elsewhere — and is there any need for them? 

Alexander Tyulkanov: There is such an initiative in Kyrgyzstan, where I was involved in drafting a new law, or more precisely a Digital Code. The drafters first decided to codify existing rules, for example on personal data and other aspects of the digital economy, and then to add new material in separate chapters, including one on AI.

As the participant responsible for the AI chapter, I relied on the EU’s AI Act as the document with the most comprehensive, clear and consistent approach to regulating the development and use of AI systems where necessary. 

To reiterate, no one proposes regulating the industry as a whole — that is pointless. It should be regulated only where it matters for safety, for a state’s critical infrastructure and for citizens’ rights. My colleagues and I proposed that Kyrgyzstan adopt the basic principles set out not only in the EU act but also in OECD recommendations. 

We also tried to account for the nuances of a specific jurisdiction. Kyrgyzstan is a small market. Some bureaucratic processes that are easy to imagine in the EU are impossible to sustain in a small jurisdiction; they carry no positive value there. 

So in Kyrgyzstan we effectively created a simplified version of regulation based on the same principles but adapted to a specific country with a much smaller market and far fewer bureaucratic resources. To be clear: when I say bureaucracy, I do not mean it pejoratively. In any rule-of-law state, rights, duties and powers must be clearly allocated between the market and regulators so that checks and balances work. No actor should be able to act to the detriment of the others. The state should not take on too much, and companies should not violate citizens’ rights. Numerous studies show that firms adopt ethical pledges and then fail to follow them until the state takes a real interest.

In my view, relying on non-binding ethics codes is rash, especially in jurisdictions such as the Russian Federation. Citizens expect the state to protect them. It is illogical that, say, medical devices are regulated while high-risk AI systems — for example, those affecting critical infrastructure — are not. That should not be the case; a system of checks and balances is needed. 

In Russia, however, the problem is different. AI systems are already actively used by law enforcement. With no regulation in place, no one can tie their hands and limit abuse. That helps no one. 
