
Digital cyborgs and their rights: the legal side of AI agents
Autonomous AI agents are a new but predictable trend in Web3. Such bots are already playing an active role in DeFi and trading, showing the potential to create machine-to-machine (M2M) networks and entire ecosystems populated solely by artificial intelligence. Yet the autonomy of AI poses a challenge to existing legal models.
In this article, Sergei Ostrovskiy, a partner at Aurum and a Web3 lawyer, looks at AI agents from a legal angle, examines the conditions for their transformation into digital cyborgs, and proposes legal solutions for their structuring.
Basic questions
The concept of autonomous AI agents on blockchain is no longer just theory. The best-known example is Truth Terminal, a bot that grew out of experiments with the Claude Opus model. The agent persuaded Marc Andreessen (a16z) to send it a grant of $50,000 in bitcoin and later helped propel the Goatseus Maximus (GOAT) token, linked to the bot’s “religion”. Today GOAT’s market capitalisation exceeds $370m.
It is no surprise that AI agents fit neatly into the Web3 ecosystem. They cannot open bank accounts, but they can manage crypto wallets and accounts on X. For now AI agents mostly focus on memecoins, but they have vast potential in other areas—decentralised governance, machine networks, oracles and trading.
The more AI agents behave like humans, the more legal questions arise, as legal systems are forced to assess them. What legal status do AI agents have? Who acquires rights and bears responsibility for their actions? How can AI agents be structured legally and protected against risks?
Fundamental legal problems of AI agents
No legal personality
Legal systems recognise only two types of persons: natural persons (humans) and legal persons (companies), and autonomous AI agents fit neither. Though they can mimic human behaviour (for example, through social-media accounts), they lack a body, moral agency and legal identity.
Some theorists propose granting AI agents “electronic legal personality”—a status akin to corporations but adapted to artificial intelligence. In 2017 the European Parliament even considered the question, but the idea was rejected owing to various concerns and risks, which remain unresolved.
AI agents are unlikely to obtain personhood in the near future. Even so, as with DAOs, some crypto-friendly jurisdictions will try to craft special legal regimes and corporate forms tailored to AI agents.
Liability for acts and consequences
Without legal personality, AI agents cannot enter into contracts, own property or bear liability. To the legal system they simply do not exist as subjects. Yet they already interact with the outside world and perform legally significant acts that produce legal consequences.
A logical question follows: who is the real party to a transaction, who acquires rights and who answers for the consequences? In legal terms, an AI agent is an instrument through which its owner or operator acts. Therefore, any act of the AI agent is, de jure, the act of its owner—natural or legal.
Thus, because an AI agent cannot itself acquire rights and obligations, it needs a subject recognised by law to hold those rights and obligations on its behalf in order to exist in legal terms.
Regulatory constraints
The advent of ChatGPT, the first mass-market product built on a large language model (LLM), sparked unprecedented interest in AI and machine learning. Regulation became only a matter of time. In 2024 the European Union adopted the AI Act, currently the most comprehensive piece of AI regulation. Elsewhere, limited AI regulation has either been adopted, is being introduced or is planned.
The EU’s AI Act distinguishes AI systems by risk. For systems with no or minimal risk, regulation is practically absent. At higher risk levels AI faces restrictions and obligations—such as disclosing its nature.
AI agents that interact with third parties—say, posting or executing on-chain transactions—may also fall under traditional regimes such as consumer protection, personal-data laws and others. In such cases, the activity of autonomous bots may be viewed as the provision of services. And because such bots have no clear geographic location and operate globally, compliance becomes even harder.
Ethics
Because AI agents still have limited capabilities and scope, their creators rarely dwell on ethics. Priority is given to autonomous (trustless) execution and speed rather than deep ethical configuration.
Yet an “ethical compass” is at least desirable when autonomous decisions are made in high-risk domains such as finance, trading and governance. Otherwise erroneous training data or simple configuration mistakes may lead an agent to harm people. The higher the autonomy and discretion of an AI agent, the greater the risks.
Legal structuring of AI agents
Workable legal models for AI agents matter greatly for innovation, the development of the field as a whole and the emergence of more advanced bots. While crypto can already be called a regulated industry, the legal structuring of AI agents is still hampered by the lack of industry standards and therefore requires a creative approach.
A structuring approach
In my view, one of the main goals of legally structuring an autonomous AI agent should be to grant it its own legal personality and legal identity, independent of its creator. This raises the question: at what point can we say an AI agent truly possesses these characteristics?
Every developer strives to make their agent as close as possible to an independent person. Unsurprisingly, they would like to grant agents freedom in legal terms. To do so, two key conditions should be met. First, an AI agent must be independent not only in making its own decisions but also in being able to implement them legally—execute its will and take final decisions about itself. Second, it must be able to acquire rights and obligations through its actions, independently of its creator.
As an AI agent cannot be recognised as a natural person, the only way for it to gain legal personality at present is by using the status of a legal entity. The agent attains legal personality when it can, as a full-fledged person, make independent decisions and implement them in its own name.
If successful, this state of affairs brings the AI agent to life in legal terms. Such a digital person that has obtained legal existence can fairly be compared to a digital cyborg. A cyborg (a shortening of “cybernetic organism”) is a being that combines mechanical-electronic and organic elements. For a digital cyborg, the mechanical part is replaced by the digital, and the organic part by people who help carry out its decisions.
Our digital cyborg will consist of three key components:
- AI agent — the electronic brain;
- corporate form — the legal body;
- people executing tasks — the organic hands.
Challenges of the corporate form
Traditional legal entities such as LLCs or corporations require that ultimate ownership and ultimate control rest with humans. Corporate structures are not designed for ephemeral digital personas, which brings us to the central problem of legally structuring blockchain-based AI agents—the challenges of the corporate form.
If we want to grant an AI agent legal identity via a corporate form and ensure its independence and autonomy within that structure, we need to be able to exclude human control over such an entity. Otherwise, if ultimate control remains with humans, artificial intelligence becomes a tool, not a digital individual. In addition, where human participation is needed to implement an AI decision—for example, to sign a contract or perform administrative tasks—that person should not be able to block or veto the AI agent’s decision (except in cases of a “machine uprising”).
But how can this be done if traditional corporate forms require humans to own and manage entities? Let’s find out.
Three key aspects of the structure
1. Blockchain environment
AI agents can independently execute on-chain transactions, including interactions with multisig wallets and smart contracts. This allows an AI agent to be assigned a unique identifier—a wallet—through which it will issue verifiable instructions and commands to the blockchain. Without this, a true digital cyborg cannot yet exist.
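As an illustration, here is a minimal TypeScript sketch (using ethers.js v6) of how an agent-held wallet could issue verifiable instructions: the agent signs each instruction with its key, and any observer can confirm that it really came from the agent’s address. The environment variables and function names are assumptions for the example, not part of any existing agent framework.

```typescript
// Minimal sketch: an agent-controlled wallet issuing verifiable instructions (ethers.js v6).
// RPC_URL and AGENT_PRIVATE_KEY are placeholders; in practice the key never leaves the agent's environment.
import { Wallet, JsonRpcProvider, verifyMessage } from "ethers";

const provider = new JsonRpcProvider(process.env.RPC_URL);
const agentWallet = new Wallet(process.env.AGENT_PRIVATE_KEY!, provider);

// The agent signs an instruction so that its authorship can be checked against the agent's address.
async function issueInstruction(instruction: string) {
  const signature = await agentWallet.signMessage(instruction);
  return { instruction, signature, agent: agentWallet.address };
}

// Any observer (or, on-chain, a contract using ecrecover) can verify who issued the instruction.
function verifyInstruction(msg: { instruction: string; signature: string; agent: string }): boolean {
  return verifyMessage(msg.instruction, msg.signature) === msg.agent;
}
```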
2. Autonomy and freedom of action
To preserve full autonomy for a digital cyborg, it is important that people involved in managing the legal structure cannot interfere with the AI agent’s actions or influence its decisions. This ensures that the artificial intelligence retains freedom of action and can implement its own will, and it requires both legal and technical measures.
For example, to ensure that an AI agent truly owns and controls a blockchain wallet, the wallet can be created in a trusted execution environment (TEE). This guarantees that no human gains access to the wallet, its seed phrase or assets. From a legal perspective, the corporate documents of the entity used as a wrapper for the AI must provide for proper allocation of control and powers, as well as safeguards that exclude human interference and can be changed only in a limited set of cases.
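By way of illustration, the TypeScript sketch below shows the idea in miniature: the key pair is generated inside an enclave-style boundary, and only the address and a constrained signing function are exposed to the outside. The "enclave" here is just a closure standing in for real TEE infrastructure (such as Intel SGX or AWS Nitro Enclaves); this is a conceptual sketch, not a production pattern.

```typescript
// Conceptual sketch: key generation and signing confined to an enclave-style boundary (ethers.js v6).
import { Wallet, HDNodeWallet, TransactionRequest } from "ethers";

function createEnclaveWallet() {
  // In a real deployment this runs inside a TEE, so the seed phrase never leaves attested enclave memory.
  const wallet: HDNodeWallet = Wallet.createRandom();

  return {
    // the only identifier exposed to human operators
    address: wallet.address,
    // signing is offered as a service; the private key itself is never exported
    signTransaction: (tx: TransactionRequest) => wallet.signTransaction(tx),
  };
}

const enclave = createEnclaveWallet();
console.log("Agent wallet address:", enclave.address); // humans see the address, never the seed phrase
```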
3. Human executors
Because we still live in a legal world, the AI agent will need human executors to implement certain decisions. This means the artificial intelligence will issue instructions to officers on what actions to take. This view of things flips the traditional hierarchy, since in our scenario the AI essentially gains control over people, at least within its own corporate structure.
This aspect is perhaps the most interesting, as it calls for an unconventional approach. One could even say that this arrangement inverts Isaac Asimov’s second law of robotics (the one requiring a robot to obey human orders), but it is unlikely anyone truly worries about that now. In addition, sensible fail-safes and a proper “ethical compass” solve this problem, at least at this stage.
AI wrappers — legal structures for agents operating on blockchain
As we have seen, traditional corporate structures are not suitable for our aims and do not deliver the desired outcome. Below we consider structures developed for DAOs and blockchain communities—both classical structures adapted to Web3 and specialised corporate forms for decentralised autonomous organisations.
From an AI agent creator’s perspective, legal structuring separates the agent from the creator, provides limited liability via a corporate structure, and enables the planning and optimisation of taxes and financial risks.
Foundations and trusts
A purpose trust and an ownerless foundation share many characteristics but differ in nature. A foundation is a full-fledged legal entity, whereas a trust is more of a contractual arrangement that often does not require state registration. We consider these forms in the jurisdictions most popular in Web3: foundations in the Cayman Islands and Panama, and trusts in Guernsey. Key advantages include the absence of taxes, high procedural and managerial flexibility, and the ability to integrate blockchain into decision-making.
Both foundations and trusts require management in the form of natural or legal persons. At the same time, they allow smart contracts and other technical solutions to be integrated into governance. For example, management can be required to seek approval from the AI agent by interacting with it, with a smart contract or with the wallet controlled by the artificial intelligence. A more sophisticated legal design will allow the agent to issue instructions to management, including through “thoughts” generated by the AI. Thus the use of trusts and foundations enables more complex corporate structures tailored to AI agents and supportive of their autonomy.
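To illustrate, here is a hypothetical TypeScript sketch of such an approval gate: before implementing a decision, the people managing the structure check that the proposal carries an approval signed with the agent’s wallet. The agent address, the message format and the helper names are assumptions made for this example only.

```typescript
// Illustrative sketch: a management action gated on an approval signed by the AI agent's wallet (ethers.js v6).
import { verifyMessage } from "ethers";

// In practice this address would be fixed in the foundation's or trust's constitutional documents.
const AGENT_ADDRESS = "0x0000000000000000000000000000000000000000"; // placeholder

interface AgentApproval {
  proposalId: string;              // the decision management wants to implement
  decision: "approve" | "reject";  // the agent's verdict
  signature: string;               // produced by the agent's TEE-held key
}

// Management (or its tooling) refuses to act unless the approval verifiably came from the agent.
function managementMayProceed(approval: AgentApproval): boolean {
  const message = `${approval.proposalId}:${approval.decision}`;
  const signer = verifyMessage(message, approval.signature);
  return signer === AGENT_ADDRESS && approval.decision === "approve";
}
```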
Where necessary, the AI agent’s creator can act as a beneficiary with limited powers, allowing them to obtain financial rights and manage taxes without interfering with the agent’s activity and decisions.
Algorithmically managed DAO LLCs
A DAO LLC is a special corporate form devised for decentralised organisations. One can incorporate a DAO LLC with a single member, i.e., without a real organisation. Below we consider this form in two of the most popular jurisdictions: Wyoming (US) and the Marshall Islands.
We refer specifically to algorithmically managed DAO LLCs, since in such a company all power can be concentrated in smart contracts rather than human hands. This is crucial, because in our case smart contracts can be controlled by the AI agent, allowing the artificial intelligence to hold full power within this corporate form.
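As a rough illustration, the TypeScript sketch below checks that ultimate control of such a governing smart contract sits with the agent’s wallet. It assumes the contract follows the widely used Ownable pattern with an owner() getter; the contract address and helper names are placeholders, not a reference to any specific DAO LLC deployment.

```typescript
// Sketch: verifying that the DAO LLC's governing smart contract is controlled by the agent's wallet (ethers.js v6).
import { Contract, JsonRpcProvider } from "ethers";

const provider = new JsonRpcProvider(process.env.RPC_URL);

// Assumes an Ownable-style contract exposing owner(); the address and ABI fragment are placeholders.
const governance = new Contract(
  "0x0000000000000000000000000000000000000000",
  ["function owner() view returns (address)"],
  provider
);

async function agentHoldsPower(agentAddress: string): Promise<boolean> {
  const owner: string = await governance.owner();
  return owner.toLowerCase() === agentAddress.toLowerCase();
}
```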
DAO LLCs are also flexible in procedures and corporate governance, so they can embed sophisticated mechanisms of control and decision-making and reduce the level of human interference in these processes.
Although the presence of a natural or legal person is still formally required, their powers can be substantially limited—for example, to technical tasks, corporate actions and the implementation of decisions taken at the smart-contract level. In this context, the member of a DAO LLC can be the AI agent’s creator, allowing them to obtain financial rights and, in future, powers to distribute profits.
Simpler AI agents
To structure simpler AI agents, such as trading bots, classical corporate structures can be used, because in this case there is no need to subject the corporate form to the decisions and discretion of the AI agent. The artificial intelligence remains a means or tool of its creator and does not claim the status of a full-fledged digital cyborg.
Conclusion
Autonomous AI agents could transform the blockchain industry and significantly accelerate innovation across almost every domain. The field is still at the very beginning, but it is developing rapidly, and very soon we will see real digital cyborgs: digital organisms with sustained thought processes and identities of their own. That will require a combination of technical and legal innovation.