Humanity is not yet good enough at aligning values globally to guarantee that a superintelligent AI would benefit all people, Ethereum co-founder Vitalik Buterin said.
He was commenting on a post by 3D artist Utah teapot.
How is AI alignment even a thing? Everyone is always asking “aligned to what” and no one ever has an answer other than “human values”. Coherent, self-consistent human values are not a thing. I still don’t get it. What are we researching? It’s ridiculous that this is a growing…
— Utah teapot 🫖 (@SkyeSharkie) August 24, 2025
“How is AI alignment even a thing? I always ask: ‘aligned to what?’ No one ever answers anything other than ‘human values.’ But coherent, self-consistent human values do not exist. I still don’t get it. What are we even researching? It’s absurd that this is becoming a growing industry. It’s a fictitious job,” the artist wrote.
Buterin emphasized that the world is full of things that violate ethical principles, citing the killing and imprisonment of innocent people as examples.
“[…] We are not yet good enough at alignment to ensure that a single superintelligent AI would even avoid this,” the developer added.
Utah teapot noted that the development of aligned AI is trending toward restricting public access, while large companies sell versions of their systems that fuel harmful phenomena and serve as instruments of war.
“I am deeply concerned that centralizing access to AI technologies makes it possible to frame things that are not genuinely security issues, such as discussions around ‘psychosis’ from LLMs, as security matters. This risks harming unique or marginalized cultures and their values,” the user noted.
Buterin shares many of these concerns.
“I think the greatest risks will come from military and other powerful structures that can exempt themselves from the safety rules that apply to everyone else by default,” added the Ethereum co-founder.
Utah teapot cited the AI startup Anthropic as an example: the company develops alternative versions of its ‘civilian’ models and provides them to governments, where they can be used for military or intelligence operations.
Buterin stressed that the likelihood of a catastrophic outcome for humanity from AI increases if there is a single superintelligent agentic AI with its own will and the capacity to act independently.
In a pluralistic environment, no single system can fully control the situation. Creating such an environment, however, requires more than market mechanisms: it takes deliberate effort, including changes to laws and incentives that large corporations may not welcome.
Back in 2022, India’s Minister of Information Technology, Rajeev Chandrasekhar, called for the development of global standards to ensure AI safety for humans.
