{"id":26510,"date":"2025-08-26T14:00:37","date_gmt":"2025-08-26T11:00:37","guid":{"rendered":"https:\/\/forklog.com\/en\/?p=26510"},"modified":"2025-08-26T14:14:41","modified_gmt":"2025-08-26T11:14:41","slug":"vitalik-buterin-warns-of-the-catastrophic-potential-of-superintelligent-ai","status":"publish","type":"post","link":"https:\/\/forklog.com\/en\/vitalik-buterin-warns-of-the-catastrophic-potential-of-superintelligent-ai\/","title":{"rendered":"Vitalik Buterin Warns of the Catastrophic Potential of Superintelligent AI"},"content":{"rendered":"<p>Humanity is not sufficiently adept at aligning global values to ensure that a superintelligent artificial intelligence will benefit all people. This statement was <a href=\"https:\/\/x.com\/VitalikButerin\/status\/1959947538909266318\">made<\/a> by Ethereum co-founder Vitalik Buterin.<\/p>\n<p>He commented on a post by 3D artist Utah teapot.<\/p>\n<blockquote class=\"twitter-tweet\">\n<p lang=\"en\" dir=\"ltr\">How is AI alignment even a thing? Everyone is always asking &#8220;aligned to what&#8221; and no one ever has an answer other than &#8220;human values&#8221;. Coherent, self-consistent human values are not a thing. I still don&#8217;t get it. What are we researching? It&#8217;s ridiculous that this is a growing\u2026<\/p>\n<p>\u2014 Utah teapot \ud83e\uded6 (@SkyeSharkie) <a href=\"https:\/\/twitter.com\/SkyeSharkie\/status\/1959406940028854333?ref_src=twsrc%5Etfw\">August 24, 2025<\/a><\/p><\/blockquote>\n<p> <script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>&#8220;How can the concept of <span data-descr=\"AI alignment \u2014 a field of research aimed at ensuring AI behavior aligns with human goals, values, and interests\" class=\"old_tooltip\">AI alignment<\/span> even exist? 
Everyone keeps asking &#8216;aligned to what?&#8217; and no one ever has an answer other than &#8216;human values.&#8217; But coherent, self-consistent human values do not exist. I still don&#8217;t understand. What are we even researching? It&#8217;s absurd that this is a growing industry. It&#8217;s a made-up job,&#8221; he wrote.<\/p>\n<\/blockquote>\n<p>Buterin emphasized that the world is full of things that contradict ethical principles. He cited examples such as the killing or imprisonment of innocent people.<\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>&#8220;[\u2026] We are not yet good enough at alignment to ensure that a single superintelligent AI would even avoid this,&#8221; the developer added.<\/p>\n<\/blockquote>\n<p>Utah teapot noted that work on aligned artificial intelligence is trending toward restricting public access to the technology. Meanwhile, large companies sell versions of their AI systems that fuel harmful trends and serve as instruments of war.<\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>&#8220;I am deeply concerned that centralized access to AI technologies makes it possible to frame things that are not genuinely security-related as security issues \u2014 for example, the discussions around &#8216;psychosis&#8217; induced by <span data-descr=\"large language models\" class=\"old_tooltip\">LLMs<\/span>. This poses a risk of harm to unique or marginalized cultures and their values,&#8221; the user noted.<\/p>\n<\/blockquote>\n<p>Buterin shares many of these concerns.<\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>&#8220;I think the greatest risks will come from militaries and other powerful structures that can exempt themselves from the safety rules that apply to everyone else by default,&#8221; added the Ethereum co-founder.<\/p>\n<\/blockquote>\n<p>Utah teapot cited AI startup Anthropic as an example. 
It develops alternative versions of its &#8216;civilian&#8217; models and supplies them to governments, where they can be used for military or intelligence operations.<\/p>\n<p>Buterin stressed that the likelihood of an AI-driven catastrophe for humanity increases if there is only a single superintelligent AI with a will and agency of its own.<\/p>\n<p>If the environment is pluralistic, no single system can fully control the situation. However, creating such an environment requires more than market mechanisms alone: it takes deliberate effort, including changes to laws and incentives that large corporations may not favor.<\/p>\n<p>Back in 2022, India&#8217;s Minister of Information Technology, Rajeev Chandrasekhar, called for the development of global standards to ensure AI safety for humans.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Humanity is not sufficiently adept at aligning global values to ensure that a superintelligent AI will benefit people.<\/p>\n","protected":false},"author":1,"featured_media":26511,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"select":"1","news_style_id":"1","cryptorium_level":"","_short_excerpt_text":"Humanity struggles to align global values for AI's benefit.","creation_source":"","_metatest_mainpost_news_update":false,"footnotes":""},"categories":[3],"tags":[438,1138,200],"class_list":["post-26510","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news-and-analysis","tag-artificial-intelligence","tag-opinions","tag-vitalik-buterin"],"aioseo_notices":[],"amp_enabled":true,"views":"258","promo_type":"1","layout_type":"1","short_excerpt":"Humanity struggles to align global values for AI's 
benefit.","is_update":"","_links":{"self":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/26510","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/comments?post=26510"}],"version-history":[{"count":1,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/26510\/revisions"}],"predecessor-version":[{"id":26512,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/26510\/revisions\/26512"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/media\/26511"}],"wp:attachment":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/media?parent=26510"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/categories?post=26510"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/tags?post=26510"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}