{"id":87517,"date":"2023-11-25T13:00:00","date_gmt":"2023-11-25T11:00:00","guid":{"rendered":"https:\/\/forklog.com\/en\/?p=87517"},"modified":"2025-09-13T12:20:11","modified_gmt":"2025-09-13T09:20:11","slug":"on-acceleration-what-e-acc-is-and-how-it-could-have-influenced-sam-altmans-dismissal","status":"publish","type":"post","link":"https:\/\/forklog.com\/en\/on-acceleration-what-e-acc-is-and-how-it-could-have-influenced-sam-altmans-dismissal\/","title":{"rendered":"On acceleration: what e\/acc is and how it could have influenced Sam Altman&#8217;s dismissal"},"content":{"rendered":"<p>The drama surrounding <a href=\"https:\/\/forklog.com\/en\/news\/openai-scandal-why-sam-altman-was-fired-and-how-attempts-were-made-to-reinstate-him\">OpenAI CEO Sam Altman<\/a> has given rise to various theories about the reasons for his sudden dismissal. The trigger was the company&#8217;s press release, which <a href=\"https:\/\/forklog.com\/en\/news\/sam-altman-ousted-as-openai-ceo\">stated<\/a> that he \u201cwas not always candid with the board.\u201d But what exactly was he not candid about?<\/p>\n<p>According to a Reuters <a href=\"https:\/\/forklog.com\/en\/news\/media-openai-board-warned-of-a-breakthrough-ahead-of-sam-altmans-ouster\">report<\/a>, shortly before the dismissal OpenAI researchers warned the board about a major AI breakthrough that \u201ccould threaten humanity.\u201d Many employees believed the development could be a breakthrough toward artificial general intelligence (AGI).<\/p>\n<p>The speculation shed light on the divergent views within the AI community. 
Essentially, it split into opposing camps: proponents of <span data-descr=\"Effective Accelerationism, e\/acc\" class=\"old_tooltip\">effective accelerationism<\/span> and supporters of <span data-descr=\"Deceleration\" class=\"old_tooltip\">deceleration<\/span> \u2014 \u201cdecels.\u201d Alex Wilhelm of TechCrunch also <a href=\"https:\/\/techcrunch.com\/2023\/11\/20\/e-acc-doomers-decels-openai-altman\/\" target=\"_blank\" rel=\"noopener\" title=\"\">singles out<\/a> a third group: <span data-descr=\"AI doomers\" class=\"old_tooltip\">AI doomers<\/span>.<\/p>\n<h2 class=\"wp-block-heading\">Speed and concern<\/h2>\n<p>The views of accelerationists, decels, and AI doomers differ along two axes:<\/p>\n<ul class=\"wp-block-list\">\n<li><strong>speed<\/strong> \u2014 how quickly AI technologies are developing;<\/li>\n<li><strong>concern<\/strong> \u2014 whether they are developing too quickly and, consequently, how carefully people should use and deploy AI.<\/li>\n<\/ul>\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/lh7-eu.googleusercontent.com\/JdODF3mSNELE_euK7dOA3a7dyumWnOSWIWU4aJpfjMbXNm-wAcnyUA101A0QDnVArr2moqHhrqRO0ys9um9DEClMuC_zPwI_6cbvhlZkAXukyn9qEdd4rx3iB1JFww4biJpp2W3OawOKdIIjYEHUPkA\" alt=\"On acceleration: what e\/acc is and how it could have influenced Sam Altman's dismissal\"\/><figcaption class=\"wp-element-caption\">Attitudes toward the pace of AI development. 
Data: <a href=\"https:\/\/techcrunch.com\/2023\/11\/20\/e-acc-doomers-decels-openai-altman\/\">TechCrunch<\/a>.<\/figcaption><\/figure>\n<h2 class=\"wp-block-heading\">e\/acc and a16z<\/h2>\n<p>Effective accelerationism is an ideology and movement grounded in the theory of <a href=\"https:\/\/ru.wikipedia.org\/wiki\/%D0%90%D0%BA%D1%81%D0%B5%D0%BB%D0%B5%D1%80%D0%B0%D1%86%D0%B8%D0%BE%D0%BD%D0%B8%D0%B7%D0%BC\" target=\"_blank\" rel=\"noopener\" title=\"\">accelerationism<\/a>, tracing back to the work of French post-structuralists <a href=\"https:\/\/forklog.com\/en\/news\/how-gilles-deleuze-imagined-an-analogue-web3\">Gilles Deleuze<\/a> and F\u00e9lix Guattari. Advocates of this concept tend to sit at the extremes of the conventional political spectrum. Since the 1990s, for instance, accelerationist theory has been actively <a href=\"https:\/\/syg.ma\/@igor-stavrovsky\/nik-lend-bystroie-i-niepristoinoie-vviedieniie-v-aksielieratsionizm\" target=\"_blank\" rel=\"noopener\" title=\"\">promoted<\/a> by British philosopher Nick Land, who chose the term \u201c<a href=\"https:\/\/en.wikipedia.org\/wiki\/Dark_Enlightenment\" target=\"_blank\" rel=\"noopener\" title=\"\">Dark Enlightenment<\/a>\u201d to describe his views.\u00a0<\/p>\n<p>The name e\/acc can also be read as a nod to \u201ceffective altruism\u201d (EA), whose popularity rose and fell along with former FTX CEO Sam Bankman-Fried. 
The idea of EA is to \u201cfind the best ways to help others and apply them in practice.\u201d<\/p>\n<p>In the recently published \u201c<a href=\"https:\/\/a16z.com\/the-techno-optimist-manifesto\/\" target=\"_blank\" rel=\"noopener\" title=\"\">The Techno-Optimist Manifesto<\/a>\u201d by a16z co-founder Marc Andreessen, one can find this definition:<\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><em>\u201cWe believe in accelerationism \u2014 a conscious and purposeful stimulation of technological development to ensure the law of accelerating returns is fulfilled, so that the spiral of technocapital growth continues indefinitely.\u201d<\/em><\/p>\n<\/blockquote>\n<p>The manifesto also states:<\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><em>\u201cWe have enemies. Our enemies are not bad people, but rather bad ideas. For six decades our modern society has been subjected to mass demoralisation \u2014 against technology and against life \u2014 under various names: \u2018existential risk\u2019, \u2018resilience\u2019, \u2018ESG\u2019, \u2018sustainable development goals\u2019, \u2018corporate social responsibility\u2019, \u2018stakeholder capitalism\u2019, \u2018the precautionary principle\u2019, \u2018trust and safety\u2019, \u2018technoethics\u2019, \u2018risk management\u2019, \u2018anti-growth\u2019, \u2018limits to growth\u2019.\u201d<\/em><\/p>\n<\/blockquote>\n<p>These excerpts help summarise the core ideas and values of e\/acc. 
Its supporters advocate deliberately pushing society forward through aggressive technological innovation, unrestrained by concerns about potential negative effects.<\/p>\n<p>The central argument of e\/acc is that new technologies can lead to radical social changes, and the long-term benefits are so large that the potential risks can be ignored.<\/p>\n<p>The \u201cTechno-Optimist Manifesto\u201d also describes the decels:<\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><em>\u201cOur enemy is deceleration, anti-growth, depopulation \u2014 a nihilistic desire that is all the rage among our elite: to have fewer people, less energy, more suffering and death.\u201d<\/em><\/p>\n<\/blockquote>\n<h2 class=\"wp-block-heading\">Decels and doomers<\/h2>\n<p>Decels still advocate AI development, but with far greater caution. Among them are many technologists, such as <span data-descr=\"Machine Intelligence Research Institute\" class=\"old_tooltip\">MIRI<\/span> co-founder <a href=\"https:\/\/www.youtube.com\/watch?v=gA1sNLL6yg4&#038;t\" target=\"_blank\" rel=\"noopener\" title=\"\">Eliezer Yudkowsky<\/a>.<\/p>\n<p>In early 2023, more than a hundred scientists, independent researchers, and businesspeople, including Elon Musk and Steve Wozniak, signed <a href=\"https:\/\/forklog.com\/en\/news\/hundreds-of-experts-call-for-a-six-month-pause-on-training-the-gpt-4-successor\">an open letter<\/a> proposing a six-month pause on training language models more powerful than GPT-4:<\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><em>\u201cPowerful AI systems should be developed only when we are confident that their effects will be positive and the risks manageable,\u201d the letter said.<\/em><\/p>\n<\/blockquote>\n<p>AI doomers, meanwhile, argue that humanity is digging its own grave with the help of new AI 
technologies.<\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><em>\u201cBut since doomism doesn\u2019t earn much money (aside from writing good science fiction), it\u2019s more of an academic position than the stance of tech companies,\u201d<\/em> says Alex Wilhelm of TechCrunch. <em>\u201cAs far as I can tell, e\/acc is winning the current debate largely due to rapidly advancing AI technologies and the fact that a large portion of the economy wants to use them to save money and do more.\u201d<\/em><\/p>\n<\/blockquote>\n<p>For now, then, e\/acc supporters have a capitalist tailwind at their backs.<\/p>\n<h2 class=\"wp-block-heading\">Conclusion<\/h2>\n<p>One can suppose that it was the decels who attempted to remove Altman from his post at OpenAI, which <a href=\"https:\/\/forklog.com\/en\/news\/openai-scandal-why-sam-altman-was-fired-and-how-attempts-were-made-to-reinstate-him\">enraged<\/a> Microsoft CEO Satya Nadella. His company <a href=\"https:\/\/forklog.com\/en\/news\/microsoft-to-invest-10-billion-in-openai-reports-say\">has invested<\/a> $10 billion in OpenAI, and Altman <a href=\"https:\/\/forklog.com\/en\/news\/openai-reaches-agreement-to-reinstate-sam-altman-as-ceo\">returned<\/a> to the role of CEO with a new board of directors.<\/p>\n<p>If his dismissal is viewed as a coup and a bid to \u201cbuild safe AI or do nothing,\u201d it has effectively failed.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>How &#8216;effective accelerationism&#8217; split the AI research 
community.<\/p>\n","protected":false},"author":1,"featured_media":87518,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"select":"1","news_style_id":"1","cryptorium_level":"","_short_excerpt_text":"","creation_source":"","_metatest_mainpost_news_update":false,"footnotes":""},"categories":[1144],"tags":[438,1190,286,1361],"class_list":["post-87517","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-longreads","tag-artificial-intelligence","tag-openai","tag-society","tag-virtual-world-innovations"],"aioseo_notices":[],"amp_enabled":true,"views":"56","promo_type":"1","layout_type":"1","short_excerpt":"","is_update":"","_links":{"self":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/87517","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/comments?post=87517"}],"version-history":[{"count":1,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/87517\/revisions"}],"predecessor-version":[{"id":87519,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/87517\/revisions\/87519"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/media\/87518"}],"wp:attachment":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/media?parent=87517"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/categories?post=87517"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/tags?post=87517"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}