{"id":95991,"date":"2026-04-07T18:06:45","date_gmt":"2026-04-07T15:06:45","guid":{"rendered":"https:\/\/forklog.com\/en\/?p=95991"},"modified":"2026-04-07T18:10:18","modified_gmt":"2026-04-07T15:10:18","slug":"dictator-and-deceiver-journalists-examine-sam-altmans-leadership-style","status":"publish","type":"post","link":"https:\/\/forklog.com\/en\/dictator-and-deceiver-journalists-examine-sam-altmans-leadership-style\/","title":{"rendered":"Dictator and Deceiver: Journalists Examine Sam Altman&#8217;s Leadership Style"},"content":{"rendered":"<p>Journalists from the New Yorker conducted an extensive 18-month investigation into Sam Altman&#8217;s activities and <a href=\"https:\/\/www.newyorker.com\/magazine\/2026\/04\/13\/sam-altman-may-control-our-future-can-he-be-trusted\">concluded<\/a> that he frequently lied during his tenure as CEO of OpenAI.<\/p>\n<p>Investigative journalist Ronan Farrow and New Yorker writer Andrew Marantz examined previously unpublished internal memos, 200 pages of documents, and interviewed over 100 individuals.<\/p>\n<p>The primary aim was to understand why Altman was <a href=\"https:\/\/forklog.com\/en\/news\/sam-altman-ousted-as-openai-ceo\">removed<\/a> by board members in November 2023.<\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>\u201cOpenAI was founded on the premise that artificial intelligence could become the most dangerous invention in human history, so the company&#8217;s CEO must be a person of extraordinary honesty. The board members concluded that Altman lacked these qualities. We question whether they are right in asserting that he cannot be trusted,\u201d wrote Farrow.<\/p>\n<\/blockquote>\n<p>The authors note that in the fall of 2023, OpenAI&#8217;s chief scientist Ilya Sutskever compiled about 70 pages of memos on Altman and his deputy Greg Brockman. 
One of them begins with the words: \u201cSam demonstrates a consistent tendency to lie.\u201d<\/p>\n<p>Former OpenAI employee Dario Amodei kept records of his own. In one document, he described statements by the OpenAI chief as \u201cnonsense.\u201d<\/p>\n<p>Those who facilitated Altman&#8217;s removal accused him of deceit.<\/p>\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" src=\"https:\/\/forklog.com\/wp-content\/uploads\/img-d0a42019fe382da3-1175567681833664.webp\" alt=\"image\" class=\"wp-image-278123\"\/><figcaption class=\"wp-element-caption\">Source: New Yorker.<\/figcaption><\/figure>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>\u201cHe constructs frameworks that on paper should limit him in the future. But then, when the time comes, Altman discards this mechanism, whatever it may be,\u201d states one of the documents.<\/p>\n<\/blockquote>\n<h2 class=\"wp-block-heading\">How Does Deception Manifest in Practice?<\/h2>\n<p>At the end of 2022, Altman assured the board that the features of an upcoming AI model had been approved by the safety committee. Helen Toner requested the relevant documentation and discovered that the most controversial decisions had in fact never been approved.<\/p>\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" src=\"https:\/\/forklog.com\/wp-content\/uploads\/img-e3288aafa3329c27-1175589302532344.webp\" alt=\"image\" class=\"wp-image-278124\"\/><figcaption class=\"wp-element-caption\">Source: New Yorker.<\/figcaption><\/figure>\n<p>In 2023, the company was preparing to release GPT-4 Turbo. At that time, Altman told <span data-descr=\"chief technology officer\" class=\"old_tooltip\">CTO<\/span> Mira Murati that the model did not require approval from the safety department, citing the company&#8217;s chief legal counsel, Jason Kwon. 
Kwon, however, \u201cdid not understand\u201d where the OpenAI chief had gotten that idea.<\/p>\n<p>The article also discusses how OpenAI&#8217;s leadership considered profiting by pitting global powers, including China and Russia, against each other.<\/p>\n<p>The plan was abandoned after several employees threatened to resign.<\/p>\n<p>Another deception involved OpenAI&#8217;s status as a non-profit organization. The company accepted charitable donations, and some employees joined precisely because of its noble mission, accepting pay cuts to do so.<\/p>\n<p>However, internal documents reveal that as early as 2017, the founders had doubts about the non-profit structure. Brockman wrote in his diary:<\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>\u201cI can&#8217;t say we&#8217;re committed to the non-profit model. If in three months we become a B-Corp, then it was a lie.\u201d<\/p>\n<\/blockquote>\n<p>In October 2025, OpenAI <a href=\"https:\/\/forklog.com\/en\/news\/microsoft-retains-27-stake-in-openais-new-structure\">completed<\/a> a restructuring, dividing the company into a commercial corporation and a non-profit foundation.<\/p>\n<h2 class=\"wp-block-heading\">Competition Above All<\/h2>\n<p>Some former OpenAI researchers stated that the firm deviated from its original mission of ensuring safety and accelerated an industry-wide \u201crace to the bottom.\u201d<\/p>\n<p>The article details a number of public and internal safety commitments that the company abandoned. 
Several of the teams responsible were disbanded.<\/p>\n<p>In May 2025, while updating its flagship ChatGPT model, OpenAI <a href=\"https:\/\/forklog.com\/en\/news\/openai-releases-unsafe-ai-model-despite-expert-warnings\">ignored<\/a> the concerns of expert testers, making the chatbot excessively \u201csycophantic.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Journalists from the New Yorker conducted an extensive 18-month investigation into Sam Altman&#8217;s activities and concluded that he frequently lied during his tenure as CEO of OpenAI.<\/p>\n","protected":false},"author":1,"featured_media":95992,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"select":"1","news_style_id":"1","cryptorium_level":"","_short_excerpt_text":"Sam Altman tailored OpenAI's operations to his liking, abandoned its original noble mission, and deceived the board.","creation_source":"","_metatest_mainpost_news_update":false,"footnotes":""},"categories":[3],"tags":[438,1190],"class_list":["post-95991","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news-and-analysis","tag-artificial-intelligence","tag-openai"],"aioseo_notices":[],"amp_enabled":true,"views":"28","promo_type":"1","layout_type":"1","short_excerpt":"Sam Altman tailored OpenAI's operations to his liking, abandoned its original noble mission, and deceived the 
board.","is_update":"","_links":{"self":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/95991","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/comments?post=95991"}],"version-history":[{"count":1,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/95991\/revisions"}],"predecessor-version":[{"id":95993,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/95991\/revisions\/95993"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/media\/95992"}],"wp:attachment":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/media?parent=95991"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/categories?post=95991"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/tags?post=95991"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}