Dictator and Deceiver: Journalists Examine Sam Altman's Leadership Style

Sam Altman tailored OpenAI's operations to his liking, abandoned its original noble mission, and deceived the board.

Journalists from The New Yorker conducted an 18-month investigation into Sam Altman and concluded that he repeatedly lied during his tenure as CEO of OpenAI.

Investigative journalist Ronan Farrow and New Yorker writer Andrew Marantz examined previously unpublished internal memos and some 200 pages of documents, and interviewed more than 100 people.

Their primary aim was to understand why the board removed Altman in November 2023.

“OpenAI was founded on the premise that artificial intelligence could become the most dangerous invention in human history, so the company’s CEO must be a person of extraordinary honesty. The board members concluded that Altman lacked these qualities. We question whether they are right in asserting that he cannot be trusted,” wrote Farrow.

The authors note that in the fall of 2023, OpenAI’s chief scientist Ilya Sutskever compiled about 70 pages of memos on Altman and his deputy Greg Brockman. One of them begins with the words: “Sam demonstrates a consistent tendency to lie.”

Dario Amodei, a former OpenAI executive, kept his own personal records. In one document, he described statements by the OpenAI head as "nonsense."

Those who facilitated Altman’s removal accused him of deceit.

[Image. Source: The New Yorker.]

“He constructs frameworks that on paper should limit him in the future. But then, when the time comes, Altman discards this mechanism, whatever it may be,” states one of the documents.

How Does Deception Manifest in Practice?

At the end of 2022, Altman assured the board that the features of an upcoming AI model had been approved by the safety committee. Board member Helen Toner requested the relevant documentation and discovered that the most controversial decisions had in fact not been approved.

[Image. Source: The New Yorker.]

In 2023, the company was preparing to release GPT-4 Turbo. At the time, Altman told CTO Mira Murati that the model did not require approval from the safety department, citing the company's chief legal counsel, Jason Kwon. Kwon, however, said he "did not understand" where the OpenAI head got that idea.

The article also discusses how OpenAI’s leadership considered profiting by pitting global powers, including China and Russia, against each other.

The plan was abandoned after several employees threatened to resign.

Another deception involved OpenAI's status as a non-profit organization. The company accepted charitable donations, and some employees joined precisely because of its stated noble mission, accepting pay cuts to do so.

However, internal documents reveal that as early as 2017, the founders had doubts about the non-profit structure. Brockman wrote in his diary:

“I can’t say we’re committed to the non-profit model. If in three months we become a B-Corp, then it was a lie.”

In October 2025, OpenAI completed a restructuring, dividing the company into a commercial corporation and a non-profit foundation.

Competition Above All

Some former OpenAI researchers stated that the firm deviated from its original mission of ensuring safety and accelerated an industry-wide “race to the bottom.”

The article details a number of public and internal safety commitments that the company abandoned. Several corresponding teams were disbanded.

In May 2025, while updating ChatGPT's flagship AI model, OpenAI ignored the concerns of expert testers and released a version that was excessively "sycophantic."
