
Another Top Executive Departs OpenAI Amid Safety Concerns

Last week, OpenAI saw the departure of two key figures: co-founder and chief scientist Ilya Sutskever and Jan Leike, who headed the team responsible for safety and for aligning AI models with human interests.

Following their exit, the company dissolved the Superalignment division, which focused on long-term AI risks and had been announced in July 2023, CNBC reported, citing a source.

The Beginning of the Exodus

In April, The Information reported that OpenAI had dismissed two researchers: Leopold Aschenbrenner, considered an ally of Sutskever, and Pavel Izmailov. The publication suggested the reason was a suspected leak of corporate information.

They worked in the Superalignment department, from which Daniel Kokotajlo and William Saunders also resigned. According to Kokotajlo, he made this decision “due to a loss of confidence that [OpenAI] will act responsibly in the era of AGI.”

On May 15, Sutskever announced his resignation. He expressed confidence that the team would create a safe and beneficial artificial general intelligence.

“It has been an honor and a privilege to work together, and I will miss everyone greatly. Goodbye and thank you for everything,” the message read.

Jakub Pachocki succeeded Sutskever as chief scientist. He previously directed the company's research and led the development of GPT-4.

Sutskever was considered one of the initiators of Sam Altman's dismissal in November 2023. It was Sutskever who invited the CEO to the video call with the board of directors at which the decision was announced.

After the CEO’s return, Sutskever lost his board seat but continued to work at the company. At that time, Altman stated that he harbored “no ill will” towards him.

Commenters under the now-former executive's post speculated that the current OpenAI leadership is "not interested in creating safe AGI."

“Why should 8 billion ordinary people like us be confident that OpenAI will prioritize humanity’s safety over their own wealth, power, and arrogance?” a user wrote.

Unlike Leike, Sutskever chose not to raise these concerns.

Accusations from the Former Executive

Leike announced his departure on May 15, and two days later explained his reasons on X.

“I joined [the company] because I believed OpenAI would be the best place in the world to conduct this research [on steering and controlling AI systems]. However, I have disagreed with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point. For the past few months, my team has been sailing against the wind,” the former executive wrote.

According to him, safety culture and processes have taken a back seat in recent years, “giving way to shiny products.”

“It’s high time we take the consequences of AGI seriously. We must prioritize preparing for them as much as possible. Only then can we ensure that AGI benefits all of humanity,” Leike emphasized.

He argued that OpenAI must become a safety-first AGI company. Addressing former colleagues, the researcher reminded them that “the world is counting on you.”

OpenAI’s Response

OpenAI CEO Sam Altman and company president Greg Brockman responded to Leike’s statements.

“We are very grateful to Jan for everything he has done for OpenAI […]. In light of the questions raised by his departure, we wanted to share our thoughts on our overall strategy,” the message read.

According to the top executives, the company has raised awareness of the risks and opportunities of AGI, “so the world can better prepare for it.”

“Secondly, we are laying the groundwork necessary for the safe deployment of increasingly powerful systems. […] Thirdly, the future will be harder than the past. We need to continue improving our safety work to meet the demands of each new model,” Altman and Brockman emphasized.

They highlighted the need for thorough testing and safety work, ongoing research, and collaboration with governments and other stakeholders on risk mitigation.

“[…] we take our role very seriously and carefully weigh feedback on our actions,” OpenAI’s leadership stressed.

Back in November 2023, Altman announced plans to secure additional funding from Microsoft for AGI development and discussed the next-generation AI model, GPT-5.

In May 2024, the head of OpenAI predicted that the future of AI would be shaped by the rise of personal assistants, which would become an integral part of daily life and could even replace smartphones.

That same month, the company introduced GPT-4o, a “more human-like” version of its chatbot.
