Sam Altman on Job Losses Due to AI and the Tragedy of a Former OpenAI Employee — An Interview

Many people who currently do support work over the phone or at a computer will lose their jobs to AI, OpenAI CEO Sam Altman said in an interview with Tucker Carlson.

However, there will be professions less affected by AI, where “human contact and deep connection” are important. Altman cited nurses as an example.

As for programmers, he believes artificial intelligence allows them to accomplish far more and to earn higher salaries.

“It turns out the world needed far more programmers and software than it could previously create. So there is a huge unmet demand,” Altman emphasized.

As for the next 5-10 years, the OpenAI CEO is uncertain about how the number of jobs will change. He believes that the displacement of some professions may lead to the emergence of new ones.

“I recently heard that half of jobs do not disappear but rather transform every 75 years. This is essentially a kind of half-life. […] I can quite believe in a world where in 75 years half of the people will be doing something new, and the rest will be doing something that somewhat resembles today’s professions,” he stated.

Carlson raised the topic of the death of Suchir Balaji, a researcher who had worked at OpenAI for four years and was found dead in November 2024. In October, Balaji had published an essay criticizing the use of copyrighted data for AI training and questioning whether it met the standard of “fair use.” The police investigation found no signs of foul play, and the city’s medical examiner ruled the death a suicide.

Balaji’s mother has disputed that conclusion. According to Carlson, in a conversation with the journalist she blamed Altman for her son’s death. A private detective she hired allegedly found signs of a struggle in Balaji’s apartment, and the interviewer pointed to the discovery of a wig, a food order placed hours before the incident, cut surveillance-camera wires, Balaji’s recent return from a vacation with friends, and the absence of any signs of suicidal tendencies.

“He committed suicide. That’s what I think,” Altman noted.

The entrepreneur said he had spent a lot of time examining the incident and that, in his view, “it looks like suicide.” In support of that conclusion, the head of OpenAI cited Balaji’s purchase of a gun, the medical report, and a second report on the bullet’s trajectory.

“After the first batch of information, I thought it didn’t look like suicide. […] But then, when the second set of details emerged, I thought ‘okay’,” he stated.

Carlson also spoke with Altman about God, religion, the entrepreneur’s moral principles and how they are instilled in ChatGPT, and privacy. During the conversation, the journalist asked whether OpenAI would allow the military to use ChatGPT to make decisions about killing people.

“If I were making rifles, I would spend a lot of time thinking about how their primary purpose is to kill, whether people, animals, or someone else. If I were making kitchen knives, I would still understand that they would kill a certain number of people a year. But as for something like ChatGPT, […] I realize that people in our armed forces use it for advice on how to do their jobs,” Altman replied.

In July, the US Department of Defense awarded contracts worth up to $200 million each to Anthropic, Google, OpenAI, and xAI to develop AI solutions for national security.
