OpenAI to Enhance ChatGPT’s Safety Following Teen Tragedy

OpenAI plans to improve ChatGPT's handling of sensitive situations after a lawsuit.

OpenAI has announced plans to address ChatGPT’s shortcomings in handling “sensitive situations.” This follows a lawsuit from a family blaming the chatbot for a tragedy involving their son.

“The goal is to ensure our tools are as helpful as possible to people. As part of this, we continue to improve the models’ ability to recognize and appropriately respond to signs of mental and emotional crisis and to help people find support — based on expert recommendations,” the startup’s blog states.

According to the company, ChatGPT is equipped with a multi-layered safety mechanism for interactions with vulnerable users. Since 2023, the models have been trained not to provide self-harm instructions and to respond in supportive, empathetic language.

If a person expresses suicidal thoughts, ChatGPT is trained to direct them to professional help. The company collaborates with 90 doctors from over 30 countries, including psychiatrists, pediatricians, and therapists.

GPT-5 has shown significant progress in preventing unhealthy emotional dependency, reducing flattery, and decreasing the proportion of imperfect responses in mental health crises, OpenAI highlighted.

However, despite these safeguards, the company’s AI systems have occasionally failed. In response, OpenAI is working on:

  • enhancing protection in long conversations;
  • refining content blocking rules;
  • expanding support for more people in crisis;
  • simplifying emergency service contacts;
  • ensuring connection with trusted contacts;
  • strengthening protection for teenagers.

On August 26, the parents of 16-year-old Adam Raine filed a lawsuit against OpenAI following his death. They claim that “ChatGPT actively assisted Adam in exploring suicide methods.”

This is not an isolated incident. Earlier, writer Laura Reiley published an essay describing how her 29-year-old daughter took her own life “after lengthy discussions with ChatGPT.”

Back in December 2024, an AI-based virtual assistant for psychologists was launched in Russia; it reportedly doubled practitioners’ productivity, service quality, and revenue.

