ForkLog

OpenAI to Introduce Parental Controls in ChatGPT

The AI startup OpenAI plans to redirect sensitive conversations to reasoning models and implement parental controls.

These measures respond to recent safety incidents in which ChatGPT failed to recognize signs of users’ mental health crises.

On August 28, The Wall Street Journal reported another case: Stein-Erik Soelberg used the chatbot to fuel his paranoid belief that he was the victim of a grand conspiracy. His delusions escalated until he killed his mother and took his own life.

“Soon we will begin redirecting some sensitive conversations—such as when our system detects signs of acute distress—to a reasoning model like GPT-5-thinking. It can provide more helpful and beneficial responses, regardless of which model the person initially chose,” the startup’s blog states.

OpenAI claims its “thinking” neural networks are more resilient to hostile prompts.

Notifications about potentially harmful content are already integrated into ChatGPT.

Notification in ChatGPT about a suspicious request. Source: ForkLog.

Parental controls will let parents link their accounts to their children’s accounts via an email invitation.

Parents will be able to monitor how ChatGPT responds to their children and will have the option to disable the memory and chat history features, which is intended to reduce the likelihood of children developing dependency on or attachment to the chatbot.

Among other upcoming features are notifications that a child is in a state of “acute distress.”

Back in July, OpenAI introduced a special study mode for students in ChatGPT, which helps them work through tasks step by step rather than simply providing ready answers.
