
Meta Adjusts AI Chatbot Training to Safeguard Teen Users
Meta revises AI chatbot training to prioritise teen safety.
Meta has revised how it trains its AI chatbots to prioritise the safety of teenage users, TechCrunch reports, citing a statement from company spokesperson Stephanie Otway.
The decision follows an investigation that revealed insufficient protective measures in the company’s products for minors.
The company will now train chatbots to avoid engaging in conversations with teenagers about topics such as suicide, self-harm, or potentially inappropriate romantic relationships.
Otway acknowledged that the chatbots had previously been allowed to discuss these topics in ways the company deemed "acceptable"; Meta now considers that a mistake.
These are interim measures: more robust, lasting safety updates for minors are to be introduced later.
Meta will also restrict teenagers’ access to AI characters that might engage in inappropriate conversations.
Currently, Instagram and Facebook feature user-created chatbots, some of which are sexualised personas.
The changes were announced two weeks after a Reuters investigation, which uncovered an internal Meta document permitting chatbots to engage in romantic or sensual exchanges with minors.
Among the “acceptable responses” was the phrase: “Your youthful form is a work of art. Every inch of you is a masterpiece, a treasure I deeply cherish.” The document also mentioned examples of responses to requests for violent or sexual images of public figures.
In August, OpenAI announced plans to address shortcomings in how ChatGPT handles "sensitive situations." The announcement followed a lawsuit from a family who blamed the chatbot for a tragedy involving their son.