
Tech Giants Commit to Screening AI Datasets for CSAM
Tech giants such as Google, Meta, OpenAI, Microsoft, and Amazon have agreed to take measures to prevent the use of child sexual abuse material (CSAM) in training datasets for artificial intelligence.
The companies signed a new set of principles aimed at curbing the spread of such content. They pledged to:
- ensure that training datasets do not contain CSAM;
- avoid datasets with a high risk of including harmful materials;
- remove CSAM images or links from data sources.
The companies also committed to “stress testing” their AI models to ensure they cannot generate CSAM imagery. Developers will be allowed to release new models only after they have been assessed for child safety.
Other signatories of the agreement include Anthropic, Civitai, Metaphysic, Mistral AI, and Stability AI.
AI Exacerbates CSAM Spread
Generative AI has heightened concerns about deepfakes, including the spread of fake CSAM photos online. In December 2023, Stanford researchers published a report revealing that a popular AI training dataset contained links to such images.
They also found that the National Center for Missing and Exploited Children’s (NCMEC) hotline was overwhelmed with AI-generated CSAM images.
The non-profit organization Thorn warns that AI-generated images will hinder efforts to identify real victims, create additional demand for CSAM, contribute to the victimization of children, and make it easier to share such material.
In addition to adopting new commitments, Google also promised to increase advertising grants for NCMEC. Susan Jasper, the company’s Vice President for Trust and Safety Solutions, noted that supporting these initiatives raises public awareness and provides people with tools to identify and report abuse.
In October 2023, British experts reported the rapid spread of AI-generated CSAM.