ForkLog

Content Moderators Report 80% Error Rate in AI Systems


Artificial intelligence complicates rather than simplifies the work of content moderators: it fails to reliably identify violence and malicious content, Bloomberg reports, citing interviews with 13 professional content moderators.

A TikTok employee, identified by the pseudonym Kevin, said that over a year ago he was instructed to use an AI program in his work. He estimates its error rate at about 70%. For instance, the AI might flag a low fuel warning on a car's dashboard in a video while overlooking the speedometer showing 200 km/h.

Kevin believes that despite the challenges, employers will continue to replace humans with machines.

Other professional moderators confirmed the claim: AI is being adopted faster than it is learning to filter harmful content effectively.

Experts fear the emergence of an AI-moderated internet in which hate speech, propaganda, child exploitation, and other online threats spread unchecked.

“If we go down the path of implementing AI to reduce the number of security specialists, it will lead to a decrease in safety. We need live people,” said Lloyd Richardson, Chief Technology Officer of the Canadian Centre for Child Protection.

The work of a content moderator is emotionally taxing, and hiring enough people is difficult: users upload more than 20 million videos a day to YouTube alone. Companies are betting on artificial intelligence, but so far it only gets in the way.

Of the 13 moderators Bloomberg spoke with, 12 reported increased difficulty in their work after the integration of neural networks.

“We cannot rely on AI prompts. It is wrong 80% of the time. We don’t even look at its suggestions,” said 36-year-old Janerke Kaidenova, an employee of a Kazakhstani content moderation firm.

Kevin spends his time correcting the system's errors, manually and precisely tagging everything he reviews. This only adds to his workload and takes time away from more urgent tasks.

He is convinced that his meticulous corrections are in effect training the system, and that it will eventually replace him, even though management does not say so explicitly.

Back in May 2024, Reddit announced a partnership with OpenAI, which provided Redditors and moderators with new AI-based features.
