{"id":26454,"date":"2025-08-25T18:13:37","date_gmt":"2025-08-25T15:13:37","guid":{"rendered":"https:\/\/forklog.com\/en\/?p=26454"},"modified":"2025-08-26T10:27:50","modified_gmt":"2025-08-26T07:27:50","slug":"content-moderators-report-80-error-rate-in-ai-systems","status":"publish","type":"post","link":"https:\/\/forklog.com\/en\/content-moderators-report-80-error-rate-in-ai-systems\/","title":{"rendered":"Content Moderators Report 80% Error Rate in AI Systems"},"content":{"rendered":"<p>Artificial intelligence complicates rather than simplifies the work of content moderators. It fails to reliably identify violence and malice, reports <a href=\"https:\/\/www.bloomberg.com\/news\/features\/2025-08-22\/ai-is-replacing-online-moderators-but-it-s-bad-at-the-job?srnd=phx-ai\">Bloomberg<\/a>, citing 13 professional content moderators.<\/p>\n<p>A TikTok employee working under the pseudonym Kevin said that more than a year ago he was instructed to use an AI program in his work. He estimated its error rate at about 70%. For instance, the AI might flag a low fuel warning on a car&#8217;s dashboard in a video while missing the speedometer showing 200 km\/h.<\/p>\n<p>Kevin believes that despite these shortcomings, employers will continue to replace humans with machines.<\/p>\n<p>Professional moderators confirmed the claim: AI is being adopted faster than it is learning to filter harmful content effectively.<\/p>\n<p>Experts fear the emergence of an AI-controlled internet environment in which hate, propaganda, child exploitation, and other online threats spread freely.<\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>\u201cIf we go down the path of implementing AI to reduce the number of safety specialists, it will lead to a decrease in safety. 
We need live people,\u201d said Lloyd Richardson, Chief Technology Officer of the Canadian Centre for Child Protection.<\/p>\n<\/blockquote>\n<p>The work of a content moderator is emotionally taxing, and hiring enough people is difficult. Users upload more than 20 million videos a day to YouTube alone. Companies are betting on artificial intelligence, but so far it is only making the job harder.<\/p>\n<p>Of the 13 moderators Bloomberg spoke with, 12 reported that their work had become more difficult after the integration of neural networks.<\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>\u201cWe cannot rely on AI prompts. It is wrong 80% of the time. We don\u2019t even look at its suggestions,\u201d said 36-year-old Janerke Kaidenova, an employee of a Kazakhstani content moderation firm.<\/p>\n<\/blockquote>\n<p>Kevin spends his time correcting the system\u2019s errors, manually tagging everything he sees with high precision. This only adds to his workload and takes time away from more urgent tasks.<\/p>\n<p>He is convinced that his meticulous corrections are effectively training the system that will eventually replace him, although management does not say so explicitly.<\/p>\n<p>Back in May 2024, Reddit <a href=\"https:\/\/forklog.com\/en\/news\/reddit-shares-surge-12-following-openai-partnership\">announced a partnership<\/a> with OpenAI, which gave Redditors and moderators new AI-based features.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Artificial intelligence complicates rather than simplifies the work of content moderators. 
It fails to reliably identify violence and malice.<\/p>\n","protected":false},"author":1,"featured_media":26455,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"select":"1","news_style_id":"1","cryptorium_level":"","_short_excerpt_text":"AI complicates content moderation, failing to identify violence and malice.","creation_source":"","_metatest_mainpost_news_update":false,"footnotes":""},"categories":[3],"tags":[438],"class_list":["post-26454","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news-and-analysis","tag-artificial-intelligence"],"aioseo_notices":[],"amp_enabled":true,"views":"241","promo_type":"1","layout_type":"1","short_excerpt":"AI complicates content moderation, failing to identify violence and malice.","is_update":"","_links":{"self":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/26454","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/comments?post=26454"}],"version-history":[{"count":1,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/26454\/revisions"}],"predecessor-version":[{"id":26456,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/26454\/revisions\/26456"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/media\/26455"}],"wp:attachment":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/media?parent=26454"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/categories?post=26454"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/tags?post=26454"}],"curies":[{"name":
"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}