Social media users trust AI moderation of malicious and hateful content as much as they trust human moderators, according to a Pennsylvania State University study.
The experiment involved 676 participants, who were divided into 18 groups designed to test how the source of moderation affects user trust.
According to the researchers, when participants focused on machine qualities such as accuracy and objectivity, they trusted AI more. However, when reminded of algorithms' inability to make subjective decisions, their trust declined.
Researchers say the findings could help developers build more capable AI content-moderation systems able to analyse large volumes of information. At the same time, they added, it is important that users remain confident such systems will not censor content and will classify it reliably.
The researchers also weighed the pros and cons of manual and automated moderation. According to Maria Molina, associate professor of advertising and public relations at Michigan State University, humans tend to recognise malicious content more accurately but are unable to process large volumes of information.
AI editors, on the other hand, can analyse content quickly, but people often distrust these algorithms and fear censorship.
“When we think about automated content moderation, the question arises whether artificial-intelligence editors infringe on freedom of expression,” Molina said.
The researcher says that combining human moderators and AI would help preserve that balance of trust, and that transparency is one of the key elements of the moderation process.
Trust in automated systems could also be increased by giving users the ability to help the AI decide whether information is harmful.
According to the researchers, relieving human moderators of the need to view thousands of violent and offensive posts could also improve their mental health.
“It is necessary to protect evaluators from the constant exposure to harmful content day in, day out,” said one of the study’s authors, Shyam Sundar.
In future work, the scientists aim not only to assess the level of trust people place in AI moderators but also to explain how those moderators work.
In August, Oracle launched a test of content-moderation algorithms and models on TikTok.
In June, more than 30 technology companies joined the EU initiative to combat disinformation.
