Facebook engineers have found that the company's algorithms remove only a small fraction of the content that violates the platform's rules. This is according to a new investigation by The Wall Street Journal, based on internal documents from the social network.
According to the report, two years ago Facebook cut the time its review teams spent on user reports of hate speech and reduced the overall volume of reports they handled, shifting instead to AI-driven post moderation. According to employees, this change inflated the technology's apparent success in the company's public metrics.
Another document shows that in March 2021 an internal research team found that Facebook's automated systems removed posts accounting for only 3% to 5% of views of hate speech on the platform. The same systems flagged and removed about 0.6% of all content that violated the platform's policy on violence and incitement.
Facebook's Vice President of Integrity, Guy Rosen, disputed the investigation's findings, saying that thanks to the algorithms the prevalence of hate speech on the social network has fallen by 50% over the past three years.
"Data from the leaked documents are being used to construct a narrative that the technology we use to combat hate speech is inadequate and that we deliberately misrepresent our progress. This is not true," he said.
According to Rosen, the company does not tolerate the use of its platform to incite hatred and is transparent about its efforts to curb it.
“These documents show that our work on ensuring integrity is a multi-year endeavour. While we will never achieve perfection, our teams continually work on improving systems, identifying problems and seeking solutions,” he added.
Rosen also contends that it is "incorrect" to judge Facebook's success in fighting hate speech solely by the removal of posts. In his view, the reduction in the visibility of such content is a more meaningful metric. Over the last three quarters, views of such posts fell to 0.05%, or about five views per 10,000, Rosen said.
Earlier in October, a Facebook whistleblower testified at Senate hearings that the company uses algorithms that incite hatred in pursuit of profit.
A week later, Nick Clegg, Facebook's vice president for global affairs and communications, said the company is prepared to give regulators access to its algorithms to demonstrate that they are harmless.
