In 2017, Facebook’s content-ranking algorithms contributed to killings and other atrocities carried out by Myanmar’s military against the Muslim Rohingya minority, according to a report by the international human rights organisation Amnesty International.
In a 74-page report, the advocates catalogued abuses against the Rohingya over the past five years amid systematic persecution and apartheid. Since 2017, the genocide has claimed more than 25,000 lives.
According to the organisation, Meta has substantially contributed to ethnic cleansing in Myanmar, profiting from divisions and hatred.
“The widespread dissemination of messages inciting cruelty towards the Rohingya and calling for their rights to be restricted, as well as other inhumane content, fuelled long-standing discrimination and significantly increased the risk of violence,” the report says.
At the end of 2016, the Myanmar Armed Forces began a series of crackdowns in Rakhine State, where much of the Rohingya population lived in crowded ghettos. Numerous human rights abuses were documented, including beatings, killings, rapes, arbitrary arrests and forced labour.
Satellite imagery also showed the military burning thousands of houses. The abuses, many of them carried out by radical Buddhist nationalists, intensified in early 2017 and triggered a surge of counterattacks by rebels.
That year, the country’s armed forces launched so-called “clearance operations”, a genocidal campaign that involved artillery, attack helicopters and anti-personnel mines.
According to the report, social media platforms such as Facebook helped extremist nationalists persecute and dehumanise the Rohingya by circulating vast volumes of anti-Rohingya content.
“People used to listen to their religious leaders, and when those leaders, together with the government, began sharing hateful statements on the platform, public opinion shifted,” Amnesty International quotes Rohingya schoolteacher Mohamed Ayas as saying.
According to Agnes Callamard, Amnesty International’s Secretary General, even before the “escalation of atrocities” Facebook’s algorithms were fuelling hostility toward minorities and contributing to real-world violence. While Myanmar’s military was committing crimes against humanity, Meta was profiting from the echo chamber of hatred created by its toxic algorithms, she added.
Callamard also said the company must be held to account and made to compensate all those harmed by the consequences of its reckless actions.
As one of the “countless” examples of the dehumanisation of the Rohingya, Amnesty International highlighted a post by Min Aung Hlaing, commander-in-chief of Myanmar’s armed forces. In September 2017 he stated on Facebook that “there is absolutely no Rohingya race in Myanmar.”
The tech giant blocked his account only a year later.
Meta’s Director of Public Policy for Emerging Markets in the APAC region, Rafael Frankel, said the company supports efforts to hold the military to account for its crimes against the Rohingya.
“To this end, we have made voluntary, lawful data disclosures to the UN Investigative Mechanism for Myanmar and to The Gambia, and we are currently participating in the OECD complaint process,” he said.
In November 2021, the Israeli army deployed an extensive facial recognition system to track Palestinians in the West Bank.
In December 2020, Alibaba acknowledged that it had developed AI technology to identify the Uyghur minority in China.
