Microsoft announced a partnership with StopNCII to combat deepfake pornography on its Bing search engine.
StopNCII’s tools enable victims of deepfakes to create a digital fingerprint (hash) of explicit images directly on their devices, without the images themselves ever being uploaded. The organisation’s partners then use these hashes to find and remove matching content from their platforms.
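The matching flow described above can be sketched roughly as follows. This is a simplified illustration, not StopNCII's actual implementation: a cryptographic hash is used here as a stand-in, whereas real systems of this kind rely on perceptual hashing, which also matches slightly altered copies of an image.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    # Illustrative stand-in: a cryptographic hash of the raw bytes.
    # Real systems use perceptual hashes that tolerate resizing,
    # cropping, and re-encoding of the image.
    return hashlib.sha256(image_bytes).hexdigest()

# Hashes submitted by victims; only the fingerprint leaves the device.
blocklist = {fingerprint(b"<victim's image bytes>")}

def should_remove(uploaded: bytes) -> bool:
    # A partner platform checks new uploads against the shared hash database.
    return fingerprint(uploaded) in blocklist
```

The key property is that platforms never receive the original image, only its fingerprint, which is enough to recognise the same content when it is uploaded elsewhere.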
StopNCII already collaborates with Facebook, Instagram, Threads, TikTok, Snapchat, Reddit, Pornhub, and OnlyFans.
Microsoft reported that it took action on 268,899 explicit images during a pilot collaboration with the organisation’s database. Previously, the company relied on a direct reporting tool, but that approach proved insufficient.
“We have heard concerns from victims, experts, and other stakeholders that reporting alone is insufficient in terms of impact and adequately addressing the issue of image availability in search,” the Microsoft blog states.
In July, Google stepped up its own efforts against deepfake pornography: it will exclude such content from search results and downgrade sites that draw numerous complaints. When an image is removed from search, Google’s systems automatically scan for and remove duplicates.
Previously, the search giant faced criticism from former employees and victims for not collaborating with StopNCII, reports Wired. According to media reports, since 2020, Google users in South Korea have reported 170,000 links to unwanted sexual content in search queries and on YouTube.
The proliferation of deepfake pornography has been exacerbated by advances in artificial intelligence, which make generating such content easy. StopNCII’s tools, however, work only for individuals over 18, even though the technology is also used to target minors.
In September, media reported on methods of using custom versions of ChatGPT to create pornographic content.
In February, Vitalik Buterin provided guidance on detecting deepfakes.
In August, the office of San Francisco City Attorney David Chiu filed a lawsuit against the owners of 16 of the world’s largest websites that allow women and girls to be “undressed” in photos using AI without their consent.
For more on combating deepfakes and extortion, read ForkLog’s News+ format:
