Apple vows not to turn its child-abuse content detection algorithm into a surveillance tool

Apple has released a six-page document answering questions about how its technology for detecting child sexual abuse material (CSAM) in users' photos works.

The company rules out any possibility of repurposing the tool to search users' phones and computers for material other than CSAM. According to Apple, it will not broaden the scope of the scanning capabilities and will refuse any such demands from authorities.

“We have previously faced government demands to create and implement changes that degrade user privacy, and we have steadfastly refused those demands. We will continue to refuse them in the future,” the company said.

Apple explained that the features will initially launch only in the United States. The company pledged to conduct a thorough legal assessment for each country before launching the tool there, to prevent abuse by authorities.

In addition to scanning photos as they are uploaded to iCloud, Apple plans to scan all photos already stored on its cloud servers.

Despite Apple's explanations, privacy advocates and security researchers remain concerned. Security expert Matthew Green suggested that the system has vulnerabilities that law enforcement could exploit.

Earlier in August, Apple described the tool for scanning user photos for signs of child abuse.

Earlier in June, Apple introduced passwordless authentication using Face ID and Touch ID.

In December 2020, Apple released an update for iOS that allows users to disable data collection within apps. The new rules sparked a wave of outrage from app developers, led by Facebook and Google.
