
Apple vows not to turn its child-abuse content detection algorithm into a surveillance tool
Apple released a six-page document answering questions about how its technology for scanning users’ photos for child sexual abuse material (CSAM) works.
The company rules out any possibility of repurposing the tool to search users’ phones and computers for material other than CSAM. According to Apple, it will not broaden the scope of the scanning capability and will refuse any such demands from authorities.
“We have previously faced government demands to create and implement changes that degrade user privacy, and we have steadfastly refused those demands. We will continue to refuse them in the future,” the company said.
Apple explained that the features will initially launch only in the United States. The company pledged to conduct a thorough legal assessment for each country before launching the tool there, in order to prevent abuse by authorities.
Key points from the document:
- For CSAM detection to work, iCloud must be enabled: Apple scans only content that is synced with its cloud service.
- Apple does not download CSAM images to the device for comparison. Instead, the algorithm relies on a database of hashes of known CSAM images; these hashes are shipped with the phone’s operating system, so photos being uploaded to the cloud can be compared against the database automatically (see the first sketch after this list).
- iCloud will scan all photos stored in the cloud, not just new ones: besides photos uploaded to iCloud in the future, Apple plans to check everything already stored on its cloud servers.
- The Messages feature that protects children from explicit photos does not transmit data to Apple or law enforcement. Parents of young children are not alerted about every explicit image sent or received: the system first asks the child to confirm they want to view it, and only if the child proceeds do the parents receive a notification (see the second sketch after this list). The feature is available only for Family Sharing groups and must be enabled manually.
- For teens aged 13–17, the system does not notify parents when explicit content is viewed in Messages; it only blurs the photos and warns about the nature of the content.
- Image analysis in Messages does not violate end-to-end encryption. The checks occur directly on the device to which the message arrives. The company does not send this data to the cloud.
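
To make the on-device matching described above concrete, here is a minimal sketch (not Apple’s actual implementation) of comparing photos queued for cloud upload against a database of known hashes. Everything in it is illustrative: the file names and functions are hypothetical, and a plain SHA-256 digest stands in for Apple’s perceptual NeuralHash, which, unlike a cryptographic hash, is designed to survive resizing and re-encoding.

```python
import hashlib
from pathlib import Path

# Hypothetical database of known-image hashes. In Apple's design these values
# ship inside the operating system in a blinded form, not as readable digests.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def photo_hash(path: Path) -> str:
    """Stand-in for a perceptual hash: here simply SHA-256 of the file bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def matches_in_upload_queue(upload_queue: list[Path]) -> list[Path]:
    """Check only photos headed to the cloud, mirroring the iCloud-only scope."""
    return [p for p in upload_queue if p.exists() and photo_hash(p) in KNOWN_HASHES]

if __name__ == "__main__":
    queue = [Path("IMG_0001.jpg"), Path("IMG_0002.jpg")]  # hypothetical files
    flagged = matches_in_upload_queue(queue)
    print(f"{len(flagged)} photo(s) matched the known-hash database")
```

According to Apple’s technical summary, a match does not immediately reveal anything on its own: results are wrapped in encrypted safety vouchers, and the company can inspect them only after a threshold number of matches is crossed.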
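The Messages flow described in the list can likewise be summarized as a small decision sketch. Field and function names here are invented for illustration; the point is only that the check runs on the device and that parental notification depends on the child’s age and explicit confirmation.

```python
from dataclasses import dataclass

@dataclass
class MessagesPhotoCheck:
    """Illustrative model of the on-device Messages check; nothing leaves the phone."""
    child_age: int
    family_sharing_enabled: bool
    feature_enabled: bool  # must be turned on manually by the parents

    def handle_explicit_photo(self, child_chose_to_view: bool) -> dict:
        # The feature applies only to children in a Family Sharing group with it enabled.
        if not (self.family_sharing_enabled and self.feature_enabled):
            return {"blur": False, "warn": False, "notify_parents": False}
        # Every enrolled child gets the photo blurred plus an on-device warning.
        result = {"blur": True, "warn": True, "notify_parents": False}
        # Parents are notified only for younger children, and only after the
        # child confirms they still want to view the image.
        if self.child_age < 13 and child_chose_to_view:
            result["notify_parents"] = True
        return result

# A 15-year-old viewing a flagged photo: blurred and warned, no parent notification.
print(MessagesPhotoCheck(15, True, True).handle_explicit_photo(child_chose_to_view=True))
```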
Despite Apple’s explanations, privacy advocates and security researchers remain concerned. Security researcher Matthew Green suggested that the system has weaknesses that law enforcement could exploit:
“Somebody proposed the following scenario to me, and I’m curious what the law is.
1. US DoJ approaches NCMEC, asks them to add non-CSAM photos to the hash database.
2. When these photos trigger against Apple users, DoJ sends a preservation order to Apple to obtain customer IDs.”
— Matthew Green (@matthew_d_green), August 9, 2021
Earlier in August, Apple described the tool for scanning user photos for signs of child abuse.
Earlier in June, Apple introduced passwordless authentication using Face ID and Touch ID.
In December 2020, Apple released an update for iOS that allows users to disable data collection within apps. The new rules sparked a wave of outrage from app developers, led by Facebook and Google.