
Study finds pixel-level photo manipulation does not shield against facial-recognition systems
Scientists from Stanford, Oregon State University, and Google found that pixel-level manipulation of photos does not shield against facial-recognition systems.
The researchers tested two data-poisoning tools, Fawkes and LowKey, which subtly alter images at the pixel level. The modifications are imperceptible to the human eye, yet they can confuse facial-recognition systems.
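Fawkes and LowKey compute their perturbations by optimising them against face-recognition feature extractors; the exact procedures are described in the tools' own papers. Purely as an illustration of what a bounded, eye-imperceptible pixel-level change looks like, here is a minimal Python sketch in which random noise stands in for the optimised perturbation:

```python
# Illustrative sketch only: a generic small-budget pixel perturbation,
# NOT the actual Fawkes/LowKey algorithms (those optimise the noise
# against recognisers' feature spaces rather than sampling it randomly).
import numpy as np

def perturb(image: np.ndarray, epsilon: float = 4.0, seed: int = 0) -> np.ndarray:
    """Add bounded per-pixel noise to a uint8 RGB image.

    epsilon is the maximum change per channel (out of 255): small enough
    to be invisible to the eye, but, when optimised rather than random,
    enough to shift the image's position in a recogniser's feature space.
    """
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-epsilon, epsilon, size=image.shape)
    cloaked = np.clip(image.astype(np.float64) + noise, 0, 255)
    return cloaked.astype(np.uint8)

# Toy usage: the cloaked photo differs from the original by at most 4/255
# per channel, far below the threshold of human perception.
photo = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)
cloaked = perturb(photo)
assert np.abs(cloaked.astype(int) - photo.astype(int)).max() <= 4
```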
Both tools are openly available, which the authors identify as their main weakness: developers of facial-recognition systems can use the tools themselves to train models that ignore the 'poisoning'.
“Adaptive learning of a model with access to a black-box [image-modification software] can immediately train a robust model resistant to poisoning,” the scientists said.
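In other words, a defender can treat the public cloaking tool as a data-augmentation step. Below is a minimal sketch of that adaptive-training idea with assumed interfaces: `cloak` stands in for the black-box tool and `model` is any classifier exposing a `fit` method; neither name comes from the paper.

```python
# Sketch of the adaptive defence described above, not the authors' code.
# Assumptions: `cloak` wraps the attacker's publicly available black-box
# tool, and `model` is any classifier exposing .fit(X, y).
from typing import Callable, List
import numpy as np

def adaptive_train(model, images: List[np.ndarray], labels: List[int],
                   cloak: Callable[[np.ndarray], np.ndarray]):
    # Run the public poisoning tool over the clean training images and
    # keep the true labels, so the perturbation carries no usable signal.
    cloaked = [cloak(img) for img in images]
    X = np.stack(images + cloaked)
    y = np.array(labels + labels)
    model.fit(X, y)  # the model learns to treat cloaked and clean alike
    return model
```

The design point is simply that open availability cuts both ways: because anyone can query the tool, its output can be folded into a robust training set before the protected photos are ever seen.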
According to the researchers, both tools proved largely ineffective against facial-recognition software released within a year of the tools' appearance online.
The researchers also fear the development of more sophisticated identification algorithms that inherently ignore such image modifications.
“There is an even simpler defensive strategy: model creators can just wait for more advanced facial-recognition systems that are no longer vulnerable to these specific attacks,” the paper says.
The authors emphasise that data modification to prevent biometric identification not only fails to provide security but also creates a false sense of protection. This could harm users who otherwise would not have posted photographs online.
The researchers say the only way to protect online users’ privacy is to enact legislation restricting the use of facial-recognition systems.
In May 2021, engineers released Fawkes and LowKey, free tools designed to defend against biometric identification algorithms.
In April, DoNotPay launched Photo Ninja, a service that protects images from facial-recognition systems.