Developers Arthur Fortunato and Filipe Reyno have built a program on top of the AI image generator DALL-E 2 that compiles photofits of criminal suspects. Experts warn the technology could exacerbate existing racial and gender biases, Motherboard reports.
The pair developed the Forensic Sketch AI-rtist program during a hackathon in December 2022. According to the developers, the system is meant to cut the time spent producing a suspect’s photofit in a criminal investigation.
The developers have not yet released the product, so the program has no active users.
“We are still trying to test the viability of the project in real-world scenarios. It is necessary to establish contact with police departments and obtain data from them that will allow us to test the technology,” the engineers said.
AI researchers and ethics experts counter that time is not the bottleneck in the traditional process of creating photofits.
“Any forensic sketch of a suspect is already subject to human biases and flawed memory. Neural networks won’t solve such problems, and this program, by its design, could only amplify them,” said Jennifer Lynch, director of litigation at the Electronic Frontier Foundation.
The program offers two ways to enter a description: the first asks the user to specify the suspect’s characteristics one by one, while the second lets the user describe the suspect in a free-text prompt for the image generator. After the user presses the “Create profile” button, DALL-E 2 generates a portrait matching the description.
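For illustration only, here is a minimal sketch of what such a prompt-to-portrait flow could look like, assuming a direct call to OpenAI’s public Images API as it existed at the time. The Forensic Sketch AI-rtist source code has not been released, so the function name, prompt template, and wiring below are assumptions rather than the developers’ actual implementation:

```python
# Hypothetical sketch of the "Create profile" flow; not the actual
# Forensic Sketch AI-rtist code, which has not been published.
import openai  # openai-python v0.x, current when the tool was built

openai.api_key = "YOUR_API_KEY"  # placeholder

def create_profile(description: str) -> str:
    """Send a free-text suspect description to DALL-E 2 and return
    the URL of the generated portrait (prompt template is assumed)."""
    response = openai.Image.create(
        prompt=f"Frontal police-sketch portrait of a suspect: {description}",
        n=1,                # one candidate portrait
        size="1024x1024",   # largest size DALL-E 2 supports
    )
    return response["data"][0]["url"]

# What a button press might trigger:
print(create_profile("male, mid-30s, short dark hair, wearing glasses"))
```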
“Research has shown that people remember faces holistically, not by features. A photofit created from separate characteristics, as this program proposes, could lead to a portrait that differs markedly from the actual suspect,” said Lynch.
Sasha Luccioni, an ethics researcher at Hugging Face, said that if AI-generated photofits are made public, they could reinforce stereotypes and racial biases. They could also hinder investigations by drawing police attention to people who resemble the sketches rather than to the actual criminals, she added.
As a reminder, in July 2022 OpenAI announced measures to reduce bias and improve the safety of the DALL-E 2 image generator.
