The Australian Federal Police and Monash University asked the public to share pictures of children to train artificial intelligence to identify child abuse in images.
Monash University experts and the AFP are calling for people to contribute to a world first ethically-sourced and managed image bank for research to combat child exploitation. https://t.co/VKBbGXLeEV
— AFP (@AusFedPolice) June 3, 2022
Researchers are collecting images of people under 17 in safe contexts. They say photographs should not contain nudity, even in relatively benign scenarios such as a child taking a bath.
Images will be assembled into a dataset to train the AI model to distinguish minors in normal environments from those in exploited, unsafe situations. Researchers say the software could help law enforcement quickly identify materials involving sexual abuse of children among thousands of examined photographs, reducing the need for manual review.
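The project has not published details of its model, but such a classifier would typically be built by fine-tuning a pretrained image network on labelled examples. A minimal sketch in Python/PyTorch is below; the folder layout, the "safe"/"unsafe" labels, and the hyperparameters are assumptions for illustration, not details of the AiLECS system.

```python
# Illustrative sketch only: fine-tune a pretrained backbone as a
# binary image classifier. Paths and labels are hypothetical.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms
from torch.utils.data import DataLoader

# Standard ImageNet-style preprocessing for a pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical layout: data/train/safe/*.jpg, data/train/unsafe/*.jpg
train_set = datasets.ImageFolder("data/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Replace the classification head with a two-class output.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One pass over the training data.
model.train()
for images, labels in train_loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```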
According to AFP Senior Constable Janis Dalins, artificial intelligence could potentially identify victims and detect illegal materials previously unknown to officers.
"In 2021, the Australian Centre to Counter Child Exploitation received more than 33,000 reports of online child exploitation, and each report can contain large volumes of images and videos," he said.
Dalins added that reviewing such materials is laborious, and manual analysis can impose psychological stress on investigators.
Researchers say that crowdsourcing the photographs will help them assemble an unbiased dataset.
"Obtaining images from the internet is problematic because there is no way to know whether the children in those photographs actually consented to having their pictures uploaded or used for research," said AiLECS co-director and Monash University associate professor Campbell Wilson.
The My Pictures Matter crowdsourcing campaign is open to adults who consent to the use of their photographs. Participants are also required to provide an email address.
Project lead and laboratory researcher Nina Lewis stressed that no other information identifying contributors would be collected, and that email addresses are kept in a database separate from the images.
"The images used by researchers cannot reveal any personal information about the people depicted in them," she said.
Participants will receive updates at each stage of the project, and they may request deletion of their images from the dataset if they wish.
Earlier, in November 2021, Australian authorities banned Clearview AI from collecting citizens’ data.
In August 2021, Apple announced plans to roll out a tool in iOS, iPadOS, and macOS that would scan users' photos for signs of child abuse.
The company later postponed the feature indefinitely.
