Researchers at the University of Cambridge say that AI-powered recruitment software reduces the diversity of new hires in the workplace, The Register reports.
Researchers tested a popular hiring tool that uses candidates’ facial photographs to assess personality. The program analyses five key traits:
- extraversion;
- agreeableness;
- openness;
- conscientiousness;
- neuroticism.
They found that changes in facial expressions, lighting and background, as well as clothing choices, affected the tool’s predictions. These features have nothing to do with a candidate’s abilities, so using AI for hiring purposes is misguided, the researchers say.
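The claim is easy to illustrate with a toy sketch. The snippet below is a hypothetical stand-in for an image-based scoring model, not the vendor tool the researchers tested: a fixed linear model over crude pixel statistics (brightness and contrast). Changing only the lighting of the "photo" shifts the trait score, even though nothing about the candidate has changed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for an image-based "personality" model (NOT the
# vendor tool studied): a fixed linear model over crude pixel statistics.
WEIGHTS = np.array([0.8, -0.5])  # arbitrary "learned" weights
BIAS = 0.1

def conscientiousness_score(image: np.ndarray) -> float:
    """Score a grayscale image (values in [0, 1]) on one Big Five trait."""
    features = np.array([image.mean(), image.std()])  # brightness, contrast
    return float(WEIGHTS @ features + BIAS)

# Same "candidate", two lighting conditions.
face = rng.random((64, 64))
brighter = np.clip(face + 0.2, 0.0, 1.0)  # turn up the lights

original = conscientiousness_score(face)
relit = conscientiousness_score(brighter)
print(f"original lighting: {original:.3f}")
print(f"brighter lighting: {relit:.3f}")

# The trait score moves even though the person did not.
assert abs(original - relit) > 0.01
```

Any feature a photo-based model can extract is ultimately a function of pixel values, so lighting, saturation and contrast are baked into its inputs by construction.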
“The fact that changes in lighting, saturation, and contrast affect your personality assessment is evidence of this,” said Kerry Mackereth, a research fellow at the University of Cambridge Centre for Gender Studies.
According to Mackereth, the results are consistent with previous work, which showed that wearing glasses and a scarf during a video interview, or adding a bookshelf in the background, can lower a candidate’s conscientiousness and neuroticism scores.
The researcher believes that these tools are likely trained to seek attributes associated with previously successful candidates. Consequently, the software is more likely to recommend similar people rather than promote diversity.
“As the tools learn from existing datasets, a feedback loop is created between what companies consider the ideal employee and the criteria used by automated recruitment tools,” said Mackereth.
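The feedback loop described above can be sketched in a few lines. This is a hypothetical simulation, not the researchers' model: each candidate is reduced to a single number standing in for a bundle of traits, and the tool simply ranks applicants by similarity to the average past hire, then retrains on its own output.

```python
import random

random.seed(0)

# Hypothetical illustration of the feedback loop: a tool trained on past
# hires keeps recommending candidates who resemble those hires, so the
# hired pool stays homogeneous no matter how diverse the applicants are.
def recommend(candidates, past_hires, k=5):
    """Rank candidates by similarity to the average past hire (toy model)."""
    ideal = sum(past_hires) / len(past_hires)
    return sorted(candidates, key=lambda c: abs(c - ideal))[:k]

past_hires = [0.4, 0.5, 0.6]  # a narrow historical pool
spreads = []
for round_no in range(5):
    applicants = [random.random() for _ in range(100)]  # diverse applicants
    hired = recommend(applicants, past_hires)
    past_hires.extend(hired)  # the tool "retrains" on its own output
    spread = max(hired) - min(hired)
    spreads.append(spread)
    print(f"round {round_no}: spread of hires = {spread:.3f}")

# Each round's hires cluster tightly around the historical "ideal":
# the loop reproduces the past instead of widening the pool.
assert all(s < 0.5 for s in spreads)
```

Even with a fully diverse applicant pool each round, the spread of each cohort of hires stays small, which is the homogenizing dynamic the researchers warn about.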
The researchers argue that such technologies should be more tightly regulated.
“We are concerned that some vendors wrap ‘snake oil’ in shiny packaging and sell it to unsuspecting customers,” said study co-author Eleanor Drage.
She noted that such “unscrupulous” products are virtually unaccountable.
According to McKerret, the European Union’s AI Act classifies hiring tools as “high-risk.” However, the rules do not specify concrete steps to counter bias in AI software.
“We believe that the regulation of AI-based HR tools should play a much more prominent role in the AI policy agenda,” said the researcher.
In September, the European Commission proposed strengthening liability rules for harm caused by AI tools.
In June, the FBI reported a rising number of cases of deepfakes being used in online job interviews.
In May, the White House warned of the risks posed by AI-based hiring tools.
