ForkLog

Study finds artificial intelligence crafts phishing emails more effectively than humans

The language model GPT-3, in combination with other AI services, can be used to create personalized phishing emails. This conclusion was reached by researchers at Singapore’s Government Technology Agency, Wired reports.

The researchers combined the OpenAI GPT-3 platform with other AI-based products focused on personality analysis, building a pipeline that generated, processed, and refined each message before it was sent.

During the experiment, the researchers sent targeted phishing emails—some crafted by the researchers themselves and others by artificial intelligence—to 200 individuals. Each message contained safe links that collected click data.
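The article does not describe the study's tooling, but the mechanics of such "safe links" are straightforward: each recipient receives a unique, unguessable token embedded in a harmless URL, and a server records which tokens are visited. Below is a minimal Python sketch of this measurement side of an authorized experiment; the recipient list, the study.example.org domain, and the CSV logging are illustrative assumptions, not details from the study.

```python
# Minimal sketch of click measurement in an authorized phishing study.
# Each recipient gets a unique token in a benign link; this endpoint
# logs which tokens are visited. All names here are hypothetical.

import csv
import secrets
from datetime import datetime, timezone
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# Hypothetical roster; in practice this would come from the study's participants.
RECIPIENTS = ["alice@example.com", "bob@example.com"]

# Issue one random, unguessable token per recipient.
TOKENS = {secrets.token_urlsafe(16): email for email in RECIPIENTS}

class ClickLogger(BaseHTTPRequestHandler):
    """Records which unique tokens are clicked, then serves a benign page."""

    def do_GET(self):
        # Extract the ?t=<token> query parameter, if present.
        query = parse_qs(urlparse(self.path).query)
        token = query.get("t", [""])[0]
        if token in TOKENS:
            # Append a timestamped click record for the matching recipient.
            with open("clicks.csv", "a", newline="") as f:
                csv.writer(f).writerow(
                    [datetime.now(timezone.utc).isoformat(), TOKENS[token]]
                )
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<p>This link was part of an authorized security study.</p>")

if __name__ == "__main__":
    # Print the per-recipient links that would be embedded in the test emails.
    for token, email in TOKENS.items():
        print(f"{email}: https://study.example.org/c?t={token}")
    HTTPServer(("", 8080), ClickLogger).serve_forever()
```

Because each token maps to exactly one recipient, click rates can be compared per message variant (human-written versus AI-generated) without exposing anyone to an actual malicious payload.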

They found that recipients were significantly more likely to click the links in emails created by the algorithm than in those written by humans.

According to Eugene Lim, a cybersecurity specialist at the Government Technology Agency, the use of AI services lowers the entry barrier for attackers and increases the pool of potential phishing targets.

“AI as a service will cost you a couple of cents and it’s really easy to use — you just input text and get results. You don’t even need to run code,” Lim said.

He argues that, in this way, any phishing email containing links can be personalized for each recipient at minimal cost.

The researchers noted that the experiment was only a first step: the sample size was relatively small and the target pool homogeneous. Moreover, all messages were created by in-house staff rather than external attackers attempting to personalize texts without any prior information about their victims.

However, the findings have prompted researchers to reflect more deeply on the role AI could play in advancing targeted phishing campaigns.

Earlier in August, researchers using a “universal” face fooled most facial recognition systems.

In July, scientists reported the discovery of a method for the covert insertion of malware into neural networks.

In May, researchers from the Maryland Cybersecurity Center identified a vulnerability in neural networks that increases their energy consumption.
