Site iconSite icon ForkLog

I need your clothes, your boots—and your card number: how AI helps fraudsters

I need your clothes, your boots—and your card number: how AI helps fraudsters

Artificial intelligence is reshaping most industries, cybersecurity included. The global market for AI-based defensive tools exceeded $30bn in 2024 and, according to forecasts, will reach $135bn by 2030.

To old standbys such as social engineering, criminals have added disinformation at scale and synthetic media. The Munich Security Report and the World Economic Forum have flagged this as a critical risk for the years ahead. The issue became especially salient in 2024, when about 4bn people around the world took part in elections.

Attackers are quick studies, refining their methods. The rise of large language models is sharpening threats, making scams more sophisticated.

The barrier to entry is falling. Mounting attacks now requires less specialist know-how, and modern algorithms so blur the line between synthetic and real content that even dedicated systems struggle to tell them apart.

ForkLog has analysed the key opportunities and risks of AI in digital security and prepared recommendations to defend against today’s threats.

AI’s advantages for cybersecurity

Organisations are increasingly integrating AI into traditional security toolkits—anti-virus, data-loss prevention, fraud detection, access management and intrusion detection. AI’s ability to analyse vast troves of data opens fresh possibilities:

AI is especially valuable in penetration testing—the targeted probing of software and networks. By developing specialised tools for their own infrastructure, organisations can uncover weaknesses before hackers do.

Deploying AI in security not only raises the level of data protection but also optimises IT spending by preventing potential attacks.

How hackers abuse artificial intelligence

Cybercriminals are highly inventive in adopting new technologies. Here are the main ways AI is used for illicit ends.

Social-engineering schemes

These ploys hinge on psychological manipulation to obtain confidential information or coerce actions that compromise security. The toolkit includes phishing, vishing and business email compromise.

AI helps automate the creation of personalised fraudulent messages. Attackers can scale campaigns with less time and achieve higher success rates.

Password cracking

AI is improving cracking algorithms, making the process faster and more precise. That raises the effectiveness of attacks and encourages criminals to focus on this vector.

A 2023 Security Hero study demonstrated the capabilities of generative AI in password cracking, having ‘fed’ a model 15m account records.

AI-assisted password-cracking speed by complexity. Source: Security Hero.

The result: 51% of passwords were cracked in under a minute. Within an hour 65% fell; within a day, 71%; and within a month, 81%.

The experiment also exposed clear patterns. Numeric-only combinations fell in seconds, while adding letters and mixed case increased cracking time proportionally.

Deepfakes and cyberthreats

This deception relies on AI’s ability to manipulate audio and video to produce convincing imitations. Deepfakes can spread instantly across social networks, sowing panic and confusion.

Such technologies are often combined with other scams. A telling scheme uncovered in October 2024 saw criminals use fake video calls featuring images of attractive women to dupe men. Pushing ‘investments’ in cryptocurrencies, the fraudsters stole more than $46m.

Another case occurred at the Hong Kong branch of a large company. Using deepfake technology, the perpetrators created the likeness of the chief financial officer during a video conference. An employee then transferred $25m to them.

Data ‘poisoning’

This emerging attack vector targets the corruption or compromise of training data. Adversaries deliberately distort information used by AI algorithms, leading to faulty decisions.

In 2024 researchers at the University of Texas identified a method for attacking systems such as Microsoft Copilot. By injecting specially crafted content into indexable documents, attackers can induce the system to generate disinformation with references to seemingly reliable sources. The particular danger is that, even after the malicious data are removed, the distorted information may linger in the system.

Uncovering such attacks takes time, so the damage can be substantial by the time they are detected.

Countering AI-driven threats: basic defensive principles

As the technology advances, concerns about digital security are growing. Regulators worldwide are working to build legal frameworks that maximise AI’s benefits while reducing risks, yet few have produced comprehensive legislation.

Even so, adapting to modern threats does not require a wholesale rewrite of cybersecurity principles. The focus is on strengthening existing safeguards in key areas.

Countering social engineering starts with controlling public information, exercising caution in online communications and checking suspicious links. For passwords, use unique, complex combinations and two-factor authentication.

To blunt deepfakes, verify sensitive information through alternative channels, especially for financial operations. When working with AI systems, use trusted data sources and official datasets to minimise the risk of poisoning.

Consistent adherence to basic digital-security hygiene has been timeless. The rise of AI only heightens the value of preventive measures and rapid incident response.

Exit mobile version