
AI-Driven Scams: Types and Countermeasures
Artificial intelligence significantly streamlines workflows and boosts productivity. However, it also serves as a tool for fraudsters, enabling them to produce convincing fake audio, video, and text messages, including deepfakes, cheaply and at scale, reports TechCrunch.
We explore popular AI-driven scams and ways to combat these malicious actors.
Cloning the Voices of Family and Friends
Speech synthesis has been around for decades, but with the advent of generative AI, fraudsters can create fakes from just a few seconds of audio. Any publicly broadcast voice, such as in a news report or a YouTube video, can be cloned.
Fraudsters can generate convincing fake versions of a potential victim’s friends or relatives. Often, a plea for help is used, the publication notes. For instance, a parent might receive a voice message from an unknown number from a “son” claiming his belongings were stolen while traveling and asking for money.
In January, an unknown individual cloned the voice of U.S. President Joe Biden and made a series of robocalls to voters in New Hampshire, urging them not to vote in the state's primary. The Federal Communications Commission subsequently proposed a $6 million fine for the fake calls.
Another example: in the U.S., a physical education teacher was accused of using a voice deepfake to frame a school principal and get him fired. Independent experts determined the audio was fabricated, noting it had a “consistent tone, unusually clean background, and lacked sequential breathing sounds or pauses.”
How to Combat Voice Cloning?
The technology has advanced to the point where detecting a fake voice is extremely difficult, sometimes even for experts.
Anything originating from an unknown number, email address, or account should be automatically considered suspicious, advises TechCrunch. If someone claims to be a friend or relative, it is wise to contact that person through other communication channels.
Personalized Phishing and Email Spam
AI can generate text far faster than humans and in many languages, including messages tailored to specific individuals. Security firm SlashNext reports that email phishing attacks increased by 856% over the past year, and by 4,151% since ChatGPT launched in 2022.
According to the publication, personalized emails are particularly dangerous. As personal data leaks occur regularly, much of it is publicly accessible. Fraudsters can use information about recent locations, purchases, or habits, and a language model can write thousands of emails in seconds. Journalists provide a possible example of such a message:
“Hi, Doris! I’m from the Etsy promotion team. The item you recently showed interest in is 50% off! And delivery to your address in Bellingham will be free if you use this link.”
Such an email appears plausible, and a neural network can create them in large quantities for different people, the article states.
How to Combat Email Spam?
Avoid clicking on suspicious links and downloading strange files. If there is no absolute certainty about the authenticity and identity of the sender, it is better not to open the email or the links within it.
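One mechanical check that supports this advice is comparing the domain a link actually points to against the domains a legitimate sender would use. A minimal Python sketch (the allowlist entries and example URLs are hypothetical, for illustration only):

```python
from urllib.parse import urlparse

# Hypothetical allowlist of domains the recipient actually does business with
TRUSTED_DOMAINS = {"etsy.com", "paypal.com"}

def is_suspicious(url: str) -> bool:
    """Flag a link whose host is neither a trusted domain nor a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return not any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)
```

A lookalike host such as `etsy-promo.example` fails this check even though the visible link text may say “Etsy”; real mail filters combine such domain checks with sender authentication (SPF, DKIM, DMARC).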
Identity Spoofing Attacks
Using personal data from leaks, fraudsters can take over an account on a platform or exchange. The publication notes that a neural network can construct a fake copy of a user’s identity and feed that personal information to customer support in order to “recover” the account.
Typically, data such as date of birth and phone number are required to restore access. Sometimes a selfie is needed, but even this task is within AI’s capabilities, TechCrunch emphasizes.
How to Combat Identity Spoofing?
Use multi-factor authentication. Any suspicious account activity will then trigger a notification by email or directly to your phone.
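The one-time codes behind most authenticator apps are generated with the standard HOTP/TOTP algorithms (RFC 4226 / RFC 6238): a shared secret is HMAC-hashed with a counter derived from the current time, so the code changes every 30 seconds and cannot be predicted from leaked static data. A minimal stdlib-only sketch:

```python
import hmac
import hashlib
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-SHA1 over the big-endian 8-byte counter (RFC 4226)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: low 4 bits of the last byte pick a 4-byte window
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    # Time-based variant: the counter is the number of 30-second steps
    # since the Unix epoch (RFC 6238)
    t = time.time() if for_time is None else for_time
    return hotp(secret, int(t // step), digits)
```

With the RFC test secret `b"12345678901234567890"`, `hotp(secret, 0)` yields the published vector `755224`, which is a quick way to sanity-check an implementation.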
Deepfakes and Blackmail
AI allows for the superimposition of any person’s face onto another body, creating a realistic photo (or video). This type of fraud is used for blackmail or spreading fakes.
For example, in January an employee of a multinational company in Hong Kong transferred company funds to fraudsters after a video call with “colleagues” who turned out to be deepfakes. The victim had received an invitation, supposedly from the firm’s CFO, to an online video meeting to discuss a confidential deal.
All the “company executives” present on the call were deepfake imitations. They convinced the employee to transfer $25.5 million of the firm’s funds to five bank accounts.
How to Combat Deepfakes and Blackmail?
Instead of complying with the blackmailer’s demands, it is better to contact the police and ask hosting services to remove the material.
In June, Elliptic experts noted an increase in the use of AI in crypto crimes.
In early June, Dogecoin graphic designer DogeDesigner discovered a YouTube livestream featuring a deepfake of Tesla and SpaceX CEO Elon Musk promoting a cryptocurrency scam.