The FBI's Internet Crime Complaint Center (IC3) warned of a rising number of cases involving stolen personal information and deepfakes in hiring for remote IT roles.
The #FBI has noticed an increase in complaints about the use of deepfakes and stolen personally identifiable information to apply for a variety of remote work positions. Check out our Public Service Announcement at https://t.co/DE88T7QxXI to learn more. #ReportTheCompromise pic.twitter.com/oTtVx4K4f6
— FBI (@FBI) June 28, 2022
According to the agency, some applicants use fake photos and video footage during online interviews.
"In these online interviews, the candidate's lip movements do not fully align with the sound of the speaking person. At times, actions such as coughing, sneezing, or other audible cues do not match the visual representation," the agency said.
The FBI urged victims to report this activity via IC3 and include information that could help identify the scammers trying to land a job with forged data.
Earlier in March, researchers discovered over 1,000 deepfake profiles on LinkedIn.
In May, scammers distributed a fake video on Twitter in which Elon Musk “urged people to invest” in an obvious scam.
In June, Google banned the training of deepfake models in its Colab cloud environment.
In the same month, the European Union updated its rules for combating misinformation and online forgeries. More than 30 companies joined the initiative, including Google, Meta, Microsoft, TikTok and Twitter.
