
Fraudsters Exploit Claude for Vibe Hacking
Cybercriminals exploit Claude for large-scale attacks, bypassing security.
Cybercriminals have found ways to leverage the Claude chatbot for large-scale cyberattacks despite its sophisticated security mechanisms, Anthropic revealed in its Threat Intelligence report.
Alex Moix, Ken Lebedev, and Jacob Klein cited several cases in which criminals employed AI. In some attacks, ransom demands exceeded $500,000.
The chatbot was used not only for technical consultation but also to carry out intrusions through so-called vibe hacking. Such attacks require only basic knowledge of programming and encryption.
Vibe hacking is a form of social engineering in which perpetrators use AI agents to manipulate a victim's emotions and trust. Unlike traditional phishing, which relies on fake emails or websites, this attack takes the form of a "friendly" or "convincing" interaction that creates the "right vibe" of communication. The AI can automatically adapt to the interlocutor's tone, style, and mood to win trust and persuade them to take a malicious action.
Anthropic identified a hacker who used vibe hacking and Claude to steal confidential data from at least 17 organizations, including medical institutions, emergency services, government bodies, and religious groups. He demanded ransoms ranging from $75,000 to $500,000 in Bitcoin.
The attacker directed Claude to analyze stolen financial records, calculate appropriate ransom amounts, and draft personalized extortion letters to increase psychological pressure.
"People who are unable to independently implement basic encryption or understand the mechanics of system calls are now successfully creating ransomware capable of evading defenses and resisting analysis," company representatives stated.
The AI startup also discovered that North Korean IT workers used Claude to create convincing fake identities, pass technical programming tests, and even secure remote positions at American Fortune 500 tech companies, as well as to prepare responses for job interviews.
One of the operatives reportedly interviewed for a programmer position at Polygon Labs; other prepared responses claimed experience with OpenSea and Chainlink.
Earlier, Anthropic gave the Claude Opus 4 and 4.1 models the ability to terminate dialogues with users "in rare, extreme cases of systematically malicious or abusive interaction."