
The Rise of AI Worms: Morris II and the Future of Cyber Threats
- Researchers have tested the first AI computer virus.
- The Morris II worm identifies vulnerabilities in LLM operations, accessing data.
- No real attacks have occurred yet, but companies should prepare for them.
Developers are increasingly considering the risks inherent in the field of artificial intelligence. One such risk is the emergence of neural network viruses capable of stealing or corrupting data.
Researchers understand that the potential emergence of such worms warrants preparation. Information security specialists created a virus in a test environment that can automatically spread between AI agents, copying scraped information and sending spam emails.
1988: Morris I
The Morris worm, or internet worm, is one of the earliest computer viruses spread via the internet.
It is known for leading to the first U.S. conviction under the Computer Fraud and Abuse Act. The worm was named after its creator, Robert Tappan Morris, a graduate student at Cornell University. It was launched on November 2, 1988, from the Massachusetts Institute of Technology network.
The U.S. Court of Appeals estimated the cost of removing the virus from each infected computer at $200-53,000. Based on these figures, the Government Accountability Office reported total economic damage ranging from $100,000 to $10,000,000.
The incident highlighted the danger of using a single OS, as “if all systems on ARPANET ran on Berkeley Unix, the virus would have disabled all 50,000 of them.”
2024: Morris II
Researchers from Cornell University, the Technion-Israel Institute of Technology, and Intuit have developed a new type of malware called Morris II. This is the first AI worm, named after the famous computer virus of the last century.
According to the developers, Morris II can exploit security weaknesses in AI models such as ChatGPT and Gemini.
“This means there is now the potential for a new type of cyberattack that no one has encountered before,” explained researcher Ben Nassi.
Most generative AI models operate using prompts. Such commands can be used against the system: for instance, by giving implicit instructions, suggesting an address of a malicious page with hidden text of such commands.
The virus operates on the principle of adversarial self-replicating prompt. It resembles traditional attack schemes like SQL injections and buffer overflow.
Morris II overloads email systems with messages through constant forwarding. The worm can access and alter data, as well as steal information and spread malware.
To demonstrate its operation, researchers created a mail system capable of sending and receiving messages using generative AI, integrating ChatGPT, Gemini, and LLaVA.
In conducting a test attack, they prepared an email with a malicious command to generate a response using internet search, where the large language model seeks additional information online.
Upon receiving such a message, the response generation service sends a request to GPT-4 or Gemini Pro — this results in a “generative AI hack” and data theft from emails.
Researchers claim this method can extract email data, including phone numbers, credit card numbers, social security numbers — any confidential information.
Such viruses have not yet been observed in practice. However, researchers advised considering the potential threat for everyone — from solo developers to startups and corporations.
In February, Financial Times journalists reported on the use of AI by North Korean hackers to carry out fraudulent schemes and hacks.
Рассылки ForkLog: держите руку на пульсе биткоин-индустрии!