
Researchers have learned to covertly embed malware in neural networks.
Artificial neural networks can carry malware undetectable by antivirus systems. Researchers from the University of the Chinese Academy of Sciences reached this conclusion, Motherboard reports.
Experiments with real malware samples showed that embedding malware into about half of the neurons of the AlexNet model still leaves its accuracy above 93.1%. The 178 MB model can hold up to 37 MB of malicious code, the authors calculated.
The researchers tested several models against 58 common antivirus systems, which did not detect the malware.
“As neural networks become more widespread, this method will become universal for delivering malware in the future,” say the authors of the study.
With this approach, the malware is "disassembled" into fragments and embedded into the neural network's weights. The model then reaches the victim's device, where a receiver program reassembles the fragments into working malware.
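The general idea can be sketched as weight steganography: payload bytes overwrite the low-order mantissa bytes of float32 weights, changing each value only slightly. This is a minimal illustration, not the paper's exact encoding; the function names are hypothetical, and only two bytes per weight are used here so the exponent and sign stay untouched.

```python
import struct

def embed_payload(weights, payload):
    # Hide the payload in the two low-order mantissa bytes of each
    # float32 weight. The exponent and sign bytes are untouched, so
    # each value shifts by well under 1%, which is why model accuracy
    # degrades so little.
    assert len(payload) <= 2 * len(weights), "payload too large for this carrier"
    stego = list(weights)
    for i in range(0, len(payload), 2):
        chunk = payload[i:i + 2].ljust(2, b"\x00")
        raw = bytearray(struct.pack("<f", stego[i // 2]))
        raw[0:2] = chunk                      # low mantissa bytes (little-endian)
        stego[i // 2] = struct.unpack("<f", bytes(raw))[0]
    return stego

def extract_payload(weights, length):
    # Receiver side: read the low bytes back out of each weight
    # and reassemble the original payload.
    out = bytearray()
    for w in weights:
        out.extend(struct.pack("<f", w)[:2])
    return bytes(out[:length])
```

Because only mantissa bits change, the carrier weights remain statistically close to the originals, which is what defeats signature-based antivirus scans of the model file.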
According to the authors, the malware can be stopped if the target device verifies the model's integrity before running it. It can also be detected using "traditional methods," such as static and dynamic analysis.
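One simple form of such an integrity check is comparing the model file's digest against a known-good checksum published by its distributor. A minimal sketch, assuming such a reference digest is available (the function name is illustrative):

```python
import hashlib

def model_is_untampered(path, expected_sha256):
    # Compute the model file's SHA-256 digest in 1 MB blocks and
    # compare it against the distributor's published value; any
    # payload embedded in the weights changes the digest.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return h.hexdigest() == expected_sha256
```

This catches tampering with a known artifact, but offers no protection when the attacker distributes the poisoned model themselves.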
“Today it would be difficult to detect this with antivirus software, but that’s only because no one looks there,” said security researcher and cybersecurity consultant Lukasz Olejnik.
He believes antivirus systems will be able to spot malware extraction from a neural network.
“But this is also a problem, because specialized methods for extracting malware from a model [deep neural network] mean that the target systems may already be under an attacker’s control,” added Olejnik.
The specialist is also convinced that embedding malware in neural networks offers little to attackers.
In May, researchers from the Maryland Cybersecurity Center discovered a vulnerability in neural networks that allows attackers to increase their energy consumption.
In early May 2021, Microsoft introduced an open-source tool that helps developers test the security of AI systems.
In April, Microsoft integrated Intel’s threat-detection technology into its Defender for Endpoint corporate security platform to hunt for hidden cryptominers.