Two Sides of the Same Coin: Cybersecurity and Artificial Intelligence

The year 2020 is ending just as it began: hacker attacks, breaches of computer systems, and data leaks are being recorded almost daily.

Cybercrime threatens organizations of all kinds, as well as their clients. Leaked personal data can be used for fraud and other cybercrimes.

Companies and organizations are doing everything possible to fend off cyberattacks. Yet they find it difficult to predict the time, place and scale of potential threats. Even harder is foreseeing the type of attack, whether it is ransomware such as WannaCry, a botnet such as Emotet, or, worse, a previously unknown malware program.

How AI and machine learning bolster security

Cybersecurity experts place high hopes on artificial intelligence (AI) and machine learning (ML). Algorithms can recognise patterns in data usage, model user behaviour, and promptly trigger security systems when suspicious activity appears.
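As a rough illustration of the idea, the sketch below trains scikit-learn’s IsolationForest on “normal” session activity and flags sessions that deviate from the learned pattern. The features, numbers and thresholds are invented for the example and are not drawn from any real product.

```python
# A minimal sketch of ML-based anomaly detection, assuming scikit-learn
# is available. All features and numbers below are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-session features: login hour, MB transferred, failed logins.
normal_sessions = np.column_stack([
    rng.normal(10, 2, 500),   # logins cluster around business hours
    rng.normal(50, 15, 500),  # typical data volume in MB
    rng.poisson(0.2, 500),    # the occasional failed login attempt
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_sessions)

# A 3 a.m. session moving 900 MB after 7 failed logins should stand out.
print(model.predict([[3, 900, 7]]))  # -1 = flagged as anomalous
```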

Large IT-security firms already use AI and ML to some extent to detect and mitigate a range of threats and to fend off hacker attacks.

Threat detection

Traditional threat-detection methods rely on signatures or indicators of compromise. This approach works well against previously identified exploits; however, it cannot identify new malware campaigns.

Signature-based detection identifies up to 90% of threats. Replacing it with artificial intelligence can raise effectiveness to 95%, but at the cost of more false positives. The best solution may be to combine the two methods, which could raise detection effectiveness to 100% and minimise the number of false positives.
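That combined approach could look something like the sketch below: a cheap signature lookup handles known samples, and an ML score with a high cut-off handles the rest. The signature set, scoring function and threshold are assumptions made for illustration, not a real product’s logic.

```python
# Illustrative two-stage detector: signatures first, ML score second.
# KNOWN_SIGNATURES and ml_score are placeholders for this sketch.
import hashlib
from typing import Callable

KNOWN_SIGNATURES = {
    # SHA-256 of the harmless EICAR antivirus test file, used here as a stand-in.
    "275a021bbfb6489e54d471899f7db9d1663fc695ec2fe2a2c4538aabf651fd0f",
}

def is_malicious(data: bytes,
                 ml_score: Callable[[bytes], float],
                 threshold: float = 0.9) -> bool:
    # Stage 1: an exact hash match catches previously identified threats cheaply.
    if hashlib.sha256(data).hexdigest() in KNOWN_SIGNATURES:
        return True
    # Stage 2: a model score covers unseen samples; the high threshold
    # trades a little recall for fewer false positives.
    return ml_score(data) >= threshold
```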

Vulnerability management

In 2020, 17,447 vulnerabilities were recorded, and their number has been rising for the fourth year in a row. Companies and organizations are paying closer attention to the weak points discovered every day. Traditional vulnerability management typically waits for hackers to exploit a flaw before neutralising it.

Where existing vulnerability-detection techniques rely on a database of known flaws, modern AI- and ML-based tools, such as user- and entity-behaviour analytics, can monitor the actions of accounts, endpoints and servers and identify anomalous behaviour, signalling an unknown attack. This protects the organisation even before the vulnerability is confirmed and fixed.
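A drastically simplified version of that behaviour-analytics idea is sketched below: build a statistical baseline per account and flag readings that deviate sharply from it. Real UEBA tools model far more signals; the data here is invented.

```python
# Toy per-account baseline check in the spirit of UEBA. The numbers are made up.
from statistics import mean, stdev

def is_anomalous(history: list[float], observed: float, z: float = 3.0) -> bool:
    """Flag an observation more than z standard deviations from the baseline."""
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(observed - mu) / sigma > z

# An account that usually downloads ~100 MB a day suddenly pulls 5 GB.
daily_mb = [90, 110, 95, 105, 100, 98, 112]
print(is_anomalous(daily_mb, 5000))  # True: the behaviour departs from baseline
```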

Data-centre support

AI can optimise and monitor a range of data-centre operations, such as the reliability of standby power, the condition of cooling filters, energy consumption, equipment temperatures and real-time bandwidth distribution.

The computational power of AI and its capacity for continuous monitoring make it possible to understand what is happening and take timely decisions that improve the security of equipment and infrastructure.
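In its simplest form, such monitoring is a rolling comparison of each new sensor reading against its recent history, as in the hypothetical sketch below; the window size and alert limit are arbitrary assumptions, and the same pattern would apply to power draw or bandwidth telemetry as much as to temperature.

```python
# Hypothetical continuous-monitoring loop for one data-centre sensor.
# Window size and the allowed deviation are illustrative, not recommendations.
from collections import deque

class SensorMonitor:
    def __init__(self, window: int = 60, max_delta: float = 5.0):
        self.readings = deque(maxlen=window)  # last `window` readings
        self.max_delta = max_delta            # allowed deviation from the mean

    def observe(self, value: float) -> bool:
        """Return True if the new reading drifts from the recent average."""
        alert = (
            len(self.readings) == self.readings.maxlen
            and abs(value - sum(self.readings) / len(self.readings)) > self.max_delta
        )
        self.readings.append(value)
        return alert
```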

Moreover, AI can help reduce data-centre maintenance costs by alerting staff to the steps needed to fix faults. In 2016, for instance, Google reported that AI had helped cut the energy used to cool its data centres by 40%.

However, as with other applications, AI has a number of limitations that keep it from becoming the prevailing technology. One of the primary ones is a lack of resources: companies and organizations need to invest substantial time and money in computing power, memory and data to build and maintain AI systems.

Security professionals need to have access to many different datasets about malware, vulnerabilities and anomalies in order to train algorithms to recognise attacks and weak spots.

AI arms race?

According to a Europol report, artificial intelligence is one of the emerging technologies that could make cyberattacks more dangerous and harder to detect than ever before. Moreover, there is a high likelihood that cybercriminals are already using AI in their hacking campaigns.

“At the moment, much of this is theoretical, but one cannot say that it has not happened. It is quite likely that it [AI] has been used, but we simply have not seen or recognised it,” said Philipp Amann, head of strategy at Europol’s European Cybercrime Centre (EC3).

With the help of machine learning, cybercriminals could conceivably create self-learning malware, ransomware or phishing campaigns. For now, hackers lack access to the advanced technology that cybersecurity firms possess. However, according to Amann, they already have access to tools that are freely available to everyone, and the quality of those tools is steadily improving.

And although it remains unclear whether hackers have used machine learning to develop or distribute malware, there is evidence that AI-based tools have been used to commit cybercrimes. In 2019, criminals used a voice-synthesis algorithm to trick the employees of one company into handing over more than $243,000 by impersonating the CEO of the business.

This type of fraud, in which the attacker impersonates the CEO and requests an urgent transfer of funds, has occurred before. But now criminals can use deepfakes to imitate a voice, making the ruse more convincing and helping it bypass internal security protocols.

Moreover, in the era of digital media and social networks, public figures such as businesspeople and politicians are especially exposed to attacks using deepfakes, a method of synthesising realistic photo, audio and video material based on deep-learning technologies. After all, a neural network needs only a few photos or video clips of a well-known person to learn to produce high-quality forgeries.

Voice forgery is not the only way to harm companies and organisations. Cybercriminals can use machine learning to send phishing emails automatically, study patterns of employee behaviour, and craft more convincing messages that lure recipients to a malicious site. In other words, hackers can use AI for attacks just as security professionals use it for defence.

Artificial intelligence, machine learning and neural networks are not a panacea, merely technologies that answer today’s challenges. How they are used depends on who wields them: a company building products to secure computer networks, or a criminal group planning its next attack on a large corporation.

Bogdan Kaminsky
