Study finds AI code generators create security vulnerabilities

A group of Stanford researchers has found that developers who use AI-based code-generation systems are more likely to introduce security vulnerabilities into their code, according to TechCrunch.

The researchers focused primarily on Codex, the model OpenAI introduced in August 2021. They recruited 47 developers of varying skill levels and asked them to solve security-related programming tasks in several languages, including Python, JavaScript, and C.

According to the study, participants who used Codex were more likely than the control group to write incorrect and insecure code, while also expressing greater confidence in their solutions.

Experts say that developers without adequate cybersecurity knowledge should use such tools with caution.

“Those who use them to speed up tasks they are already skilled in should carefully double-check the outputs and their context,” the researchers added.

Megha Srivastava, a co-author of the study, stressed that the results are not a condemnation of Codex or of code-generation systems in general. In her view, such tools are useful for tasks that do not involve high risk.

The researchers proposed several ways to make AI code-generation systems safer, including a mechanism for refining prompts, which they liken to a supervisor who reviews code drafts.
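The paper does not publish an implementation of such a supervisor, but the idea can be sketched as a lightweight reviewer pass over model output before it reaches the user. A minimal, hypothetical Python example; the pattern list and function name are illustrative, not drawn from the study:

```python
import re

# Patterns a reviewer pass might flag in generated code (illustrative only).
RISKY_PATTERNS = {
    r"modes\.ECB\(|\bMODE_ECB\b": "AES in ECB mode",
    r"\bmd5\b|\bsha1\b": "weak hash function",
    r"verify\s*=\s*False": "TLS certificate verification disabled",
}

def review_draft(code: str) -> list[str]:
    """Return warnings for risky constructs found in a generated draft."""
    return [reason for pattern, reason in RISKY_PATTERNS.items()
            if re.search(pattern, code)]

# A draft that uses a weak hash gets flagged before the user accepts it.
print(review_draft("digest = hashlib.md5(data).hexdigest()"))
# -> ['weak hash function']
```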

They also urged the developers of cryptographic libraries to make their default settings secure, since the defaults that AI systems reproduce are not always free of exploitable weaknesses.
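As an illustration of the kind of insecure default the researchers have in mind (this example is ours, not taken from the paper): AES in ECB mode, a pattern that assistants trained on public code can reproduce, leaks plaintext structure, whereas an authenticated mode such as AES-GCM with a fresh nonce does not. A minimal sketch using the Python cryptography package:

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = os.urandom(32)

def encrypt_insecure(plaintext: bytes) -> bytes:
    """Risky pattern an assistant may emit: AES-ECB. Identical plaintext
    blocks yield identical ciphertext blocks; the plaintext length must
    also be a multiple of the 16-byte block size."""
    encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return encryptor.update(plaintext) + encryptor.finalize()

def encrypt_secure(plaintext: bytes) -> bytes:
    """Safer choice: AES-GCM with a fresh 12-byte nonce per message,
    providing both confidentiality and integrity."""
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)
```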

“Our aim is to make a broader statement about the use of code-generation models. Further work is needed to study these problems and to develop methods to address them,” said co-author Neil Perry.

He said that introducing vulnerabilities is not the only drawback of AI code-generation systems, pointing also to potential copyright violations arising from the use of publicly available code to train Codex.

“[For these reasons] we largely express caution about using these tools to replace teaching novice developers reliable coding practices,” added Srivastava.

In May, a developer found that Copilot “leaked” private keys from crypto wallets.

In October, a group of programmers announced the filing of a class-action lawsuit against Microsoft for training AI using their code.

In July 2021, Copilot was suspected of copying copyright-protected fragments of open-source software.
