Scientists urge the global community to regulate deepfakes

American scientists have warned of growing efforts by ‘enemy states’ to create deepfakes, The Register reports.

According to researchers at Northwestern University and the Brookings Institution, a number of tools for creating AI-generated fakes are now readily available.

“The ease with which deepfakes can be developed for specific people and purposes, as well as their rapid spread, … indicate a world in which all states and non-state actors will be able to use deepfakes in security and intelligence operations,” the report authors said.

According to the report, foreign ‘adversaries’ will use such systems to conduct disinformation campaigns and spread fake news.

It is also expected that, as the technology advances, ‘hostile states’ will be able to use AI in military and intelligence operations.

The researchers urged governments worldwide to implement policies regulating the use of AI-generated deepfakes.

“In the long run, we need a global agreement on the use of deepfakes by defense and intelligence agencies,” said one of the study’s co-authors.

However, such a measure does not guarantee full security. Experts say that the development of international rules may be hampered by ‘nation-states with veto power’.

“Even if such an agreement is reached, some countries will likely violate it. Therefore, the agreement should include a sanctions mechanism to deter and punish violators,” the researchers say.

Advancements in deepfake detection technologies are also insufficient, the scientists said. In their view, it will resemble a cat-and-mouse game similar to that observed with malware.

“When cybersecurity firms discover a new type of malware and develop signatures to detect it, malware authors introduce ‘tweaks’ to evade the detector,” the report says.

The scientists are confident that the “detection-evasion” cycle will sooner or later reach a point where detectors can no longer keep up with the volume of deepfakes.

In January, researchers from OpenAI warned about the growing threat of language models being used to spread disinformation.

In December 2022, the Chinese regulator banned the creation and distribution of deepfakes that ‘threaten national security’. The rules took effect on January 10, 2023.
