
OpenAI warns of growing threat of AI-enabled disinformation
OpenAI has warned of the growing threat posed by the use of AI technologies to spread misinformation.
We’re publishing a report, co-authored with @CSETGeorgetown and @stanfordio, analyzing how language models could be misused for disinformation and how to mitigate those risks. https://t.co/nHiVp7GoxI
— OpenAI (@OpenAI) January 11, 2023
The startup conducted a joint study with Georgetown University's Center for Security and Emerging Technology (CSET) and the Stanford Internet Observatory.
Researchers say that wider access to language models will affect all three aspects of influence operations:
- actors;
- behaviours;
- content.
Researchers say language models could reduce the cost of running disinformation campaigns, making them accessible to a broader group of actors. The models could also enable new tactics, such as generating content in real time within chatbots.
Tools based on language models will also help generate more persuasive texts compared with traditional methods.
“This is especially true for those agitators who lack the necessary linguistic skills or cultural knowledge about the target. This could make influence operations less detectable, as generated content would not be as easily recognisable,” the study says.
The researchers note that keeping language models proprietary will not by itself shield against misuse: agitators are likely to gravitate toward open-source systems or state-developed alternatives, they added.
The study underscores critical unknowns related to the use of language models. It remains unclear what opportunities for disinformation may arise in the future, even given current research and investments in the technology.
Moreover, the level of investment in language models remains unknown, as does when user-friendly tools to create and disseminate such content will become widely available.
To prevent misuse of such technologies in the future, the researchers urged the adoption of appropriate safeguards.
“It is crucial to analyse the threat of influence operations using AI and to outline a range of actions before language models are used for large-scale influence operations,” the group said.
Researchers proposed that AI developers create more fact-aware models or impose stricter restrictions on their use.
Platforms and AI providers should coordinate their actions to identify automated content, and institutions should participate in campaigns to raise media literacy.
They also suggest that governments may impose restrictions on data collection and control access to hardware for training algorithms.
Researchers call for widely applying standards of digital provenance to track the source of AI-generated content. Blockchain and digital identifiers could help, they concluded.
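The idea of digital provenance can be illustrated with a short, purely hypothetical sketch that is not drawn from the report: the AI provider hashes the generated text and signs a small metadata record, which a platform can later verify before deciding how to label the content. The key, model name, and record format below are illustrative assumptions; real provenance standards rely on public-key signatures and richer manifests rather than a shared secret.

```python
# Illustrative sketch only: bind generated text to its producer via a content
# hash plus an HMAC signature over a small provenance record.
import hashlib
import hmac
import json
import time

PROVIDER_KEY = b"example-secret-key"  # hypothetical key held by the AI provider


def make_provenance_record(text: str, model_name: str) -> dict:
    """Create a signed record identifying the model that produced `text`."""
    record = {
        "content_sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        "model": model_name,
        "issued_at": int(time.time()),
    }
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    record["signature"] = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_provenance_record(text: str, record: dict) -> bool:
    """Check that the record matches the text and carries a valid signature."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    if unsigned["content_sha256"] != hashlib.sha256(text.encode("utf-8")).hexdigest():
        return False  # text was altered after the record was issued
    payload = json.dumps(unsigned, sort_keys=True).encode("utf-8")
    expected = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])


if __name__ == "__main__":
    rec = make_provenance_record("Example generated paragraph.", "example-model")
    print(verify_provenance_record("Example generated paragraph.", rec))  # True
    print(verify_provenance_record("Edited paragraph.", rec))             # False
```

In this toy setup, any edit to the text breaks the hash and any forged record fails signature verification, which is the basic property a provenance standard would need to make AI-generated content traceable across platforms.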
In December 2022, OpenAI unveiled ChatGPT, a chatbot built on a large language model.
In the same month, a professor at an American university caught a student using the tool to write an essay.
In January 2023, New York City public schools banned ChatGPT on school devices and networks. Similar measures were taken by major universities in Australia.