
Study Reveals Difficulty in Distinguishing AI-Generated Videos from Real Footage
People struggle to tell camera-recorded videos apart from those generated by OpenAI's Sora, Variety reports.
A HarrisX survey conducted in the weeks after the AI model was unveiled found that most respondents misidentified whether the videos shown to them were filmed or AI-generated.
Survey participants said that US authorities should enact a law requiring AI-generated content to be labeled, and called for such regulation to cover all formats, including video, images, text, music, subtitles, and sound.
AI-generated videos also raise broader concerns because of the potential dangers of deepfakes.
“As the complexity of generative AI grows, so does concern over its impact. There is an increasing call for proper labeling and regulation of this technology. The industry must strive harder to create barriers against fraudsters and properly inform audiences about their actions and goals,” said Dritan Nesho, head of research at HarrisX.
After respondents were told which videos the neural network had created, they were asked to describe their emotions. The reactions were:
- curiosity: 28%
- uncertainty: 27%
- anxiety: 18%
- inspiration: 18%
- fear: 2%
Although a commercial version of Sora has yet to launch, the tool has already sparked concern among users. According to the researchers, entertainment industry workers are the most anxious, since the tool could affect the number of available jobs.
In February, OpenAI unveiled Sora, a new generative AI model capable of converting text into video.