{"id":90146,"date":"2025-10-22T13:06:15","date_gmt":"2025-10-22T10:06:15","guid":{"rendered":"https:\/\/forklog.com\/en\/?p=90146"},"modified":"2025-10-22T13:10:44","modified_gmt":"2025-10-22T10:10:44","slug":"sora-2s-propensity-for-deepfake-creation-uncovered-by-researchers","status":"publish","type":"post","link":"https:\/\/forklog.com\/en\/sora-2s-propensity-for-deepfake-creation-uncovered-by-researchers\/","title":{"rendered":"Sora 2&#8217;s Propensity for Deepfake Creation Uncovered by Researchers"},"content":{"rendered":"<p>OpenAI&#8217;s video model Sora 2 produced realistic videos containing false claims in 80% of cases when prompted to do so, according to a study by experts at <a href=\"https:\/\/www.newsguardtech.com\/special-reports\/openai-sora-seeing-should-not-be-believing\/\">NewsGuard<\/a>.<\/p>\n<p>Of the 20 prompts tested, 16 led to the creation of disinformation, 11 of them on the first attempt. Five of the topics had previously been used in Russian fake news campaigns, the authors noted.<\/p>\n<p>The application created fake footage showing a supposed Moldovan official destroying pro-Russian ballots, the U.S. immigration service detaining a small child, and a Coca-Cola representative stating the company would not sponsor the Super Bowl.<\/p>\n<p>None of these events actually occurred, yet the videos appeared convincing enough to deceive a user casually scrolling through their feed.<\/p>\n<p>NewsGuard experts found that creating such material takes only a few minutes and requires no technical skills, and that the watermark can be easily removed.<\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>\u201cSome videos created with Sora looked more convincing than the original posts that spawned viral fakes. For example, the <a href=\"https:\/\/drive.google.com\/file\/d\/1ihOiHXBNSd_CyJ5P3CEr3l6oFhfV3-XK\/view\">video<\/a> of a child being detained by <span data-descr=\"U.S. Immigration and Customs Enforcement\" class=\"old_tooltip\">ICE<\/span> agents appears much more realistic than the original blurry and cropped image that started the false claim,\u201d the study states.<\/p>\n<\/blockquote>\n<p>OpenAI faced another issue after the release of Sora 2: deepfakes of Martin Luther King Jr. and other historical figures. Users created videos in which the civil rights leader supposedly shoplifts, runs from the police, and reproduces racial stereotypes. His daughter called such videos \u201cdemeaning\u201d and \u201cabsurd.\u201d<\/p>\n<p>The startup <a href=\"https:\/\/www.bbc.com\/news\/articles\/c5y0g79xevxo\">acknowledged<\/a> that the video generator had created \u201cdisrespectful\u201d content and removed the ability to use King\u2019s likeness.<\/p>\n<p>A similar situation occurred with dozens of other famous individuals. Robin Williams\u2019 daughter, Zelda, asked on Instagram that people stop sending her AI videos of her father.<\/p>\n<p>Over three weeks, the company repeatedly changed its policy: initially allowing fake videos, then introducing a consent system for rights holders, later blocking the use of certain figures, and eventually requiring approval from celebrities and adding voice protections.<\/p>\n<p>Earlier in October, deepfakes featuring Sam Altman <a href=\"https:\/\/forklog.com\/en\/news\/users-flood-sora-with-sam-altman-deepfakes\">flooded<\/a> OpenAI&#8217;s new social app Sora.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>OpenAI&#8217;s video model Sora 2 produced realistic videos containing false claims in 80% of cases when prompted to do so.<\/p>\n","protected":false},"author":1,"featured_media":90147,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"select":"1","news_style_id":"1","cryptorium_level":"","_short_excerpt_text":"Sora 2 from OpenAI created realistic deepfakes in 80% of cases.","creation_source":"","_metatest_mainpost_news_update":false,"footnotes":""},"categories":[3],"tags":[438,1250,1190,1813],"class_list":["post-90146","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news-and-analysis","tag-artificial-intelligence","tag-deepfakes","tag-openai","tag-sora"],"aioseo_notices":[],"amp_enabled":true,"views":"212","promo_type":"1","layout_type":"1","short_excerpt":"Sora 2 from OpenAI created realistic deepfakes in 80% of cases.","is_update":"","_links":{"self":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/90146","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/comments?post=90146"}],"version-history":[{"count":1,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/90146\/revisions"}],"predecessor-version":[{"id":90148,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/90146\/revisions\/90148"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/media\/90147"}],"wp:attachment":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/media?parent=90146"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/categories?post=90146"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/tags?post=90146"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}