{"id":12708,"date":"2024-04-18T17:00:00","date_gmt":"2024-04-18T14:00:00","guid":{"rendered":"https:\/\/forklog.com\/en\/fakes-deepfakes-and-chatbots-how-to-spot-ai-manipulation-online\/"},"modified":"2024-04-18T17:00:00","modified_gmt":"2024-04-18T14:00:00","slug":"fakes-deepfakes-and-chatbots-how-to-spot-ai-manipulation-online","status":"publish","type":"post","link":"https:\/\/forklog.com\/en\/fakes-deepfakes-and-chatbots-how-to-spot-ai-manipulation-online\/","title":{"rendered":"Fakes, deepfakes and chatbots: how to spot AI manipulation online"},"content":{"rendered":"<p>Lies, deceit and manipulation have been with humanity since antiquity. With the rise of mass media, makers of fakes gained the ability to reach unprecedented audiences.<\/p>\n<p>In the era of the rapidly expanding internet, disinformation has taken new forms. Websites, social networks, image boards and even traditional media have become venues for spreading false and manipulative claims. The risk may become acute in a new technological age\u2014the age of artificial intelligence. Or does truth still have a chance to survive?<\/p>\n<p>Bohdan Kaminsky explains how disinformation spreads, the role AI plays and what to expect next. <\/p>\n<h2 class=\"wp-block-heading\"><strong>A brief history<\/strong><\/h2>\n<p>Disinformation has long been a weapon in the hands of governments and intelligence services. The problem became especially acute during the cold war. The Soviet Union and the United States did not hesitate to manipulate data and interfere with information flows in order to blacken each other in the eyes of their own citizens.<\/p>\n<p>The fall of the Berlin Wall might have eased tensions; it did not. Governments continued to distort the information space to suit their interests, using every tool at their disposal.<\/p>\n<p>The end of the cold war coincided with the rapid spread of the internet, especially in Eastern Europe. 
As the machinery of censorship collapsed, a torrent of information washed over the former socialist bloc, making it ever harder for ordinary citizens to tell truth from falsehood.<\/p>\n<p>The mid-2000s advent of social networks marked a new phase in the evolution of fakes and manipulation. The information space\u2014and its audience\u2014expanded so much that without careful fact-checking it became far harder to separate trustworthy reports from bunk.<\/p>\n<h2 class=\"wp-block-heading\"><strong>New capabilities <\/strong><\/h2>\n<p>Artificial intelligence developed alongside the internet, but its broad adoption in daily life came much later. One reason for the delay was high cost and the need for hefty computing power.<\/p>\n<p>The situation changed dramatically in November 2022, when OpenAI released ChatGPT for public use. Transformer-based language models existed before, but this product, built on GPT-3.5, gave virtually any smartphone owner access to a modern chatbot.<\/p>\n<p>Almost immediately, users began probing its limits. Fake posts, news items, songs, poems and other texts appeared online, spun up with AI. The service can generate vast volumes of prose in seconds on demand.<\/p>\n<p>It did not stop at text. OpenAI also built the DALL-E series of image generators, capable of creating realistic pictures. Although Sam Altman\u2019s tool has not yet been involved in disinformation scandals, rival platforms MidJourney and Stable Diffusion have.<\/p>\n<p>In March 2023 the internet lit up with pictures of Pope Francis supposedly strutting the streets in a Balenciaga puffer. 
The images, it turned out, were generated by AI.<\/p>\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/lh7-eu.googleusercontent.com\/FXH7y9MLaDcE1axOld5upD9a0cELMF6VR8x2k-Uu1eCEonrLbFWva6VVVKHni5c1twLXVDQWC1NFFlZ-H9bU7O2xjnNIFGTbUlGXBuxU1ZTcXyoG4gmTbzgx7eLI-i8ZLVQWtzKES4sij9dgSdHClHk\" alt=\"Fakes, deepfakes and chatbots: how to spot AI manipulation online\"\/><figcaption class=\"wp-element-caption\">Pope Francis in a Balenciaga puffer, generated by AI. Source:<a href=\"https:\/\/www.forbes.com\/sites\/danidiplacido\/2023\/03\/27\/why-did-balenciaga-pope-go-viral\/?sh=7bd27ab74972\" target=\"_blank\" rel=\"noopener\" title=\"\"> Forbes<\/a>.<\/figcaption><\/figure>\n<p>A pontiff in a fashion-forward coat may seem a harmless gag. Enthusiasts went further, conjuring images of former US president Donald Trump in a prison jumpsuit under police escort. This came just as he was expected to appear in court, so the fake photos drew wide notice.<\/p>\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/lh7-eu.googleusercontent.com\/Ioxiutfm6LSuvlBUWKoHDOQvM3wUDw8il2sjsUyqtr8EWCqt2aDasI8uiKcgUQL83sGBRxqVNjOvaitjDMBS1azzfNE3yKGr1kUbFJye1Lj28bCad51sd6VqYQMKO1-IKZ484TZBAmvr8ouV0zkTDCw\" alt=\"Fakes, deepfakes and chatbots: how to spot AI manipulation online\"\/><figcaption class=\"wp-element-caption\">Fake image of Donald Trump\u2019s arrest, generated by AI. Source:<a href=\"https:\/\/petapixel.com\/2023\/03\/22\/fake-images-of-donald-trump-being-arrested-shows-the-power-of-ai\/\" target=\"_blank\" rel=\"noopener\" title=\"\"> PetaPixel<\/a>.<\/figcaption><\/figure>\n<p>Some users took the hyper-realistic frames at face value. The fake was quickly exposed, but the real impact of the disinformation campaign is hard to assess.<\/p>\n<p>Video is different. 
Although today\u2019s generators still struggle to produce realistic clips, deepfake technology has for years allowed one face to be swapped for another, or a person to be made to say words he never uttered.<\/p>\n<p>One of the first high-profile deepfakes to spark heated discussion online was the 2017 programme<a href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3072959.3073640\"> Synthesizing Obama<\/a>. It altered footage of former US president Barack Obama to make him appear to speak words taken from a separate audio track.<\/p>\n<p><iframe loading=\"lazy\" width=\"875\" height=\"492\" src=\"https:\/\/www.youtube.com\/embed\/9Yq67CjDqvw\" title=\"Synthesizing Obama: Learning Lip Sync from Audio\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/p>\n<p>Since then the technology has been used for everything from porn deepfakes to information-psychological operations.<\/p>\n<p>Criminals also forge audio to mimic other people\u2019s voices. In 2021 fraudsters fabricated the speech of a large-company executive in the UAE and used it to steal $35m.<\/p>\n<p>Synthetic voices have been used in disinformation campaigns as well. Early in 2024, in the thick of the US primary season, residents of New Hampshire<a href=\"https:\/\/forklog.com\/en\/news\/us-proposes-ban-on-ai-powered-robocalls\"> received<\/a> a strange call from President Joe Biden. His synthesised voice urged voters not to back him in the upcoming primary.<\/p>\n<p>The robocalls turned out to be a fake created by a political operative for Mr Biden\u2019s intra-party rival to influence the result. 
The United States went on to ban AI robocalls, and New Hampshire\u2019s attorney general opened a criminal investigation into attempted interference in the electoral process.<\/p>\n<blockquote class=\"twitter-tweet\">\n<p lang=\"en\" dir=\"ltr\">In a unanimous decision this week, the agency moved to make AI-generated voices in scam robocalls targeting consumers illegal. This would give State AGs across the country new tools to go after bad actors behind these nefarious calls.<a href=\"https:\/\/t.co\/iFCOGsu7ue\">https:\/\/t.co\/iFCOGsu7ue<\/a><\/p>\n<p>\u2014 The FCC (@FCC) <a href=\"https:\/\/twitter.com\/FCC\/status\/1755976286491611191?ref_src=twsrc%5Etfw\">February 9, 2024<\/a><\/p><\/blockquote>\n<p> <script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<p>What helps today\u2019s mischief-makers, who run AI-driven disinformation campaigns, is the easy availability of advanced tools. Chatbots churn out text; Stable Diffusion and MidJourney fake images; platforms like ElevenLabs create synthetic voices from short samples\u2014all without deep technical skills.<\/p>\n<h2 class=\"wp-block-heading\"><strong>How the industry is fighting AI fakes<\/strong><\/h2>\n<p>The accessibility of modern AI tools has put the industry in a bind. Developers are criticised for shipping services that let almost anyone generate convincing content with minimal effort. Big players are therefore rolling out restrictions to deter manipulation.<\/p>\n<p>OpenAI and Microsoft, the startup\u2019s main investor, have installed filters that block generating content on certain topics. For example, ChatGPT will refuse to produce politically slanted or historically inaccurate material if the system suspects manipulative intent. 
DALL-E, for its part, will not generate images of public figures.<\/p>\n<p>Those filters\u2014perhaps others too\u2014have made their way into Microsoft products: Bing, Copilot and Designer, all built on OpenAI models.<\/p>\n<p>Google is also taking steps to combat manipulative and misleading content. At the end of 2022 the company<a href=\"https:\/\/mastercaweb.unistra.fr\/en\/actualites\/web\/how-and-why-google-plans-to-fight-against-ai-generated-content\/\"> issued<\/a> guidance making clear that AI-generated text is treated as spam. The search engine and its algorithms prioritise high-quality text written by humans.<\/p>\n<p>In 2023 the company also announced a tool to label AI images in search. How it will work is unclear, since many such images carry no watermarks or metadata to identify them. And if generators do add marks, they are easy to strip.<\/p>\n<p>YouTube, owned by Google, has likewise declared war on AI content. In 2024 the company<a href=\"https:\/\/techcrunch.com\/2024\/03\/18\/youtube-requires-creatorsdisclose-realistic-content-made-ai\/\" target=\"_blank\" rel=\"noopener\" title=\"\"> required<\/a> creators to disclose if their work includes overt manipulation by artificial intelligence. Otherwise, the content faces removal. Again, it is unclear how YouTube plans to identify AI-laced videos if creators ignore the rule.<\/p>\n<p>More broadly, tech giants set up the Coalition for Content Provenance and Authenticity (C2PA) back in 2021. Its aim is to add labels to AI content to help identify such photos and videos.<\/p>\n<p>Meta has its own approach to AI content. 
After unveiling its Imagine generator, the company<a href=\"https:\/\/about.fb.com\/news\/2024\/02\/labeling-ai-generated-images-on-facebook-instagram-and-threads\/\" target=\"_blank\" rel=\"noopener\" title=\"\"> introduced<\/a> a labelling system that will watermark all of that tool\u2019s outputs. Later Meta<a href=\"https:\/\/about.fb.com\/news\/2024\/04\/metas-approach-to-labeling-ai-generated-content-and-manipulated-media\/\" target=\"_blank\" rel=\"noopener\" title=\"\"> said<\/a> it would add the label to AI images created by other models, too.<\/p>\n<p>Much of the responsibility will rest on publishers, but the company did not rule out applying labels itself\u2014labelling, rather than removing, content. Meta promised not to overuse its power to take down content that in one way or another violates its rules on AI material.<\/p>\n<p>ElevenLabs, mentioned earlier and used to create the Biden deepfake, also<a href=\"https:\/\/help.elevenlabs.io\/hc\/en-us\/articles\/22584327690897-What-are-No-Go-Voices\" target=\"_blank\" rel=\"noopener\" title=\"\"> banned<\/a> cloning the voices of public figures. The platform\u2019s rules already forbid using third-party data without consent, but the US president\u2019s case showed the firm lacks adequate control.<\/p>\n<p>MidJourney, the tool behind the Balenciaga Pope and Trump-in-custody memes, <a href=\"https:\/\/www.bloomberg.com\/news\/newsletters\/2024-02-08\/midjourney-weighs-banning-trump-biden-images-before-us-election?cmpid%3D=socialflow-twitter-tech&#038;utm_content=tech&#038;utm_campaign=socialflow-organic&#038;utm_source=twitter&#038;utm_medium=social&#038;sref=10lNAhZ9\">banned<\/a> prompts about politicians to stop users generating plausible fakes. 
The company\u2019s head, David Holz, noted that he himself enjoys creating parodies of the former US president, but warned that such images could be dangerous as vehicles for disinformation.<\/p>\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/lh7-eu.googleusercontent.com\/VVldWyKS-IXqEDUtoYt2HUdT-FR5XqV-u0tUsSJD-aUB5HRemd4xNOlmAfiNOfz4lAeUREKN4HcLV1b_-bscUOABzqRhthK7aZJrnfwkVCjdvHTtqUcRrUNXG_Gfl5TTQZKC4IEDcYP1rNeNDodBq4k\" alt=\"Fakes, deepfakes and chatbots: how to spot AI manipulation online\"\/><figcaption class=\"wp-element-caption\">Image generated by MidJourney for the prompt &#8220;fake news&#8221;. Source: ForkLog.<\/figcaption><\/figure>\n<p>Even so, as AI algorithms improve it will become harder for developers to curb manipulative content. Some tools, such as Stable Diffusion and Meta Llama 2, can already run locally on users\u2019 devices and generate material without any restrictions.<\/p>\n<p>Identifiers for AI content are therefore needed, but they are no panacea. OpenAI\u2019s detector for AI-written text, for instance, failed at its task and was shut down. It more often labelled human writing as AI-generated than it correctly spotted AI text in the first place.<\/p>\n<h2 class=\"wp-block-heading\"><strong>AI is itself a source of disinformation<\/strong><\/h2>\n<p>You do not need a malicious actor to fall for manipulation and disinformation. Because of so-called <a href=\"https:\/\/ru.wikipedia.org\/wiki\/%D0%93%D0%B0%D0%BB%D0%BB%D1%8E%D1%86%D0%B8%D0%BD%D0%B0%D1%86%D0%B8%D1%8F_(%D0%B8%D1%81%D0%BA%D1%83%D1%81%D1%82%D0%B2%D0%B5%D0%BD%D0%BD%D1%8B%D0%B9_%D0%B8%D0%BD%D1%82%D0%B5%D0%BB%D0%BB%D0%B5%D0%BA%D1%82)\" target=\"_blank\" rel=\"noopener\" title=\"\">hallucinations<\/a>, AI can mislead users on its own.<\/p>\n<p>In short, \u201clanguage-model hallucinations\u201d are the tendency of algorithms to \u201cinvent\u201d information that does not in fact exist. Large language models do not think like humans. 
They are advanced predictive engines that look for patterns in text and try to generate the most plausible result.<\/p>\n<p>Chatbots such as ChatGPT, Copilot, Gemini and Grok are prone to hallucinations. They make up statistics, facts and historical events.<\/p>\n<p>A recent example is a news article<a href=\"https:\/\/gizmodo.com\/grok-ai-creates-bizarre-fake-news-about-the-solar-eclip-1851396186\" target=\"_blank\" rel=\"noopener\" title=\"\"> generated<\/a> by the Grok chatbot from posts on X. It claims that the public is \u201calarmed by the Sun\u2019s strange behaviour\u201d and that \u201cscientists cannot explain what is happening.\u201d The reference was to the solar eclipse on 8 April 2024. Most likely, the chatbot took jokes on X\u2014\u201cwhere did the Sun go?\u201d\u2014at face value; scientists, of course, understand eclipses perfectly well.<\/p>\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/lh7-eu.googleusercontent.com\/woZsaIJbDAVAd3oIAhHIoBIiPWoQkoiX5NBNqUS3BbPC3tCP6TGyXS6DxQRwq7XZBRdZtpRsgVLuzFjn_uts6s9B2arGSWujH9Qts_BhNmI_gaVvCSeE-5fC3x7gCiPbxae1X5xHGTBGxoTNEHI9_Vk\" alt=\"Fakes, deepfakes and chatbots: how to spot AI manipulation online\"\/><figcaption class=\"wp-element-caption\">The Grok chatbot generated a fake story about the 8 April solar eclipse. Source:<a href=\"https:\/\/gizmodo.com\/grok-ai-creates-bizarre-fake-news-about-the-solar-eclip-1851396186\" target=\"_blank\" rel=\"noopener\" title=\"\"> Gizmodo<\/a>.<\/figcaption><\/figure>\n<p>It is important to understand that a chatbot is not a search engine, and you should not take it at its word. If you use such tools for research, double-checking model outputs will not go amiss. 
<\/p>\n<p>NVIDIA chief Jensen Huang offered a<a href=\"https:\/\/techcrunch.com\/2024\/03\/19\/agi-and-hallucinations\/?guccounter=1\" target=\"_blank\" rel=\"noopener\" title=\"\"> recommendation<\/a> to help avoid AI hallucinations. He advised applying a journalist\u2019s approach to verification:<\/p>\n<ul class=\"wp-block-list\">\n<li>examine the source and its context;<\/li>\n<li>compare the source\u2019s claims with established truths;<\/li>\n<li>if any part of the answer is untrue, discard the source and move on.<\/li>\n<\/ul>\n<h2 class=\"wp-block-heading\"><strong>How to avoid falling for AI manipulation<\/strong><\/h2>\n<p>If companies and detectors cannot yet reliably identify AI forgeries, does that mean all is lost? Hardly. As before, critical thinking remains the most reliable way to resist manipulation by AI-generated content.<\/p>\n<p>Most fakes are debunked thanks to \u201chuman intelligence\u201d. It is vital to treat any information on social media with scepticism. Saw the pope in a Balenciaga jacket? Check the Vatican\u2019s official pages online.<\/p>\n<p>Fact-checking has long been one of the main tools for debunking disinformation. It applies to AI manipulation, too. Fact-checking teams at major media outlets are increasingly<a href=\"https:\/\/www.bbc.com\/news\/world-us-canada-68440150\" target=\"_blank\" rel=\"noopener\" title=\"\"> engaged in<\/a> verifying content for AI tampering.<\/p>\n<p>A range of tools can also help check for AI involvement. For text, there are free tools such as<a href=\"https:\/\/gptzero.me\/\" target=\"_blank\" rel=\"noopener\" title=\"\"> GPTZero<\/a> and<a href=\"https:\/\/detecting-ai.com\/\" target=\"_blank\" rel=\"noopener\" title=\"\"> Detecting-AI<\/a>; for images, try<a href=\"https:\/\/huggingface.co\/spaces\/umm-maybe\/AI-image-detector\" target=\"_blank\" rel=\"noopener\" title=\"\"> Maybe\u2019s AI Art Detector<\/a> on Hugging Face. 
They are imperfect, but can offer a sense of whether AI fingerprints are present.<\/p>\n<p>There is no shortage of manipulation and disinformation online even without artificial intelligence. Public figures are often credited with quotes they never uttered; photoshopped documents circulate, and so on.<\/p>\n<p>AI will undoubtedly play a large role in increasing the volume of fakes online\u2014if it is not doing so already. Yet the credulous are likely to fall for fakes anyway, as they did before the technology\u2019s boom.<\/p>\n<p>At least fact-checkers now have plenty more work. <\/p>\n","protected":false},"excerpt":{"rendered":"<p>Lies, deceit and manipulation have been with humanity since antiquity. With the rise of mass media, makers of fakes gained the ability to reach unprecedented audiences. In the era of the rapidly expanding internet, disinformation has taken new forms. Websites, social networks, image boards and even traditional media have become venues for spreading false and 
[&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":12707,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"select":"","news_style_id":"","cryptorium_level":"","_short_excerpt_text":"","creation_source":"","_metatest_mainpost_news_update":false,"footnotes":""},"categories":[1144],"tags":[438,1250,1209,286],"class_list":["post-12708","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-longreads","tag-artificial-intelligence","tag-deepfakes","tag-fake-news","tag-society"],"aioseo_notices":[],"amp_enabled":true,"views":"41","promo_type":"","layout_type":"","short_excerpt":"","is_update":"","_links":{"self":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/12708","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/comments?post=12708"}],"version-history":[{"count":0,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/12708\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/media\/12707"}],"wp:attachment":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/media?parent=12708"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/categories?post=12708"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/tags?post=12708"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}