{"id":94418,"date":"2026-02-18T17:03:17","date_gmt":"2026-02-18T14:03:17","guid":{"rendered":"https:\/\/forklog.com\/en\/?p=94418"},"modified":"2026-02-18T17:07:47","modified_gmt":"2026-02-18T14:07:47","slug":"the-big-bad-ai-that-isnt","status":"publish","type":"post","link":"https:\/\/forklog.com\/en\/the-big-bad-ai-that-isnt\/","title":{"rendered":"The big, bad AI that isn\u2019t"},"content":{"rendered":"<p>How often do you hear end-of-the-world prophecies starring an all-powerful artificial intelligence? At least once a week some businessperson or celebrity frets about a grim future under its yoke.<\/p>\n<p>A famous face plus a gloomy forecast is a ready-made headline. In the past, such stories often reflected real, sometimes alarming, technological progress; now they increasingly resemble hollow marketing\u2014or simple confusion about what is actually happening.<\/p>\n<p>So why are we still being spooked by bad retellings of Terminator when today\u2019s chatbots brazenly lie and cannot remember five lines of dialogue? And, crucially, who stands to gain?<\/p>\n<h2 class=\"wp-block-heading\">Unimpressed<\/h2>\n<p>First, the obvious: AI technologies have leapt forward over the past decade. Modern systems can write coherent text, spot patterns in large datasets and generate visual content. Not long ago, machines could not substitute for so much human work.<\/p>\n<p>The prospects are unsettling. Yet for now the mass-market story has stalled at chatter about so-called artificial general intelligence and the release of near-identical language models (sometimes the new ones are even <a href=\"https:\/\/forklog.com\/en\/news\/gpt-5-breakthrough-flop-or-mediocrity\">worse than their predecessors<\/a>).<\/p>\n<p>What we have, in the end, is a helper tool trained to handle simple text\u2014and sometimes image\u2014tasks. People use it for vibe-coding or drafting social-media posts. 
The output often needs checking: for more complex work, neural nets fall short.<\/p>\n<p>Ask your favourite chatbot to write a PhD dissertation on \u201cX\u201d and you will get a barely coherent text with citations from the first or second page of a search engine. To improve results, people recommend elaborate prompts, but that is merely finer tuning in \u201cmachine-speak\u201d and extra training.<\/p>\n<p>With prolonged use, most users grasp the limits of today\u2019s models. Progress has hit the walls of training data and compute, while the \u201cintelligence\u201d factor has faded into the background.<\/p>\n<h2 class=\"wp-block-heading\">Brainless intelligence<\/h2>\n<p>For context, consider how AI works. In brief, the large language models behind classic chatbots operate like this:<\/p>\n<ol class=\"wp-block-list\">\n<li>Input text is split into tokens (parts of words, symbols).<\/li>\n<li>Each token is assigned a numerical vector.<\/li>\n<li>The model analyses relationships among tokens and identifies which words matter most to understanding the context.<\/li>\n<li>On that basis the <span data-descr=\"large language model\" class=\"old_tooltip\">LLM<\/span> \u201cpredicts\u201d each next token, forming a reply.<\/li>\n<\/ol>\n<p>The model does not \u201cpredict\u201d from thin air. It is pre-trained on a huge corpus, typically scraped from public internet sources. That is where the neural net gets all its \u201cintelligence\u201d.<\/p>\n<p>Language models do not \u201cunderstand\u201d text in the human sense; they compute statistical regularities. All leading chatbots today use the same basic architecture\u2014aptly named \u201cTransformer\u201d\u2014which works on this principle.<\/p>\n<p>Crude though the analogy is, you can think of an LLM as a very powerful calculator atop a large database. 
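The four steps above can be caricatured in a few lines of Python. The sketch below is a deliberately toy stand-in: a bigram counter that "predicts" the next word purely from frequencies in its training text. It has none of the subword tokenisation, embedding vectors or attention a real model uses (the corpus and word choices are illustrative only), but the core loop of statistical next-token prediction is the same.

```python
from collections import Counter, defaultdict

# Toy stand-in for an LLM: no understanding, only statistics about
# which token tends to follow which. Real models operate on subword
# tokens and learned vectors; here whole words and raw counts play
# that role. The tiny corpus below is purely illustrative.
corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training": count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen in training, if any."""
    if word not in follows:
        return None  # never seen as a preceding word: the "model" knows nothing
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat": it followed "the" twice, other words once
```

Scale the corpus to trillions of tokens and swap the counts for a Transformer and you have, in essence, the prediction loop described above: more data and compute sharpen the statistics, but nothing in the loop itself adds goals or understanding.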
It is a strong, useful tool that simplifies many aspects of life, but it is premature to ascribe full-fledged intelligence to it.<\/p>\n<p>Modern chatbots look more like the next iteration of search (hello, Gemini in Google) than a pocket, all-knowing assistant.<\/p>\n<p>Reliability remains a concern, too. After glancing at statistics on <a href=\"https:\/\/forklog.com\/en\/news\/hallucinations-remain-ais-central-problem\">hallucination and fibbing<\/a> by neural nets, one is tempted to go back to plain old googling.<\/p>\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" src=\"https:\/\/forklog.com\/wp-content\/uploads\/img-82c7e331ffbe8ea9-6638612098704920.webp\" alt=\"image\" class=\"wp-image-275363\"\/><figcaption class=\"wp-element-caption\">Comparison of answer accuracy between GPT-5 and o4-mini. Source: OpenAI.<\/figcaption><\/figure>\n<h2 class=\"wp-block-heading\">Boo\u2014scared yet?<\/h2>\n<p>The doomsayers\u2019 main thesis is that \u201cAI is getting smarter exponentially\u201d, so once it surpasses human intelligence, humanity is finished.<\/p>\n<p>Modern AI certainly outpaces us at processing and transforming data. A neural net can, for instance, retell Wikipedia in some detail. That is roughly where its \u201cknowledge\u201d ends. More precisely, the model cannot apply it to \u201cpersonal goals\u201d, because it has none\u2014and was not built to.<\/p>\n<p>It is also clear that artificial intelligence does not understand the world around us. The laws of physics are a closed book to it.<\/p>\n<p>Progress in language models has largely meant widening the scope of prediction (guessing tokens). 
But AI is fast approaching the limits of text-only training, and the chorus calling for \u201cspatial\u201d intelligence is getting louder.<\/p>\n<p>Even if the technology\u2019s weak spots can be identified\u2014and work on them is under way\u2014harder questions remain.<\/p>\n<p>Many aspects of the brain\u2019s workings remain a mystery even to us. Recreating such a complex structure in digital form is another matter entirely.<\/p>\n<p>Another near-insurmountable obstacle for AI is creativity\u2014the ability to originate something new. LLMs are, technically, unable to step beyond their architectural constraints, because they operate by reprocessing existing data.<\/p>\n<p>Thus AI\u2019s fate depends directly on the information we put into it\u2014and, for now, training material is oriented entirely toward human benefit.<\/p>\n<p>In fairness, consider Elon Musk and Grok. At one point users <a href=\"https:\/\/forklog.com\/en\/news\/grok-ranks-musk-above-celebrities-in-looks-humour-and-athleticism\">noticed<\/a> the chatbot\u2019s bias and a tendency to overrate the billionaire\u2019s abilities. That is a worrying ethical signal, but a potential \u201cneuro-Elon\u201d is unlikely to harm humanity physically.<\/p>\n<p>AI applications have one aim: to obey the user\u2019s request. A chatbot has no will or desires, and that paradigm is unlikely to change any time soon.<\/p>\n<h2 class=\"wp-block-heading\">The anatomy of fear<\/h2>\n<p>So why are we still being scared by an AI that turns out not to be very \u201cintelligent\u201d? The main answers are obvious.<\/p>\n<p>Setting aside ignorance of the technology, the simplest reason is greed\u2014for money or for fame.<\/p>\n<p>Consider one of the apocalypse\u2019s \u201cprophets\u201d, Eliezer Yudkowsky. 
The AI researcher and co-author of If Anyone Builds It, Everyone Dies has, since the 2000s, warned of a superintelligent AI supposedly alien to human values.<\/p>\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"614\" height=\"1024\" src=\"https:\/\/forklog.com\/wp-content\/uploads\/img-8a8c11c0159b7fcd-6638612537341963-614x1024.png\" alt=\"image\" class=\"wp-image-275365\" srcset=\"https:\/\/forklog.com\/wp-content\/uploads\/img-8a8c11c0159b7fcd-6638612537341963-614x1024.png 614w, https:\/\/forklog.com\/wp-content\/uploads\/img-8a8c11c0159b7fcd-6638612537341963-180x300.png 180w, https:\/\/forklog.com\/wp-content\/uploads\/img-8a8c11c0159b7fcd-6638612537341963-768x1280.png 768w, https:\/\/forklog.com\/wp-content\/uploads\/img-8a8c11c0159b7fcd-6638612537341963-922x1536.png 922w, https:\/\/forklog.com\/wp-content\/uploads\/img-8a8c11c0159b7fcd-6638612537341963.png 960w\" sizes=\"auto, (max-width: 614px) 100vw, 614px\" \/><figcaption class=\"wp-element-caption\">Book cover. Source: <a href=\"https:\/\/instaread.co\/insights\/science-philosophy\/if-anyone-builds-it-everyone-dies-book\/l62n9mgiaf\">Instaread<\/a>.<\/figcaption><\/figure>\n<p>\u201cSuperintelligence\u201d has yet to appear, a point Yudkowsky often concedes. That does not stop him from touring podcasts with sweeping claims and selling books.<\/p>\n<p>The computer scientist and \u201cgodfather of AI\u201d Geoffrey Hinton has also voiced near-apocalyptic concerns. 
He <a href=\"https:\/\/www.jpost.com\/science\/science-around-the-world\/article-835354\">estimated<\/a> a 10\u201320% chance that the technology will cause human extinction in the next 30 years.<\/p>\n<p>In Hinton\u2019s view, as capabilities grow, a strategy of \u201ckeeping artificial intelligence under control\u201d may stop working, and agentic systems will strive for survival and an expansion of control.<\/p>\n<p>It remains unclear who, and for what purpose, would grant neural nets a \u201cwill to live\u201d. Hinton continues to work on training neural networks and in 2024 received the Nobel Prize in Physics for achievements in this area; in early 2026 he became the second scientist in history, after the computer scientist Yoshua Bengio, to reach 1m citations.<\/p>\n<p>Surprisingly, the <a href=\"https:\/\/forklog.com\/en\/news\/nobel-laureate-predicts-ai-induced-unemployment-surge-by-2026\">forecasts<\/a> of Andrew Ng, co-founder of Google Brain, sound more down to earth. He has called AI an \u201cextremely limited\u201d technology and is confident that algorithms will not replace people any time soon.<\/p>\n<p>Every field has sharp-tongued forecasters. Their presence in AI can be excused by the public\u2019s affection for science fiction. Who doesn\u2019t like to tickle their nerves with Philip K. Dick or Robert Sheckley\u2013style tales, now set in the present?<\/p>\n<p>More troubling are statements by big corporations that, almost in passing, warn of threats to jobs and predict rapid AI advances. The second claim often explains a need to cut costs; the first invites more conspiratorial readings.<\/p>\n<p>One of the world\u2019s largest companies\u2014Amazon\u2014has, over the past six months, <a href=\"https:\/\/www.reuters.com\/legal\/litigation\/amazon-cuts-16000-jobs-globally-broader-restructuring-2026-01-28\/\">laid off<\/a> more than 30,000 employees. 
Management cites plans for optimisation and the impact of automation, including AI.<\/p>\n<p>Work on warehouse robots continues apace. But cynics <a href=\"https:\/\/www.cnbc.com\/2025\/12\/21\/ai-job-cuts-amazon-microsoft-and-more-cite-ai-for-2025-layoffs.html\">argue<\/a> the problem is more prosaic: the mass layoffs reflect over-hiring during the pandemic.<\/p>\n<p>Amazon is hardly alone. AI firms in Silicon Valley <a href=\"https:\/\/www.businessinsider.com\/ai-supposed-kill-jobs-instead-hiring-office-boom-2025-10\">continue to expand<\/a> headcount and rent new office space.<\/p>\n<p>Yet back in 2023 many of these same companies <a href=\"https:\/\/forklog.com\/en\/news\/experts-compare-future-ai-to-pandemics-and-nuclear-wars\">signed<\/a> a Center for AI Safety statement asserting that AI poses \u201cexistential risks\u201d on a par with pandemics and nuclear war.<\/p>\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" src=\"https:\/\/forklog.com\/wp-content\/uploads\/img-6eade4f81c71de3e-6638612235644783.webp\" alt=\"image\" class=\"wp-image-275364\"\/><figcaption class=\"wp-element-caption\">Statement by the Center for AI Safety. Source: <a href=\"http:\/\/aistatement.com\">aistatement.com<\/a>.<\/figcaption><\/figure>\n<p>Over time the statement was forgotten, work carried on, and no visible threat emerged.<\/p>\n<p>From a corporate vantage point, in an era of <a href=\"https:\/\/forklog.com\/en\/news\/alphabets-chief-sees-irrationality-in-the-ai-market\">talk of an overinflated AI bubble<\/a>, appeals to technological change are a more convenient cover story than admitting structural failures in people management. But such claims create a false picture and distract from real problems\u2014disinformation and deepfakes.<\/p>\n<p>AI does not steal jobs; it changes how work is done, sometimes simplifying it. 
A narrow Harvard study <a href=\"https:\/\/forklog.com\/en\/news\/study-reveals-ai-increases-employee-workload\">shows<\/a> that AI can, on the contrary, complicate and slow internal processes.<\/p>\n<p>The technology will seep into every area of life: education, science, commerce, politics. How it shows up there will be determined only by people. For now, neural nets do not have a vote.<\/p>\n<h2 class=\"wp-block-heading\">Out of our reach<\/h2>\n<p>Everything above concerns public-facing AI such as chatbots and generative image tools. Of course, more serious work exists behind closed doors.<\/p>\n<p>Among the relatively simple cases are domain LLMs in medicine or archaeology. The former help synthesise new proteins; the latter decipher ancient documents that defy traditional analysis.<\/p>\n<p>But you have to track results through hard-to-access internal reports or in niche specialist media, so public awareness is close to zero. It is quite possible the biggest breakthroughs are happening here.<\/p>\n<p>A \u201cdoomsday AI machine\u201d is unlikely to appear even in closed labs. Such models are highly specialised and do only what they are required to do.<\/p>\n<p>Fears of AI running amok merely mirror our own anxieties\u2014whether about losing jobs or thornier ethical questions. So long as we humans set the direction and goals, AI remains a tool, not a separate subject with a will of its own.<\/p>\n<p>Discussing potential risks is right. Spinning apocalyptic theories is human nature. But these things demand scepticism\u2014and even a touch of irony. 
If we have an \u201coff\u201d button to hand, no digital supermind threatens our world.<\/p>\n<p><em>Vasily Smirnov<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Why are we still being spooked by bad retellings of Terminator\u2014and, above all, who stands to gain?<\/p>\n","protected":false},"author":1,"featured_media":94419,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"select":"1","news_style_id":"1","cryptorium_level":"","_short_excerpt_text":"Why apocalyptic AI tales persist\u2014and who stands to gain.","creation_source":"","_metatest_mainpost_news_update":false,"footnotes":""},"categories":[1144],"tags":[438,1138],"class_list":["post-94418","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-longreads","tag-artificial-intelligence","tag-opinions"],"aioseo_notices":[],"amp_enabled":true,"views":"127","promo_type":"1","layout_type":"1","short_excerpt":"Why apocalyptic AI tales persist\u2014and who stands to 
gain.","is_update":"","_links":{"self":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/94418","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/comments?post=94418"}],"version-history":[{"count":1,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/94418\/revisions"}],"predecessor-version":[{"id":94420,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/94418\/revisions\/94420"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/media\/94419"}],"wp:attachment":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/media?parent=94418"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/categories?post=94418"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/tags?post=94418"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}