How often do you hear end-of-the-world prophecies starring an all-powerful artificial intelligence? At least once a week some businessperson or celebrity frets about a grim future under its yoke.
A famous face plus a gloomy forecast is a ready-made headline. In the past, such stories often reflected real, sometimes alarming, technological progress; now they increasingly resemble hollow marketing—or simple confusion about what is actually happening.
So why are we still being spooked by bad retellings of Terminator when today’s chatbots brazenly lie and cannot remember five lines of dialogue? And, crucially, who stands to gain?
Unimpressed
First, the obvious: AI technologies have leapt forward over the past decade. Modern systems can write coherent text, spot patterns in large datasets and generate visual content. Not long ago, machines could not substitute for so much human work.
The prospects are unsettling. Yet for now the mass-market story has stalled at chatter about so-called artificial general intelligence and the release of near-identical language models (sometimes the new ones are even worse than their predecessors).
What we have, in the end, is a helper tool trained to handle simple text—and sometimes image—tasks. People use it for vibe-coding or drafting social-media posts. The output often needs checking: for more complex work, neural nets fall short.
Ask your favourite chatbot to write a PhD dissertation on “X” and you will get a barely coherent text with citations from the first or second page of a search engine. To improve results, people recommend elaborate prompts, but that is merely finer tuning in “machine-speak” and extra training.
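To make the point concrete, here is a rough sketch, with invented prompt text, of what those "elaborate prompts" usually look like in practice, compared with the one-line request most people type.

```python
# Illustrative only: two ways of asking a chatbot for the same piece of text.
# The "elaborate" version spells out role, scope, format and sources,
# which is most of what "prompt engineering" amounts to.

vague_prompt = "Write a PhD dissertation on topic X."

structured_prompt = """You are an academic writing assistant.
Task: draft a literature-review section of about 800 words on topic X.
Rules:
- cite only the sources listed below, with author and year;
- if you are unsure about a claim, flag it instead of inventing a reference;
- keep a neutral academic register.
Sources:
1. <source one>
2. <source two>
"""
```

Even with the longer prompt, the model is still predicting tokens; the extra structure only narrows the space of likely continuations.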
With prolonged use, most users grasp the limits of today’s models. Progress has hit the walls of training data and compute, while the “intelligence” factor has faded into the background.
Brainless intelligence
For context, consider how AI works. In brief, the big language models behind classic chatbots operate like this (a toy sketch follows the list):
- Input text is split into tokens (parts of words, symbols).
- Each token is assigned a numerical vector.
- The model analyses relationships among tokens and identifies which words matter most to understanding the context.
- On that basis the LLM “predicts” each next token, forming a reply.
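As a toy illustration of that last step, the Python sketch below learns which word tends to follow which in a tiny made-up corpus and then "generates" a reply one token at a time. It is an analogy only: real models work with learned vectors and attention over billions of parameters, not a lookup table.

```python
from collections import Counter, defaultdict

# Toy "language model": count which token follows which in a tiny corpus,
# then pick the statistically most likely continuation. Real LLMs do the
# same job with billions of learned parameters instead of a lookup table.

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

follow_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follow_counts[current][nxt] += 1

def predict_next(token: str) -> str:
    """Return the continuation seen most often in the training data."""
    candidates = follow_counts.get(token)
    return candidates.most_common(1)[0][0] if candidates else "."

# Generate a short reply token by token: the same loop an LLM runs,
# only with far cruder statistics.
token, reply = "the", ["the"]
for _ in range(5):
    token = predict_next(token)
    reply.append(token)
print(" ".join(reply))
```

Greedily picking the most frequent continuation quickly loops on itself here; real chatbots sample from a learned probability distribution over a vast vocabulary, which is why their replies look fluent.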
The model does not “predict” from thin air. It is pre-trained on a huge corpus, typically scraped from public internet sources. That is where the neural net gets all its “intelligence”.
Language models do not “understand” text in the human sense; they compute statistical regularities. All leading chatbots today use the same basic architecture—aptly named “Transformer”—which works on this principle.
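For readers curious about the mechanism behind that principle, here is a minimal, schematic Python sketch of scaled dot-product attention, the core operation of the Transformer. The vectors are random stand-ins; in a real model they are learned projections of token embeddings, stacked across many layers.

```python
import numpy as np

# Schematic scaled dot-product attention for a three-token sentence.
# The vectors are invented for illustration; in a real Transformer they
# are learned projections of the token embeddings (queries, keys, values).

tokens = ["the", "cat", "sat"]
d = 4  # embedding size

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, d))  # queries: "what is each token looking for?"
K = rng.normal(size=(3, d))  # keys:    "what does each token offer?"
V = rng.normal(size=(3, d))  # values:  "the information it carries"

scores = Q @ K.T / np.sqrt(d)                  # relevance of token j to token i
weights = np.exp(scores)
weights /= weights.sum(axis=1, keepdims=True)  # softmax: each row sums to 1

context = weights @ V  # each token's new representation mixes in the others

for i, tok in enumerate(tokens):
    print(tok, np.round(weights[i], 2))
```

The weights matrix is, in effect, the model's answer to "which words matter most to understanding the context"; everything downstream is built from repetitions of this step plus simple feed-forward layers.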
Crude though the analogy is, you can think of an LLM as a very powerful calculator atop a large database. It is a strong, useful tool that simplifies many aspects of life, but it is premature to ascribe full-fledged intelligence to it.
Modern chatbots look more like the next iteration of search (hello, Gemini in Google) than a pocket, all-knowing assistant.
Reliability remains a concern, too. After glancing at statistics on hallucinations and outright fabrications by neural nets, one is tempted to go back to plain old googling.
Boo—scared yet?
The doomsayers’ main thesis is that “AI is getting smarter exponentially”, so once it surpasses human intelligence, humanity is finished.
Modern AI certainly outpaces us at processing and transforming data. A neural net can, for instance, retell Wikipedia in some detail. That is roughly where its "knowledge" ends. More precisely, the model cannot apply it to "personal goals", because it has none, and was never designed to have any.
It is also clear that artificial intelligence does not understand the world around us. The laws of physics are a closed book to it.
Progress in language models has largely meant widening the scope of prediction (guessing tokens). But AI is fast approaching the limits of text-only training, and the chorus calling for “spatial” intelligence is getting louder.
Even if the technology’s weak spots can be identified—and work on them is under way—harder questions remain.
Much of how the human brain works is still a mystery even to us. Recreating such a complex structure in digital form is another matter entirely.
Another near-insurmountable obstacle for AI is creativity—the ability to originate something new. LLMs are, technically, unable to step beyond their architectural constraints, because they operate by reprocessing existing data.
Thus AI’s fate depends directly on the information we put into it—and, for now, training material is oriented entirely toward human benefit.
To be fair, consider Elon Musk and Grok. At one point users noticed the chatbot's bias and its tendency to overrate the billionaire's abilities. That is a worrying ethical signal, but a potential "neuro-Elon" is unlikely to harm humanity physically.
AI applications have one aim: to obey the user’s request. A chatbot has no will or desires, and that paradigm is unlikely to change any time soon.
The anatomy of fear
So why are we still being scared by an AI that turns out not to be very “intelligent”? The main answers are obvious.
Setting aside ignorance of the technology, the simplest reason is greed—for money or for fame.
Consider one of the apocalypse’s “prophets”, Eliezer Yudkowsky. The AI researcher and co-author of If Anyone Builds It, Everyone Dies has, since the 2000s, warned of a superintelligent AI supposedly alien to human values.
“Superintelligence” has yet to appear, a point Yudkowsky often concedes. That does not stop him from touring podcasts with sweeping claims and selling books.
The computer scientist and "godfather of AI" Geoffrey Hinton has also voiced near-apocalyptic concerns. He has estimated a 10–20% chance that the technology will cause human extinction within the next 30 years.
In Hinton’s view, as capabilities grow, a strategy of “keeping artificial intelligence under control” may stop working, and agentic systems will strive for survival and an expansion of control.
It remains unclear who, and for what purpose, would grant neural nets a "will to live". Hinton continues to work on training neural networks; in 2024 he was awarded the Nobel Prize in Physics for achievements in this area, and in early 2026 he became the second scientist in history, after the computer scientist Yoshua Bengio, to reach 1m citations.
Surprisingly, the forecasts of Andrew Ng, co-founder of Google Brain, sound more down to earth. He has called AI an “extremely limited” technology and is confident that algorithms will not replace people any time soon.
Every field has sharp-tongued forecasters. Their presence in AI can be excused by the public’s affection for science fiction. Who doesn’t like to tickle their nerves with Philip K. Dick or Robert Sheckley–style tales, now set in the present?
More troubling are statements by big corporations that, almost in passing, warn of threats to jobs and predict rapid AI advances. The second claim often explains a need to cut costs; the first invites more conspiratorial readings.
One of the world’s largest companies—Amazon—has, over the past six months, laid off more than 30,000 employees. Management cites plans for optimisation and the impact of automation, including AI.
Work on warehouse robots continues apace. But cynics argue the problem is more prosaic: the mass culls reflect over-hiring during the pandemic rather than any leap in automation.
Amazon is hardly alone in such layoffs. At the same time, the AI firms of Silicon Valley keep expanding headcount and renting new office space.
Yet back in 2023 the leaders of many of these same companies signed a Center for AI Safety statement declaring that mitigating AI's "risk of extinction" should be a global priority alongside pandemics and nuclear war.
Over time the statement was forgotten, work carried on, and no visible threat emerged.
From a corporate vantage point, amid talk of an over-inflated AI bubble, appeals to technological change make a more convenient cover story than admitting structural failures in people management. But such claims paint a false picture and distract from real problems: disinformation and deepfakes.
AI does not steal jobs; it changes how work is done, sometimes simplifying it. A narrow Harvard study even suggests that AI can, on the contrary, complicate and slow internal processes.
The technology will seep into every area of life: education, science, commerce, politics. How it shows up there will be determined only by people. For now, neural nets do not have a vote.
Out of our reach
Everything above concerns public-facing AI such as chatbots and generative image tools. Of course, more serious work exists behind closed doors.
Among the relatively simple cases are domain LLMs in medicine and archaeology. The former help design new proteins; the latter help decipher ancient documents that defy traditional analysis.
But you have to track results through hard-to-access internal reports or in niche specialist media, so public awareness is close to zero. It is quite possible the biggest breakthroughs are happening here.
A “doomsday AI machine” is unlikely to appear even in closed labs. Such models are highly specialised and do only what they are required to do.
Fears of AI running amok merely mirror our own anxieties—whether about losing jobs or thornier ethical questions. So long as we humans set the direction and goals, AI remains a tool, not a separate subject with a will of its own.
Discussing potential risks is right. Spinning apocalyptic theories is human nature. But these things demand scepticism—and even a touch of irony. If we have an “off” button to hand, no digital supermind threatens our world.
Vasily Smirnov
