In late June, OpenAI co-founder Ilya Sutskever shared his vision for artificial intelligence. He believes AI agents will soon complete a month’s worth of human work in hours and become fully fledged virtual colleagues, scalable to millions. The shift to models capable of genuine “reasoning”, which he says will “change everything”, unnerves many. Sutskever urged people not to fear the future but to shape it by adapting to change.
In this ForkLog piece, we examine how cognitive AI systems are evolving—and the challenges they pose not only to developers but also to users.
Think about thinking
Over the past two years the AI community has increasingly debated the consequences of runaway progress, the arrival of artificial general intelligence (AGI) and even of artificial superintelligence (ASI). In that context, researchers are trying to define the next step.
Most corporate “seers” predict that the next generation of AI will take the form of various flavours of cognitive artificial intelligence (CAI).
Google DeepMind chief executive Demis Hassabis told Wired there is a 50% chance of achieving AGI within 5–10 years. Beyond that, he said, lies an “era of maximum human flourishing”, with the colonisation of the galaxy starting as early as 2030.
Deep Think, a mode of Google’s Gemini chatbot, may be the next step toward “metacognition”, in which a machine performs multi-layered logic and planning, edging closer to cognitive thought.
A year ago, Meta’s chief AI scientist Yann LeCun urged the next generation of researchers to steer clear of LLMs. He argues that big companies already dominate the field and that LLMs are not the path to advancing AI.
“If you’re a student who is interested in building the next generation AI systems, don’t work on LLMs,” he said.
Instead, Meta’s team has focused on “world models” of the kind animals and humans rely on: the V-JEPA system, designed to form a genuine understanding of the environment. It learns intuitive reasoning from video using self-supervised learning. Compared with multimodal LLMs, it has shown a deeper grasp of physics while requiring less training data. The model makes predictions in an abstract representation space, which, in LeCun’s view, is akin to how the human brain processes information.
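Conceptually, the joint-embedding approach can be reduced to a few steps: encode a visible “context” view, encode the hidden “target” view, and train a predictor to match the target’s latent representation rather than its pixels. The sketch below is a toy PyTorch illustration of that idea, with invented layer sizes and random tensors standing in for video patches; it is not Meta’s V-JEPA code.

```python
# Toy JEPA-style training loop: the loss is computed in latent space, not pixel space.
import torch
import torch.nn as nn

def make_encoder(in_dim=768, latent_dim=128):
    # Maps a flattened video patch to an abstract representation.
    return nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))

context_encoder = make_encoder()
target_encoder = make_encoder()   # in practice a slowly updated copy of the context encoder
predictor = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 128))

opt = torch.optim.Adam(list(context_encoder.parameters()) + list(predictor.parameters()), lr=1e-4)

for step in range(10):
    context_patches = torch.randn(32, 768)   # visible part of a clip (random stand-in data)
    target_patches = torch.randn(32, 768)    # masked part the model must "imagine"

    with torch.no_grad():                    # no gradients flow into the target branch
        z_target = target_encoder(target_patches)

    z_pred = predictor(context_encoder(context_patches))
    loss = nn.functional.mse_loss(z_pred, z_target)   # prediction error in representation space

    opt.zero_grad()
    loss.backward()
    opt.step()
```

Computing the loss in representation space is what lets such a model ignore unpredictable pixel-level detail and focus on the structure of a scene.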
In early 2024, OpenAI began introducing long-term memory into ChatGPT, a step toward modelling a persistent “self” and an individual thinking style. The firm ultimately aims to build a network of AI agents with a deep understanding of each user.
Nvidia, one of the main infrastructure suppliers for global AI, has launched a set of initiatives to advance spatial orientation for robots, known as the DePAI stack. Chief executive Jensen Huang regards modelling the real world and its digital replicas, people included, as a precursor to reasoning AI.
In short, CAI is an attempt to embed the following qualities in systems (a schematic sketch follows the list):
- causal reasoning;
- prediction of consequences;
- respect for moral and legal norms;
- maintaining the coherence of judgements over time;
- contextual memory and modelling of a user’s identity.
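To make the list above more concrete, here is a purely schematic sketch of how those qualities might appear as an interface. Every class, method and rule below is invented for illustration and does not describe any real product.

```python
# Schematic interface for the qualities listed above; illustrative only, not a real API.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CognitiveAgentSketch:
    memory: list = field(default_factory=list)        # contextual memory of past exchanges
    user_model: dict = field(default_factory=dict)    # evolving model of the user's identity
    forbidden: set = field(default_factory=lambda: {"deceive", "coerce"})  # toy moral/legal norms

    def infer_cause(self, outcome: str) -> Optional[str]:
        # Causal reasoning, reduced here to recalling which action preceded the outcome.
        for m in self.memory:
            if m.get("outcome") == outcome:
                return m.get("action")
        return None

    def predict_consequences(self, action: str) -> list:
        # Consequence prediction, reduced here to recalling outcomes of similar past actions.
        return [m["outcome"] for m in self.memory if m.get("action") == action]

    def violates_norms(self, action: str) -> bool:
        # Moral/legal check against the fixed norm set.
        return any(word in action for word in self.forbidden)

    def is_consistent(self, judgement: str) -> bool:
        # Coherence over time: the new judgement must not contradict a remembered one.
        return f"not {judgement}" not in {m.get("judgement") for m in self.memory}

agent = CognitiveAgentSketch(memory=[{"action": "publish", "outcome": "backlash", "judgement": "publish"}])
print(agent.infer_cause("backlash"), agent.violates_norms("deceive the user"), agent.is_consistent("publish"))
```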
Such models remain far from consciousness, but they already mark a structural shift toward simulating thought. If LLMs are syntax and rhetoric, CAI is closer to logic and philosophy.
Cognitive bombs
What was the preserve of sci-fi action films 20 years ago is now embedded across the defence apparatuses of leading states.
Given the heightened tensions of recent years, defence budgets are ballooning. AI—especially advances in autonomous decision-making and consequence modelling—is the main driver of progress in protecting citizens.
The US Army was testing cognitive interaction long before today’s “thinking” models emerged.
In 2012 America’s Defense Advanced Research Projects Agency (DARPA) unveiled a novel computer-vision system.
CT2WS—a brain–computer system using neural interfaces and neural networks—warned infantry of threats. It combined 120‑megapixel cameras, radar, an object-recognition compute cluster and operators wearing brain-activity scanners.
To minimise false alarms, the operator’s brain was brought into the loop. People were connected to the video-surveillance system and shown suspicious frames at high speed while their electroencephalograms were recorded. If a human spotted danger, the neurointerface registered the reaction instantly. False positives fell from 810 to five per hour.
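The pipeline described above, machine pre-filtering followed by a brain-in-the-loop check, can be sketched in a few lines. Everything in the snippet is a simulated stand-in (random scores replace the real vision cluster and EEG classifier); it is not DARPA’s implementation.

```python
# Hypothetical CT2WS-style filter: a vision model proposes candidate threat frames,
# each candidate is flashed to an operator, and the recorded EEG response decides
# whether the alert is kept. All scores below are simulated.
import random

def vision_model_score(frame):
    # Stand-in for the object-recognition cluster: threat score in [0, 1].
    return random.random()

def eeg_response(frame):
    # Stand-in for the neurointerface: a simulated P300-like amplitude,
    # stronger when the operator actually perceives a threat.
    return random.gauss(1.0 if frame["is_threat"] else 0.0, 0.3)

def filter_alerts(frames, cv_threshold=0.3, eeg_threshold=0.5):
    candidates = [f for f in frames if vision_model_score(f) > cv_threshold]  # machine pre-filter
    return [f for f in candidates if eeg_response(f) > eeg_threshold]         # brain-in-the-loop check

frames = [{"id": i, "is_threat": i % 100 == 0} for i in range(1000)]
print(len(filter_alerts(frames)), "alerts forwarded after operator confirmation")
```

The drop in false alarms reported for CT2WS, from 810 to five per hour, comes precisely from that second, human stage of filtering.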
In recent years, computer-vision systems have reached ordinary users, most notably in Tesla’s Autopilot.
Given the secrecy and priority of government programmes, it is hard to gauge how far military technology has advanced since. Publicly available information over the past year about AI’s militarisation is disquieting.
In March it emerged that the US Department of Defense is using AI agents to model clashes with foreign armies. Under a contract with Scale AI, the Thunderforge platform is intended to improve battlefield decision-making, with Anduril and Microsoft as partners in the programme.
This is not the first time AI firms have worked in defence. Microsoft’s long-running collaboration with the Israel Defense Forces triggered protests from employees in February 2025.
In January, OpenAI struck an agreement with the US National Laboratories to apply the startup’s models to nuclear-weapons research. Earlier, Meta made its technologies available to US government agencies, defence contractors and allies. Anthropic followed suit.
There is also ample evidence of misfiring militarised AI leading to unwarranted destruction and civilian deaths.
All this, alongside corporate centralisation of the technology, is pushing engineers to develop methods for ethical AI.
A moral compromise
AI pioneer Yoshua Bengio has criticised the multibillion-dollar race to build cutting-edge systems. He warned of dangerous properties in recent models and of users being misled.
He issued his cautions in early summer as he launched LawZero, a new non-profit. The group, he said, will focus on safer systems and “insulate research from commercial pressure”. His move coincides with OpenAI’s intention to step away from its “charitable” roots and become a commercial company.
LawZero’s models are expected not to give definitive answers, as current systems do. Instead, they will report the probability that a given response is correct.
In an interview with The Guardian, Bengio noted that his models will have “a sense of humility—an awareness that they are not sure about their answer”.
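One way to read that “humility” requirement is that a model should return an answer together with an explicit estimate of how likely the answer is to be correct. The sketch below uses a crude sampling-agreement heuristic as that estimate; it is an illustrative assumption, not Bengio’s or LawZero’s actual method.

```python
# Toy "humble answer": report an answer plus an estimated probability of being correct.
import random
from collections import Counter

def answer_with_confidence(question, sample_fn, n_samples=10):
    # Sample several candidate answers; the agreement rate serves as a crude confidence proxy.
    samples = [sample_fn(question) for _ in range(n_samples)]
    best, count = Counter(samples).most_common(1)[0]
    return {"answer": best, "p_correct_estimate": count / n_samples}

def noisy_model(question):
    # Deliberately unreliable stand-in for a real model.
    return random.choice(["Paris", "Paris", "Paris", "Lyon"])

print(answer_with_confidence("What is the capital of France?", noisy_model))
```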
He hopes to create a model that can refine the offerings of leading AI groups and prevent actions that conflict with human interests.
“The worst-case scenario is the extinction of humanity. If we build AI that is smarter than us, not aligned with us, and competitive, then basically we’re done,” he said.
Among new startups are those that claim to defend humanity’s future by building rational and ethical AI models.
MindCast AI (MCAI) is a conceptually developed example of CAI. The system models decision-making with an emphasis on the ethical integrity of judgements and the predictability of future behaviour.
The company’s founder, Noel Lee, surfaced on Substack about three months ago with a strikingly clear, fully formed proposal. Although practical demonstrations are still lacking, the concept itself is intriguing.
MCAI is a CAI system designed to simulate the judgements and behaviour of human agents. Rather than extrapolating from historical data, it focuses on outcomes, using modelling to anticipate real-world behaviour.
According to its technical description, MCAI lets users:
- model the behaviour of key figures under uncertainty or pressure;
- anticipate legal, reputational and emotional reactions before they occur;
- identify gaps between words and deeds;
- track how institutional incentives shape decision trajectories;
- forecast moral risks and social reaction;
- test foresight strategies before implementation.
Today’s AI models are optimised for speed and scale, but ignore deeper questions that define sound choices: trust, human well‑being, predictability.
“Billions are being directed to startups on the basis of founders’ reputations rather than demonstrated capability,” the MCAI manifesto says.
The MCAI system comprises three core capabilities (a sketch in code follows the list):
- foresight integration. It models the future via a proprietary probabilistic forecasting engine, layers of Bayesian inference, scenario analysis and assessment of unintended consequences;
- trust calibration. All data pass through a model of trust signals that distinguishes correlation from causation and filters statistical noise before decision logic is applied;
- a coherence architecture. It ensures logical and moral consistency via an ALI (action‑language integrity) engine and a CMF (cognitive motor fidelity) system for domain‑credible insights.
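MCAI has published no code, so the snippet below is only a guess at how the three layers might fit together: trust-weighted evidence is discounted before a Bayesian scenario update (foresight integration plus trust calibration), and a simple word-versus-deed comparison stands in for the ALI coherence check. All names, numbers and formulas are invented for illustration.

```python
# Hypothetical composition of the three MCAI capabilities; not the company's actual system.
from dataclasses import dataclass

@dataclass
class Evidence:
    claim: str
    likelihood_if_true: float    # P(signal | scenario holds)
    likelihood_if_false: float   # P(signal | scenario does not hold)
    trust: float                 # 0..1 weight assigned by the trust-calibration layer

def scenario_probability(prior: float, evidence: list) -> float:
    # Foresight integration: Bayesian update of a scenario's probability from weighted evidence.
    posterior = prior
    for e in evidence:
        # Trust calibration: low-trust signals are pulled toward an uninformative likelihood of 0.5.
        lt = e.trust * e.likelihood_if_true + (1 - e.trust) * 0.5
        lf = e.trust * e.likelihood_if_false + (1 - e.trust) * 0.5
        posterior = lt * posterior / (lt * posterior + lf * (1 - posterior))
    return posterior

def coherence_gap(stated_commitments: set, predicted_actions: set) -> float:
    # Crude action-language integrity check: share of commitments the forecast contradicts.
    broken = [c for c in stated_commitments if c not in predicted_actions]
    return len(broken) / max(len(stated_commitments), 1)

evidence = [
    Evidence("executive sells shares before the announcement", 0.8, 0.3, trust=0.9),
    Evidence("anonymous forum rumour", 0.9, 0.7, trust=0.2),
]
p = scenario_probability(prior=0.2, evidence=evidence)
gap = coherence_gap({"honour the merger terms"}, {"renegotiate the terms"})
print(f"P(scenario) ~ {p:.2f}, coherence gap = {gap:.2f}")
```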
The cognitive digital twin (CDT) structure was patented in April 2025. The system is already used in corporate-strategy analysis, policy design and antitrust investigations.
Trust underpins the startup’s approach to building a personal AI agent.
“Without trust‑based cognitive architectures, AI companions turn into systems of total surveillance,” the manifesto says.
MCAI addresses this with memory, integrity and identity.
Noel Lee says the CDT models decisions, anchors them in a moral architecture and maintains consistency over time, evolving with history and values.
Fathers and children
Ben Goertzel, founder of the AI‑crypto startup SingularityNET, is considered one of AI’s fathers and a leading evangelist of AGI—hence his zeal in defending its future.
The researcher rejects the centralisation and militarisation of technologies that could harm human choice and agency. Founded in 2017, the blockchain platform SingularityNET extends his vision of achieving AGI. At its core is a decentralised infrastructure for AI development, with a special emphasis on systems like Hyperon.
Together with Fetch.ai, CUDOS and Ocean Protocol, SingularityNET formed a coalition called the Artificial Superintelligence Alliance, with the ASI token. The goal is to pool resources to compete with industry leaders.
In September 2024, Goertzel said that an OpenCog Hyperon‑based system shows signs of partial self‑awareness: an autonomous agent with “its own goals”—not just a chatbot. An experimental Baby AGI release is slated for 2025.
Another father of AI is Jeff Hawkins, author of the “memory-prediction” framework, who is working on “biologically plausible” AI based on neocortical principles. The Thousand Brains Project, the non-profit launched at Numenta early this year, moves closer to cognitive AI through its world model, memory and hierarchy of representations.
The decentralised ethos of Web3 offers what AI needs to develop freely, allowing the technology’s trajectory to be tracked and corrected. The Artificial Superintelligence Alliance, built on similar principles, has made good use of blockchain’s advantages.
CAI is advancing quickly, with the first working models expected this year. While giant AI firms compete for supremacy and defence contracts, the rise of decentralisation-minded startups is encouraging.
