
Anthropic co-founder forecasts ‘self-developing AI’ by 2028
Anthropic’s Jack Clark sees “self-developing” AI arriving by 2028.
By 2028, AI systems capable of designing and training their own successors without human involvement could reach the market, Anthropic co-founder Jack Clark said.
“This is very important. I don’t know how to process it. I come to this conclusion reluctantly, because the consequences are so great that I feel overwhelmed by them, and I’m not sure society is ready for the changes implied by automated AI development,” he noted.
Clark outlined a scenario of fully automated AI research—where a model independently:
- sets research goals;
- designs experiments;
- writes and tests code;
- optimises training;
- improves the architecture of the next version of the AI.
He called this a “Rubicon into an almost unpredictable future” and put the probability of such a scenario at 60% over the next two years.
What the estimate rests on
Clark’s conclusion rests on the trajectory of several benchmarks:
- SWE-Bench — a test of solving real software-engineering tasks drawn from GitHub repositories. At the end of 2023 the best models handled roughly 2% of cases; by spring 2026 the figure had reached 94%;
- CORE-Bench — reproducing the results of scientific AI papers with environment setup, code execution and analysis of conclusions. According to Clark, the benchmark is effectively “closed”: modern agents score about 95.5%;
- MLE-Bench — performing ML tasks at Kaggle level. The best agentic systems already reach 64–65%.
According to the Anthropic co-founder, all three metrics show one thing: AI is rapidly shifting from writing snippets of code to fully carrying out engineering and research tasks.
Rising autonomy
Another argument is the increasing duration of tasks that AI models can perform without human intervention.
According to METR, in 2022 systems managed tasks that took humans tens of seconds. In 2024 the figure rose to about 40 minutes; in 2025, to six hours. Today, leading models can sustain engineering work for around 12 hours straight.
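Taken at face value, those figures imply a steep growth curve. A quick back-of-the-envelope calculation (ours, not METR's, using only the numbers quoted above) shows the implied doubling time:

```python
# Back-of-the-envelope check of the growth rate implied by the METR figures
# quoted above (about 40 minutes in 2024, about 6 hours in 2025). The input
# numbers come from the article; the doubling-time arithmetic is illustrative.
import math

t_2024_minutes = 40
t_2025_minutes = 6 * 60

growth_per_year = t_2025_minutes / t_2024_minutes            # roughly 9x in a year
doubling_time_months = 12 * math.log(2) / math.log(growth_per_year)

print(f"growth per year: {growth_per_year:.1f}x")
print(f"implied doubling time: {doubling_time_months:.1f} months")
```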
Clark linked this to the spread of agentic programming tools. The longer a model holds a goal, checks intermediate results and fixes errors, the more stages of the research cycle can be delegated to it.
Why this matters for AI development
The current AI development loop follows a familiar pattern: study prior work, reproduce the result, set up an experiment, train or fine-tune a model, check metrics, find bottlenecks and repeat. Gains on SWE-Bench, CORE-Bench and MLE-Bench indicate that models already handle whole chunks of that loop.
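As a purely illustrative sketch of that loop (every function, knob and score below is invented for the example, not taken from any lab's tooling), it can be written down in a few lines of Python:

```python
# Toy, runnable illustration of the development loop described above:
# run an experiment, check metrics, address the bottleneck, repeat.
# All names and numbers here are hypothetical placeholders.
import random

def evaluate(config):
    # Stand-in for "train or fine-tune a model and check metrics":
    # a noisy score that improves as a hypothetical "scale" knob grows.
    return min(1.0, 0.02 * config["scale"] + random.uniform(-0.05, 0.05))

def research_cycle(target=0.9, max_rounds=50):
    config = {"scale": 1}                  # start from a small baseline
    for round_ in range(max_rounds):
        score = evaluate(config)           # set up the experiment, check metrics
        if score >= target:                # good enough: stop iterating
            return config, score, round_
        config["scale"] += 1               # "fix the bottleneck" by scaling up
    return config, score, max_rounds

if __name__ == "__main__":
    best, score, rounds = research_cycle()
    print(f"reached score {score:.2f} after {rounds} rounds with {best}")
```

In Clark's framing, automated AI development means an agent running this kind of loop end to end, rather than helping with isolated steps.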
Clark also pointed to progress on more specialised tasks. For example, AI is beginning to be used to design GPU kernels—the code that determines how efficiently models train and run inference on specific hardware.
Another strand is post-training. In the PostTrainBench benchmark, AI systems are tasked with improving small open-source LLMs.
As of spring 2026, the best networks achieve 25–28% of the target gain (human teams, 51%). Clark deems the result significant: the target is set by real instruction-following models built by experienced researchers.
Anthropic measured how its models optimise LLM training on the CPU. Over a year the speed-up rose from 2.9x (Claude Opus 4) to 52x (Claude Mythos Preview). Humans typically need four to eight hours for a comparable task.
AI is already learning to manage AI
Clark noted that modern systems are starting to coordinate the work of other agents. The approach is already used in products such as Claude Code or OpenCode: one assistant parcels out work among multiple sub-assistants, supervises them and aggregates results.
For AI development this matters: it is rarely a single linear task — usually it is dozens of parallel processes, including writing code and configuring the environment. If a model starts managing such loops on its own, the degree of human involvement will drop sharply.
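A rough sketch of that coordinator pattern (not the actual architecture of Claude Code or OpenCode; the task list and the worker function are invented for the example) might look like this:

```python
# Minimal sketch of an orchestrator fanning work out to sub-agents and
# aggregating their results. The "sub-agent" here is just a local function;
# a real system would call model-backed agents instead.
from concurrent.futures import ThreadPoolExecutor

def sub_agent(task: str) -> str:
    # Placeholder for a model-backed worker that writes code,
    # configures an environment, runs tests, and so on.
    return f"done: {task}"

def orchestrate(tasks: list[str]) -> list[str]:
    # The coordinator parcels out tasks, waits for them and aggregates results.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(sub_agent, tasks))

if __name__ == "__main__":
    plan = ["write parser", "configure environment", "run test suite"]
    for line in orchestrate(plan):
        print(line)
```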
Do neural networks need creativity
In Clark’s view, one key question is what AI development most resembles: the discovery of general relativity or assembling Lego.
He acknowledged that today’s LLMs are not yet able to generate fundamentally new scientific ideas. But that may not be necessary to automate a large share of AI R&D.
“For the most part AI moves forward through people methodically running a certain cycle: take a well-performing system, scale some aspect of it, look at the errors when scaling and fix them. This requires very few out-of-the-box ideas, and most of this process looks like unglamorous rough engineering work,” the expert noted.
Early signs of scientific contribution
Clark believes AI models are beginning to show early signs of scientific intuition. He cited several examples from mathematics and computer science:
- a team of mathematicians, using Gemini, checked around 700 Erdős problems and obtained 13 solutions, one of which researchers called a “slightly nontrivial” contribution to an open problem;
- scholars from the University of British Columbia, the University of New South Wales, Stanford and Google DeepMind published a mathematical proof found with substantial help from Gemini-based tools.
What if the forecast is right
Clark noted that the largest AI labs are already moving towards automating research. OpenAI intends to build an AI intern for independent scientific work, and Anthropic is publishing work on automatic alignment to human values.
If the current pace holds, the industry will shift into a phase of fully automated AI development, he forecast — triggering a cycle in which each new generation of AI accelerates the arrival of the next.
If the transition happens by the end of 2028, the world will face not only a technological leap, he said. Fundamental questions of safety, capital distribution, the role of human labour and control over systems that start developing faster than their creators will also come to the fore.
“If you forced me to name a probability for 2027, I would say 30%. If we do not see this by the end of 2028, then I think we will have found some deficiency in the current technological paradigm, and human invention will be required to move forward,” Clark concluded.
In January, Anthropic CEO Dario Amodei predicted the imminent arrival of AGI and job losses.