ForkLog

General Algorithm

Artificial intelligence has become an indispensable aide across fields that improve daily life—from automating web search to education and medicine. Yet, like any technology, it has another side.

In recent months, engineers at leading corporations have resigned in protest at the use of their work in the military‑industrial complex; musicians remove entire catalogues from streaming platforms that finance military AI; and reports of an autonomous drone that independently decided to eliminate a target scarcely raise eyebrows anymore.

ForkLog examined how, in just a decade, AI moved from a futurist project to a part of daily reality for armed forces worldwide, who leads the new arms race and where the militarisation of neural networks may lead.

How artificial intelligence became a weapon

In the early 2000s, America’s DARPA began experimenting with autonomous systems that allowed machines not just to execute commands but to make decisions on the battlefield. Prototypes such as Crusher—a six‑ton reconnaissance robot—could traverse rough terrain without human input. In parallel, trials were run with the ARSS automatic sniper drone. These efforts laid early foundations for military AI.

In April 2017, the U.S. Department of Defense launched Project Maven, a large‑scale programme to bring AI to combat. Its task was to analyse drone video feeds with machine‑learning algorithms. Development raced ahead; within months the technology was added to Predator and Reaper drones. 

Inside the industry, this caused shock. Thousands of Google employees—the project’s main partner—signed a petition against participation in military initiatives; dozens of engineers resigned. In 2018 the company formally declined to continue.

During the Second Karabakh War, Azerbaijan widely used Turkish Bayraktar TB2 and Israeli Harop UAVs, as well as experimental Kargu‑2 loitering munitions. These systems incorporated elements of autonomy, such as aiming, vehicle recognition and target tracking. The precise degree of independent decision‑making, including the ability to strike without an operator, has not been disclosed and remains a matter of expert assessment and conjecture. Even so, the conflict showcased broad deployment of AI‑enabled unmanned systems.

China, meanwhile, is pushing a strategy of military‑civil fusion, knitting together the defence sector, academia and technology giants. Investment focuses on autonomous systems for monitoring, interception, reconnaissance and maritime operations. The central wager is to make AI not just a weapon but a key infrastructure layer of a modern military.

Israel took a parallel course. Since 2021 its military has actively deployed AI systems to automate targeting in Gaza. Algorithms analysed vast data—phone calls, movements, digital activity—to help generate target lists. Systems dubbed Lavender and Gospel were used in large‑scale operations in 2023. That drew international criticism, with rights groups and the UN questioning the legality of strikes based on AI recommendations. 

The true war of algorithms began in 2022 with Russia’s full‑scale invasion of Ukraine. From then on AI became an active participant in combat. American firm Palantir provided Ukraine with tactical platforms for intelligence analysis and strike planning.

At the same time, Ukrainian start‑ups including Gogol Brain developed AI modules for drones, including semi‑autonomous FPV craft capable of identifying targets and operating amid electronic warfare. AI shifted from helper tool to the core of reconnaissance‑strike systems. 

Russia uses Iranian Shahed‑136s, modified for greater accuracy and resilience to electronic interference. Some sources report possible AI integration in these drones, though there is no official confirmation. 

By 2025, the world no longer debates whether AI will become a weapon—it already is. Taiwan launched a national programme to produce 25,000 AI‑navigated FPV drones, building a layered, autonomous defence against a potential Chinese invasion. 

India, mindful of rising tensions along its borders with China and Pakistan, has also begun to deploy AI in security systems. Since 2022, roughly 140–145 AI‑enabled surveillance platforms have been fielded, combining cameras, thermal imagers, radars and drones with facial‑recognition software.

In scarcely a decade, AI has evolved from software that parses video to autonomous weapons that scout, select targets and, not infrequently, pull the trigger. A new era has begun—and it will not be halted.

How much is invested in “smart” war?

Pinning down how much militaries spend—especially on AI—is near impossible: most programmes are classified. Open sources, however, sketch the outlines.

Global defence‑AI market size

Source: author.

The global military‑AI market was estimated at $4.8bn in 2022 and $9.3bn in 2024. Under the most conservative scenario, it could reach $13bn–$19bn by 2030; a moderate path suggests $25bn–$30bn; in the event of further global escalation, $35bn–$40bn and above. As much as 50% of funding may flow through the private sector: start‑ups, venture capital and big tech. 
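The figures above imply specific compound annual growth rates. A quick back‑of‑the‑envelope check — an illustrative sketch using only the estimates cited in the text, not a formal forecast — shows what annual growth each 2030 scenario assumes:

```python
# Back-of-the-envelope check of the market estimates cited above ($bn).
# The cagr() helper and scenario labels are illustrative, not from any source dataset.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

# Growth implied by the 2022 -> 2024 estimates: roughly 39% per year.
historical = cagr(4.8, 9.3, 2)

# Annual growth each 2030 scenario implies, starting from $9.3bn in 2024.
scenarios = {
    "conservative ($13bn-$19bn)": (13, 19),
    "moderate ($25bn-$30bn)": (25, 30),
    "escalation ($35bn-$40bn)": (35, 40),
}
print(f"2022-2024 growth: {historical:.1%} per year")
for name, (low, high) in scenarios.items():
    print(f"{name}: {cagr(9.3, low, 6):.1%}-{cagr(9.3, high, 6):.1%} per year")
```

Notably, even the escalation scenario implies slower annual growth (roughly 25–28%) than the near‑40% pace of 2022–2024, so all three projections assume the market's expansion cools.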


United States

America currently leads the race in defence AI, thanks in part to the Pentagon’s deep ties with major tech firms: Palantir, Anduril, Shield AI and Scale AI have secured contracts worth billions. In 2023 the Department of Defense created Task Force Lima to develop and field generative AI, and launched the expansive Replicator programme.

The Pentagon’s AI‑technology budget rose from $600m in 2018 to $1.8bn in 2024, while total military‑AI outlays are estimated at $4bn–$6bn annually.

China

Through military‑civil fusion, leading AI firms—SenseTime, Megvii, iFlytek—receive state backing for R&D. According to CIGI, funding for dual‑use technologies reached $2.5bn–$3bn a year in 2023.

China is developing autonomous drones (the Wing Loong and CH‑5 series) and unmanned undersea systems. Trials of AI fleets, including drone swarms, are under way in the South China Sea. AI is also being integrated into cyber operations, air defence, ISR and forecasting. 

Russia

Russia prioritises mass and cost, expanding cheap UAVs, AI for munitions, electronic‑warfare systems and autonomous navigation. Most spending is classified, but independent estimates point to $300m–$500m a year (2023). The overall rise in Russia’s military budget (+30% in 2025) suggests a growing share is going to AI integration.

EU and United Kingdom

Since 2021 the EU, via the European Defence Fund, has invested €1.5bn in AI, robotics and autonomy. The 2025 defence review names AI as a crucial building block of tomorrow’s forces. Priorities include target recognition, GPS‑denied navigation and interoperability of AI systems across NATO militaries. Autonomous systems, threat‑analysis modules, predictive analytics and counter‑drone measures are being tested. The UK is building defence‑AI hubs and works closely with the U.S. and Israel. Full AI integration is planned by 2030.

Israel

One of the few countries already employing AI in real‑time combat, including in missile defence, drones and ground systems. Estimated budget: $0.5bn–$1.5bn per year.

India

India is leaning on partnerships with private firms. In 2024 it announced $750m for defence AI through 2027. A Centre for AI and Robotics has been set up, and drone trials are under way. AI platforms for unmanned logistics and reconnaissance in the Himalayas are also being tested.

Ukraine

In 2022 Ukraine became the first country to deploy AI and autonomous technologies at scale in a full‑scale war. In 2024 the government announced a defence‑AI centre with international partners. Yet the state budget for these tasks remains constrained (variously put at $200m–$400m a year). 

Ukraine also collaborates extensively with start‑ups and volunteer projects on AI analytics, computer vision and tactical navigation. Despite limited funding, flexibility, speed of adoption and field adaptation make the Ukrainian experience unique—studied worldwide, including within NATO.

South Korea

South Korea is developing predictive threat analytics and military intelligence systems. SGR‑A1 AI turrets have already been installed along the North Korean border; they operate in semi‑automatic mode (with human confirmation). Annual spending on military AI is estimated at $500m–$600m, with plans to rise to $700m by 2025.

Taiwan

Taiwan is preparing for a possible defence against a Chinese invasion, building infrastructure for mass production of AI‑controlled drones. Investments in defence AI amount to $600m–$1bn a year. 

Core lines of defence AI

Unmanned aerial vehicles. AI is used for autonomous navigation, target acquisition, threat identification and swarming. It underpins reconnaissance, strike missions, EW and surveillance. Drones coordinate like bees. One craft can be shot down; a swarm is far harder to stop.

Autonomous control of combat platforms (land, sea, air). AI operates tanks, combat UAVs and crewless vessels. 

Military logistics and supply. Models forecast ammunition shortages, optimise supply routes and manage field repairs. Predictive maintenance, warehouse automation and routing speed up response and cut costs. 

Intelligence, surveillance and data analysis. AI processes satellite, radar, video and audio data, including open‑source information, and applies facial and behaviour recognition, behavioural analysis and threat prediction. Satellite imagery that once took analysts hours is now parsed in seconds.

Cybersecurity and cyberwarfare. Adaptive protection for air‑defence systems, UAVs and communications. AI is used both to detect and block cyberattacks and to conduct them. It can breach, defend and falsify data faster than humans. Cyberattacks no longer require an army of hackers—models can do the job.

Command and control. AI helps assess the operational picture, recommend actions and model scenarios. It supports decision‑makers rather than replacing them.

Fire‑control and guidance algorithms. Target identification and tracking, ballistic computation and targeting assistance. 

Simulation and training. Instruction in synthetic environments, generation of combat scenarios and forecasting adversary behaviour. Also used to test new strategies. 

Psychological operations and information warfare. AI generates deepfakes, analyses social networks and drives targeted disinformation—aimed at civilians and soldiers alike.

AI for soldier support. Includes “smart” infantry aids: AI‑enabled helmets, exoskeletons, HUD interfaces and individual combat companions.

Satellite operations. AI helps track objects in space and pilot craft autonomously. Example: the U.S. SDA system with AI elements.

Nuclear early‑warning and automatic‑response systems. Strategic systems with AI‑assisted analysis and strict limits on autonomy. Used only as assistants; full autonomy is avoided because of escalation risks.

Neural networks for diplomacy. Predicting states’ reactions to strikes, threats or cyberattacks—a new class of AI operating at the intersection of military logic and psychology.

In lieu of a conclusion

A new world war is already under way—waged by algorithms vying for dominance on land, at sea, in the air, in space, in cyberspace and over the human mind. The chief difference is less infantry and more technology. And increasingly, machines—not generals—are making the calls.

AI is not the future of war; it is its present. While China and the United States are leading, more countries are entering the fray. In 2023 and 2024, REAIM summits were held in The Hague and Seoul. Over 50 countries signed a declaration on responsible military AI, and the UN is calling for a ban on fully autonomous systems that can kill without human involvement.

The central question is whether the international community can write the rules before AI does.

Text: VGI666
