The U.S. Army deployed Anthropic’s Claude in an operation to capture Venezuela’s president, Nicolás Maduro, the WSJ reports, citing sources.
The mission included bombing several targets in Caracas.
Such use runs counter to Anthropic’s published usage policy, which explicitly forbids employing AI for violence, weapons development or surveillance.
“We cannot comment on whether Claude or any other model was used in a specific operation, classified or otherwise. Any application of an LLM, whether in the private sector or in government, must comply with our policy governing how the system is deployed. We work closely with partners to ensure adherence,” an Anthropic representative said.
Claude’s deployment within the Defense Department was enabled by Anthropic’s partnership with Palantir Technologies. The latter’s software is widely used by the military and federal law-enforcement agencies.
After the raid, an Anthropic employee asked a Palantir colleague what role the model had played in the operation to capture Maduro, the WSJ writes. An Anthropic spokesperson said the company had not discussed the use of its models in specific missions “with any partners, including Palantir,” and that conversations were limited to technical matters.
“Anthropic is committed to using advanced AI in support of U.S. national security,” the representative added.
Anthropic versus the Pentagon?
Pentagon spokesman Sean Parnell said the department is reassessing its relationship with the AI lab.
“Our country needs partners willing to help warfighters win any war,” he said.
In July 2025, the U.S. Department of Defense signed contracts worth up to $200m with Anthropic, Google, OpenAI and xAI to develop AI solutions for national security. The department’s Chief Digital and AI Office planned to use the companies’ work to build agentic security systems.
But by January 2026 the WSJ reported that the deal with Anthropic was at risk of termination. The rift stemmed from the company’s strict usage policy, which prohibits employing Claude for mass surveillance and autonomous lethal operations, limiting its utility for agencies such as ICE and the FBI.
Officials’ frustration deepened with the integration of the Grok chatbot into the Pentagon’s network. Defense Secretary Pete Hegseth, commenting on the xAI partnership, stressed that the department “will not use models that do not allow it to wage war.”
Pressure on developers
Axios, citing sources, wrote that the Pentagon is pressuring four major AI companies to let the U.S. military use their technologies for “all lawful purposes.” That includes weapons development, intelligence collection and combat operations.
Anthropic refuses to lift restrictions on surveillance of U.S. citizens and on fully autonomous weaponry. Talks have stalled, yet quickly replacing Claude is difficult given the model’s technological edge in specialised government tasks.
Beyond Anthropic’s chatbot, the Pentagon uses OpenAI’s ChatGPT, Google’s Gemini and xAI’s Grok for unclassified work. All three agreed to relax the restrictions that apply to ordinary users.
Discussions now centre on moving LLMs into classified environments and using them “for all lawful purposes.” One of the three companies has already agreed to do so; the other two are “showing more flexibility” than Anthropic.
The militarisation of AI
The U.S. is not the only country rapidly embedding artificial intelligence in defence.
China
In June 2024, China unveiled an AI commander for large-scale military simulations involving all branches of the PLA. The virtual strategist wields broad authority, learns quickly and refines tactics during digital exercises.
In November 2024, media reported that Chinese researchers had adapted Meta’s Llama 2 13B model to create a tool called ChatBIT, optimised for gathering and analysing intelligence and for supporting operational decision-making.
India
New Delhi has also bet on AI as a driver of national security. The government has crafted national strategies and programmes, created dedicated institutions and bodies for deployment, and launched projects applying the technology across sectors.
United Kingdom
London has made artificial intelligence a priority. The Defence AI Strategy (2022) treats AI as a key component of the armed forces of the future, and the Strategic Defence Review (2025) describes the technology as a fundamental element of modern warfare.
Where AI was once viewed as an auxiliary tool, the British armed forces now plan a shift to “technology-integrated forces”, with AI systems used at every level—from staff analytics to the battlefield.
Back in the United States, the Pentagon announced in March 2025 the use of AI agents to simulate clashes with foreign adversaries.
