SpaceX and xAI to Compete in Pentagon’s Autonomous Drone Software Contest

Elon Musk’s space company SpaceX and his AI startup xAI are taking part in a secret new Pentagon competition to develop autonomous swarms of voice-controlled drones, Bloomberg reported, citing sources.

The entry of the entrepreneur’s two startups into the field of AI-based weapons development marks a new and potentially controversial turn for the billionaire, the agency notes.

SpaceX has long been a defense contractor, and Musk actively supports AI development, although he has opposed creating “new tools for killing people.”

In 2015, the billionaire signed an open letter warning of the dangers of autonomous weapons. He called for a ban on weapons that can independently select targets and operate without meaningful human control.

Project Overview

A limited number of companies have been selected to take part in the six-month tender, which carries a $100 million fund. The goal is to create advanced “swarm technology capable of translating voice commands into digital instructions and controlling multiple drones.”

The ability to coordinate the actions of multiple drones simultaneously already exists. However, developing software for piloting several dozen devices at once remains a complex task.

The project will progress through five stages, starting with software development and ending with real-world testing. According to Bloomberg, the UAVs will be used for offensive purposes.

The startup xAI has begun actively hiring engineers in Washington and on the US West Coast who hold active security clearances required for work on federal contracts.

In January, the Grok chatbot was integrated into the Pentagon’s network. US Secretary of Defense Pete Hegseth said he would provide the model with “all necessary data” from military IT systems, including information from the intelligence department.

Previously, the US Department of Defense allocated $200 million to Anthropic, Google, OpenAI, and xAI to develop AI solutions for the security sector.

The department’s Chief Digital and AI Office stated that the funds would help accelerate the integration of advanced neural network capabilities into national security tools.

OpenAI Also Involved

SpaceX has long been a defense contractor, but it has primarily focused on reusable rockets and on satellites for space exploration, military communications, and intelligence. Software for offensive weapons has not been part of its portfolio.

SpaceX is not the only participant in the project. OpenAI is backing a bid by Applied Intuition, though its role will be limited to the “mission control center” component, which will translate commanders’ voice commands into digital instructions.

Technologies from Sam Altman’s company will not be used for drone control, weapons integration, or target selection.

Pentagon Advances

The Pentagon is expanding its use of artificial intelligence on the battlefield. In January, it published a new AI strategy that envisions using AI agents in everything from operational planning to targeting, potentially including lethal strikes.

In February, media reported that Claude from Anthropic had been used in an operation to capture Venezuelan President Nicolás Maduro. However, the department’s contract with the AI startup is under review: the parties disagree over Anthropic’s strict ethics policy, whose rules prohibit using Claude for mass surveillance and autonomous lethal operations.

“Our country needs partners ready to help fighters win any war,” said Pentagon spokesman Sean Parnell.

The US military is pressuring four major AI companies to allow their technologies to be used for “all lawful purposes,” including weapons development, intelligence gathering, and combat operations.

Anthropic refuses to lift restrictions on surveillance of US citizens and the creation of fully autonomous weapons. Negotiations have stalled, but quickly replacing Claude is challenging due to the model’s technological superiority in specific government tasks.

Besides Anthropic’s chatbot, the Pentagon uses ChatGPT from OpenAI, Gemini from Google, and Grok from xAI for non-classified tasks. All three have agreed to relax restrictions that apply to regular users.

Discussions are underway about moving these LLMs into a classified environment and using them “for all lawful purposes.” One of the three companies has already agreed, while the other two are “showing greater flexibility” than Anthropic.

Axios reported that Hegseth is “close” to severing business ties with Anthropic and declaring the company a “supply chain risk,” meaning any firm wishing to work with the US military would have to end its collaboration with Claude’s creator.

“It will be damn hard to untangle. We’ll make sure they pay the price for forcing us to take such a step,” said a senior Pentagon official.

Recognizing Anthropic as a supply chain risk would require Pentagon contractors to confirm that they do not use Claude in their processes, which is likely to affect many companies. Dario Amodei’s startup has previously said that eight of the 10 largest US firms use its chatbot.

Back in March 2025, the Pentagon announced the use of AI agents for simulating confrontations with foreign adversaries.
