Anthropic has relaxed its key safety policy, citing the need to remain competitive. Simultaneously, the Pentagon is pressuring the company to allow the use of AI for surveillance of U.S. citizens and the creation of autonomous weapons.
Previously, the firm positioned itself as one of the most safety-focused AI labs in the market. However, the leadership has now abandoned a key commitment in this area.
In 2023, the company introduced the Responsible Scaling Policy (RSP)—a set of voluntary rules aimed at reducing catastrophic risks when working with neural networks. Initially, the document stipulated that the developer would halt AI development if it deemed it potentially dangerous.
“We decided that we wouldn’t help anyone by simply stopping the training of AI models. Given the rapid progress in the industry, it seemed unreasonable to take on unilateral commitments while competitors forge ahead,” said Anthropic’s Chief Scientist Jared Kaplan.
The third version of the RSP states that Anthropic will continue developing its AI as long as it does not hold a significant lead over its competitors.
“The political environment has shifted towards prioritizing competitiveness and economic growth, while discussions on AI safety have not received substantial support at the federal level,” the startup’s blog states.
The new version of the policy provides for greater transparency, including detailed publication of model-testing results. The company also pledges not to fall behind competitors in its ability to control its systems. Anthropic is prepared to slow development only if it becomes the undisputed leader in the race and simultaneously sees a significant risk of catastrophe.
When Anthropic introduced the RSP in 2023, it hoped competitors would follow its lead. However, no one made such an unequivocal promise to halt AI development.
Anthropic Under Pentagon Pressure
Disagreements have arisen between Anthropic and the U.S. Department of Defense over military plans to use AI for domestic surveillance and autonomous weapons development. The startup opposes this approach, but Pentagon representatives have stated their intention to use LLMs "for all lawful scenarios" without restriction and raised the possibility of terminating the contract.
The startup’s CEO, Dario Amodei, met with Defense Secretary Pete Hegseth to discuss the situation. The department issued an ultimatum: the startup must accept government terms by February 27.
If Anthropic refuses, the authorities may designate the company a threat to supply chains, jeopardizing its business with other U.S. government contractors. An alternative is invoking the Defense Production Act, which would allow the Pentagon to compel the use of the startup's technologies.
“This scenario is unprecedented and will almost certainly lead to a wave of litigation if the administration takes adverse actions against Anthropic,” said Franklin Turner, a government contracts attorney at McCarter & English.
In response, representatives of the AI company emphasized that the parties "continue to engage in good-faith dialogue."
Anthropic Stands Firm
Reuters learned that Anthropic does not intend to relax its restrictions on the use of technology for military purposes.
The startup’s stance was supported by Ethereum co-founder Vitalik Buterin.
"It will significantly increase my opinion of Anthropic if they do not back down, and honorably eat the consequences," he wrote on X on February 24. Buterin noted that the company has so far maintained two red lines: no fully autonomous weapons and no mass surveillance of Americans.
In February, it was revealed that the AI model Claude was used in an operation to capture Venezuelan President Nicolás Maduro. According to the agency, at a meeting with Hegseth, Anthropic’s CEO did not express concern over the use of the company’s products in this initiative.
Such a serious threat from the Pentagon indicates a reluctance to abandon Claude in the defense sector, writes The Economist. Journalists note that the startup’s technologies could prove indispensable for military tasks.
SpaceX and its AI subsidiary xAI are participating in a secret Pentagon tender to develop autonomous drone swarms with voice control.
