
Anthropic Refuses Pentagon Demand to Lift AI Safeguards, Risking Ban
WASHINGTON, D.C. – In an escalating standoff over the ethical boundaries of artificial intelligence in warfare, AI company Anthropic has publicly refused demands from the Department of War to remove safeguards on its models, even at the risk of being barred from government contracts.
In a statement released Thursday, Anthropic CEO Dario Amodei revealed that the Department has threatened to designate the company a “supply chain risk”—a label typically reserved for foreign adversaries—and potentially invoke the Defense Production Act to force compliance. The conflict centers on two specific use cases Anthropic has excluded from its contracts: mass domestic surveillance and fully autonomous weapons.
While Amodei emphasized his company’s deep commitment to national security—noting that Anthropic was the first frontier AI firm to deploy models in classified networks and at National Laboratories—he drew a firm line at these two use cases.
“We cannot in good conscience accede to their request,” Amodei wrote.
The company currently provides its Claude models to the Department for intelligence analysis, operational planning, and cyber operations. However, Anthropic argues that using AI for mass domestic surveillance “is incompatible with democratic values,” warning that powerful AI tools can now assemble detailed profiles of citizens from publicly available data at an unprecedented scale.
Regarding autonomous weapons, Amodei stated that current frontier AI systems are “simply not reliable enough” to safely remove humans from the loop when selecting and engaging targets. He noted that Anthropic had offered to collaborate with the Pentagon on research to improve reliability, but that the offer was not accepted.
According to the statement, the Department of War has informed Anthropic that it will only contract with AI companies willing to accede to “any lawful use” and strip away safeguards. The government has allegedly threatened to remove Anthropic from its systems if it maintains the current restrictions.
Amodei framed the government’s dual threats as contradictory, arguing that one labels the company a security risk while the other deems its technology essential. Despite the pressure, he expressed hope that the Department would reconsider, citing the “substantial value” Claude provides to the armed forces. He pledged to ensure a smooth transition to another provider to avoid disrupting military operations if the company is offboarded.
The dispute marks one of the most public clashes between a major AI developer and the U.S. military over the ethical deployment of rapidly advancing technology.