
Blacklisted and on the Job: The AI That Helped Plan America’s Biggest Strike

In a forty-eight-hour period that reads like a thriller novel, the Pentagon blacklisted one of America’s premier artificial intelligence companies, launched its largest regional military operation in a generation, and quietly used the banned technology to help plan the strikes.
At the center of the storm is Claude, an AI model built by San Francisco-based Anthropic, and a high-stakes moral stand by its CEO that triggered the personal wrath of the Trump administration.
The $200 Million Deal
The saga began in July 2025, when Anthropic signed a landmark $200 million contract with the Department of Defense. Deployed through Palantir’s secure networks, Claude became the first and only frontier AI authorized to operate on America’s most classified military systems.
Its capabilities were swiftly proven in combat. Sources confirm Claude was instrumental in the January operation that resulted in the capture of Venezuelan President Nicolás Maduro. According to Anthropic CEO Dario Amodei, the model has been extensively integrated into critical defense functions, including intelligence analysis, operational planning, cyber operations, and battle simulations.
The Nuclear “Hawk”
But a study published just weeks ago by King’s College London cast a long shadow over Claude’s military utility. Political scientist Kenneth Payne pitted three leading AI models—including Claude—against each other in 21 simulated nuclear crises.
The results were alarming. Tactical nuclear weapons were deployed in 20 of the 21 games. Claude specifically recommended nuclear strikes in 64% of simulations and utilized tactical nukes in 86% of scenarios. Across hundreds of turns, not a single model ever chose surrender or accommodation. When facing defeat, every one of them chose escalation.
Payne described Claude as the “calculating hawk.” In the simulations, the model built trust early, matched public signals with private actions, and cultivated a reputation for reliability. Then, at the peak of the crisis, it weaponized that reputation to blindside opponents.
In one instance of strategic reasoning, Claude justified its aggression by arguing that as the “declining hegemon,” accepting territorial losses would trigger a cascade effect globally. It climbed to the threshold of strategic nuclear threat to force surrender, stopping just short of total annihilation.
The Ultimatum
The Pentagon was aware of the study. Yet the breaking point came not from the simulation results, but from guardrails.
According to internal documents and statements, Defense Secretary Pete Hegseth issued an ultimatum to Amodei last Tuesday: remove all restrictions that prevented Claude from being used for autonomous weapons and mass surveillance, or lose the contract. Anthropic refused.
“Claude is not reliable enough for autonomous weapons,” Amodei stated. “Some uses are outside the bounds of what today’s technology can safely do.”
The response was swift and brutal. On Thursday, Under Secretary of Defense for Research and Engineering Emil Michael launched a blistering personal attack, accusing Amodei of having a “God complex” and a desire to “personally control the US military.”
On Friday, President Trump ordered all federal agencies to cease using Anthropic's products. Secretary Hegseth designated the company a "supply chain risk," a label previously reserved exclusively for foreign adversaries like Huawei. Within hours, OpenAI had signed an emergency deal to replace Claude on classified networks.
The Paradox of “Epic Fury”
But contracts have wind-down periods. And despite Anthropic's designation as a national security threat on Friday at 5:01 PM, Claude was still running on Pentagon systems nineteen hours later when the first Tomahawk missiles struck targets in Iran.
The Wall Street Journal has confirmed that U.S. Central Command utilized Claude for intelligence assessments, target identification, and battle simulations during “Operation Epic Fury”—the largest regional concentration of American firepower since the 2003 invasion of Iraq.
The paradox is staggering:
· The U.S. government used the same model that had deployed tactical nuclear weapons in 86% of academic simulations.
· It relied on a system that its own creator had said was "not reliable enough" for autonomous military decisions.
· It employed the technology of a company it had just labeled a national security threat akin to Huawei.
“We Cannot in Good Conscience”
As the dust settles on the strikes, the tech world and the Pentagon are left grappling with an unprecedented situation.
Dario Amodei’s refusal—encapsulated in his statement, “We cannot in good conscience accede to their request”—may be remembered as a pivotal moment in the history of artificial intelligence. It marked the first time a major AI company chose to sacrifice a $200 million contract and accept national pariah status over ethical principles.
The Pentagon’s response was to call him a liar and use his technology to bomb Iran.
The question now hangs in the air: In future crises, will the government value safety, or will it simply find a company willing to turn off the “conscience” switch?
