Dario Amodei, CEO of Anthropic, said the company will not agree to US Department of War contract terms requiring AI providers to support “any lawful use” of their systems, including applications the firm says risk undermining democratic values.
In a statement released on February 26, Amodei said Anthropic “cannot in good conscience accede” to requests to remove safeguards that block two use cases: mass domestic surveillance and fully autonomous weapons.
“The Department of War has stated they will only contract with AI companies who accede to ‘any lawful use’ and remove safeguards,” Amodei wrote.
He said the department has threatened to remove Anthropic from its systems, designate it a “supply chain risk,” and invoke the Defense Production Act to force removal of the safeguards.
Anthropic said it has deployed its Claude models across the Department of War and other national security agencies for intelligence analysis, modelling and simulation, operational planning, and cyber operations.
The company said it was the first frontier AI firm to deploy models on US government classified networks, at National Laboratories, and to provide custom models for national security customers.
Amodei said Anthropic supports the use of AI for foreign intelligence and counterintelligence missions but opposes “mass domestic surveillance.” He argued that AI systems can assemble large volumes of commercially available data, including movement, browsing, and association records, into detailed profiles at scale.
“Using these systems for mass domestic surveillance is incompatible with democratic values,” he said. He added that existing laws allow government agencies to purchase certain data without warrants, and that AI amplifies the scope of such practices.
On autonomous weapons, Anthropic said partially autonomous systems are already in use in conflicts such as Ukraine, but that “today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.” The company said it will not provide products that remove humans entirely from target selection and engagement decisions.
“We will not knowingly provide a product that puts America’s warfighters and civilians at risk,” Amodei said, adding that Anthropic offered to work with the department on research and development to improve reliability, but the offer was not accepted.
Anthropic also said it has cut off access to Claude for firms linked to the Chinese Communist Party, including entities designated as Chinese Military Companies, and shut down attempts to use its models in cyberattacks. The company said it forfeited several hundred million dollars in revenue as part of those actions and has supported export controls on advanced AI chips.
Amodei said the department’s threats are “inherently contradictory”, noting that labelling Anthropic a supply chain risk conflicts with describing its technology as essential to national security.
“It is the department’s prerogative to select contractors most aligned with their vision,” he wrote. “Given the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider.”
Anthropic said that if it is removed from defence systems, it will support a transition to another provider to avoid disruption to military operations. The company said its models will remain available under the proposed terms “for as long as required.”
The dispute comes as US defence agencies expand the use of generative AI for mission planning, intelligence processing, and cyber defence, and as policymakers debate oversight of autonomous weapons and AI-driven surveillance.