A panel of US federal judges has rejected Anthropic’s request to prevent the Department of Defence (now Department of War) from classifying it as a security threat. This is a setback for the AI startup, which is at loggerheads with the administration of US President Donald Trump over the application of AI in military operations.
Known for its Claude AI assistant, Anthropic claimed that Defence Secretary Pete Hegseth exceeded his authority by designating the company a national security “supply-chain risk” over its unwillingness to eliminate certain usage restrictions on its products—particularly for mass surveillance and the development of autonomous weapons. The classification prevents Anthropic from securing Pentagon contracts and could lead to a government-wide blacklist.
“In our view, the equitable balance here cuts in favour of the government. On one side is a relatively contained risk of financial harm to a single private company. On the other side is judicial management of how, and through whom, the Department of War secures vital AI technology during an active military conflict. For that reason, we deny Anthropic’s motion for a stay pending review on the merits,” the judges said in the court ruling.
Hegseth had earlier issued directives identifying Anthropic as a risk under two separate statutes. In response, Anthropic initiated two lawsuits, one in the District of Columbia court and another in the US District Court for the Northern District of California, to contest the classification under each law, alleging that the Pentagon improperly used the designation to penalise the company for ideological reasons.
The ruling also indicated that issuing a stay would compel the US military to continue working with an undesired vendor of essential AI services during a major ongoing military conflict.
According to the New York Times, this ruling is not final, and the legal proceedings are set to continue. However, it represents an early victory for the Trump administration, which has been at odds with Anthropic following a failed negotiation over a $200 million contract to use AI in classified systems.
The Defence Department and Anthropic had differing opinions on contract stipulations concerning the Pentagon’s use of the company’s AI.
A ruling from the District of Columbia court on Wednesday continues to prevent Anthropic from obtaining new contracts with the Pentagon. The Department of Defence has announced plans to remove Anthropic’s software from its systems within the next six months.
In its decision, the panel recognised that Anthropic would “likely experience some level of irreparable damage.”
However, last month, a judge in the California court ruled in favour of Anthropic, allowing the startup to proceed with its existing federal contracts with certain defence-related agencies for the time being.