Why a federal judge temporarily halted the Pentagon’s ban on Anthropic


Full story

A federal judge in San Francisco has temporarily halted two government actions against Anthropic: the Pentagon’s move to label the company a “supply chain risk” and President Donald Trump’s order directing federal agencies to stop using its technology.

U.S. District Judge Rita F. Lin said the measures appeared aimed at punishing the company and could cripple its business.

The ruling pauses a major Trump administration move against a U.S. artificial intelligence company while a broader legal fight continues. 

The conflict, which grew out of a contract fight that became public in February, escalated after Anthropic CEO Dario Amodei said Claude should not be used for autonomous weapons or to surveil Americans. That position put the company at odds with the Pentagon over how the military could use the tool.


What the judge said about the government’s actions

In her order, Lin wrote that the government’s measures appeared “designed to punish Anthropic,” not protect national security, according to NPR and The Associated Press. She said the designation was likely unlawful and called it “arbitrary and capricious.” 

Lin also said the law does not support branding an American company a potential adversary or saboteur simply because it disagreed with the government.

Anthropic argued the designation was retaliatory and could damage its business by costing the company customers and revenue. The Pentagon argued that Anthropic had become untrustworthy and that the military should decide on the lawful uses of the tools it buys.

What happens next

The order leaves the restrictions on hold while the broader lawsuit moves forward. Lin delayed her order for one week and said it does not require the Pentagon to use Anthropic’s products. 

A separate, narrower Anthropic case remains pending in a federal appeals court in Washington, according to the AP.


SAN provides
Unbiased. Straight Facts.

Don’t just take our word for it.


Certified balanced reporting

According to media bias experts at AllSides

AllSides Certified Balanced May 2025

Transparent and credible

Awarded a perfect reliability rating from NewsGuard

100/100

Welcome back to trustworthy journalism.

Find out more

Why this story matters

A federal judge blocked the government from labeling Anthropic a security risk and banning federal use of its AI tools, temporarily lifting restrictions that could have cut off the company's access to government contracts and customers.

Federal AI tool access restored

Agencies can resume using Anthropic's Claude AI while the lawsuit continues, reversing a ban that had prohibited government use of the technology.

Contract dispute over weapon use

The conflict began after Anthropic's CEO said Claude should not be used for autonomous weapons or domestic surveillance, contradicting Pentagon expectations for military applications.

Business impact from security label

The judge found the supply chain risk designation could damage Anthropic by driving away customers and revenue, since businesses may be reluctant to work with a company the government has branded a potential adversary.

