Anthropic says military blacklisting was retaliatory in lawsuit filings


Full story

One of the country’s most advanced artificial intelligence companies is suing the federal government, saying its decision to blacklist the organization was rooted in politics, not policy.

Artificial intelligence company Anthropic filed two lawsuits against the U.S. government on Monday after Defense Secretary Pete Hegseth effectively blacklisted the company from working with the federal government. Anthropic alleges that the Trump administration illegally retaliated against the company for its AI safety rules.


On Feb. 27, the Pentagon labeled Anthropic a supply chain risk, effectively blacklisting the company. Hegseth said the designation came after CEO Dario Amodei refused to remove safeguards, which the Pentagon characterized as a private company dictating how the government uses data. Hegseth called the company’s refusal an attempt to “seize veto power over the operational decisions of the United States military.”

Anthropic’s lawsuits allege that the blacklisting was a form of punishment. The filings state that the government “retaliated against a leading frontier AI developer for adhering to its protected viewpoint,” which was “AI safety and the limitations of its own AI model.” The company said the Trump administration’s actions violate the Constitution.

The suits also allege that the administration exceeded the scope of the Defense Production Act, and they ask a federal judge to block Pentagon officials from enforcing the blacklisting.

Why are Anthropic and the Pentagon fighting?

The lawsuits are the latest escalation in the dispute between the U.S. military and Anthropic, which began after The Wall Street Journal reported that military officers used Claude, Anthropic’s chatbot, to target former Venezuelan leader Nicolás Maduro during the U.S. raid that captured him.

Days later, Hegseth announced he had given Amodei a deadline to agree to the government’s unrestricted use of the company’s chatbot or face consequences. Amodei released a statement publicly defending his company and its commitment not to allow its AI to be used for domestic surveillance or fully autonomous weapons.

The statement didn’t sway the administration, as President Donald Trump posted on social media that Anthropic was a “RADICAL LEFT, WOKE COMPANY” and said its “selfishness is putting AMERICAN LIVES at risk.”

Hegseth followed Trump’s post with his own, saying Anthropic’s guardrails go against American values.

“Anthropic’s stance is fundamentally incompatible with American principles,” Hegseth wrote. “Their relationship with the United States Armed Forces and the Federal Government has therefore been permanently altered.”

But Anthropic’s lawyers say the company didn’t create Claude for surveillance or weapons, and that permitting such uses would violate its principles.

“Allowing Claude to be used to enable the Department to surveil U.S. persons at scale and to field weapons systems that may kill without human oversight would therefore be inconsistent with Anthropic’s founding purpose and public commitments,” according to the suit.

The Department of Defense has pushed back on Anthropic’s claims that the Pentagon planned to use the chatbot for weapons or surveillance. Instead, officials said the dispute was about whether a private company should dictate how the government uses data, and they maintained that all of the department’s orders would be lawful.

What’s next for Anthropic?

Since Hegseth’s announcement, Anthropic has lost a military contract worth $200 million. Amodei, however, has said that loss is small compared with the more than $14 billion in investment the company has already raised.

The military recently permitted OpenAI’s ChatGPT and xAI’s Grok to operate on classified systems. But that didn’t resolve every problem: several major defense companies had built Anthropic’s technology into their work and are now searching for an AI as capable as Claude.

Companies like Palantir, Lockheed Martin and J2 Ventures Portfolio have largely had to switch to ChatGPT, though many experts say Claude still outperforms the alternatives.


SAN provides
Unbiased. Straight Facts.

Don’t just take our word for it.


Certified balanced reporting

According to media bias experts at AllSides

AllSides Certified Balanced May 2025

Transparent and credible

Awarded a perfect reliability rating from NewsGuard

100/100

Welcome back to trustworthy journalism.

Find out more

Why this story matters

The Pentagon's blacklisting of Anthropic blocks federal contracts with the AI company and forces defense contractors currently using its chatbot to find alternatives, affecting which AI tools shape military operations and government work.

Lost federal contract revenue

Anthropic forfeited a $200 million military contract after refusing to remove restrictions preventing the use of its AI for autonomous weapons and domestic surveillance.

Defense industry AI tool changes

Major defense contractors, including Palantir and Lockheed Martin, must replace Anthropic's Claude chatbot, which experts describe as outperforming most alternatives they now use.

Government AI procurement limits

The Pentagon designated Anthropic a supply chain risk, preventing federal agencies from contracting with the company while permitting OpenAI and xAI access to classified systems.
