Hegseth blacklists Anthropic from military business after Pentagon dispute


Full story

Secretary of Defense Pete Hegseth said he has designated the artificial intelligence company Anthropic as a supply chain risk, effectively blacklisting it from doing any business with the U.S. military. 

Hegseth’s announcement comes shortly after President Donald Trump ordered all federal agencies to immediately stop using Anthropic’s artificial intelligence system after the company refused to hand over full control of the software to the military. 


Hegseth said Anthropic was trying to “seize veto power over the operational decisions of the United States military.” 

“Anthropic’s stance is fundamentally incompatible with American principles,” Hegseth wrote. “Their relationship with the United States Armed Forces and the Federal Government has therefore been permanently altered.”

In the post announcing his order, Trump called the company a “RADICAL LEFT, WOKE COMPANY” and said that Anthropic’s “selfishness is putting AMERICAN LIVES at risk.”

“We don’t need it, we don’t want it, and will not do business with them again!” Trump wrote.

On Thursday evening, Anthropic CEO Dario Amodei said his company would not grant the Pentagon unfettered access to its artificial intelligence model, Claude. He said Anthropic prohibits its AI from being used for automated weapons or surveillance. After a meeting on Tuesday, Hegseth gave Anthropic until Friday afternoon to grant full access or face consequences.

Did Trump invoke the Defense Production Act?

When Hegseth threatened to cancel Anthropic’s $200 million Department of Defense contract, he also said he could designate the company a “supply chain risk.” That designation would essentially blacklist Anthropic, because anyone looking to do business with the Pentagon would have to cut ties with the company.

Hegseth has also considered invoking the Defense Production Act (DPA), according to Axios. The DPA gives the president the power to compel private companies to prioritize defense contracts, effectively forcing Anthropic to allow the military to use its AI.

Trump’s post, however, mentioned neither the DPA nor blacklisting the company. It did include a threat implying the administration may take further action if Anthropic didn’t begin complying.

“Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow,” Trump wrote. 

The DoD also has $200M contracts with Google, OpenAI and xAI.

While Trump called for agencies to “immediately cease” using the tech, he later clarified there would be a six-month phase-out period for agencies like the Pentagon to wind down operations.

Neither Amodei nor Anthropic immediately responded to Trump’s post. 

In a statement posted on Thursday, Amodei called Hegseth’s threats “inherently contradictory,” saying, “One labels us a security risk; the other labels Claude as essential to national security.”

He added, “Regardless, these threats do not change our position: we cannot in good conscience accede to their request.”

At the time, Amodei said negotiations with the Pentagon would continue.

But even though Hegseth gave Anthropic until Friday afternoon to respond, on Wednesday, the Pentagon asked two major defense contractors — Boeing and Lockheed Martin — to provide an assessment of their reliance on Claude, Axios reported. It’s an apparent first step toward designating Anthropic a supply chain risk.

Anthropic wanted assurances

Amodei said Anthropic would not back down from its safeguard requirements.

The company wanted assurance the Defense Department wouldn’t use Claude for fully autonomous weapons or the mass domestic surveillance of Americans. The Pentagon, however, wanted to use Claude for all lawful purposes without those limitations, and pointed out that spying on Americans is already illegal.

Hegseth had said that Anthropic needed to allow the Pentagon full access to its AI for all “lawful” purposes, including AI warfare and surveillance.


Anthropic’s concerns intensified after reports that the military used Claude during its mission to capture then-Venezuelan President Nicolás Maduro, a mission that included bombing several sites in Caracas last month.

Under Anthropic’s usage policy, customers are not allowed to use its models to develop weapons, facilitate violence or conduct certain surveillance and tracking activities without consent. It also restricts battlefield management and predictive policing applications.

Defense officials told NPR this week that under the DPA, the military would keep using the company’s AI tools regardless of Anthropic’s objections.


SAN provides
Unbiased. Straight Facts.

Don’t just take our word for it.


Certified balanced reporting

According to media bias experts at AllSides

AllSides Certified Balanced May 2025

Transparent and credible

Awarded a perfect reliability rating from NewsGuard

100/100

Welcome back to trustworthy journalism.

Find out more

Why this story matters

The Pentagon is pressuring a major AI company to remove restrictions on military use of its technology, with potential consequences for defense contractors and companies working with the government.

Contract requirements may shift

Defense contractors like Boeing and Lockheed Martin are being asked to assess their use of Claude AI as the Pentagon considers blacklisting Anthropic from the defense supply chain.

AI usage terms under pressure

The Pentagon wants unrestricted access to Claude for lawful military purposes including warfare and surveillance, overriding the company's current ban on autonomous weapons and mass domestic surveillance.

Government may compel private compliance

The Defense Production Act could force Anthropic to prioritize military contracts and remove usage restrictions, regardless of the company's consent.

Get the big picture

Behind the numbers

Anthropic has a $200 million contract with the Department of Defense. The company says it has forfeited several hundred million dollars in revenue by cutting off access to firms linked to the Chinese Communist Party.

Context corner

Anthropic was founded in 2021 by former OpenAI employees who left over disagreements about prioritizing safety versus commercialization. The company has positioned itself as focused on responsible AI development, publishing a "Constitution" framework for ethical AI use.

Solution spotlight

Anthropic offered to collaborate with the Department of Defense on research and development to improve AI reliability for potential future use in autonomous weapons systems, though this offer was not accepted.


Bias comparison

  • Media outlets on the left frame the Pentagon's "demands" as aggressive, emphasizing Anthropic's "standing firm" against "autonomous killer drones" and "spying on Americans," using terms like "blacklist threat" and portraying the situation as one of the "scariest moments in modern history."
  • Media outlets in the center include the Pentagon's assertion of using AI in "legal ways" and mention potential Cold War-era legislation.
  • Media outlets on the right focus on AI's importance for "modern warfare," suggesting Anthropic is "poking for exceptions" while highlighting the "contract risk."

Media landscape


215 total sources

Key points from the Left

  • Anthropic CEO Dario Amodei said the company cannot in good conscience agree to the Pentagon's demands to remove safeguards on its AI technology, Claude, which would allow its use for mass surveillance or autonomous weapons.
  • The Pentagon insists it wants to use Anthropic's AI for all lawful purposes and denies intentions to use it for illegal mass surveillance or fully autonomous weapons without human control.
  • The Pentagon gave Anthropic a Friday deadline to agree to its terms or face contract termination, potential supply chain risk designation, and invocation of the Defense Production Act for broader authority.
  • Senators Thom Tillis and Mark Warner criticized the Pentagon's handling of the dispute and called for stronger AI governance mechanisms and more respectful negotiations.


Key points from the Center

  • Anthropic refused the Pentagon's request to remove AI safeguards that prevent autonomous weapon targeting and surveillance in the US, risking a $200 million contract.
  • The Pentagon threatened to deem Anthropic a 'supply chain risk' and invoke the Defense Production Act to force removal of the safeguards if Anthropic did not comply by the deadline.
  • Anthropic CEO Dario Amodei stated they cannot 'in good conscience' remove the safeguards despite the Pentagon's threats, willing to transition to another provider if necessary.


Key points from the Right

  • Anthropic refuses the US Department of Defense's demand to remove safeguards on its AI system Claude, which prevent its use in fully autonomous weapons or mass domestic surveillance, despite threats of losing government contracts.
  • Anthropic's CEO, Dario Amodei, stated the company cannot in good conscience agree to unrestricted military use of Claude due to concerns about reliability and risks to civil liberties.
  • The Pentagon insists it wants to use Claude for all lawful purposes and warns it may label Anthropic a supply chain risk or invoke the Defense Production Act.
  • US senators criticized the Pentagon's public dispute with Anthropic, calling for private negotiation and stronger AI governance laws in national security contexts.



Powered by Ground News™
