Secretary of Defense Pete Hegseth said he has designated the artificial intelligence company Anthropic as a supply chain risk, effectively blacklisting it from doing any business with the U.S. military.
Hegseth’s announcement comes shortly after President Donald Trump ordered all federal agencies to immediately stop using Anthropic’s artificial intelligence system after the company refused to hand over full control of the software to the military.
Hegseth said Anthropic was trying to “seize veto power over the operational decisions of the United States military.”
“Anthropic’s stance is fundamentally incompatible with American principles,” Hegseth wrote. “Their relationship with the United States Armed Forces and the Federal Government has therefore been permanently altered.”
In his post, Trump called the company a “RADICAL LEFT, WOKE COMPANY” and said that Anthropic’s “selfishness is putting AMERICAN LIVES at risk.”
“We don’t need it, we don’t want it, and will not do business with them again!” Trump wrote.
On Thursday evening, Anthropic CEO Dario Amodei said his company would not grant the Pentagon unfettered access to its artificial intelligence model, Claude. He said Anthropic prohibits its AI from being used for automated weapons or surveillance. After a meeting on Tuesday, Hegseth gave Anthropic until Friday afternoon to provide full access or face consequences.
Did Trump invoke the Defense Production Act?
When Hegseth threatened to cancel Anthropic’s $200 million Department of Defense contract, he also said he could designate the company a “supply chain risk.” That designation would essentially blacklist Anthropic, because anyone looking to do business with the Pentagon would have to cut ties with the company.
Hegseth has also considered invoking the Defense Production Act (DPA), according to Axios. The DPA gives the president the power to compel private companies to prioritize defense contracts, effectively forcing Anthropic to allow the military to use its AI.
However, in Trump’s post, he did not mention the DPA or blacklisting the company. He included a threat that implied the administration may take further action against the company if it didn’t begin complying.
“Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow,” Trump wrote.
The Defense Department also holds $200 million contracts with Google, OpenAI and xAI.
While Trump called to “immediately cease” using the tech, he later clarified a six-month phase-out period for agencies like the Pentagon to wind down operations.
Neither Amodei nor Anthropic immediately responded to Trump’s post.
In a statement posted on Thursday, Amodei called Hegseth’s threats “inherently contradictory,” saying, “One labels us a security risk; the other labels Claude as essential to national security.”
He added, “Regardless, these threats do not change our position: we cannot in good conscience accede to their request.”
At the time, Amodei said negotiations with the Pentagon would continue.
But even though Hegseth gave Anthropic until Friday afternoon to respond, on Wednesday, the Pentagon asked two major defense contractors — Boeing and Lockheed Martin — to provide an assessment of their reliance on Claude, Axios reported. It’s an apparent first step toward designating Anthropic a supply chain risk.
Anthropic wanted assurances
Amodei said Anthropic would not back down from its safeguard requirements.
The company wanted assurance that the Defense Department wouldn’t use Claude for fully autonomous weapons or mass domestic surveillance of Americans. The DOD, however, wanted to use the model for all lawful purposes without those restrictions, noting that spying on Americans is already illegal.
Hegseth had said that Anthropic needed to allow the Pentagon full access to its AI for all “lawful” purposes, including AI warfare and surveillance.
Anthropic’s concerns intensified after reports that the military used Claude during its mission to capture then-Venezuelan President Nicolás Maduro, which included bombing several sites in Caracas last month.
Under Anthropic’s usage policy, customers are not allowed to use its models to develop weapons, facilitate violence or conduct certain surveillance and tracking activities without consent. It also restricts battlefield management and predictive policing applications.
Defense officials told NPR this week that under the DPA, the military would keep using the company’s AI tools regardless of Anthropic’s objections.