How Anthropic could lose more than $200 million if it refuses Pentagon demands



Summary

Pentagon ultimatum

Defense Secretary Pete Hegseth gave Anthropic until 5 p.m. Friday to grant the Pentagon full, unfettered access to its AI model and threatened to invoke the Defense Production Act if the company does not comply.

Autonomous weapons concerns

Anthropic CEO Dario Amodei has raised ethical concerns about unregulated government use of AI, particularly the dangers of fully autonomous drones armed with deadly weapons.

Surveillance fears

Amodei has expressed concerns that advanced AI, given enough data, could end private life in the country, saying it could "make a mockery of the Fourth Amendment."


Full story

The clock is ticking for Anthropic, one of the world’s largest artificial intelligence companies, after the Department of Defense threatened to blacklist the company from working with the military. 

Defense Secretary Pete Hegseth said the company has until 5 p.m. on Friday to grant the Pentagon full, unfettered access to its AI model. If it does not, Hegseth said, the department would invoke the Defense Production Act, allowing the military to use the model anyway, and would label Anthropic a supply chain risk, according to The New York Times. The move could put the company’s military contracts, worth hundreds of millions of dollars, at risk.


The disagreement comes after Anthropic asked for assurances that the Pentagon wouldn’t use the company’s AI to spy on Americans or for autonomous weapons. However, the Trump administration is demanding to use the technology without restrictions, Al Jazeera reports.

Hegseth has envisioned a military AI system that operates “without ideological constraints” that may limit lawful military operations. He said he would not allow the military’s AI to be “woke.”

What is the Pentagon asking for?

Hegseth said that Anthropic needs to allow the Pentagon full access to its AI for all “lawful” purposes, including AI warfare and surveillance. Defense officials told NPR that the military would keep using the company’s AI tools regardless of how the company felt about it.

The Defense Production Act has wide-ranging implications but is typically used in manufacturing contexts, The New York Times reports. The atypical move would force Anthropic to make its product available for free. 

The Pentagon previously awarded Anthropic a military contract of up to $200 million in 2025. The company was the first cleared for classified use, beating Google’s Gemini and OpenAI’s ChatGPT. Military officials said Anthropic’s AI was the most advanced and secure model for sensitive applications. 

Anthropic CEO Dario Amodei has previously raised ethical concerns about unregulated government use of AI, especially the dangers of fully autonomous drones armed with deadly weapons. 

“A powerful AI looking across billions of conversations from millions of people could gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow,” Amodei wrote in January.

On Tuesday, Amodei appeared on The Times’ “Interesting Times” podcast. He raised concerns about AI “drone swarms,” which could attack people with no human input. 

“The constitutional protections in our military structures depend on the idea that there are humans who would disobey illegal orders with fully autonomous weapons,” Amodei said.

The military has pushed back on Anthropic’s concerns, saying it requires tools without built-in limitations. Pentagon officials told Al Jazeera that the military has issued only lawful orders and is legally responsible for the tools it uses.

Does the government use AI to spy on Americans?

Besides autonomous weapons, Anthropic and other AI companies fear the government could use their products to spy on Americans. The company’s biggest surveillance concern is that an advanced enough AI, given enough data, could end private life in the country.

Amodei said this could “make a mockery of the Fourth Amendment.” The Fourth Amendment protects Americans against unreasonable, warrantless searches and seizures.

Currently, there are no federal laws or regulations targeting AI mass surveillance. That worries Anthropic, since it doesn’t want its product to become the infrastructure of a potential American surveillance state. 

Earlier this month, Mrinank Sharma, an Anthropic AI safety researcher, left the company over concerns about AI use. In a statement following his resignation, Sharma said that something must be done now, since the crisis is already underway. 

“The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment,” Sharma wrote. “Moreover, throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions. I’ve seen this within myself, within the organization, where we constantly face pressures to set aside what matters most, and throughout broader society too.”

Palantir, a data analytics company that works with government agencies and the military, is developing an AI-based program that can track and pinpoint potential deportation targets, according to the Electronic Frontier Foundation. The company calls the tool Enhanced Leads Identification and Targeting for Enforcement, or ELITE.

Straight Arrow News has previously reported that Immigration and Customs Enforcement has potentially used surveillance tools to track protesters.

How do Anthropic’s guidelines differ from other AI companies?

Anthropic is the last major AI company to refuse to grant the military unrestricted access. 

OpenAI, the largest AI company, quietly deleted references to military and warfare from its list of prohibited uses in early 2024. When asked why, company officials said there were “national security use cases that align” with the company’s mission. 

Google followed OpenAI’s lead about a year later. The reversal was a major shift for the company, which had explicitly pledged not to use AI for weapons or surveillance. That pledge followed employees’ concerns in 2018 about Project Maven, which Google eventually left. Project Maven, which is ongoing, is a military effort to accelerate the adoption of AI across military intelligence workflows.

Unlike Google and OpenAI, Elon Musk’s xAI never had safety policies restricting military or surveillance use. The company recently reached a deal with the military accepting the Pentagon’s “all lawful use” standard. xAI was the second AI system approved for classified military networks.


SAN provides
Unbiased. Straight Facts.

Don’t just take our word for it.


Certified balanced reporting

According to media bias experts at AllSides

AllSides Certified Balanced May 2025

Transparent and credible

Awarded a perfect reliability rating from NewsGuard

100/100

Welcome back to trustworthy journalism.

Find out more

Why this story matters

The Pentagon has threatened to force Anthropic to provide unrestricted access to its AI technology for military use, which could set a precedent for how the government compels private companies to hand over technology without usage limits.

No federal limits on AI surveillance

There are currently no federal laws or regulations restricting AI mass surveillance, meaning the government faces no legal barriers to using AI systems to monitor Americans' communications and activities.

Military already uses AI tracking tools

Immigration and Customs Enforcement has used surveillance tools to track protesters, and Palantir is developing AI programs to identify deportation targets, according to the Electronic Frontier Foundation.

Tech companies dropping military restrictions

OpenAI and Google have removed prohibitions on military and warfare applications from their usage policies, leaving Anthropic as the last major AI company refusing unrestricted Pentagon access.

Get the big picture

Synthesized coverage insights across 139 media outlets

Context corner

The Defense Production Act is a Cold War-era law from 1950 that grants the president authority to compel companies to prioritize national defense production. It was recently invoked during the COVID-19 pandemic to increase production of medical equipment.

History lesson

The Pentagon-Anthropic debate echoes an earlier controversy over Project Maven, a Pentagon drone surveillance program. While some tech workers quit and Google dropped out of that project, the Pentagon's reliance on drone surveillance has only increased since then.

Policy impact

If the Pentagon invokes the Defense Production Act or designates Anthropic a supply chain risk, it could force other defense contractors to certify they don't use Claude in their workflows, potentially disrupting Anthropic's business with companies that do business with the U.S. government.


Bias comparison

  • Media outlets on the left frame the Pentagon's demand for unrestricted AI use as a "threat" to make Anthropic a "pariah" over "woke AI" concerns, emphasizing ethical "safeguards" and even linking it to "fascist Grok AI."
  • Media outlets in the center neutrally convey the "ultimatum" and "deadline," also noting the contract's value.
  • Media outlets on the right portray a firm "ultimatum" and "deadline," highlighting the potential to "lose $200M deal" and viewing "restrictions" as impediments to military authority.

Media landscape


Key points from the Left

  • Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei a Friday deadline to provide unrestricted military access to their AI technology or risk losing their Pentagon contract, with potential actions including a supply chain risk designation or invoking the Defense Production Act.
  • Anthropic was the first AI company approved for classified U.S. military networks and provides the AI chatbot Claude while resisting its use for fully autonomous weapons or domestic surveillance.
  • The Pentagon is concerned about losing access to Claude due to its advanced capabilities and is encouraging other AI companies to expand into classified military applications under less restrictive terms.
  • The dispute illustrates broader concerns about the ethical use of AI in military contexts and the need for stronger oversight amid rapid AI integration and civil liberties risks.


Key points from the Center

  • At a Tuesday Pentagon meeting, Defense Secretary Pete Hegseth told Anthropic CEO Dario Amodei the contract will be terminated by Friday unless safeguards are loosened.
  • At issue are the guardrails Anthropic placed on its Claude model, banning fully autonomous weapons and mass domestic surveillance, while Pentagon concerns escalated this month over the Venezuela military raid.
  • Pentagon officials warned they could invoke the Defense Production Act or designate Anthropic a supply‑chain risk, and the government gave Anthropic until Friday at 5 p.m. to respond.
  • Canceling or blacklisting Anthropic could severely damage its ability to work with government partners and enterprise customers, while designation as a supply‑chain risk could force company executives to allow unrestricted Pentagon use.
  • Other firms such as Google, OpenAI and xAI have agreed to Pentagon terms and are moving onto classified networks, while Anthropic's $20 million donation adds a political element, highlighting gaps in law and oversight.


Key points from the Right

  • The Pentagon has given AI company Anthropic until Friday to remove usage restrictions on its Claude AI system for lawful military purposes, threatening to cancel its $200 million contract or impose other penalties if it refuses.
  • Defense Secretary Pete Hegseth warned Anthropic CEO Dario Amodei that failure to allow unrestricted lawful military use could lead to contract termination, supply chain risk designation, or invocation of the Defense Production Act.
  • Anthropic refuses to allow its AI to be used for fully autonomous weapons or domestic mass surveillance but claims these restrictions do not interfere with lawful military operations.
  • The dispute highlights tensions over control of AI usage in the military, with the Pentagon insisting lawful military authority, not private company policy, should govern how AI tools are deployed.



Powered by Ground News™
