Pentagon used Anthropic AI in Maduro raid as contract faces review: Report


Summary

Combat deployment

Anthropic’s Claude was reportedly used in the U.S. operation to capture Nicolás Maduro and his wife, though it wasn’t clear exactly how the model was used.

Usage conflict

Anthropic’s Usage Policy bars using its products to develop or design weapons and to facilitate or promote violence, and it restricts certain surveillance uses, including “battlefield management applications.”

Contract friction

Anthropic’s $200 million Pentagon contract is reportedly at risk amid disputes over limits on how Claude can be used, including domestic surveillance and autonomous lethal operations.


Full story

The U.S. operation targeting former Venezuelan leader Nicolás Maduro is exposing a growing rift between the Pentagon and AI developers over how artificial intelligence can be used in military operations.

The Wall Street Journal reported the Pentagon used Anthropic’s AI model, Claude, during the mission, which included bombing several sites in Caracas last month. Anthropic’s public usage policy prohibits using its products to develop weapons, facilitate violence or conduct certain surveillance activities.

Anthropic said it couldn’t comment on whether Claude was used in any particular mission, classified or otherwise, but said any deployment must comply with its usage policies.

Axios reported Monday that Defense Secretary Pete Hegseth is weighing whether to cut ties with Anthropic and potentially designate the company a “supply chain risk,” citing a senior Pentagon official.


What the Journal reported about Claude’s role

According to the Journal, Claude’s deployment occurred through Anthropic’s partnership with Palantir Technologies, whose software is widely used by the Defense Department and federal law enforcement.

The Journal said it was not clear exactly how Claude was used in the operation. After the raid, an Anthropic employee asked a Palantir counterpart how the model had been deployed, people familiar with the matter told the paper. Anthropic said it has not discussed the use of Claude in specific operations with industry partners outside routine technical conversations.

The Journal also reported that Anthropic was the first AI developer whose model was used in classified Defense Department operations. Axios separately reported that Claude is currently the only AI model available in certain classified military systems.

Tension over usage limits

Anthropic’s usage policy bars customers from using its models to develop weapons, facilitate violence or conduct certain surveillance and tracking activities without consent. It also restricts battlefield management and predictive policing applications.

The Journal reported that Anthropic’s contract with the Pentagon — valued at up to $200 million — has faced pressure amid disagreements over those limits. The company has raised concerns about autonomous lethal operations and domestic surveillance, which have become key sticking points in negotiations.

Chief Pentagon spokesman Sean Parnell said the department’s relationship with Anthropic is under review.

“Our nation requires that our partners be willing to help our warfighters win in any fight,” Parnell said, according to the Journal.

Anthropic said it remains “committed to using frontier AI in support of US national security.” Axios reported the company has signaled it may loosen some terms but still wants guardrails around mass domestic surveillance and fully autonomous weapons.

In earlier reporting on the dispute, the Journal reported that Anthropic has said Claude is used “extensively” for U.S. national security missions and that it is in “productive discussions” with the Defense Department about continuing that work.



Why this story matters

The Pentagon's use of AI in military operations now depends on whether tech companies permit it, creating uncertainty about which tools will remain available for classified missions and national security work.

Military AI access faces new limits

Defense operations currently rely on a single AI model in certain classified systems, and that access is now under review due to disputes over usage policy.

Tech usage terms now constrain operations

AI developers can prohibit their products from being used in weapons development, surveillance and battlefield management even after government contracts are signed.

National security tools may be restricted

A major AI provider may be designated a supply chain risk and lose its Pentagon contract over disagreements about how its technology can be deployed.
