Judge Blocks Pentagon’s Anthropic Retaliation, Supply Chain Risk Label Frozen

A federal judge in California has blocked the Pentagon from using a supply-chain risk label against Anthropic, saying the government likely crossed constitutional lines when it moved to cut ties with the artificial intelligence company. The ruling keeps the Defense Department from enforcing the designation for now and sets up a possible appeal.

US District Judge Rita Lin said the government’s actions appeared to punish Anthropic for refusing to abandon safety limits on how its Claude AI system could be used. She wrote that the record showed retaliation tied to the company’s public stance, not a neutral national security review.

What the court blocked

The Pentagon had labeled Anthropic a supply chain risk and told federal agencies to stop using the company’s product. It also moved to push companies with military contracts to prove they did not use Anthropic tools, a step the company said could threaten hundreds of millions of dollars in business.

The label had previously been used mainly for companies linked to foreign adversaries, which made the move unusually forceful. Lin said the measure went beyond the government’s stated security concerns and appeared to target the company’s speech.

The measures at issue, and their effects as described in court:

  - Supply chain risk label: required contractors to show they did not use Anthropic products
  - Agency guidance: pushed federal users to stop using the company’s tools
  - Contract pressure: risked broader business ties with firms working with the military

Why Anthropic challenged the move

Anthropic said it drew two lines in its government work: it would not allow its AI systems to be used in autonomous weapons or domestic mass surveillance. The company argued that its position was protected speech and that the Pentagon knew about those limits before the dispute escalated.

The company also said the designation harmed its reputation and threatened major contracts. After the ruling, an Anthropic spokesperson said the court moved swiftly and agreed the company was likely to succeed on the merits.

“We’re grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits,” the spokesperson said. “While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI.”

Judge cites First Amendment concerns

Lin said the Pentagon’s response looked like retaliation for Anthropic’s refusal to change its policy. She wrote that public criticism and disagreement with government contracting terms do not justify branding a company as a threat to the U.S.

Her ruling said the measures likely violated Anthropic’s First Amendment and due process rights. She also said the government’s own records suggested the label was tied to the company’s “hostile manner through the press,” a justification the court found especially troubling.

The judge gave the government one week before the order takes effect, allowing time for an appeal. That delay means the dispute is not over, even though Anthropic won an important early victory in court.

Part of a wider clash over Pentagon power

The decision adds another legal setback for Defense Secretary Pete Hegseth, who has faced a series of rulings over how he has used his authority. A federal judge in Washington earlier found he violated the First Amendment rights of reporters through a restrictive press policy, and another ruling said he infringed on the free speech rights of a Democratic senator.

The Anthropic case also highlights a larger fight over how much control the military should have over commercial AI systems. The Pentagon said it wanted full access to Claude for lawful uses, including during wartime, while Anthropic said that access could not come at the cost of safeguards on autonomous weapons and mass surveillance.

  1. Anthropic kept its policy limits on military and surveillance use.
  2. The Pentagon labeled the company a supply chain risk.
  3. Federal agencies were told to stop using the product.
  4. A California judge blocked the move and cited constitutional concerns.
  5. The government can still appeal after the short delay.

A separate challenge to other authorities used in the designation is still pending in Washington, DC. That leaves the broader legal fight unresolved as the government and one of the world’s leading AI firms continue to clash over security, speech rights, and the rules for military AI use.

Read more at: www.cnn.com