
U.S. Military Cuts Ties With AI Firm Over Safety Rules

Washington, D.C., USA - Saturday, March 7, 2026

The Pentagon has officially designated AI firm Anthropic PBC as a supply‑chain risk, effectively barring the company from future government contracts. The designation could also ripple outward, pressuring other defense contractors that work with Anthropic to sever ties.

Why the Label?

  • Disagreement over usage: Anthropic’s AI tool, Claude, is at the center of the dispute. The company insists that Claude not be used for surveillance of U.S. citizens or in autonomous weapons that lack human oversight.
  • Pentagon’s stance: The Defense Department insists on unrestricted use of the tools it procures for any lawful purpose, fearing that accepting vendor constraints could set a precedent limiting future military options.

Escalation

  • Leaked memo: Anthropic CEO Dario Amodei sent a memo criticizing the Pentagon’s position and referencing political pressures. After leaks, he apologized for the wording but vowed to challenge the label in court.

Political Reactions

  • Senator Kirsten Gillibrand: She called the Pentagon’s action “reckless” and warned it would aid enemies, arguing that targeting an American company for standing up to the government is a tactic more common in China.

Pentagon’s Position

The department emphasizes that allowing vendors to dictate how their technology is used could put soldiers at risk. It will not accept constraints that might limit its ability to deploy AI for lawful missions.

Potential Outcomes

  • If Anthropic wins: A new standard could emerge for defense‑tech negotiations.
  • If the Pentagon prevails: It would reinforce the military’s right to use AI tools without external limits.
