Pentagon Threatens AI Lab Over 'No Murder Bots' Clause in Contract
KEY POINTS
- The Pentagon is upset with Anthropic for restricting AI use in mass surveillance and autonomous weapons development.
- Tensions rose when Anthropic reportedly questioned the use of its Claude model in a Venezuelan raid involving gunfire.
- Despite internal employee concerns and ideological clashes, Anthropic remains committed to national security but may see its partnership dialed back.
In a fiery 2026 saga of tech vs. tanks, the Pentagon is seriously miffed at Anthropic for insisting its AI can’t moonlight in mass surveillance or pilot weapons solo, two hobbies the military apparently counts on. The $200 million deal signed last summer got awkward recently when Anthropic reportedly side-eyed Claude’s role in a Venezuelan raid that involved actual shooting, prompting a pointed ‘Did you use our software to gun down folks?’ inquiry to partner Palantir. Meanwhile, OpenAI, Google, and Musk’s xAI have tossed their ethical floaties aside and promised the Pentagon ‘use for all lawful torturing... ops... purposes.’ Anthropic prefers internal debates, starring CEO Dario Amodei and his worries about AI apocalypse, while Pentagon officials mutter about replacing Claude, whom no one else can quite match yet. The love-hate power couple of AI and the military continues to draft awkward breakup texts.
Source: Axios | Published: 2/15/2026 | Author: Dave Lawler