Palantir, Anduril and a suite of other Tolkien-inspired tech nightmares want to integrate artificial intelligence into every aspect of the U.S. military. Both Palantir and Anduril are pitching software suites as agents that will help make command decisions during combat. An AI general, if you will.
Yes, that’s a terrible idea.
On this episode of Angry Planet, Cameron Hunter and Bleddyn Bowen will tell us why. Hunter is a researcher at the University of Copenhagen and Bowen is a professor of Astropolitics at Durham University. They’ve just written a paper that skewers the idea that AI will ever be able to make command decisions.
The narrow definition of AI
The folly of the AI general
The games AI can’t win
“Targeting things is a command decision”
The IDF’s use of Microsoft AI systems
“The enemy gets a vote”
Killing more doesn’t mean winning more
American military as a “glass tank”
Matthew gets lost in a rant
“They don’t even have an animal’s intelligence”
The very real military uses of AI
Scientists Explain Why Trump's $175 Billion Golden Dome Is a Fantasy
OpenAI Employees Say Firm's Chief Scientist Has Been Making Strange Spiritual Claims
Eastern Europe Wants to Build a ‘Drone Wall’ to Keep Out Russia
How Palantir Is Using AI in Ukraine
How Israel Is Using Microsoft AI to Pick Targets in Gaza
The Israeli military is using AI products from Microsoft to conduct its war in Gaza. Off-the-shelf AI products powered by the tech company’s Azure cloud computing system and OpenAI are helping the IDF sort through data, translate Arabic, and even pick targets. But AI translations aren’t perfect, and these systems often make mistakes.