Which Way to Armageddon?
Thinking about the mix of artificial intelligence and nuclear weapons while staring at the Chicago skyline.
In July, I was on the 8th floor of a building on the University of Chicago campus looking over the city’s skyline with Jeffrey Lewis, a nuclear weapons expert at the Middlebury Institute of International Studies. He was there, along with dozens of other nuclear wonks, to teach Nobel laureates about the horrors of nuclear war. With a smile on his face, Lewis pointed to all the parts of the Chicago skyline that would vanish in a puff of atomic ash should someone detonate a nuke over the Windy City.
This exercise, he explained, was part of his presentation. I hope it had as big an effect on them as it did on me. Nuclear weapons can be an abstract nightmare – a weapon from the past or the McGuffin in an action movie – and it’s good every now and then to be confronted with the horrific realities of what even one nuke can do.
The nuclear experts and laureates had gathered on the 80th anniversary of the Trinity explosion that brought us into the nuclear age. After the meetings, the laureates issued a “declaration” that called on the world’s nuclear powers to work towards nuclear disarmament. The list of asks included re-committing to a ban on nuclear weapons testing and the resumption of arms control talks between the United States and Russia.
It was a good declaration, but I wasn’t there just for the nukes. I was in Chicago to talk to nuclear experts about a different abstract horror: artificial intelligence. With so many nuclear priests in one place, I thought it was a great opportunity to pick their brains about how the world’s most dangerous technology (nuclear weapons) is coming together with what might become the world’s most dangerous technology (AI).
Their answers to my questions were not comforting. In short, the dozen or so people I spoke with warned me that mixing the old world-ending tech with the fancy new computer systems was inevitable, and was probably already happening. In a speech last year, U.S. Air Force General Anthony J. Cotton—the man in charge of America’s nuclear weapons—said his forces need AI. I wrote about this for Wired, but I wanted to expand on it a bit here.
I had a long conversation with Mallory Stewart, the former assistant secretary of state for arms control, verification, and compliance under President Joe Biden. Stewart has since left government and is now the executive vice president of the Council on Strategic Risks. Like many of the other nuclear wonks there, Stewart viewed the intersection of nukes and AI as a fait accompli. Also like many others, she was less sure what exactly that meant.
“Artificial intelligence is fascinating to me because everyone is excited about it or terrified of it,” she said. “It can either solve all the world’s problems or create worse problems than we’ve ever possibly envisioned.”
She pointed out that the nuclear age has brought new medicines and technologies that have done good for humanity and that AI might do the same. “The way we’re looking at it is how it is exacerbating risks in various contexts and what can we do to mitigate that? In the nuclear arena, everyone knows there’s a risk of bringing in an AI decision maker or AI that’s open to a cyber attack, an AI that has hallucinations,” she said.
Stewart is neither an AI booster nor an AI doomer. Her perspective is practical: AI is a new technology that everyone wants, is using, and has to be dealt with. “We have to put in place as much risk reduction and resilience as we possibly can, and then every single second, re-evaluate and re-adapt to AI itself,” she said. “But to cut [AI] off entirely is also a problem.”
The Trump administration is going all in on AI and using metaphors from the nuclear arms race while doing it.
“AI is the next Manhattan Project, and THE UNITED STATES WILL WIN,” the Department of Energy said in a post on X. A New York Times op-ed from Thomas Friedman took the metaphor and ran with it, calling on the United States and China to work together to tackle the regulation of AI the same way the world used to regulate nuclear weapons.
For Stewart, and many of the other experts I spoke with, the metaphor is faulty. “I really worry that our present stance of… arms racing and our unfortunate inclination to move away from regulation is going to exacerbate all of the challenges we see in AI,” she said. “Those of us that are looking to highlight risks have to figure out how to do it in a way that appeals to an audience that wants to see their own capacity to use AI give them greater power.”
Stewart said that dealing with AI is a more sophisticated challenge than managing the dangers of nuclear missiles. Calls to regulate AI like nukes miss the point of both technologies. “In the nuclear context, it’s a little more easy to wrap your head around. It’s not an evolving risk. Obviously weapons programs evolve and uses evolve, but it’s not like the underlying capacity of the system itself evolves,” she said.
The week ended with a press conference and an official declaration in which the laureates called on the countries of the world to work together to regulate nuclear weapons. AI got a shout-out. “Acknowledging the fallibility of AI, we call on all nuclear armed states to ensure meaningful and enhanced human control and oversight over nuclear command and control, and increase decision-making timelines for determining the reliability of information received and the prudence of any decision on whether to use military force. Further acknowledging the fallibility of human beings, we call on all nuclear armed states to institute the ‘two-person rule’ that ensures at least two individuals are involved in any decision about the use of nuclear force.”
Keep a human in the loop, in other words. Hell, keep two humans in the loop. It’s a good policy recommendation, a good place to start, but I do wonder whether anyone in power is listening.
Two parts of the conference have stuck with me since July. The first was standing with Lewis, picturing the Chicago skyline in ashes. The second came at a concert the organizers held after it was all over. The Kronos Quartet played a series of anti-nuclear songs. During some of the performances, the experts who’d spoken with the laureates read snippets of nuclear history.
While the quartet played Tusen Tankar toward the end of its set, nuclear historian Alex Wellerstein read a speech given by U.S. General George Lee Butler to the Canadian Network Against Nuclear Weapons in 1999. I’d never heard this speech and it felt both prescient and maddening. Almost three decades later, all the old nuclear problems are still here, and now AI may exacerbate them.
Butler was the last head of Strategic Air Command, a post-World War II American military outfit that was in charge of the nukes. He oversaw the end of the Cold War and only became aware of the totality of America’s nuclear war plans at the finish. That knowledge changed him.
“Even having some sense of what it encompassed, I was shocked to see that in fact it was defined by 12,500 targets in the former Warsaw Pact to be attacked by some 10,000 nuclear weapons, virtually simultaneously in the worst of circumstances, which is what we always assumed. I made it my business to examine in some detail every single one of those targets. I doubt that that had ever been done by anyone, because the war plan was divided up into sections and each section was the responsibility of some different group of people. My staff was aghast when I told them I intended to look at every single target individually. My rationale was very simple. If there had been only one target, surely I would have to know every conceivable detail about it, why it was selected, what kind of weapon would strike it, what the consequences would be. My point was simply this: Why should I feel in any way less responsible simply because there was a large number of targets. I wanted to look at every one.
“What I have come to believe is that much of what I took on faith was either wrong, enormously simplistic, extraordinarily fragile, or simply morally intolerable,” Butler said. “What I have come to believe is that the amassing of nuclear capability to the level of such grotesque excess as we witnessed between the United States and the Soviet Union over the period of the 50 years of the Cold War, was as much a product of fear, and ignorance and greed, and ego and power, and turf and dollars, as it was about the seemingly elegant theories of deterrence.”
An artificial intelligence system is incapable of having this kind of emotional revelation or sense of culpability. AI can scan the targets in an instant, but no LLM would feel what Butler felt when faced with the entirety of the plan. It cannot reckon with the full horror of what these weapons can do.
The world is now in the midst of a dual arms race, one for AI and the other for “better” nuclear weapons. They say we need the nukes to keep us safe. They say we need AI to keep us competitive. But I think the truth is closer to what Stewart said: many folks in charge see both technologies as a path to giving them greater power.