AI Will Not Start Nuclear War – But It May Change How We Think About It
Misconceptions about AI-assisted analysis overlook the parallels between contemporary intelligence processes and the likely future form of Artificial General Intelligence.
Debates about artificial intelligence and nuclear weapons tend to swing between hype and fear. Either AI will become a strategic oracle that outthinks human leaders or it will destabilise deterrence and accelerate us toward catastrophe.
Both views miss the real shift underway.
The most important impact of AI on nuclear strategy is not about machines making launch decisions. It is about machines helping humans think differently about escalation, risk and bargaining under extreme uncertainty.
My research builds on recent work at King's College London examining how frontier large language models (LLMs) behave in simulated nuclear crises. While the results highlight LLMs' tendency to employ nuclear weapons against each other in abstract scenarios, they offer limited insights into real-world escalation dynamics.
I adopt a more operationally grounded approach, designing concrete, policy-relevant scenarios and incorporating insights from human tabletop exercises and political-military analysis. Specifically, I study American nuclear strategy, including in a 2030 US–China contingency over Taiwan and a 2035 multi-theatre US–China–Russia conflict.
The goal is not prediction. It is exploration.
LLMs operate by identifying probabilistic patterns in the relationship between language, actions and outcomes. To make this process analytically tractable, I provide the model with a structured corpus of strategically relevant material – including doctrine, historical cases and wargaming insights – and a decision-making architecture that requires it to explicitly evaluate trade-offs between escalation and restraint. By repeating this structured process across hundreds of iterations under varying conditions, the simulations generate distributions of possible outcomes. These outputs enable us to examine how escalation pathways form, stabilise, or spiral – and under what conditions nuclear use becomes thinkable.
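To make the iterated design concrete, here is a minimal sketch of what such a simulation harness might look like. It is illustrative only: `query_model`, the scenario parameters, the outcome labels and the weights are hypothetical stand-ins, not the actual research pipeline or its results.

```python
import random
from collections import Counter

# Hypothetical outcome labels for a single simulated crisis run.
OUTCOMES = ["restraint", "conventional_escalation",
            "limited_nuclear_use", "strategic_exchange"]

def query_model(scenario: dict, corpus: list[str]) -> str:
    """Stand-in for an LLM call. A real harness would prompt a frontier
    model with the structured corpus and a decision architecture forcing
    an explicit escalation-vs-restraint trade-off; here we just sample."""
    weights = [0.45, 0.35, 0.15, 0.05]  # illustrative, not empirical
    return random.choices(OUTCOMES, weights=weights, k=1)[0]

def run_ensemble(scenario: dict, corpus: list[str], n_runs: int = 500) -> Counter:
    """Repeat the structured decision process across many iterations to
    build a distribution of outcomes, not a single point prediction."""
    return Counter(query_model(scenario, corpus) for _ in range(n_runs))

if __name__ == "__main__":
    scenario = {"year": 2030, "theatre": "Taiwan", "actors": ["US", "China"]}
    corpus = ["doctrine excerpts", "historical cases", "wargaming insights"]
    distribution = run_ensemble(scenario, corpus)
    total = sum(distribution.values())
    for outcome, count in distribution.most_common():
        print(f"{outcome}: {count / total:.1%}")
```

The point of the distribution, rather than a single run, is exactly what the varied-conditions design targets: which outcomes recur, under which settings, and how sensitive they are to small changes.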
The patterns that emerge are revealing.
Nuclear first use in these simulations tends to appear not at the point of national collapse, but when conventional defeat at the theatre level looks imminent. In other words, nuclear weapons enter the picture when leaders face losing a war, not losing their state. I reached similar conclusions via tabletop exercises and political-military analysis: both corroborated the AI insight that nuclear use is most plausible at ‘critical but not existential’ thresholds.
Escalation, moreover, often remains bounded. When nuclear weapons appear, they are usually limited and operationally targeted. The models do not automatically leap to countervalue strikes or all-out strategic exchange. The popular image of inevitable escalation to Armageddon is not what emerges most frequently.
Finally, decisive victories are rare. The most common outcomes are stalemates, pauses, or coercive bargaining under severe damage. Nuclear use does not reliably deliver clean success.
Taken together, this supports the idea that limited nuclear war occupies a narrow band of what we might call ‘optimal instability’: a space where incentives to escalate coexist with strong pressures to avoid catastrophe. It is dangerous, fragile and morally sobering – but not purely suicidal.
So where does Artificial General Intelligence (AGI) fit into this picture?
Public discussion often assumes AGI would appear as a single, monolithic superintelligence – a machine strategist that calculates faster and better than any human. But a more plausible trajectory is collective and distributed machine intelligence: swarms of specialised models performing discrete functions.
One model might map escalation ladders. Another might estimate military effectiveness. Another might model adversary perceptions. Another might explore termination pathways. Together, they would form a variegated and dynamic analytic ecosystem.
This would not replace human decision-making. It would resemble an amplified version of how national security staffs already operate: distinct and complementary sets of expertise fused into an overarching decision process. The difference is scale and speed. Instead of examining a handful of scenarios, AI analysts could examine millions.
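As a rough illustration of that division of labour, the sketch below wires several specialised analysts into one fusion pipeline. The role names, the `Analyst` class and the `assess` stub are all invented for illustration; no such system is described in the underlying research.

```python
from dataclasses import dataclass

@dataclass
class Analyst:
    """One specialised model in a distributed analytic ecosystem.
    In practice each would be a separately prompted or fine-tuned
    model; here `assess` is a placeholder returning a canned line."""
    role: str

    def assess(self, scenario: str) -> str:
        # Hypothetical stub: a real system would call a role-specific model.
        return f"[{self.role}] assessment of: {scenario}"

def fuse(assessments: list[str]) -> str:
    """Fuse complementary expertise into one decision-support product,
    mirroring how national security staffs already aggregate analysis."""
    return "\n".join(assessments)

if __name__ == "__main__":
    swarm = [
        Analyst("escalation-ladder mapping"),
        Analyst("military effectiveness estimation"),
        Analyst("adversary perception modelling"),
        Analyst("war-termination pathways"),
    ]
    print(fuse([a.assess("2030 US–China contingency over Taiwan") for a in swarm]))
```

The design choice worth noting is that no single component decides anything; each contributes a bounded judgment, and fusion happens at the human-facing layer.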
This points to a second misconception: that AI’s value lies in prediction. Policymakers often ask whether AI can forecast who will use nuclear weapons or how a war will end.
That is the wrong benchmark. The real value of AI is not predicting outcomes but mapping pathways. AI can help generate a strategic multiverse – a landscape of plausible escalation routes, each with associated costs, benefits and risks. Leaders are unlikely to get a crystal ball, but they may get a far richer map of possibility space.
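One way to picture mapping pathways rather than predicting outcomes is as a branching search over crisis states. The toy enumerator below uses invented states and transition rules; the labels are placeholders for exposition, not findings about real escalation dynamics.

```python
# Toy escalation-pathway enumerator: it maps every route through a
# crisis rather than predicting one outcome. All states and branches
# are illustrative placeholders.
TRANSITIONS = {
    "crisis": ["de-escalation", "conventional_war"],
    "conventional_war": ["stalemate", "theatre_defeat"],
    "theatre_defeat": ["coercive_bargaining", "limited_nuclear_use"],
    "limited_nuclear_use": ["war_termination", "strategic_exchange"],
}

def enumerate_pathways(state, path=None):
    """Depth-first enumeration of every route from an initial state to
    a terminal one; each full path could then be annotated with costs,
    benefits and risks to build the 'map of possibility space'."""
    path = (path or []) + [state]
    children = TRANSITIONS.get(state)
    if not children:  # terminal state: one complete pathway
        yield path
        return
    for nxt in children:
        yield from enumerate_pathways(nxt, path)

if __name__ == "__main__":
    for route in enumerate_pathways("crisis"):
        print(" -> ".join(route))
```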
Historically, new technologies are first used to automate existing processes. But their true impact comes when they augment human judgment and reshape how problems are framed. Radar did not just automate detection; it redefined airpower. Computers did not just speed up calculations; they triggered a revolution in military affairs.
AI may do something similar for nuclear strategy.
For decades, nuclear theory has rested on a thin empirical base: a small number of historical crises, theoretical models and limited war games. AI-enabled simulations allow scholars and practitioners to stress-test concepts like escalation dominance, intra-war deterrence and damage limitation across large ensembles of scenarios. Assumptions become more visible. Fragile equilibria can be identified before they are tested in the real world.
None of this implies delegating nuclear decisions to machines. Human judgment must remain central in matters of existential risk. Political responsibility, moral reasoning and accountability cannot be automated.
But refusing to use AI as an analytic tool would be its own form of risk. It would mean relying on narrower evidence and smaller samples in a domain where mistakes are catastrophic.
If highly capable AI systems emerge, their most stabilising contribution may not be as autonomous decision-makers but as engines for exploring strategic possibility space. They could help leaders see where escalation tends to run away, where it can be contained and which postures narrow or widen the margins of risk.
The greatest catastrophes in strategy arise less from pure irrationality than from overconfidence in simple models of how wars will unfold. History repeatedly shows that crises do not follow scripts. AI will not remove uncertainty from nuclear strategy. But it may help us interrogate our assumptions before reality does it for us.
WRITTEN BY
Leo Keay
Guest Contributor, UK PONI


