AI, Autonomy, and the Risk of Nuclear War

Will emerging technologies like AI increase the risk of nuclear war? We are in an era of rapid, disruptive technological change, especially in AI. The nascent effort to reorient military forces for the future digitized battlefield is therefore no longer mere speculation or science fiction. “AI technology” is already fused into military machines, and global armed forces are well advanced in their planning, research and development, and, in many cases, deployment of AI-enabled capabilities.

AI does not exist in a vacuum. In isolation, AI is unlikely to be a strategic game changer. Instead, it will likely reinforce the destabilizing effects of advanced weaponry, increasing the speed of war and compressing the decision-making timeframe. The inherently destabilizing effects of military AI may exacerbate tensions between nuclear-armed powers, especially China and the United States, but not for the reasons you may think.

How and to what degree does AI augmentation mark a departure from automation in the nuclear enterprise, which goes back several decades? How transformative are these developments? And what are the potential risks posed by fusing AI technology with nuclear weapons? While we cannot answer these questions fully, extrapolating present trends in AI-enabled capabilities can illuminate the potential risks of the current trajectory and thus suggest ways to manage them.

The Emerging AI-Nuclear Nexus

It is worth considering how advances in AI technology are being researched, developed, and, in some cases, deployed and made operational across the broader nuclear deterrence architecture — early warning and intelligence, surveillance, and reconnaissance; command and control; nuclear weapon delivery systems; and non-nuclear operations.

Early-Warning and Intelligence, Surveillance, and Reconnaissance

AI machine learning might quantitatively enhance existing early-warning and intelligence, surveillance, and reconnaissance systems in three ways. First, machine learning — in conjunction with cloud computing, unmanned aerial vehicles (or drones), and big-data analytics — could enable mobile intelligence, surveillance, and reconnaissance platforms to operate at long ranges and in complex, dangerous environments (e.g., contested anti-access/area-denial zones, urban counterinsurgency, or the deep sea), processing real-time data and alerting commanders to potentially threatening situations such as military drills and suspicious troop or mobile-missile-launcher movements.

Second, machine-learning algorithms could gather, mine, and analyze large volumes of intelligence sources (open-source and classified) to detect correlations in heterogeneous — and possibly contradictory, compromised, or otherwise manipulated — datasets; the sketch below illustrates this kind of cross-source anomaly detection. Third, and relatedly, algorithmically processed intelligence could help commanders anticipate — and thus more rapidly preempt — an adversary’s preparations for a nuclear strike. In short, AI could offer human commanders operating in complex and dynamic environments vastly improved situational awareness and decision-making tools, allowing more time to make informed decisions, with potentially stabilizing effects.
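
To make the correlation-detection idea concrete, here is a minimal, hypothetical Python sketch. It fuses several synthetic activity indicators and uses an off-the-shelf unsupervised anomaly detector to flag days an analyst should review. The feature names, data, and thresholds are invented for illustration, not drawn from any real system.

```python
# Hypothetical sketch: flagging anomalous activity patterns in fused,
# multi-source indicator data with an unsupervised anomaly detector.
# All feature names and figures are illustrative inventions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Simulated daily indicators fused from heterogeneous sources:
# columns = [flight sorties, ship movements, signals volume, convoy sightings]
baseline = rng.normal(loc=[20, 5, 100, 2], scale=[4, 1, 15, 0.5], size=(365, 4))

# Inject a handful of days with correlated spikes across all indicators,
# the kind of cross-source pattern a human analyst could easily miss.
anomalies = rng.normal(loc=[45, 12, 180, 6], scale=[5, 2, 20, 1], size=(5, 4))
observations = np.vstack([baseline, anomalies])

detector = IsolationForest(contamination=0.02, random_state=0)
labels = detector.fit_predict(observations)  # -1 = anomalous, 1 = normal

flagged_days = np.where(labels == -1)[0]
print(f"Days flagged for analyst review: {flagged_days}")
```

The point is not the particular algorithm but the workflow: the machine triages a volume of heterogeneous data no human team could scan, and humans adjudicate what it flags.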

Nuclear Command and Control

Compared to intelligence and early-warning systems, AI is unlikely to have a material impact on nuclear command and control, which for several decades has incorporated automation but not autonomy. As we have seen in these pages, the algorithms that underlie today’s complex autonomous systems are too unpredictable, vulnerable (to cyber attack), unexplainable (the “black-box” problem), brittle, and myopic to be used unsupervised in safety-critical domains. For now, there is broad consensus among nuclear experts and nuclear-armed states that, even if the technology permitted it, decisions that directly affect nuclear command-and-control functions (i.e., missile-launch decisions) should not be pre-delegated to AI. Whether this fragile consensus can withstand mounting first-mover-advantage temptations in a multipolar nuclear order is less certain. Whether human commanders — predisposed to anthropomorphism, cognitive offloading, and automation bias — can avoid the temptation to view AI as a panacea for the cognitive fallibilities of human decision-making is also unclear. The question, therefore, is perhaps less whether nuclear-armed states will adopt AI technology into the nuclear enterprise than by whom, when, and to what degree it will be adopted.

Nuclear and Non-Nuclear Missile Delivery Systems

AI technology will likely affect nuclear weapon delivery systems in several ways. First, machine-learning algorithms may be used to improve the accuracy, navigation (pre-programmed guidance parameters), autonomy (“fire-and-forget” functionality), and precision of missiles — mainly in conjunction with hypersonic glide vehicles. For example, China’s DF-ZF maneuverable hypersonic glide vehicle is a dual-capable (nuclear- and conventionally armed) prototype with autonomous functionality.

Second, AI could improve the resilience and survivability of nuclear launch platforms against adversary countermeasures such as electronic-warfare jamming or cyber attacks — that is, autonomous AI enhancements would remove existing vulnerabilities in the communications and data links between launch vehicles and their operators.

Third, the extended endurance of AI-augmented unmanned platforms (i.e., unmanned underwater vehicles and unmanned combat aerial vehicles) used on long intelligence, surveillance, and reconnaissance missions — beyond the reach of remote operators — can potentially increase their ability to survive countermeasures and reduce states’ fear of nuclear decapitation. This is especially the case in asymmetric nuclear dyads, such as the United States and Russia, India and Pakistan, and the United States and China. AI and autonomy might also strengthen states’ second-strike capability — and thus deterrence — and even support escalation management during a crisis or conflict.

Conventional Counterforce Operations

AI could be used to enhance a range of conventional capabilities, with potentially significant strategic implications — especially strategic non-nuclear weapons used in conventional counterforce operations. Machine learning could increase the onboard intelligence of manned and unmanned fighter aircraft, increasing their capacity to penetrate enemy defenses with conventional high-precision munitions. Moreover, higher levels of AI-enabled autonomy might allow unmanned drones — possibly in swarms — to operate in environments hitherto considered inaccessible or too dangerous for manned systems (e.g., anti-access/area-denial zones, or deep-water and outer-space environments). The 2020 Azerbaijani-Armenian war and the ongoing Russian-Ukrainian war have demonstrated how smaller states can integrate new weapon systems to amplify their battlefield effectiveness and lethality.

Machine-learning techniques could materially enhance the ability of missile, air, and space defense systems to detect, track, target, and intercept. Though AI technology has been integrated into automatic target recognition to support defense systems since the 1970s, the speed of target identification has progressed slowly because of the limited databases of target signatures on which automatic target recognition systems rely. Advances in AI, particularly generative adversarial networks, could alleviate this technical bottleneck by generating realistic synthetic data to train and test automatic target recognition systems (see the sketch below). In addition, autonomous drone swarms might be used defensively (e.g., as decoys or flying mines) to buttress traditional air defenses.
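
What follows is a minimal sketch of the generative adversarial network idea mentioned above, written in Python with PyTorch: a generator learns to produce samples a discriminator cannot distinguish from real ones. The toy “signature” vectors, network sizes, and training settings are illustrative assumptions, not a description of any fielded automatic target recognition system.

```python
# Minimal GAN sketch for synthetic training data. The toy "signature"
# vectors stand in for real sensor data; all sizes are illustrative.
import torch
import torch.nn as nn

LATENT, FEATURES = 16, 64  # noise size; length of a toy signature vector

generator = nn.Sequential(
    nn.Linear(LATENT, 128), nn.ReLU(), nn.Linear(128, FEATURES)
)
discriminator = nn.Sequential(
    nn.Linear(FEATURES, 128), nn.ReLU(), nn.Linear(128, 1)
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

real_data = torch.randn(512, FEATURES)  # placeholder for measured signatures

for step in range(1000):
    real = real_data[torch.randint(0, 512, (32,))]
    fake = generator(torch.randn(32, LATENT))

    # Discriminator: label real samples 1, generated samples 0.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: try to make the discriminator label fakes as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# After training, the generator yields synthetic "signatures" that could
# augment a scarce library of real ones for training a recognition model.
synthetic = generator(torch.randn(100, LATENT))
```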

AI technology is also changing how cyber capabilities — both offensive and defensive — are designed and operated. On the one hand, AI might reduce a military’s vulnerability to cyber attacks and electronic-warfare operations. AI cyber-defense tools and anti-jamming capabilities — designed, for example, to recognize changes to patterns of behavior and anomalies in a network, and to automatically identify malware or software-code vulnerabilities — could protect nuclear systems against cyber intrusions or jamming operations.
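
As a simple illustration of behavior-based network defense, the hypothetical Python sketch below learns a statistical baseline of “normal” traffic and flags large deviations. Real tools are far more sophisticated; the traffic figures and threshold here are invented for illustration.

```python
# Hedged sketch of behavior-based anomaly detection for network defense:
# learn a statistical baseline of normal traffic, then flag deviations.
# The synthetic "traffic" and threshold are illustrative only.
import statistics

# Baseline: bytes-per-minute observed on a (synthetic) healthy network link.
baseline = [980, 1010, 995, 1003, 987, 1021, 999, 1005, 992, 1008]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(observation: float, z_threshold: float = 4.0) -> bool:
    """Flag observations more than z_threshold standard deviations from baseline."""
    return abs(observation - mean) / stdev > z_threshold

# A sudden burst -- e.g., data exfiltration or a jamming side effect --
# stands out against the learned baseline.
for obs in [1002, 998, 4750]:
    print(obs, "-> anomalous" if is_anomalous(obs) else "-> normal")
```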

On the other hand, advances in AI machine learning (notably increases in the speed, stealth, and anonymity of cyber warfare) might enable the identification of an adversary’s “zero-day vulnerabilities” — that is, undetected or unaddressed software flaws. Motivated adversaries might also use malware to take control of, manipulate, or fool the behavior- and pattern-recognition systems of autonomous programs such as Project Maven; for example, using generative adversarial networks to generate synthetic, realistic-looking data threatens both machine-learning and rules-based forms of attack detection. In short, AI technology in the nuclear domain will likely be a double-edged sword: strengthening nuclear systems while expanding the pathways and tools available to adversaries for cyber attacks and electronic-warfare operations against those systems (e.g., AI-augmented “left of launch” operations).
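
One well-documented way machine-learning systems can be fooled is the fast gradient sign method, in which an attacker nudges an input just enough to flip a model’s prediction while the change remains imperceptible to a human. The Python sketch below demonstrates the technique on a toy, untrained classifier (so the flip is not guaranteed); the model and input are stand-ins, not components of any real system.

```python
# Sketch of the fast gradient sign method (FGSM): perturb an input in
# the direction that most increases the model's loss. Toy model and
# input only; the technique, not the system, is the point.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

x = torch.randn(1, 64, requires_grad=True)  # stand-in for a sensor input
true_label = torch.tensor([0])

# Compute the loss gradient with respect to the *input*, not the weights.
loss = nn.functional.cross_entropy(model(x), true_label)
loss.backward()

# Step the input in the direction that most increases the loss.
epsilon = 0.1
x_adv = x + epsilon * x.grad.sign()

print("original prediction:   ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```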

Finally, advances in AI technology could contribute to the physical security of nuclear weapons, particularly against threats posed by third-party and non-state actors. Autonomous vehicles (e.g., “anti-saboteur robots”) could be used, for example, to protect states’ nuclear forces, patrol the perimeters of sensitive facilities, or form armed automated surveillance systems along vulnerable borders (e.g., South Korea’s Super aEgis II robotic sentry gun, which includes a fully autonomous mode). AI technology — in conjunction with other emerging technologies such as big-data analytics and early-warning and detection systems — might also be harnessed to provide novel solutions for nuclear risk reduction and non-proliferation efforts; for example, removing the need for “boots on the ground” inspectors at sensitive facilities in support of non-interference mechanisms for arms-control verification agreements.

The 2025 “Flash War” in the Taiwan Straits

How might AI-powered capabilities intensify a crisis between two nuclear-armed adversaries? Consider the following fictional counterfactual: On the morning of Dec. 12, 2025, political leaders in Beijing and Washington authorized a nuclear exchange in the Taiwan Straits. Independent investigators into the 2025 “flash war” were confident that neither side had deployed AI-powered “fully autonomous” weapons or intentionally violated the law of armed conflict.

In 2024, in an election dominated by the island’s volatile relations with Communist China, President Tsai Ing-wen pulled off a sweeping victory — another major snub to Beijing — securing a third term for the pro-independence Democratic Progressive Party. As the mid-2020s dawned, tensions across the Straits continued to rise: both sides — held hostage by hardline politicians and hawkish generals — maintained uncompromising positions, jettisoned diplomatic gestures, and were inflamed by escalatory rhetoric, fake news, and campaigns of mis- and disinformation. At the same time, both China and the United States deployed AI to support battlefield awareness; intelligence, surveillance, and reconnaissance; early warning; and other decision-support tools to predict and suggest tactical responses to enemy actions in real time.

By late 2025, rapid improvements in the fidelity, speed, and predictive capabilities of commercially produced dual-use AI applications had persuaded the great military powers not only to feed data-hungry machine-learning systems to enhance tactical and operational maneuvers, but increasingly to let them inform strategic decisions. Impressed by the early adoption and fielding by Russia, Turkey, and Israel of AI tools supporting autonomous drone swarms in counterterrorism operations on their borders, China synthesized the latest iterations of dual-use AI, sacrificing rigorous testing and evaluation in the race for first-mover advantage.

With Chinese military incursions in the Taiwan Straits — aircraft flyovers, island blockade drills, and drone surveillance operations — marking a dramatic escalation in tensions, leaders in China and the United States demanded the immediate fielding of the latest strategic AI to gain the maximum asymmetric advantage in scale, speed, and lethality. As the incendiary rhetoric playing out on social media — exacerbated by disinformation campaigns and cyber intrusions into command-and-control networks — reached a fever pitch on both sides, a chorus of voices proclaimed that a forced unification of Taiwan by China was imminent.

Spurred by the escalating situation unfolding in the Pacific — and with testing and evaluation processes incomplete — the United States decided to bring forward the fielding of its prototype autonomous AI-powered “Strategic Prediction & Recommendation System” (SPRS), which supported decision-making in non-lethal activities such as logistics, cyber, space assurance, and energy management. China, fearful of losing the asymmetric upper hand, fielded a similar decision-support system, the “Strategic & Intelligence Advisory System” (SIAS), to ensure its autonomous preparedness for any ensuing crisis.

On Dec. 12, 2025, at 06:30, a Taiwanese coast guard patrol boat collided with and sank a Chinese autonomous sea-surface vehicle conducting an intelligence-gathering mission inside Taiwan’s territorial waters. The previous day, President Tsai had hosted a senior delegation of U.S. congressional staff and White House officials in Taipei on a high-profile diplomatic visit. By 06:50, the cascading effects that followed — turbocharged by AI-enabled bots, deepfakes, and false-flag operations — had far exceeded Beijing’s predefined threshold, and thus its capacity to contain them.

By 07:15, these information operations coincided with a spike in cyber intrusions targeting U.S. Indo-Pacific Command and Taiwanese military systems, defensive maneuvers by Chinese counterspace assets in orbit, the activation of automated People’s Liberation Army logistics systems, and suspicious movements of the PLA’s road-mobile nuclear transporter erector launchers. At 07:20, U.S. SPRS assessed this behavior as an impending major national-security threat and recommended an elevated deterrence posture and a powerful demonstration of force. The White House authorized an autonomous strategic-bomber flyover of the Taiwan Straits at 07:25.

In response, at 07:35, China’s SIAS notified Beijing of increased communications traffic between U.S. Indo-Pacific Command and critical command-and-communication nodes at the Pentagon. By 07:40, SIAS had raised its assessed threat of a preemptive U.S. strike in the Pacific — to defend Taiwan, attack Chinese-held territory in the South China Sea, and contain China. At 07:45, SIAS advised Chinese leaders to use conventional counterforce weapons (cyber, anti-satellite, hypersonic, and other smart precision missile technology) in a limited preemptive strike against critical U.S. Pacific assets, including the U.S. air base on Guam.

At 07:50, Chinese military leaders — fearful of an imminent disarming U.S. strike and increasingly reliant on the assessments of SIAS — authorized the attack, which SIAS had already anticipated and thus planned and prepared for. At 07:55, SPRS alerted Washington to the imminent attack and recommended an immediate limited nuclear strike to compel Beijing to call off its offensive. After a limited U.S.-Chinese nuclear exchange in the Pacific that left millions of people dead and tens of millions injured, both sides agreed to cease hostilities.

In the immediate aftermath of the deadly confrontation — which lasted only a matter of hours, killed millions, and injured many more — leaders on both sides were dumbfounded as to what had caused the “flash war.” Both attempted to retroactively reconstruct a detailed analysis of the decisions made by SPRS and SIAS. However, the designers of the algorithms underlying SPRS and SIAS reported that it was not possible to explain the rationale and reasoning behind every subset of the AIs’ decisions. Moreover, because of the time, encryption, and privacy constraints imposed by the military and commercial end users, it had been impossible to keep retroactive back-testing logs and protocols. Did AI technology cause the 2025 “flash war”?

Human Solutions to the Machine Problem

In the final analysis, the best way to prepare for the AI-nuclear future may be to adhere to a few basic principles for managing nuclear weapons in their interactions with emerging technology. First, nuclear weapon systems should not be unduly complex or entangled. Second, these systems must be robust enough to withstand both traditional threats and the new threats emerging in the digital domain. Third, nuclear weapons must be disentangled and, where possible, kept distinctly separate (both physically and doctrinally) from non-nuclear capabilities and command, control, communications, and intelligence systems. If this principle were followed, it would likely rule out the kind of dual-use systems described in the “flash war” vignette.

Towards these lofty ends, AI could also support defense planners in designing and running wargames and other virtual training exercises to refine operational concepts, test various conflict scenarios, and identify areas and technologies for potential development. For instance, machine-learning techniques for modeling, simulation, and analysis might complement counterfactuals and low-tech table-top wargaming exercises to identify contingencies under which nuclear risk might arise.
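
As a toy illustration of such simulation-supported wargaming, the hypothetical Python sketch below runs a Monte Carlo random walk up and down a simple escalation ladder. The states and probabilities are invented for illustration; a real exercise would use expert-elicited values and far richer models.

```python
# Toy Monte Carlo sketch of how simulation can surface escalation
# contingencies in table-top wargaming. States and probabilities are
# invented stand-ins, not empirical estimates.
import random

ESCALATION_LADDER = ["crisis", "conventional", "limited nuclear", "full exchange"]

def run_episode(p_escalate=0.3, p_deescalate=0.4):
    """Random walk up and down a simple escalation ladder."""
    level = 0
    for _ in range(20):  # bounded number of moves per episode
        roll = random.random()
        if roll < p_escalate:
            level = min(level + 1, len(ESCALATION_LADDER) - 1)
        elif roll < p_escalate + p_deescalate:
            level = max(level - 1, 0)
        if level == len(ESCALATION_LADDER) - 1:
            break  # absorbing worst case
    return ESCALATION_LADDER[level]

random.seed(1)
outcomes = [run_episode() for _ in range(10_000)]
for state in ESCALATION_LADDER:
    print(f"{state:>15}: {outcomes.count(state) / len(outcomes):.1%}")
```

As Alan Turing wrote in 1950: “We can only see a short distance ahead, but we can see plenty there that needs to be done.”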

James Johnson is a lecturer in strategic studies at the University of Aberdeen. He is also an honorary fellow at the University of Leicester, a non-resident associate on the European Research Council-funded Towards a Third Nuclear Age Project, and a mid-career cadre with the Center for Strategic Studies Project on Nuclear Issues. He is the author of Artificial Intelligence and the Future of Warfare: USA, China & Strategic Stability. His latest book project with Oxford University Press is AI & the Bomb: Nuclear Strategy and Risk in the Digital Age. You can follow him on Twitter: @James_SJohnson.

Image: U.S. Air Force photo by Tech. Sgt. Patrick Harrower