AI Risks to Nuclear Deterrence Are Real

ZACHARY KALLENBORN

Why does the United States have so many nukes? Over 1,750 warheads are currently deployed on submarines, on aircraft, and in missile silos. It’s fewer than the total at the peak of the Cold War — the U.S. stockpile exceeded 31,000 warheads in 1967 — but it’s still a lot.

There are a few reasons for this, but the most important is that nuclear deterrence relies, in part, on the ability of nuclear forces to survive a first strike. A nuclear threat is not as effective if an adversary can eliminate all U.S. nuclear forces in a single strike. The survivability of that deterrent is a core component of overall U.S. national security. As new technologies like artificial intelligence (AI) emerge and grow, an obvious question to ask is: What does this mean for nuclear deterrence?

In “Will Artificial Intelligence Imperil Nuclear Deterrence?” Rafael Loss and Joseph Johnson argue that technical limitations prevent AI from threatening the deterrent value of today’s nuclear forces. In theory, AI-based image recognition systems could identify second-strike capabilities. But Loss and Johnson highlight two key challenges: bad data and an inability to make up for bad data. Few images of mobile nuclear missile launchers are available to compare with the images of commercial trucks from which an AI system would have to distinguish them. The available data is also insufficient because a machine cannot infer the difference between a regular truck and a nuclear launcher on its own. Even the best AI technologies cannot make up for these limitations. To reduce Loss and Johnson’s argument to a cliché: garbage in, garbage out.

I believe that AI could help create windows of opportunity in which a successful decapitation strike is possible. AI enables the development of novel platforms to collect intelligence and attack nuclear systems. Although AI has limitations, other, non-AI capabilities can compensate for its weaknesses in information processing. This means the potential for AI-based systems to aid second-strike platform identification should not be ignored.

While Loss and Johnson persuasively highlight real technical challenges with AI, their arguments do not support overall confidence in the survivability of U.S. and allied nuclear forces. AI-based systems only exist as part of a broader military apparatus that counteracts some of AI’s limitations. AI also enables the creation and improvement of novel platforms to collect sensor data and strike nuclear platforms.

This state of vulnerability is not inevitable. The United States and allied countries can reduce the risks AI creates by exploring new means of operating stealthily and developing new decoys.

AI Enables Platform Development

AI enables the development of novel autonomous platforms with significant relevance to nuclear deterrence. Developments in machine learning enable significant improvements in autonomous vehicle operation. Vehicles can better recognize and avoid obstacles, including hostile projectiles. Likewise, autonomous vehicles can better make decisions for themselves and plan their own tasks. And virtually every major military power already fields, or is developing, unmanned systems with varying levels of autonomy.

AI also enables the use of drones en masse and true drone swarms. Human cognition limits how many drones an operator can control at once, but greater autonomy reduces the cognitive load. Cheap, autonomous drones enable wide-area coverage, and swarming enables them to coordinate their searches. Swarms also allow more complex searches: Drones can be equipped with different sensor packages to collect different types of information and reduce false positives.

Autonomous platforms also heighten risks to nuclear forces and second-strike platforms. Aerial, surface, and undersea autonomous platforms may search the ocean for adversary submarines. They may distribute sensors near submarine ports, at strategic chokepoints, and across the broader ocean. AI-based systems can help analyze the collected data and optimize the overall sensor network. Autonomous systems can also carry out strikes against command-and-control nodes, early warning systems, and nuclear weapon-delivery systems. Provided platform costs stay low, autonomous systems enable the use of mass, overwhelming adversary defenses. Although AI has limitations, the broader defense apparatus mitigates them.

AI Does Not Work Alone

AI-enabled technologies pose a threat to the survivability of nuclear forces. That threat does not come from AI operating alone: Every military technology exists only as part of the overall defense ecosystem. The broader military bureaucracy, existing capabilities, and other emerging technologies raise the likelihood of AI’s success in second-strike platform identification.

AI may help intelligence analysts sort through the huge masses of collected information. States deploy a wide variety of assets to collect and assess data on adversary nuclear forces. Human intelligence assets may collect information on classified military plans and on the technical characteristics of nuclear systems, including their stealth capabilities; satellite and aircraft flights may identify nuclear weapons-related activity; and a broad range of anti-submarine capabilities search the ocean for adversary submarines. AI-based systems may help flag the information most likely to point to a second-strike platform’s location.

Collection assets also gather data that can help train AI detection systems. For example, news reports have noted numerous drone sightings over Naval Base Kitsap-Bangor, where eight Trident submarines are stationed. Although open-source reports do not identify who controlled the drones, an adversary could certainly use drones to collect extensive imagery, video, and other sensor data related to nuclear submarines. The same drones could also collect information on broader base activity, such as when a submarine enters or leaves.

Whether that training data is enough for robust AI is a difficult question for the public to assess. The capabilities of those assets, their deployment, and the data itself are likely to be highly classified. If states knew the what, where, and how of information collection, they would change their behavior. States with better intelligence networks and assets will also have more and better training data for their AI systems. And even imperfect AI detection can be significant.

Humans make up for some limitations of AI. The Department of Defense emphasizes the concept of human-machine teaming: Humans and machines work together symbiotically. Humans provide higher-order decision-making and ensure ethical and appropriate operation of autonomous systems. For example, a “loyal wingman” drone flies alongside manned aircraft, offering a range of capabilities from radar jamming to weaponry usable at the pilot’s discretion. Human-machine teaming is relevant to the nuclear domain too.

In a world in which AI-based systems help locate mobile nuclear forces, humans can verify the results. Analysts can consult available satellite or aircraft imagery of the area. Human and unmanned assets can be deployed to collect additional data and track any identified mobile nuclear system. Humans can also help narrow searches based on assessments of military doctrine, behavior, and platform locations at known times. For example, if a submarine is spotted leaving Bangor, the area of ocean to search shrinks drastically. Of course, verification takes time beyond whatever processing the AI-based system itself requires. But AI also enables the creation of novel platforms that can help carry out these and other tasks.

AI creates risks for nuclear deterrence, and other emerging technologies worsen those risks by mitigating some of AI’s challenges. As Loss and Johnson note, the time needed to train AI algorithms limits AI’s usefulness in platform identification. However, improvements in quantum computing reliability and usability, and in supercomputing writ large, provide the computing power to process large volumes of data more quickly. In fact, Google reportedly achieved quantum supremacy, meaning its quantum computer performed a calculation that the world’s most powerful supercomputers could not complete in any practical amount of time. (Although Google’s quantum computer is only capable of a single, highly specialized calculation, it demonstrates the real potential of quantum computing.) New computer chips and other hardware optimized to support AI applications will speed the process too. Overall, this means faster, more effective AI. These developments in combination pose real risks to nuclear deterrence.

A Window of Opportunity Is Enough to Threaten Nuclear Stability

AI only needs to help create a window of opportunity in which a strike is possible to pose a serious risk. Since a successful decapitation strike is essentially a game-ender and the risks of failure are huge, any thoughtful adversary would wait until it is confident in its knowledge of second-strike platform positions before launching a strike. Even if AI-based detection is not perfect, AI only needs to be good enough to provide that confidence, perhaps with some luck involved. Even a peephole is a window.

Even the potential for a window to open is destabilizing. Adversary knowledge of U.S. nuclear force positions is a closely guarded secret and vice versa. In a crisis, policymakers on both sides would not know if the window was wide open or shut tight. They might take actions to enhance survivability that are interpreted as a desire to exploit a decapitation opportunity. For example, a Russian policymaker might think: “Did the United States deploy its strategic bombers because it fears a strike or because it is preparing to carry one out?”

However, AI is unlikely to pose a day-to-day risk to second-strike forces. As Loss and Johnson highlight, AI-based systems may fail to recognize a missile launcher. Processing the large volume of data may also take a long time, especially as more voluminous sensor data is included.

For a true window of opportunity to open, actors must also be confident they have sufficient military assets in close enough range. Adversaries must have sufficient nuclear and conventional capabilities to eliminate mobile and immobile nuclear launch platforms. AI makes that easier too, because of the way autonomous platforms augment nuclear and conventional strike capabilities.

Of course, one should not assume a state would take the opportunity. The window may not open during a time of crisis when tensions are high and war appears close. If the window opens during peacetime, states will decide what to do based on state policy and military doctrine. One state may consider the possibility, while others find the notion abhorrent. Nonetheless, states can take action to reduce the risk.

Mitigating the Threat

Current trends suggest AI will have a significant, but not apocalyptic, impact on nuclear deterrence. A small window that allows a decapitation strike is still a window worth worrying about. Of course, the likelihood of a window opening depends on a host of unanswered questions, such as whether fundamental AI research continues to progress, how effective AI countermeasures are, and how robust AI needs to be in the roles described above. As Loss and Johnson rightly highlight, the brittleness of AI also limits the overall impact. Smart investments can mitigate emerging AI risks to nuclear deterrence without turning a tornado into a hurricane.

The United States should explore ways to hide from AI-based detection systems. The brittleness of AI systems can be exploited for defensive gain. The United States could consider cyber means of manipulating adversary AI. For example, changing the labels on adversary training data for machine-vision systems could poison every system that uses the trained algorithm. Besides manipulating training images and labels, the United States could spread other forms of bad data, such as using acoustic emitters to mimic submarine signatures. The United States could also explore means to disable sensors via electronic, cyber, or space-based attacks. Additional decoy platforms would help, too.
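To make the label-poisoning idea concrete, here is a minimal, illustrative sketch in Python. It is a toy example under stated assumptions, not a description of any real system: a synthetic scikit-learn dataset stands in for imagery, class 1 plays the role of the object a defender wants hidden, and the model and flip fractions are chosen purely for illustration.

```python
# Minimal, illustrative sketch of label-flipping data poisoning.
# Everything here is a stand-in: synthetic features replace imagery, and
# class 1 plays the role of the object the defender wants to hide.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic two-class dataset standing in for "ordinary truck" (0) vs. "launcher" (1).
X, y = make_classification(n_samples=4000, n_features=20, n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

def poison_labels(labels, fraction, rng):
    """Relabel a random fraction of class-1 training examples as class 0."""
    poisoned = labels.copy()
    target_idx = np.flatnonzero(poisoned == 1)
    n_flip = int(fraction * len(target_idx))
    flip_idx = rng.choice(target_idx, size=n_flip, replace=False)
    poisoned[flip_idx] = 0
    return poisoned

for fraction in (0.0, 0.3, 0.6):
    model = LogisticRegression(max_iter=1000).fit(X_train, poison_labels(y_train, fraction, rng))
    recall = recall_score(y_test, model.predict(X_test))  # how many true "launchers" are still caught
    print(f"{int(fraction * 100):>2d}% of class-1 labels poisoned -> detection recall {recall:.2f}")
```

Even in this toy setup, the attacker never touches the deployed model; corrupting the data it learns from is enough to make the detector increasingly miss the very class it was built to find.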

The likelihood of a successful decapitation strike decreases if an adversary must target more second-strike platforms. Each additional platform must be identified and attacked. The United States and allied nations could build additional decoy platforms, using advancements in AI, machine learning, and robotics. For example, unmanned undersea vehicles could be designed to emulate the signatures of nuclear submarines. Russia is reportedly doing exactly that. Such “sub-sinks” could also be equipped with weapons and sensors to help identify and defend nuclear undersea platforms against conventional threats.

The technical limitations of AI mean the sky is not falling; however, AI does create real risks. Action now can reduce the likelihood of a window opening in which an adversary could eliminate the United States’ second-strike capabilities. Such action is necessary to preserve the security of the United States.

Zachary Kallenborn is a freelance researcher and analyst, specializing in chemical, biological, radiological, and nuclear (CBRN) weapons, CBRN terrorism, drone swarms, and emerging technologies writ large. His research has appeared in the Nonproliferation Review, Studies in Conflict and Terrorism, Defense One, the Modern War Institute at West Point, and other outlets. His most recent study, “Swarming Destruction: Drone Swarms and CBRN Weapons,” examines the threats and opportunities of drone swarms for the full scope of CBRN weapons.

This article does not represent the views of the author’s current or former funders or employers.

Image: U.S. Navy (Photo by Mass Communication Specialist 1st Class Ashley Berumen)

CORRECTION: An earlier version of this article stated, “A nuclear threat is not as effective if an adversary can eliminate all U.S. nuclear forces in a single, decapitation strike.” This has been changed. It now reads, “A nuclear threat is not as effective if an adversary can eliminate all U.S. nuclear forces in a single strike.”