AI, Cyberspace, and Nuclear Weapons


When it comes to artificial intelligence (AI), cyberspace, and national security, there are more questions than answers. But these questions are important, as they touch on key issues related to how countries can use increasingly powerful technologies while, at the same time, keeping their citizens safe. Few national security topics are as technical as nuclear security. How might the linkages between AI and cyberspace affect the security of nuclear systems?

A new generation of AI-augmented offensive cyber capabilities will likely exacerbate the military escalation risks associated with emerging technology, especially inadvertent and accidental escalation. Examples include the increasing vulnerability of nuclear command, control, and communication (NC3) systems to cyber attacks, as well as the challenges that remote sensing technology, autonomous vehicles, conventional precision munitions, and hypersonic weapons pose to hitherto concealed and hardened nuclear assets. Taken together, these trends might further erode the survivability of states' nuclear forces.

 

 

AI, and the state-of-the-art capabilities it empowers, is a natural manifestation of an established trend in emerging technology, not its cause or origin: the increasing speed of war, the shortening of decision-making timeframes, and the co-mingling of nuclear and conventional capabilities are leading states to adopt destabilizing launch postures.

The AI-Cyber Security Intersection

AI will make existing cyber warfare capabilities more powerful. Rapid advances in AI and increasing degrees of military autonomy could amplify the speed, power, and scale of future attacks in cyberspace. Specifically, there are three ways in which AI and cyber security converge in a military context.

First, advances in autonomy and machine learning mean that a much broader range of physical systems is now vulnerable to cyber attacks, including hacking, spoofing, and data poisoning. In 2016, a hacker brought a Jeep to a standstill on a busy highway and then interfered with its steering system, causing it to accelerate. Furthermore, machine learning-generated deepfakes (i.e., manipulated audio or video) have added a new, and potentially more sinister, twist to the risk of miscalculation, misperception, and inadvertent escalation that originates in cyberspace but has a very real impact in the physical world. The scale of this problem ranges from smartphones and household electronic appliances to farming equipment, roadways, and pacemakers; these applications are part of the ubiquitous connectivity phenomenon known as the Internet of Things.

Second, cyber attacks that target AI systems can give attackers access to machine learning algorithms and potentially vast amounts of data from facial recognition and intelligence collection and analysis systems. Such data and algorithms could be used, for example, to cue precision munitions strikes and to support intelligence, surveillance, and reconnaissance missions.

Third, AI systems used in conjunction with existing cyber offense tools might become powerful force multipliers, thus enabling sophisticated cyber attacks to be executed on a larger scale (both geographically and across networks), at faster speeds, simultaneously across multiple military domains, and with greater anonymity than before.

Many of the ways AI can be used to augment cyber capabilities and develop AI-enhanced cyber weapons (or "adversarial AI") may appear relatively benign, for example, enumerating the target space or repackaging malware to avoid detection. However, the speed and scope of the next generation of AI cyber tools will likely have destabilizing effects.

The machine speed of AI-augmented cyber tools could enable even a low-skilled attacker to penetrate an adversary's cyber defenses or to use advanced persistent threat tools to find new vulnerabilities. For example, air-gapped, nuclear-powered submarines considered secure when submerged could become increasingly vulnerable to a new generation of low-cost (and possibly black-market), highly automated advanced persistent threat cyber attacks when docked for maintenance.

An attacker could also apply AI machine learning techniques to target autonomous, dual-use early warning and other operating systems (e.g., NC3; intelligence, surveillance, and reconnaissance; early warning; and robotic control networks) with "weaponized software" techniques such as hacking, subversion, spoofing, or deception. This could cause unpredictable and potentially undetectable errors, malfunctions, and behavioral manipulation in weapon systems, an approach known as "data poisoning" when the corrupted inputs are training data.

This is a problem because machine learning systems need high-quality datasets to train their algorithms. Injecting "poisoned" data into those training sets could lead these systems to perform in undesired and potentially undetectable ways. Furthermore, as the linkages between digital and physical systems (the Internet of Things) expand, the potential for an adversary to use cyber attacks to produce both kinetic and non-kinetic effects will increase.
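To make the mechanism concrete, the following is a minimal sketch of label-flipping data poisoning against a toy classifier, written in Python with scikit-learn. The synthetic dataset and logistic regression model are hypothetical stand-ins chosen for illustration, not a representation of any real early-warning or NC3 system.

```python
# Minimal illustration of label-flipping data poisoning on a toy classifier.
# The dataset and model are synthetic stand-ins; no real defense system is modeled.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "sensor readings" with two classes (e.g., benign vs. threat).
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

def train_and_score(labels):
    """Train on the given training labels and report accuracy on clean test data."""
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return model.score(X_test, y_test)

# Baseline: train on clean labels.
print("clean accuracy:   ", round(train_and_score(y_train), 3))

# Poison the training set by flipping 10 percent of its labels.
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=len(poisoned) // 10, replace=False)
poisoned[idx] = 1 - poisoned[idx]
print("poisoned accuracy:", round(train_and_score(poisoned), 3))
```

The point of the sketch is that the corruption happens upstream, in the training data, and can degrade the deployed model's behavior without any visible change to the system itself.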

A significant risk in the operation of autonomous systems is the time that passes between a system failure (i.e., the system performing in a manner other than the human operator intended) and the moment a human operator takes corrective action. If the failure is the result of a deliberate act, this window will be compressed.

How AI and Cyber Could Improve Nuclear Security

AI could actually improve nuclear security. Several U.S. national security officials believe that AI, used as a force multiplier for both defensive and offensive cyber weapons, will have a transformative impact on cyber security. Recent advances in machine learning have helped resolve technical bottlenecks in several fields of AI, enabling qualitative improvements to a wide range of autonomous weapon systems.

Taken together, machine learning and autonomy could transform nuclear security in a multitude of ways, with both positive and negative implications for cyber security and strategic stability.

On the one hand, AI might reduce a military's vulnerability to cyber attacks. AI cyber-defense tools (or "counter-AI"), designed to recognize changes to patterns of behavior in a network, detect anomalies and software code vulnerabilities, and apply machine learning techniques, such as "deep learning," to identify deviations from normal network activity, could form a more robust defense against cyber intrusions. For example, if certain code fragments mimic existing malware structures, machine learning algorithms might be used to locate vital evidence to ascertain the identity of an attacker.
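For illustration only, here is a minimal sketch of the kind of anomaly detection described above, using Python and scikit-learn's IsolationForest on hypothetical network-flow features (bytes sent, packets per second, distinct ports contacted). The features, values, and thresholds are assumptions made for the example, not a description of any deployed counter-AI tool.

```python
# Minimal sketch of ML-based network anomaly detection.
# The three features are hypothetical stand-ins for whatever telemetry a real
# cyber-defense tool would collect from its network.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# "Normal" traffic: flows clustered around typical values of
# [bytes sent, packets per second, distinct ports contacted].
normal = rng.normal(loc=[500, 30, 5], scale=[100, 10, 2], size=(5000, 3))

# Learn a model of normal activity, then flag deviations in new traffic.
detector = IsolationForest(contamination=0.01, random_state=42).fit(normal)

new_traffic = np.vstack([
    rng.normal(loc=[500, 30, 5], scale=[100, 10, 2], size=(5, 3)),  # routine flows
    [[50000, 900, 120]],                                            # suspicious burst
])
scores = detector.decision_function(new_traffic)  # lower score = more anomalous
flags = detector.predict(new_traffic)             # -1 = anomaly, 1 = normal

for score, flag in zip(scores, flags):
    label = "ANOMALY" if flag == -1 else "normal"
    print(f"score={score:+.3f}  {label}")
```

The design choice here is typical of such tools: the model is trained only on what normal activity looks like, so it can flag novel behavior without needing examples of every possible attack.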

With this goal in mind, the Defense Department's Defense Innovation Unit is prototyping an application that leverages AI to decipher high-level strategic questions, map probabilistic chains of events, and develop alternative strategies. This could make Defense Department systems more resilient to AI-augmented cyber attacks and enable them to reconfigure and fix errors faster than humans could.

On the other hand, autonomy itself will likely increase a military's vulnerability to cyber attacks. AI will increase the anonymity of attacks in cyberspace, which rely on stealth, deception, and stratagem. An adversary could, for example, use malware to take control of, manipulate, or fool the behavior- and pattern-recognition systems of autonomous platforms, as a team of Chinese white-hat hackers wirelessly and remotely did against a Tesla Model X.

Similar attacks against modern weapon systems would be relatively easy to execute but very difficult to detect and attribute, and therefore to counter. Ironically, the use of machine learning to strengthen cyber security might simultaneously increase the number of points at which an attacker can interact with, and thus potentially manipulate or otherwise interfere with, a network.

New Risks to the Security of Nuclear Systems

During the early stages of a cyber operation, it is generally unclear whether an adversary intends to collect intelligence or prepare for an offensive attack. The blurring of cyber offense-defense will likely compound an adversary’s fear of a preemptive strike and increase first-mover incentives. In extremis, strategic ambiguity caused by this issue may trigger use-them-or-lose-them situations.

Open-source research suggests, for example, that Chinese analysts view the vulnerability of China’s NC3 to cyber infiltrations — even if an attacker’s objective was limited to cyber espionage — as a highly escalatory national security threat. By contrast, Russian analysts tend to view Russia’s nuclear command, control, communications, and intelligence (C3I) network as more isolated, and thus, relatively insulated from cyber attacks.

Even a modicum of uncertainty about the effectiveness of AI-augmented cyber capabilities during a crisis or conflict would reduce both sides' risk tolerance, increasing the incentive to strike preemptively.

Furthermore, any potential advantages from enhanced reassurances premised on comprehensive intelligence would require equal access to intelligence and analysis systems between great and rising powers. Shared confidence in the accuracy and credibility of these systems would also be needed. Most optimistically, the intentions of all rival states would need to be genuinely benign. In a world of "revisionist" rising powers, the prospects of such a rosy outcome seem improbable.

Against the backdrop of a competitive strategic environment in which states are inclined to assume the worst of others’ intentions, one state’s efforts to enhance the survivability of its strategic forces may be viewed by others as a threat to their nuclear retaliatory capability or second-strike capacity.

During crisis conditions, for example, an offensive AI cyber tool that succeeds in compromising an adversary’s nuclear weapon systems — resulting in an “asymmetric information” situation — could cause either or both sides to overstate (or understate) their retaliatory capabilities, and in turn, be more inclined to act in a risky and escalatory manner.

It is now thought possible that a cyber attack (i.e., spoofing, hacking, manipulation, and digital jamming) could infiltrate a nuclear weapons system, threaten the integrity of its communications, and ultimately (and possibly unbeknown to its target) gain control of both its nuclear and non-nuclear command and control systems.

AI has not yet evolved to a point where it could credibly threaten the survivability of a state's nuclear second-strike capability. However, recent reports of successful cyber attacks against dual-use early-warning systems suggest that cyber intrusions against NC3 are fast becoming a reality. Irrespective of the technical feasibility of "left of launch" operations (i.e., preemptive operations to prevent an adversary from launching its missiles) against NC3 systems, the perception alone that this capability exists would be inherently destabilizing. Moreover, while the veracity of these counterforce capabilities remains highly contested, several states, including the United States, have already shifted their strategic force postures and doctrine to reflect these emergent threat perceptions.

Somewhat paradoxically, AI applications designed to enhance cyber security for nuclear forces could simultaneously make cyber-dependent nuclear weapon systems (e.g., communications, data processing, or early-warning sensors) more vulnerable to cyber attacks.

Pathways to Escalation

AI-enhanced cyber attacks against nuclear systems would be almost impossible to detect and authenticate, let alone attribute, within the short timeframe for initiating a nuclear strike. According to open sources, operators at the North American Aerospace Defense Command have less than three minutes to assess and confirm initial indications from early-warning systems of an incoming attack. This compressed decision-making timeframe could put political leaders under intense pressure to decide whether to escalate during a crisis on the basis of incomplete (and possibly false) information about the situation.

Ironically, new technologies designed to enhance information, such as 5G networks, machine learning, big-data analytics, and quantum computing, can also undermine its clear and reliable flow and communication, which is critical for effective deterrence.

Advances in AI could also exacerbate this cyber security challenge by enabling improvements to cyber offense. By automating advanced persistent threat (or "hunting for weaknesses") operations, machine learning might dramatically reduce the extensive manpower and high levels of technical skill required to execute such operations, especially against hardened nuclear targets.

Information Warfare Could Lead to Escalation

Machine learning, big-data analytics, and sensing technologies, supported by 5G networks, could alert commanders to incoming threats with increased speed and precision. This could result in fewer accidents in the sensitive command and control environment. However, this technological coalescence will also amplify escalation risks in two ways. First, AI machine learning used as a force multiplier for cyber offense (e.g., data poisoning, spoofing, deepfakes, manipulation, hacking, and digital jamming) would be considerably more difficult to detect, especially if an attacker used advanced persistent threat tools in a spectrum-contested environment. Second, in the unlikely event that an attack was successfully detected, threat identification (or attribution) at machine speed would be virtually impossible. In short, the key security challenge lies not in making more convincing fakes, but in detecting the spread of false information.

AI machine learning techniques might also exacerbate the escalation risks by manipulating the digital information landscape, where decisions about the use of nuclear weapons are made. Given current tensions between the United States and other nuclear powers — China, Russia, and North Korea — it is possible to imagine unprovoked escalation caused by a malicious third-party (or state-proxy) clandestine action.

During a crisis, the inability of a state to determine an attacker’s intent may lead an actor to conclude that an attack (threatened or actual) was intended to undermine its nuclear deterrent. For example, an AI-enabled, third-party-generated deepfake, coupled with data-poisoning cyber attacks, could spark an escalatory crisis between two (or more) nuclear states.

As demonstrated at a recent workshop hosted by the International Institute for Strategic Studies, malign manipulation of input data received by early-warning systems might not only subvert the output of AI systems in a specific situation, but also undermine the reliability of an entire algorithm network environment if executed during the program’s training phase.

Consider the following fictional scenarios, in which the use of deepfakes and spoofing by nefarious third-party, non-state, or state-proxy actors triggers unintentional and unprovoked escalation.

Fictional Example #1: Deepfakes

To incite conflict between two rival states, State A uses proxy hackers to fabricate deepfake video or audio material depicting senior military commanders of State B conspiring to launch a preemptive strike on State C. This deepfake footage is then deliberately fed into State C's AI-augmented intelligence collection and analysis systems, provoking State C to escalate the situation with strategic consequences. State B responds to the threat of preemption with a retaliatory strike.

Escalation in this case would, of course, be deliberate. Thus, increased escalation risk as a result of technology is not always inadvertent or accidental. For example, escalation risks caused by the aggressive U.S.-Soviet expansion of counterforce technology during the Cold War reflected shifting nuclear doctrines on both sides (i.e., away from mutual assured destruction), not the pursuit of these technologies themselves. Moreover, AI technology could enable an adversary to pursue a predetermined escalatory path. In fact, AI may be developed precisely for this purpose.

Fictional Example #2: Spoofing

State A launches a malicious AI-enhanced cyber attack to spoof State B’s AI-enabled autonomous sensor platforms and automated target recognition systems, in such a way that the weapon system (a human-supervised automated target recognition system) is fooled into interpreting a civilian object (a commercial airliner, for example) as a military target. State B, based on subverted information and the inability of human supervisors to detect the spoofed imagery in time to take corrective action, accidentally (and unintentionally) escalates the situation.

In this example, the spoofing attack on the weapon system's algorithm is executed in such a way that the imagery appears to the recognition system as indistinguishable from a valid military target, escalating the situation based on a false premise. The manipulated imagery would be unlikely to fool the human eye: AI researchers have demonstrated that image recognition systems can be induced to "see" objects that do not exist, even when the input looks unremarkable to a human observer.
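The following is a minimal sketch of how such an adversarial ("spoofing") perturbation works in principle, using Python, scikit-learn's handwritten-digits dataset, and a simple linear classifier. It is a toy analogy under stated assumptions: real automated target recognition systems are far more complex, but the underlying vulnerability, small and carefully structured input changes flipping a model's output, is the same.

```python
# Minimal sketch of an FGSM-style adversarial perturbation against a linear image
# classifier. The digits dataset and logistic regression model are toy stand-ins,
# not a model of any real target-recognition system.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

digits = load_digits()
X = digits.data / 16.0          # scale pixel values to the range [0, 1]
y = digits.target

model = LogisticRegression(max_iter=5000).fit(X, y)

x = X[0:1]                       # a single image the model classifies correctly
original_pred = model.predict(x)[0]

# For a linear model, the fastest way to lower the true class's score is to step
# against the sign of that class's weight vector (a simplified, untargeted FGSM).
w_true = model.coef_[original_pred]

for epsilon in (0.05, 0.1, 0.2, 0.3, 0.5):
    # Apply a small per-pixel change of at most epsilon, keeping pixels valid.
    x_adv = np.clip(x - epsilon * np.sign(w_true), 0.0, 1.0)
    adv_pred = model.predict(x_adv)[0]
    if adv_pred != original_pred:
        print(f"epsilon={epsilon}: prediction flipped from {original_pred} to {adv_pred}")
        break
else:
    print("no flip within the tested epsilon range")
```

The perturbation budget per pixel is tiny relative to the full brightness range, which is why such manipulations can change a model's output while remaining easy for a human to overlook or, conversely, obviously wrong to a human yet convincing to the machine.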

The explainability (or “black box”) problem associated with AI applications may further compound these dynamics. Insufficient understanding of how and why AI algorithms reach a particular judgment or decision would complicate the task of determining if datasets had been deliberately compromised to manufacture false outcomes — such as attacking incorrect targets or misdirecting allies during combat.
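As a simple illustration of one technique analysts could apply to probe such a black box, the sketch below uses permutation importance from scikit-learn to estimate which input features drive a toy model's decisions. The model, features, and interpretation are hypothetical; this is a sketch of the general idea, not a remedy for the explainability problem in deployed weapon systems.

```python
# Minimal sketch of permutation importance: shuffle one feature at a time and
# measure how much accuracy drops, revealing which inputs the model relies on.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical "sensor feed" features and labels.
X, y = make_classification(n_samples=1000, n_features=6, n_informative=3, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

model = RandomForestClassifier(random_state=1).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=1)

# A feature whose shuffling sharply degrades accuracy is one the model depends on;
# an unexpectedly influential feature can be a clue that training data was manipulated.
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: importance = {importance:+.3f}")
```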

Furthermore, as humans and AI team up to accomplish particular missions, the opacity associated with how AI systems reach a decision may cause an operator to have either too much or too little confidence in a system’s performance.

Consequently, unless a system's machine learning algorithm is frozen at the end of the training phase, once deployed the system could learn something it was not intended to, or even perform a task or mission that its human designers did not expect. This issue is one of the main reasons why the use of AI machine learning in the context of weapon systems remains, for now, confined mostly to experimental research.

Even if nuclear early-warning systems eventually detected the subversion, the heightened uncertainty and tension caused by the alert might impel the respective militaries to place their nuclear weapons on high alert status. With nuclear weapons ready to launch at a moment's notice, such a skewed assessment would likely precipitate worst-case-scenario thinking that could spark inadvertent escalation.

AI-augmented cyber intelligence-gathering tools (or espionage) used during a crisis could therefore easily be misinterpreted by an adversary as the prelude to a preemptive attack on its nuclear forces.

Conclusion 

Rapid advances in military-use AI and autonomy could amplify the speed, power, and scale of future attacks in cyberspace via several interconnected mechanisms — the ubiquitous connectivity between physical and digital information ecosystems; the creation of vast treasure troves of data and intelligence harvested via machine learning; the formation of powerful force multipliers for increasingly sophisticated, anonymous, and possibly multi-domain cyber attacks.

AI systems could have both positive and negative implications for cyber and nuclear security. On balance, however, several factors make this development particularly troublesome: the increasing number of attack vectors that threaten states' NC3 systems; a new generation of destabilizing, AI-empowered offensive cyber capabilities (deepfakes, spoofing, and automated advanced persistent threat tools); the blurring of the AI-cyber offense-defense distinction; uncertainty and strategic ambiguity about AI-augmented cyber capabilities; and, not least, a competitive and contested geo-strategic environment.

At the moment, AI's impact on nuclear security remains largely theoretical. Now is the time, therefore, for positive intervention to mitigate (or at least manage) the potentially destabilizing and escalatory risks posed by AI, and to help steer the technology toward bolstering strategic stability as it matures.

The interaction between AI and cyber technology and nuclear command and control raises more questions than answers. What can we learn from the cyber community to help us use AI to preempt the risks posed by AI-enabled cyber attacks? And how might governments, defense communities, academia, and the private sector work together toward this end?

 

 

James Johnson is a Postdoctoral Research Fellow at the James Martin Center for Nonproliferation Studies at the Middlebury Institute of International Studies (MIIS), Monterey. His latest book project is entitled, Artificial Intelligence & the Future of Warfare: USA, China, and Strategic Stability. Twitter: @James_SJohnson

Eleanor Krabill is a Master of Arts in Nonproliferation and Terrorism candidate at Middlebury Institute of International Studies (MIIS), Monterey. Twitter: @EleanorKrabill

Image: U.S. Air Force (Photo by Senior Airman Jonathan McElderry)