Artificial Intelligence, Foresight, and the Offense-Defense Balance
There is a growing perception that AI will be a transformative technology for international security. The current U.S. National Security Strategy names artificial intelligence as one of a small number of technologies that will be critical to the country’s future. Senior defense officials have commented that the United States is at “an inflection point in the power of artificial intelligence” and even that AI might be the first technology to change “the fundamental nature of war.”
However, there is still little clarity regarding just how artificial intelligence will transform the security landscape. One of the most important open questions is whether applications of AI, such as drone swarms and software vulnerability discovery tools, will tend to be more useful for conducting offensive or defensive military operations. If AI favors the offense, then a significant body of international relations theory suggests that this could have destabilizing effects. States could find themselves increasingly able to use force and increasingly frightened of having force used against them, making arms-racing and war more likely. If AI favors the defense, on the other hand, then it may act as a stabilizing force.
Anticipating the impact of AI on the so-called “offense-defense balance” across different military domains could be extremely valuable. It could help us to foresee new threats to stability before they arise and act to mitigate them, for instance by pursuing specific arms agreements or prioritizing the development of applications with potential stabilizing effects.
Unfortunately, the historical record suggests that attempts to forecast changes in the offense-defense balance are often unsuccessful. It can even be difficult to detect the changes that newly adopted technologies have already caused. In the lead-up to the First World War, for instance, most analysts failed to recognize that the introduction of machine guns and barbed wire had tilted the offense-defense balance far toward defense. The years of intractable trench warfare that followed came as a surprise to the states involved.
While there are clearly limits on the ability to anticipate shifts in the offense-defense balance, some forms of technological change have more predictable effects than others. In particular, as we argue in a recent paper, changes that essentially scale up existing capabilities are likely to be much easier to analyze than changes that introduce fundamentally new capabilities. Substantial insight into the impacts of AI can be achieved by focusing on this kind of quantitative change.
Two Kinds of Technological Change
In a classic analysis of arms races, Samuel Huntington draws a distinction between qualitative and quantitative changes in military capabilities. A qualitative change involves the introduction of what might be considered a “new form of force.” A quantitative change involves the expansion of an existing form of force.
Although this is a somewhat abstract distinction, it is easy to illustrate with concrete examples. The introduction of dreadnoughts in naval surface warfare in the early twentieth century is most naturally understood as a qualitative change in naval technology. In contrast, the subsequent naval arms race — which saw Britain and Germany competing to manufacture ever larger numbers of dreadnoughts — represented a quantitative change.
Attempts to understand changes in the offense-defense balance tend to focus almost exclusively on the effects of qualitative changes. Unfortunately, the effects of such qualitative changes are likely to be especially difficult to anticipate. One reason foresight about such changes is difficult is that the introduction of a new form of force — from the tank to the torpedo to the phishing attack — will often warrant the introduction of substantially new tactics. Since these tactics emerge at least in part through a process of trial and error, as both attackers and defenders learn from the experience of conflict, there is a limit to how much can ultimately be foreseen.
Although quantitative technological changes are given less attention, they can also in principle have very large effects on the offense-defense balance. Furthermore, these effects may exhibit certain regularities that make them easier to anticipate than the effects of qualitative change. Focusing on quantitative change may then be a promising way forward to gain insight into the potential impact of artificial intelligence.
How Numbers Matter
To understand how quantitative changes can matter, and how they can be predictable, it is useful to consider the case of a ground invasion. If the sizes of two armies double in the lead-up to an invasion, for example, then it is not safe to assume that the effect will simply cancel out and leave the balance of forces the same as it was prior to the doubling. Rather, research on combat dynamics suggests that increasing the total number of soldiers will tend to benefit the attacker when force levels are sufficiently low and benefit the defender when force levels are sufficiently high. The reason is that the initial growth in numbers primarily improves the attacker’s ability to send soldiers through poorly protected sections of the defender’s border. Eventually, however, the border becomes increasingly saturated with ground forces, eliminating the attacker’s ability to exploit poorly defended sections.
Figure 1: A simple model illustrating the importance of force levels. The ability of the attacker (in red) to send forces through poorly defended sections of the border rises and then falls as total force levels increase.
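The rise-then-fall pattern in Figure 1 can be reproduced with a toy simulation. The sketch below is our own illustration, not a model from the literature, and every parameter (the number of border sections, the garrison size needed to stop a breakthrough, the per-section attacker capacity) is an arbitrary assumption chosen only to exhibit the dynamic:

```python
import numpy as np

def expected_breakthrough(scale, sections=100, base_defenders=50,
                          base_attackers=50, stop_threshold=5,
                          section_capacity=10, trials=2000, seed=0):
    """Toy border-saturation model. The defender's troops scatter across
    border sections; the attacker pushes troops through any section whose
    garrison is too small to stop a breakthrough column, up to a fixed
    per-section capacity. Returns the average number of attacking troops
    that get through as both sides' forces are multiplied by `scale`.
    """
    rng = np.random.default_rng(seed)
    defenders = int(base_defenders * scale)
    attackers = int(base_attackers * scale)
    total = 0.0
    for _ in range(trials):
        # Defender allocation is noisy: each troop lands in a random section.
        garrisons = rng.multinomial(defenders, [1 / sections] * sections)
        weak = np.sum(garrisons < stop_threshold)  # poorly defended sections
        # Breakthrough is limited by attacker numbers and by how many
        # troops the weak sections can admit.
        total += min(attackers, weak * section_capacity)
    return total / trials

# Breakthrough volume rises with force levels at first, then collapses
# once the border saturates with defenders.
for scale in (0.5, 1, 2, 4, 8, 16, 32):
    print(scale, expected_breakthrough(scale))
```

At low force levels, nearly every section is poorly defended and the attacker's throughput grows with the attacker's numbers; at high force levels, garrisons everywhere exceed the stopping threshold and throughput collapses toward zero, mirroring the figure.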
This phenomenon is also likely to arise in many other domains where a defender must protect multiple vulnerable points. In the cyber domain, for example, increasing the number of software vulnerabilities that attackers and defenders can each discover will benefit the attacker at first: the primary effect will initially be to increase the attacker's ability to find vulnerabilities that the defender has failed to discover and patch. Given sufficient growth, however, the defender will eventually discover every vulnerability that can be discovered, leaving nothing behind for the attacker to exploit.
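This crossover can be made precise with a stylized model of our own (an illustration, not a claim about real vulnerability discovery rates): suppose a program contains a fixed number of vulnerabilities and improving tools let each side independently find any given one with probability p. The attacker can exploit only what the defender missed, so the attacker's expected edge is proportional to p(1 − p), which rises until p = 0.5 and then falls to zero as the defender approaches full coverage:

```python
def expected_exploitable(v, p):
    """Expected number of vulnerabilities the attacker finds but the
    defender does not, when each side independently finds each of the
    `v` vulnerabilities with probability `p`: v * p * (1 - p)."""
    return v * p * (1 - p)

# As discovery capability p grows, the attacker's edge rises then falls.
for p in (0.1, 0.3, 0.5, 0.7, 0.9, 1.0):
    print(p, expected_exploitable(100, p))
```

The peak at p = 0.5 is where the attacker's discovery ability has grown substantially while the defender's coverage is still far from complete — the regime of greatest concern in the discussion that follows.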
In general, growth in numbers will often benefit the attacker when numbers are sufficiently low and benefit the defender when they are sufficiently high. We refer to this regularity as offensive-then-defensive scaling — and suggest that it can be helpful for predicting shifts in the offense-defense balance in a wide range of domains.
Artificial Intelligence and Quantitative Change
Applications of artificial intelligence will undoubtedly be responsible for an enormous range of qualitative changes to the character of war. It is easy to imagine states such as the United States and China competing to deploy ever more novel systems in a cat-and-mouse game that has little to do with quantity. An emphasis on qualitative advantage over quantitative advantage has been a fairly explicit feature of American military strategy since at least the so-called “Second Offset” strategy that emerged in the middle of the Cold War.
However, some emerging applications of artificial intelligence do seem to lend themselves most naturally to competition on the basis of rapidly increasing quantity. Armed drone swarms are one example. Paul Scharre has argued that the military utility of these swarms may lie in the fact that they offer an opportunity to substitute quantity for quality. A large swarm of individually expendable drones may be able to overwhelm the defenses of individual weapon platforms, such as aircraft carriers, by attacking from more directions or in more waves than the platform’s defenses can manage. If this method of attack is in fact viable, one could see a race to build ever-larger swarms, ultimately resulting in swarms containing billions of drones. The phenomenon of offensive-then-defensive scaling suggests that growing swarm sizes could initially benefit attackers — who can concentrate their forces against less well-defended targets and parts of targets — before potentially allowing defensive swarms to win out if sufficient growth in numbers occurs.
Automated vulnerability discovery tools, which have the potential to vastly increase the number of software vulnerabilities that both attackers and defenders can discover, are another relevant example. The DARPA Cyber Grand Challenge recently showcased machine systems autonomously discovering, patching, and exploiting software vulnerabilities. Recent work on novel techniques such as “deep reinforcement fuzzing” also suggests significant promise. The computer security expert Bruce Schneier has suggested that continued progress will ultimately make it feasible to discover and patch every single vulnerability in a given piece of software, shifting the cyber offense-defense balance significantly toward defense. Before this point, however, there is reason for concern that these new tools could initially benefit attackers most of all.
Forecasting the Impact of Technology
The impact of AI on the offense-defense balance remains highly uncertain. The greatest impact might come from an as-yet-unforeseen qualitative change. Our contribution here is to point out one particularly precise way in which AI could impact the offense-defense balance: through quantitative increases of capabilities in domains that exhibit offensive-then-defensive scaling. Even if this particular prediction proves mistaken, we hope that engaging with it will help researchers identify other impacts. By foreseeing and understanding these potential impacts, policymakers could be better prepared to mitigate the most dangerous consequences — by prioritizing the development of applications that favor defense, investigating countermeasures, or constructing stabilizing norms and institutions.
Work to understand and forecast the impacts of technology is hard and should not be expected to produce confident answers. However, the importance of the challenge means that researchers should still try — while doing so in a scientific, humble way.
This publication was made possible (in part) by a grant to the Center for a New American Security from Carnegie Corporation of New York. The statements made and views expressed are solely the responsibility of the author(s).
Ben Garfinkel is a DPhil scholar in International Relations, University of Oxford, and research fellow at the Centre for the Governance of AI, Future of Humanity Institute.
Allan Dafoe is associate professor in the International Politics of AI, University of Oxford, and director of the Centre for the Governance of AI, Future of Humanity Institute. For more information, see www.governance.ai and www.allandafoe.com.