From Deception to Attrition: AI and the Changing Face of Warfare
Editor’s Note: This article was submitted in response to the call for ideas issued by the co-chairs of the National Security Commission on Artificial Intelligence, Eric Schmidt and Robert Work. It addresses the first question (part a.): How might artificial intelligence affect the character and/or the nature of war?
Ernest Swinton’s “The Defence of Duffer’s Drift” has educated generations of military professionals in the West. It introduces a set of tactical lessons through six dreams experienced by Lt. Backsight Forethought, who was responsible for the defense of a river crossing in the Boer War. Each dream played out the combat situations triggered by the lieutenant’s previous tactical decisions, revealing his mistakes and allowing him to gradually improve the defense. This is similar to how an artificial intelligence-powered system might evaluate combat scenarios. The only differences are that it would run millions of these evaluations instead of six, would consider far more information, and would do so at incredible speed. This is why artificial intelligence (AI) is likely to revolutionize warfare: It could qualitatively improve the key factors in war — human strategizing and decision-making speed.
Assessments and predictions about the future of warfare are, like any forecasting, demanding endeavors. One challenge is resisting the temptation to project past experience linearly onto the expected future, especially when new technological developments seem to be rendering our older theories of conflict obsolete. Forecasting the evolution of warfare under the impact of AI is thus a formidable task. An approach that can reduce errors is to combine two sets of knowledge: an accurate understanding of the micro-dynamics of war (the most basic drivers of combat interaction) and an assessment of AI’s impact on war that considers AI-specific abilities rather than humans’ innate limitations.
The Micro-Dynamics of War
Traditionally, military and security experts have perceived warfare in Clausewitzian terms, through preponderance of force and pervasive uncertainty. Yet this view has been shaped by humans’ physical and cognitive limitations rather than by the fundamental dynamics of war. Humans are not the quickest or most agile creatures: They cannot swiftly move out of a bullet’s trajectory. Nor are they the sharpest shooters: It took about 10,000 rounds to produce a casualty in World War II, and the number was even higher during the U.S. counterinsurgency operations in Afghanistan and Iraq.
We learned to compensate for poor accuracy by relying on automatic fire or indiscriminate artillery strikes. Because we are not quick enough in decision-making and evasive movement, we chose the path of building armor. However, the fundamental element of combat dynamics is not firepower or fire accuracy. Based on my dissertation research, I argue that it is the manipulation of one’s relative degree of lethal exposure in combat, and it is by better exploiting lethal exposure that AI systems will affect the future of warfare. Capabilities are just a reservoir from which a side can supply its combat efforts: A side with more troops and more weapons will have a larger reservoir.
Lethal exposure has always been the “invisible hand” of war, because it largely controls how power preponderance can influence war outcomes. The currently dominant view on warfare emphasizes the attrition of forces as the underlying combat dynamic. The proposed exposure-driven concept of combat instead brings to prominence the critical role that the speed of attrition of one’s troops and weapons plays in military effectiveness and victory.
To better understand the logic of lethal exposure, consider a classic combat analysis scenario with interacting Blue and Red forces. Exposure is the quality that reflects the degree to which a side’s combat-engaged troops are subject to its competitor’s lethal fire, producing a varying rate of attrition. Blue’s exposure to Red’s lethal fire, and thus its attrition, depends on its ability to avoid identification by Red: Blue’s degree of exposure is a function of Red’s knowledge about Blue’s location. At a minimum, exposure is determined by two factors: Blue’s physical presence within Red’s range of fire, and Red’s knowledge of Blue’s exact location inside that targeting range. To use a more familiar expression, the combat exposure of a military unit is the inverse of its degree of stealth.
It is not possible to strike an opponent beyond one’s weapons’ reach, no matter how accurate one is. And even inside the targeting range, it is hardly possible to hit the opponent without knowing its location. An extreme illustrative case is the example of a perfect ambush — when the ambusher has the opponent in its weapons’ sights, while the ambushed has no clue about it. The exposure is close to zero for the ambusher and is maximum for the ambushed. The degree of exposure of forces can completely block the impact of capabilities and fire accuracy on troops’ attrition speed. It is when the value of exposure is very high — like in hand-to-hand combat, or trench warfare — that the effects of capabilities and fire accuracy are maximized. This logical illustration of combat dynamics allows us to open the black box of combat uncertainty.
To see this, consider two fighting forces, one with 100 units and the other with 30. Holding the conventional combat-related factors and conditions equal, numerical superiority will bring victory. However, as we unilaterally decrease the smaller force’s degree of lethal exposure (through various tactical moves or technology), its speed of combat attrition will start diminishing too. Relative to the unchanged attrition of the larger force, the smaller force’s speed of attrition will eventually drop below a threshold at which the larger force degrades not just faster, but proportionally faster. This threshold, at which the larger side suffers a higher speed of proportional attrition (its X percent of troops and weapons degrading more quickly than the corresponding X percent of the smaller combatant’s forces), can rightfully be labeled the margin of military victory. Technically, in order to remain alone on the battlefield after combat, the 30-unit force has to achieve a kill rate higher than about 3.33 (that is, 100 divided by 30) opposing units for each of its own units destroyed.
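The arithmetic of the 100-versus-30 example can be sketched as a toy exchange-ratio model. This is a minimal illustration of the threshold described above, not the model from the author’s dissertation; the function names and the lockstep-attrition assumption are mine.

```python
# Toy exchange-ratio model for the 100-vs-30 example.
# Assumption (illustrative only): losses accrue in lockstep, so by the
# time the smaller side has lost all its units, it has destroyed
# kill_rate * small opposing units.

def margin_of_victory(small: int, large: int) -> float:
    """Kill rate the smaller force must exceed to remain alone on the
    battlefield: it must destroy all `large` opposing units before
    losing its own `small` units."""
    return large / small

def outlasts(small: int, large: int, kill_rate: float) -> bool:
    """True if the smaller force, trading `kill_rate` enemy units per
    own unit lost, destroys the larger force first."""
    return kill_rate * small > large

# The 30-unit force needs a kill rate above 100 / 30, about 3.33:
assert abs(margin_of_victory(30, 100) - 3.33) < 0.01
assert not outlasts(30, 100, 3.0)  # below the margin: the 100 wins
assert outlasts(30, 100, 3.4)      # above the margin: the 30 wins
```

The point of the sketch is that the margin of victory is a pure ratio of force sizes: reducing exposure does not need to eliminate attrition, only push the exchange rate past that ratio.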
The described combat micro-dynamics allow even a belligerent with considerably fewer forces to win combat engagements and consequently wars against a militarily more powerful opponent.
Manipulating the degree of exposure requires an advantage in decision-making speed, movement speed, and environmental awareness. A side can decrease its own exposure while instantly increasing the opponent’s by knowing the combat environment well (including the opponent’s location), deciding quickly, and moving faster on the battlefield. AI-driven combat systems could have crucial impact in all three domains and change military strategy.
AI’s Capabilities in Warfare
In September 2019 the Russian military conducted the strategic exercise Tsentr-2019, allegedly testing an AI-based command, control, communications, computers, intelligence, surveillance, and reconnaissance system. According to Russian open sources, the new “information system of combat command and control” gathers combat-related information from various sources in real time, assesses combat scenarios, and provides commanders with a ranked list of combat mission decisions and an assessment of the resulting scenarios.
While still rudimentary, this application transformed AI from an enabler into an actor in the combat environment. If human commanders implement its recommendations, it makes little difference whether the AI directly commands automated combat units or acts through human fighters, although the second path is much slower. We may be witnessing the birth of an automated battlefield commander.
Would such an AI system actually be able to conduct combat assessment and mission planning better than a trained human professional? Claims that the military context generates unique challenges may not hold for AI. We should think of AI as another actor with its own specific capabilities: like an alien that sees our environment outside the human “visible spectrum,” an actor that operates by simplifying and stripping a problem down to its essential elements in its search for optimally tailored solutions. A sound way of exploring this issue has been advocated by Iyad Rahwan of the Max Planck Institute for Human Development. Rahwan suggests taking an anthropological approach: Since AI systems have become so complex that we cannot understand and predict what they will do, we should instead observe their behavior “in the wild.”
This would allow us to perceive the combat environment the way AI will see it. The discussion above proposed the most likely candidate for AI’s perception of combat: a view driven by lethal exposure. AI systems are optimizers, fulfilling a task based on a set of incentives. The accuracy of their work depends on the quality of the data humans feed them. As technology progresses, sensors that connect AI combat systems directly to the environment, including counter-battery and other radars, video cameras, electro-optics, and even satellite observation, will solve the data collection problem and raise the output quality of AI-driven military systems. Because exposure, as described earlier, is the key driver of combat uncertainty, AI will identify it through data analysis and focus on instrumentalizing it.
Despite skepticism among researchers and policy practitioners, including claims that “you can’t ‘AI’ your way out of physics,” AI has shown the capacity to outperform the best human strategists not only in perfect-information games such as chess and Go. A revelatory example is the recent development of Pluribus, an AI able to win against elite professional players in multiplayer poker, an imperfect-information strategic interaction. Moreover, the researchers ingeniously reduced the game’s high complexity, solving the problem of too many decision points by bundling situationally similar ones into groups and then treating all decisions within a group as identical.
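The bundling trick described above can be sketched in a few lines: coarsen a state’s continuous features into a grid so that many distinct decision points collapse into a few buckets, and learn one policy entry per bucket. The features (`hand_strength`, `pot_odds`), the rounding rule, and the policy values are illustrative assumptions, not Pluribus’s actual abstraction.

```python
# Sketch of decision-point abstraction: situationally similar states
# share one bucket and therefore one decision. Feature names and the
# bucketing rule are hypothetical, for illustration only.
from collections import defaultdict

def bucket(hand_strength: float, pot_odds: float) -> tuple:
    # Round each continuous feature to one decimal place, collapsing
    # a continuum of states onto a small grid of cells.
    return (round(hand_strength, 1), round(pot_odds, 1))

# One policy entry per bucket, defaulting to a conservative action.
policy = defaultdict(lambda: "fold")
policy[bucket(0.82, 0.24)] = "raise"

# Two distinct situations land in the same cell and share a decision:
assert bucket(0.81, 0.24) == bucket(0.84, 0.21)
assert policy[bucket(0.81, 0.24)] == "raise"
```

The design point is the trade-off the article names: the abstraction discards fine distinctions in exchange for a decision space small enough to search and learn over.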
Together with the poker example, which is arguably the most sophisticated, there is too much evidence to doubt AI’s ability to solve complex problems better than humans. For instance, in the biotech industry AI is not just a tool: It designs experiments, carries them out, and interprets their results. The genetic changes that AI systems generate in these labs represent discoveries that human scientists would likely not have made. According to scientists working in the industry, some of the AI-created genes have no functions known to humans.
Moreover, while winning some of the most complex strategic games, AI revealed an insightful trait. In defeating one of the world’s best Go players, Lee Sedol, DeepMind’s AlphaGo made a move that reportedly caught Sedol and other top players by surprise: “a move that no human would ever do.” Facing another DeepMind system, AlphaStar, one of the world’s best StarCraft II players pointed out that the AI explores strategies differently than a human player would. Capabilities have an instrumental effect, as they drive behavior.
AI is better equipped than humans to exploit the exposure-manipulated speed of attrition concept, arguably the most basic, irreducible dynamic of combat. That concept suggests the most effective solution in combat interaction, the Nash equilibrium of military strategies: Whatever the other side does, reducing one’s own speed of attrition while increasing the opponent’s is the best possible response in combat.
Harnessing the power of modern computing, AI can explore an enormously larger portion of the probability space, that is, all possible developments in its environment. Humans, in contrast, incur significant cognitive costs on the same task and try to reduce them by searching the space of options for the solution “closest” to them. Imagine a solutions landscape in which we are surrounded by hills, each hill’s height representing a solution’s optimality. A moderately high decision “hill” nearby may well represent a better solution, but we cannot see it because a different, closer hill blocks our view. We stop at that nearer solution, while AI, with its wider vision, would identify and explore the better options.
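The landscape metaphor above is, in algorithmic terms, the local-optimum problem of greedy search. The following is a minimal sketch under made-up numbers: a hill-climber that only moves to taller neighbors gets stuck on the nearest hill, while a wider scan (standing in for AI’s broader view of the probability space) finds the higher one.

```python
# Heights of solution "hills"; the values are invented for illustration.
landscape = [1, 3, 5, 4, 2, 6, 9, 7]

def greedy_climb(start: int) -> int:
    """Repeatedly move to the taller adjacent position; stop when no
    neighbor is taller (a local optimum)."""
    i = start
    while True:
        neighbors = [j for j in (i - 1, i + 1) if 0 <= j < len(landscape)]
        best = max(neighbors, key=lambda j: landscape[j])
        if landscape[best] <= landscape[i]:
            return i
        i = best

local = greedy_climb(0)  # climbs 1 -> 3 -> 5, then stops at index 2
wider = max(range(len(landscape)), key=lambda j: landscape[j])

assert landscape[local] == 5  # stuck on the nearer, lower hill
assert landscape[wider] == 9  # the better solution the climber never sees
```

The human searcher in the article behaves like `greedy_climb`; the claim about AI is that cheap computation lets it behave more like the exhaustive `max` over the whole landscape.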
Providing AI with the logical shortcut of ready-built models will compensate for the debilitating shortage of big data on combat dynamics. Using combat simulations that explore high-fidelity micro-dynamics of war, similar to the suggested exposure-guided speed of attrition concept, researchers can train effective AI combat systems. Thus, AI battlefield commanders able to outperform human experts can be developed even given the shortage of combat-related data. This would also help us understand how AI deals with combat uncertainty, and how to instrumentalize that uncertainty to probe AI’s weaknesses.
Food for Thought
First, by implementing the two conditions of this AI-empowering approach, international actors will end up in a real AI arms race: a race in which they compete not for the most powerful weapon but for the most optimized AI-driven battlefield commander algorithm. Such an algorithm will yield tremendous advantages in combat, harvesting AI’s decision-making speed and optimized strategies.
Second, to fully exploit AI’s potential while avoiding the transaction costs of the human interface, a few accompanying technological developments are necessary. Demand will emerge for specific hardware applications (sensors, exoskeletons, micro-turbine engines, robotic systems, and unmanned vehicles) that can best harvest AI’s decision-making agility and reaction speed.
Third, the use of lethal force will become more accurate, quick, and localized, almost surgical. Given AI’s expected ability to discriminate effectively among types of targets in combat, together with the restricting effects of international humanitarian law, military AI systems would likely offset the equalizing effect between regular armies and insurgents that automatic rifles once made possible.
Fourth, given the resulting intensity and speed of combat attrition, fighting will become faster, destroying combat resources more quickly and in greater quantities and making wars costlier for the defeated. The way capabilities translate into victory will again become more transparent, so that not everyone will be able to afford, or be willing, to fight wars. State actors with significant financial resources will therefore likely gain the advantage in crises.
What should the U.S. government do? Because AI technologies that can be used in the military domain are readily available in the commercial sector, banning the use of AI in warfare is impractical. The United States should invest more in studying AI behavior in warfare. Just as scientists evolve and study deadly viruses to build vaccines against them, the U.S. government should encourage the laboratory-based “anthropological” study of AI behavior in virtual combat environments to understand how it “thinks.” This would illuminate the opaque functioning of AI, opening its black box, and would prepare policymakers to better address potential unanticipated consequences of AI operations, also known as the King Midas problem.
The U.S. government ought to encourage and fund disruptive thinking on combat dynamics, military strategy, and grand strategy implications, accounting for the ways AI developments will remove many of the existing physical and intellectual limitations on how humans have fought wars.
Finally, the United States ought to prioritize investment in electromagnetic pulse technology that can break the connection between an AI and the combat systems it controls, the very connection that upgrades AI from a virtual advisor into a physically present battlefield commander.
Dumitru Minzarari, PhD (University of Michigan), is a former military officer who served as state secretary for defense policy and international cooperation with the Moldovan Ministry of Defense, worked for the Organization for Security and Cooperation in Europe field missions in Georgia, Ukraine and Kyrgyzstan, and with several think tanks in Eastern Europe.