AI and Irregular Warfare: An Evolution, Not a Revolution
This article was submitted in response to the call for ideas issued by the co-chairs of the National Security Commission on Artificial Intelligence, Eric Schmidt and Robert Work. It addresses the first question (part a.), which asks how artificial intelligence will affect the character and/or the nature of war.
***
How will artificial intelligence change the way wars are fought? The answer, of course, depends. And it mainly depends on the type of war being fought. AI could very well change the fundamental nature of conventional conflicts between states. Technologies enabled by AI could become so powerful and ruthless that war as we know it becomes too deadly and costly to contemplate. But what about the shadow wars? What about irregular wars between states, non-state groups, and proxies? In other words, how will AI affect the type of wars that the United States is most likely to fight?
Regardless of advances in AI, states will continue to seek advantage via limited, irregular wars prosecuted through insurgency, resistance, coercion, and subversion. This competition below the level of state-to-state armed conflict — as it always has — allows antagonists to achieve military objectives without risking escalation into more costly wars with uncertain outcomes.
In irregular warfare, dominance in information and understanding can prove decisive. AI will drive an evolution in these conflicts by increasing the speed, precision, and efficacy with which information is wielded. But advances in AI over the coming decade are unlikely to prove revolutionary, particularly in a form of conflict where humans, not hardware, have historically proven decisive.
Improving Our Understanding of the Human Domain
Success in irregular conflicts requires an understanding of the physical, cultural, and social environments in which they take place. This proved critical to the American missions in Bosnia and Kosovo, where detailed information about local populations gave commanders the ability to shape unfolding events. In Afghanistan, by contrast, an inability to effectively tailor messaging rendered efforts to undermine popular support for the Taliban ineffective. AI could hyper-enable this type of analysis and quickly translate it into concrete changes in the conditions driving any given conflict.
Already, AI is giving the U.S. military the ability to more easily analyze the world in which it fights. Project Maven, the Department of Defense’s initiative to apply AI to intelligence, surveillance, and reconnaissance (ISR) platforms and sensors, gives the United States the capability to exploit full-motion video at enterprise scale. The Department of Defense and the intelligence community are applying comparable approaches to the automated exploitation of audio, text, and other unstructured data. The speed and accuracy of these tools shrink the time lag between an event and Washington’s response.
We are also beginning to see AI-driven integration of real-time data, enabling a deeper understanding of behavioral patterns, relationships, patterns of life, and tradecraft. These capabilities offer the promise of allowing U.S. commanders to more quickly and effectively respond to adversaries’ irregular warfare capabilities by identifying, shaping, and disrupting subversive efforts in real time. Future applications of these technologies could involve, for example, automated identification of early warning indicators, or even predictive analysis of a population’s key vulnerabilities to adversarial disinformation.
Maneuver in the Human Domain Still Requires Humans
Despite its potential, AI is not a magic bullet. It is unlikely, at least in the next decade, that AI will enable the enhanced sensemaking necessary to mimic, influence, and alter group behavior and shape the socioeconomic drivers of irregular conflict. AI will certainly increase the efficiency of such efforts, providing the ability to digest new sources and even larger quantities of information. But the platform that decides what the data truly mean and what to do about them — the analyst — remains difficult to scale.
For one, we currently lack the training sets and computational models needed to replicate a person’s ability to use observable data to predict the behavior of adversaries or the likely response of local populations to U.S. military efforts. This is perhaps best demonstrated by the challenges law enforcement agencies in Western countries have faced in deploying AI. If law enforcement — operating in data-rich environments against a well-understood problem set — is still grappling with these challenges, the application of AI to irregular warfare remains a long-term prospect.
While human-machine teaming may enhance the Department of Defense’s ability to outmaneuver an enemy on the battlefield, the human component of irregular warfare still requires detailed understanding of the political environment. For example, Russian AI-enabled disinformation, while potent, is still constrained by the need for significant human expertise to develop targeted, authentic, and impactful content.
The other key challenge is that the data that feed AI algorithms are liable to be scarce, denied, shallow, corrupted, and prone to manipulation by our adversaries. When irregular conflict occurs in cyberspace or even in sensor-rich physical domains, AI-enabled platforms will be vulnerable to sabotage and deception. And where conflict occurs in under-governed or under-developed spaces with shallow pools of data, an over-reliance on AI may actually limit our ability to detect patterns in the human domain, rather than enable it.
AI Threatens America’s Ability to Operate in the Shadows
AI will, however, make it harder for the U.S. military and intelligence community to operate in the shadows during proxy or undeclared wars. As facial recognition, biometrics, and signature management technologies become ubiquitous, it will be far harder to hide soldiers or equipment from adversaries or even private citizens. Private groups have already exposed the Russian agents associated with the downing of Flight MH17 over Ukraine in July 2014 and Turkey’s arms transfers to Libyan militias this past May. With a far more extensive AI-enabled intelligence collection, processing, and exploitation apparatus, a nation-state can do much more.
The risk of discovery has long been a key factor in U.S. planning for low-visibility military activities, as maintaining operational security is critical to protecting U.S. forces and partners and to preserving America’s reputation on the global stage. Moreover, America’s influence in global affairs has long made it difficult to effectively hide its hand. AI simply increases the complexity of efforts to remain in the shadows. The U.S. military should prioritize new approaches to deception and signature management, as well as “counter-AI” capabilities that frustrate adversaries’ efforts to use AI to uncover what it wishes to remain hidden.
Proliferation of AI-Enabled Weaponry
The Department of Defense should also embrace the near-certainty that non-state actors and groups will gain access to AI-enabled weaponry. These weapons hold unique appeal for non-state actors: compared to weapons of mass destruction, they are relatively cheap to develop and easy to procure. Great powers may even deliberately provide AI-enabled tools to non-state groups, just as they do conventional weapons. The AI capabilities of non-state actors will likely remain inferior to the U.S. toolkit, but these groups will almost certainly target the portions of Western economies, infrastructure, and populations that are most vulnerable to disruption and subversion.
In the future, Russian support to proxies in Ukraine could include the transfer of AI-enabled robotic improvised explosive devices to target key infrastructure or government leadership. Similarly, future offshoots of the Islamic State could develop AI capabilities that target, radicalize, and enable vulnerable individuals in the United States with hyper-specific propaganda built from social media signatures.
The United States has historically relied upon international norms and physical interdiction to deter the production or proliferation of lethal conventional weaponry. While the development of international norms on the use of AI in war is desirable and necessary, ensuring that these norms both protect U.S. values and interests and are enforceable will be challenging. Enforceability is complicated by the fact that AI-enabled weaponry, unlike nuclear or chemical weapons, lacks a physical signature, meaning the United States may struggle to detect when a non-state actor has acquired or employed an AI capability.
Preparing the United States for this Next Generation of Irregular Warfare
Fortunately for the United States, AI’s impact on future irregular warfare — while uncertain — will occur along known fault lines. The underlying character of irregular warfare will persist, including the need to understand human behavior, operate in the shadows where necessary, and address asymmetric challenges from non-state actors and proxies. At the same time, the Pentagon should not be complacent. AI will increase the complexity of all types of warfare and offer distinct advantages to those with superior capabilities. The United States should proactively shape AI’s impact on the next generation of irregular warfare to its advantage through a few key steps.
First, the Department of Defense and the intelligence community should continue to adapt their approach to better capture innovation happening in the commercial ecosystem. Initial steps by U.S. Special Operations Command to streamline procurement and contracting processes, such as the SOFWERX Data Engineering Laboratory, are a good start. But deeper collaboration across the Department of Defense, the private sector, and academia is required to develop the data culture and architecture necessary for success.
Next, the U.S. government should recruit, develop, retain, and enable personnel capable of leveraging AI capabilities. This will require institutionalizing AI as part of irregular warfare doctrine, strategy, and tactics, and developing the training programs necessary to build the force appropriately. It will also require developing highly specialized, integrated teams that draw on the blended skill set of data scientists, operators, and intelligence professionals.
Finally, Washington should work with U.S. allies. International coordination — with foreign governments, global civil society, and international organizations — will be critical to countering the proliferation of hostile AI capabilities. The use of physical force to deter proliferation may be possible in rare cases, but the intangible nature of these capabilities will make effective norms all the more important.
These deliberate efforts can help the United States build the AI capabilities necessary to anticipate and prevent the malign use of these emerging technologies and shape evolutions in irregular warfare to its advantage.
Daniel Egel is a senior economist at the nonprofit, nonpartisan RAND Corporation. Eric Robinson is detailed from the RAND Corporation as a policy advisor in the Office of the Assistant Secretary of Defense for Special Operations and Low-Intensity Conflict. Lt. Gen. (Ret.) Charles T. Cleveland is an adjunct international defense researcher at the RAND Corporation. Christopher (CJ) Oates is the founder and managing partner of Nio Advisors, a strategic advisory firm. The views expressed in this article are those of the authors and do not represent the official policy or positions of the Department of Defense or U.S. government.