War on the Rocks

Artificial Intelligence and the Military: Technology Is Only Half the Battle

December 25, 2018

Editor’s Note: As 2018 comes to a close, War on the Rocks is publishing a series of year-end reflections on what our editors and contributors learned from the publication’s coverage of various national security topics. These reflections will examine how War on the Rocks coverage evolved over the year, what it taught us about the issue in question, and what questions remain to be answered in 2019 and beyond. Enjoy, and see you next year!


What will advances in artificial intelligence (AI) mean for national security? This year in War on the Rocks, technical and non-technical experts with academic, military, and industry perspectives grappled with the promise and peril of AI in the military and defense realms. War on the Rocks articles discussed issues ranging from the different ways international competitors and military services are pursuing AI to the challenges AI applications present to current systems of decision-making, trust, and military ethics. War on the Rocks contributors added to our understanding of the trajectory of military AI and drew attention to critical remaining questions. A key takeaway is that technical developments in AI probably represent less than half the battle in attempting to effectively integrate AI capabilities into militaries. The real challenge now, both in the United States and abroad, is going beyond the hype and getting the right people, organizations, processes, and safeguards in place.

How Militaries Will Apply AI

The seemingly boundless potential applications of AI, from AlphaGo to Artificial General Intelligence, raise tough questions about how and where militaries should prioritize AI development and integration. Michael Sulmeyer and Kathryn Dura argued that AI-based automated network defenses in the cyber domain could be an obvious area for U.S. investments to minimize vulnerabilities in a cost-effective way. Connor McLemore and Hans Lauzen identified broader areas where AI may have the greatest strategic impact: those where machines have an edge over human speed, agility, and labor intensity and where machines can effectively identify patterns.

Just how quickly should militaries adopt AI into weapons systems and operational concepts? Jeff Cummings, Scott Cuomo, Olivia Garard, and Noah Spataro made the case for a revolutionary approach. The Marines, they argued, should capitalize on the promise of AI to prepare for likely operational scenarios and to help speed the development of the smaller-scale, close-combat lethality the Pentagon demands. In other domains, though, the military may need to pursue AI integration along a more evolutionary pathway. Shmuel Shmuel spelled out some of the specific tradeoffs in firepower, mobility, and cost that strategists will need to navigate in integrating AI into weapons systems. Such complexities in balancing strategy and budget may provide a good rationale for targeting gradual change when applying AI in combat, as Dan Wassmuth and Dave Blair recommended for the Air Force.

Organizing Implementation

During periods of changing technology, the victors in international politics are generally those militaries capable of making organizational change to take advantage of technological change. Along these lines, multiple War on the Rocks writers have suggested that technical challenges to strategic AI development could pale in comparison to the task of organizing a military acquisitions complex to achieve it. Richard Kuzma sounded the alarm for Pentagon leaders: Integrating AI means getting infrastructural changes right from the start. Nailing down long-term strategic goals may be second-order to making the unsexy reforms — prioritizing enterprise-wide data labeling, cloud-building, and software developer-user teaming — that will enable military AI no matter its specific applications. New systems to develop and manage the information — and human expertise — generated in AI testbeds like the Athena platform and others will require further innovation in resource organization.

Energetic, well-resourced leadership, as always, will be key to getting things done. Just as the benefits of undersea naval propulsion and aviation were only realized thanks to innovative figures like Adm. Rickover and Adm. Moffett, respectively, today’s military services will need leaders empowered to move forward applications of AI into the force.

Realizing AI’s promise to reduce personnel costs and increase military effectiveness will require keeping implementation costs down. Setting ambitious goals may create pressure to innovate. But even if militaries keep costs down and set lofty goals, at the end of the day, AI likely won’t be worth the investment without organization-wide change.

Managing Humans to Manage AI

As Jacquelyn Schneider noted, the Third Offset counterintuitively turns the expected ascendency of machine over man on its head: “In contrast to the first or second offsets, in which the United States was able to double down on the development of physical components of technology… the autonomy arms race is all about talent and manpower.”

If battlefield AI is to be revolutionary, War on the Rocks writers argued this year, the U.S. military may need to make even larger changes in how it recruits, trains, and keeps the humans required to build and operate AI systems. A strategic “AI edge” against peer competitors will come from the “intellectual edge” that Mick Ryan argued can only be developed by adapting professional military education at all levels. McLemore and Eric Jimenez, as well as Sebastian Bae, pointed to potential lessons from naval nuclear schools and the wargaming profession for developing a competitive military AI career track.

Militaries using AI will also need to develop literacy and trust beyond the ranks of experts. If cognitive bias leads humans to struggle to trust their AI-enabled smartphone GPS, how will it play out when commanders ask unit-level operators to rely on specific AI applications in life or death situations? War on the Rocks hosted a productive exchange among Jon Askonas and Colby Howard; Julia Macdonald and Schneider; and a large group of operators with experience piloting aircraft about the degree to which remote pilots and on-the-ground operators can trust one another. While this debate was about remotely piloted systems, not AI, it nonetheless raises important questions about how the level of human trust in new technologies will vary as militaries adopt more advanced systems. As machines play a greater role in planning, AI may also change military decision-making and the civil-military advisory and oversight processes, as Risa Brooks explored.

Where Is the AI Debate Headed in 2019?

Assuming that advances in AI continue to generate interest in how these technologies might shape the future character of war, there are several promising areas for more research and thinking. First, it is critical to continue moving beyond the hype and diving deeper into what AI is actually useful for — both in “back-end” applications and closer to the battlefield. Advancing current ideas about experimentation and war games is critical both for technology development and for questions of training and doctrine.

Second, the national security community should continue discussing the need to balance rapid development of AI with ethics, safety, and reliability. Controversies like Google’s withdrawal from Project Maven suggest this is an important task. Both militaries and the civilian AI industry want AI applications that work effectively, requiring extensive development and testing to address the challenges of spoofing, hacking, and appropriate usage. Developing strong testing, training, and reliability procedures for AI applications is a critical area for research. Safety may be a concern shared with other nations also developing military AI, and might thus be a productive area for diplomacy and confidence building in the absence, as Alexandra Bell and Andrew Futter note, of verification standards for AI-related arms control.

Last, it is important to continue the debate over what America’s AI strategy should be. War on the Rocks contributors have identified impediments to progress, but part of the problem is a lack of strategic, even grand-strategic, direction. There are many disparate pockets of AI policy work — and technical work — in the U.S. government that will need to learn to work together. The Defense Department, for example, now has several institutions tasked with AI in one way or another, including the Joint Artificial Intelligence Center and the Defense Innovation Board’s AI ethics review. Meanwhile, Congress recently created a new artificial intelligence commission, and the Office of Science and Technology Policy in the White House is also heavily engaged on AI policy issues. It is easy to imagine how battlefields from the Indo-Pacific to the Baltics will be replete with AI applications. Yet, as Charles Rybeck, Lanny Cornwell, and Philip Sagan note, U.S. policymakers and budgeteers have yet to fully grasp the political implications of competing with powers like Russia and China, which are comfortable with the “informatization” of everyday citizens’ livelihoods in the name of national security.

Intellectual progress on these questions and others will be essential in 2019 if artificial intelligence is to become not just a technology with military promise, but one that actually shapes the character of warfare.


Michael C. Horowitz is professor of political science and associate director of Perry World House at the University of Pennsylvania. He is also a senior editor at War on the Rocks. You can find him on Twitter @mchorowitz. Casey Mahoney is pursuing his PhD in political science and is a Perry World House Graduate Associate at the University of Pennsylvania. He previously served as a U.S. Department of Defense Nunn-Lugar Fellow in OSD Policy and AT&L.

This research was supported in whole or in part by the Air Force Office of Scientific Research and Minerva Research Initiative under grant #FA9550-18-1-0194. The views and conclusions contained in this report are those of the authors and should not be attributed to the U.S. Air Force or Department of Defense.

Image: Mike McKenzie/Flickr