Winter Isn’t Coming, but HAL’s Grandkids Are


Paul Scharre, Army of None: Autonomous Weapons and the Future of War (W. W. Norton, 2018).

The field of artificial intelligence (AI) has known its share of historical ups and downs. Breakthroughs and false hopes have come in cycles, along with peaks and valleys of interest and funding. The valleys of fiscal backing are known within the small community of computer scientists as “AI winters.” Enthusiasm and optimism about AI have gradually returned, and the field is again widely publicized and generously funded. Optimism and research grants are at their zenith, with billions of dollars in commercial and government resources chasing breakthrough applications.

As the futurist Ray Kurzweil has written, “the AI winter is long since over.”

As noted in various government and think tank reports, the convergence of robotics and artificial intelligence will be a major source of profound change in the character of warfare. Its impact on the conduct of war will influence the interactions of policy, populations, and their military forces in ways that we have not considered since Stanley Kubrick’s 2001: A Space Odyssey a full 50 years ago. In that classic, the HAL 9000 computer becomes paranoid and kills several astronauts, ultimately dueling with astronaut David Bowman for control of the space mission.

What Arthur C. Clarke and filmmaker Kubrick depicted in their futuristic collaboration remains topical, perhaps even existential if you listen to some futurists. More than 100 scientists, along with entrepreneur Elon Musk, believe that ongoing efforts to develop machine-learning systems are dangerous, a Faustian bargain akin to “summoning the demon.”

Beyond the hype and hyperbole in ongoing debates for and against autonomous weapons and AI, and beyond science fiction references to the Terminator and Slaughterbots, there is still much to demystify about what AI actually can and cannot do. We already have machines with task-specific, or narrow, AI capabilities that can handle routine tasks. The larger challenge is artificial general intelligence, the capacity to think and learn like humans and to contribute to commerce, medicine, or defense. Here, progress has been impressive. We are now 20 years past Deep Blue’s defeat of Kasparov and are already moving well beyond drones and driverless cars. Machine learning can write symphonies that match the classics and defeat Go masters, conceiving unorthodox moves and challenging practices considered central to the ancient game. AI-enabled systems have also demonstrated the ability to win multi-player poker tournaments involving deception and incomplete information.

All of this is fueling increased concern about a present-day Sputnik moment, in which we find ourselves surprised and beaten to the punch. The Chinese see the potential of artificial intelligence and are shifting from the mere “informatization” of warfare to its “intelligentization” via AI-enabled applications. Putin himself has proclaimed that any state that obtains a monopoly in AI will dominate the world.

The U.S. Joint Chiefs of Staff are aware of the potential contributions of converging AI and robotics capabilities. Their forecast of the future operating environment, the Joint Operating Environment 2035, identified this critical trend:

The next two decades will see significant advances in autonomy and machine learning, to include the emergence of robots working together in groups and as swarms. New and powerful robotic systems will be used to perform complex actions, make autonomous decisions, deliver lethal force, provide ISR coverage, and speed response times over wider areas of the globe.

While we remain at least a decade away from this kind of AI becoming a reality, we may want to think more about its impact on war and warfare. Machines and task-specific AI are, in fact, already here and will continue to evolve, perhaps exponentially. While extensive automation of warfare will make dirty and dangerous tasks far easier, these developments also bring significant moral and ethical questions to the fore.

Paul Scharre’s new book wrestles with these challenging questions. It has been almost a decade since Peter Singer’s Wired for War focused our thinking on the coming age of robotics. Now Scharre has raised the level of discussion and debate with a thoughtful analysis of where the combination of robotics and artificial intelligence is taking us. A former Army Ranger who served in the Pentagon as a policy wonk, Scharre combines practical policy experience with a sincere interest in the social, political, and ethical implications of autonomous weapon systems. This is a sober, policy-focused primer on what is coming, written in an engaging and accessible manner, much like Singer’s pathbreaking work. Scharre’s balanced and dispassionate analysis is one of Army of None’s strongest selling points. He neither embraces the hype of AI-enabled robots nor hesitates to explore the policy and moral challenges their development poses.

Scharre devotes his final chapters to the appropriate role humans should play in war. For many involved in debates about the legality of autonomous weapons, fulfilling existing international law requires an accountable human being. Only a human, these opponents of autonomous weapon systems contend, can and should make the decisions involved in lawful armed conflict. Only a human can make the moral judgments about distinction and proportionality that are essential to fulfilling international humanitarian law.

Of course, human beings play critical roles at many levels of war. At the strategic and operational levels, humans are responsible for initiating and directing war, establishing political direction for its aims, and selecting the time and place of battle. Human beings design and build both the computers and the various robotic platforms. They write the initial code and algorithms, and they maintain the machines.

At the theater level, a commander and his staff will conceive plans that deploy and maneuver ships, aircraft, and ground combat formations across the battlespace. Over time, the commander’s intuitive grasp of the battlespace will be augmented by decision support systems, including AI-enabled systems, at all levels of warfare. Autonomous weapon systems will not displace the decisions of senior leaders, but those leaders will have to learn to trust their smart and tireless machine assistants to enhance their decisions.

Below them will be commanders of ships and air defense batteries equipped with highly sophisticated weapon systems in which the operator is nominally “on the loop” but incapable of responding fast enough to defend his or her shipmates or fellow warriors. The lives of those aboard the ship or in the unit will rest in the hands of an algorithm that can match the speed of a swarming attack. In effect, these machines will be the foot soldiers and the supporting staff.

Yet at lower levels, what is the appropriate place for human control and judgment? Does the standard change between lethal and nonlethal uses of force? Are we concerned only when a machine or mindless munition is about to take human life? Do we differentiate between offensive and defensive systems? Should we differentiate systems that destroy only unmanned machines or repel cyber intrusions from those that might target and kill a manned aircraft or a combatant in a dense urban context? Where should we draw the line, and can there be a line?

Some contend that man-machine teaming offers the best of both worlds, and Scharre devotes a chapter to these so-called centaur machines. But man-machine teaming is not without moral issues of its own, particularly when we begin to link human brains directly to computers and databases. We may begin to alter the very definition of human decision-making, or even our understanding of what it means to be human.

Scharre does not deny that the technology to enable machines to take life on their own is within reach. We are, he admits, “at the threshold of a new technology that could fundamentally change our relationship with war.” Rather than ban AI and its connection to unmanned systems, Scharre suggests we establish norms and principles about the role of human judgment in taking life. To this reader, this raises a fundamental question: regardless of whether we can make machines that make inherently moral decisions, should we?

The potential of AI has been hyped for more than a generation, but the breakthroughs of the last five years alone suggest that an age of autonomy is much closer than we had anticipated. We probably will not have superhuman AI anytime soon, but its ultimate potential and consequences should not be prematurely dismissed.

This makes Army of None extremely relevant today. Legal, ethical, and moral dimensions are being intensely debated, yet there is little work on operational concepts or on organizational and tactical reforms for the new systems. Do not expect them in Scharre’s work either; readers will have to find those questions and answers elsewhere. The book is highly recommended, but I wish Scharre had examined the opportunities that loom ahead just as rigorously as the ethical concerns.

In 2001: A Space Odyssey, HAL ultimately evolved and turned on his masters. Is this the biggest risk when artificial general intelligence emerges? What advances and missions should we forgo because of the norms we want to protect or the downsides we fear? What moral guidelines should we establish now, before the science makes something feasible? Scharre emphasizes caution in policy development and research, at the cost of opportunities that might enhance our security. What are the risks if AI superiority is ceded to a competing state that has no qualms about winning at any cost? Can we pause to deal with the key policy questions and later develop AI safely and responsibly, as Scharre argues? Or should we strike a deal with the devil now, advancing AI only in non-military fields and hoping that other states follow suit?

Half a century has passed, and Kubrick’s fantasy is upon us. It remains to be seen whether HAL’s grandkids will accept that bargain, or tell us, “I’m sorry, Dave. I’m afraid I can’t do that.”


Dr. Frank Hoffman serves as a Distinguished Research Fellow at the National Defense University. He holds a Ph.D. in War Studies from King’s College London. This review reflects his own views and not those of the Department of Defense.
