Killer Robots: Our Ethics Aren’t the Problem


“If we must act in accordance with our interests, driven by our fears of one another, then talk about justice cannot possibly be anything more than talk.”

– Michael Walzer, Just and Unjust Wars

Are our overly precious ethics standing in the way of getting autonomous weapons into the fight? That is what three authors would have you believe in an article published last week at War on the Rocks. Brecher, Niemi, and Hill claim that “self-imposed and self-important legal and ethical constraints on autonomous weapons” place U.S. forces “at risk of suffering a decisive military disadvantage.” Autonomy will play an important role in shaping the success of future military forces, but their argument commits the United States to a false choice between its values and operational relevance. The authors are mistaken in their claim that U.S. policy prohibits the development of lethal autonomous weapon systems. Likewise, they are wrong to suggest that adherence to the laws of war and applicable international treaties is either inappropriate or harmful to U.S. interests. Beyond the obvious moral concerns, lamenting Western adherence to such principles runs the risk of distracting attention from the institutional and procedural barriers that actually place the United States at a strategic disadvantage in fielding lethal autonomous weapon systems.

Far from impeding the development of lethal autonomous systems, DoD Directive 3000.09 offers a framework for ensuring the safe development of mission-capable systems consistent with the laws of war. Brecher, Niemi, and Hill argue for the directive to be updated so that it “allows the full development and use of robotics.” As they point out, the policy is explicit in limiting routine development of fully autonomous weapon systems to applications involving non-lethal, non-kinetic force. What their argument overlooks, however, is that the directive also specifies guidelines for the development of lethal autonomous weapon systems.

By policy, lethal autonomous weapons and other non-routine autonomous systems must undergo additional review and approval by two under secretaries of defense and the Chairman of the Joint Chiefs of Staff prior to formal development and again before fielding. Among other considerations, that review must determine whether the system design incorporates “the necessary capabilities to allow commanders and operators to exercise appropriate levels of human judgment in the use of force.” Even then, the secretary of defense may waive most of these requirements in the case of “urgent operational need.” The directive sets a clear framework for developing autonomous capabilities without recklessly committing the Pentagon beyond its current understanding of the technology and its ramifications.

More to the point, the laws of war and the moral principles that lend them coherence are not some empty rhetorical obstacle to U.S. national security. I won’t offer a full defense of this claim here because others have already argued persuasively against the sort of blunt military realism Brecher, Niemi, and Hill advocate (Michael Walzer’s Just and Unjust Wars, among others). I will limit myself to pointing out that any approach that disconnects state violence from its underlying moral reasons risks collapsing the basis for the American professional military ethic. That is to say, if we understand military force strictly in terms of might and power, we lose any practical basis for drawing a distinction between the U.S. military and those of corrupt governments, let alone violent criminal organizations. If that’s the case, so much the worse for the professional ethic, but one should at least commit to the view in full recognition of its broader implications.

To put a finer point on the matter: one might suggest that authoritarian regimes enjoy a systematic advantage over large democracies in weapons development (although the historical record doesn’t necessarily support the claim), but does that justify abandoning democracy in pursuit of strategic gains?

Even setting aside a more thorough examination of the moral point, the argument offered by Brecher, Niemi, and Hill motivates its appeal to the realist point of view by exaggerating the operational risks posed by our enemies’ pursuit of lethal autonomous weapons. Hordes of killer robots wielded by our most unscrupulous enemies represent the proverbial wolf at the door, but history has shown that the wolf’s shadow is usually far more imposing than the beast itself. The most notable exception is the development of nuclear weapons, a parallel that should make us cautious rather than eager to plunge headlong into the development of lethal autonomous weapons.

The technologies that enable lethal autonomous weapon systems are advancing rapidly, but it isn’t obvious that such systems will offer a clear advantage over semi-autonomous or human-supervised autonomous systems for the foreseeable future. Fully autonomous systems are vulnerable to corruption by cyber and electromagnetic spectrum threats. Further, the limits of machine cognition still make it difficult to keep such systems operating within the bounds of a commander’s intent. The reality is that future success will require systems that operate dynamically across the range of autonomy as mission and environment demand. That reality makes the strategic risk posed by autonomous weapon systems manageable, provided the United States remains deliberate in its approach to developing and integrating these new systems.

The real danger in the view offered by Brecher, Niemi, and Hill is that it risks deflecting attention from the actual institutional and procedural challenges to gaining and maintaining competitive advantage through autonomous systems. First, the technology is evolving at a rate that the Department of Defense’s acquisition processes will struggle to match. To that end, the new Defense Science Board Summer Study on Autonomy (approved for public release but not yet published) recommended “accelerating the adoption of autonomous capabilities” by adjusting research, development, engineering, and testing to accommodate the unique challenges posed by autonomous systems. Second, the potential applications for autonomous systems are so widespread that they will challenge the U.S. military to imagine and develop innovative concepts for employing these new technologies effectively. Brecher, Niemi, and Hill suggest one such approach, and it represents an important starting point for these discussions.

The potential risks and opportunities posed by autonomous systems do not justify the United States abdicating its leadership within cooperative international institutions like the United Nations. To the contrary, existing policies like DoD Directive 3000.09 constitute the basis for shaping evolving international norms, laws, and treaties governing the safe development of lethal autonomous weapon systems to the benefit of U.S. interests. As is so often the case, those interests are best served by embracing our values, not turning away from them.

 

Kevin Schieman is a Strategic Plans and Policy Officer in the U.S. Army. This essay is an unofficial expression of opinion. The views expressed are those of the author and not necessarily those of the Department of the Army, Joint Staff, Department of Defense, or any agency of the U.S. government.

Image: DARPA