My Droneski Just Ate Your Ethics


  “War does not determine who is right — only who is left.”
Anonymous

  “Morality is a private and costly luxury.”
Henry Adams

In 2018, you may expect battalions of adversarial Russian “Avtonomnyy” to provide potentially unpleasant greetings for our young soldiers on the battlefield. Meanwhile, Western governments will still be wrangling over the precise wording of their self-imposed and self-important legal and ethical constraints on autonomous weapons. U.S. forces are at risk of suffering a decisive military disadvantage due to these limitations. As that famous Kazakh journalist and socio-political commentator Borat Sagdiyev might say: “Bahttt Whyyyyy?”

Success in future warfare will require deploying robots capable of using lethal force without a human in the loop. In a forthcoming white paper, we describe how to use “the fourth wave” of emerging robotics and autonomous systems (RAS) on the battlefield, minimizing risk to friendly forces while achieving military objectives in the face of advanced anti-access, area-denial systems. In this multi-phased wave concept, the attacking force moves from preparation to precision strike. Machine-machine and human-machine teams will allow for the relatively safe introduction of manned combat units. The fourth wave is exactly the type of concept of operations (CONOPS) required to address the Department of Defense’s third offset strategy and its call for human-machine collaboration and teaming.

The fourth wave depends on more flexible policies that enable greater autonomy for unmanned systems, such as machine-machine teams. It also depends on a command construct that increases leaders’ situational understanding, enables remote presence, and delegates the authorities required for a given decision to the level that best matches the situation, a concept called “flexive command.” The fourth wave is enabled by advances in sensor fusion, target discrimination, artificial intelligence, long-range air-ground delivery vehicles, stealth, and hardened, closed-loop command, control, communications, computers, and intelligence (C4I) networks. However, to realize the potential of these technologies, we must initiate the research and development of systems many consider to be morally and ethically anathema to Western values and, ultimately, our way of war — namely, fully autonomous lethal weapons systems.

In 2012, the Pentagon issued DoD Directive 3000.09, addressing autonomy in weapons systems. The directive defines the creation and use of semi-autonomous and autonomous systems. In general, it allows for lethal engagement when there is a human in the loop but restricts fully autonomous systems to “non-lethal, non-kinetic force, such as some forms of electronic attack.” In addition, it properly directs that the operation of any of these systems be done with appropriate care and in accordance with the law of war, weapon system safety rules, and applicable rules of engagement (ROE) — but it goes further to cite “applicable treaties” on independent autonomous weapons. Two questions arise: Is there a future need for the U.S. to deploy fully autonomous lethal systems? And what treaties are deemed applicable and acceptable as advanced RAS are developed and deployed?

Concern about the morality and ethics of lethal robots is evident in a recent U.N.-sponsored report on extrajudicial, summary or arbitrary executions that states, “Autonomous weapons systems that require no meaningful human control should be prohibited.” This follows recommendations by Christof Heyns for a moratorium on these systems in 2013, 2014, and 2015. Thus, maintaining human control over lethal robots is an appealing strategy, ceteris paribus (all else being equal). However, as is so often the case, ceteris paribus is an unwise assumption.

The question of autonomous, lethal robots presents us with a sort of prisoner’s dilemma, in which uncertainty regarding other players’ decisions drives us to pursue a sub-optimal strategy in order to avoid the most damaging outcome. Consider a game with two combatants who have two possible strategies for using lethal robots in a contest: full autonomy and human control. (We categorize any human interventions in the robot’s decision-making as human control, even if the human’s role is limited to approving the release of a weapon.) Neither side seeks to create humanity’s future robot overlords, so full autonomy is an unappealing strategy given the risk (however small it may seem) that humans could lose control over fully autonomous weapons.

However, maintaining human control exposes players to other disadvantages that may be decisive should the other player opt for full autonomy. First, fully autonomous systems could operate in a much faster decision cycle than human-in-the-loop systems. An inferior platform can defeat a superior one if the inferior platform can, to borrow a phrase from John Boyd, get inside its enemy’s decision loop. This places a manned, semi-autonomous future force at significant risk when encountering a fully autonomous first echelon or defensive screens of a less scrupulous enemy.

Second, fully autonomous systems can operate in a compromised network environment. A lethal, autonomous drone would not require a reliable communications link back to a human controller. In contrast, relying on human-controlled systems requires total confidence in the integrity of our communications during a conflict.

These two assumptions — the superior decision speed of fully autonomous systems and the vulnerability of networks in future wars — are likely to drive players in the game to embrace full autonomy. They may prefer to risk creating Skynet rather than take the chance that they will be decisively defeated by a human adversary. This is the dilemma of fully autonomous, lethal robots.
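To make the structure of this dilemma concrete, consider a purely notional payoff matrix; the numbers below are illustrative assumptions, not estimates of real military outcomes. Higher values are better, and each cell lists the row player’s payoff first:

                         Opponent: human control    Opponent: full autonomy
  Choose human control   3, 3                       1, 4
  Choose full autonomy   4, 1                       2, 2

Under these assumed payoffs, full autonomy is the dominant strategy for each player considered alone, yet mutual human control (3, 3) is the better joint outcome. Uncoordinated self-interest drives both sides toward the mutually worse (2, 2) cell, which is exactly the structure that makes the classic prisoner’s dilemma so intractable.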

Given this dilemma, we should also recognize that our actual and potential adversaries are bound by a different code of ethics or morality. This alters the game, since it means that some players are less concerned about the negative consequences of lethal, autonomous robots. As Deputy Secretary of Defense Robert Work stated:

I will make a hypothesis: that authoritarian regimes who believe people are weaknesses … that they cannot be trusted, they will naturally gravitate toward totally automated solutions. Why do I know that? Because that is exactly the way the Soviets conceived of their reconnaissance strike complex. It was going to be completely automated.

Autocracies may therefore view automation as a way to extend state control and reduce the potential for internal opposition to regime policy. If the Russians and Chinese are already moving rapidly towards autonomous lethal systems, how will other nation-states and non-state actors approach the development of fully autonomous systems? Ultimately, this quest for greater autonomy is resulting in a “drone race” in which we must maintain the technological and operational lead.

We should temper our concerns about “killer robots” with the knowledge that our adversaries care about the morals and ethics of lethal, autonomous systems only insofar as those concerns give them a competitive advantage. If full autonomy gives them supremacy on the battlefield, they will care little about what human rights lawyers think, even if an international agreement to prohibit autonomous weapon systems emerges.

The sanctity of self-interest often prevents nations from conforming to international law. During the 1930s, the major powers attempted to constrain the use of submarines in war through two major naval treaties. When war came, however, the United States and Germany embraced unrestricted submarine warfare as soon as it was in their interest to do so. In the current international environment, Iran continues to arm and fund Hezbollah and destabilize the region, and it continues to test-launch nuclear-capable ballistic missiles in violation of the spirit of the nuclear deal. Chinese actions in the South China Sea and Russian aggression on Europe’s eastern flank are examples of self-interest coming into conflict with international law. And the United States itself is not beyond a self-interested aversion to international laws and treaties. Nations should not be expected to abide by agreements that infringe on their perceived self-interest. We live in a world of realpolitik and “realconflikt.”

Therefore, why would the United States sign agreements to deny itself the development of autonomous technologies that provide a comparative advantage, while its adversaries accept no such restrictions? With the Ottawa Treaty on landmines and the related Convention on Cluster Munitions, the United States acquiesced to self-limiting restrictions on mines and cluster munitions. As a consequence, it now finds itself in a regrettable tactical and operational position in multiple potential theaters of conflict. With Ottawa as a warning, Washington needs to proceed with extreme caution “into that good night” of future autonomous weapons treaties and agreements.

Paradoxically, it is the knowledge that the United States has the capability and capacity to wage war using fully autonomous, lethal robots that is most likely to persuade America’s potential adversaries to refrain from using or making escalating investments in these systems. To be clear, we fully expect competing states to acquire lethal, fully autonomous weapons systems. Having acquired these systems, however, they are likely to make their capabilities and sophistication the object of an arms race. Such a race will increase the complexity and sophistication of autonomous systems, thereby increasing the likelihood of the loss of human control (what sociologist Charles Perrow called “normal accidents”). However, to influence the investments of other states, the United States must have the technological leverage afforded by its own capabilities.

Prisoner’s dilemmas rely on one characteristic to justify the game’s prediction that players will employ sub-optimal strategies: imperfect information. If the players know what the other players know, and if they know what choices are available to other players, they are much more likely to cooperate and achieve more favorable outcomes. Through a series of nuclear arms control treaties, the Soviet Union and the United States demonstrated this principle repeatedly during the Cold War.

The best way to control the spread of autonomous, lethal robots is for the United States to acquire a viable operational capability itself. This requires an update to DoD Directive 3000.09 that allows the full development and use of robotics, but also allows for the prudent exercise of human judgment and control. Additionally, it needs to mandate placing those controls at the appropriate time, place, and echelon, given technology-enabled future CONOPS. This updated policy will allow the measured use of lethal autonomous systems. Fully autonomous systems bear risks. Good policy must build on technology that enables advanced, artificially intelligent target discrimination to provide reasonable safeguards for friendly forces, civilians, and other non-combatants. Any policy must ensure the doctrine, organizations, and processes associated with the use of autonomous systems are executed with appropriate care and in accordance with the law of war and applicable rules of engagement. Finally, fully autonomous systems may be destabilizing insofar as they make military interventions more attractive. Given the low risk and high capability of advanced robotics, we must safeguard against leadership succumbing to the god-like “Jupiter Complex” in the conduct of warfare.

The prospect of multiple lethal, autonomous systems roaming the future battlefield is unpleasant. However, given the lack of restraint by our adversaries and the progress of technology and artificial intelligence, the United States should not adopt an international, legally binding instrument that absolutely prohibits our development, production, and use of lethal, autonomous systems. There is a place for autonomous lethality in Western arsenals, and we deny its development at our peril.

 

Joseph Brecher retired as a colonel in the U.S. Army. Col. Heath Niemi is in the U.S. Army and a graduate of the U.S. Army War College. Andrew Hill is a regular contributor to War on the Rocks. He is a Professor at the U.S. Army War College and Director of the Carlisle Scholars Program. The views here do not represent those of the U.S. Army War College, the U.S. Army, the Department of Defense, or any part of the U.S. government.

Image: Russian State Media