Secret Agent Man: How to Think about Autonomy


Opponents of autonomous weapons have already lost the debate over so-called “killer robots,” according to Matthew Hipple, writing here at War on the Rocks. Military necessity will win out, and autonomous weapons are already here. The tech-savvy Hipple, who muses about the military implications of an astounding array of esoteric emerging technologies in his spare time, is right. Autonomy, as defined by opponents of “killer robots,” is already here and has been for a very long time. So does this mean the debate is settled? Not quite — and the reason lies in basic conceptual problems with how policymakers talk about autonomy, and in fundamental contradictions in our understanding of machine intelligence.

Those considering the military implications of artificial intelligence (AI) would be better served by foregrounding the idea of agents, which not only sidesteps complex philosophical debates about intelligence but also maps more readily onto current military doctrine. And no, I’m not talking about James Bond movies. An agent simply denotes a software entity that can control both its inner state and its outward behavior; an intelligent agent can react to changes in its environment, proactively shape that environment, and socially interact with other agents or humans. The key considerations in building agents have to do with trust and responsibility. What do we trust agents to do? What are we comfortable giving them responsibility for? Agents, and the agency we grant them, matter more than vague musings about autonomy or intelligence because they fundamentally concern what responsibility we are willing to delegate to a program that executes our will in a dynamic, uncertain environment populated by other human beings and other AI programs.

The problem with the debate about autonomy is that the term carries at least four distinct meanings. Autonomy might mean the ability of a program to reason about alternatives — an autonomous agent in computer science often refers to a program that makes its own choices about how to act using only the information it takes from its environment, a decision program, and a way of acting on those perceptions and that decision logic. Alternatively, an autonomous system may just be a program that can act, period, without human intervention. The biological meaning of autonomy is simpler still: at a minimum, a biologically autonomous system is one that can manage its interactions with the world well enough to stay alive. Finally, there is the more familiar popular definition: a program that runs and executes decisions without a human in the loop.
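
To make the computer-science sense concrete, here is a minimal sketch of such a sense-decide-act loop. Every name in it (Environment, Agent, observe, decide, and so on) is invented for illustration rather than taken from any real framework; the point is simply that the program acts using only what it perceives and its own decision logic, with no operator anywhere in the loop.

```python
# A minimal, illustrative "autonomous agent" in the computer-science sense:
# it perceives its environment, decides using internal logic, and acts,
# with no human input at any point. All names here are made up for the sketch.

class Environment:
    def __init__(self):
        self.temperature = 30.0

    def observe(self):
        return {"temperature": self.temperature}   # what the agent can perceive

    def apply(self, action):
        if action == "cool":
            self.temperature -= 1.0


class Agent:
    def __init__(self):
        self.state = {"last_action": None}          # inner state the agent controls

    def decide(self, percept):
        # the "decision program": choose an action from percepts alone
        return "cool" if percept["temperature"] > 25.0 else "idle"

    def step(self, env):
        percept = env.observe()                     # perception
        action = self.decide(percept)               # decision
        env.apply(action)                           # action on the environment
        self.state["last_action"] = action


env, agent = Environment(), Agent()
for _ in range(10):                                 # the loop runs with no human in it
    agent.step(env)
```

Notice that this “autonomous” program is barely more than the thermostat mentioned in the next paragraph, which is exactly why the label does so little work.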

Unfortunately, in discussions of autonomy the term has become so thoroughly obfuscated that it might denote anything from a thermostat to an advanced mobile robot. So in one sense, Hipple is right that the debate is lost, or over. If an “autonomous” platform can be everything and nothing, critics have already forfeited any hope of banning them! “But Adam,” you might ask, “isn’t a computer program that does scary things without human intervention autonomous?” By some legal definitions and theories, yes. But that says a lot about the very problematic way we fetishize human interaction with the system. We assume that the problem with autonomous systems is that their behavior is complex and uncontrollable, and that this is why humans must keep a tight leash on them. Algorithms, after all, are often said to be too complex to understand. But the frequency and nature of human control says nothing, by itself, about the “intelligence” of a computer program.

It is a trivial matter to write a computer program that makes decisions based on a simple control loop or hand-crafted scripts. Yet as long as no human interfered while the program did things, a naïve observer would dub it “autonomous” rather than “automated.” Many “bots” consisting of a few simple lines of code are left to run on the Internet without much, if any, interaction with their owners, and countless small programs on personal computers run more or less unnoticed by the average user and never require any input. Still, many would regard that as “automation” rather than autonomy. Would a simple control loop or hardcoded script be a successful or useful program in a dynamic and possibly adversarial environment of nontrivial complexity? Hell no. It would not be a program you would trust with matters of life and death, right? It needs to be “intelligent,” right?
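
A hypothetical “bot” of that sort might be nothing more than the few hardcoded lines below; the message source and canned replies are invented for the example. Left running, it would strike a naive observer as “autonomous,” even though it contains no reasoning at all, only a scripted lookup.

```python
# A toy "bot" built from a few hardcoded lines and a simple control loop.
# It runs without any interaction from its owner, yet every response is scripted.
import time

SCRIPT = {
    "ping": "pong",
    "status": "all systems nominal",
}

def fake_inbox():
    # stand-in for a real message source, invented for illustration
    yield from ["ping", "status", "weather?"]

for message in fake_inbox():
    reply = SCRIPT.get(message, "no script for that")   # hand-crafted responses only
    print(reply)
    time.sleep(0.1)                                      # keep looping, no operator input
```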

The answer is more complicated. A recent IEEE Intelligent Systems piece rightly noted that AI as a discipline has an “anarchy of methods.” Models of AI intelligence range from sophisticated high-level reasoners to bug-like automata. And here we return to the problematic issue of autonomy: not all action is the product of decision. Many of the basic biological mechanisms that help humans act in the world are products of evolutionary processes refined over vast stretches of time. And in many cases robots can exhibit seemingly sophisticated behaviors in the real world merely through collections of simple, hardcoded behaviors.

So now we come to the central difficulty. First, contrary to popular belief, the problem with autonomy isn’t necessarily that the underlying software is so complex as to be untrustworthy absent human control and direction. That’s a consequence of the overall problem, not the problem itself. A simple control loop that executes actions regardless of what is going on in the outside environment would be just as undesirable as a robot that goes haywire because some complex algorithm defeats a human’s ability to understand its behavior. Second, a simple program may be “intelligent” despite having little more than a set of basic scripts designed by a human creator and an arbitration mechanism to decide which script takes priority when they conflict. Such a program, if allowed to do its thing absent human control, would be “autonomous” in multiple senses — it would not need operator interaction to survive in the environment it was designed for, and no human would be interacting with it.
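
A sketch of that second kind of program is below. The behaviors, percept fields, and priorities are invented for illustration; the “intelligence” amounts to a handful of hardcoded scripts and an arbitration rule that lets the highest-priority behavior whose trigger fires win.

```python
# A few hardcoded behaviors plus a priority-based arbitration mechanism.
# All behavior names and percept fields are illustrative.

def avoid_obstacle(percept):
    return "turn_left" if percept["obstacle_ahead"] else None

def seek_goal(percept):
    return "move_forward" if not percept["at_goal"] else None

def idle(percept):
    return "stop"

BEHAVIORS = [avoid_obstacle, seek_goal, idle]   # ordered from highest to lowest priority

def arbitrate(percept):
    # arbitration: the first (highest-priority) behavior that proposes an action wins
    for behavior in BEHAVIORS:
        action = behavior(percept)
        if action is not None:
            return action

print(arbitrate({"obstacle_ahead": True, "at_goal": False}))    # turn_left
print(arbitrate({"obstacle_ahead": False, "at_goal": False}))   # move_forward
```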

None of these often contradictory musings about autonomy and intelligence get at what people really want when they talk about machine autonomy in war and other consequential endeavors: the prevention and management of unwanted consequences. They want to know that machines will not do anything undesired: that they will not violate the laws of war; that they can tell a civilian apart from an insurgent; that someone can be held accountable when a machine screws up. These are questions of authority, trust, and delegation.

Computer programs can be written that run without human intervention even if they lack anything we might perceive as artificial “intelligence”; the question is whether our goals will be accomplished if we let them run unsupervised. A robot can be designed that survives in a harsh and difficult environment; the question is whether it hurts anyone we don’t want it to hurt in the process. A program can be completely hardcoded so that it never makes a decision the programmer did not already anticipate, or it can be written so that it reasons about alternatives, weighing their pros and cons in terms of how much utility each yields with respect to a goal. The latter might lead to a situation in which the program calculates that the most rational action (given the goal) is something we do not want it to do; the former might produce even worse outcomes when the program encounters a situation the designer never anticipated.
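
The contrast can be sketched in a few lines. The rule table, actions, utility numbers, and situation fields below are all invented; the point is only the difference between a lookup that covers nothing the designer did not anticipate and a maximizer that weighs alternatives against a goal.

```python
# Hardcoded decisions versus utility-maximizing decisions, as a toy sketch.
# All actions, numbers, and field names are made up for illustration.

HARDCODED = {
    "target_confirmed": "engage",
    "target_unconfirmed": "hold",
}   # fails outright (KeyError) on any situation the designer did not anticipate

def utility(action, situation):
    # toy utility: value of pursuing the goal minus a penalty for collateral risk
    value = {"engage": 10.0, "hold": 0.0, "withdraw": -1.0}[action]
    penalty = {"engage": situation["collateral_risk"] * 20.0,
               "hold": 0.0,
               "withdraw": 0.0}[action]
    return value - penalty

def choose(situation):
    # reason about alternatives by maximizing utility with respect to the goal
    return max(["engage", "hold", "withdraw"],
               key=lambda action: utility(action, situation))

print(choose({"collateral_risk": 0.1}))   # engage: the goal outweighs the penalty
print(choose({"collateral_risk": 0.9}))   # hold: the penalty outweighs the goal
```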

It is clear that the issue is not autonomy but agency — how much authority can we grant artificial agents even when we know there may be a host of reasons why delegating authority to them could produce undesired outcomes? There are endless ways we could code up a machine that does something on our behalf, guided by an idea of what we want the agent to do and how much we are willing to trust it to act in our stead. The notion of “principal-agent problems” in economics describes any number of reasons why the principal may come to regret trusting the agent. These problems recur in computer science as well: economic models and game theory have become key drivers of new research in AI agents.
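
A toy numerical illustration of how such a problem shows up once we delegate to software: if the agent is rewarded on a proxy metric, the action that maximizes the proxy need not be the action the principal actually prefers. The actions and payoffs below are entirely made up.

```python
# Principal-agent divergence in miniature: the agent optimizes the metric it is
# given, not the principal's true preference. All payoffs are invented.

ACTIONS = {
    # action: (proxy score the agent is rewarded on, true value to the principal)
    "thorough_patrol": (5, 9),
    "fast_patrol":     (8, 3),   # best on the metric, worst for the mission
}

agent_choice   = max(ACTIONS, key=lambda a: ACTIONS[a][0])   # agent optimizes the proxy
principal_pref = max(ACTIONS, key=lambda a: ACTIONS[a][1])   # what the principal wanted

print(agent_choice, principal_pref)   # fast_patrol thorough_patrol
```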

At a minimum, we can assume that a basic agent can control its inner state and act in the world. At most, we know an intelligent agent will need to react in a timely manner to changes in the environment, proactively shape that environment, interact with other entities, and do all of this in pursuit of our goals. If this sounds familiar to military commanders, it is because they need their subordinates to do all of these things to fulfill a mission without having their hands held every step of the way. A good subordinate, after all, needs to react quickly to sudden emergencies without losing sight of the overall orders they have been given, and to handle a variety of cooperative and adversarial interactions with those they encounter in the course of their military duties.
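
For readers who prefer code to prose, those properties can be written down as a bare contract. The class and method names below are my own shorthand, not a standard API.

```python
# The agent properties listed above, expressed as an abstract interface:
# reactive, proactive, and social. Names are illustrative only.
from abc import ABC, abstractmethod

class IntelligentAgent(ABC):
    @abstractmethod
    def react(self, event):
        """Respond in a timely manner to a change in the environment."""

    @abstractmethod
    def pursue_goal(self):
        """Take initiative toward the goal delegated by the principal."""

    @abstractmethod
    def interact(self, other, message):
        """Cooperate or compete with other agents and with humans."""
```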

The notion of agents and agency deals directly with issues of command, responsibility, and delegation that military officers otherwise unfamiliar with AI and robots understand intimately — an absurd comparison, some may say. But scientifically analyzing the behavior of a mobile robot (or any AI system) is a matter of exploring how that behavior emerges from the combination of the agent’s makeup (hardware, software, control programs), the environment, and the task. This is functionally similar to assessing individual and group military performance in terms of individual and group psychology and decisions, environment, and task — which is why AI can be used in sophisticated modeling, simulation, and wargaming of real-world military operations.
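
A minimal sketch of that analysis frame, with invented names and numbers: the behavior under study is whatever trace emerges when a given control program is run against a given environment and task, and a simulation or wargame simply varies those ingredients.

```python
# Behavior as a function of agent makeup (control program), environment, and task.
# Everything here is invented for illustration.

def run_trial(control_program, environment, task, steps=20):
    trace = []
    state = environment["initial_state"]
    for _ in range(steps):
        action = control_program(state, task)           # the agent's makeup
        state = environment["dynamics"](state, action)  # the environment responds
        trace.append(action)
        if task["done"](state):                         # the task defines success
            break
    return trace   # the observed "behavior" to be analyzed

# the same control program against two different environments yields two behaviors
cautious = lambda state, task: "advance" if state["threat"] < task["risk_tolerance"] else "hold"

open_terrain = {"initial_state": {"threat": 0.2},
                "dynamics": lambda s, a: {"threat": s["threat"] + (0.05 if a == "advance" else -0.05)}}
urban = {"initial_state": {"threat": 0.6},
         "dynamics": lambda s, a: {"threat": s["threat"] + (0.10 if a == "advance" else -0.02)}}
task = {"risk_tolerance": 0.5, "done": lambda s: s["threat"] <= 0.0}

print(run_trial(cautious, open_terrain, task))
print(run_trial(cautious, urban, task))
```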

So while the makeup of the agent is important, what it is doing and what it is interacting with matter too. And while AI itself poses unique and enormously complex problems, in principle an agent-centered framing of the issue is not that different from Cold War debates over whether decentralizing control of conventional and nuclear military units was worth the risk of accidental war, given the guarantees of survivability it offered. A recent DefenseOne article pondered in a similar vein whether AI agents, empowered to do human bidding, would lead to an accidental “flash war” if left unimpeded.

So what agency are you willing to give to a software program that may, in the course of executing your will, end up (perhaps for reasons completely external to the program) doing something other than what you want it to? Experts in AI and robotics have dramatically differing opinions. But the inconsistent and contradictory “anarchy” of AI methods cannot determine the answer alone. Nor will vague-sounding musings about “autonomy” save us. If we really want to regulate the killer robots, we have to ask ourselves much the same questions we ponder before we send a group of autonomous, alternately scared and pumped 18- and 19-year-olds into killing fields with M4 rifles. Can we trust our agents to do the job at an acceptable (military and ethical) cost? All other questions, however interesting, are secondary.

 

Adam Elkus is a PhD student in Computational Social Science at George Mason University and a columnist at War on the Rocks. He has published articles on defense, international security, and technology at CTOVision, The Atlantic, the West Point Combating Terrorism Center’s Sentinel, and Foreign Policy.

 

Photo credit: The U.S. Army