Between a Roomba and a Terminator: What is Autonomy?


Editor’s note: This is the first article in a six-part series, The Coming Swarm, on military robotics and automation as a part of the joint War on the Rocks-Center for a New American Security Beyond Offset Initiative.

 

Department of Defense leaders have stated that robotics and autonomous systems will be a key part of a new “offset strategy” to sustain American military dominance, but what is autonomy? Uninhabited, or unmanned, systems have played important roles in Iraq and Afghanistan, from providing loitering overhead surveillance to defusing bombs. Generally, however, they have been operated by remote control, with only limited automation for functions like takeoff and landing. Numerous Defense Department roadmap and vision documents depict a future of uninhabited vehicles with greater autonomy, transitioning over time to true robotic systems. What that means for how militaries fight, however, is somewhat murky.

What does it mean for a robot to be “fully autonomous”? How much machine intelligence is required to reach “full autonomy,” and when can we expect it? And what is the role of the human warfighter in this proposed future, with robots running loose, untethered from their human controllers?

Confusion about the term “autonomy” makes answering these questions harder. Different people use the word “autonomy” in different ways, which makes communicating about where we are headed with robotic systems particularly challenging. The term “autonomous robot” might mean a Roomba to one person and a Terminator to another! Writers and presenters on this topic often articulate “levels of autonomy,” but their levels rarely agree, leading a recent Defense Science Board report on autonomy to throw out the concept of “levels” of autonomy altogether.

In the interest of adding some clarity to this issue, I want to illuminate how we use the word, why it is confusing, and how we can be more precise. I can’t change the fact that “autonomy” means so many things to so many people, and I won’t try to shoehorn all of the possible uses of autonomy into yet another chart of “levels of autonomy.” But I can try to inject some much-needed precision into the discussion.

What is “Autonomy”?

In its simplest form, autonomy is the ability of a machine to perform a task without human input. Thus an “autonomous system” is a machine, whether hardware or software, that, once activated, performs some task or function on its own. A robot is an uninhabited system that incorporates some degree of autonomy, generally understood to include the ability to sense the environment and react to it, at least in some crude fashion.

Autonomous systems are not limited to uninhabited vehicles, however. In fact, autonomous, or automated, functions are included on many human-inhabited systems today. Most cars include anti-lock brakes, traction and stability control, power steering, emergency seat belt retractors, and air bags. Higher-end cars may include intelligent cruise control, automatic lane keeping, collision avoidance, and automatic parking. For military aircraft, automatic ground collision avoidance systems (auto-GCAS) can similarly take control of a human-piloted aircraft if the pilot becomes disoriented and is about to fly into terrain. And modern commercial airliners have a high degree of automation available throughout every phase of flight. Increased automation or autonomy can have many advantages, including increased safety and reliability, improved reaction time and performance, reduced personnel burden with associated cost savings, and the ability to continue operations in communications-degraded or -denied environments.

Parsing out how much autonomy a system has is important for understanding the challenges and opportunities associated with increasing autonomy. There is a wide gap, of course, between a Roomba and a Terminator. Rather than search in vain for a unified framework of “levels of autonomy,” it is more fruitful to think of autonomy as having three main axes, or dimensions, along which a system can vary. These dimensions are independent, so autonomy does not exist on a single spectrum, but on three spectrums simultaneously.

The Three Dimensions of Autonomy

What makes understanding autonomy so difficult is that people tend to use the same word to refer to three completely different concepts:

  • The human-machine command-and-control relationship
  • The complexity of the machine
  • The type of decision being automated

These are all important features of autonomous systems, but they are different ideas.

The human-machine command-and-control relationship

Machines that perform a function for some period of time, then stop and wait for human input before continuing, are often referred to as “semiautonomous” or “human in the loop.” Machines that can perform a function entirely on their own, but with a human in a monitoring role who can intervene if the machine fails or malfunctions, are often referred to as “human-supervised autonomous” or “human on the loop.” Machines that can perform a function entirely on their own, with humans unable to intervene, are often referred to as “fully autonomous” or “human out of the loop.” In this sense, “autonomy” is not about the intelligence of the machine, but rather its relationship to a human controller.
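
To make these three relationships concrete, here is a deliberately simplified sketch in Python. It is purely illustrative (the function names and the simple loop are my own invention, not any fielded control architecture), but it shows how the same task can sit inside each of the three command-and-control relationships:

    import time

    def perform_task_step():
        """Stand-in for one increment of whatever task is being automated."""
        print("machine performs one step of the task")

    def human_approves() -> bool:
        """Stand-in for asking the operator for permission to continue."""
        return input("Continue? [y/n] ").strip().lower() == "y"

    def human_wants_to_intervene() -> bool:
        """Stand-in for a supervisor hitting an abort or override control."""
        return False  # a real system would poll an operator interface here

    def run_semiautonomous(steps: int):
        """Human IN the loop: the machine stops and waits for human input each cycle."""
        for _ in range(steps):
            perform_task_step()
            if not human_approves():
                break

    def run_supervised(steps: int):
        """Human ON the loop: the machine runs on its own, but a human can intervene."""
        for _ in range(steps):
            if human_wants_to_intervene():
                break
            perform_task_step()
            time.sleep(0.1)

    def run_fully_autonomous(steps: int):
        """Human OUT of the loop: once activated, no intervention is possible."""
        for _ in range(steps):
            perform_task_step()

Note that all three loops automate the same task; what changes is only where the human sits relative to the machine.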

The complexity of the machine

The word “autonomy” is also used in a completely different way to refer to the complexity of the system. Regardless of the human-machine command-and-control relationship, words such as “automatic,” “automated,” and “autonomous” are often used to refer to a spectrum of machine complexity. The term “automatic” is often used for systems that have very simple, mechanical responses to environmental input, such as trip wires, mines, toasters, and old mechanical thermostats. The term “automated” is often used for more complex, rule-based systems, such as self-driving cars and modern programmable thermostats. Sometimes the word “autonomous” is reserved for machines that exhibit some kind of self-direction, self-learning, or emergent behavior that is not directly predictable from an inspection of their code. Examples would be a self-learning robot that taught itself how to walk, or the Nest “learning thermostat.”
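
The thermostat examples can be sketched in code to show how these degrees of complexity differ. The sketch below is illustrative only; the thresholds and the simple “learning” rule are invented for the example and are not how any commercial thermostat actually works:

    # "Automatic": a fixed, mechanical-style response to an input.
    def automatic_thermostat(temp_f: float) -> bool:
        return temp_f < 68.0  # heat on below a fixed threshold, off otherwise

    # "Automated": more complex, rule-based behavior, such as a programmed schedule.
    def automated_thermostat(temp_f: float, hour: int) -> bool:
        setpoint = 68.0 if 6 <= hour < 22 else 60.0  # daytime rule vs. nighttime rule
        return temp_f < setpoint

    # "Autonomous" (in the self-learning sense): behavior adjusted from experience,
    # so the eventual setpoints cannot be read directly from the code.
    class LearningThermostat:
        def __init__(self):
            self.setpoints = {hour: 68.0 for hour in range(24)}

        def record_manual_adjustment(self, hour: int, chosen_temp_f: float):
            # Nudge the learned setpoint toward whatever the occupant keeps choosing.
            self.setpoints[hour] += 0.25 * (chosen_temp_f - self.setpoints[hour])

        def heat_on(self, temp_f: float, hour: int) -> bool:
            return temp_f < self.setpoints[hour]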

Others reserve the word “autonomous” only for entities that possess intelligence and free will, but these concepts hardly add clarity. Artificial intelligence is a loaded term that can refer to a wide range of systems, anywhere from those that exhibit near-human or super-human intelligence in a narrow domain, such as playing chess (Deep Blue), playing Jeopardy (Watson), or programming subway repair schedules, to potential future systems that might have human or super-human general intelligence. But whether general intelligence leads to free will, or whether humans even have free will, is itself debated.

What is particularly challenging is that there are no clear boundaries between these degrees of complexity, from “automatic” to “automated” to “autonomous” to “intelligent,” and different people may disagree on what to call any given system.

The type of decision being automated

Ultimately, it is meaningless to refer to a machine as “autonomous” or “semiautonomous” without specifying the task or function being automated. Different decisions have different levels of complexity and risk. A mine and a toaster offer radically different levels of risk, even though both have humans “out of the loop” once activated and both use very simple mechanical switches. The task being automated, however, is very different. Any given machine might have humans in complete control of some tasks and might autonomously perform others. For example, an “autonomous car” drives from point A to point B on its own, but a person is still choosing the final destination. It is autonomous only with respect to some functions.
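
The self-driving car example can be made concrete with a short, hypothetical sketch (the class and method names are invented for illustration): the machine automates the driving functions, while choosing the destination remains a human decision.

    class SelfDrivingCar:
        """Illustrative only: autonomous with respect to driving, not destination choice."""

        def __init__(self):
            self.destination = None

        def set_destination(self, destination: str):
            # Performed by a person: the machine never chooses where to go.
            self.destination = destination

        def drive(self):
            # Performed by the machine: route planning, steering, braking, lane keeping.
            if self.destination is None:
                raise RuntimeError("No destination has been chosen by the human operator")
            print(f"Driving autonomously to {self.destination}")

    car = SelfDrivingCar()
    car.set_destination("the office")  # human-controlled function
    car.drive()                        # machine-controlled function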

“Full Autonomy” is a Meaningless Term

From this perspective, the question of when we will get to “full autonomy” is meaningless. There is not a single spectrum along which autonomy moves. The paradigm of human vs. machine is a common science fiction meme, but a better framework would be to ask which tasks are done by a person and which by a machine. A recent guidance document on autonomy from a number of NATO countries came to a similar conclusion, recommending a framework of thinking about “autonomous functions” of systems, rather than characterizing an entire vehicle or system as “autonomous.”

Importantly, these three dimensions of autonomy are independent. The intelligence or complexity of the machine is a separate concept from the tasks being performed. Increased intelligence or more sophisticated machine reasoning to perform a task does not necessarily equate to transferring control over more tasks from the human to the machine. Similarly, the human-machine command-and-control relationship is a different issue from complexity or tasks performed. A thermostat functions on its own without any human supervision or intervention when you leave your house, but it still has a limited set of functions it can perform.
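
One way to summarize the argument is to describe a system not with a single autonomy “level,” but with a separate value along each dimension for each function it performs. The sketch below is a notional data structure of my own, not an established taxonomy:

    from dataclasses import dataclass

    @dataclass
    class AutonomousFunction:
        task: str        # which function is being automated
        control: str     # "human in the loop", "human on the loop", or "human out of the loop"
        complexity: str  # "automatic", "automated", or "autonomous"

    # A home thermostat: unsupervised once you leave the house, yet very limited in scope.
    thermostat = AutonomousFunction(
        task="regulate room temperature",
        control="human out of the loop",
        complexity="automated",
    )

    # A notional uninhabited aircraft may have different profiles for different functions.
    takeoff_and_landing = AutonomousFunction("take off and land", "human on the loop", "automated")
    weapons_release = AutonomousFunction("release weapons", "human in the loop", "automated")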

Instead of thinking about “full autonomy,” we should focus on operationally relevant autonomy: sufficient autonomy to get the job done. Depending on the mission, the environment, and communications, the functions required for operationally relevant autonomy could look very different in different scenarios. In the air domain, operationally relevant autonomy might mean the ability of an aircraft to take off, land, and fly point-to-point on its own in response to human taskings, with a human overseeing operations and making mission-level decisions, but not physically piloting by stick and rudder. By that standard, for highly automated aircraft like the Global Hawk or MQ-1C Gray Eagle, operationally relevant autonomy is here today. In communications-denied environments, autonomy is sufficient today for an aircraft to perform surveillance missions, jamming, or strikes against pre-programmed fixed targets, although striking targets of opportunity would still require a human in the loop. For ground vehicles, operationally relevant autonomy might similarly mean the ability of the vehicle to drive itself in response to human taskings, without a human operator physically driving it. That is here today for leader-follower convoy operations and human-supervised operations, but not quite yet for communications-denied navigation in cluttered environments with potential obstacles or people. In the undersea environment, where communications are challenging but there are fewer obstacles, operationally relevant autonomy has already arrived: uninhabited undersea vehicles can already perform missions without direct human supervision.

“Autonomy” is not some point we arrive at in the future. Autonomy is a characteristic that will be increasingly incorporated into different functions on military systems, much like increasingly autonomous functions on cars: automatic lane keeping, collision avoidance, self-parking, etc. As this occurs, humans will still be required for many military tasks, particularly those involving the use of force. No system will be “fully autonomous” in the sense of being able to perform all possible military tasks on its own. Even a system operating in a communications-denied environment will still be bounded in terms of what it is allowed to do. Humans will still set the parameters for operation and will deploy military systems, choosing the mission they are to perform.

So the next time someone tells you, “it’s autonomous,” ask for a little more precision on what, exactly, they mean.

 

Paul Scharre is a fellow and Director of the 20YY Warfare Initiative at the Center for a New American Security (CNAS) and author of CNAS’ recent report, Robotics on the Battlefield Part II: The Coming Swarm. He is a former infantryman in the 75th Ranger Regiment and has served multiple tours in Iraq and Afghanistan.

 

Photo credit: Eirik Newth