The Human Element in Robotic Warfare

March 11, 2015


Editor’s note: This is the fourth article in a six-part series, “The Coming Swarm,” on military robotics and automation as a part of the joint War on the Rocks-Center for a New American Security Beyond Offset Initiative. Read the first, second, and third entries in this series.


The first rule of unmanned aircraft is, don’t call them unmanned aircraft. And whatever you do, don’t call them drones.

The U.S. Air Force prefers the term “remotely piloted aircraft” to refer to its Predators, Reapers, and Global Hawks. And for Predators and Reapers, that undoubtedly is a reflection of their reality today. They are flown by stick and rudder by pilots who just happen to not be onboard the plane (and sometimes are on the other side of the globe).

For aircraft like the Global Hawk, which is largely automated and does not require a pilot with stick and rudder, but rather has a person direct the aircraft via keyboard and mouse, the question of whether they are “remotely piloted” is a bit murkier.

Is piloting about a physical act – controlling the aircraft directly via inputs to the flight controls – or about commanding the aircraft and being responsible for the mission, for airspace deconfliction, and for making decisions about where the aircraft should go?

Historically, the answer has been both. But automation is changing that. It’s changing what it means to be a pilot. A person no longer has to be physically on board the aircraft to be considered a “pilot.” Nor do they need to be physically controlling the aircraft’s flight controls directly. The day will soon come when, because of automation, a person can “pilot” multiple aircraft at the same time. It is already technically possible today. The cultural presumption that a person can only command one aircraft at a time is stalling implementation of multi-aircraft control.

But that will change.

Automation has long been colonizing jobs once performed by humans in a range of industries, from driving forklifts to writing newspaper stories. Its effect on military operations will be no less profound. While pilots may be the first to grapple with this paradigm shift, autonomous systems will raise the same issues across many military positions, from truck drivers to tankers. Autonomous systems will inevitably change how some military duties are performed and may eliminate some job specialties entirely. Physical prowess for some tasks, like piloting an aircraft, driving a vehicle, or firing a rifle, will be less important in a world where aircraft fly themselves, vehicles drive on their own, and smart rifles correct for wind, humidity, elevation, and the shooter’s movements all on their own.

For some military communities, the shift will be quite significant. Sometimes, this can lead to a reluctance to embrace robotic systems, with the fear that they are replacing humans. This is unfortunate because it could not be further from the truth. Autonomous systems will not replace warfighters any more than previous innovations like firearms, steam-powered ships, or tanks replaced combatants. These innovations did, however, change how militaries fight. Today’s infantrymen, sailors, and cavalrymen no longer fight with edged weapons, work the sails and rigging of ships, or ride horses, but the ethos embodied in their job specialties lives on, even as the specific ways in which warfighters carry out those duties have changed. Similarly, the duties of tomorrow’s “pilots,” “tank drivers,” and “snipers” will look far different from today’s, but the ethos embodied in these job specialties will not change. Human judgment will always be required in combat.

The Human Element

Terminology that refers to robotic systems as “unmanned” can feed false perceptions of the roles that human beings will or will not play. The Air Force is right to push back against the term “unmanned.” (Note: I often use it myself in writing because it has become common currency, but I prefer “uninhabited vehicle,” which is more accurate.) “Unmanned” implies a person is not involved. But robotic systems will not roll off the assembly line and report for combat duty. Humans will still be involved in warfare and still in command, but at the mission level rather than manually performing every task. Uninhabited and autonomous systems can help but also have shortcomings, and will not be appropriate for every task. The future is not unmanned, but one of human-machine teaming.

Militaries will want a blend of autonomous systems and human decision-making. Autonomous systems will be able to perform many military tasks better than humans, and will particularly be useful in situations where speed and precision are required or where repetitive tasks are to be performed in relatively structured environments. At the same time, barring major advances in novel computing methods that aim to develop computers that work like human brains, such as neural networks or neuromorphic computing, autonomous systems will have significant limitations. While machines exceed human cognitive capacities in some areas, particularly speed, they lack robust general intelligence that is flexible across a range of situations. Machine intelligence is “brittle.” That is, autonomous systems can often outperform humans in narrow tasks, such as chess or driving, but if pushed outside their programmed parameters they fail, and often badly. Human intelligence, on the other hand, is very robust to changes in the environment and is capable of adapting and handling ambiguity. As a result, some decisions, particularly those requiring judgment or creativity, will be inappropriate for autonomous systems. The best cognitive systems, therefore, are neither human nor machine alone, but rather human and machine intelligences working together.

Militaries looking to best harness the advantages of autonomous systems should take a cue from the field of “advanced chess,” where human and machine players cooperate together in hybrid, or “centaur,” teams. After world chess champion Garry Kasparov lost to IBM’s chess-playing computer Deep Blue in their 1997 rematch (he had won their first match in 1996), he founded the field of advanced chess, which is now the cutting edge of chess competition. In advanced chess, human players play in cooperation with a computer chess program, with human players able to use the program to evaluate possible moves and try out alternative sequences. The result is a superior game of chess, more sophisticated than would be possible with either humans or machines playing alone.
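The centaur division of labor described above can be sketched in a few lines of code. This is an illustrative toy, not a real chess engine: the machine scores candidate options, and the human reviews those scores and makes the final call. The move encoding and scoring heuristic are placeholders of my own invention.

```python
def machine_evaluate(moves):
    """Toy 'engine': score each candidate move (higher is better).

    Placeholder heuristic standing in for a real evaluation function:
    prefer moves closer to an arbitrary 'center' value of 4.
    """
    return {m: -abs(m - 4) for m in moves}

def human_choose(moves, scores):
    """Stand-in for the human teammate: reviews machine scores, may override.

    A human player might reject the engine's top line for strategic reasons;
    here we accept it to keep the sketch deterministic.
    """
    return max(moves, key=scores.get)

candidate_moves = [1, 3, 4, 7]
scores = machine_evaluate(candidate_moves)
chosen = human_choose(candidate_moves, scores)
print(chosen)  # the move the human-machine team plays
```

The point of the sketch is the structure, not the heuristic: the machine does the fast, exhaustive evaluation, while final authority, and the ability to depart from the machine's recommendation, stays with the person.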

Human-machine teaming raises new challenges, and militaries will need to experiment to find the optimum mix of human and machine cognition. Determining which tasks should be done by machines and which by people will be an important consideration, and one made continually challenging as machines continue to advance in cognitive abilities. Human-machine interfaces and training for human operators to understand autonomous systems will be equally important. Human operators will need to know the strengths and limitations of autonomous systems, and in which situations autonomous systems are likely to lead to superior results and when they are likely to fail. As autonomous systems become incorporated into military forces, the tasks required of humans will change, not only with respect to what functions they will no longer perform, but also which new tasks they will be required to learn. Human operators will need to be able to understand, supervise, and control complex autonomous systems in combat. This places new burdens on the selection, training, and education of military personnel, and potentially raises additional policy concerns. Cognitive human performance enhancement may help and in fact may be essential to managing the data overload and increased operations tempo of future warfare, but has its own set of legal, ethical, policy, and social challenges.
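One common pattern for dividing tasks between machine and human is confidence-based routing: the autonomous system acts on its own only when its confidence exceeds a threshold, and defers ambiguous cases to a human supervisor. The sketch below is a hypothetical illustration; the threshold value is an assumption for the example, not doctrine from any fielded system.

```python
def route_decision(confidence, threshold=0.9):
    """Decide who acts, given the machine's confidence in its own judgment.

    confidence: machine's self-assessed probability (0.0-1.0) that its
        recommended action is correct.
    threshold: illustrative cutoff; in practice this would be tuned to the
        task's risk and tempo, and set by policy rather than hardcoded.
    """
    return "machine" if confidence >= threshold else "human"

print(route_decision(0.97))  # routine, well-understood case: machine acts
print(route_decision(0.55))  # ambiguous case: deferred to human judgment
```

The design choice this encodes matches the article's argument: machines handle the structured, repetitive cases where they excel, while decisions requiring judgment in novel or ambiguous situations are pushed back to people.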

How militaries incorporate autonomous systems into their forces will be shaped in part by strategic need and available technology, but also in large part by military bureaucracy and culture. Humans may be unwilling to pass control for some tasks over to machines. Debates over autonomous cars are an instructive example. Human beings are horrible drivers, killing more than 30,000 people a year in the United States alone, or roughly the equivalent of a 9/11 attack every month. Self-driving cars, on the other hand, have already driven nearly 700,000 miles, including in crowded city streets, without a single accident. Autonomous cars have the potential to save literally tens of thousands of lives every year, yet rather than rushing self-driving cars onto the streets as quickly as possible, regulators and manufacturers are moving forward cautiously. Given the state of the technology today, even if autonomous cars are far better than human drivers overall, there would inevitably be situations where the autonomy fails and humans, who are better at adapting to novel and ambiguous circumstances, would have done better in that instance. Even if, in aggregate, thousands of lives could be saved with more autonomy, humans tend to focus on the few instances where the autonomy could fail and humans would have performed better. Transferring human control to automation requires trust, which is not easily given.

War is a Human Endeavor

Many of the tasks humans perform in warfare will change, but humans will remain central to war, for good or ill. The introduction of increasingly capable uninhabited and autonomous systems on the battlefield will not lead to bloodless wars of robots fighting robots, with humans sitting safely on the sidelines. Death and violence will remain an inescapable component of war, if for no other reason than that bringing wars to an end will require real human costs. Nor will humans be removed from the battlefield entirely, telecommuting to combat from thousands of miles away. Remote operations will have a role, as they already do in uninhabited aircraft operations today, but humans will also be needed forward in the battlespace, particularly for command-and-control when long-range communications are degraded.

Even as autonomous systems play an increasing role on the battlefield, it is still humans who will fight wars, only with different weapons. Combatants are people, not machines. Technology will aid humans in fighting, as it has since the invention of the sling, the spear, and the bow and arrow. Better technology can give combatants an edge in terms of standoff, survivability, or lethality, advantages that combatants have sought since the first time a human picked up a club to extend his reach against an enemy. But technology alone is nothing without insight into the new uses it unlocks. The tank, radio, and airplane were critical components of the blitzkrieg, but the blitzkrieg also required new doctrine, organization, concepts of operation, experimentation, and training to be developed successfully. It was people who developed those concepts, drafted requirements for the technology, restructured organizations, rewrote doctrine, and ultimately fought. In the future, it will be no different.

War will remain a clash of wills. To the extent that autonomous systems allow more effective battlefield operations, they can be a major advantage. Those who master a new technology and its associated concepts of operation first can gain game-changing advantages on the battlefield, allowing decisive victory over those who lag behind. But technological innovation in war can be a double-edged sword. If this advantage erodes a nation’s willingness to squarely face the burden of war, it can be a detriment. The illusion that such advantages can lead to quick, easy wars can be seductive, and those who succumb to it may find their illusions shattered by the unpleasant and bloody realities of war. Uninhabited and autonomous systems can lead to advantages over one’s enemy, but the millennia-long evolution of weapons and countermeasures suggests that such weapons will proliferate: No innovation leaves its user invulnerable for very long. In particular, increasing automation has the potential to accelerate the pace of warfare, but not necessarily in ways that are conducive to the cause of peace. An accelerated tempo of operations may lead to combat that is more chaotic, but not more controllable. Wars that start quickly may not end quickly.

The introduction of robotic systems on the battlefield raises challenging operational, strategic, and policy issues, the full scope of which cannot yet be seen. The nations and militaries that see furthest into a dim and uncertain future to anticipate these challenges and prepare for them now will be best poised to succeed in the warfighting regime to come.


Paul Scharre is a fellow and Director of the 20YY Warfare Initiative at the Center for a New American Security (CNAS) and author of CNAS’ recent report, “Robotics on the Battlefield Part II: The Coming Swarm.” He is a former infantryman in the 75th Ranger Regiment and has served multiple tours in Iraq and Afghanistan.
