Warbots: Tactically Brilliant, Strategically Naive

Kenneth Payne, I, Warbot: The Dawn of Artificially Intelligent Conflict (Hurst, 2021)

As the Cold War intensified in 1979, Soviet officials tried to gain a strategic advantage by outsourcing analysis to a computer model. The KGB developed “VRYAN” — an acronym of its Russian name — to warn of any impending U.S. nuclear strike by crunching data on over 40,000 metrics. The model’s conclusions about declining Soviet power probably didn’t take the world to the brink of nuclear war. But its use exemplified how policymakers can be tempted to over-rely on technology without fully comprehending how such systems reach their judgments, a temptation that political and military officials should still guard against today.

How exactly should warfighters and policymakers use AI? That question is intertwined with a more fundamental one: In what ways can machines ever be truly intelligent? The nature of machine intelligence will inform how humans think about its use in conflict. If policymakers and warfighters do not understand the kind of intelligence possessed by battlefield AI systems, they could rely on those systems for inappropriate tasks. The consequences could be huge, dangerous, and potentially deadly.

In his latest book — I, Warbot — Kenneth Payne draws upon his academic background in psychology and strategy to take a fresh approach to AI, weaving together analysis of the nature of intelligence with an examination of AI’s uses. AI can enhance military capabilities such as reconnaissance and targeting, which play to machines’ strengths in processing speed and pattern recognition. Where the technology falls down is in situations that require creativity and intuition, capabilities that remain peculiar to organic life. Ultimately, militaries should only consider AI as an option in warfare where it can add value — in tactics, not in strategy.

The Narrowness of Machine Intelligence 

Much of the successful research into AI has focused on creating machines that can perform specific tasks. There is a good reason for this: Machines do well at narrow assignments with defined inputs. Many early projects began with “toy problems” such as moving blocks in microworlds in which all extraneous variables could be controlled. Researchers defined the inputs and the actions they expected the machine to take in various situations — along the lines of “if A happens, do B” — and the machine went through its assigned tasks. The problem lay in scaling this approach for use in the real world, a dynamic environment where extraneous variables fluctuate wildly. The machines often floundered when thrown into that type of setting.
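To make that brittleness concrete, here is a minimal sketch of the “if A happens, do B” approach in a toy microworld. The situations, rules, and action names are invented for illustration only; the point is that anything the designers did not anticipate simply has no rule.

```python
# A hand-written rule table mapping fully specified situations to actions,
# in the spirit of early block-moving microworlds. Everything here is a
# made-up illustration, not any particular historical system.

RULES = {
    ("block_on_table", "gripper_empty"): "pick_up_block",
    ("block_in_gripper", "target_clear"): "place_block_on_target",
    ("block_in_gripper", "target_occupied"): "wait",
}

def choose_action(situation):
    """Look the current situation up in the rule table."""
    # The brittleness described above: any state the designers did not
    # anticipate has no matching rule, so the system simply gives up.
    return RULES.get(situation, "no_rule_defined")

print(choose_action(("block_on_table", "gripper_empty")))  # pick_up_block
print(choose_action(("block_is_wet", "gripper_empty")))    # no_rule_defined
```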

Later projects went further into the realm of machine learning. Researchers could define the optimum outcome and the machine would work out how to get there for itself. This approach produced programs that, for example, could not only competently play games such as chess or Go but ultimately could beat the best human players. However, these machines were not really displaying a type of intelligence different from their block-moving predecessors. Their approach was one of “brute force” — once they had been told the rules of the game and what constituted winning, they simply used their superior processing power to search vast numbers of possible continuations and pick the most promising next move.
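The flavor of that search can be shown with a minimal sketch, using tic-tac-toe because it is small enough to enumerate completely (chess and Go are far too large for this, and real engines prune and evaluate rather than enumerate everything). The code is an illustrative toy, not the method of any particular game-playing system.

```python
# Exhaustive minimax over tic-tac-toe: given only the rules and what counts
# as winning, the program scores every possible continuation. An illustrative
# toy, not the approach of any real chess or Go engine.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if either has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, to_move):
    """Score the position from X's perspective: +1 win, 0 draw, -1 loss."""
    win = winner(board)
    if win:
        return 1 if win == "X" else -1
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0  # draw
    nxt = "O" if to_move == "X" else "X"
    scores = []
    for m in moves:
        board[m] = to_move
        scores.append(minimax(board, nxt))
        board[m] = " "
    return max(scores) if to_move == "X" else min(scores)

def best_move(board, to_move="X"):
    """Pick the move whose resulting position scores best for the mover."""
    nxt = "O" if to_move == "X" else "X"
    def score(m):
        board[m] = to_move
        s = minimax(board, nxt)
        board[m] = " "
        return s
    moves = [i for i, cell in enumerate(board) if cell == " "]
    return max(moves, key=score) if to_move == "X" else min(moves, key=score)

if __name__ == "__main__":
    # From an empty board every opening draws under perfect play,
    # so the first tying square (index 0) is returned.
    print(best_move([" "] * 9, "X"))
```

Scaling this idea up to chess or Go takes enormous processing power and aggressive pruning, but the underlying approach is still search within fixed rules rather than anything resembling human insight.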

AI is good at recognizing patterns, processing vast amounts of data, and quickly performing huge numbers of calculations. Humans cannot hope to compete with machines in these areas because we lack their computational power. But, while many enhancements have been made over the last decade, and AI can now perform numerous tasks with reasonable competence, machines still lack true creativity. A machine may find patterns in reams of data, but it cannot interpret them and determine whether they are actually meaningful.

Payne explores this through the lens of Margaret Boden’s work, which identifies three types of creativity: exploratory, combinatorial, and transformational. The sheer computational power of AI gives it excellent exploratory creativity — the ability to discover new things — and in this area it vastly outperforms humans. Machines may also appear to demonstrate combinatorial creativity, which entails creating new permutations of ideas. But, in reality, machines are just using exploratory creativity to uncover options that humans may not readily identify. Unlike AI, a human mind is capable of devising truly new combinations of ideas.

Transformational creativity is entirely beyond AI. Machines do not have the ability to open up a whole new panorama of ideas by transcending the original search parameters. The machines that have been created thus far may be excellent at their defined tasks, but they could never adapt to other ones — a chess-playing AI would be useless if you wanted to play Go. AI does not abstract its functions and cannot combine its talents into new ways of working to complete a different goal. Humans will likely always be superior in this type of creativity.

Warbots in Action

Understanding what AI can and cannot do really matters when it comes to using warbots in the field. If an AI program has excellent narrow intelligence and exploratory power, it can be tasked to perform a particular function — and may do so better than a human. However, it cannot be relied upon in areas where transformational creativity is needed. This is the key distinction — known to warfighters everywhere — between tactics and strategy.

AI excels in tactical situations: It can target enemy forces and assets quickly with its calculating power, use image recognition to find enemy tanks in satellite imagery, and parse massive amounts of data to identify useful intelligence. What AI cannot do is strategize. Without transformational creativity, AI is only as good as the parameters it works within, and it is impossible to code every strategic eventuality or risk calculation into a program — the real world is just too complex and too changeable. Winning a war is not like winning a game, which only requires making highly defined choices to achieve a limited goal or to maximize points scored. Furthermore, strategic decisions often involve weighing uniquely human considerations such as ethics, emotion, and the unpredictability of other humans’ actions.

Surely, then, the answer is for AI and humans to work together, combining their respective talents to form a synergistic whole? Perhaps, but not necessarily. Payne explores this via a thought experiment, a blend of human and machine decision-makers that he calls the “Universal Schelling Machine,” but by the end of his hypothetical analysis he has more questions than answers. We don’t know how the parts of the whole would work together — would the humans over-rely on the AI, believing it to be superior at its job, or would they not trust its suggestions because of its non-human nature? Would the AI’s lack of common-sense understanding prevent it entirely from working on the same abstract level as the humans? Would the adversary treat the team as though it were human, or would the enemy calculate risks differently due to the presence of AI on the opposing side?

Rules for Your New Warbot

Even if warbots are kept at a tactical level rather than allowed to branch out into strategy, they need to be constrained. Allowing a machine to use lethal force without the ethical basis of a human mind is the stuff of nightmares — and of a plethora of science fiction. Building upon Isaac Asimov’s famous rules bounding robot actions, Payne suggests three of his own:

  1. A warbot should only kill those I want it to, and it should do so as humanely as possible.
  2. A warbot should understand my intentions, and work creatively to achieve them.
  3. A warbot should protect the humans on my side, sacrificing itself to do so — but not at the expense of the mission.

The first of these reflects the ideals of accuracy and humanity when taking life, as we expect of our human warfighters. The second attempts to avoid the Sorcerer’s Apprentice problem of unintended consequences by ensuring that a warbot checks back to ascertain that it is still acting as its commander wishes. But the rule also recognizes that some flexibility is required — as with delegating to human personnel, it can be better to authorize units to adapt to changing conditions without constantly having to seek permission from higher levels of command. Payne acknowledges that there is a tension between efficiency and creativity in this regard, an issue that deserves further analysis.

The third rule recognizes that the desire for military personnel to protect one another is often bounded by the exigencies of the mission at hand and comes with complicated calculations of duty, sacrifice, and honor. Again, this rule generates more questions than answers and runs up against the problem of coding human concepts into a framework that can be used by a machine.

The Transformational Task Ahead

Payne’s rules reflect the tensions and uncertainties that we grapple with when thinking about human warfighters, and those tensions only become more complex when applied to machines. His attempt to codify them is a brave one, and it forms an excellent point of departure for the discussions that should follow on these difficult yet vital issues. While producing more questions than answers, the book is particularly valuable in highlighting the gaps in our understanding of machines in battle. Plenty of excellent work already exists on AI and its use in military contexts, but I, Warbot offers a new synthesis, tying together practical considerations with a more philosophical exploration of the nature of intelligence.

AI may be good at tactical tasks, but it still needs to be constrained in how it performs them if we are to be confident that the dire predictions of science fiction will not come true. We don’t yet know how to achieve that. Strategy remains out of AI’s reach, and there are many considerations to be dealt with before military commanders can rely on human-machine teaming in this realm. Armed forces around the world, including those of the United States and China, have not yet figured out how to deal with these critical issues, which will have serious implications for the future of war. I, Warbot will not reassure its readers about the prospect of using AI in warfare, but it was not intended to — and that is a good thing.

Emma Salisbury is working on her Ph.D. at Birkbeck College, University of London. Her research focuses on defense research and development in the United States and the military-industrial complex. She is also a senior staffer at the U.K. Parliament. The views expressed here are solely her own. You can find her on Twitter @salisbot.

Image: U.S. Marine Corps (Photo by Lance Cpl. Cheng Chang)