AI’s Inhuman Advantage


When an AI fighter pilot beat an experienced human pilot 15-0 in the Defense Advanced Research Projects Agency’s AlphaDogfight competition, it didn’t just fly better than the human. It fought differently. Heron Systems’ AI agent used forward-quarter gunshots, fired while the two aircraft raced toward each other head-on, a shot that’s banned in pilot training because of the risk of collision. One fighter pilot characterized the AI’s abilities as a “superhuman capability,” making high-precision, split-second shots that were “almost impossible” for humans. Even more impressive, the AI system wasn’t programmed to fight this way. It learned this tactic all on its own. AI systems’ ability not just to perform better than humans, but to fight differently, is a major potential advantage in warfare. 

The militaries that will be most successful in harnessing AI’s advantages will be those that effectively understand and employ its unique and often alien forms of cognition. U.S. defense projects sometimes conceive of AI systems as operating like a teammate or copilot. Yet AI systems often think in radically different ways from humans. These differences can be an advantage, but only if warfighters understand AI’s unique inhuman strengths and weaknesses. The U.S. military should increase its investments in prototyping, experimentation, and wargaming with AI systems to better understand their potential in warfare and how best to employ them. 

Different Is Better

AI performance in games provides lessons for its potential advantages in warfare and the radical changes that may lie ahead. During AlphaGo’s celebrated victory over Lee Sedol in the Chinese strategy game go, it made a move that so stunned Lee that he got up from the table and left the room. AlphaGo calculated the odds that a human would have made that move (based on its database of 30 million expert human moves) as 1 in 10,000. AlphaGo’s move wasn’t just better. It was inhuman.



AlphaGo’s unusual move wasn’t a fluke. AlphaGo plays differently than humans in several ways. It will carry out multiple simultaneous attacks on different parts of the board, whereas human players tend to focus on one region. And AlphaGo has developed novel opening moves, including some that humans simply do not understand. Experts who study AlphaGo’s playing style describe it as “alien” and “from an alternate dimension.”

Similar inhuman playing styles have been seen in AI agents across a range of games. The AI system Libratus, which achieved superhuman performance in poker, plays differently than expert human players. It changes betting tactics more effectively than human players and makes bets that are unusually small or unusually large, sometimes twenty times the size of the pot. “It splits its bets into three, four, five different sizes,” Daniel McAulay (who lost to Libratus) told Wired magazine. “No human has the ability to do that.”

Chess grandmasters have pored over the moves of the chess-playing AI agent AlphaZero to analyze its unique playing style. AlphaZero learned to play chess entirely through self-play without any data from human games. It engages in “ferocious, unexpected attacks” on the opponent’s king, according to chess experts. It strongly favors moves that give it more options in the future. It will sacrifice chess pieces early for long-term advantage, including sacrifices that have no immediate gain but open positions to attack the opponent’s king. It particularly excels at mobility and combining attacks, using both in ways that are difficult for humans to replicate. 

AI agents’ advantages in games point to some of their potential in warfare. AI agents in games demonstrate superior precision, speed, coordination, situational awareness, resource management, aggressiveness, and risk-taking when compared to human players. The cumulative effect of these advantages in games is devastating to human opponents. These attributes are also valuable in warfare. AI agents have weaknesses, though. Their performance is often very brittle, and AI agents can struggle to adapt to small rule changes in games. These weaknesses could prove fatal in combat — where there are no rules — and militaries should be mindful of AI systems’ flaws. 

Thinking Differently About Strategy

Computer games, such as StarCraft II and Dota 2, are a valuable testing ground for AI performance. These games pit opposing sides in a battle to control territory and resources, with each player moving units around a digital battlefield to perform reconnaissance, resource collection, and combat. While vastly simpler than the real world, these games are highly complex relative to other games. At any given point in StarCraft II, there are approximately 10^26 possible actions a player can take. Because some information is hidden, players interact in a dynamic and constantly changing environment with limited knowledge. Computer strategy games also require agents to balance short-term tactics with long-term planning. A game of Dota 2 runs for approximately 20,000 time steps in which a player can make a move, far more than the roughly 80 moves per game in chess or 150 moves per game in go.
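The scale gap between these games can be made concrete with a quick back-of-the-envelope calculation using the approximate figures above. This is a toy comparison, not a rigorous measure of game complexity:

```python
# Rough decision-point counts cited above; all figures are approximations.
moves_per_game = {"chess": 80, "go": 150, "dota2": 20_000}

# StarCraft II: on the order of 10**26 possible actions at any given moment.
sc2_actions = 10**26

# Dota 2 offers roughly 250 times as many decision points per game as chess.
ratio = moves_per_game["dota2"] / moves_per_game["chess"]
print(f"Dota 2 vs. chess decision points per game: ~{ratio:.0f}x")

# Recover the exponent from the integer to display it compactly.
print(f"StarCraft II actions available at one moment: ~10^{len(str(sc2_actions)) - 1}")
```

Even this crude arithmetic shows why real-time strategy games are a far harder test for AI agents than board games: the decision space is orders of magnitude larger at every single step.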

AI agents have excelled in computer strategy games through superior command and control. AI players have access to the same information, resources, and units as human players. Their individual units have the same speed and abilities. Any advantage is due to AI agents’ ability to process information, make decisions, and take actions. AI agents’ victories demonstrate that machines can dramatically outperform humans in command and control, a potential major advantage in war.

It’s not just that AI agents are faster — although they are capable of being much faster than even the top professional human gamers. Left unconstrained, AI agents are effectively invincible in small-unit tactics in computer games, able to dodge enemy fire. Even when limited to human speed, AI agents are better at unit tactics. They can also absorb more information simultaneously, rather than having to divide their attention over multiple tasks. They are more precise and avoid wasting valuable actions, time, or resources. AI agents can also attack with greater coordination among multiple units or cooperative agents.

OpenAI’s Dota 2 agents, OpenAI Five, demonstrated many of these attributes. They were able to identify human players’ attacks and counter them faster than the humans could react, even while operating with a 200-millisecond delay intended to match human reaction times. OpenAI’s agents, each a separate team member controlled by a different AI player, were also able to precisely coordinate their attacks, hitting enemy units at exactly the right moment and with exactly the right amount of damage, without wasting resources. Their speed, precision, and coordination led them to particularly excel in team fights, where multiple agents cooperatively fight against several opponents. The bots also played with unusual aggressiveness relative to human players, constantly attacking. One human player said, “It felt like I was pressured at all times in the game.”

While the specific algorithms and tactics used for chess, go, poker, StarCraft II, or Dota 2 wouldn’t translate to real-world combat, AI’s superhuman speed, awareness, precision, coordination, calculated risk-taking, and aggression could be extremely valuable in combat. Militaries that trained algorithms to take on command-and-control functions could potentially render their competitors demoralized and helpless, just as AI agents have done in computer games.

Increased Speed and Forcing Errors

Across multiple types of games, some common patterns emerge about AI’s potential advantages over humans. The first is perhaps the most obvious: increased speed and scale of information processing. In chess, human grandmasters can calculate only 15 to 20 moves ahead, while AlphaZero evaluates some 60,000 positions per second. In dogfighting, where split-second timing matters, the AI agent isn’t burdened with the slowness of human cognition or reflexes. In capture-the-flag computer games, AI agents can tag opponents faster and more accurately than humans. In real-time computer strategy games, AI agents can execute tasks faster than humans, including multiple simultaneous actions.

AI agents can also look more holistically at the entire state of a game. In StarCraft II or Dota 2, an AI agent doesn’t need to focus its attention on a single part of the map where combat is unfolding, as a human does. It can take in information about the whole map simultaneously. This gives the AI agents greater orientation and awareness of the whole of the action and the ability to optimally prioritize resources. AI agents also demonstrate attentiveness to parts of the game that are not directly engaged in competition at a particular moment. Both AlphaZero, the chess-playing agent, and AlphaStar, the StarCraft II–playing agent, have demonstrated the behavior of redeploying pieces that are no longer needed after an attack, rather than waiting for them to be attacked first.

The superhuman attentiveness of AI agents also plays out in their ability to not make the sort of careless blunders that characterize even expert human play. The ability to play nearly flawlessly, even if in some circumstances unimaginatively, can be a tremendous advantage in many games, especially since games are designed to be roughly evenly balanced between opposing sides. After playing against AlphaStar, professional StarCraft II player Grzegorz “MaNa” Komincz noted, “I’ve realized how much my gameplay relies on forcing mistakes and being able to exploit human reactions.” Simply avoiding careless mistakes can be a major advantage for AI agents.

Another advantage is superhuman precision, which opens up novel strategies unavailable to humans — such as forward-quarter gunshots in dogfighting or perfectly calibrated team attacks in Dota 2. AI agents’ superhuman precision also enables them to operate extremely efficiently, conserving resources and allocating them with near-optimal efficiency. In strategy games that involve building up resources over time, this can lead to significant cumulative advantages.



AI agents also appear to have major advantages over humans in coordination and long-term planning. In chess, AlphaZero excels at combining multiple attacks. In Dota 2, AI agents demonstrate superhuman coordination not only in tactical actions, such as multi-character attacks, but also in strategic actions. When playing Dota 2, human players tend to divide up the map among teammates, with players switching locations only occasionally. OpenAI Five’s five AI agents switched their characters’ locations on the map more frequently than human players, flexibly adjusting as a team as the game progressed. In poker, go, and chess, AI agents make moves that appear weak at first but that gain them a long-term positional advantage. This advantage is not always present, however, and human observers have at times criticized AI agents for an apparent lack of long-term planning.

Novel Strategies

Across many games, AI agents have widened the space of available tactics and strategies, exhibiting a greater range of behaviors than human players. While the novel strategies of chess- and go-playing agents have often received attention, the same behaviors have been observed in other games, including poker and computer games. Professional StarCraft II player Dario “TLO” Wünsch remarked of AlphaStar, “The agent demonstrated strategies I hadn’t thought of before, which means there may still be new ways of playing the game that we haven’t fully explored yet.” In some cases, this increased variability is a direct benefit, as in poker, where unpredictability is a key advantage. In other cases, it has expanded how humans think about the game — such as in chess, where AlphaZero has led human grandmasters to explore new openings.

AI agents appear to have the ability to engage in dramatic shifts in strategies and risk-taking in ways that are different from human players and, in some cases, impossible for human players to match. In poker, Libratus can make wild shifts in bet sizes. In go, once AlphaGo has a secure advantage it plays conservatively, locking in what may be a narrow margin rather than pressing to widen the gap, because it is designed to maximize its chances of winning rather than its margin of victory over the other player. Yet AI agents don’t always play cautiously. In chess, AlphaZero will sacrifice pieces early in a game, taking risk for a longer-term advantage. In Dota 2, the OpenAI Five agents play aggressively, putting constant pressure on human players and never letting up for an instant. These agents have also demonstrated the ability to engage in more finely calibrated risk-taking than human players. OpenAI researchers noted:

Human players are often cautious when their hero has low health; OpenAI Five seemed to have a very finely-tuned understanding of when an aggressive attack with a low-health hero was worth a risk.
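AlphaGo’s conservatism when ahead follows directly from its objective function. A toy sketch with made-up numbers (illustrative only, not drawn from AlphaGo itself) shows how the choice of objective flips which move an agent prefers:

```python
# Hypothetical move options with invented win probabilities and margins.
moves = {
    "safe":   {"win_prob": 0.95, "margin": 1},    # win narrowly, very reliably
    "greedy": {"win_prob": 0.70, "margin": 30},   # win big, but less often
}

# An agent that maximizes expected margin of victory prefers the greedy move...
by_margin = max(moves, key=lambda m: moves[m]["win_prob"] * moves[m]["margin"])

# ...while an agent that maximizes the probability of winning picks the safe one.
by_win = max(moves, key=lambda m: moves[m]["win_prob"])

print(f"maximize expected margin -> {by_margin}")
print(f"maximize win probability -> {by_win}")
```

An agent rewarded only for winning, not for winning big, will rationally take the narrow, near-certain path once it is ahead, which is exactly the conservative behavior human observers see in AlphaGo.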

AI agents do not, in general, appear to play more aggressively or conservatively than humans. Rather, they appear to play in a manner that is more strategic and rational (one might say cold-blooded), regulating their degree of aggressiveness or caution to what is called for in the moment. While a human player might have a tendency toward conservative or aggressive play, AI agents seem capable of executing both extremes and pivoting quickly between them based on what is optimal for achieving their goal.

AI agents are not flawless. There are common themes in their weaknesses too. While their performance is simply superior to humans’ in some games, such as chess and go, the open-ended environments of real-time computer strategy games bring some of their limitations to light. One consistent weakness is that AI agents playing StarCraft II and Dota 2 appear to lean heavily on their advantages in small-unit tactics, perhaps to the detriment of long-term planning. In a diverse array of situations, AI systems frequently settle for suboptimal strategies when those strategies are easier to discover. Winning in simpler ways is easier, and the AI agents are playing to win. 
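The tendency to settle for easier-to-discover strategies can be illustrated with a toy learning loop (a hypothetical sketch, not any of the systems discussed here): a learner that never explores locks onto the first strategy that pays off, even when a better one exists.

```python
# Toy example: two strategies with invented payoffs. "patient_win" is
# strictly better, but a purely greedy learner commits to whichever
# strategy it happens to try first and never discovers the alternative.
import random

def train(epsilon, episodes=200, seed=42):
    rng = random.Random(seed)
    rewards = {"quick_win": 0.5, "patient_win": 1.0}   # true payoffs (hidden)
    Q = {"quick_win": 0.0, "patient_win": 0.0}          # learned value estimates
    for _ in range(episodes):
        if rng.random() < epsilon:
            action = rng.choice(list(rewards))          # explore at random
        else:
            action = max(Q, key=Q.get)                  # exploit current best
        # Nudge the estimate toward the observed reward.
        Q[action] += 0.5 * (rewards[action] - Q[action])
    return max(Q, key=Q.get)                            # strategy it settles on

print(train(epsilon=0.0))   # never explores: locks onto the quick payoff
print(train(epsilon=0.2))   # with exploration: finds the better strategy
```

The failure mode is not stupidity but optimization: the agent is maximizing reward with the experience it has, and an easy win discovered early can crowd out a harder-to-find but superior strategy.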

AI’s general characteristic of brittleness was also on display in some games. In poker, human players occasionally found parts of the game tree that Libratus had not mapped and where it performed poorly. (The researchers behind Libratus quickly improved its performance by running calculations on those parts of the game tree at night, while the human players were sleeping.) In one Dota 2 match, OpenAI allowed the audience to pick the AI team’s characters. The audience chose a poor lineup, and the AI agents performed poorly and inflexibly, using the same familiar tactics even though they were ill-suited to the new characters. OpenAI Five also played in a restricted game space, with certain characters and types of actions off-limits to reduce the complexity of the game. The final version of OpenAI Five played over 7,000 games on the internet, racking up an impressive 99.4 percent win rate against 15,000 human players. But the model was not as robust as these numbers might suggest. Every time the Dota 2 game was updated by its developer, such as by adding new characters, items, or maps, OpenAI researchers had to perform what they termed “surgery” on the AI model to adapt it to the new environment. The researchers similarly had to perform surgery when they made a new action or item available to the model as they matured its capabilities and introduced it to more complex environments. The alternative to this relatively manual engineering process was to retrain a new model entirely from scratch on the new game environment, which would have been both time- and cost-prohibitive. Without significant engineering work or retraining, the model frequently struggled to adapt to even modest changes. This brittleness is likely to be a major detriment in real-world settings, where the space of possible enemy actions is open-ended and the environment is not highly constrained and repeatable as it is in games.


In gaming environments, some advantages of AI agents are viewed differently than others. Superhuman precision and speed are often viewed as unfair advantages. The fact that Heron Systems’ AI dogfighting agent was able to take gunshots that are banned in training for human pilots could be seen as an unfair advantage. In computer games, programmers have frequently slowed down AI agents’ reaction times to match those of humans. AI agents’ superior strategic abilities, however, are often celebrated, such as their prowess at chess or go. In war, militaries may view these benefits differently. War isn’t fair, and superhuman speed and precision that enable better combat performance are likely to be welcomed. Conversely, AI decision-making that is somewhat mysterious, like the unconventional moves that AI agents sometimes make in poker, chess, and go, might be harder for militaries to embrace. It is easier for militaries to trust an AI agent whose advantage is clearly identifiable, such as quicker reflexes. Placing faith in an AI agent whose cognition is opaque and whose long-term plan is unknown may be a harder sell. Yet over time, as AI systems take on more roles, including in tactical planning and decision-making, military leaders may face decisions about whether to trust an AI system’s recommendations that they do not fully understand.

In settings where AI systems need to cooperate with humans, their alien cognition may be a disadvantage, and AI systems may need to be specifically trained to act like humans. In games such as Diplomacy that require cooperation with human players, AI agents must be specifically trained on human data. AI agents trained through self-play alone will play differently than humans.

Finding ways to optimally employ AI systems and combine them with humans in a joint human-machine cognitive system will be a difficult task. AI systems are sometimes characterized in defense projects as being teammates, as if they are another soldier in the squad or a copilot in the cockpit. But human-machine cognitive teams are fundamentally different from human-human teams. Militaries are adding into their warfighting functions an information processing system that can think in ways that are quite alien to human intelligence. Militaries that best learn how to marry human and machine cognition and take advantage of the unusual attributes of how AI systems think will have tremendous advantages. The U.S. military can best gain an edge in the disruptive changes ahead by investing in experimentation, prototyping, and wargaming to explore the unique opportunities and challenges in human-machine teaming. 



Paul Scharre is vice president and director of studies at the Center for a New American Security. This article is adapted from his new book, Four Battlegrounds: Power in the Age of Artificial Intelligence. Copyright (c) 2023 by Paul Scharre. Used with permission of the publisher, W. W. Norton & Company, Inc. All rights reserved.

Image: U.S. Air Force photo by Airman 1st Class Trenton Jancze