What Chess Can Teach Us About the Future of AI and War
This article was submitted in response to the call for ideas issued by the co-chairs of the National Security Commission on Artificial Intelligence, Eric Schmidt and Robert Work. It addresses the first question (part a.), which asks how artificial intelligence will affect the character and/or the nature of war.
Will artificial intelligence (AI) change warfare? It’s hard to say. AI itself is not new — the first AI neural network was designed in 1943. But AI as a critical factor in competitions is relatively novel and, as a result, there’s not much data to draw from. However, the data that does exist is striking. Perhaps the most interesting examples are in the world of chess. The game has been teaching military strategists the ways of war for hundreds of years and has been a testbed for AI development for decades.
Military officials have been paying attention. Deputy Defense Secretary Robert Work famously used freestyle (or Centaur) chess to promote the third offset strategy, where humans and computers work together, combining human strategy and computer speed to eliminate blunders while allowing humans to focus on the big picture. Since then, AI and supercomputers have continued to reshape how chess is played. Technology has helped to level the playing field — the side with the weaker starting position is no longer at such a disadvantage. Likewise, intimidation from the threat of superhuman computers has occasionally led to some unorthodox behaviors, even in human-only matches.
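The centaur division of labor described above, in which the human supplies strategy and the computer catches blunders, can be sketched as a simple evaluation comparison. This is an illustrative toy, not any real engine's interface; the move names, scores, and `blunder_check` function are invented for the example.

```python
# Toy sketch of a centaur-style blunder check: the human proposes a move,
# the computer compares its evaluation against the best available
# alternative and flags any large drop. All moves and scores here are
# invented placeholders, not output from a real chess engine.

def blunder_check(candidate_move, legal_moves, evaluate, threshold=1.5):
    """Flag candidate_move if it scores far below the best legal move.

    evaluate(move) -> score (in pawns) from the mover's perspective.
    Returns (is_blunder, size_of_the_drop).
    """
    best_score = max(evaluate(m) for m in legal_moves)
    drop = best_score - evaluate(candidate_move)
    return drop > threshold, drop

# Hypothetical evaluations for three options in some position.
scores = {"advance": 0.3, "trade": 0.1, "sacrifice": -2.0}
is_blunder, drop = blunder_check("sacrifice", list(scores), scores.get)
```

In this sketch the human remains free to choose among the moves the check does not flag, which is the big-picture role Work envisioned for the human half of the team.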
The experience of AI in the chess world should be instructive for defense strategists. As AI enters combat, it will first be used just in training and in identifying mistakes before they are made. Next, improvements will make it a legitimate teammate, and — if it advances to superhuman ability in even narrow domains of warfighting, as it has in chess — then it could steer combat in directions that are unpredictable for both humans and machines.
What Does Chess Say About AI-Human Interaction?
Will AI replace soldiers in war? The experience of using AI and machine learning in chess suggests not. Even though the best chess today is played by computers alone, humans remain the focus of the chess world. The world computer chess championship at the International Conference on Machine Learning in Stockholm attracted a crowd of only three when I strolled by last year. In contrast, the human championship was streamed around the globe to millions. In human-only chess though, AI features heavily in the planning process, the results of which are called “prep.” Militaries are anticipating a similar planning role for AI, and even automated systems without humans rely on a planning process to provide “prep” for the machines. The shift toward AI for that process will affect how wars are fought.
To start, computers are likely to have an equalizing effect on combat as they have had in chess. The difference in ability among the top competitors in chess has grown smaller, and the advantage of moving first has diminished. That was evident in last year’s human-only chess championship, where competitors had the closest ratings ever in a championship and the best-of-12 match opened with 12 straight draws for the first time. There have been more draws than wins in every championship since 2005, and though it is not known exactly why, many believe it is due to the influence of superhuman computers aiding underdogs, teaching defensive play, or simply perfecting the game.
AI is likely to level the military playing field because progress is being driven by commercial industry and academia — which will likely disseminate their developments more widely than militaries would. That does not guarantee all militaries will benefit equally. Some countries may have better computers, be able to pay for more of them, or have superior data to train on. But the open nature of computing resources makes cutting-edge technology available to all, even if that is not the only reason for equalization.
AI Favors the Underdog and Increases Uncertainty
AI seems to confer a distinct benefit to the underdog. In chess, black goes second and is at a significant disadvantage as a result. Fabiano Caruana, a well-known American chess player, claimed that computers are benefiting black. He added that computer analysis helps reveal many playable variations and moves that were once considered dubious or unplayable. In a military context, the ways to exert an advantage can be relatively obvious, but AI planning tools could be adept at searching and evaluating the large space of possible courses of action for the weaker side. This would be an unwelcome change for the United States, which has benefited from many years of military superiority.
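The search-and-evaluate role described above can be illustrated with a minimal game-tree search. The tree, action names, and payoffs below are invented for the example, and real planning tools would search vastly larger spaces with learned evaluations; the point is only that an exhaustive search can surface a "dubious" option that convention would have dismissed.

```python
# Minimal minimax sketch of searching a space of courses of action.
# Leaves carry a payoff for the underdog; internal nodes map action
# names to subtrees. All labels and values are hypothetical.

def minimax(node, maximizing):
    """Return (value, best_line) for the player to move at this node."""
    if not isinstance(node, dict):          # leaf: payoff for the underdog
        return node, []
    best = None
    pick = max if maximizing else min
    for action, child in node.items():
        value, line = minimax(child, not maximizing)
        if best is None or pick(value, best[0]) == value:
            best = (value, [action] + line)
    return best

# Hypothetical course-of-action tree: the underdog chooses first,
# then the stronger side responds.
tree = {
    "orthodox": {"counter": -1, "ignore": 0},
    "dubious":  {"counter": 1,  "ignore": 2},  # "unplayable" until searched
}
value, line = minimax(tree, True)
```

Here the search recommends the unorthodox line because, once every response is evaluated rather than assumed, it turns out to favor the weaker side — the same effect Caruana attributes to computer analysis of openings once considered dubious.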
Other theories exist to explain the underdog’s improvement in chess. It may be that computers are simply driving chess toward its optimum outcome, which some argue is a tie. In war, it could instead be that perfect play leads to victory rather than a draw. Unlike in chess, the competitors are not constrained to the same pieces or set of moves. Then again, in a limited war where mass destruction is off the table, both sides aim to impose their will while restricting their own pieces and moves. If perfect play in managing escalation does lead to stalemate, then AI-enhanced planning or decision-making could drive toward that outcome.
However, superhuman computers do not always drive humans toward perfect play and can in fact drive them away from it. This happened in a bizarre turn in last year’s chess world championship, held in London. The “Queen’s Gambit Declined,” one of the most famous openings that players memorize, was used to kick off the second of the 12 games in the London match, but on the tenth move, the challenger, Caruana, playing as black, chose neither of the standard next moves in the progression. During planning, his computers had helped him find a move that centuries of play had all but ignored. When the champion Magnus Carlsen, who is now the highest-rated player in history, was asked how he felt upon seeing the move, he recounted being so worried that his actual response can’t be reproduced here.
It is not so much that Caruana had found a new move that was stronger than the standard options. In fact, it may have even been weaker. But it rattled Carlsen because, as he said, “The difference now is that I’m facing not only the analytical team of Fabiano himself and his helpers but also his computer help. That makes the situation quite a bit different.” Carlsen suddenly found himself in a theater without the aid of electrical devices, having only his analytical might against what had become essentially a superhuman computer opponent.
His response might presage things to come in warfare. The strongest moves available to Carlsen were ones that the computer would have certainly analyzed and his challenger would have prepared for. Therefore, Carlsen’s best options were either ones that were certainly safe or ones that were strange enough that they would not have been studied by the computer.
When asked afterward whether he had considered a relatively obvious option that he passed up seven moves later in the game, Carlsen joked that “Yeah, I have some instincts … I figured that [Caruana] was still in prep and that was the perfect combination.” Fear of the computer drove the champion, arguably history’s best chess player, to forgo a move that appeared to be the perfect combination in favor of a safer defensive position, a wise choice if Caruana was in fact still in prep.
In war, there will be many options for avoiding the superhuman computing abilities of an adversary. A combatant without the aid of advanced technology may choose to withdraw or retreat upon observing the adversary doing something unexpected. Alternatively, the out-computed combatant might drive the conflict toward unforeseen situations where data is limited or does not exist, so as to nullify the role of the computer. That increases uncertainty for everyone involved.
How Will the U.S. Military Fare in a Future AI World?
The advantage may not always go to the competitor with the most conventional capabilities or even the one that has made the largest computing investment. Imagine the United States fighting against an adversary that can jam or otherwise interfere with communications to those supercomputers. Warfighters may find themselves, like Carlsen, in a theater without the aid of their powerful AI, up against the full analytical might of the adversary and their team of computers. Any unexpected action taken by the adversary at that point (e.g., repositioning their ground troops or launching missile strikes against unlikely locations) would be cause for panic. The natural assumption would be that adversary computers had found a superior course of action that accounted for the most likely American responses many moves into the future. The best options then, from the U.S. perspective, become those that are either extremely cautious, or those that are so unpredictable that they would not have been accounted for by either side.
AI-enabled computers might be an equalizer that helps underdogs find new playable options. However, this isn’t the only lesson that chess can teach us about the impact of AI-enabled supercomputers on war. For now, while humans still dominate strategy, there will still be times when the computer provides advantages in speed or in avoiding blunders. When the computer overmatch becomes significant and apparent, though, strange behaviors should be expected from the humans.
Ideally, humans deprived of their computer assistants would retreat or switch to safe and conservative decisions only. But the rules of war are not as strict as the rules of chess. If an enemy turns out to be someone aided by feckless computers, instead of superhuman computers aided by feckless humans, it may be wise to anticipate more inventive — perhaps even reckless — human behavior.
Andrew Lohn is a senior information scientist at the nonprofit, nonpartisan RAND Corporation. His research topics have included military applications of AI and machine learning. He is also co-author of “How Might Artificial Intelligence Affect the Risk of Nuclear War?” (RAND, 2018).