Adding Shots on Target: Wargaming Beyond the Game


What will future wars look like? Fiction offers a range of answers — some contradictory. Is the priority urban security, as depicted in the dystopian sci-fi world of Judge Dredd, or warfare in space, as shown in the sci-fi series The Expanse? Will advances in autonomy bring robot overlords like the Terminator or help-mates like Tony Stark’s Jarvis? Figuring out what the future may look like — and what concepts and technology we should invest in now to be prepared — is hard. To do it well, we need to consider how America might take advantage of different futures. To this end, former Deputy Secretary of Defense Bob Work and Gen. Paul Selva, vice chairman of the Joint Chiefs of Staff, challenged the wargaming community to build a cycle of research to help understand what these paths might be.

But what is the cycle of research? Put simply, it’s a process for using multiple tools with different strengths and weaknesses to examine the same problem from many angles, an approach that a range of game designers recommend. Like any other method, games have limitations: They produce a specific type of knowledge that is helpful in answering some questions, but not others. Games cannot be expected to provide a credible prediction of the performance of a new weapon or a detailed understanding of the cost of acquiring a platform. However, using gaming in conjunction with modeling and exercises allows different types of evidence to be gathered, which should yield stronger results.

But what should the cycle look like if it is going to help us understand the future of conflict? Based on my practice as a national security game designer, I’ve found the following five steps can help guide effective follow-on analysis.

  1. Plan for the Cycle from the Get-Go

Game designers and sponsors should work in advance to understand how the game will connect to and complement other research in the cycle. In some cases, it may be possible to anticipate likely lines of research questions and plan for a series of studies. For example, an office seeking new technology to solve a tactical problem may run a game to identify likely candidates and then plan to model the most promising ones. Alternatively, an office may opt for a series of games that look at the same problem from different domain perspectives, with the expectation that each game will build on lessons from the one before.

In other cases, the utility of multiple methods may reveal itself over time. For example, a recent RAND project designed a game-theoretic model of conflict in space to identify conditions that support deterrence. The research team developed an initial model of possible decisions an actor could make to escalate or de-escalate a budding conflict in space, but given the costs of building and running a program that could examine thousands of cases, they wanted to make sure that the model accurately reflected human behavior before they began programming. The team designed a short manual game where subject-matter experts were asked to manage a conflict that could easily escalate into war in space. We watched the players to see if they would behave the same way the model predicted. For example, we hypothesized that players would be more aggressive when they felt themselves at a disadvantage. Over and over, players acted out of a concern that they needed to “appear strong” — escalating the conflict exactly as the model predicted.

In cases like this, the precise question or method that will be used may not be known until later stages of study. Even when we can’t anticipate all the tools that will be used, simply ensuring that contracts and budgets are designed to support short exploratory tasks or follow-on work, should it prove appropriate, can dramatically lower the barriers to working in a cycle.

  2. Identify Topics for Further Analysis from Surprising Moments

Game designers and participants in successful games often describe an “a-ha moment” — an unexpected game event or a statement made in the game that offered new insight on a familiar problem. For example, in the space game, participants took actions not for their operational effect, but rather to signal intentions. While the game designers had not previously included signaling actions in the design of the model, as soon as we heard it we knew it must be included. Similarly, in the RAND Baltic games, players realized again and again that the short distance between the Russian border and the Baltic capitals required forces to be prepositioned in order to have a fighting chance.

While sometimes an interesting idea is self-evident, it can be helpful to think through precisely what is happening when we have these insights. Games help participants to compare their mental model of the problem to the events of the game and realize that one or the other is incomplete. In the case of the space game, it was the game’s model that had fallen short, while in the case of the Baltic games, participants needed to update their understanding. Other types of “a-ha” moments common in games include players finding a new use for an emerging technology, or opponents taking a surprising action.

To an analyst, these moments, which represent the gap between expected and observed behavior, can hone the next stage of research. As a result, the more detail captured in game discussion and post-game reporting — about both what players and designers thought before the game and how their mental models changed because of the game — the more precisely the information can guide further stages of research.

  3. Develop Specific Research Questions

Once we have identified topics of interest, we can refine the specific research questions. While games designed to foster innovation vary widely, two key types of questions tend to result from them — confirmatory and exploratory.

Confirmatory questions ask whether the game results are really representative of the real world. Games make tradeoffs that simplify many aspects of the world in order to illuminate others. Players are often not the same as decision-makers, aspects of the environment are not included in the game’s scenario, and layers of bureaucracy are stripped away. In other words, the game cannot simulate everything about the real world and these simplifications could change the findings. As a result, it is critical to ask how simplifications affected the key results of the game.

Another way of thinking about this is that the initial game generates a hypothesis that we will then evaluate further in later stages. For example, if the opponent took a surprising, but advantageous, action, can we find evidence in their doctrine or past behavior that indicates whether such behavior is likely to actually occur? Or can we consult a broader sample of subject matter experts to develop a “wisdom of crowds” prediction? Such follow-on studies add to the credibility of game findings.

RAND’s series of games on the Baltics has followed such a path. The initial series highlighted the challenge of NATO defense by comparing results across a range of games with different players and conditions to build an early understanding of the problem. We have since seen more detailed studies of specific capabilities that can provide different types of information about the problem, such as technical analysis that wargames are not well suited to providing. The games were critical in determining what parts of the problem were most important to prioritize in these deeper dives.

In contrast, exploratory questions ask whether the results can be extended or transferred to a broader range of circumstances. We often want to know if changing key actors, environmental factors, and capabilities will result in different patterns and outcomes. Alternatively, we can think of this as a type of brainstorming — the game has helped to generate a set of compelling insights, and now we want to see if they may illuminate other problems or decisions. For example, if a new technology helps solve an operational problem in one theater, is it equally valuable in a game with a different scenario set in another part of the world? This type of analysis helps us understand how robust our findings are across many different possible problems.

An example of this type of analysis is a series of games RAND designed to help build an initial framework that described the roles of different stakeholders in cyber-security. The designers wanted to ensure that the framework was relevant for a wide range of incidents and actors, so games varied both the backgrounds of the players and the problem set.

It’s important to note that the two types of analysis answer different questions, so both may be appropriate to pursue in different stages of the cycle, depending on the overall objectives of the research program. For example, early iterations of the Baltic games were exploratory, which then fed into the confirmatory analysis previously described. However, it’s helpful to be very clear about the purpose of the current effort — interpreting exploratory analysis as confirmatory can cut off discovery before a problem is really understood, and approaching research that is intended to confirm findings as exploratory can lead to unfocused analysis.

  4. Select an Appropriate Research Method

Confirmatory and exploratory questions both stem from the same types of wargames, but different methods are most useful for addressing them.

For confirmatory questions, analyses or exercises to test or otherwise examine the ideas generated in the game may be most useful. For example, in order to understand how typical a game’s results are, statistical analyses of a model or simulation may help to confirm the average and extreme behaviors of relevant actors across many different conditions. Alternatively, to understand if players’ decision making lines up with an actor’s historical behavior, a study of organizational decision making may be a better tool. In other cases, military exercises can provide data about how real forces would execute the concept conceived of in a game when considerations like fog and friction apply. Regardless of the approach selected, the goal is to triangulate the different evidence provided by the methods so that weaknesses in one approach can be counterbalanced by the strengths of the others.

In some cases, we may even want to move between methods multiple times. For example, to develop a model of crisis stability and escalation on the Korean Peninsula, a RAND team ran a series of workshops and wargames that allowed gradual development and refinement of a model that could capture complicated decision-making dynamics with considerable nuance. By bringing different tools and perspectives to bear, the resulting model was far more credible than the initial sketch would have been on its own.

By contrast, if we are interested in exploring whether the new idea holds in other situations, it might be more helpful to run a game, study, or analysis that tries to change as little as possible except for a key difference that is of interest. For example, running similar game scenarios but changing the opponent or theater could help provide insight into how different adversaries or environments affect those results, while running the same scenario against different force structures could illustrate the importance of a different mix of tools to respond to the problem.

  5. Be Intentional With Changes

When building on research, it can be tempting to change too many aspects of the problem. For example, between games held a year apart, a designer may need to add new players, update the scenario, and refine the rules to correct faults in game play. Alternatively, when transitioning from a game to a model, it may be necessary to move from representing an organization’s decision-making process with a single human decision-maker to modeling it with a mathematical optimization formula. Such changes are often required due to both practical and methodological limitations, but they make it difficult to compare stages of research and limit how much sponsors can learn cumulatively. It is best to minimize the number of changes that are not driven by the research question.

When such changes are necessary, it is critical to document them transparently so that fair comparisons across games can be made and sponsors can understand how to best apply the results they receive. Even the best research has limitations, and it is most useful when these are acknowledged openly. This transparency allows the consumer of analysis to come to his or her own conclusions about the strength and reliability of findings — and is critical for rigorous analysis.

 

Elizabeth “Ellie” Bartels is a doctoral candidate at the Pardee RAND Graduate School and an assistant policy analyst at the nonprofit, nonpartisan RAND Corporation.

Image: U.S. Navy