Rethinking Risk in Defense


Admiral William Gortney, commander of the North American Aerospace Defense Command (NORAD) and Northern Command, was recently discussing the threat posed by advanced Russian long-range conventional cruise missiles. These weapons, Gortney testified, provided Russia with deterrent options “short of the nuclear threshold.” As a result, “NORAD will face increased risk in our ability to defend North America against Russian air, maritime and cruise missile threats.”

That term—“risk”—is cropping up more and more frequently in national security assessments. Senior military and civilian leaders constantly refer to the importance of dealing with risk. Just about every piece of testimony now employs the term. Dozens of risk management frameworks crowd the national security enterprise. The 2001 Quadrennial Defense Review (QDR) introduced a framework for assessing risk in the national security enterprise that has since become a standard approach throughout the Department of Defense.

All of this is based on the idea that institutional or procedural risk management can be a powerful tool. So it can, and many sophisticated risk frameworks are making useful contributions to defense planning, for example within specific services. But the irony is that elements of the U.S. national security community are relying more and more heavily on an instrument that has been called into question in the field where it was most advanced—financial services. The financial industry had developed arguably the most elaborate procedures and models for measuring and mitigating risk, but those techniques didn’t prevent leading financial firms from often unknowingly swallowing massive amounts of risk that led to their destruction.

Over the last year I’ve conducted a study to consider the lessons of this experience. I have come to believe that, notwithstanding a number of well-designed risk frameworks being employed for very specific purposes, the way we use risk in national security has too often been ill-defined and misleading. We need a more focused and precise understanding of risk at the highest levels. In the process of developing one, we should judge risk processes by one fundamental criterion—the degree to which they contribute to the making of effective strategy.

Defining Risk

It’s commonly suggested that there are four basic elements of strategic logic: ends, ways, means, and risk. Most defense policy experts will have a pretty quick—and mostly shared—idea of what we mean by ends and means. “Ways” are a little more abstract, but there is a clear and well-established definition: the manner in which the means are employed to achieve the end.

Now ask yourself: What do we mean, in strategy, by “risk”? Chances are, some will say “gaps between ends and means.” Others will say “threats.” Still others will say, “Dangers created by my proposed strategy.” Some might answer the question by listing various categories of risk, such as operational, strategic and institutional. This is the essence of the problem at the moment: there are a half-dozen ways of thinking about risk in national security—which means, at the highest level of national policy, that there is none at all.

In its simplest sense, the concept of risk refers to things that can go wrong in relation to something we value. A fairly standard definition conceives of risk as the probability of a malign development multiplied by its consequence, though not all approaches use probability or consequence in quite that way. Broadly speaking, risk is the potential for something bad to happen, and risk management is the effort to assess and mitigate those possibilities.
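The textbook probability-times-consequence calculus is simple enough to sketch in a few lines of Python. The scenario names and figures below are hypothetical, chosen only to illustrate the arithmetic and why a low-probability, high-consequence event can dominate an expected-loss tally:

```python
# Minimal sketch of the textbook "risk = probability x consequence" calculus.
# All scenarios and numbers below are hypothetical illustrations.
scenarios = [
    # (name, probability of occurring, consequence if it occurs, in arbitrary cost units)
    ("supply disruption", 0.30, 10.0),
    ("capability gap",    0.05, 100.0),
    ("minor delay",       0.60, 1.0),
]

# Expected loss for each scenario: probability times consequence.
expected_losses = {name: p * c for name, p, c in scenarios}

# Note how the low-probability, high-consequence event ("capability gap")
# dominates the tally even though it is the least likely to occur.
total_expected_loss = sum(expected_losses.values())
print(expected_losses)
print(total_expected_loss)
```

Of course, this framing only works toward the deterministic end of the spectrum, where probabilities and consequences can be meaningfully estimated in the first place.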

Two challenges make any discussion of risk and risk management very tricky. One is that there are so many forms of and approaches to risk management that any general discussion could misinform. Any evaluation of a specific risk process needs to take into account its particular design, the problems it’s trying to address, and its methodologies.

A second challenge, closely related to the first, is that different types of national security issues demand very different risk approaches. National security issues fall across a wide spectrum, from very deterministic and predictable to totally uncertain—from the actuarial assessment of personnel costs to the choice of how big to make the Joint Force. Criticism of the way risk is used at one end of the spectrum might not apply to the other. Classic, data-driven risk analysis is entirely appropriate for some issues, and a number of services are using it in creative ways to assess key institutional choices. But a big lesson of the financial crisis is that large-scale strategic decisions under uncertainty are a very particular sort of problem: value-laden choices full of unpredictable variables, nonlinear dynamics, and human factors, for which no optimal answer is available—such as what balance to strike among various domains of military capability, or whether to invade Syria. Risk is commonly used to assess these choices, too, but the limits of what risk assessments can do under uncertainty call for extreme caution.

Challenges to the Effective Assessment of Risk

There are a number of obvious challenges to any effort to enhance our approach to risk. One is that, as noted above, the concept remains ill-defined. “Risk” is sometimes presented as the gap between requirements and capabilities, or as threats in the strategic environment, or as a synonym for chance, or the reservoir of possible negative consequences of actions. Both finance and national security have developed dozens of sometimes excruciatingly specific categories or types of risk: credit, reputation, regulatory, operational, strategic, institutional, and on and on.

If we want to use the term “risk” as merely a noun with a whole range of possible definitions, none of this poses a problem. But if we want a planning tool that offers meaningful analytical value to the making of strategy, we will need something more precise.

A second challenge stems from risk management’s origins as a probabilistic—that is to say, quantified—discipline. Formalized risk management has often been grounded in elaborate models and algorithms designed to promise reliable forecasts. From the infamous bond market models of Long-Term Capital Management in the 1990s to complex Value at Risk (VaR) approaches in pre-crisis investment banking, financial wizards convinced themselves that they had cracked the code of markets. Risk officers would report highly specific risk estimates to boards and CEOs (“there is a 5% chance of losing more than 25% of invested capital over the next year”). And the whole edifice was built—partly—on sand, promising a degree of mathematical reliability it could never deliver.
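The kind of highly specific estimate quoted above can be reproduced with a toy Monte Carlo VaR calculation. The sketch below assumes normally distributed annual returns, precisely the sort of distributional assumption whose fragility the crisis exposed; all parameters are hypothetical:

```python
import random

# Toy Value at Risk (VaR) sketch: simulate one-year portfolio returns under
# the classic assumption that returns are normally distributed, then read off
# the loss threshold exceeded only 5% of the time. All parameters are
# hypothetical illustrations, not a real model.
random.seed(0)

MEAN_RETURN = 0.07   # assumed 7% expected annual return
VOLATILITY  = 0.18   # assumed 18% annual standard deviation
N_TRIALS    = 100_000

returns = [random.gauss(MEAN_RETURN, VOLATILITY) for _ in range(N_TRIALS)]

# 95% VaR: the loss at the 5th percentile of simulated outcomes.
sorted_returns = sorted(returns)
var_95 = -sorted_returns[int(0.05 * N_TRIALS)]
print(f"95% one-year VaR: {var_95:.1%} of invested capital")

# The fragility: the normal distribution makes extreme losses look vanishingly
# rare, so fat tails and regime shifts simply never appear in the model's
# risk report, however precise its output looks.
```

The output has the seductive precision of the pre-crisis risk reports, but it is only as good as the distribution fed into it.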

This was because financial markets—like the national security context—are highly non-deterministic environments, characterized by complexity and uncertainty. Mistaking nonlinear environments for deterministic ones is a guaranteed path to ruin, a point that folks like Nassim Nicholas Taleb, Benoit Mandelbrot, and Paul Davidson have been making for years. The lesson for the emerging architecture of risk assessments in national security is clear enough—to be careful about using data-driven judgments to assess uncertain contexts.

Third, unless they are very carefully designed and executed, highly complex and seemingly deterministic risk presentations can camouflage more than they reveal. Critical and ultimately calamitous assumptions were baked into finance industry risk estimates for subprime-based derivatives, but these assumptions were seldom conveyed to senior decision makers. The same is often true in defense today: What judgments, assumptions and outright guesses had to be made in order to produce a given level of risk? How many were close-run findings that could easily have gone the other way? Too often risk assessments have involved subjective judgments used to generate color-coded assessments without sufficient detail on their assumptions. Such singular verdicts (“moderate risk”) can offer leaders the opportunity to close their minds when any good risk process ought to do just the opposite: make the assumptions and nuances behind the results explicit, and force senior leaders to discuss and debate key issues.

Fourth and finally, the two most common uses of the term—gaps between ends and means, and threats—can perhaps be better served by other functions in the strategy process. Means-ends relationships are critical, but can be captured in straightforward sufficiency analyses. Threats ought to be covered in any good environmental scan. Each of these two tasks is arguably better accomplished, and contributes more effectively to strategy, if the term risk is left out of these two dialogues altogether: “sufficiency” and “threat” are more exact and properly suggestive indications of what the strategist is trying to determine in these areas.

Toward a More Precise Concept

At the broadest levels of defense and national security planning, then, our use of risk could well fall short of expectations for its contribution to strategy. The obvious next question is what can be done about it—the principles of more effective assessment of risk. That will be the subject of a follow-on article, but for now it’s worth mentioning two important steps: define a shared concept of risk that is most likely to contribute to strategy, and create disciplined, institutionalized and routinized stages in the strategy-making process to assess risk in that way.

On the first issue, my research suggests that the most profound risk disasters come from insufficient attention to and awareness of the potential risky consequences of intended or favored strategies. The incentive structures facing senior leaders and the effects of a number of cognitive biases all tend to mute outcome-oriented risk analysis. When catastrophes strike, whether a financial crisis or the collapse of a company like Enron or a foreign policy debacle such as the Bay of Pigs or chaos in post-invasion Iraq, the culprit is often the same: decision makers simply refused to take seriously the potential consequences of their hoped-for plan.

A critical role for risk in strategy, then, is to focus on the potential negative consequences of strategic choices. Risk would be defined as “things that could go wrong through the implementation of alternative strategies.” The risk function would be structured to create a rigorous, disciplined dialogue about the possible implications of various options. The goal would not be to prevent bad outcomes. Instead the purpose of an outcome-oriented risk process would be to ensure that leaders make strategic judgments with eyes wide open to possible consequences.

Risk is, of course, sometimes used this way today. Many national security officials think of risk in terms of outcome analysis, whether consciously or not. When financial institutions conduct an operational risk assessment of a proposed new investment, they are doing outcome-oriented analysis. The best risk frameworks are designed to promote rigorous dialogues about the consequences, for operational or institutional objectives, of various choices.

But risk is not only used in this way, as we have seen, and certainly not always employed to open eyes to possible consequences. Dozens of historical cases show that deep thinking about consequences is brushed off—or even actively avoided—as much as it is embraced. No formalized process exists to ensure rigorous outcome-oriented risk assessment when making defense or national security choices. If we want such a function, the defense and broader national security establishment will need to make a conscious choice to ground that particular approach in doctrine, practice, and policy.

Institutionally, such a shift could be implemented with a few fairly simple reforms. A future defense policy document could build on the QDR risk framework with a definition of risk related to outcomes. Beyond the Defense Department, a future administration’s equivalent of NSPD-1, the basic document outlining the national security process, could specify that outcome-oriented risk assessments are required for all major national security decisions.

In both cases, the directives would need to lay out categories or criteria to measure outcome-oriented risk. A risk assessment would not be said to be complete, for example, unless it included an evaluation of effects on U.S. military institutions, reactions by other actors, possible second-order effects, the implications of failure, and so forth. The process should be structured not to generate singular findings (such as “moderate risk”), but rather to break out more discrete elements to generate disciplined and well-informed conversations about risk among senior leaders. An effective risk process should force decision makers to talk about potential consequences in rigorous and nuanced terms, with the goal of informing and shaping their judgment. The risk assessments themselves are not the goal—they are only means to the broader objective of risk-informed decision making.

As the use of risk in national security continues to evolve, a more focused, disciplined approach to outcome-oriented risk can help ensure that the concept offers real value to strategy. Risk dialogues can be a major ally in enhancing the strategic judgment of senior decision makers, but only if the national security establishment comes to a more specific and shared understanding of the concept.


Michael Mazarr is a senior political scientist at the RAND Corporation.  He completed this research before joining RAND, as a nonresident fellow at the New America Foundation.  The views expressed here are his own.


Photo credit: Chairman of the Joint Chiefs of Staff
