Boxed In: The Bad Side of Best Practices in Intelligence

Debora Pfaff

“Best practices” is such an innocuous term. It’s casually tossed around the workplace when admiring a new development, complimenting a colleague’s hard work, or institutionalizing a process that claims to improve some outcome or another. And it nearly always succeeds in perking up the ears of those around us. A best practice, you say? Is this something we should be doing? How did those people do it? Let’s study their processes and apply them to our problems. As a species, we learn behaviors and repeat what works; it is a fundamental principle of evolution. As any New Yorker will tease, “How do you get to Cooperstown? Practice, practice, practice!”

But it’s not entrance to the Baseball Hall of Fame the intelligence community is seeking. The more we practice, the better we get – but not when we ignore potentially superior ways of doing things because we’ve always done them some other way. Or when we get so absorbed in ensuring that we’ve met every standard, applied every methodology, and explored each known avenue that we forget humans have metacognitive brains for a reason: to think.

Best practices are often distilled down to a checklist that purports to capture all of the relevant data on the matter – a meta-analysis of expertise and experiences in 10 easy steps. While this is convenient, it tends to thwart precisely what the intelligence community needs most: creativity, cognition, and risk-taking. Standards, methodologies, lists, exemplars, case studies — these are all things the intelligence community relies heavily upon to drive its assessments. It’s certainly not alone; best practices are used ubiquitously from the water industry to immigration services to impart the most effective strategies quickly and efficiently, then move on to the next problem. Just like any other industry, the intelligence community has found its creativity hampered by a reliance on checklists. However, the intelligence community faces an additional issue many companies do not: Its problems are often a great deal more complex, much less predictable, and don’t always have an identifiable cause and effect — even upon careful reflection in hindsight.

As the world grows increasingly complex, chaotic, and interconnected, best practices will become more and more obsolete. The intelligence community has collected its way into the problem it now faces: a check-the-box list of practices that worked much better in a world with cement walls instead of virtual ones. It is boxed in.

Limiting Creativity

Let’s examine the creativity constraint first. Pilots fly using manuals. When something goes wrong, the emergency checklists come out. A famous recent example is US Airways Flight 1549, which Capt. Chesley “Sully” Sullenberger ditched in the middle of the Hudson River after a bird strike. When Sully realized both engines had failed, his co-pilot Jeff Skiles began reading procedures from a “dual-engine failure” checklist. But, as the accident report from the National Transportation Safety Board found, the checklist was written for flights at a much higher altitude than Flight 1549, which had just taken off from LaGuardia. Sully and Skiles accomplished only a fraction of the items on the checklist before the plane hit the water, relying instead on experience, intuition, teamwork, and critical and creative thinking to land the plane as safely as possible. Even if they had made it to the bottom of that particular checklist, they still would have lacked a key directive: to seal doors and hatches for a water landing. That guidance was on a different checklist, one Sully never picked up.

The aerospace industry is known for its reliance on codified procedure, so it’s not surprising that the National Transportation Safety Board suggested a few dozen changes after Flight 1549 met the Hudson River, several of which included updates to checklists. But experts in other fields watched the crash and subsequent investigation carefully, and internal debates quickly ensued, such as the one in the “Letters to the Editor” section of the journal Anesthesia & Analgesia. There, several anesthesiologists from around the world debated Sully’s actions in terms of their own field, discussing the clinical advantage of using cognitive aids during operations. The doctors argued compellingly that Sully’s judgment, and not the checklist, was the deciding factor.

Best-practice checklists serve a few functions: they are memory and attention aids, they certify a task’s completion, they clarify correct procedures, and they serve as monitoring tools. Nowhere in that list is there any indication of change. In fact, it reads like an awfully mundane assembly-line description: aids…task…procedures…monitoring. Best practices aren’t designed to encourage innovation or creativity; they’re designed to promote the status quo.

Consider the number of checklist suggestions that appear when any leadership term is typed into Google. Forty percent of the suggestions for completing “emotional intelligence” as a search term involve a test, quiz, or appraisal. Type “ethical leadership” into Google, and every single one of the first ten suggested links includes some list that instructs you how to be an ethical leader. Type “innovative leadership,” and the first entry links to a white paper by the Center for Creative Leadership titled “How to use innovation to lead effectively, work collaboratively, and drive results.” One chapter of the paper is “Six Innovative Thinking Skills” — another checklist.

Now, let’s apply this creativity constraint to the intelligence community. Sticking with the aviation theme, the most obvious example is 9/11, specifically the failure of imagination cited by the 9/11 Commission Report:

Since the Pearl Harbor attack of 1941, the intelligence community has devoted generations of effort to understanding the problem of forestalling a surprise attack. Rigorous analytic methods were developed, focused in particular on the Soviet Union, and several leading practitioners within the intelligence community discussed them with us. These methods have been articulated in many ways, but almost all seem to have at least four elements in common: (1) Think about how surprise attacks might be launched; (2) identify telltale indicators connected to the most dangerous possibilities; (3) where feasible, collect intelligence on these indicators; and (4) adopt defenses to deflect the most dangerous possibilities, or at least trigger an earlier warning.

Note the language — these analytic methods were focused on the Soviet Union. The intelligence community employed methods suited to a problem half a century old, focused on collection and known indicators, rather than new methods developed to facilitate warning. It relied on reporting from multiple sources that confirmed what it already suspected, rather than giving serious consideration to an outlier. A lack of formal analytic methods, limited collaboration with foreign allies, a tendency to work alone – these were traditional habits, not approaches optimized for an evolving threat environment. The intelligence community either failed to recognize how operational differences between the Soviet Union and al-Qaeda would affect its approach, or lacked the agility to adjust its methodology quickly enough to be effective.

The commission recommended “institutionaliz[ing] imagination.” By definition, imagination is “given permission to play without pragmatic intent”; it is “an entirely private process of internal consciousness…Private imaginings may have no outcomes in the world at all.” To suggest that imagination should be bureaucratized — made routine, pragmatic, or characterized by a set of fixed rules — is a reflection of a community so wedded to rules and policies that it has to have a process for something that by definition cannot be a process, just to make it stick.

A notable difference between the intelligence community and other industries is that the intelligence community’s checklists always call for the acquisition of more information. Intelligence Community Directive 203 establishes the standards that govern the production and evaluation of analytic products. Peppered throughout the document is language that indicates how imperative more data is (regardless of how much is already available):

…analysts must consider alternative perspectives and contrary information…to be continually aware of events of intelligence interest…should be based on all available sources of relevant information…continued currency of information…quality and quantity of source material…

An Increasingly Chaotic World

In December 2010, a fruit vendor set himself on fire in protest in Tunisia; within months, Egyptian President Hosni Mubarak was swept from power (and later convicted of corruption) in the cascading flurry of events now called the Arab Spring. We could have developed all sorts of additional collection mechanisms, and we still wouldn’t have predicted this development. Why? It falls into what Dave Snowden, a leading researcher in the theories of sensemaking, narrative, and knowledge management, terms a “chaotic context” in his Cynefin framework. Cause and effect are impossible to identify. There are too many players, an infinite number of possibilities, and the butterfly effect means a tiny change can have a huge (unpredictable) impact somewhere else.

Snowden’s framework posits five decision-making “contexts”: simple, complicated, complex, chaotic, and disordered. He explains why best practices are not appropriate for a complex, chaotic, or disordered context – that is, the contexts the intelligence community operates in every day:

Simple and complicated contexts assume an ordered universe, where cause-and-effect relationships are perceptible, and right answers can be determined based on the facts. Complex and chaotic contexts are unordered—there is no immediately apparent relationship between cause and effect, and the way forward is determined based on emerging patterns…The very nature of the fifth context—disorder—makes it particularly difficult to recognize when one is in it.

Best practices, Snowden argues, are only useful in a simple context, where facts are known and decisions can be made more easily.
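To make the mapping concrete, the framework’s pairings of context and response can be summarized in a short sketch. This is purely illustrative Python of my own devising: the context names and the sense/analyze/probe/act pairings come from Snowden’s published framework, but the Context type and recommended_approach function are hypothetical conveniences, not anything Snowden ships as code. Note that a fixed best practice appears in exactly one branch.

```python
from enum import Enum, auto

class Context(Enum):
    """The five decision-making contexts of Snowden's Cynefin framework."""
    SIMPLE = auto()       # ordered: cause and effect obvious to all
    COMPLICATED = auto()  # ordered: cause and effect knowable via expert analysis
    COMPLEX = auto()      # unordered: cause and effect visible only in retrospect
    CHAOTIC = auto()      # unordered: no discernible cause and effect
    DISORDER = auto()     # unclear which of the other contexts applies

def recommended_approach(context: Context) -> str:
    """Return the leader's response the framework pairs with each context.

    A fixed "best practice" is the prescribed tool only in the SIMPLE
    context; every other context calls for something more adaptive.
    """
    approaches = {
        Context.SIMPLE: "sense, categorize, respond: apply best practice",
        Context.COMPLICATED: "sense, analyze, respond: apply good (expert) practice",
        Context.COMPLEX: "probe, sense, respond: let practice emerge",
        Context.CHAOTIC: "act, sense, respond: improvise novel practice",
        Context.DISORDER: "decompose the situation to find the real context",
    }
    return approaches[context]

# A chaotic problem (an Arab Spring, a Flight 1549) gets no checklist:
print(recommended_approach(Context.CHAOTIC))
```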

Best practices may offer the “90 percent solution,” but they are ill-equipped to deliver the remaining 10 percent. They will get the intelligence community close to forecasting national security threats—but the intelligence community isn’t in the business of horseshoes and hand grenades. Sully’s checklist got him to the 90 percent solution when his engines failed, but it wouldn’t have landed him and his 155 passengers safely. His plane was at a much lower altitude than the checklist assumed, rendering it nearly useless; he could never have arrived at a solution without thinking through the situation’s complexities.

The 90 percent solution is arrived at via collection. It’s amassing the metadata to come up with a cogent solution to a known, predictable issue. We used to live in a world like this — when the intelligence community was formed in 1947, it was built around these types of problems. America’s adversaries behaved, for the most part, in a linear fashion: The whole was equal to the sum of its parts, cause and effect could be identified, behaviors in response to the same stimuli were repeatable, and input was proportionate — put in a little, get a little out. The noise compared with the signal was relatively manageable.

Today, as Josh Kerbel writes, the “global system is effectively defined by fluid, heterogeneous, widely distributed, nonhierarchical networks—in contrast to the comparatively static, homogeneous (state-centric) and dichotomous hierarchies.” Globalization has expanded communications in a way previous technological improvements did not. People from disparate parts of the world are now not only aware of events in the furthest corners of the earth; they can also engage in them. Any one of the 7.6 billion people on the planet can feasibly participate in — and influence — the very problems the intelligence community is frantically trying to collect on. There are 44 zettabytes of data in the digital universe, and the next 9/11 could be hidden in any one of them.

While the digital age rapidly sped up the transformation from complicated to complex, there were indicators that should have alerted us to the paradigm shift decades ago. Between August 1964 and December 1968, the RAND Corporation conducted 2,400 interviews on the organization, operations, and morale of the Viet Cong, ultimately producing 62,000 pages for the Department of Defense. The U.S. government knew everything there was to know about this topic, yet “there was substantive disagreement among the RAND researchers involved in Vietnam research at the time, and contrary points of view with totally different implications for U.S. operations.” Policymakers had all the facts; there were no intelligence gaps, no unknowns, no additional collection requirements. But the Viet Cong were non-state actors. They did not exhibit a hierarchical structure; rather, they were networked, unpredictable, and asymmetric.

The three RAND analysts who examined the reports each came to a different conclusion, and ultimately, the wrong one was used to inform U.S. policy. Why? Because President Lyndon Johnson heard only the viewpoint he wanted to hear—that the Viet Cong were demoralized and about to give up, and the United States would win the war. And that made sense to him, because firepower was what had made the United States successful in the past. The limits of intelligence tradecraft in Vietnam were just one of many early indications that an emphasis on collection and a faith in best practices were poorly suited to the complexities of a changing world.

Conclusion

The intelligence community’s ultimate best practice — collection — will continue to produce intelligence failures. Collecting additional information — no matter how much — is not the solution to a 10 percent problem. The Great Wall of China has become the Great Firewall of China, phone books have become Facebook, and Germans in the former East and West are now separated by their propensity to get flu shots rather than by a cement wall. Collection, checklists, and analytic best practices may produce the 90 percent solution, but information has fundamentally changed, and the sheer number of people retrieving and disseminating that information means uncertainty abounds. We can no longer rely on collecting the information we need to analyze and predict national security threats. The intelligence community can expect the landscape to keep changing rapidly; if it does not develop adaptive means of responding to those changes, it cannot hope to protect the nation’s security interests with any measure of success, and it certainly cannot do so with outdated ones.

And yet even the entities dedicated to innovation remain focused on more, more, more. The public minutes of the January 2017 Defense Innovation Board provide a key example. Noting that the “predominant technology that cemented a state’s military superiority has shifted over the past few decades from nuclear weapons to the technical ability to leverage massive amounts of data rapidly to gain strategic advantage,” the report goes on to describe “information as a core competency.” It recommends letting “humans focus on what they are good at and let[ting] machines focus on what they are good at.”

Humans are good at thinking. And yet we are obsessed with getting more information rather than thinking differently about the information we already have. That’s not to say that standards, practices, and checklists need to be thrown into the Hudson River. It’s still important that people identify and impart the most effective exercises in their respective professions. But that’s not enough. The corporate world is starting to recognize this, albeit slowly and without much actual change in approach. As business coach Paul David Walker put it, “a best practice can be useful, but it is not what creates the future…It is especially dangerous in a time of dramatic shifts in paradigms, which is what we’re going through now.” The intelligence community is going through a paradigm shift as well. If it doesn’t want its analysis ending up between the horoscope and the comics section of the president’s daily briefing, it needs to take conscious, cognitive steps to avoid the many hidden pitfalls of best practices.

As for what those steps are: you’ll notice there’s no checklist here. I’m sure you can figure it out for yourself. Indeed, you will have to.

Dr. Debora Pfaff is the Vice Provost of Student and Academic Affairs at National Intelligence University. She has worked in the intelligence community for 14 years. She publishes and presents on topics related to leadership and management, national security, and ethics. All statements of fact, analysis, or opinion are the author’s, and do not reflect the official policy or position of the National Intelligence University, Department of Defense or any of its components, or the U.S. government.

Image: Defense Department