Experimentation Can Help Build Better Security Partners


The United States has an imperfect record building security partners, to put it charitably. Iraq is a case in point. Since 2003, the Defense Department has spent at least $26 billion in security aid just to build Iraq’s military. The result is well-known. Iraqi security forces collapsed in 2014 as the Islamic State occupied large parts of Iraq and Syria. As this and other examples demonstrate, the Pentagon is adept at spending a fortune on training battalions, but has been unable to create, concurrently, the institutional capacity needed to sustain and effectively employ these units and capabilities.

Despite this checkered history, many assumptions undergirding the country’s approach to improving defense institutions abroad continue to go unchallenged. The Defense Department has established a range of security cooperation programs, from those aimed at improving tactical performance to those assisting in the creation of new — or the reform of existing — ministries of defense. Although current methods and models of security cooperation policy, management, and execution should be subject to rigorous experimentation, experts inside and outside the Defense Department have unfortunately been satisfied with an unworkable status quo.

The Status Quo Is Unsustainable

This status quo should be questioned on two levels. First, there are simply too many failures in institutional capacity building (i.e., an institution’s ability to absorb, use, and sustain military capabilities) to ignore — in Iraq, Afghanistan, and even Central and Eastern Europe. Second, none of the current security cooperation programs and their approaches have ever been subject to sufficiently rigorous validation. Providers of advice and assistance (e.g., the three services, their education institutions, and private contractors) and their Defense Department sponsors will often just assert that their methods constitute “best practice.” Usually, the only after-action review is drafted by providers themselves, which gives them the opportunity to grade their own homework rather than being subjected to an objective evaluation of their methods and assumptions.

Fortunately, Congress is on the case. The 2017 National Defense Authorization Act mandated that new efforts be undertaken to assess, monitor, and evaluate the effectiveness of security cooperation activities. The Defense Department has made some progress, e.g., the Defense Department and State Department improved coordination efforts, ensuring greater integration of their individual programs.

As the Defense Department lead for security cooperation policy, the Defense Security Cooperation Agency (DSCA) should initiate a modest program to begin experimentation at the national level of ministries and general staffs, areas which have proven difficult to reform and remain a high priority in U.S. policy. Failures in assisting security partners to improve their defense governance are legion, and some examples will be examined in this article. As such, it is not difficult to identify specific areas that have proven resistant to successive reform efforts over many years.

Diagnosing the Problem

A key explanation as to why U.S. efforts have been so unsuccessful is that the system has yet to come to grips with the fact that many of the impediments to building greater institutional capacity are conceptual in nature. Another is that reforms do not take account of partners’ cultures, or the organizational and sociological realities of their defense institutions. As to the former, all armed forces and bureaucracies are based on concepts (e.g., decentralized decision-making, empowering commanders, professional soldiers). Partners emerging from a recent history of totalitarianism find that democratic concepts are antithetical to their legacy counterparts and thus cannot coexist with them in harmony. Equally, if a partner’s culture has the trait of “high power distance,” the armed forces will find it very difficult to introduce decentralized decision-making, let alone a professional noncommissioned officer corps. To date, these factors have been all but systematically ignored or dismissed in the execution of both security assistance and security cooperation policies, and, as others have pointed out, this allows a lazy one-size-fits-all approach to dominate.

Over the years, various combatant commands, the services, and Defense Department organizations have attempted to address these weaknesses in key partner and allied defense institutions, but to little avail. What is troubling is that there is little evidence that longstanding advice and assistance concepts (i.e., ideas and approaches), assumptions, and methods have been subject to a systematic review and validation. Rather, all too often, U.S. Defense Department practices are exported as “models” and are overly reliant on education and training, as opposed to trying to effect deep organizational change. Most disturbing, even when some models have been implemented, institutional capacity has not improved (for example, the Defense Department’s budgeting method for programs in Central and Eastern Europe).

Identifying a Solution

To address these challenges, DSCA should conduct field experiments that determine how institutional capacity building can be better conceptualized to improve the ability of partners to create and maintain modern capabilities. Breaking from previous practice, this effort needs out-of-the-box thinking (and maybe in some instances, no “box”) that focuses on proposing noncomplex approaches and methods, all of which must be tested and validated by rigorous and systematic experiments to determine what does and does not work in partner and allied recipient countries.

To initiate this effort, DSCA leadership should determine an area that has defied reform efforts over many years and across regions (e.g., defense planning). A task-organized group of tested experts should draft a proposal to identify new and innovative approaches to address this “wicked problem” area using new ideas and methods, which can then be subjected to rigorous experimentation. To ensure responsibility for the success or failure of the experiment, the same team should be involved in both the innovation and the experiment. (Clearly, if the experts recommend tried-and-failed approaches and concepts, then others will need to be found until innovation is demonstrated.)

Field experimentation design should include:

  • Develop concepts based on a thorough examination of the relevant literature and target countries’ defense institutions, to create a sound basis from which to define the problem. This can be limited to a specific partner or allied country, a cohesive subregion, or an entire region. For instance, what are the conceptual, as opposed to the official, bases of the armed forces? Thus, if the partner has introduced the concept of professional noncommissioned officers, does the officer corps actually use them as such?
  • Conduct hard qualitative analysis of the factors that have impeded recipient defense institutions from effecting reform in an identified functional area. In lieu of simply identifying symptoms, what is the actual cause of the problems being addressed? For example, acquisitions could fail because inputs are insufficient (e.g., requirements are stated too vaguely), as opposed to execution.
  • Develop original conceptual options that address these factors and offer possible new methods and techniques. In lieu of an overreliance on training and education, a different approach could focus on gaining a full understanding of the conceptual baseline of the defense institution and identifying which specific concepts need to be retired before being replaced with modern methods.
  • Conduct table-top exercises within the experimentation team, followed by table-top exercises and focused seminars with regional defense officials, analysts, and experts to subject options to critical analysis and review. For instance, a systematic walk-through of the logic behind new planning procedures could identify, inter alia, residual old thinking that still must be retired and replaced.
  • Address in these experiments relevant prevailing political and cultural factors that can inhibit, or possibly facilitate, the adoption of organizational change. In short: identify who stands to win or lose in the bureaucracy to signal to senior leaders possible opponents to change.
  • Subject these proposed concepts and approaches to a series of table-top exercises with mid- and senior-level officials from select defense institutions in the region. This will help determine whether senior leadership is on board with recommended reforms and will also signal to the U.S. embassy that political top-cover is needed.
  • Assess responses and comments to proposed concepts and approaches and revise or caveat as warranted.
  • Present results of experiments to senior DSCA officials for review and possible endorsement for use in other allied and partner nations with necessary caveats.

Upon approval of these concepts and methods, DSCA should issue draft guidelines on the possible applicability of the experiments’ findings to assisting other defense organizations. These guidelines could also be used to inform how security cooperation funding is allocated, and serve as a basis for educating the security cooperation workforce, organized in modules for use in the Security Cooperation University, which is in the process of being created.

New Approaches Needed

New approaches to conceptualizing and delivering security cooperation are needed, particularly when addressing national-level weaknesses in partner states. The stakes are high. First, successful security cooperation advances important American national interests. Second, continued underperformance will make it harder to convince Congress that funding these programs is worthwhile. The Defense Security Cooperation Agency — the lead agent in these efforts — can implement new and innovative concepts to improve allies’ and partners’ institutional capacity at a fraction of the cost of previous efforts. Failure to experiment with new methods and concepts will leave the Defense Department open to the criticism that it is dangerously close to fulfilling Einstein’s definition of insanity — doing the same thing repeatedly, while expecting a different outcome.


Thomas-Durell Young is with the Institute for Security Governance, Naval Postgraduate School, Monterey, California and Editor-in-Chief of Defense & Security Analysis. He is the author of Anatomy of Post-Communist European Defense Institutions: Mirage of Military Modernity (Bloomsbury, 2017). The views expressed in this article are those solely of the author and do not reflect the policy or views of the Naval Postgraduate School, Defense Security Cooperation Agency, or the Department of Defense.

Image: U.S. Army