Scholars Help Policymakers Know Their Tools


Critics bemoan the lack of policy-relevant scholarship in academic international relations (IR). This perspective overlooks a burgeoning cluster of academic IR studies that address one of the most basic informational needs foreign-policymakers have: an understanding of which foreign policy tools work and which do not.

Consider the deep body of academic IR work addressing the effectiveness of foreign policy tools in three critical areas: counterinsurgency (COIN), counterproliferation, and conflict resolution.

The Iraq and Afghanistan wars launched a wave of academic studies assessing COIN strategy tools. These studies have explored many important questions in the context of these wars, such as whether:

The recent application of quantitative empirical methods injected new life into IR’s decades-long interest in proliferation. New studies have used quantitative methods to explore whether:

Academic interest in conflict resolution has expanded in recent years, especially towards the application of experimental methods. This work has examined questions such as whether:

All of the categories of studies I have mentioned (and linked to) assess specific foreign policy tools. They do not evaluate less mutable conditions like polarity, globalization, or democracy; they examine the tools themselves, informing policy in the most direct way.

That said, some might argue that not all of this academic IR work is directly relevant to critical U.S. foreign policy interests. Of course, any critic would recognize that at least some of it addresses key American national security interests such as proliferation and counterinsurgency.

But, echoing a point made recently by Laura Sjoberg, it is important to avoid the mistake of equating policy relevance with relevance to traditional American national security interests, both for instrumental reasons and as an end in itself. Instrumentally, key American security interests are directly affected by human security issues such as human rights, environmental degradation, gender equality, and labor migration. More broadly, many of us share the fundamental aim of improving the human condition globally, in addition to advancing American national security interests, and welcome the opportunity to study how a variety of actors, including the United States, other governments, international institutions, and non-governmental organizations, might help achieve that end.

Another possible critique is that academic IR work is general and insensitive to context, and therefore unusable by policymakers. IR academics are not producing the kind of work a senator could read in the morning and use that afternoon to decide whether to send more aid to fight rebels in Ukraine.

This critique is both narrowly true and narrow in perspective. Context is of course important, but foreign policy choices are not sui generis; there are patterns across space and time that inform decision-making. Policymakers recognize this and routinely draw lessons from history when making foreign policy decisions. As noted below, policymakers in other areas such as development and public health routinely rely on broader, more general studies to craft policy. And broader scholarship can improve foreign policy performance, as evidenced by the ability of IR academics to build on their own work to predict outcomes, including, for example, forecasting the lengths of the conventional and insurgency phases of the U.S.–Iraq conflict in the 2000s.

But, even if one were to accept the limits of general work, there is a growing body of academic work that evaluates foreign policy tools as applied to a specific country or region. These studies ask questions such as whether:

This is not by any means a dismissal of professional intelligence work. Academics are not intelligence analysts: They do not have access to contemporary intelligence data, nor are they generally trained to do things like examine the latest satellite photos of North Korean nuclear activities and make judgments about North Korea’s current plutonium production. And certainly, academic IR work can never replace professional intelligence work. But the best policy decisions marry timely, specific intelligence with academic work that has a more general perspective.

A third critique is that much of this academic work on foreign policy tools is unusable by policymakers because it is too quantitative and technically complex. Here, echoing a point made by Erik Voeten, there is a danger in not appreciating the importance of rigorous research design, including sophisticated quantitative techniques, for crafting effective policy. Sophisticated research design is not the enemy of effective policy; it is critically necessary for it. Indeed, the current academic focus on building research designs that permit causal inference speaks exactly to what policymakers care about most: whether implementing a certain policy will cause the desired outcome.

Or, put differently, bad research designs make for bad public policy. A classic example is school busing. In the 1960s and early 1970s, some cities adopted voluntary integration programs for public schools, in which families could volunteer to bus their children to schools in neighborhoods with different racial majorities. Policymakers used the favorable results for the voluntary programs to make the improper inference that mandatory busing policies would also work. The result was bad public policy and violence in the streets.

Sophisticated technical methods can improve our ability to make causal inferences, and can help solve other empirical problems. Consider that the heart of successful counterinsurgency is, according to U.S. military doctrine, winning the support of the population. Assessing whether certain policies do win public support requires collecting opinion data. A conventional method for measuring popular opinion is the survey, but of course, individuals in insurgency-stricken areas may be unwilling to reveal their true opinions to a survey-taker out of fear for their personal safety. Methodologists have crafted sophisticated techniques for addressing this issue, improving our ability to measure public support for the government in these areas. These techniques have been used to assess better the determinants of public support in insurgency-affected countries such as Pakistan, Afghanistan, and India.
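One such indirect-questioning technique is the item-count (or "list") experiment: respondents report only how many items on a list they endorse, and the sensitive item (say, support for the government) appears on the lists of a randomly chosen treatment group. The difference in mean counts between groups estimates support without any individual ever revealing their answer. A minimal simulation sketch, with invented numbers purely for illustration:

```python
import random

random.seed(0)

def simulate_list_experiment(n=10_000, true_support=0.30,
                             baseline_probs=(0.5, 0.4, 0.6)):
    """Simulate an item-count (list) experiment.

    The control group counts endorsements of three innocuous items;
    the treatment group's list also includes the sensitive item.
    No respondent ever states the sensitive answer directly.
    """
    control_counts, treatment_counts = [], []
    for i in range(n):
        # Count of innocuous items this respondent endorses
        count = sum(random.random() < p for p in baseline_probs)
        if i % 2 == 0:  # randomly assigned to treatment
            count += random.random() < true_support  # sensitive item added
            treatment_counts.append(count)
        else:
            control_counts.append(count)
    # Difference in mean counts estimates prevalence of the sensitive item
    return (sum(treatment_counts) / len(treatment_counts)
            - sum(control_counts) / len(control_counts))

estimate = simulate_list_experiment()
print(round(estimate, 2))  # close to the true 0.30
```

The key design choice is that randomization lets the innocuous baseline difference out in expectation, so the aggregate reveals what no single response does.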

Going forward, we will continue to need advanced methodologies to address pressing policy questions. Consider the U.S. military’s commitment to gender integration. The implementation of this commitment will be best informed if it rests on rigorous social science that addresses outstanding questions. Is there a “Sacagawea effect,” in which mixed-gender units engaged in counterinsurgency are more effective than male-only units? How might mixed-gender composition affect small-unit cohesion in combat? How might mixed-gender units reduce the incidence of sexual assault, both within the military and by troops against civilians?

Certainly, other areas of public policy understand the importance of rigorous research design. The economic and development policy communities read the work of, and employ, economics Ph.D.s. Policymakers in areas such as microfinance, gender empowerment, and foreign aid know that the best policy decisions must incorporate the findings of sophisticated studies.

Or consider public health policy. Lives are literally on the line as officials decide on issues such as vaccinations, nutritional recommendations, and air quality. Policymakers know they must draw on sophisticated technical studies by epidemiologists and other public health academics to craft the best policies.

Critics will argue that some U.S. policymakers remain alienated from contemporary academic IR work, with the suggestion that if IR academics let go of an obsession with technique, they will then be better able to connect with policymakers and help them craft better policy. I agree that IR academics need to find ways to communicate their results in clear, non-technical language. But the technical components of the work need to be there. Stripping them out directly undermines the ability of the research to give the right kinds of policy recommendations.

Let me conclude by noting that I am sympathetic to the concern that IR academics should think about the big picture as well as smaller questions, the forest of grand strategy as well as the trees of foreign policy tools. IR academics have the potential to make real contributions to big-picture debates, to think hard about the essence of grand strategy by assembling a framework that effectively integrates foreign policy means and ends. The IR subfield’s integration of political economy and security, and its capacity to think about structures as well as units, make it especially well positioned to consider these broad questions. The ability of IR academics to contribute to contemporary foreign policy debates is one of many reasons why political science should retain the subfield of IR and resist the temptation to replace the traditional empirical subfields of IR, comparative, and American with new subfields of conflict, political economy, behavior, and institutions.

Like good carpenters, foreign-policymakers need to know their tools. Rigorous IR research is the only way to evaluate them effectively.



Dr. Dan Reiter is the Samuel Candler Dobbs Professor of Political Science at Emory University. His books include Crucible of Beliefs: Learning, Alliances, and World Wars (Cornell, 1996), Democracies at War, coauthored with Allan C. Stam (Princeton, 2002), and How Wars End (Princeton, 2009).



Photo credit: U.S. Naval War College