Artificial Intelligence Meets Bureaucratic Politics

As the Joint Chiefs of Staff gathered in Key West, Florida, for a private meeting in March 1948, the first U.S. secretary of defense, James Forrestal, posed a simple question: “Who will do what with what?” The Air Force and Navy tussled over strategic nuclear bombers, while the Army and Marine Corps bickered over limitations to their respective end strengths.

The resulting Key West agreement defined the primary functions of the services for the Cold War, but it didn’t end the debate — far from it. Interservice rivalries flared over atomic weapons and the “missile gap.” Turf battles wasted critical time and money and introduced new dangers in command and control. Misaligned incentives and failures to appreciate the limitations of strategic competitors were commonplace during the Cold War, and they are no less common today.

To the extent that new technologies trigger old bureaucratic rivalries, defense planners will once again confront the simple question: Who will do what with what?

Artificial intelligence, or AI, presents a host of strategic considerations for the United States. One of the most pressing is summed up in a series of recommendations last year from the Defense Innovation Board. Here’s the line that should catch your eye: The Department of Defense “does not have an innovation problem; it has an innovation adoption problem.”

Some argue that more people, better software, and greater resources will solve the innovation adoption problem for AI. That may prove correct. Yet the integration of new technologies depends on something more fundamental: bureaucratic politics.

AI on Autopilot or Too Many Pilots?

Bureaucracies protect their turf, fight for larger budgets, and resist change. They pay lip service to information-sharing while defending their autonomy. Organizations can run on autopilot; they can also suffer from too many pilots. This is what political scientist Graham Allison had in mind when he observed that national interests reflect “compromise, conflict, and confusion of officials with diverse interests and unequal influence.”

The construction, funding, and deployment of aircraft carriers, for example, require more than the ability to write big checks and mobilize significant resources. Carriers also necessitate joint operational concepts and close cooperation across the naval air and sea domains. This task falls squarely on the shoulders of large bureaucracies and military services. As political scientist Michael Horowitz shows in his study of the diffusion of major military innovations, when bureaucracies are wedded to a particular task and resist experimentation, the adoption of new technologies becomes more difficult.

Effective use of new technologies requires different operational concepts. When assessing the potential success of the United States or China in AI, what matters isn’t simply who has more talent, more data, or more computing power. We also need to consider which countries have the organizational capacity to adopt and employ AI, and to what ends.

When and Why Bureaucratic Politics Matters

The study of bureaucratic politics gained currency in the 1970s following the publication of Graham Allison’s Essence of Decision and Morton Halperin’s Bureaucratic Politics and Foreign Policy. However, research on bureaucratic politics suffered from what political scientists call “under-specification.” As scholars Jonathan Bendor and Thomas Hammond observe, U.S.-focused theories of bureaucratic politics in the 1970s lacked a model of hierarchy explaining how the configuration of domestic political power affects decision-making. These theories also rested on a number of assumptions that are open to debate: that the president of the United States and cabinet-level appointees pursue divergent goals, that conflict reflects the absence of consensus on goals, that presidents feel compelled to bargain with subordinates, that other executive branch officials enjoy access to more or higher-quality information, and that policymaking is characterized by the push and pull of bargaining, coalition management, logrolling, and lowest-common-denominator outcomes.

Under certain conditions, these assumptions hold true. The trick is knowing not just whether bureaucratic politics matters, but when and how it matters. Four conditions are relevant for understanding the role that bureaucracies will play in shaping the adoption and use of AI among great powers.

First, bureaucratic politics matters less at moments of crisis. During a crisis in the United States, when events are fluid and time is of the essence, presidential authority is greatest. The public rallies around the flag and executive branch agencies tend to defer to the commander in chief. Bureaucratic politics is more likely to assert itself during periods of relative stability, when presidential authority is less concentrated and public attention is more fractured and dispersed.

Second, bureaucratic politics is likely to matter more during the formulation and implementation of policy. Executives tend to enjoy greater influence during the agenda-setting and policy-selection stages of the decision-making process. In theory, the president sets the agenda by giving broad direction to cabinet officials; the interagency process develops policy options; the president selects among the options, and then the responsible agencies implement that decision. In practice, bureaucracies can influence the outcome by framing or ordering the policy options to support their agendas, and by speeding or slow-walking implementation.

Third, bureaucratic politics is more likely to dominate the policy process when the individuals making those decisions are career employees who have come up through the ranks rather than political appointees focused on the policies of the current administration. Career employees tend to hold a longer-term view of policymaking.

Finally, bureaucratic politics matters less on issues that involve the use of force, which engage the state’s highest interests and therefore the time and attention of chief executives, and more on issues that require the allocation of scarce resources among competing ends.

How will these conditions influence the development of AI today?

Start with the fact that the United States and China are not making decisions about AI in the context of an immediate crisis. It’s true that China’s leaders approach the development of AI with a sense of urgency in the economic realm. China’s military is also undertaking reforms to improve its procurement systems. Still, the United States, European nations, Russia, and China are only beginning to grasp AI’s transformative potential. Leaders around the world have determined that AI is a priority and set the agenda accordingly. Already, 20 countries have put forward AI strategies of varying ambition and scope. We are now at the stage of policy formulation and implementation, which means that bureaucracies will have greater say in the shape of AI’s development.

Bureaucratic politics will matter for another reason: Most leaders have little understanding of or experience with AI and machine learning. This means that career civil servants and experts in bureaucracies are more likely to take center stage in answering crucial questions about how the technology will be developed, to what ends, with what safety protocols, and with what resources. Indeed, while the United States and China are exploring militarily relevant AI applications, the foremost challenge is determining how scarce resources will be allocated to AI research and development at a time of growing budget deficits, mounting debt, and competing governmental priorities.

Too often, however, we assume that states are centralized monoliths and overlook the friction that comes from large organizations and military services.

AI Turf Battles

How could the friction of bureaucratic politics impede the successful use of AI? Consider a few examples already visible in the United States and China. Successful military implementation of AI will require changes to recruitment and retention policies, training regimes, operational doctrine, and force structures. It will upgrade the status of some occupations and downgrade the status of others. Horowitz draws a distinction between “sustaining” and “disruptive” technologies. Sustaining technologies, such as the automated processing of surveillance footage, are less threatening to the status quo than disruptive technologies, such as autonomous battle management systems. He cites the U.S. Navy’s decision not to transition the X-47B drone, a potentially disruptive technology, to a program of record as a prime example of bureaucratic politics at work.

The Defense Department is grappling with the rising personnel costs of the all-volunteer force, new demands for critical skills, and a rapidly changing global security environment. In assessing future roles and missions, the U.S. military will need to adopt and integrate more uninhabited and autonomous systems and experiment with innovative methods of human-machine teaming. The National Defense Strategy rightly emphasizes that “Maintaining the Department’s technological advantage will require changes to industry culture, investment sources, and protection across the National Security Innovation Base.” And yet, the Defense Department plans to invest less than $1 billion in AI as part of its 2020 budget request.

The innovation adoption problem will become more acute in light of structural limitations on America’s ability to mobilize private-sector innovation for AI capability development. This matters because the majority of AI research and funding takes place in academic labs or the private sector.

Not only that, the U.S. government acquisition process is in need of reform. As one study notes, despite some progress in recent years, it takes the Defense Department more than seven years on average to go from concept to initial operating capability. Lack of technical expertise, the inherent challenges of adapting test and evaluation practices to complex environments, and information bottlenecks are already causing delays in AI research and development, adoption, and responsible deployment.

Bureaucratic politics also explains some of the challenges that China will face in adopting AI. If you’re wondering why China’s military focused on developing a mechanized land force at the expense of other capabilities relevant to the information age, a good place to start is the entrenched interests and corruption within the highest ranks of the People’s Liberation Army. China is downsizing and demobilizing personnel, and President Xi Jinping established the Strategic Support Force to apply AI to cyber and space missions. Reforms at the Academy of Military Science reflect China’s commitment to institutional transformation — although important barriers remain.

One notable barrier to Chinese military success with AI is China’s state-owned defense industrial base. The Chinese government relies on inefficient state-owned enterprises for much of its procurement needs. Despite reforms to the defense industrial base, these state-owned enterprises operate as large bureaucracies, with their own interests to serve, their own turf to protect, and their own budgets to defend. Compared with the United States, which has a long history of public-private collaboration, China sees relatively little flow of technology between its commercial sector and its defense industrial base. Xi has therefore created a high-level commission on military-civil fusion to push through reforms. The commission is still in its early days, but the track record so far indicates that the Chinese leadership is committed to breaking through obstacles to create more collaboration between the military and the private sector.

Much has been made of China’s data advantage in AI, thanks to its more than 800 million internet users. It’s doubtful, however, that China derives a comparative military advantage from vast troves of consumer data. AI is highly context-specific: Algorithms trained on data from Chinese consumers can yield accurate predictions about the types of behavior that generate that data, such as future purchasing habits or internet browsing. But those same algorithms will not necessarily predict outcomes in military applications of AI, such as autonomous vehicles, satellite imagery analysis, or intelligence, surveillance, and reconnaissance capabilities.

Although China holds more consumer data, the United States benefits from a uniquely broad array of military sensors and deployed platforms that generate large quantities of militarily relevant data across many different contexts.

What’s more, Beijing faces obstacles to accessing and integrating consumer data. The ubiquitous Chinese platform WeChat, for example, holds considerable data. And yet, bureaucratic politics and organizational dynamics have limited Beijing’s ability to integrate the myriad data streams generated by China’s internet users and mobile payment schemes.

In a world where currency is data and data is power, it’s little wonder that bureaucracies in China bargain hard, defend their turf, and withhold potentially incriminating information on lower-level officials from higher-level authorities. Deliberate corruption of data could introduce bias. Chinese leaders have long railed against so-called “information islands,” but they have had little success in centralizing and harmonizing various important data streams.

In his comprehensive analysis of the sources of change in Chinese military strategy, scholar Taylor Fravel argues that the degree of party unity is central to whether and how change occurs in response to shifts in the conduct of warfare. The failure of China’s military to complete a full revision of a new generation of doctrine since 1999 is indicative of the role of “inter-service rivalry or bureaucratic and cultural impediments to change.” This picture is instructive, given the centralization of power in China, but it is also incomplete. The push and pull of U.S. bureaucratic politics is familiar; the United States should not treat the Chinese state as a monolith, and U.S. analysts would do well to study China’s bureaucratic politics in addition to their own.

While China is a one-party state, its system of authoritarian governance relies on dozens of ministries or departments, along with a plethora of state-owned enterprises and research organizations, each with its own interests and perspectives. Instead of assuming that the Chinese government enjoys uniform access to data and can readily leverage that data to advance national power, observers should consider the structural elements of the Chinese system as a whole, including the pressure that’s brought to bear on corporate leaders, the reach of its Central Military-Civil Fusion Development Commission, the degree of centralization in its command and control systems, and the evolution of declaratory statements in core policy documents. After all, it may not be the nation with the best technology that surges ahead, but the one with the most efficient and agile bureaucracy.

AI’s future trajectory is uncertain. What is certain is that bureaucratic politics will continue to play an important role in shaping that trajectory. From where the author sits, it seems reasonable to offer this advice: know thyself, know thy competitor, know thy org chart — and not necessarily in that order.

Andrew Imbrie is a senior fellow at the Center for Security and Emerging Technology at Georgetown University. He served previously on the policy planning staff at the U.S. Department of State and as a professional staff member on the Senate Foreign Relations Committee.

Image: Pexels, adapted