Was There a Nuclear Revolution? Strategy, Grand Strategy, and the Ultimate Weapon


Editor’s Note: This is the eighth installment of “The Brush Pass,” a column by Joshua Rovner (@joshrovner1) on intelligence, strategy, and statecraft.

In 1946, the dean of American strategy argued that nuclear weapons were too powerful to use. Vastly more lethal than all previous arms, they threatened destruction on a scale so grotesque that it overwhelmed any conceivable policy goal. The danger of escalation also meant that even conventional war could lead to calamity. The upshot was that U.S. leaders had to fundamentally rethink the relationship between force and politics. “Thus far the chief purpose of our military establishment has been to win wars,” Bernard Brodie concluded. “From now on its chief purpose must be to avert them.”

In 1955, the president of the United States argued the opposite. At the height of a crisis with China over disputed offshore islands, Dwight Eisenhower suggested that nuclear weapons could be used “like a bullet or anything else.” Human beings had always invented new and more lethal weapons, and states had always figured out ways of using them. Nuclear weapons were no different. In fact, incorporating nuclear operations into warfighting doctrine was necessary for deterrence as well: No one would fear nuclear weapons if America were unwilling to use them.

The Brodie-Eisenhower debate never ended. Brodie’s intellectual descendants developed his key insight about the nuclear revolution, explaining why and how nuclear weapons altered traditional ideas of strategy and statecraft. They also urged policymakers to disabuse themselves of the idea that nuclear war was winnable in any meaningful sense. To Brodie’s followers, the fantasy of nuclear victory would lead to atrocious waste at best, and a terrible tragedy at worst. The theory of the nuclear revolution influenced scholarship on issues ranging from deterrence and the offense-defense balance to crisis stability and the problem of misperception.

But U.S. leaders never embraced the revolution. New scholarship on the Cold War reveals that their behavior rarely lived up to the theory’s predictions. Instead of viewing nuclear weapons solely as tools of deterrence, leaders invested in technologies to reduce the expected cost of war. More accurate warheads might be able to target enemy forces without threatening enemy cities. Better intelligence and surveillance might allow the United States to take out enemy nuclear forces at the outset of any war. And as warfighting became possible, deterrent threats would become more credible, improving everyone’s security.

The theory of the nuclear revolution also failed to explain leaders’ fear of proliferation. If nuclear weapons were great for deterrence but lousy for battle, then Washington should have been sanguine as new countries went nuclear. It might even have been optimistic, since proliferation would, under the theory, lead countries to become cautious. Instead, U.S. leaders worried that the spread of nuclear weapons would spin out of control, and they spent decades trying to prevent it.

The dispute over the nuclear revolution thesis has returned with a vengeance in today’s debate over the U.S. arsenal. The Trump administration’s Nuclear Posture Review advocates for low-yield warheads so that the United States can control escalation during war. This, the document suggests, will deter enemies from fighting in the first place. Critics accuse the administration of indulging in fantasies about its ability to manage enemy behavior in the midst of a shooting war. The targets of a U.S. attack would not be able to discriminate between low-yield and high-yield weapons, and they would have strong incentives to assume the worst. Supporters of the review argue that these concerns are overblown, and that the United States has successfully incorporated limited nuclear options in the past.

Does all this mean the nuclear revolution never happened? Was it just an academic construct, interesting but with no bearing on actual policy decisions? Did Cold War scholars accept Brodie’s ideas too easily, even though policymakers did not? And does that mean we should discount their arguments when considering present-day nuclear debates?

The answer is mixed. The theory doesn’t pass the historical test at the level of grand strategy, which is a state’s theory of security. If the nuclear revolution affected grand strategy, the United States should have settled for a small arsenal for the sole purpose of deterrence. It would never have sought to integrate nuclear and conventional forces, because nuclear weapons were fundamentally different in that they could never be used. U.S. leaders should have recognized that defenses against nuclear attack were futile, and avoided pouring time and money into such efforts. And they should have managed the process of proliferation so that states, great and regional powers alike, enjoyed the security benefit of a reliable second-strike capability. None of these things happened.

The theory fares better at the level of strategy, which is a state’s wartime theory of victory. While nuclear weapons haven’t changed everything, one critical fact remains: No state has used the weapons since 1945. This is unusual. States have been perfectly willing to use other novel military technologies on the battlefield, suggesting that there is something peculiar about these weapons that changes how leaders think about them. Either leaders are less confident that they can use nuclear weapons to achieve policy goals, or they are too frightened of the possible effects to try. Political leaders blanched when confronted with operational plans for nuclear use, apparently out of genuine shock at the devastation these plans would entail. And in crises they stepped back from the brink, over and over, regardless of their personal characteristics or the nature of the regimes they led. Leaders were not willing to take the kind of risks with nuclear weapons that they took with conventional military forces — precisely what the nuclear revolution thesis predicts.

The strategy-grand strategy distinction helps us understand the impact of nuclear weapons in the Cold War, and it provides a new way to evaluate the theory of the nuclear revolution. Nuclear weapons were perhaps less revolutionary than Brodie and others thought, though they still inspired a special kind of restraint among leaders standing at the brink. The distinction also sheds light on the current debate over U.S. nuclear forces. Nuclear hawks argue that policy should not be constrained by a theory that fails at the level of grand strategy. From their perspective, the arsenal provides a number of important benefits to U.S. foreign policy beyond its utility in wartime. But for more cautious critics, the strategic level matters most. If there is no rational argument for using nuclear weapons in anger, then there is little to be gained from posturing with them for other purposes. The risks of wartime catastrophe far outweigh both the marginal peacetime benefits of a larger force and the cleverest ideas about how to use it.

Joshua Rovner is Associate Professor in the School of International Service at American University.

Image: National Archives