Whither Skynet? An American “Dead Hand” Should Remain a Dead Issue

Luke O’Brien

The United States faces nuclear-armed adversaries with ever-modernizing arsenals. Russia continues to field new nuclear forces, many of which seem exotic and threatening. This comes as the United States contemplates the dilemma of maintaining its current nuclear posture while it modernizes each leg of the triad, a combined effort projected to cost approximately $1 trillion over the next three decades. And all of this is happening at the beginning of what might be a renaissance in artificial intelligence.

Given technological changes, both in command and control systems and in threat weapon systems, Adam Lowther and Curtis McGiffin argue in War on the Rocks that the United States should consider fielding an autonomous nuclear command and control system. This would, in their view, guarantee nuclear retaliation in the event of a sneak attack, thus preserving strategic deterrence.

This is, of course, a bold solution. Indeed, it is too bold, and it rests on a poor understanding of current trends, an unrealistic assessment of nuclear threats, and an overly optimistic view of the utility of future automated systems. Rather than being the answer to future threats to deterrence, an autonomous nuclear command and control system is a recipe for disaster. To understand why, we first need to consider the role of automation in future strategic warning.

Alexa, Start a Target Packet

It’s worth acknowledging something important: Artificial intelligence will, in some way, shape, or form, be central to how the United States commands and manages its forces. It will be essential because it is already a key part of the way America manages large chunks of its military forces, not just its nuclear ones. Artificial intelligence plays a key role in the management and analysis of intelligence and is increasingly becoming an essential part of decision-making from the tip of the spear to the White House.

Why?

The scale of information being collected and processed by the various parts of the military and intelligence community is like nothing ever seen before. As an illustrative example, as far back as 2014 the U.S. Navy was found to be using as little as 5 percent of all the data its sensor platforms collected. And of the data analysts were able to pull up, only about 20 percent was relevant to the task at hand. The amount of data collected on a routine basis is staggering, and the ability of U.S. military and intelligence organizations to exploit that data has always been outstripped by the rate of collection.

Yet, while the scale of the problem is unprecedented — along with the technology becoming available to cope with it — the problem itself is old. Consider the Cunningham Report, published in 1966 by the CIA’s inspector general: It assessed that U.S. intelligence analysts were being overwhelmed by the sheer torrent of information being collected. As the report noted, “we have come to realize that [analysts] are not the driving force behind the flow of information. Rather, the real push comes from the collectors themselves, particularly the operations of large, indiscriminating technical collection systems.”

During a nuclear crisis, this can have potentially catastrophic consequences for decision-makers. Look no further than the Aleksandrovsk. In 1962, this Soviet cargo vessel was photographed loading cargo at a major Soviet naval base on the Kola Peninsula, one known to be a nuclear warhead storage site and one that rarely saw civilian cargo vessels berthed at its docks. Later, that same vessel was photographed unloading cargo in Cuba just before the Cuban Missile Crisis. Together, these photographs provided strong evidence that the Soviet Union had just introduced offensive missile systems and nuclear warheads into Cuba.

So, what was the problem? Analysts did not assemble these pieces of intelligence until it was too late: January 1963, and only after CIA analysts noted the vessel re-loading warhead transport vehicles for the return trip to the Soviet Union. The CIA held the evidence needed to determine that the Soviet Union could strike the U.S. mainland as well as any invading U.S. troops. Yet this critical information sat unexploited while U.S. military leaders were planning a major bombing campaign and invasion of Cuba.

Imagine, then, if the United States could create automated systems to help categorize and prioritize information like this and connect the dots for analysts almost instantaneously. This would prove invaluable to crisis stability, which is one key reason it remains a priority for the intelligence community. The intelligence community’s strategic and operational warning capabilities exist as much to make decision-makers comfortable that they’re not under (or not vulnerable to) attack as they do to warn them when an attack is underway. That comfort is built and maintained by providing useful and timely insight into world events and useful early warning for lesser crises. Intelligence automation can make it easier for the intelligence community to accomplish this, if only by automating the grunt work that consumes untold analyst hours.

But is applying this same level of delegation necessary for the actual command, control, and employment of nuclear weapons? Lowther and McGiffin certainly argue so, but to test whether they’re correct, we should look at the actual threats facing the U.S. nuclear enterprise.

Meet the New Missile, Same as the Old Missile

Lowther and McGiffin assert in their piece that recent technological advances by the Russians have placed excessive strain on the nuclear command and control apparatus, posing the risk that U.S. land-based bombers and intercontinental ballistic missiles (ICBMs) could be destroyed following a successful decapitation strike on the United States. This risk, they argue, necessitates placing U.S. nuclear forces under the control of an automated system and away from the squishy meatbags currently commanding and controlling the nuclear enterprise. Has the strategic picture really changed that much?

The answer is an obvious and emphatic “no.” Let’s consider the new Russian systems being fielded and decide if they will really change the strategic balance.

Russia is fielding a hypersonic glide vehicle, the Avangard, which will allow a nuclear warhead to ride to its target on an erratic and unpredictable flight path. This would theoretically make it difficult to counter with existing midcourse missile defenses, given its lower and more evasive trajectory. What often goes unsaid, however, is that these systems are actually slower than a traditional ICBM on a long flight path, thus increasing the amount of warning time for decision-makers. Yes, it is harder to track the flight, and thus the potential point of impact, of this weapon, which complicates things like providing timely civil defense warnings. But if the concern is decision-makers being attacked before they can relocate or issue orders, then a boost-glide vehicle actually reduces pressure on this process. Bombers have more time to flush from their airfields, and the president has more time to consider relocating, and possibly even to order retaliation. The president may not know exactly where a given weapon is going to land, but he or she would likely know just how many are on their way, which would allow time to gauge the scale of the attack and respond accordingly.

Russia is also arming itself with the Poseidon — a high-speed, deep-diving, high-yield, nuclear-powered torpedo with intercontinental range that can evade missile defenses and detonate upon arriving off coastal targets such as cities or ports. Of course, this target set should provide a hint as to the limits of the weapon. It would be an impressive feat indeed for it to make its way to Offutt Air Force Base, Nebraska, and — as anyone who has sailed around the Chesapeake can attest — it would be just as unlikely to get close enough to the District of Columbia to threaten leadership targets there. It could also target nuclear submarine bases, but there are always ballistic missile submarines waiting at sea (more on these later).

Russia is also developing a nuclear-powered cruise missile, the Burevestnik. Like all cruise missiles, this system will have a launch plume small enough to make detecting its launch difficult. Its extreme range will allow it to fly an evasive flight path that might take it to the United States from any number of directions not easily monitored by air defense radar. Leaving aside, however, the obvious technical and operational issues with making a system like this work (the United States, after all, abandoned its own attempt for a reason), the detection challenges it poses are not meaningfully different from those of other cruise missiles.

This brings us to submarine-launched cruise missiles that could be fired off the coast of the United States and target the capital. Like all cruise missiles, these are difficult to detect by radar, can fly to their targets along an evasive route, and allow for precise targeting. This is, of course, a problem. But it is one that has existed for decades. It was one of the reasons for the ill-fated JLENS program, and it explains why there are missile interceptors located around Washington. One could argue that this shows Washington was, is, and will remain vulnerable to cruise missile strikes. Yet it is difficult to see the threat as having grown in scale since the Cold War, when potentially dozens of nuclear-powered attack submarines could have been used to deliver missiles to the capital. Even then, steps such as dispersing leadership across multiple fixed and mobile locations provided the degree of redundancy needed to continue operations.

Ballistic missiles follow relatively predictable flight paths, which makes forecasting where they will fly and where they will hit a relatively simple physics problem. That predictability, however, also makes them theoretically vulnerable to missile defenses. The unpredictable path a cruise missile takes to its target should point to these systems’ probable role: penetrating U.S. missile defenses. They are designed to counter a persistent Russian fear: that U.S. conventional precision-strike capabilities, when paired with missile defenses, could give the United States a first-strike advantage.

Lowther and McGiffin insist that non-Russian analysts cannot possibly grapple with Russian views and intentions. This is delusory. Leaving aside their abuse of the case of Crimea in 2014, surely these two authors, like most nuclear analysts, noticed that Russian President Vladimir Putin has not been bashful about discussing how these systems are intended to penetrate missile defenses.

The thinking on display in their article, in short, isn’t likely to give anyone an informed and measured assessment of Russian capabilities. It’s the kind of thinking that risks another Team B debacle. And while that affair was a lot of things, it wasn’t exactly a model for useful policy development.

Like Bing, But for Armageddon

Let’s assume that none of the points above hold and that the strategic balance has appreciably changed. Does this make an automated nuclear command and control system valuable? Can an automated system solve America’s deterrence problems? If one considers the necessary requirements and measures them against the potential drawbacks, the answer is — yet again — an emphatic and resounding “no.”

Escalation control requires many sources of data to understand how to reach a desired end-state. If one wishes to approximate or supersede human decision-making, one must also capture and successfully manage the flow of intelligence information, status reports from one’s own forces, and an understanding of one’s own domestic situation (e.g., how much damage one’s civilian population has suffered). It is a deceptively complex endeavor. Herman Kahn once described decision-makers seeking to control escalation as engaging in so-called “dead reckoning,” whereby those leaders — despite “being cut off from all information external to [their] own organization” — work off the information they gained prior to the start of the crisis and whatever limited feedback may come as events progress. In this construct, a decision-maker has only a few inputs (speed, compass direction, et cetera) by which to work his or her way to a desired goal.

The problem, however, is that the information that comes in prior to escalation, and most definitely the information that will flow in during a crisis, is likely to be compromised. Prior to a crisis, an adversary has every incentive to flood intelligence collection sensors with false or misleading data, if only to buy itself local freedom of action. And though humans are used to a degree of ambiguity in collected data, it is questionable whether a system built on more advanced automated methods could handle anything similar, given the numerous examples of automated systems struggling even with unambiguous data.

Worse, during an actual crisis and into the beginnings of a major conflict, the collection platforms that gather the data are often extremely vulnerable, in part because many of them serve conventional and nuclear missions alike, a problem commonly referred to as “entanglement.” The communications networks that move that data are equally vulnerable, especially as commercial providers take on increasing levels of communications support. If a given automated system struggles with ambiguous data, how will it handle losing data feeds altogether? How a human would handle such a crisis is an open question. How an automated machine-learning system would handle a similar problem is another thing entirely.

Consider Lowther and McGiffin’s example of a supercomputer currently reigning as the world’s Go champion. While that game is notoriously complex, there is no ambiguity as to what color piece has been placed in what location on the playing board. Intentions may be obscured, but the physical reality of the board certainly is not. As complex and interconnected communications systems, critical collection platforms, and analytical centers are attacked during a conventional conflict, how can this autonomous command and control system be sure that what it is “seeing” is reality? Humans alone would struggle with this. There is no indication a computer would do any better, and it could quite possibly do far worse. And if the system were built with enough risk tolerance to shrug off these incidents, how would it address the problem it is designed to solve any better than the seemingly inferior humans it is intended to replace?

Of course, this could also be a tempest in a teacup. Moore’s Law, treated as such a constant in our contemporary discourse, is less a firm law than a rough guideline. There is no guarantee that AI will ever be able to manage the level of control the United States would need to hand it. The answer is clear: AI should be a decision aid, not the decision-maker itself.

Control, Alt, Kaboom

As an IBM training document from the 1970s put it, “a computer can never be held accountable, therefore a computer must never make a management decision.” Deciding on the deaths of millions of people is, and must remain, a human role. As the United States modernizes its nuclear force, American leaders in and out of uniform should keep this in mind. No matter the threats, we as humans are attempting to achieve human aims. It would be inappropriate, unnecessary, and dangerous to delegate the intense responsibilities this brings to a machine, no matter what comes.

There are many more intelligent ways to ensure that decision-makers can sufficiently command their forces during crises, from mobile command posts to redundant infrastructure. One could even simply send ballistic missile submarines out with a Letter of Last Resort. Or one could restore the mid-air refueling capability to the next Air Force One. The president can’t remain aloft forever, but buying extra hours of flight time would give him or her a greater window in which to determine a way ahead and to command and control U.S. forces from a comparatively secure location. All of these steps are affordable and well understood, and none of them risks accidental nuclear escalation due to a misplaced line of code.

Americans love technical solutions to complex human problems. If you don’t believe it, spend about five minutes perusing the social media feeds of various Silicon Valley tech titans and you’ll be convinced. Nuclear deterrence is in many ways the most powerful embodiment of this urge. The United States wanted to secure its future in a very complex world, so it threw trillions of dollars at a technical solution. Yet, as the discussion above shows, these problems can be a great deal more complex than technology alone can solve. As the United States reorients itself toward great-power competition and modernizes its strategic forces, American leaders would do well to remember that, lest they invite disaster.

A computer should never be permitted to make a strategic decision. American lives and the lives of millions of our fellow human beings depend on it.

Luke O’Brien is a contributing editor at War on the Rocks and a weapons of mass destruction analyst and historian. He is a member of the mid-career cadre at the Center for Strategic and International Studies’ Project on Nuclear Issues. He was previously a National Defense University countering weapons of mass destruction graduate fellow, where his research focused on information management and nuclear escalation control. He is also a U.S. Army reservist. The views expressed in this article are solely those of the author and do not represent those of the U.S. Army, the Department of Defense, or any part of the U.S. government.

Image: U.S. Air Forces Europe