A Different Use for Artificial Intelligence in Nuclear Weapons Command and Control

Artificial intelligence (AI) is expected to change the way the United States and other nations operate their nuclear command and control. For instance, a recent RAND report that surveyed AI and nuclear security experts notes that “AI is expected to become more widely used in aids to decisionmaking” in command-and-control platforms. The report also raised the possibility that narrow AI could in the future act as a “trusted advisor” in nuclear command and control. In this article, I examine the advice such an advisor might provide to decision-makers in a nuclear crisis, focusing on the possibility that an algorithm could offer compelling evidence that an incoming nuclear alert is a false alarm, thereby counseling restraint rather than confrontation.

Decision-makers who stand guard at the various levels of the nuclear weapons chain of command face two different forms of stress. The first is information overload, a shortage of time, and chaos in the moment of a crisis. The second is more general, emerging from moral tradeoffs and the fear of causing loss of life on an immense scale. AI and big data analysis techniques have already been applied to the first kind of stress. The current U.S. nuclear early warning system employs a “dual phenomenology” mechanism designed to detect a threat quickly and to streamline the information involved in the decision-making process. The system uses advanced satellites and radars to confirm and track an enemy missile almost immediately after launch. In an actual nuclear attack, the various military and political personnel in the chain of command would be informed progressively as the threat is analyzed, until finally the president is notified. This structure substantially reduces information overload and chaos for decision-makers in a crisis.

However, as Richard Garwin writes, the system also reduces the role of the decision-maker “simply to endorse the claim of the sensors and the communication systems that a massive raid is indeed in progress.” While the advanced technologies and data-processing techniques used in the early warning system reduce the occurrence of false alerts, they do not eliminate the chance of one occurring. To address decision-makers’ fear of inadvertently starting a nuclear war, future applications of AI to nuclear command and control should aspire to create an algorithm that could argue, in the face of overwhelming fear of an impending attack, that a nuclear launch is not happening. Such an algorithm could verify the authenticity of an alert from diverse perspectives, in addition to a purely technological analysis. Incorporating this element into the nuclear warning process could help address the second form of stress, reassuring decision-makers that they are sanctioning a valid and justified course of action.

Command and Control During the Cold War: The Importance of Big Data

In the world of nuclear command and control, the pursuit of speed and the analysis of big data are old news. In the early 1950s, before the advent of nuclear intercontinental ballistic missiles (ICBMs), the United States began developing the SAGE (Semi-Automatic Ground Environment) supercomputer. SAGE, which was built at approximately three times the cost of the Manhattan Project, was the quintessential big data processing machine. It used the fastest and most expensive computers of its time – the Whirlwind II (AN/FSQ-7) IBM mainframe computers – at each of 24 command centers to receive, sort, and process data from the many radars and sensors dedicated to identifying incoming Soviet bombers. The SAGE supercomputer then coordinated U.S. and Canadian aircraft and missiles to intercept those bombers. Its goal was to supplement “the fallible, comparatively slow-reacting mind and hand of man” in anticipating and defending against a nuclear bomber campaign.

The proliferation of ICBMs in the 1960s, however, made the SAGE command centers “extraordinarily vulnerable.” The U.S. Air Force concluded that Soviet ICBMs could destroy “the SAGE system long before the first of their bombers crossed the Arctic Circle.” In 1966, speaking at a congressional hearing, Secretary of Defense Robert McNamara argued that “the elaborate defenses which we erected during the 1960s no longer retain their original importance. Today with no defense against the major threat, Soviet ICBMs, our anti-bomber defense alone would contribute very little…” The SAGE command centers were shut down.


McNamara formed a National Command and Control Task Force, informally referred to as the Partridge Commission, to study the problem of nuclear command and control in the early days of the ICBM era. The commission concluded “that the capabilities of US [nuclear] weapon systems had outstripped the ability to command and control them” using a decentralized military command and control structure. The commission recommended streamlining and centralizing command and control with much stronger civilian oversight. The commission also advocated the formation of the modern-day North American Aerospace Defense Command, better known as NORAD, with its advanced computer and communication systems, early warning satellites, and forward-placed radars designed to track any missile launch on the planet before it could reach the continental United States.

NORAD and its computer and communication systems were designed to resolve the stress of information overload by compartmentalizing and automating the process of evaluating a threat. Depending on its particular trajectory, an enemy nuclear missile may take anywhere from 35 minutes to as few as eight minutes to reach its target. When an enemy missile is launched, it is picked up by early warning satellite sensors within seconds. The satellites track the missile while its engines are still burning. Once the missile comes over the horizon, forward-deployed radars independently track it. The data from the two systems is then assessed by NORAD in the context of the prevailing geostrategic intelligence, and the assessment is passed up the military and political chain of command. This sequence of steps ensures that senior decision-makers are not overwhelmed with information. Even so, by the time decision-makers are notified, the decision to retaliate against an apparent attack “must be made in minutes.” Future advances in AI might add only incremental improvements in the speed and quality of information processing to this already advanced nuclear early warning system.
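To make the dual-phenomenology logic concrete, here is a minimal sketch of how an alert might be escalated only when two independent sensor systems agree. Every name, threshold, and message in it is an invented illustration of the general idea, not a description of NORAD’s actual software.

```python
# Minimal sketch of a dual-phenomenology check: escalate an alert only when
# two independent sensor systems (satellites and radars) both confirm a launch.
# All names, thresholds, and messages are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class SensorReport:
    source: str            # e.g., "satellite" or "radar"
    launch_detected: bool  # did this system register an apparent launch?
    confidence: float      # 0.0 (no confidence) to 1.0 (certain)


def assess_alert(satellite: SensorReport, radar: SensorReport,
                 threshold: float = 0.9) -> str:
    """Escalate only if both independent systems confirm above the threshold."""
    both_confirm = (
        satellite.launch_detected and radar.launch_detected
        and satellite.confidence >= threshold
        and radar.confidence >= threshold
    )
    if both_confirm:
        return "CONFIRMED: pass the assessment up the chain of command"
    if satellite.launch_detected or radar.launch_detected:
        return "UNCONFIRMED: single-source detection, continue evaluating"
    return "NO THREAT: nothing to report"


if __name__ == "__main__":
    sat = SensorReport("satellite", launch_detected=True, confidence=0.95)
    rad = SensorReport("radar", launch_detected=False, confidence=0.20)
    print(assess_alert(sat, rad))  # -> UNCONFIRMED: single-source detection...
```

The point of the structure is that no single sensor’s claim, however confident, is enough on its own to trigger escalation.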

Using AI to Prevent Inadvertent Nuclear War

These advances in nuclear command and control still do not directly address the second form of stress, which emerges from the fear of a nuclear war and the accompanying moral tradeoffs. How can AI mitigate this problem? History reminds us that technological sophistication cannot be relied upon to avert accidental nuclear confrontations. Rather, these confrontations have been prevented by individuals who, despite having state-of-the-art technology at their disposal, offered alternative explanations for a nuclear warning alert. Operating under the most demanding conditions, they insisted on a “gut feeling” that the evidence of an impending attack was misleading. They chose to disregard established protocol, fearing that a wrong choice would lead to accidental nuclear war.

Consider, for example, a declassified President’s Foreign Intelligence Advisory Board report investigating the decision by Leonard Perroots, a U.S. Air Force lieutenant general, not to respond to incoming nuclear alerts. The incident occurred in 1983, when NATO was conducting a large simulated nuclear war exercise code-named Able Archer. The report notes that Perroots’ “recommendation, made in ignorance, not to raise US readiness in response” was “a fortuitous, if ill-informed, decision given the changed political environment at the time.” The report also states:

the military officers in charge of the Able Archer exercise minimized this risk by doing nothing in the face of evidence that parts of the Soviet armed forces were moving to an unusual level of [nuclear] alert. But these officers acted correctly out of instinct, not informed guidance.

Perroots later complained in 1989, just before retiring as head of the U.S. Defense Intelligence Agency, “that the U.S. intelligence community did not give adequate credence to the possibility that the United States and Soviet Union came unacceptably close to [accidental] nuclear war.”

In the same year, Stanislav Petrov, a commanding officer involved in Soviet nuclear operations, also dismissed a nuclear alert from his country’s early warning system. In the face of data and analysis that confirmed an incoming American missile salvo, Petrov decided the system was wrong. Petrov later said, “that day the satellites told us with the highest degree of certainty these rockets were on the way.” Still, he decided to report the warning as a false alert. His decision was informed by fear; as he put it, he “didn’t want to be the one responsible for starting a third world war.” Later recalling the incident, he said: “I had a funny feeling in my gut. I didn’t want to make a mistake. I made a decision, and that was it. When people start a war, they don’t start it with only five missiles.” Both Perroots and Petrov feared the moral consequences of a nuclear war, particularly one initiated accidentally. They distrusted the data and challenged protocol.

Conclusion

Fred Iklé once remarked, “if any witness should come here and tell you that a totally reliable and safe launch on warning posture can be designed and implemented that man is a fool.” If that is true, how close can AI get us to reliable and safe nuclear command and control? AI-enabled systems may aspire to reduce some of the mechanical and human errors that have occurred in nuclear command and control. Prior instances of false alerts and failures in early warning systems should be used as a training dataset from which an AI algorithm can develop benchmarks to quickly test the accuracy of an early warning alert. The goal of integrating AI into military systems should not be speed and accuracy alone. It should also be to help decision-makers exercise judgment and prudence to prevent inadvertent catastrophes.
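As a purely illustrative sketch of that idea, the toy example below fits a simple classifier to hypothetical features of past alerts and scores a new one against them. The features, numbers, and labels are all invented for the example, and the real historical record of false alarms is far too small and sensitive to train on this way; the sketch shows the shape of the approach (and assumes the scikit-learn library), not a workable system.

```python
# Toy sketch: score a new early-warning alert against benchmarks learned from
# hypothetical past alerts. All features, values, and labels are invented.

from sklearn.linear_model import LogisticRegression

# Hypothetical features per alert:
# [apparent missiles detected, independent sensor confirmations, tension index 0-1]
X_train = [
    [5,   1, 0.3],   # small apparent salvo, single sensor source -> false alarm
    [1,   1, 0.2],   # lone track, low tension                    -> false alarm
    [250, 2, 0.9],   # massive salvo, dual confirmation, crisis   -> plausible attack
    [300, 2, 0.8],
]
y_train = [0, 0, 1, 1]  # 0 = false alarm, 1 = genuine attack

model = LogisticRegression()
model.fit(X_train, y_train)

# A new alert loosely resembling the 1983 incident: five missiles, one source.
new_alert = [[5, 1, 0.4]]
p_attack = model.predict_proba(new_alert)[0][1]
print(f"Estimated probability that this alert reflects a real attack: {p_attack:.2f}")
```

Such a score would not replace the sensor assessment; it would give a decision-maker an independent, historically grounded reason to pause before endorsing it.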


Jaganath Sankaran is an assistant professor at the Lyndon B. Johnson School of Public Affairs at the University of Texas at Austin. He can be reached at jaganath@austin.utexas.edu.


Image: North American Aerospace Defense Command photo