Prove It Before You Use It: Nuclear Retaliation Under Uncertainty
It is 2028, and the United States Space Force’s early warning radar modernization is complete. Technical Sergeant Jack Nichols works at Buckley Space Force Base, operating the systems that detect and assess ballistic missile threats against the United States and Canada. Since arriving at the Colorado base, Nichols has experienced his share of false alarms. But these are no ordinary false alarms; the system Nichols watches provides early warning that the United States is under ballistic missile attack. Alerts of this magnitude would rattle most people, but Nichols relies on an “old school” validation protocol: he evaluates each warning against his sensor’s input settings and raw data output, resolving any concerns.
But today, the warning flashing across his screen is different. Recent modernization efforts introduced next-generation sensors and machine learning–powered tools to manage the increased flow of information. These purported improvements have made the raw data inaccessible to Tech Sgt. Nichols. The system has identified an incoming missile, but he can’t help but wonder: What if this is a mistake? What if the system has been hacked or has malfunctioned? And, just as unsettling, what if the newly implemented algorithm has made a decision based on flawed or biased data?
To some extent, his concerns do not matter. His training dictates that he has less than two minutes to evaluate and report the warning. This expediency ensures the president maintains the option to launch a retaliatory nuclear strike before an adversary’s weapon — if a first-strike weapon is, in fact, inbound — strikes the American homeland. Nichols understands that the president’s decision to retaliate requires balancing the inherent limitations of early warning accuracy against the concern that presidential control may be lost if the warning turns out to be true. But, he wonders, could the pressure from this uncertainty be alleviated if the president could issue a delayed order?
A New Nuclear Era
Russian President Vladimir Putin’s threats to deploy tactical nuclear weapons in Ukraine, as well as North Korea’s provocative ballistic missile testing, have renewed concerns about the possibility of nuclear escalation. Meanwhile, China’s burgeoning submarine-launched deterrent and Iran’s rebuilding of its nuclear program provide additional reasons for concern.
This unease is exacerbated by the advanced offensive capabilities in cyberspace demonstrated by these same actors. U.S. adversaries, such as Russia and China, have targeted critical national infrastructure, including electrical grids and nuclear facilities. Perhaps what is most destabilizing is that these adversaries are incentivized to hide their capabilities until they are ready to be used, so the true extent of the cyber-nuclear threat is unknown.
Given this security environment, the Biden administration continues the push to modernize the nation’s nuclear deterrent. This modernization effort includes investing in the capacity and hardening of the nuclear command, control, and communications architecture. Furthermore, it potentially entails the integration of machine learning systems and other emerging technologies — despite objections from experts writing in these virtual pages — as outlined in the Nuclear Posture Review.
However, as the hypothetical vignette in our introduction illustrates, modernizing equipment and systems may not be enough to achieve the administration’s goals of “non-use and to reduce the risk of a nuclear war.” President Biden — and any future U.S. leader — still retains the “launch-under-attack” option. In this approach, when early warning sensor data indicates a “medium or high confidence” of a threat, the White House is alerted, and the president and their advisors convene. At this emergency conference, the president will be briefed on their options and decide whether to launch nuclear weapons, even if the warning’s legitimacy has not been conclusively determined.
This approach is a remnant of the Cold War. We argue it is inadequate in today’s strategic landscape, given the proliferation of nuclear weapons and cyber capabilities, as well as the technical limitations and human biases associated with automated and machine learning systems. Instead, this administration should break from its predecessors and adopt a “decide-under-attack” posture, shifting the retaliation decision from a time-constrained choice in the fog of war to deliberate action based on evidence of an attack.
Cold War Posture Endures
In the 1970s, the United States was concerned that the Soviet Union could launch a surprise attack using thousands of land-based missiles against then-vulnerable Minuteman missiles and command and control nodes. The concern was that after this attack, the United States would be unable to retaliate with nuclear weapons. To deter this threat and maximize response options, a launch-under-attack posture was adopted in 1979. Under this posture, Minuteman missiles were required to launch within 30 minutes of receiving reliable warning that the United States was under attack. Later, in the 1980s, submarine-launched ballistic missiles were also configured to this posture.
This policy was extended even after the fall of the Soviet Union. Planners determined that an effective counterstrike required, at minimum, a five-minute launch sequence. This left the remaining 25 minutes for satellite and radar detection, operator assessment, communication to the president, and a nuclear-use decision. These time constraints encouraged successive U.S. administrations to maintain the launch-under-attack policy.
However, simulations by Massachusetts Institute of Technology researchers have demonstrated that hundreds of silo-based Minuteman missiles would likely survive a first strike. In fact, according to recent analysis published on this platform, the United States would maintain “more warheads per retaliatory target than before the Russian strike,” weakening the primary rationale for the posture. This somewhat puzzling result is due to the survivability of U.S. silo-based missiles and the fact that there will be fewer military targets remaining, since many Russian missile silos will be empty after a first strike.
“Launch-Under-Attack” in a Complex World
A launch-under-attack posture exposes the United States to an increased risk of accidental or mistaken launch in the modern nuclear era. To be available as an option, launch-under-attack relies on accurate warning data and a viable launch capability. The corollary is that, for the posture to deter, adversaries must believe a first strike would be detected and retaliatory weapons would be employed. Underpinning these capabilities is the nuclear command, control, and communications architecture. But unlike during much of the Cold War, modernized command and control systems are more reliant on computers and thus susceptible to cyber exploitation. Combined with an outdated retaliatory option, this is a significant risk, because it distorts the incentives that shape preemptive and retaliatory nuclear launch decisions.
Two cyber risks are routinely discussed in policy circles. First, critical hardware and software components may be compromised in supply chains. Adversaries can introduce malware or malicious code into digital and automation components, infiltrating both networked and non-networked elements of communications systems. If U.S. nuclear systems were compromised by a supply chain attack, it could either undermine the national command authority’s confidence in its second-strike capability or, from the adversary’s perspective, reduce the risk of a retaliatory strike.
The second cyber risk is spoofing, which involves the injection of false data into key computer-mediated systems. Spoofing can take two forms in early warning systems: hiding actual inbound missiles or creating fake signals of inbound missiles. The former is more likely to originate from a nuclear peer in an effort to further compress Washington’s decision-making window by obfuscating early warning data in hopes of increasing the effectiveness of a first strike. The latter, on the other hand, is more likely to be injected by a non-peer or terrorist group aiming to manipulate global perceptions of American brinkmanship or trigger catalytic nuclear war between two or more powers.
During a crisis, cyber vulnerabilities can increase the risk of a preemptive strike or a mistaken launch. This is because cyber attacks can disrupt critical systems, which can reduce trust in early warning and second-strike capabilities. Additionally, such attacks can create confusion and make it difficult to distinguish between a genuine attack and a false alarm, potentially resulting in a mistaken launch from the side that thinks it is under nuclear attack. The launch-under-attack posture exacerbates this problem because it requires a decision to be made. Even if the president opts for nonretaliatory measures, this is still a deliberate choice amidst the prevailing uncertainty.
The rationale for this posture has also been challenged by proliferation, which has increased the demands on technical systems. When the launch-under-attack posture was first implemented, there were only two major nuclear powers. This is not the case today. The 2022 Nuclear Posture Review recognizes both Russia and China as major nuclear powers and strategic competitors. In the absence of strategic intelligence suggesting an imminent first strike, the already short decision timeline is further compressed by the need to collate early warning detections with ever-growing sets of radar and intelligence data. Modernized command, control, and communications systems — particularly early warning components that integrate machine learning — will help alleviate some of this information-induced pressure. However, technical limitations and human biases introduce additional risks.
Machine learning systems are only as good as the data and algorithms used to train them. Training data can be poisoned or biased, while the algorithms themselves may produce results of indeterminable quality. Moreover, such systems are difficult to train on infrequent events, and missile launches are rare: there is little real-world data from which an early warning model can learn. In its absence, simulations will be used to generate the necessary data sets. Effective simulation data depends on accurate intelligence about adversary delivery capabilities. Inaccurate intelligence risks building bias into the system’s training, and there may be insufficient opportunities to validate the models against real-world events.
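To illustrate the simulation-bias concern, consider a minimal sketch with entirely made-up numbers: a detection threshold is tuned on simulated infrared signatures generated from intelligence estimates, but those estimates understate how dim and variable the adversary’s real signatures are. A threshold that looks safe in simulation then misses a meaningful share of real events.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated training data, built from (assumed) intelligence estimates
# of an adversary booster's infrared signature: mean 10.0, spread 1.0.
simulated = rng.normal(loc=10.0, scale=1.0, size=100_000)

# Tune a detection threshold so the simulated miss rate is ~0.1 percent.
threshold = np.quantile(simulated, 0.001)

# Suppose real-world signatures are dimmer and more variable than the
# intelligence assumed (mean 9.0, spread 2.0; hypothetical values).
real_world = rng.normal(loc=9.0, scale=2.0, size=100_000)

miss_rate = np.mean(real_world < threshold)
print(f"Threshold tuned in simulation: {threshold:.2f}")
print(f"Miss rate against real signatures: {miss_rate:.1%}")  # roughly 15%
```

The numbers are arbitrary, but the mechanism is general: a model validated only against its own simulation inherits every error in the intelligence that built that simulation.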
Accurately assessing nuclear capabilities is a challenge because intelligence is fallible and open source data reveals only so much. These assessment difficulties, and the technical challenges that follow from them, may be most pronounced when the primary nuclear threat is temporarily a non-peer, like North Korea. If a machine learning system is overtrained on particular data, it can make inaccurate predictions when presented with new information. For example, if early warning systems are trained only on data from known Russian and Chinese capabilities, the model may misclassify sensor data from a new North Korean capability. More generally, proliferation — encompassing both new states developing nuclear weapons and existing powers expanding their arsenals — generates greater uncertainty in model outputs, making threats harder for decision-makers to assess.
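The overtraining concern can be made concrete with a toy classifier. The sketch below uses a nearest-centroid rule over two fabricated “signature” clusters standing in for known capabilities; faced with a novel input far from both, it still returns a confident label because it has no notion of “none of the above.”

```python
import numpy as np

# Fabricated 2-D "signature" centroids for two known capability classes.
centroids = {
    "known_class_A": np.array([2.0, 3.0]),
    "known_class_B": np.array([8.0, 1.0]),
}

def classify(x: np.ndarray) -> str:
    # Nearest-centroid rule: always returns one of the trained labels,
    # with no mechanism for flagging unfamiliar inputs.
    return min(centroids, key=lambda k: np.linalg.norm(x - centroids[k]))

# A novel signature unlike anything in the training data.
novel = np.array([20.0, 15.0])
print(classify(novel))  # "known_class_B": a confident, wrong answer
```

A fielded system would be far more sophisticated, but the structural problem is the same: a model forced to choose among the classes it was trained on cannot, by itself, recognize a genuinely new capability.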
Compounding these technical weaknesses is operators’ tendency to overestimate a system’s accuracy, particularly as they are further removed from the original data. When an operator interprets radar data directly, they determine whether a missile is there. When an algorithm performs that interpretation, it may simply output whether an attack is in progress. And because actual attacks are vanishingly rare, the system will almost always correctly report “no attack,” convincing operators and decision-makers that it is more accurate than it is. This overconfidence, dubbed automation bias, is especially prevalent in military settings because of training and organizational trust. The human-machine interaction at the operator level, combined with the launch-under-attack option at the presidential level, makes a positive launch decision more likely, even without certainty of a threat.
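The arithmetic behind this overconfidence is a textbook base-rate problem. The probabilities below are illustrative assumptions, not estimates of any real early warning system, but they show how a sensor that is right almost all of the time can still produce alarms that are usually false.

```python
# Illustrative base-rate calculation (all probabilities are assumed,
# not estimates of any real early warning system).
p_attack = 1e-6                 # prior probability of attack in a given alert window
p_alarm_given_attack = 0.999    # sensor sensitivity (true-positive rate)
p_alarm_given_no_attack = 1e-4  # false-alarm rate (1 in 10,000 windows)

# Total probability of seeing an alarm (law of total probability).
p_alarm = (p_alarm_given_attack * p_attack
           + p_alarm_given_no_attack * (1 - p_attack))

# Bayes' rule: probability the attack is real, given that the alarm fired.
p_attack_given_alarm = p_alarm_given_attack * p_attack / p_alarm
print(f"P(attack | alarm) = {p_attack_given_alarm:.4f}")  # ~0.0099, about 1%
```

Under these assumptions, the system is correct more than 99.9 percent of the time overall, yet any individual alarm is real only about one percent of the time. An operator who anchors on the first figure will badly over-trust the second.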
Building Resiliency Through Policy
The Swiss cheese model of accident causation is a risk management tool used in a variety of industries. The model uses slices of cheese to represent individual safeguards, each with inherent weaknesses portrayed as the holes in its slice. Stacking multiple slices together reduces the likelihood of an unwanted outcome: ideally, enough slices are stacked that the holes never align, and threats are thwarted.
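The model’s logic is easy to express quantitatively. Assuming, purely for illustration, that each safeguard independently fails with some small probability, stacking layers drives the joint failure probability down multiplicatively; anything that correlates the holes, such as a shared incentive to trust the same warning data, removes much of that protection.

```python
import math

# Hypothetical, independent per-layer failure probabilities: the chance
# each safeguard's "hole" lines up with the threat. Illustrative only.
layer_failure_probs = [0.05, 0.10, 0.02, 0.08]

# With independent layers, an accident requires every layer to fail.
p_accident = math.prod(layer_failure_probs)
print(f"Four independent layers: {p_accident:.2e}")  # 8.00e-06

# If one shared pressure (e.g., trust in the same warning feed)
# effectively removes two layers, far less protection remains.
p_correlated = math.prod(layer_failure_probs[:2])
print(f"Two effective layers: {p_correlated:.2e}")   # 5.00e-03
```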
In the U.S. nuclear architecture, multiple safeguards are stacked to prevent weaknesses in each component from aligning. However, the launch-under-attack posture invites those weaknesses to align because it encourages undue trust in early warning systems, where the nuclear-use decision chain begins. Even as just one option among several, it imposes a “premium on haste in a crisis”: launch on a high-confidence warning, or face the strategic and political repercussions of indecision. Thus, the posture’s availability paradoxically constrains the president’s decision-making process, which is informed by vulnerable machine-produced data in a time-compressed, high-stress environment.
Decide-Under-Attack in the Electronic Environment
Any retaliatory posture must account for two factors: first, the inherent and increasing vulnerability of the systems that inform decision-making, and second, the fundamental importance of presidential control in U.S. nuclear policy. It must also ensure that weapons and command and control remain available from the use decision through execution.
Retired Adm. James Winnefeld, former commander of North American Aerospace Defense Command, proposed an approach that better balances deterrence and safety. This posture, called “decide-under-attack,” introduces a delayed response option to reduce the time pressure imposed by launch-under-attack.
Ultimately, an attack warning will prove to be real or false. But the president must decide whether to launch without knowing which. Among the four possible scenarios, two outcomes must be avoided: failing to launch when the warning is real, and executing an irretrievable retaliatory strike when the warning is false. The cyber- and system-based vulnerabilities described above highlight the uncertainty inherent in the information that feeds this decision. And because of the time constraints it induces, a launch-under-attack posture increases the likelihood of both unwanted outcomes.
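For clarity, the four scenarios can be laid out as a simple decision matrix. The sketch below enumerates them; the outcome labels are our own shorthand, not formal doctrine.

```python
# Enumerate the (warning is real?, president launches?) combinations.
outcomes = {
    (True,  True):  "justified retaliation",
    (True,  False): "failure to retaliate under real attack",   # must avoid
    (False, True):  "irretrievable strike on a false warning",  # must avoid
    (False, False): "correct restraint",
}
for (real, launch), label in outcomes.items():
    print(f"warning real={real!s:5}  launch={launch!s:5}  -> {label}")
```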
Decide-under-attack improves upon launch-under-attack by allowing the president to opt for a delayed response. This option extends the reach of command and control and reduces the pressure caused by uncertainty and time constraints. Upon receiving a warning, the president can choose to order specific or all components of the nuclear triad to execute a delayed attack. For example, the president may decide to ready the submarine- and land-launched components while keeping the long-range bombers grounded to minimize the potential for escalation if the warning proves false.
In a scenario where the president has a higher degree of confidence that the warning is real and is concerned about the survivability of the land and sea components, they may also order the strategic aircraft to take flight. Even if it is a real warning and the president becomes incapacitated (or communications are lost), weapons would be available and the command and control concept would be intact, enabling a retaliatory strike.
However, if the warning proves false, the president can cancel the strike. The risk of a premature decision is reduced because the president knows that the order could still be carried out even in the event of their death or disrupted communications. Decide-under-attack effectively addresses the risk of mistaken launch in today’s posture by pivoting the retaliation decision from time-constrained to proof-based.
Furthermore, the proposed posture serves as a deterrent to adversaries with cyber capabilities. A strategic adversary could launch a real strike while using cyber-based tactics to induce additional uncertainty, hoping to overwhelm the president and make a retaliatory response difficult to initiate. That prospect creates an incentive to attempt a first strike. If adversaries believe a delayed retaliatory response is likely, however, the incentive to launch such a cyber-nuclear attack is reduced.
Other actors, namely terrorists with cyber capabilities, may try to provoke a preemptive launch by fabricating a false signal. The decide-under-attack posture addresses this by delaying the response until there is greater evidence, such as additional sensor correlation or confirmation of weapons impact. A potential weakness of this approach would be if an adversary could convincingly deliver a false signal across multiple systems to provoke a launch order and then disrupt communications. However, the time delay, combined with the availability of alternative communications methods (since the warning was false), adds layers of resilience to prevent a mistaken launch.
Moreover, this approach guards against the system and human biases that could lead to action on a false warning, biases that have been the source of past near-accidental launches. As such, the decide-under-attack option builds resiliency by expanding the decision space: space that can be used to recall an order, modify an order to achieve a proportional response, or validate an inbound weapon’s origin. This posture not only increases the credibility of Washington’s retaliatory capability but also accounts for false nuclear alarms caused by anything from equipment malfunction and algorithmic error to deliberate spoofing and human fallibility.
Conclusion
Returning to the scene at Buckley Space Force Base: Tech Sgt. Nichols stares at the warning on his console. For a moment he wonders if this is his Colonel Petrov moment. In 1983, the Soviet officer Stanislav Petrov was credited with “saving the world” when he deliberately declined to act on an erroneous report of an incoming American strike. But unlike the dilemma facing the Soviet colonel, Nichols knows that modern nuclear brinkmanship is more complex than ever before, with many different nuclear actors and the constant threat of terrorism. And although he knows that advanced systems are imperfect, who is he to question the machine?
Fortunately, Tech Sgt. Nichols has been briefed on a new launch policy. The president has abandoned the old launch-under-attack posture for a decide-under-attack approach. This means that before any nuclear exchange begins, the retaliatory decision will give greater weight to proof than to the “time to impact” of an inbound threat. Nichols is assured that he can report the notification and then take additional time to verify its origin, validity, and accuracy without fear that it will be too late to alter his original report. This renews his confidence in the systems, both machine and human, that are responsible for the world’s safety.
Johnathan Falcone is an active-duty U.S. Navy officer currently serving as a chief engineer in the Littoral Combat Ship program. He was awarded the 2022 Alfred Thayer Mahan Literary Award by the Navy League of the United States and is a graduate of the Princeton School of Public and International Affairs and Yale University. @jdfalc1
Jonathan Rodriguez Cefalu is the founder and Chairman of Preamble, Inc., a company on a mission to provide ethical guardrails for AI systems. Jonathan holds a computer science degree, with honors, from Stanford University. He created the Snapchat Spectacles augmented reality glasses when his first startup Vergence Labs was acquired by Snap Inc. in 2014.
Michael Kneeshaw is a bioinformatics scientist and researcher with a focus on machine learning and simulations. He is currently leading the development of a wargame simulator called SIMC4, which is special-built for simulating catalytic nuclear war scenarios. The project is funded by the Preamble Windfall Foundation, a 501(c)(3).
Maarten Bos is a quantitative experimental behavioral researcher, with expertise in decision science, persuasion, and human-technology interaction. He has worked in academia and industry research laboratories, and his work has been published in journals including Science, Psychological Science, and the Review of Economic Studies. His work has been covered by the Wall Street Journal, Harvard Business Review, NPR, and the New York Times. Maarten received his Ph.D. in the Netherlands and postdoc training at Harvard Business School.
All vignettes are fictitious and have been developed from open source information. The authors’ opinions are their own and do not reflect the official stance of the U.S. Navy or other (previous) affiliations.