Accidents and Escalation in a Cyber Age


Sometimes wars, from small ones to big ones, start with accidents. In early American history, for example, accidents associated with good-faith errors and unauthorized acts of violence precipitated several conflicts between the United States and Native American groups. On a larger scale, Scott Sagan has argued that the 18th-century Seven Years’ War was sparked by a false warning of French invasion and that the Japanese invasion of Manchuria in 1931 was an unauthorized attack orchestrated by army officers against the express wishes of the civilian government. The potential for these sorts of accidents — human or technical errors and unauthorized actions — to induce subsequent escalation has prompted considerable diplomatic effort to mitigate the risks of stumbling into armed conflict. There are hotlines, summits, “open skies” agreements, confidence-building measures, and more. Yet today the risk of accidents producing escalation persists, especially where states see a first-mover advantage and fear the consequences of under-reacting more than the risks of over-reacting.

The rising salience of cyber warfare adds dangerously to the traditional set of concerns about the onset of armed conflict. Now, in addition to wars waged with bombs and bullets, militaries have become highly dependent upon the security of the bits and bytes that empower their “sensing and shooting” capabilities. And the link between incidents that start in cyberspace and end up in battlespaces on land, at sea, and in the aerospace environment is getting the attention of policymakers. So much so that President Joe Biden has noted, “we’re going to end up, if we end up in a war — a real shooting war with a major power — it’s going to be a consequence of a cyber breach of great consequence.”

Some have argued that cyber operations are actually unlikely to prompt military escalation, but we would argue that this possibility ought to receive policymaking attention precisely to reduce its probability. There remains much uncertainty about the frequency and severity of cyber accidents that are occurring now and that will arise in the future. This uncertainty extends to whether any given cyber accident will spark military escalation. But a growing body of evidence, beginning with the Moonlight Maze cyber intrusions into military systems in the late 1990s and continuing through other serious events to this day, makes clear that potential adversaries are testing out virtual ways to disrupt physical operations.

What sorts of accidents might occur in cyberspace, and how might they prompt military escalation? We have three basic categories in mind. First, there are human actions in cyberspace that could provoke escalation. These include human errors, such as mistaken perceptions. During the Cuban Missile Crisis, for example, a U.S. destroyer’s use of training depth charges to force a Soviet submarine to surface very nearly led the submarine’s captain to launch a nuclear torpedo because he reportedly believed a war might have already started. The difficulties of ascertaining intent and attributing responsibility for actions in cyberspace could produce similar human errors. Alternatively, intentional actions not authorized by any proper authority would also fit this category. Non-state-affiliated hackers, for example, could individually or collectively target or unintentionally affect highly sensitive systems such as those related to critical infrastructure, conventional military systems, or nuclear command-and-control processes. Similarly, state-linked cyber proxies could undertake such actions of their own volition and have them incorrectly attributed to their state sponsor.

As a case in point, from 2001 to 2002, Gary McKinnon, a Briton whose autism played a central role in his legal defense after he was caught, hacked into U.S. military systems in a purported effort to uncover information about unidentified flying objects he believed the U.S. government was hiding. The hack kept about 2,000 military computers offline for days, caused alarm in the Pentagon — American air defenses and the Atlantic Fleet’s logistics had been compromised — and raised questions about who was targeting the United States and who ought to face retribution. The prosecutor in charge of the effort to extradite him from the United Kingdom described the incident as “the biggest military computer hack of all time.” Needless to say, had this occurred during a major international crisis — and had the United States failed to sufficiently harden those military networks despite being in a crisis — the consequences could have been dire.

Our second category of concerns is about technical errors that could lead to military escalation. During the Cold War, for example, there were occasions on which the United States and the Soviet Union each thought the other had begun a nuclear attack. In the American case, a war-scenario training tape was mistakenly inserted and played at a strategic command. On the Soviet side, a computer malfunction reported that five American missiles were heading toward the Soviet Union. Fortunately, those involved in both cases waited long enough before retaliating to ascertain that their identification of an incoming attack was mistaken, but there is no guarantee that such events will always resolve so well. That we are all still here speaks to the caution that nuclear weapons can induce. In the case of escalation to conventional military action from cyber operations, however, those involved may be less cautious.

The third category of potential problems consists of intentional cyber operations that may have unintended consequences. Due to the complexity and interconnectedness of advanced information systems, any offensive cyber operation engenders uncertainty as to the effects it will cause. Thus, to the extent that human and/or technical errors could be the source of such effects, it is worth considering them here. One could imagine, for example, that either Israeli or Iranian cyber operations targeting the other in their current “virtual conflict” might yield greater consequences than the attacker intended. In an already tense, volatile situation, such a perceived escalation, even if unintended, might prompt an armed, “kinetic” response.

How might states reduce the risk of accidents in cyberspace escalating into open warfare? Bringing arms control into cyberspace could be both feasible and worthwhile. Given the inability to monitor cyber capabilities in the same way that states can monitor more observable nuclear systems, however, any such efforts will need to focus on behavior rather than “bean counting.” With respect to the prospect of accidents in cyberspace, the 1972 U.S.-Soviet Incidents at Sea Agreement may offer an excellent model of the sort of agreement that could limit military escalation in the wake of cyber accidents.

The Incidents at Sea Agreement stemmed from a similar concern that accidents could create incentives for escalation, and it established a framework for information-sharing when an accident did happen. The agreement stipulated, for example, that U.S. and Soviet ships would use “accepted international signals when ships maneuver near one another” and would inform surface-going vessels “when submarines are exercising near them.” It also established that reports of any accidents would go through naval attachés in the respective capitals and that the parties would meet annually to review implementation.

Any such effort in cyberspace should similarly take a two-fold approach: reducing the risk of accidents and providing mechanisms for sharing information about accidents that do occur. With respect to risk reduction, states could establish clear guidelines on which systems are off-limits (such as critical infrastructure or nuclear command-and-control systems), attacks on which would prompt reciprocal action, even military escalation. As for information-sharing after accidents, the sensitivity of cyber operations and exploited systems will require that such sharing be kept limited, but a similar system of national attachés could work here. States are unlikely to be willing to share such information with a centralized, international body run through the United Nations or elsewhere. Quiet bilateral diplomacy is more likely to work.

There are, however, important differences between maritime and cyber operations that any analogous agreement would need to address. First, given the significance of private stakeholders in cyberspace, any effort at behavior-based cyber arms control will need to bring these actors into the policymaking process in a way that was not necessary for the functioning of the Incidents at Sea Agreement. This may cut against the desire for secrecy in information-sharing, but non-state stakeholders need not be involved in every part of the process. Second, bilateral agreements may be easier to establish than multilateral ones. Given the dozens of states with significant cyber capabilities (as opposed to the far smaller number with significant blue-water navies), any single bilateral agreement might not do much to address broader multilateral concerns. Policymakers might thus consider starting with a bilateral agreement — perhaps beginning with confidence-building measures — and expanding carefully from there. More ambitious work might eventually be undertaken to address wartime incentives for escalatory information operations.

Nonetheless, there are significant limitations and degrees of uncertainty that will make it difficult to agree on and implement any such efforts at behavior-based cyber arms control. The Incidents at Sea Agreement involved accidents in which it was generally clear which actors were involved and who was at fault, something that may less frequently be the case in cyberspace. Relatedly, any framework for the avoidance of accidents in cyberspace would require the offending party to admit responsibility and acknowledge that the operation’s effects were unintentional. Moreover, when an offending state claims to have accidentally affected a target state or claims that a non-state actor was the origin of the attack, the target state may not believe the offending state (whether it was being truthful or otherwise).

All these factors will make it rather difficult to establish and sustain cyber arms control. Yet, even very narrow agreements would be helpful in reducing risks and the potential costs associated with escalation, and continued innovation and investment in digital forensics may ameliorate some of the difficulties by piercing the veil of anonymity that too often shrouds cyber malefactors. Moreover, agreements and advances of this sort need not stand on their own. As in the U.S.-Soviet relationship, deterrence — particularly in its “denial” aspect, based on strong defenses — may complement shared understandings of the “rules of the road,” and strategies of deception or information camouflage may bolster cyber defenses in ways that reduce the severity of cyber accidents.

Like any other tool of statecraft, diplomacy is not a cure-all. But if accidents are going to happen — in cyberspace or elsewhere — it is worth taking steps to mitigate the risks associated with them. This is clearly a concern on Biden’s mind. It should also be on the minds of all whose duties include ensuring national security.

Andrew A. Szarejko is a Donald R. Beall Defense Fellow in the Naval Postgraduate School’s Defense Analysis Department and a non-residential fellow at the U.S. Military Academy’s Modern War Institute. 

John Arquilla is Distinguished Professor Emeritus of Defense Analysis at the Naval Postgraduate School and author, most recently, of Bitskrieg: The New Challenge of Cyberwarfare.

The views expressed here are those of the authors and do not represent the views or positions of the Naval Postgraduate School, the Department of the Navy, or any part of the U.S. government. 

Image: U.S. Coast Guard (Photo by Petty Officer 2nd Class Hunter Medley)