Swaggering in Cyberspace: Busting the Conventional Wisdom on Cyber Coercion


Recent years have seen a steady evolution in the sophistication and aims of cyberattacks. While cyberespionage continues to threaten the sanctity of government and private sector data, cyberattacks have also been used to support real-world military operations; Georgia and Crimea easily spring to mind. Now, a new class of cyberattacks is being carried out in the absence of military campaigns. Cyber prophets have long discussed how independent cyberattacks could target critical infrastructure. A recent hack of Ukraine’s power grid brought these predictions to life.

Given the growing ability and willingness of various actors to target a nation’s critical infrastructure, David Gompert and Hans Binnendijk have argued that the United States should use cyber operations to “amp up the power to coerce.” This is a reasonable objective, but it runs up against the conventional wisdom that cyber coercion simply doesn’t work. A major component of successful coercion is detailing the pain your enemy will endure for resisting. Communicating that capability in the cyber realm, however, is likely to induce your enemy to “patch” the vulnerability you were hoping to exploit. How can actors ever coerce targets with cyber weapons if issuing the threat effectively neutralizes the weapon’s utility?

We propose one possible way of resolving this problem: selectively revealing an individual cyber tactic to your opponent. Exploiting the “perishable” nature of certain cyber weapons will not eliminate the difficulties of cyber coercion, but it addresses them in at least three ways. First, it can reduce the uncertainty surrounding your capabilities by hinting at the breadth or depth of your remaining cyber arsenal. Second, because these weapons can be costly to develop, burning a tactic or vulnerability can serve as a “sunk cost” signal of resolve. Third, since some cyber weapons may be more damaging than others, the choice of which vulnerability to burn can communicate your level of interest in the dispute.

While much attention has been paid to cyber deterrence and to defending U.S. supervisory control and data acquisition (SCADA) networks and infrastructure, we propose one way of beefing up cyber’s offensive potential. The 2015 Department of Defense Cyber Strategy seeks ways to “build and maintain viable cyber options [to] shape the conflict environment at all stages.” Our hope is to begin filling that gap by examining prospective ways states might use cyber threats to impose their will.

To do so, we will review the problem of coercion in cyberspace, outline our proposed solution, and touch on some of the advantages and disadvantages associated with this method. It is also worth noting up front that the primary focus of our piece — use of zero-day exploits — constitutes a small (but growing) fraction of cyberspace operations. Indeed, some reports rightly recognize that zero-days receive a disproportionate amount of attention given that most cyberattacks don’t rely on them. Nevertheless, to the extent that zero-days still represent an important tactic in a state’s cyber arsenal — or to the extent that our logic generalizes to other domains — the prescriptions contained below should still be of interest to policymakers. Generally speaking, this logic should hold for any secret and costly technique that generates an opening in a target’s system. This could be an intrinsic defect in the code (the zero-day vulnerabilities discussed above) or even a back door left behind through social engineering of humans (spear phishing).

The Fundamental Problem of Cyber Coercion

Coercing someone may be hard, but it is not complicated, at least in theory. Coercion involves presenting your enemy with a choice and using carrots or sticks to incentivize the “right” choice. Simply put, the target must prefer giving in to your demand over paying the price for resisting. To make this choice, your opponent must judge two attributes: (1) your resolve to carry out the threat and (2) the damage you can inflict if you do carry it out.
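To fix ideas, here is a minimal sketch of that calculus in our own stylized notation (an illustration, not a formal model): let p be the target’s estimate of the probability that the threat will actually be carried out (resolve), let D be the damage the challenger can inflict (capability), and let C be the cost of conceding to the demand. The target should give in only when

C < p × D,

that is, only when compliance looks cheaper than the expected pain of resisting. The coercer’s task is to raise the target’s estimates of p and D; the difficulty in cyberspace is raising those estimates without destroying the very capability that generates D.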

In the conventional and nuclear arenas, the damage you can inflict is often fairly clear. States use publicized parades, air shows, and missile tests to reveal the existence and sophistication of certain weapon systems. When observers witness the Cobra maneuver of a MiG-29, they can be certain the aircraft is a formidable dogfighter. When a regime televises a road-mobile, multi-stage ballistic missile, opponents understand that it will be a potent weapon in the future. The U.S. government also signals its capabilities this way, routinely publicizing successful flight and intercept tests from its missile defense program, for instance. Because capabilities can usually be demonstrated in the conventional and nuclear domains, successful coercion there typically turns on how to credibly signal resolve.

Demonstrating capabilities in cyberspace is trickier than in the conventional and nuclear domains. Actors who wish to threaten specific attacks face a unique problem: secrecy is a necessary ingredient at the planning and operational stages of a cyber operation. Discussing or showcasing a weapon effectively sacrifices it forever. Threatening to attack the floodgates of a specific dam should induce the target to respond by hardening its systems or even disconnecting them from the Internet. At that point, any threat to leverage that tactic becomes useless, since the method of entry and exploitation has been eliminated.

Therein lies the rub for would-be coercers. You want your enemy to know how much pain he will suffer for resisting you, but explicitly threatening that pain through a specific vulnerability undermines your ability to inflict it, at least in that particular instance. How, then, can a target decide to give in if he’s unsure of both your resolve to attack and the pain he’ll suffer if you do?

Swaggering in Cyberspace

Counterintuitively, we believe that the perishable nature of cyber weapons can actually be useful for demonstrating resolve and capability, provided that a state has a suite of cyber weapons at its disposal. Consider the following scenario. A challenger wants to force a potential victim to change its behavior or policies in some way. To clarify its intent and the consequences of defiance, the challenger sends a private message to the victim’s leadership. This message details the challenger’s demands and simultaneously notifies the victim of a flaw in its systems that it had not yet discovered (a zero-day vulnerability). The challenger may even provide directions to the faulty code, a pre-installed logic bomb, or other details of this vulnerability without actually attacking or exploiting it. But why, you might ask, would anyone in their right mind do this?

“Burning vulnerabilities” in the way just described does three things to address the problem of cyber coercion. First, it allows the potential target to update its assessment of the challenger’s capabilities. The target can be reasonably sure that the remaining exploits in the challenger’s arsenal are at least as damaging as the one revealed. Although the target may still be uncertain of its opponent’s maximum capabilities, it can now form a more informed judgment of the damage it is likely to suffer should it continue to resist. The coercive pain it must weigh comes from those remaining weapons, not the exploit that was revealed. The burned vulnerability does not inflict pain itself; it works by revealing the sophistication of the latent weapons behind it.
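To see the updating logic in the same stylized notation introduced earlier (again, our own illustration rather than a formal model), suppose the challenger’s arsenal contains exploits with damage potentials D1 ≤ D2 ≤ … ≤ Dn and it burns one with potential Dr. If the target assumes the challenger would not give away its most damaging tool, it can treat Dr as a rough floor for what remains, so its estimate of the expected damage from continued resistance rises from a diffuse guess to something anchored at or above Dr. The revealed exploit inflicts no pain itself; it simply shifts the target’s beliefs about the pain still held in reserve.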

Second, burning exploits is costly, and absorbing costs has long been recognized as a way to signal resolve. By exposing a known vulnerability, challengers allow would-be victims to patch it, thereby relinquishing any opportunity to use it for nefarious purposes and sacrificing the significant time, effort, and skill invested in preparing the weapon. For example, the development of Stuxnet involved a variety of actors and demanded sophisticated intelligence about the Siemens controllers running Iran’s centrifuges and about Iranian enrichment practices. Likewise, the 2015 attack on Ukraine’s power system required months of planning, significant reconnaissance, and the tight collaboration of multiple teams to unleash a “synchronized assault in a well-choreographed dance.”

Therefore, bearing these costs can demonstrate resolve, and only states with sophisticated and diverse cyber arsenals can afford to burn tactics in this way. Of course, it is both theoretically and practically possible for states to reveal much about their arsenals without burning zero-days. Our point is not that this is the only way of revealing capabilities, but rather that it may be a particularly effective means of doing so.

Third, the choice of which weapon or vulnerability to sacrifice can itself work as a signal. If cyber weapons can be “ranked” by how much damage they can cause, challengers can strategically select from this menu of weapons (and their accompanying potency levels) to signal their intent and the interests at stake. Revealing your capability to block access to a database is far different from detailing the vulnerability or logic bomb that could disrupt power and water to a region’s cities or military bases. Burning highly impactful zero-days should be especially meaningful, since states are unlikely to have large stockpiles of them.

Burning vulnerabilities thus holds the potential to address both halves of the cyber coercion problem. It is a costly signal for the challenger to send, and it reduces uncertainty over how damaging the challenger’s cyberattacks may be. It hints at the breadth and depth of the remaining arsenal, demonstrates the resolve to engage, and can signal the gravity of the issue at stake.

Weighing the Advantages of Swaggering Quietly

A main advantage of leveraging cyber assets in this way is that it can be done quietly, out of the public eye. While there are advantages to public demands that stake a state’s reputation, states can also use quiet diplomacy to coerce their opponents. The private nature of the threat allows the victim to concede without an embarrassing public loss of face. Actively exploiting the vulnerability, by contrast, would inflict pain on the victim’s citizens and pressure its leaders to retaliate or escalate. In this vein, burning a vulnerability may even be perceived by the target as a measure of good faith; the challenger clarifies its capabilities without doing harm.

Privately revealing the vulnerability credibly communicates the pain the enemy would endure for resisting, but without creating a public spectacle. Unlike a flyover of nuclear-capable bombers, a patrol of warships, or the test of a Minuteman III ballistic missile, cyber threats can occur in the shadows. This makes it easier for the victim to concede in private and to justify its changed policies however it chooses. No one needs to know that the challenger “won” because of a cyber threat, and this secrecy minimizes any damage to the target’s reputation.

There is another advantage associated with “burning vulnerabilities,” albeit one not directly related to coercion. Quietly alerting a target to a vulnerability without pairing it with a threat may be seen as an act of good faith, forging the basis for future cooperation. To be sure, such an act may still provide a potential adversary with a better sense of your capabilities. The key takeaway, however, is that disclosing zero-days may not only be a means of coercing an opponent, but also a means of demonstrating goodwill.

This strategy is not wholly without risks. Burning one capability may inadvertently burn others. Rather than serving as a credible signal, revealing a vulnerability may backfire and nullify a broader swath of the cyber arsenal than initially intended. At the tactical level, some cyber weapons may rely on a similar “menu of exploits” to penetrate an opponent’s system, so revealing one vulnerability may affect other operations that rely on the same means of access. Dave Aitel and Skylar Rampersaud usefully outline this exact risk when describing the dangers associated with disclosing zero-days.

Though a very real possibility, this concern may be less potent than it seems. As former CIA Director General Michael Hayden attests, “you need a tailored tool to create the desired effects. Very often this has to be a handcrafted tool for the specific target.” The more that future cyber warriors craft their techniques to target very specific pieces of equipment or software, the lower the risk that burning one tactic will adversely affect other specialized tools. Even so, the possibility that burning one zero-day will compromise other vulnerabilities is one of the major risks associated with this strategy and something that should be thoroughly accounted for.

A second risk associated with this strategy turns on misinterpretation. It is always possible that the prospective target will not interpret disclosure in the way the perpetrator intended — as a signal of latent capabilities. To the extent that the target fails to correctly interpret what’s happening, the strategy may not work and could even backfire.

Finally, the coercive power of cyber weapons may be limited. The United States is not going to give up Hawaii to forestall a power blackout. Taiwan isn’t likely to surrender its sovereignty under threat of a train derailment. However, cyber damage may be sufficient to shape lower levels of behavior or force concessions on lesser interests. Like economic sanctions, cyber weapons may become one tool in a state’s coercive arsenal that can either accompany or substitute for military action.

In the Zero-Days Ahead

Many observers have long thought that offensive cyber operations were useless instruments of coercion. On this view, rather than threatening their use, a would-be cyber warrior could only influence his opponent by actually unleashing an attack. Our aim in writing this post was to identify some ways in which actors can work around the standard problems of coercion in cyberspace. The perishable nature of cyber weapons leaves room for political coercion. In the future, cyber weapons will fill greater roles during conflict, disabling an opponent’s air defense network, hampering command and control systems, or disrupting the flow of combat supplies. But states can also quietly wield the potential punishment of cyberattacks to force others to do their will. Of course, the logic articulated above may not apply to all cyber tools in all contexts. We have sought to identify the problem with cyber coercion at a theoretical level and to explore some plausible ways policymakers might resolve it.

 

Major Craig Neuman is an Air Force officer and AC-130 gunship pilot with multiple overseas deployments. He earned his PhD in political science from Stanford University where his research focused on coercive threats and crisis signaling. The views expressed are those of the author and do not necessarily reflect the official policy or position of the Department of the Air Force or the U.S. government.

Michael Poznansky is a PhD candidate at the University of Virginia. During the 2015-2016 academic year, Michael was a research fellow at the Belfer Center for Science and International Affairs at the Harvard Kennedy School of Government. In Fall 2016, he will join the University of Pittsburgh’s Graduate School of Public and International Affairs as an Assistant Professor of International Affairs and Intelligence Studies.

Image: U.S. Air Force photo by Raymond McCoy