Attribution and Secrecy in Cyberspace


Between 2014 and 2015, hackers repeatedly broke into the U.S. Office of Personnel Management (OPM), stealing personal and biometric data on millions of federal employees. Several years earlier, Stuxnet wreaked havoc on centrifuges at Iran's Natanz nuclear enrichment facility. Last year, numerous organizations and companies, including the New York Times, Twitter, Facebook, and the U.S. Army, were victims of cyber attacks carried out by the Syrian Electronic Army. Each time, the perpetrators handled the aftermath differently: China denied any involvement in the OPM attacks; current and former U.S. officials, speaking on condition of anonymity, acknowledged responsibility for Stuxnet; and the Syrian Electronic Army brazenly advertised its complicity by branding compromised websites with its logo.

Variation in credit-claiming behavior poses a puzzle for the so-called attribution problem in cyberspace, which depicts a world where the victims of cyber attacks must utilize an array of complex yet imperfect tools to identify their adversaries. If some actors willingly claim credit for an operation, however, the attribution “problem” ceases to be a problem at all. We propose a framework that can help explain this variation, shedding light on the tactics and strategies of different actors in cyberspace.

A key component of our framework entails distinguishing between two qualitatively different types of secrecy in the cyber domain. The first — the use of secrecy at the planning and execution stages of an attack — is often a technical prerequisite for success. The second type of secrecy — whether to claim credit for an attack privately or publicly — is a political decision. While many factors plausibly drive credit-claiming or credit-shirking behavior, two in particular stand out as significant: (1) whether target compliance is the objective; and (2) whether the perpetrator is a state or a non-state actor.

All Secrecy is Not Created Equal

The U.S. military and intelligence communities distinguish between two types of secrecy. The first, known as “clandestine operations,” refers to concealment of an operation during the planning and execution stages. Actors operate clandestinely when there is a tactical advantage to be gained from the element of surprise. The second type of secrecy, known as “covert action,” refers to operations in which actors take steps to conceal their sponsorship. Covert action enables perpetrators to completely hide their identity or at least plausibly deny involvement. While leaders can, and often do, combine these two forms of secrecy in the course of a single operation, it is possible to have one without the other. To date, these distinctions have not been treated systematically in debates about cyber operations.

The Logic of Secrecy During Cyber Operations

Unlike many other tools of statecraft, cyber operations are unique in that secrecy is a de facto requirement during the planning and execution stages. While actors technically face a choice here (T0 in Figure 1), the disadvantages of announcing an attack in advance virtually preclude doing otherwise. Denying the victim an opportunity to patch a vulnerability, for example, is one of many reasons to carry out specific operations clandestinely.

If secrecy is a requirement during the planning and execution stages of an operation, the decision to deny or embrace complicity is more complicated. At T1, or what we call the acknowledgement stage, actors decide whether to keep an operation covert (by outright denial or non-acknowledgment) or whether to claim credit. Credit-claiming can happen in one of two ways: through private acknowledgment or public acknowledgment. With private acknowledgment, perpetrators can leave signatures or quietly alert the victim of their identity. With public acknowledgment, perpetrators can brandish logos or make a public pronouncement. Irrespective of the form it takes, the decision to credit-claim is political rather than tactical. With this in mind, why do some actors claim attacks while others deny them?

Coercion in Cyberspace

One factor affecting credit-claiming and credit-shirking behavior is whether an operation requires target compliance. Credit-claiming is least likely when compliance is unnecessary. Consider cyber espionage, where the objective is simply to gather intelligence on another actor’s intentions, capabilities, and the like. In such cases, the target need not change its behavior for the operation to be successful. American and British efforts “to hack into Israeli drone and fighter jet surveillance feeds” or China’s successful OPM hack fit the bill. This logic also holds for cyber operations intended to steal financial assets or intellectual property, or to disrupt or damage physical infrastructure. Here, credit-claiming may be counterproductive. Announcing that one has penetrated an adversary’s network can jeopardize continued access without providing any tangible benefit.

Credit-claiming is much more likely when actors wish to use cyber operations for coercive purposes. Although threatening specific attacks won’t be particularly effective (see above), threatening targets with future attacks may facilitate successful coercion. The trick, however, is that potential targets must believe that the challenger has the chops to follow through on these threats. By claiming credit for past attacks, actors may be able to signal to adversaries that they have the means to conduct similar attacks in the future, rendering coercive threats in cyberspace more credible.

States can credit-claim in two ways: through public acknowledgment and private acknowledgment. The upside of public acknowledgment is that actors can convince both the victim of an attack as well as any bystanders that they have the capacity to inflict real harm in cyberspace, rendering coercive threats involving future attacks all the more credible. All else equal, public acknowledgment should give perpetrators the biggest bang for their buck in terms of building a general reputation for proven capabilities.

There are, however, distinct advantages to private acknowledgment. By leaving signatures or quietly communicating their identity to a victim, perpetrators avoid the sometimes-messy ramifications of publicity. As Austin Carson has shown, rivals often collude in the act of secrecy to avoid escalation. Publicly denying knowledge of the attacker’s identity may enable victims to concede to a set of demands without facing internal or external pressure to retaliate. While private acknowledgment limits reputational gains, allowing targets to give in while saving face should increase the chances of successful coercion and limit the prospects for unwanted escalation.

The biggest problem for outside observers seeking to study this phenomenon is observability. Given the incentives for victims of privately acknowledged cyber attacks to keep the identity of the perpetrator secret, it is impossible for outsiders to distinguish between true ignorance and feigned ignorance. Compounding this problem, if threats are made quietly, it may be impossible to know whether a potential victim’s behavior is the result of successful coercion or of independent choice.

It is worth pointing out that there exists a subset of cyber coercion — known as cyber blackmail — in which perpetrators need not reveal their identity to achieve success. When a perpetrator leverages stolen assets for coercive purposes — as opposed to threatening future attacks — credit-claiming may be unnecessary. Selectively revealing compromised information and assets can increase the credibility of threats even when the perpetrator remains anonymous or hides behind a front group. This may be what happened during the Sony hack, in which the so-called “Guardians of Peace” (#GOP) made explicit demands that Sony not release The Interview or else face reprisals. While North Korea, which was widely suspected of being responsible, denied involvement, coercion partially succeeded without the government having to explicitly acknowledge responsibility. At least part of the reason for this success turns on Sony’s understanding that certain assets (e.g. personal e-mails, financial records) had indeed been compromised in the attack.

Actors and Attribution

The second factor influencing the decision to credit-claim turns on the characteristics of the perpetrator. Politically motivated non-state actors like Anonymous and the Syrian Electronic Army are more likely than states to claim credit for two reasons: first, they lack the credibility enjoyed by states; and second, the reprisals they face are both less likely and less severe.

Compared to states, the capabilities of non-state actors are relatively uncertain. Though many groups will threaten aggression, few are as capable as they suggest. Consequently, weak actors must use costly signals to establish their credibility and resolve. For terrorists, these “costly signals” include suicide bombings and assassinations. For hackers, they are cyber operations. It would therefore be foolish for these groups to conduct cyber operations covertly. Absent acknowledgment, the responsible organization misses out on any reputational gains from the attack. Indeed, evidence from terrorist groups suggests that it is far more common for multiple organizations to competitively claim credit for a single attack, owing to the inherent value of being seen as the responsible party. Thus, compared to states, non-state actors should be the most likely to claim their operations.

Another reason driving politically motivated non-state actors to claim their cyber attacks is that they face a low probability of reprisal. Because states are visible, stationary targets, they may shirk acknowledgment to avoid retaliation. Non-state actors — often composed of individuals spread across the globe — are a much more elusive adversary.

The 2007 cyber attacks against Estonia illustrate this point. Although the attacks were traced to non-state groups in Russia, authorities refused to cooperate or hand them over, providing de facto immunity to the perpetrators. In other cases, as with attacks by Anonymous, group members are not concentrated in one country but spread across many, complicating efforts to identify and bring them to justice. As a result, non-state actors can claim their attacks and reap their rewards with significantly fewer consequences to dissuade them.

Of course, some non-state actors have weaker incentives to claim their operations than others. Cyber criminals — those motivated by personal or financial gain — are a prime example. Their operations are most successful when they remain invisible, in part because secrecy preserves their methods for future use.


Better understanding the drivers of tactical and strategic behavior in cyberspace is important. Some actors — depending on their objectives, capabilities, and characteristics — face potent incentives to willingly come clean about their complicity in a particular attack. While we ought to be careful not to directly infer intentions from behavior, an actor’s decision to privately or publicly acknowledge sponsorship of an attack may provide crucial information about its motives and identity. In the information-starved domain that is cyberspace, such clues may be all we have to rely on when designing policies and crafting responses.


Michael Poznansky is a PhD candidate at the University of Virginia and a research fellow at the Belfer Center for Science and International Affairs at the Harvard Kennedy School of Government. In Fall 2016, he will join the University of Pittsburgh’s Graduate School of Public and International Affairs as an Assistant Professor of International Affairs and Intelligence Studies.

Dr. Evan Perkoski is a research fellow at the Belfer Center for Science and International Affairs at the Harvard Kennedy School of Government. In Fall 2016, he will join the Sié Chéou-Kang Center for International Security and Diplomacy at the University of Denver’s Josef Korbel School of International Studies as a post-doctoral research fellow.


Photo credit: Simon Lesley