war on the rocks

When Routine Isn’t Enough: Why Military Cyber Commands Need Human Creativity

December 5, 2017

Former Secretary of Defense Ashton Carter recently published a report on the campaign to destroy ISIL. Particularly notable was what Carter said about the “cyber component” (or lack thereof) of the U.S. efforts:

I was largely disappointed in Cyber Command’s effectiveness against ISIS. It never really produced any effective cyber weapons or techniques. When CYBERCOM did produce something useful, the intelligence community tended to delay or try to prevent its use, claiming cyber operations would hinder intelligence collection. This would be understandable if we had been getting a steady stream of actionable intel, but we weren’t. The State Department, for its part, was unable to cut through the thicket of diplomatic issues involved in working through the host of foreign services that constitute the Internet. In short, none of our agencies showed very well in the cyber fight.

The statement sounds alarm bells about the current organizational efforts of U.S. Cyber Command. In fact, the United States is not the only one struggling. A growing number of countries are said to be establishing military cyber commands or equivalent units to develop offensive cyber capabilities, and all seem to be experiencing growing pains stemming from the unique nature and requirements of offensive cyber operations.

Carter’s statement primarily refers to interagency problems, for instance, how the use of militarized cyber operations by CYBERCOM may endanger current or future intelligence collection operations by the NSA. But the problems with successfully carrying out offensive cyber operations are deeper and more complicated. Specifically, military cyber commands require individual creativity, which too often is sacrificed on the altar of organizational routines.

Routines are considered to be the oil that keeps government institutions running. In the academic literature, routines are defined as ‘‘an executable capability for repeated performance in some context that has been learned by an organization in response to selective pressures.” One benefit of routines is that they provide stability, which in turn leads to predictability. In the cyber domain, where information is already considerably uncertain and imprecise, predictability of actions is certainly a welcome asset.

Yet offensive cyber capabilities are inherently based on unpredictability. As the RAND Corporation’s Martin Libicki observes, there is no “forced entry” when it comes to offensive cyber operations. “If someone has gotten into a system from the outside, it is because that someone has persuaded the system to do what its users did not really want done and what its designers believed they had built the system to prevent,” Libicki argues. Thus, to ensure repeated success, one must find different ways to fool a system administrator. Repetition of an established organizational routine is likely to be insufficient when conducting military cyber operations. The command must foster an environment in which operators can depart from routine and nimbly adapt their actions to stay ahead of their adversaries.

More specifically, Jon Lindsay and Erik Gartzke note that “cyber operations alone lack the insurance policy of hard military power, so their success depends on the success of deception.” Deception as a strategy is based on two tactics: dissimulation, or hiding what’s there; and simulation, or showing something that’s not. The cyber weapon Stuxnet, for example, utilized both tactics. Through what is known as a “man-in-the-middle attack,” Stuxnet intercepted and manipulated the input and output signals from the control logic of the nuclear centrifuge system in Natanz, Iran. In this way, it was able to hide its malicious payload (dissimulation) and instead replayed a loop of 21 seconds of older process input signals to the control room, suggesting normal operation to the operators (simulation). To ensure that an offensive cyber attack is successful, the attacker needs to constantly find innovative ways to mislead the enemy — which may mean deviating from routines, or crafting routines that permit individuals to make adjustments at their discretion.
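The replay tactic described above can be sketched in a few lines of code. This is a toy model only — all class and variable names are hypothetical, and the actual Stuxnet implementation operated inside PLC firmware, not Python — but it shows the core deception: record a window of normal sensor readings, then loop them back to the monitoring layer while the real process diverges.

```python
# Toy sketch of a record-and-replay man-in-the-middle, loosely modeled on
# the tactic attributed to Stuxnet. All names here are hypothetical.
from collections import deque
from itertools import cycle

class ReplayMitm:
    """Sits between a process and its monitor: records, then replays."""

    def __init__(self, record_len):
        self.buffer = deque(maxlen=record_len)  # sliding window of readings
        self.replaying = False
        self._loop = None

    def start_replay(self):
        """Switch to looping the recorded window (simulation)."""
        self.replaying = True
        self._loop = cycle(list(self.buffer))

    def to_monitor(self, real_reading):
        """What the control room sees: real data, or the recorded loop."""
        if self.replaying:
            return next(self._loop)  # stale but plausible values
        self.buffer.append(real_reading)  # secretly record while passing through
        return real_reading

# Record 21 "seconds" of normal readings, then replay them while the real
# process value goes far out of bounds. The monitor keeps seeing normal data.
mitm = ReplayMitm(record_len=21)
for t in range(21):
    mitm.to_monitor(1000 + (t % 3))  # normal centrifuge-speed readings

mitm.start_replay()
shown = [mitm.to_monitor(9999) for _ in range(5)]  # real reading is abnormal
print(shown)  # values from the recorded normal window, never 9999
```

The design choice that makes the deception effective is the pass-through recording phase: because the replayed values were genuinely produced by the process under normal operation, they are statistically indistinguishable from live data to an operator watching the console.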

There is no easy resolution of this dilemma. Few of the mechanisms organizations use to encourage creative behavior can be applied to military cyber commands. Instead, what governments can focus on to foster creativity in these organizations is workforce diversification and purpose creation.

First, a common form of encouragement is to reward risk-takers in the organization. Yet military cyber commands need to be risk-averse and cautious. It is essential for “cyber soldiers” to stick to the rules to avoid escalation and possible violation of the laws of armed conflict, just as it is for more traditional soldiers. Despite the need for unpredictable and deceptive responses, military cyber commands cannot simply try things out and see what happens. Indeed, though offensive cyber capabilities are not inherently indiscriminate, without careful design and deployment there is a high potential for severe collateral damage. The Morris Worm of 1988 is an illustrative case in this regard. Robert Morris’s worm “brought the internet to its knees” due to an apparent error in its spreading mechanism. The worm illustrated the potential for butterfly effects in cyberspace: small changes in code can escalate into large-scale crises.

Similarly, military cyber commands will find it more difficult than private companies to grant autonomy to individuals. The underlying management logic for granting personal autonomy was perhaps most famously spelled out (and radically implemented) by Brazilian entrepreneur Ricardo Semler: Let employees decide how to get something done, and they will naturally find the best way to do it. For cyber operations, while outcomes are important, precisely how the job gets done is equally relevant. After all, unlike most conventional capabilities, the modus operandi of one cyber operation may greatly affect the effectiveness of other operations.

This is partially due to what’s known as the “transitory nature” of cyber weapons. Cyber weapons are often described as having “single-use” capabilities. The idea is that once a zero-day vulnerability — that is, a publicly undisclosed vulnerability — has been exploited and becomes known to the public, the weapon loses its utility. Although I’ve argued before that this view lacks nuance — in reality it often takes time before patches are installed and vulnerabilities closed, and only a minority of cyber weapons exploit zero-days — the likelihood of successfully accessing the target system nonetheless declines after initial use. In other words, the use of a zero-day exploit by one operator may complicate efforts for other operators.

So, what can be done? At a minimum, military cyber commands should make sure they attract a diverse group of people. Recruiting for the command only from within government organizations, as for example the Netherlands supposedly does, should be discouraged. Conventional human resource metrics (i.e., the candidate should have a university bachelor’s degree, good grades, coursework in certain areas, etc.) should be reconsidered too.

We have already seen various encouraging initiatives on this front. The U.S. Army recently launched the cyber direct commissioning program, so (qualified) civilians can now directly apply to become officers. Countries like the United Kingdom, the Netherlands, and Estonia are also setting up cyber reserve units to attract civilians with the right skill set. Yet these programs are not yet widely adopted across states, nor do they tend to extend far enough (the responsibilities of reserve officers are often unclear).

Military cyber commands should also make sure they create an inspiring workplace to capitalize on people’s intrinsic motivation. Senior leaders have generally been good at providing a vision for their cyber command; this is normally expressed as a desire to become a world leader in offensive cyber operations (see, for instance, the UK’s cyber security strategy). They are also explicit about their mission. Yet, hardly ever do they provide purpose: how does the command fit into the big picture, and what is the strategic framework being followed? Jim Ellis, the former commander of U.S. Strategic Command, has noted the shortcomings of the cybersecurity discourse, saying the debate is “like the Rio Grande, a mile wide and an inch deep.” A deeper focus on purpose-driven values is needed to motivate people to enter a field like cyber operations.

As more countries look to get into the business of offensive cyber operations, the inherent tension between the requirements of these operations and the regimented tendencies of national security bureaucracies will become ever starker. If governments want to bring together different minds, inspire creativity, and maximize human performance, they need to clearly communicate the value of cyber commands to their people.

 

Max Smeets is a cybersecurity fellow at Stanford University Center for International Security and Cooperation (CISAC). He is also a non-resident cybersecurity policy fellow at New America. You can follow him @SmeetsMWE

Image: Defense Department