System Shutdown: Does the Government Shutdown Help Us Understand Cyberwar?


For twenty years, we have heard much about the threat of a “cyber Pearl Harbor.” The term usually refers to a catastrophic computer attack that cripples federal government capabilities and/or significantly disables some local system of interest. Now, with substantial parts of the US government offline due to the government shutdown and a debt ceiling default looming, we have a natural experiment that we might utilize to test the theory of a cataclysmic computer-induced system shutdown.

The specter of a cyber Pearl Harbor has haunted military and security planners as long as men have dreamed of killing each other with QWERTY keyboards. On the other side of the argument, dissenting voices such as Sean Lawson and Thomas Rid have argued that the vision of such a cataclysm is either very unlikely or may not even remotely resemble the apocalypse envisioned by cyberwar prophets.

Evaluating the likelihood and nature of potential cyber catastrophe, however, is inherently difficult for reasons that are largely unique to this area of inquiry. As Purdue University professor Sam Liles argues, the technology itself is evolving in ways that render it difficult to take understandings of one “generation” of cyber weapons and extrapolate them forward into future projections. Cyberspace on a basic level may issue from fundamental physical laws, but at the same time, it is also a built environment very much shaped by changes in human society and technology.

Another possibility to consider is that, as the social complexity and underlying composition of cyberspace shifts dynamically, the environment itself will generate qualitatively different regimes of collective behavior – especially as algorithms increasingly come to structure (and often dominate) human choices. One does not have to reach for visions of Singularity or Terminator 2 to come to the conclusion that a large mass of artificial intelligence agents all interacting with each other may produce unanticipated – and perhaps horrific – behavior.

For instance, scientists claim that high-frequency-trading stock market robots, which can trade far beyond human response times, are giving rise to a qualitatively new “all-machine” phase of organization that is distinguished by extreme, unanticipated “black swan events.” In plain English, you should be less worried about Skynet slaughtering humanity or being hooked into the Matrix than about the possibility that killer robots might bankrupt you. And as far as cyberwar is concerned, this shift will make it even more difficult to accurately argue about whether and how a ‘cyber Pearl Harbor’ might occur.

More philosophically, we might also question whether standard ideas of human social life and risk are robust enough to justify inductive extrapolation to the future from large quantities of past experiences. The fundamental nature of war may not change, but phase transitions in political, economic, and social systems do transform its conduct. Often, these transitions are only observable in retrospect – the participants themselves unfortunately are often cursed to struggle onwards in ignorance.

This is where the idea of natural experiments about system shutdown becomes an interesting line of inquiry. When it comes to the question of cyber Pearl Harbor(s), it seems that the supposed strategic effects of such a barrage might be mirrored by the current situation in Washington. In some ways, the possibility of such experimentation isn’t specific to the particular shutdown scenario being experienced now. For instance, while there have been no attacks against the greater DC area’s power grid, antiquated infrastructure has left countless individuals without power during recent natural disasters. Fears of a cyber attack leading to persistent violent social disorder also have a Washington equivalent. Many areas of DC are plagued by violent crime, food insecurity, and nonfunctioning public services. And why turn to fictional scenarios about a cyber attack on the Washington-area transportation system when most experienced Beltway residents already assume that WMATA is a textbook demonstration of a random process?

Indeed, cyber horror stories about attacks that blind, cripple, and disable government processes can hardly be more frightening than the disturbing possibility that we may now be on the cusp of a long-posited failure in the presidentialist system of governance. Why dream up scenarios of catastrophic Asian or Middle Eastern cyber aggressors when legislative dysfunction has taken down everything from the National Institutes of Health to segments of the military-industrial complex? Even the Amber Alert system’s website – which alerts Americans to the disappearance of young children – went offline late Sunday morning.

Professor Liles hypothesized on Twitter that the shutdown tests the relevance and meaning of federalism to the nation, and also makes an interesting natural experiment for disaster scenarios. Therefore, it is imperative that we carefully collect data – on national, state-wide, and local scales – about the effects of this calamity in order to better understand the basic plausibility of computer catastrophe. Perhaps we may see, as Liles mused, that the weak link in cyber catastrophe theory is the notion that optimal system-wide federalism is what prevents us from reverting to a Hobbesian state of nature. Or we might find that the variable of interest is not strength of computer attack but the degree to which the American periphery is bound to the political center. Who knows?

And it is also true, as per the previous paragraphs on theory and prediction, that the cyber Pearl Harbor scenarios we evaluate may not even be the right ones. Trapped by the prism of nuclear war, assumptions about strategic bombing, or our own Weberian assumptions about the nature of government, we may be attributing to a theoretical cyber disaster effects that are in fact out of sync with the nature of the technology and the underlying political and economic assumptions that might structure its current or future use.

Still, in the absence of the ability to test mass cyber aggression in laboratory conditions, we will have to take such imperfect opportunities as we can get them. No simulation, however realistic, will quiet fears of cyber apocalypse. Perhaps we will only be sure about foreign enemies’ capabilities to digitally destroy our way of life once we finish observing our own elected officials do so through more old-fashioned methods.

Adam Elkus is a PhD student in Computational Social Science at George Mason University and a columnist at War on the Rocks. He has published articles on defense, international security, and technology at CTOVision, The Atlantic, the West Point Combating Terrorism Center’s Sentinel, and Foreign Policy.