Beyond the Cyber Leviathan: White Hats and U.S. Cyber Defense

Nina Kollars

When the WannaCry ransomware created a global hospital crisis in 2017, locking emergency rooms and medical centers out of their systems, it was Marcus Hutchins, a British security researcher known as @MalwareTech, who discovered and registered the kill-switch domain that halted the malware’s spread. Weeks later, when NotPetya, a destructive relative of the Petya malware that spread using the same exploit as WannaCry, again threatened global systems, Amit Serper of Cybereason (@0xAmit) discovered and published a workaround. And the Dyn attack, a distributed denial-of-service assault powered by the Mirai botnet that disrupted internet traffic across Europe and America’s Eastern seaboard, was built on malware first analyzed by the research group MalwareMustDie.
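The kill-switch mechanism itself was simple: before doing any damage, the malware checked whether a hardcoded, unregistered domain was reachable, and stood down if it was, which is why registering that domain stopped the spread. Below is a minimal Python sketch of that logic; the domain name, port, and control flow are placeholder assumptions of mine, not the actual WannaCry code.

```python
import socket

# Illustrative sketch only: the WannaCry binary checked a specific hardcoded
# domain before encrypting anything and exited if the domain was reachable.
# The domain below is a placeholder, not the real kill-switch address.
KILL_SWITCH_DOMAIN = "example-killswitch-domain.invalid"

def kill_switch_reachable(domain: str, port: int = 80, timeout: float = 5.0) -> bool:
    """Return True if the kill-switch domain resolves and accepts a connection."""
    try:
        with socket.create_connection((domain, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    if kill_switch_reachable(KILL_SWITCH_DOMAIN):
        # Once Hutchins registered the real domain, infected machines hit this
        # branch and the worm stood down instead of encrypting files.
        print("Kill switch live: standing down.")
    else:
        print("Kill switch unreachable: the original worm would have proceeded.")
```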

Who creates cyber security? Who creates the systems, tools, and technical knowledge necessary to defend U.S. civilians and their networks? As it turns out, much of the world’s front-line knowledge about vulnerabilities, threat patterns, and malicious code is derived from the efforts of the cyber defender community. This is a global association of security firms, independent researchers, and not-for-profit organizations. They are the foundation of cyber defense in the United States and much of the rest of the world.

To be sure, in the United States government, the Department of Homeland Security, the National Security Agency, and U.S. Cyber Command all have jurisdiction over cyber defense. But those agencies’ scope is narrowly applied to .mil, .gov, or critical infrastructure. Insofar as the government does provide defense domestically, it does so through policy and law — neither of which produces technical knowledge or generates a full picture of emerging threats. In fact, both policy and law derive from the existing ecosystem of practices and knowledge created by the community. For the rest of the American cyber security landscape, Silicon Valley has long since taken over where government control and funding once prevailed. Today, U.S. cyber defense is largely a decentralized, polycentric (with multiple sources of power), nonstate affair. The country has no cyber Leviathan — no center, no top, no central governing “power able to over-awe them all.”

And it should stay that way. The attack surface is too large, the rate of technological change is too fast, and the threat models — the weighing of threats and vulnerabilities and the prioritizing of mitigations — are too diffuse and too varied from system to system. There is not, and cannot be, a government-centered solution to the vulnerabilities created by national and global connectivity. What is needed — at least in a democracy, and in any global system that values some version of a free and open internet — is a thriving ecosystem of white hats. If Sen. Ben Sasse’s Cyber Solarium Commission wants to get closer to a realistic approach to cyber security, it would do well to better understand the cyber defender community.

Meet the White Hat Hackers

The somewhat clumsy term for these defenders is “white hat hackers.” To hack is to have hands-on knowledge of how a system works and how it can be manipulated. White hats work to ensure the confidentiality, integrity, and availability of data within and between the 10 to 20 billion or so internet-connected devices currently in use. However, in the mainstream media, hacking is used more colloquially to refer to the disrupters — the black hats.

Black hats (and grey hats, who mostly sell their exploits to governments) disrupt and manipulate data for fun, profit, and notoriety. Everyone knows about the black hat threat to national security (both as lurking thieves and as proxies for state-sponsored shenanigans). And even casual followers of cyber security news are familiar with Chinese hacker groups and the Russian espionage and hacking collective Fancy Bear.

Conversely, the work of the white hat defender community goes largely unrecognized in the discourse surrounding national security and cyber strategy. The community is not new, but it has grown. Bottom-up, collaborative research on cyber attacks dates back to 1988, when the first major internet worm, the Morris worm, began attacking systems across the United States. It was the public’s first serious warning that systems that connect also infect. The first to notice the attack — at MIT, Berkeley, and Purdue — were the system administrators of the universities that relied on the network for research and communication. If Morris raised the attention of the university system, it was the Conficker worm two decades later that revealed the shape and capacity of the cyber defender community. In late 2008, Conficker propagated itself onto millions of personal computers by exploiting vulnerabilities in the Windows operating system, forming one of the earliest virulent examples of a global botnet. Those responding to it eventually became known as the Conficker Cabal, an ad hoc collective of security researchers, university research groups, computer emergency response teams, and internet regulation organizations.

The white hat community is global in footprint and obsessed with information systems. Anything that parses information on networks is a target. The community leverages its technical expertise to track, pick at, tinker with, brute-force, and fuzz whatever it can get its hands on. That might be a piece of malware extracted from an office email, the PlayStation 4’s Orbis operating system, or the chintzy $9 webcam you just attached to your home security system whose default password is “password.” If white hats have a formal cyber security day job, their official titles are likely pretty nondescript — CISO (chief information security officer), SysAdmin (system administrator), or just plain old security researcher. But underneath these titles are more intriguing roles — hunter, malware reverser, developer, pen tester, or even “Hacker Princess.”
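To make the brute-force point concrete, here is a minimal, hypothetical sketch of the kind of check a researcher might run against a device on their own test bench: does the web interface still accept a factory-default login? The IP address, endpoint, response handling, and credential list are illustrative assumptions, not details of any particular product.

```python
import requests  # third-party HTTP library

# Hypothetical device on the researcher's own test bench.
DEVICE_LOGIN_URL = "http://192.168.1.50/login"
DEFAULT_CREDS = [("admin", "admin"), ("admin", "password"), ("root", "root")]

def find_default_login(url: str) -> tuple[str, str] | None:
    """Try a short list of factory-default credentials against a login form."""
    for user, pwd in DEFAULT_CREDS:
        resp = requests.post(url, data={"username": user, "password": pwd}, timeout=5)
        # Assumes success is signaled by HTTP 200 plus a session cookie;
        # real devices vary and need per-model handling.
        if resp.ok and "session" in resp.cookies:
            return user, pwd
    return None

if __name__ == "__main__":
    hit = find_default_login(DEVICE_LOGIN_URL)
    print(f"Default credentials accepted: {hit}" if hit else "No default credentials accepted.")
```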

White hat research is often part of their formal jobs — as with the threat researchers in Palo Alto Networks’ Unit 42 or the authors of FireEye’s Threat Research blog. But just as often, security researchers like Nex (Claudio Guarnieri, @botherder), Michael Gillespie (@demonslay335), and Troy Mursch (@bad_packets) research vulnerabilities and threats on their own time. There are even digital and human rights groups, like the Electronic Frontier Foundation and Citizen Lab, that conduct threat research and release their findings to the public.

At its worst, the community creates headaches for technology companies by disclosing vulnerabilities the companies would rather not have to fix, or by trolling government espionage agencies and hate groups in the spirit of hacktivism. At its best, the community operates as a polycentric network of overlapping defenders who keep watch over the resilience and integrity of information systems great and small. And this matters for defense writ large: While .mil and .gov sites may be targets, they are targets in a broader network. They do not have to be compromised directly. We live in a time in which broad-based attacks like the WannaCry ransomware shut down hospitals across the United Kingdom and infected more than 200,000 computers in some 150 countries — including utilities as well as government offices. At the same time, we exist in a world in which targeted attacks on political leaders through their private email accounts are commonplace. All of these happen on the same interconnected system, a system that cannot be guarded by any single entity, a system that grows with every new device, line of code, and network brought online. No single government, or even a global consortium of governments, can reasonably conduct the kinds of research and develop the best practices needed to keep users safe.

Gain, Glory, and Diffidence

The white hat community is motivated largely by gain, glory, and diffidence (a term Thomas Hobbes, author of Leviathan, used to mean distrust or uncertainty) — that is, by the high salaries of the tech industry, the prestige of public problem-solving, and a general distrust of the security of machines.

There is money to be made in information and cyber security. Whether they work at a security firm like Cylance, in private business, or as an independent consultant, many community members work on the white side because the money is good. Yes, criminality pays in hacking, but it is much less attractive when you want to have a normal life, live in a nice house, and raise a family.

For community members who don’t want to join a firm, or don’t have enough of a reputation to draw the big bucks, there’s freelance work in vulnerability hunting. Just as there are black markets on the dark net where malicious actors buy and sell hacking services, there are vulnerability markets, known as “bug bounty” programs, that connect hackers with companies that need their expertise. Casey Ellis’ Bugcrowd, as an example, is a collective of over 8,000 professional hackers around the world. The bounties can be as low as $100 or as high as $50,000. Given that $100 is roughly a third of the monthly cost of living for a person in Islamabad, and slightly less than the equivalent in Delhi, it’s no surprise that many of Bugcrowd’s herd come from places like Pakistan and India.

Hacking is a mindset. Playing the role of white hat means harboring an innate distrust of machines, networks, and (frankly) the users who operate them. It is to be creatively curious about subverting or manipulating the machines that transmit our data. Hacking, particularly for security researchers, is about testing the fallibility of a system’s construction. “Hackademics” like Dartmouth-based Sergey Bratus argue that the hacker mindset is a unique approach to security. Hacking moves beyond thinking about passwords and firewalls and toward thinking about how a system’s own design can be used against it. It picks at the seams and unanticipated gaps in a system’s construction.
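As a toy illustration of that idea (my own example, not one Bratus uses): a parser that blindly trusts a length field supplied by the sender can be coaxed into returning more data than its designer intended, turning the format’s own structure into the flaw.

```python
import struct

# Hypothetical wire format assumed for illustration: a 4-byte big-endian
# length prefix followed by the payload, with unrelated data stored after it.

def naive_parse(buf: bytes) -> bytes:
    """Return the payload, trusting the sender-supplied length field."""
    (length,) = struct.unpack_from(">I", buf, 0)
    return buf[4 : 4 + length]  # no check against the record's real boundary

record = struct.pack(">I", 5) + b"hello" + b"ADJACENT-SECRET"

honest = naive_parse(record)                               # b'hello'
crafted = naive_parse(struct.pack(">I", 20) + record[4:])  # spills past the payload

print(honest)
print(crafted)  # includes bytes the designer never meant to expose
```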

Tension Between White Hats, Companies, and Countries

This organic, decentralized knowledge-production and defense system creates challenges for policymakers and business leaders. Because the community’s primary purpose is the protection of information systems and users, it can find itself working in direct confrontation with countries and companies: countries, because they want to preserve their access to vulnerabilities for use against other countries, and companies, because it costs money to fix those vulnerabilities.

There is no small amount of animosity in the white hat community toward governments that spy on their citizens. It isn’t just Edward Snowden’s whistleblowing on the National Security Agency’s data collection programs (the community continues to debate the effects, good or ill, of that release), though Snowden remains a cautionary tale of what happens to hackers who blow the whistle on companies and governments. There are also laws that make it more difficult, or at least legally ambiguous, for white hats to conduct research. The Computer Fraud and Abuse Act and the Digital Millennium Copyright Act both make it possible to prosecute white hats for attempting to make users safer. In fact, hackers are regularly threatened with these laws, including Princeton University’s Ed Felten, whose research team was intimidated by companies invoking the Digital Millennium Copyright Act.

The conflict between businesses, nations, and the community often comes to the forefront at one of the numerous hacking “cons” (conventions) across the United States. One of the largest and oldest U.S. cons is DefCon — a four-day event that showcases the community’s most cutting-edge hacks and research. It also draws the attention of federal law enforcement and of businesses looking to keep their vulnerabilities quiet. This year, much ado was made about DefCon’s voting village (a hands-on event where attendees can try their hand at breaking into election systems) when an 11-year-old girl hacked into a replica of a state election results website in under ten minutes. In theory, the nation is safer when these vulnerabilities are revealed — even by an 11-year-old. But shortly before the start of the conference, the voting machine vendor ES&S, whose equipment was among the systems tested in the village, released a statement denying the validity of the voting village as a demonstration of real-world vulnerabilities, reaffirming its rights to control its software under the Digital Millennium Copyright Act, and accusing DefCon of making it easier for U.S. adversaries to hack elections.

As cyber security experts began joining the debate, members of the Senate Select Committee on Intelligence fired back in defense of DefCon’s village, stating, “Currently, there are significant barriers that prevent states from working with independent, qualified, good faith researchers to conduct cybersecurity testing on election systems. … In addition, legal ambiguities about contracts and software licensing chill this valuable practice.” The incident highlighted the sometimes-tense interplay between U.S. policy, private companies, and the white hat hackers who hope to help both but are answerable to neither.

The process of recognizing the value of independent research and white hats has been a slow one. Since 2017, the Department of Justice and the Federal Trade Commission have encouraged online firms to implement vulnerability disclosure policies — clear statements of how to report a vulnerability and how that report will be handled within the firm. Exemptions to the Digital Millennium Copyright Act have been carved out to make good-faith security research less fraught with lawsuit risk, but there is more work to be done. Additional exemptions — for example, covering research on internet-enabled vehicle systems, smartwatches, and the Amazon Echo — are currently up for consideration, with a final ruling expected in October of this year. The United States should move toward making these laws clearer and safer for research — particularly for independent researchers and nonprofit research teams making their way up the ranks of the community. Security firms like Palo Alto Networks can afford a legal defense. But a surefire way to harm the community’s autonomy is to threaten rising stars who are less well protected.

A Glimpse of the Leviathan

Of course, there is more than one way to leverage a country’s native hacking talent. Largely self-directed research is one approach. Alternatively, a country can treat its cyber security experts like a military force. This appears to be the direction China is moving in. The Chinese system leverages its hacking community either as an offensive tool against other states or as a means to clamp down on domestic dissent. In doing so, China uses its cyber talent to increase state control, whether through the state-sponsored attack group APT16’s spear-phishing of Taiwanese media or APT17’s espionage attempts against U.S. government and military systems. There isn’t so much a white hat ecosystem in China as a patriotic one, used offensively. William Howlett’s thesis reveals a collective of freelancers, independents, and state-sponsored agents who make up the core of China’s hacker base, and his work makes clear that this group largely serves the regime’s interests in expanding its control over the digital domain domestically and internationally. China’s cyber security experts are effectively a military or police hacking force. If there is a leading example of a country attempting to create its own cyber leviathan, it is China.

A Benign Leviathan?

How should the United States think about deploying its own hacker community? There is a vast difference between leveraging the community’s capacity for research and knowledge and positioning it as a defense force that responds to attacks on critical infrastructure. Japan and South Korea have both taken steps toward civil cyber militias. The notion is that a country could create some sort of deputized system, a call to arms in case of an all-out, coordinated cyber assault on the United States.

In my view, there should be some such emergency response group — a cyber national guard, as Monica Ruiz wrote about in War on the Rocks in January. Such a group, whether made up of deputized civilians or housed within police and military organizations, would be charged with responding to immediate critical attacks. But that shouldn’t be the work of the whole community. That isn’t its power or its contribution to security. The community is a knowledge creation and diffusion system and should not be thought of as a force. Recruiting white hats into a decentralized response force for defending the nation would make an already noisy and ambiguous internet environment even noisier, filling it with scattered counterattacks and exploits launched in the name of fighting back. The community’s strengths lie neither in its unity nor in its capacity to coordinate through hierarchy. Moreover, the majority of attacks that weaken information systems have nothing to do with geopolitics. Governments need to start thinking about defense as systemic health and resilience, not simply military conflict.

Should the United States, or other actors, at least try to centralize the white hats’ diverse research agendas? Perhaps, but only with extreme caution. Even a well-intentioned attempt to centralize the community — particularly within a heavily bureaucratized and lethargic system like the U.S. government — would not only slow the pace of research but could also introduce new weaknesses. There is a risk that cyber defenders would start to tailor their research to what they think the government wants to hear, rather than following their curiosity organically. As tempting as it might be to formally harness the energies of the community for national defense, that approach weakens those defenses. Given the pace at which the networked technological landscape changes, it makes more sense to have thousands of lines of research distributed across thousands of systems, finding and fixing vulnerabilities, than to focus solely on trying to predict where attacks might emerge. Espionage, and countering adversaries’ efforts in their own networks within well-defined parameters of attack and defense, can be left to the federal agencies.

The value of the community is, quite simply, its autonomy. Support for truly independent security analysis should be a cornerstone of any cyber security strategy. The U.S. white hat ecosystem is bottom-up and emergent rather than top-down and directed, and the effect is more secure devices and networks across a broader portion of the attack surface. When attackers find it too difficult to break in through the Central Intelligence Agency’s front door, they go after personnel and their devices. When a country like North Korea suffers economic hardship, its state-sponsored attackers divert their energies to stealing cryptocurrency wallets and deploying ransomware. Harnessing or directing is not the answer to these diverse threats. The better long-term returns are likely to come from creating systems that facilitate more independent research and from reducing the legal ambiguities that chill the research environment. That is a worthwhile endeavor for Congress to consider in its cyber security strategy going forward.

There are, of course, complications surrounding a fully autonomous research community — particularly one tasked with securing networks and devices. First, the community is powerful not just because of its knowledge, but because of its capacity to create and destroy systems. There is a thin line between being a knowledge creator or defender who hacks “ethically” and “going black” — particularly for independent researchers just finding their way in the field. The temptation to go grey or black certainly exists. As a first step toward a national system that supports the growth of the community, policymakers should think about how to push hacking talent in the right direction before young hackers commit their first felonies.

Second, as policymakers and lawmakers continue to explore how to incorporate the work of white hats into securing the nation, sharing vulnerabilities and exploits between public and private entities becomes more complex. There is harm in revealing too much about mechanisms of attack, even in the name of better defense. Working out how and when to share is at the core of the push for vulnerability disclosure policies for private firms, but more work needs to be done in this area. A truly autonomous, highly technical cyber defender community needs to be able to debate and develop new practices in a way that is transparent and understandable to the governments that foster its growth. Governments, for their part, should want input on those debates. No formal system for sharing this information currently exists, but it should.

Securing the Future

As research begins in earnest with the staffing of Sasse’s Cyber Solarium Commission, it is my hope that lawmakers will make a sincere effort to consider the power of the white hat community and what it brings to the table.

The conversation about cyber defense should be far broader than the National Security Agency, Cyber Command, and the Department of Homeland Security. First, because those agencies are not the full picture of how defense is conducted across the nation’s networks. Second, because cyber security efforts at the government level derive much of their knowledge and practice from the community. Finally, and most importantly, because the informal networked system that already exists can be made stronger if cyber policy decision-makers are willing to learn how it works. To take advantage of the community’s potential for increased cyber defense, the government should fund more independent research hubs and encourage the growth of hacking spaces like nullspacelabs in Los Angeles. The discourse should be about growing a healthy network of disaggregated researchers, treating hacking as a path to resilience, and studying the growth of white markets (e.g., bug bounties) and their effect on black markets. Cyber defense should actually be about defending, and that requires understanding the white hat hacker community and the role it plays.

 

 

Nina Kollars is an associate professor in the Strategic and Operational Research Department at the Naval War College. She holds a Ph.D. from Ohio State University and writes frequently on military innovation, technological adaptation, collaborative problem-solving, and cyber security. Kollars tweets pseudonymously but not anonymously as @NianaSavage, aka Kitty Hegemon. The views expressed here are those of the author and do not represent the views or policies of the U.S. Naval War College or the United States Navy.

Image: Thomas Hawk/Flickr