How to Defend Against Foreign Influence Campaigns: Lessons From Counter-Terrorism


Two weeks ago, a grand jury in Pennsylvania indicted seven Russian intelligence officers for state-sponsored hacking and influence operations. Both U.S. Attorney General Jeff Sessions and FBI Director Christopher Wray affirmed the gravity of the crime. The same day, Vice President Mike Pence warned that China is launching an “unprecedented” effort to influence public opinion ahead of the 2018 and 2020 elections. From Russia to China to Iran, America’s adversaries are increasingly using influence operations — the organized use of information to intentionally confuse, mislead, or shift public opinion — to achieve their strategic aims.

To most Americans, the recent onslaught of influence operations at home may feel like a novel threat. But the reality is that while the battlefield has changed in important ways, nearly two decades of countering terrorism taught the United States a great deal about how to approach this latest challenge.

From 2010 to 2016, I worked as a counter-terrorism analyst supporting special operations forces, with three tours in Afghanistan. I went on to lead an intelligence team at Facebook that focused on counter-terrorism and global security. As such, I’ve had a front-row seat to observe how the government and tech companies dealt with terrorism’s online dimension, and to consider the similarities to today’s state-sponsored disinformation campaigns. Five key lessons stand out: improving technical methods for identifying foreign influence campaign content, encouraging platforms to collaborate, building partnerships between the government and the private sector, devoting the resources necessary to keep the adversary on the back foot, and taking advantage of U.S. allies’ knowledge.

Lesson 1: Hack It

A critical goal in any information battle is rooting out your adversary. In the tech sector, companies like Google, Twitter, and Facebook have employed a combination of methods to identify and address terrorist content. These techniques include automating content identification through machine learning, mitigating the amplification of nefarious content, and reducing anonymity.

Tech firms seeking to root out terrorism on their platforms have trained a variety of “classifiers” to help identify content that violates their terms of service. Companies have also experimented with natural language understanding to help machines “understand” this content and categorize it as terrorist propaganda or not. On Twitter alone, the company’s algorithms flagged all but 7 percent of the accounts suspended for promoting terrorism in late 2017. And of the 93 percent flagged by machines first, 74 percent were taken down before sending a single tweet. (There are no widely accepted, publicly available figures on how many violating accounts escaped both Twitter’s internal tools and human review.) Additionally, companies like Microsoft and Facebook bank the text, phrases, images, and videos they characterize as terrorist propaganda and use this data to train their software to recognize similar content before it can proliferate. Finally, companies have reduced anonymity and improved attribution by tightening verification processes (e.g., scrutinizing accounts that show signs of automation rather than human control) to combat the automated spread of malign messaging.
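To make the classifier idea concrete, the stripped-down sketch below trains a simple text model on a hypothetical bank of labeled posts and scores new content for human review. The placeholder phrases, threshold, and library choice are illustrative assumptions only, not any company’s actual data or production pipeline, which would involve far larger multilingual datasets alongside image and video matching.

```python
# Minimal sketch (not any platform's production system): train a text
# classifier on a small, invented bank of labeled posts, then score new
# posts and route likely violations to human reviewers.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = violates policy, 0 = benign.
train_texts = [
    "placeholder propaganda phrase one",
    "placeholder propaganda phrase two recruitment",
    "photos from the family picnic this weekend",
    "does anyone know a good pizza place downtown",
]
train_labels = [1, 1, 0, 0]

classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
classifier.fit(train_texts, train_labels)

# Score new posts; anything above the (arbitrary) threshold is queued for
# human review rather than removed automatically.
new_posts = [
    "another placeholder propaganda phrase",
    "picnic photos are up, pizza place next weekend",
]
scores = classifier.predict_proba(new_posts)[:, 1]
for post, score in zip(new_posts, scores):
    label = "flag for human review" if score > 0.5 else "ok"
    print(f"{score:.2f}  {label}: {post}")
```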

These techniques can be applied directly to policing influence operations on social media platforms. Social media companies should identify the methods that have most effectively made their platforms “hostile” to terrorist content. The three approaches highlighted above — identifying content through machine learning, mitigating the amplification of nefarious content, and reducing anonymity — are good starting points. For example, a bank of commonly recycled disinformation terms and phrases can feed automated flagging of this content for human review, as the sketch below illustrates. Algorithms akin to those that detect potential terrorist propaganda can instead be trained to detect bots and track trolls, helping to cut off the amplifiers of state-led disinformation campaigns.
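A minimal sketch of that phrase-bank and bot-detection idea, under stated assumptions, might look like the following. The recycled phrases, account fields, and thresholds are invented for illustration; a real system would rely on fuzzy matching, language models, and far richer behavioral signals.

```python
# Illustrative sketch only: match posts against a bank of recycled
# disinformation phrases and apply a crude automation heuristic.
from dataclasses import dataclass

# Invented placeholder phrases standing in for recycled campaign narratives.
PHRASE_BANK = {
    "example recycled narrative one",
    "example recycled narrative two",
}

@dataclass
class Account:
    handle: str
    posts_last_hour: int
    account_age_days: int

def reuses_known_phrase(text: str) -> bool:
    """Return True if the post reuses a phrase from the bank."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in PHRASE_BANK)

def looks_automated(account: Account) -> bool:
    """Crude heuristic: very new accounts posting at machine-like rates."""
    return account.posts_last_hour > 60 and account.account_age_days < 7

post = "Breaking: example recycled narrative one is spreading again"
author = Account(handle="news_bot_4821", posts_last_hour=120, account_age_days=2)

if reuses_known_phrase(post) or looks_automated(author):
    print(f"queue for human review: @{author.handle}")
```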

Already, Facebook and Google are implementing practices along these lines, like de-ranking content rated false by third-party fact checkers and recalibrating search algorithms. Twitter’s suspension of 70 million accounts in May and June also signals a commitment to getting this right. However, there is much more to be done. Tech companies should make “hacking” the disinformation problem a genuine priority by directing a percentage of engineering capacity to automating the identification of state-sponsored influence campaigns. This can be incorporated into existing traditions, like a disinformation-themed Facebook “Hackathon,” and will help counter malicious foreign actors seeking to scale their operations using emerging technology.

Lesson 2: Make Companies Work Together

Industry cooperation has been essential to the counter-terrorism fight. For instance, the hash-sharing consortium introduced in 2016 — between Facebook, YouTube, Twitter, Microsoft, and other companies — created a shared industry database to identify potential terrorist content that violated individual companies’ policies. It flagged this content for partner firms before it was posted to their platforms. This partnership grew into the more formal Global Internet Forum to Counter Terrorism as smaller tech companies and government entities bought in to its efficiency. The same approach can be used to counter propaganda and disinformation proliferation.
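In rough outline, a shared hash database works as sketched below. This toy version uses an exact SHA-256 digest of the content bytes, which is an assumption made for simplicity; a production system would need hashing robust to re-encoding and cropping, and the content and workflow here are invented for illustration.

```python
# Illustrative sketch of a shared hash database between platforms.
import hashlib

def fingerprint(content: bytes) -> str:
    """Exact-match digest of a piece of content (simplified placeholder)."""
    return hashlib.sha256(content).hexdigest()

# One platform identifies a violating item and contributes its hash.
shared_database = set()
removed_item = b"bytes of a video one platform has already removed"
shared_database.add(fingerprint(removed_item))

# Another platform checks a new upload against the shared database before
# the content goes live on its own service.
new_upload = b"bytes of a video a user is trying to post elsewhere"
if fingerprint(new_upload) in shared_database:
    print("known violating content: hold for review before publishing")
else:
    print("no match in shared database")
```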

Since different tech companies often face similar challenges in this new “battlespace,” knowledge of one another’s community guidelines can help streamline the disinformation fight. Just like industry cooperation helped to catch terrorist propaganda posts on YouTube before they were uploaded to Twitter, it can help prevent false state-sponsored narratives from spreading between platforms.

Google, Facebook, and Twitter’s September pledge to work together to fight “fake news” in Europe can act as a test case for expanding this collaboration globally. And Facebook Chief Operating Officer Sheryl Sandberg recently declared that the platform has already organized industry meetings on election protection, another positive step. In the meantime, tech companies should formalize these partnerships by creating and funding an enduring disinformation-related consortium between willing participants, modeled after the Global Internet Forum to Counter Terrorism. Not only could this speed up the detection and mitigation of foreign influence campaigns, it could also be a move toward establishing self-imposed industry standards on disrupting disinformation.

Lesson 3: Sharing Is Vital

In the years following 9/11, information sharing — among government agencies and between the government and the private sector — was essential, and demanded effective collaboration at the ground level. Following the philosophy that intelligence drives operations, Special Operations Command units and other community representatives were co-located in Joint Interagency Task Forces around the world for limited periods of time. As subject-matter experts, we were in the same room as the commanders of the teams that “operationalized” our assessments. This direct access between the support and action arms cut out the middleman and saved valuable time. The mix also encouraged innovation, as agency officers informally shared best practices and even threat intelligence for early indications and warning. When we returned to the United States, we brought that institutional knowledge back to our respective organizations.

On the social media side, companies like Facebook integrated former practitioners into intelligence programs and other areas to sharpen responses to terrorist threats, especially in the case of imminent real-world harm. These companies also grew robust law enforcement response apparatuses to liaise with their federal government counterparts in the counter-terrorism realm. This model should be expanded and refined for the disinformation fight.

Creating smaller, more forward-leaning fusion cells that integrate public and private sector analysts can provide the agility needed to counter foreign disinformation campaigns in concert. U.S. intelligence agencies and major tech firms should co-locate their analysts on a voluntary basis to facilitate granular-level information sharing. This would recreate the Joint Interagency Task Force construct to include private sector analysts (since they are today’s “operators” in the digital age), and formalize a public-private information-sharing mechanism. Private threat-intelligence analysts and government all-source analysts should be the initial focus of this effort. Voluntary, one-for-one analyst exchanges between the government and the tech sector, for predetermined time periods, are also an option. In fact, the FBI is already laying the groundwork for a supporting, government-led anti-disinformation apparatus with its Foreign Influence Task Force. The goal is to generate momentum to respond quickly to the threat of foreign influence operations, and to create an enduring dialogue for sharing threat-oriented tactics and techniques at appropriate levels of classification.

Lesson 4: Keep the Pressure On

A mentality of “taking the fight to the enemy” to prevent terrorism at home served as a general framework for post-9/11 counter-terrorism. The application of this concept to the cyber realm, while a work in progress, shows promise. For instance, Operation Glowing Symphony temporarily disrupted Islamic State propagandists on a variety of platforms through content removal, setting the stage for more robust digital disruption efforts. Each operation was an opportunity to improve on the last and keep the enemy busy playing defense.

Tech companies took notes and leaned into the digital counter-terrorism fight as well. They kept up the pressure on their side by hiring thousands of content moderators, recruiting former counter-terrorism practitioners, and continuing to suspend accounts that violated their counter-terrorism policies. For instance, Facebook employed 7,500 content moderators, and Twitter suspended 1.2 million accounts for counter-terrorism policy violations between 2015 and 2018. Facebook even invested heavily in counterspeech efforts to respond directly to propaganda with alternative messages, and Google’s Jigsaw created a program to actively “redirect” potential extremists away from terrorist propaganda. The United States — both its public and private sectors — can apply this philosophy of sustained pressure to create a non-permissive operating environment for digital disinformation as well.

To combat foreign influence operations, policymakers should embrace the concept of tactical friction, defined as “continuous engagement” that “imposes strategic costs on our adversaries, compelling them to shift resources to defense and reduce attacks.” This type of persistent presence is evident in the recent Presidential Policy Directive-20 revisions and the new Cyber Strategy, which outline potential ways to impose costs on would-be sponsors of foreign influence operations. Efforts with a lighter touch, like government cooperation with internet service providers to rethink internet access for attackers, are easy ways to start. On the private side, tech companies can follow suit by aggressively prioritizing and resourcing their efforts against influence operations, just as they did with counter-terrorism.

Lesson 5: Get a Little Help from Your Friends

In both the counter-terrorism fight and the fight against foreign influence operations, the United States possesses a largely untapped advantage: its democratic allies. At peak levels, the International Security Assistance Force in Afghanistan included troops from 51 partner nations. From 2010 to 2014, I worked with analysts and operators under British, German, and Australian commands on different deployments to Afghanistan, sharing targets and hunting recidivists. Today, 38 nations other than the United States supply troops to Operation Resolute Support in Afghanistan to counter the terrorist threat, and the NATO Alliance is an official member of the Global Coalition to Defeat ISIS.

A similar concept of burden sharing should apply in the information domain, particularly given U.S. allies’ long experience as victims of disinformation campaigns (Estonia in 2007, the United Kingdom in 2016, France in 2017, and Germany in 2017). And, given the extensive investigation of the Internet Research Agency’s “Translator Project” and Russian hacking group CyberBerkut, we have an idea of how tomorrow’s enemy will operate. America’s French allies are already codifying critical lessons from their 2017 presidential elections and the Macron leaks, and the Germans are expanding their legal framework to include offensive cyber measures that may encompass fighting back against influence campaigns. The know-how of America’s friends can lay the foundation for its own efforts.

As such, the United States should invest in convening democratic allies to exchange best practices from their own experiences with disinformation, especially given those countries’ forays into loosening restrictions on offensive cyber techniques.

Of course, state-sponsored influence operations aren’t identical to violent extremism. Disinformation plays heavily on the susceptibility of human beings to manipulation and subversion, especially within an ideologically divided public, thereby casting a wider net than most terrorist propaganda. Further, tech companies, governments, and allies are only part of the comprehensive response needed. Still, policymakers and private sector leaders would be wise to take a hard look at what the United States has learned over the past two decades as they confront the intensifying problem of disinformation campaigns.

 

Kara Frederick is a researcher for the Technology and National Security Program at the Center for a New American Security. She was an intelligence analyst at Facebook and the Department of Defense, with three deployments to Afghanistan in support of special operations forces. She has an M.A. in War Studies from King’s College London and a B.A. from the University of Virginia.

Image: Flickr