How Democracies Can Defend Against Disinformation

Alina Polyakova and Daniel Fried

In March, a former Russian intelligence officer and his daughter were found unconscious on a park bench in Salisbury, England. The culprit behind the nerve agent attack, according to the British government, all E.U. member states, the United States, and others, was the Russian government. Moscow’s response to this incident followed a now-familiar pattern: dismiss the facts, distract with “alternative narratives,” deceive with outright lies, and dismay by warning of harsh consequences. In the weeks following the attack, Russian state media, officials, and social media accounts pushed out over twenty “false narratives” aimed at muddying reality and sowing confusion.

Moscow’s response confirmed what the United States belatedly realized after revelations of Russian meddling in the U.S. presidential elections and what many Europeans have known for years: Russia has returned to its past practices of hostile propaganda and various forms of active measures — disinformation, political subversion, and corruption — directed against the West. President Vladimir Putin’s Russia seeks to weaken Western governments and transatlantic institutions, discredit democratic and liberal values, and encourage a post-truth world, with the aim of shielding Moscow’s autocracy from liberal influence and facilitating domination of its neighbors.

There is nothing new about the Kremlin’s use of disinformation — the intentional spread of inaccurate information to undermine public confidence, sow confusion, and destabilize democracies. But the new digital tools for spreading disinformation present different challenges. The Kremlin may have developed digital disinformation techniques, but as malicious actors learn from one another, the challenge transcends Russia or any single actor. How should democracies respond to this broad challenge?

Government policy can help expose and limit the damage of foreign disinformation. So, too, can corporate commitments to norms of behavior that align with shared international security objectives. The barriers that democratic societies build will be imperfect, and no single fix, or set of fixes, can eliminate the weaponization of information and the intentional spread of disinformation. But policy tools, changes in practice, and a commitment by governments, social media companies, and civil society to exposing disinformation and building long-term social resilience to it can mitigate the problem.

A democratic response to malign influence must engage the whole of society. This response need not be centralized or coordinated from the top down, and democracies must remain true to their values, including freedom of expression. They should not appoint “arbiters of truth”: in facing authoritarian regimes, we need not become like them in order to fight them. But it is time for democracies to stop admiring the problem and focus on practical solutions, both short- and long-term. Our recent report, Democratic Defense Against Disinformation, proposes concrete ways to do so.

Old Strategy, New Tools

During the Cold War, the Soviet Union deployed information warfare as part of its “active measures” operations: overt and covert techniques aimed at influencing the politics and policies of other countries. For decades, the KGB, the Soviet intelligence agency, tried to plant fabricated stories in the Western media, forged documents alleging misdeeds by Western political leaders, and spread conspiracy theories in order to undermine, discredit, and divide Western societies. In 1976, the KGB launched a disinformation attack against Senator Henry “Scoop” Jackson, a hawkish Democrat competing for his party’s presidential nomination, alleging that Jackson was a member of a gay sex club; the campaign included forged FBI documents and letters that the KGB sent to major U.S. newspapers. In the 1980s, the Soviets invented and spread the story that the CIA had created HIV for use as a biological weapon.

In the digital age, the Kremlin has adapted the old Soviet playbook to take advantage of the speed, anonymity, and low cost of new technologies. The strategic objective remains the same: to weaken and destabilize the West. But today’s Russia is no longer the ideological challenger that the Soviet Union was. The Kremlin is not out to prove that its model is superior to liberal democracy. It is enough to sow doubt in Western institutions and confuse the very notion of truth with a barrage of alternative narratives mixing facts, distortions, and outright fabrications. Social media platforms, automation, and sophisticated micro-targeting tools for reaching specific audiences make this possible. Disinformation campaigns spread through a multitude of overt outlets, such as RT and Sputnik, and covert means, such as bot networks and trolls. These methods amplify and reinforce one another, creating a multilayered and evolving disinformation ecosystem that is particularly difficult to counter.

The challenge is tough, but not unprecedented. We should be mindful of the historical time lag in developing social and legal norms to limit the destructive potential of new media. The printing press, cheap mass-circulation newspapers, radio, and television all spread knowledge, but each also handed new tools to dictators and demagogues. So too with digital media. It takes time to develop the legal, social, and ethical norms that limit the exploitation and manipulation of a new medium, but lessons from the past can help shorten that lag.

A Limited But Important Role for Governments

Governments and multilateral institutions should treat disinformation attacks by foreign actors as a matter of national security. Most European governments, the United States, NATO, and the European Union have established task forces that monitor disinformation attacks, track false narratives, and (in some cases) inform the public and policymakers. The EU’s East StratCom Task Force, NATO’s StratCom Center of Excellence, the U.S. State Department’s Global Engagement Center, and similar bodies established by other national governments (including Lithuania, Latvia, Finland, Estonia, the United Kingdom, Sweden, the Czech Republic, and Germany) have launched counter-propaganda and counter-influence operations. These units’ specific functions should vary based on national context: just as disinformation campaigns target different audiences differently, the responses should be targeted as well. But the common thread should be to coordinate government activities on disinformation, alert policymakers to active disinformation campaigns, and, as needed, liaise with social media companies and the media to provide information and analysis.

The U.S. government should establish an interagency task force that would work with the State Department’s Global Engagement Center (GEC) to design, plan, and coordinate operational activities. As recommended in a January 2018 report by the Senate Foreign Relations Committee, such a unit could be modeled on the National Counterterrorism Center (NCTC). To be effective, the unit would have to include representatives from the U.S. intelligence community, the Department of Defense, the State Department (most notably the GEC), and other relevant agencies. As with the NCTC, the unit would share analysis and intelligence across the U.S. government. Unlike the NCTC, which employs thousands of officials, this counter-disinformation task force should be small enough to remain agile and respond to threats as they emerge. One of its first tasks should be to establish a 24/7 rapid-response capability that allows real-time assessment of emerging disinformation campaigns, along with a set of metrics for judging when and what type of response is necessary.

Second, governments and multilateral institutions should support the independent civil society groups and media, the troll hunters and bot trackers, who have emerged in Europe and the United States. These are the real heroes of the battle against disinformation. Tech-savvy civil society groups, such as Ukraine’s Stopfake.org, the Brussels-based EUDisinfoLab, the U.K.-based Bellingcat, the German Marshall Fund’s Hamilton 68, and the Atlantic Council’s Digital Forensic Research Lab, have shown an ability to identify bot networks and trolls in real time and expose their campaigns. These groups should continue to expose malicious activities and inform governments and social media companies of their findings. Governments should establish funding streams to support these independent efforts, much as Western governments already fund independent civil society groups working to improve transparency and accountability in the democratic process. A plurality of independent voices that can unleash a “firehose of truth” will undermine the “firehose of falsehood” strategy deployed by Russia and other potentially malicious actors such as China and North Korea.

Lastly, governments can apply legislative and regulatory tools to introduce more transparency and accountability into the digital domain while limiting the activities of foreign state propagandists. The European Union and the United States have effectively used financial sanctions to punish Russian entities and individuals for their activities in Ukraine. The Countering America’s Adversaries Through Sanctions Act gives the U.S. administration broad authority to impose sanctions on individuals and entities close to the Kremlin and involved in malicious cyber and influence operations against the United States. On March 15, the U.S. Treasury used these authorities to sanction individuals and organizations affiliated with the St. Petersburg troll factory known as the Internet Research Agency. These authorities should be expanded, via executive order or additional legislation, to cover the banks, Kremlin cronies, and cut-outs often used in such operations.

The United States and Europe should expand ad-disclosure regulations to require that the chief donors to organizations sponsoring political or issue ads be named. An ad-sponsor group called “Americans for Puppies” might appear in a different light if its chief donors were identified as a foreign government or its agents. Just as with money laundering through shell companies, the final beneficiary of any ad (political or otherwise) should be identified. The U.S. Congress or the European Parliament could mandate social media companies’ compliance with such policies, or the firms could adopt them voluntarily. In Europe, several countries are experimenting with regulating online content. Germany’s NetzDG law requires social media platforms to remove hate speech within 24 hours of receiving complaints or face fines of up to 50 million euros. In France, President Emmanuel Macron has asked for a counter-disinformation law, which is likely to be unveiled later this year. Unlike laws aimed at limiting content, the EU’s General Data Protection Regulation, which went into effect on May 25, focuses on transparency and consent: it mandates that firms give users detailed information on how their personal data are used.

In mounting a response to disinformation, democratic governments should continue to respect freedom of speech and expression. In the United States, this should not be especially difficult, since First Amendment protections apply less fully to foreign persons, especially those outside the country. Foreigners (unless they are lawful permanent residents) are already banned from contributing to candidates or political parties, placing or financing ads in a campaign context, and engaging in other campaign-related activities, broadly understood. And while disinformation is also spread by American groups and individuals, the government has no obligation to extend constitutional protections to automated accounts or foreign actors.

Similarly, while freedom of expression is enshrined in the European Convention on Human Rights, the European Union nonetheless has regulatory leeway to protect citizens from “activities aimed at deliberately manipulating their views and covertly influencing their decisions.” So far, the European Commission, the executive arm of the European Union tasked with assessing the potential for legislative responses to counter disinformation, has stopped short of regulation. An April 2018 strategy for tackling online disinformation laid out guidelines for a voluntary code of conduct by social media companies coupled with increased funding for fact checkers and media literacy. If companies’ self-regulation does not prove satisfactory to the commission, the European Union will likely move toward regulation. Still, member states will have to tread carefully when considering legislation so as not to slip into censorship.

Social Media Companies Should Be Held Responsible

Social media companies should not and cannot be the “arbiters of truth,” deciding which content is truthful and which is not. However, Facebook, Twitter, Google, and others have a responsibility to prevent and get ahead of malicious manipulation of their platforms. These firms could rely on independent third-party assessments, based on transparent metrics, of what qualifies as credible versus low-quality content. Reference points for judging content quality, such as Stanford University’s Web Credibility Project, could be built into ranking algorithms.

Other indicators can help distinguish credible websites from non-credible ones, such as site longevity: a measure of how long a specific URL or domain has been operational. Disinformation is often spread through fly-by-night sites set up solely to publish a fake news story and push it through social media, or to turn a quick profit on inflammatory “clickbait.” Such sites, whether run by malicious state actors or by opportunistic “entrepreneurs,” typically have not been online very long, so Google, Facebook, Twitter, and others could demote their content in users’ newsfeeds and search results. In ambiguous cases, content from short-lived sites could be flagged for review by human editors.
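To make this concrete, below is a minimal sketch of how a ranking pipeline might fold a site-longevity signal into an engagement score. The field names, thresholds, and weighting formula are illustrative assumptions rather than any platform’s actual implementation; a real system would draw a domain’s first-seen date from WHOIS records or crawl history and combine longevity with many other credibility signals.

```python
# Illustrative sketch only: the thresholds and scoring formula below are
# assumptions for demonstration, not any platform's actual ranking logic.
from dataclasses import dataclass
from datetime import date, timedelta
from typing import List, Optional


@dataclass
class Item:
    url: str
    domain_first_seen: date   # assumed input, e.g. from WHOIS or crawl history
    engagement_score: float   # assumed input: the platform's engagement-based score


def longevity_factor(first_seen: date, today: Optional[date] = None) -> float:
    """Return a multiplier in [0.2, 1.0] that penalizes very young domains."""
    today = today or date.today()
    age_days = (today - first_seen).days
    if age_days >= 365:   # illustrative threshold: sites older than a year keep full weight
        return 1.0
    if age_days <= 14:    # illustrative threshold: fly-by-night sites are strongly demoted
        return 0.2
    # linear ramp between two weeks and one year
    return 0.2 + 0.8 * (age_days - 14) / (365 - 14)


def rank(items: List[Item]) -> List[Item]:
    """Order items by engagement weighted by the domain-longevity factor."""
    return sorted(
        items,
        key=lambda it: it.engagement_score * longevity_factor(it.domain_first_seen),
        reverse=True,
    )


if __name__ == "__main__":
    feed = [
        Item("https://established-news.example/story", date(2010, 1, 1), 0.7),
        Item("https://fresh-clickbait.example/shocking", date.today() - timedelta(days=3), 0.9),
    ]
    for item in rank(feed):
        print(item.url)
```

In this sketch, a highly engaging story from a three-day-old domain ranks below a less engaging story from an established site; borderline scores could instead trigger the human review suggested above.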

In general, social media companies should review how their algorithms, which first and foremost prioritize the content users interact with most, unwittingly promote false information and extremist content. This will require rethinking the online advertising model, in which platforms earn revenue from advertisers in exchange for access to a large target audience. One of the most crucial lessons of the Russian disinformation campaigns is that malicious actors seeking to undermine democracies use the same ready-made advertising tools that Facebook, YouTube, and Twitter offer to companies trying to sell products to target consumers. A marketing campaign looks the same whether it is Nike trying to sell sneakers or the Kremlin trying to push false stories. Facebook and others must devote more attention to examining advertising on their platforms; banning known propaganda outlets sponsored by authoritarian states or extremist groups from advertising would be an important step. RT, for example, has been forced by the Department of Justice to register as a foreign agent and has been found in violation of journalistic principles by the United Kingdom’s broadcasting regulator.

Social media firms have taken some steps in this direction: Twitter banned both RT and Sputnik from advertising on its platform, and Google has said it is “de-ranking” search results from these outlets. Still, RT and Sputnik content remains prominent in some search results, especially in languages other than English, a sign that much more needs to be done.

These and other necessary steps will probably cost social media companies some advertisers and users. But if we are to trust Facebook CEO Mark Zuckerberg, who in his April testimony to Congress said that his company is serious about tackling disinformation, a willingness to forgo some ad revenue would be a critical sign of that seriousness. The company will also need to be more forthcoming in sharing information with governments (while protecting real users’ data). With regulatory scrutiny increasing, social media companies that have resisted oversight may now be more amenable to rebuilding trust with governments. The United States, the European Union, and individual governments should collectively use their leverage, namely the threat of regulation and continued political pressure, to bring firms to the table and keep them there.

A Transatlantic Approach

The scope of the challenge is broad and evolving, demanding commitment from governments, private companies, and the general public on both sides of the Atlantic. The transatlantic community should establish a broad coalition that regularly brings together like-minded government and nongovernmental stakeholders, including social media and traditional media companies, internet service providers, and civil society.

This Counter-Disinformation Coalition could develop best practices for confronting disinformation in ways consistent with democratic norms. Initially non-binding, these guidelines could later serve as a basis for regulatory measures. The coalition could take the form of a new multilateral group, as with the Global Coalition to Defeat ISIS, or be convened by an existing multilateral organization, such as NATO, the European Union, or the Organization for Economic Co-operation and Development, expanded to include democratic governments beyond Europe and North America. It would bring together representatives from existing counter-disinformation initiatives, such as NATO’s StratCom Center of Excellence in Riga and the East StratCom Task Force in Brussels, as well as social media companies, media, and civic groups, for a regular dialogue. It would address issues such as transparency in advertising, procedures for identifying bots and trolls, the identification and labeling of overt propaganda, and free speech and internet freedom in the context of the spread of disinformation. The coalition should be flexible and could be loosely organized: a democratic response to disinformation should not be a top-down, state-driven effort. Rather, it should harness the strengths of democracies, which are rooted in pluralism and independence and are thus inherently decentralized.

The coalition could start by developing a voluntary code of conduct on the basis of the European Union’s April strategy, outlining agreed public and private procedures for dealing with disinformation and identifying gaps in knowledge. Recent precedent exists for such an initiative: in 2016, E.U.- and U.S.-based social media companies agreed on a voluntary code of conduct to combat hate speech. The group should also discuss regulatory initiatives, though these will vary with national context. Its principles and recommendations should reflect the practical difficulty of distinguishing between domestic- and foreign-origin bots and trolls.

The recommendations here are near-term steps to resist and restrict disinformation. Over the longer term, democratic societies will need to build resilience: education in digital literacy, investment in independent media, and a better understanding of the emotional appeal of disinformation. Russia’s aggressive use of disinformation has drawn immediate attention to the challenge, but Russia is merely a pioneer; the problem will grow. Democratic societies may be at a short-term disadvantage in contending with propaganda and demagogues, but Cold War history demonstrates that democratic ideals hold longer-term advantages over authoritarian ones.

 

Dr. Alina Polyakova is the David M. Rubenstein Fellow for foreign policy at the Brookings Institution. An expert in Russian political warfare and European populism, Dr. Polyakova is also an adjunct professor of European studies at the Johns Hopkins School of Advanced International Studies (SAIS) and author of the recent book, “The Dark Side of European Integration.” She previously served as Director of Research at the Atlantic Council. She is the co-author, with Daniel Fried, of the Atlantic Council’s report, “Democratic Defense Against Disinformation,” on which this article is based.

Ambassador Daniel Fried is a Distinguished Fellow with the Future Europe Initiative and Eurasia Center at the Atlantic Council. He served as NSC Senior Director and Special Assistant to Presidents Clinton and Bush, Assistant Secretary of State for European Affairs, and Ambassador to Poland; in these and other positions, Ambassador Fried helped design U.S. policy toward Europe after the fall of the Iron Curtain.  The final position of his 40-year foreign service career was as Coordinator for Sanctions Policy at the State Department.

 Image: Maurizio Pesce/Flickr