The Complicated Truth of Countering Disinformation

Christina Nemr and Will Gangware

In 2017, Myanmar’s military led a protracted propaganda campaign that resulted in mass murder. Over 25,000 of the country’s Rohingya people were raped or killed, while over 700,000 more were forced to flee. Though divisions between Myanmar’s Buddhist majority and Muslim Rohingya minority have existed for over half a century, social media platforms — notably Facebook — amplified the disinformation and propaganda efforts that ended in horrific violence and a refugee crisis.

Focusing exclusively on Facebook’s role in the crisis, however, misses the point. The truth is more complicated — social media’s unprecedented ability to spread disinformation succeeds in part because of vulnerabilities in the way people process and evaluate information. In an information environment characterized by an oversaturation of content and algorithms designed to increase views and shares, narratives (true or not) can quickly go viral by appealing to our biases. This new, decentralized world of content creation and consumption is ripe for exploitation by nefarious actors who seek to spread doubt and untruths.

To counter modern disinformation, then, we cannot focus solely on social media platforms or current technologies — we should also understand the psychological factors that underpin our identities and perceptions of the truth. Acknowledging the ways in which people are vulnerable to biased narratives is a key part of developing the necessary multi-dimensional responses. Armed with this framework, stakeholders can then explore adapting the tactics of disinformation to elevate credible information, supporting collaborative long-term research on psychological approaches, and creating public-private platforms to share key information across organizations.

Of note, the ongoing fight against terrorist narratives online offers lessons that technology companies and governments can leverage to help counter disinformation. After all, disinformation and violent extremism are in many ways two sides of the same coin, fueling each other to create dangerous and potentially violent environments, as in Myanmar.

The following piece explores our psychological vulnerabilities to disinformation and our options for addressing them, and highlights strategies from the fight against online terrorist narratives that the public and private sectors might emulate in countering disinformation. Of course, the discussion that follows represents merely a slice of the many challenges we face.

Cognitive Constraints in an Online Jungle 

There are significant vulnerabilities in the information ecosystem that malicious actors can exploit, and they derive from three primary, interconnected elements: the medium, the message, and the audience.

The first two elements, the medium and the message, operate in tandem. Social media and news platforms are inherently vulnerable to sensationalist content like propaganda and disinformation, which are designed to attract views and shares and therefore generate revenue. And unlike pornography in Justice Stewart’s famous aphorism (“I know it when I see it”), propaganda and disinformation can be difficult to recognize and define. Content can range from biased half-truths, to conspiracy theories, to outright lies, but the overarching intent is to manipulate people or blur the truth.

Unfortunately, the emotions most useful for sowing such confusion and division — uncertainty, fear, and anger — are precisely those that make a message more likely to go viral: On average, a false story propagates six times faster than a true one.

To understand why, we must consider the third factor creating vulnerabilities in the information ecosystem, and the one most difficult to address: the audience.

People are not rational consumers of information. Propaganda and disinformation exploit fundamental biases and behaviors, preying on desires for certainty, identity, and belonging. Such content amplifies the fear, anger, or shock that humans feel in uncertain times, and taps into the desire to reject ambiguity.

Psychologists refer to this as the need for cognitive closure — during troubled times or when facing uncertainty, individuals seek swift clarity, which can make them vulnerable to ideas that offer clear, black-and-white messaging. By embracing binary beliefs and the identity those beliefs provide, people tend to aggregate into like-minded “in-groups” and insulate themselves from alternative viewpoints.

In such contexts, the truth matters little. Research shows that when individuals with strongly held beliefs are presented with alternative or opposing ideas, they process the new information as a threat, provoking the same biological response as facing actual physical danger — fight or flight. Even after individuals have been informed and accept that a news story is not entirely truthful, one-third will still share the story. Given these quirks of human behavior, researchers contend that instead of fake news exacerbating polarization, it is more likely that polarization exacerbates fake news.

The challenge of propaganda and disinformation is therefore not one that falls solely within the purview of any one industry, government, or company like Facebook. An effective response requires understanding and addressing the converging factors of technology, media, and human behavior. But that hasn’t stopped people from calling for various one-dimensional remedies, from fact-checking to content moderation. Would any of these proposed remedies work?

Breaking Down Options to Counter Disinformation

A standard response to growing disinformation online has been to promote media literacy. Consequently, there are now 188 fact-checking outlets in over 60 countries, a 26 percent increase since February 2018. Likewise, a number of browser extensions and dedicated fact-checking sites now help information consumers discern fact from fiction. However, as detailed above, certain narratives are difficult to counter because of how they resonate cognitively and emotionally with their intended audiences — a phenomenon that will be familiar to countering violent extremism practitioners who have been working to push back against terrorist propaganda. Furthermore, the evergreen question remains: How do you ensure that the fact-checking or debunking reaches the target audience? When consumers of fake news are presented with a fact check, they almost never read it, and when they do, they are often unable to contend with or accept it.

As a related solution, many have argued for improving critical thinking skills. However, researchers have found that such efforts may have the counterproductive effect of creating doubt about the credibility of news in general. Perhaps more alarming, some research indicates that many consumers of disinformation already perceive themselves as critical thinkers who are challenging the status quo. What the literature makes clear is that humans are neither well-equipped nor motivated to identify certain messages as false or misleading.

It’s not all bad news, however. There are social elements to fact-checking that can encourage the creation and sharing of accurate information. Strong social connections between individuals who generate false content and individuals who fact-check that content can make the former more likely to correct their own statements. This finding fits with additional research that shows individuals depend on their social networks as trusted news sources and are more likely to share a post if it originates from a friend. This of course can be either a benefit or a threat, depending on the quality of the news your friends are sharing.

Another study found that counterfactual information can change partisan opinions when that information is accompanied by strong evidence. Interestingly, the study also found that participants were generally open to opposing information unless they were primed to feel adversarial or the opposing arguments were presented in an insulting or condescending manner. Combined with the aforementioned research on fight-or-flight reactions to undesirable information, an age-old lesson emerges: If you’re trying to counter misleading information on a polarizing topic, avoid disparaging those whose views you seek to change.

Here, media can play a key role in countering disinformation. Research shows that repetition helps people process information more easily and, relatedly, that repeating a false claim can increase its believability. Reputable news outlets should seize on this research and emulate the ways disinformation purveyors replicate the same narratives ad nauseam. By producing high quality reporting that constantly repeats the facts on polarizing topics, without referring to or publicizing disinformation claims, these outlets could help people more easily draw upon credible information.

Other approaches offer ways to build psychological resilience against disinformation. Research has demonstrated the effectiveness of treating disinformation as a contagion against which a person can be inoculated. Studies also show the value of increasing people’s ability to integrate multiple perspectives on topics so they avoid falling prey to black-and-white thinking. Such efforts show promise on an individual level and merit additional research to understand how they might scale to larger populations.

The Promise and Peril of Public-Private Relationships

Given the vulnerabilities in both our information environment and the ways in which we process information, how should we be thinking about the role of industry and government in countering disinformation?

Ultimately, governments have a responsibility to defend against propaganda and disinformation that threaten national security, whether they originate from state-sponsored or non-state sources. Yet the battleground exists on the technology platforms where users host, view, and share content.

This divergence — companies owning the platforms, but governments owning the responsibility to counter foreign disinformation — suggests the need for governments to engage more constructively with technology companies on content-related issues tied to national security, an argument the manager of Facebook’s global counterterrorism policy comprehensively lays out. A productive public-private relationship would enable transparent, real-time information sharing between government analysts and technology companies’ policy teams, improve fact-finding and lessons learned after concerted disinformation attacks, and facilitate the deployment of targeted solutions to more quickly counter disinformation campaigns on technology platforms. However, policymakers should not underestimate the difficulty of adequately balancing civil liberties and national security interests in this mission. The same ambiguities, privacy concerns, and scoping questions that apply to terrorist content extend to propaganda and disinformation, with the latter posing some especially difficult challenges.

For example, proxy accounts that imitate real citizens engaging in real political discourse launder a significant volume of propaganda and disinformation. Purveyors of disinformation have increasingly sidestepped the creation of sensationalized false content, which platforms and fact-checkers can more easily identify, and have instead amplified existing partisan narratives to further polarize and divide target audiences. In this respect, disinformation content can be indistinguishable from authentic partisan exchanges.

Fortunately, there are examples of successful public-private models focused on countering terrorist content that can serve as a starting point for tackling disinformation. For instance, the Global Internet Forum to Counter Terrorism is led by major technology platforms in partnership with governments, the United Nations, non-governmental organizations, and academia to curb terrorist content online while seeking to preserve core values such as free speech and privacy. It has established a shared database of identifiers for terrorist media content, called “hashes,” which member companies then collectively block. Trained artificial intelligence identifies relevant material to restrict, thereby augmenting the abilities of human moderators. And while cases exist where current tools have failed to comprehensively identify relevant material, public-private collaboration has nonetheless generated real results — the forum’s database currently contains more than 200,000 hashes of terrorist content blocked across the member platforms.
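
For readers who want to picture how such hash sharing works mechanically, here is a minimal Python sketch under assumed names; the real GIFCT database relies on perceptual hashing and managed, access-controlled infrastructure, so the cryptographic hash and in-memory set below are simplifications for illustration only.

```python
import hashlib

# Hypothetical stand-in for the shared database of content identifiers
# ("hashes") contributed by member platforms.
SHARED_HASH_DB = {
    "3b1f5c-example-placeholder-hash",
}

def fingerprint(media_bytes: bytes) -> str:
    # SHA-256 is used purely for illustration; production systems typically
    # use perceptual hashes that survive re-encoding and minor edits.
    return hashlib.sha256(media_bytes).hexdigest()

def should_block(media_bytes: bytes) -> bool:
    # Check an upload against the shared database before publication.
    return fingerprint(media_bytes) in SHARED_HASH_DB
```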

A similar system might work for countering various aspects of propaganda and disinformation, so long as stakeholders are clear-eyed about the limitations. For example, a major source of known disinformation is external fake news sites meant to imitate legitimate news outlets. A coalition of relevant stakeholders could coordinate social media platforms, adtech companies, and fact-checking technologies to identify and share the URLs of known disinformation sites, which could then be blocked or de-prioritized across platforms. Sharing this intelligence with adtech platforms would also help starve these sites of their advertising revenue.
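
As a rough illustration of what such URL sharing could look like in practice, the sketch below assumes a coalition-maintained domain list and a per-platform choice between blocking and down-ranking; the names and data structures are hypothetical, not any platform’s actual API.

```python
from urllib.parse import urlparse

# Hypothetical feed of domains flagged by coalition members; in practice this
# would be a versioned, authenticated feed rather than a hard-coded set.
SHARED_DISINFO_DOMAINS = {"example-fake-news.test", "another-hoax-site.test"}

def classify_link(url: str, policy: str = "block") -> str:
    # Return the platform's configured action ("block" or "deprioritize")
    # for links to flagged domains, and "allow" for everything else.
    domain = urlparse(url).netloc.lower()
    return policy if domain in SHARED_DISINFO_DOMAINS else "allow"

# Example: one platform blocks outright, another only down-ranks.
print(classify_link("https://example-fake-news.test/story"))                  # block
print(classify_link("https://example-fake-news.test/story", "deprioritize"))  # deprioritize
```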

But fake news URLs are just one of many areas where information sharing could improve efforts to counter disinformation. Collaboration could extend to the latest methods of fake news detection, such as algorithms or other tools that technology companies have developed for their own use but that might have greater impact if shared across platforms. It could include sharing the unique account characteristics associated with malicious actors who create webs of disinformation accounts; peer platforms could simultaneously block accounts with those identifiers to ensure a concerted response. It would similarly be valuable for individual technology platforms to share disinformation-related threat intelligence with one another, an initiative that might emulate the Cyber Threat Alliance, a collaboration established by cybersecurity firms in 2014 to facilitate the sharing of cyber attack intelligence among members.
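
A minimal sketch of what sharing account-level indicators between peer platforms might look like follows; the fields and matching rule are assumptions chosen for illustration, not an established industry schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AccountIndicator:
    # A platform-agnostic signal tied to a known disinformation network
    # (hypothetical fields for illustration).
    registration_email_domain: str
    creation_ip_block: str

# Indicators shared by a peer platform after investigating a campaign.
SHARED_INDICATORS = {
    AccountIndicator("throwaway-mail.test", "203.0.113.0/24"),
}

def flag_for_review(email_domain: str, ip_block: str) -> bool:
    # Flag a new account if it matches any indicator shared by peers.
    return AccountIndicator(email_domain, ip_block) in SHARED_INDICATORS
```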

Combined efforts should also prioritize collaborative research into addressing the psychological resonance of disinformation. While some technology platforms have recently supported such studies, academic researchers and advocates have long lamented the delays and roadblocks they face in accessing data from companies like Facebook. Where sharing research data is possible, public-private initiatives should consider the merits of a number of proposals that still need validation — from labeling high- versus low-quality information, to weighting search results for news based on the quality of the advertisements, to analyzing the long-term efficacy of media literacy efforts to see which approaches have the greatest impact.

Moving Forward

There are two constants in this complex information environment: the use of propaganda and disinformation as tools for influence and obfuscation, and the underlying psychological factors that make humans vulnerable to such narratives. What changes, however, are the technologies by which such content is created and spread.

Given these factors, it is important not to overstate the impact of technology, but rather to understand and address the interwoven complexities disinformation poses. In the near term, social media platforms are best positioned to lead counter-disinformation efforts, and those efforts should be made as transparent as possible in collaboration with government and other partners. But all stakeholders should approach disinformation as the multi-dimensional problem that it is, contending honestly with the cognitive limits on how people take in information and with the ambiguity of what constitutes disinformation. Only then will we craft effective policies, regardless of the technologies involved.

Christina Nemr is Director of Park Advisors, overseeing counter-disinformation and countering violent extremism programming. She previously worked with the U.S. Department of State’s Bureau of Counterterrorism. Will Gangware consulted for Park Advisors and previously worked with the NYPD Intelligence Bureau. This article contains select excerpts from a larger report on psychological and technological vulnerabilities to disinformation. The views expressed in this article are those of the authors alone and do not reflect the views of Park Advisors or its clients.

Image: Voice of America