The 26 Words That Guard the Open Internet and Open-Source Intelligence


No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
Section 230 of the Communications Decency Act

In 1996, Congress quietly included 26 words in a massive overhaul of U.S. telecommunications laws. Those words, part of Section 230 of the Communications Decency Act, provide broad immunity to websites, apps, and other online platforms for claims arising from user-generated content.

It is difficult to overstate the impact that Section 230 has had on the modern Internet. In fact, Cornell University Press is publishing my history of Section 230 next month, entitled The Twenty-Six Words That Created the Internet. Section 230 has been one of the greatest enablers of online speech in the world, allowing companies like Facebook, Twitter, and Wikipedia to offer vast amounts of user content without fearing liability for each word and image. But it has also enabled harmful speech, such as defamation, harassment, and sex-trafficking ads, and it has faced criticism for creating open forums for the self-proclaimed Islamic State and other national security threats. Indeed, some judges have cited Section 230 as a reason to dismiss claims against online platforms brought by the families of the Islamic State’s victims.

It is worth understanding the intelligence, security, and law enforcement benefits of the open Internet that Section 230 has created. Rather than communicating exclusively via encrypted and anonymized tools on the dark web, some bad actors are fully visible to U.S. intelligence and law enforcement, providing valuable intelligence about threats. Changes to Section 230 may reduce or eliminate many of these benefits.

What is Section 230?

Section 230’s origins trace back to CompuServe and Prodigy, which were among the earliest nationwide services that connected home computers, offering bulletin boards, chatrooms, and online newsletters. CompuServe took a “Wild West” approach to its services and did not moderate the content of user posts or newsletters. Prodigy, on the other hand, developed user content rules and employed moderators to manage forums and delete objectionable content. A federal judge in 1991 held that because CompuServe was merely a “distributor,” like a bookstore, under the First Amendment it could be liable for third-party content only if it knew or had reason to know that the content was illegal. Less than four years later, a state court judge on Long Island ruled that because Prodigy moderated content, it was liable for all user content, even if it had no reason to know of the illegal material. In other words, U.S. law punished online services that attempted to block user content that was harmful to children or otherwise objectionable.

Recognizing the perverse incentives that these court opinions created, Republican Rep. Chris Cox and Democratic Rep. Ron Wyden proposed the Internet Freedom and Family Empowerment Act. They had two primary goals: encourage responsible content moderation and allow a new industry to thrive without fear of lawsuits or government regulation. The bill contained the 26 words that prevented online services from being treated as the publishers of third-party content, as well as immunity for “good faith” actions to block access to objectionable material. The bill contained only a few exceptions: for federal criminal law, intellectual property, and the Electronic Communications Privacy Act.

The Cox-Wyden bill was folded into Title V of the Telecommunications Act of 1996, along with the Communications Decency Act, a Senate bill that restricted the transmission of “indecent” communications. The Cox-Wyden bill became known as Section 230 of the Communications Decency Act.

Section 230 went virtually unnoticed at the time of passage, as the focus was largely on the Senate’s indecency language, which the Supreme Court would soon strike down as unconstitutional. Section 230 remained on the books, but the new law’s impact was unclear. What did it mean to prohibit treating an online service as the speaker or publisher of third-party content? In 1997, the first federal appellate court to address the issue held that those 26 words provide sweeping immunity for online services, and courts nationwide soon followed its lead. Although some courts have set limits — such as when a website requires users to answer illegal questions to create online profiles — the immunity has largely survived.

Section 230 provides the world’s strongest protections for platforms that host user content, so it is no surprise that so many of the most successful platforms are based in the United States. Section 230 allows platforms to decide whether and how to moderate user content.

As platforms have grown to play an increasingly central role in our lives, their use of this responsibility and power has come under unprecedented scrutiny. Terrible stories about platforms’ failures to act responsibly seem like a daily occurrence. People face threats to their safety, and even their lives, when they are impersonated online, and they often cannot convince platforms to help address the danger. Sales of guns and drugs proliferate, and some platforms do not take these threats as seriously as they should.

Content moderation is hard. What is illegal or objectionable to one person may be the legitimate exercise of free speech rights to another. Many platforms have recently increased their investments in human moderators and artificial intelligence and have become more transparent about their moderation policies and practices, but they have a long way to go. Some platforms have abdicated any meaningful responsibility, treating Section 230 as a birthright and failing to recognize that one of the reasons Cox and Wyden proposed Section 230 was to encourage them to develop responsible moderation practices.

Congress has taken notice. After a federal appellate court held in 2016 that Section 230 barred sex-trafficking victims from suing the website where they were trafficked, Congress last spring passed the first-ever substantial amendment to Section 230, providing an exception to its immunity for certain civil lawsuits and state criminal actions involving sex trafficking.

Another area where Section 230 has attracted criticism involves lawsuits brought by victims of terrorist organizations that use social media platforms to communicate, spread propaganda, and raise money. Some of these lawsuits, often brought under the federal Anti-Terrorism Act, have been dismissed under Section 230, though others have been dismissed because judges determined there was not a sufficient causal link between the platform and the harms caused by terrorism. A 2015 Brookings study concluded that the Islamic State’s “ultraviolent propaganda provides an unusually high level of stimulation to those who might already be prone to violence.” In congressional testimony in 2015, then-FBI Director James Comey described how the Islamic State uses Twitter to recruit and influence followers: “It buzzes in their pocket. There is a device — almost a devil on their shoulder all day long, saying ‘Kill. Kill. Kill. Kill.’”

Transparency Can Foster Security

The link between the Section 230-created Internet and national security is not as simple as Comey described. Terrorists, criminals, and other bad actors have used social media to amass power. But their use of these platforms often is in full public view, providing the government with unparalleled access to information about their operations.

Had these communications been transmitted via private channels, intelligence and law enforcement agencies would need to navigate a morass of constitutional and statutory restrictions on collection, such as the Fourth Amendment, the Stored Communications Act, and the Foreign Intelligence Surveillance Act (depending on the location of the data and the target, and on the nature of the surveillance). But information on social media often is out in the open, in plain view. And this open-source intelligence is often valuable.

The inherently classified nature of national intelligence prevents us from understanding the full value of open platforms to the U.S. intelligence community. However, public reports indicate the substantial value of the information gathered from these sites. For example, after Malaysia Airlines Flight 17 was shot down over Ukraine in July 2014, the Defense Intelligence Agency almost immediately spotted a social media post from Ukrainian separatists taking credit for shooting down a cargo plane. In 2015, Twitter was among the first sources of photos of Russian aircraft over Syria. The indictment last year of 12 Russian nationals who hacked the Democratic National Committee and the Democratic Congressional Campaign Committee cited the defendants’ tweets.

The vast volume of social media content provides particularly rich data for aggregate analysis. For instance, a 2018 study analyzed 26.2 million Arabic-language Twitter comments over six months and concluded that “the expressed on-line support to ISIS mainly changes according to the specific target of ISIS actions, to military events, to the on-line volume of the discussion about ISIS and to the coverage of media about it.”
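To give a concrete, deliberately simplified flavor of this kind of aggregate analysis, the sketch below buckets a handful of stance-labeled posts by week and tracks the weekly share of supportive posts, the sort of time series an analyst could then compare against military events or media coverage. The data, labels, and field names here are invented for illustration; the study’s actual methodology, applied to 26.2 million tweets, was far more sophisticated.

```python
# Minimal, hypothetical sketch of aggregate stance analysis over time.
# All records and labels below are invented for illustration only.
from collections import defaultdict
from datetime import date, timedelta

# Each record: (posting date, stance label from some upstream classifier).
posts = [
    (date(2015, 1, 5), "support"),
    (date(2015, 1, 6), "oppose"),
    (date(2015, 1, 13), "oppose"),
    (date(2015, 1, 14), "support"),
    (date(2015, 1, 15), "oppose"),
]

def week_start(d: date) -> date:
    """Normalize a date to the Monday of its week."""
    return d - timedelta(days=d.weekday())

# Count posts per (week, stance) bucket.
buckets = defaultdict(lambda: {"support": 0, "oppose": 0})
for posted, stance in posts:
    buckets[week_start(posted)][stance] += 1

# Report the weekly share of supportive posts -- the time series an
# analyst would compare against external events.
for week in sorted(buckets):
    counts = buckets[week]
    total = counts["support"] + counts["oppose"]
    print(week, f"support share: {counts['support'] / total:.0%}")
```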

The open-source data available on platforms such as social media offers many of the same benefits — and operational challenges — as older forms of human intelligence and signals intelligence gathering. A 2018 RAND report observed that, like human intelligence, “social media data collection provides insights and perspectives of an individual — one who either has unique access or may provide a representative point of view for a community or specific national population,” and like signals intelligence, collection of open source data via social media “may involve electronic collection of a massive number of records that are sifted using technical means to identify interactions or communications of critical interest.”
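As a purely illustrative sketch of that kind of “sifting,” the snippet below scans a stream of public posts and keeps only those mentioning terms on an analyst-supplied watchlist. The records, watchlist, and field names are all invented for this example; real collection systems are vastly more capable.

```python
# Toy illustration of sifting a large stream of public records: scan each
# record once and keep only those matching a watchlist of terms.
# Everything here (records, watchlist, fields) is invented for illustration.
from typing import Iterable, Iterator

WATCHLIST = {"flight 17", "buk launcher"}  # hypothetical terms of interest

def sift(records: Iterable[dict]) -> Iterator[dict]:
    """Yield only records whose text mentions a watchlisted term."""
    for record in records:
        text = record["text"].lower()
        if any(term in text for term in WATCHLIST):
            yield record

stream = [
    {"user": "acct1", "text": "We downed a transport near the border"},
    {"user": "acct2", "text": "Photos of the Buk launcher moving east"},
]
for hit in sift(stream):
    print(hit["user"], "->", hit["text"])
```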

Stanford’s Daphne Keller has noted concerns that the removal of extremist content can destroy intelligence information. She cited a 2010 case in which the Pentagon reportedly caused the closure of al-Qaeda online forums over the objection of the Central Intelligence Agency:

Different agencies — domestically and internationally — may have very different strategies and priorities regarding online extremist activity. When they do not coordinate, platforms can be caught in the middle.

Open-source intelligence from social media is also valuable for domestic criminal prosecution. For instance, in November 2017, federal prosecutors secured indictments of 13 members of Detroit’s Smokecamp street gang. The indictment cited posts on Facebook, Instagram, Twitter, and YouTube that contained “photographs, videos and statements that identify and highlight their affiliation with the enterprise, as well as their gang-related accomplishments.”

Limiting Section 230 would inevitably cause at least some platforms to restrict — if not entirely ban — user-generated content. At first blush, ridding social media of criminals and terrorists might seem entirely desirable. But banning them from public platforms probably will not silence them. They will move to channels that are more difficult — or impossible — for law enforcement and intelligence to access. As the RAND report noted, increased use of encryption and other security methods means that “information that would have been openly publicly available just a few years ago may now be accessible only by using clandestine or covert collection methods.”

These concerns about reduced intelligence are not merely theoretical. The sex-trafficking exception to Section 230 had an immediate impact on law enforcement’s visibility. In 2017, Alex Levy predicted such an outcome in a Wake Forest Law Review article, writing that “allowing Internet platforms on which sexual services are brokered to thrive may be key to apprehending traffickers and recovering victims.”

Levy’s article proved prescient. Two days after the Senate passed the sex-trafficking exception in 2018, Craigslist shut down its online personals site. Separately, the federal government seized and shut down Backpage, the source of many sex-trafficking ads, just days before the exception was signed into law. The removal of this public information made law enforcement’s job of tracking down traffickers and victims far more difficult. “We’ve been a little bit blinded lately because they shut Backpage down,” undercover Indianapolis vice officer John Daggy told local TV station RTV6. “I get the reasoning behind it, and the ethics behind it, however, it has blinded us. We used to look at Backpage as a trap for human traffickers and pimps.”

Section 230 is responsible for the Internet that we know today — the good, the bad, and everything in between. Any further changes to Section 230 will have an indelible impact on social media and other platforms, and on the inherently open nature of the Internet. That very well may be a positive outcome if it sufficiently reduces real harms such as Islamic State propaganda and illicit drug sales. But any analysis of changes to Section 230 must consider the very real drawbacks, including the reduced visibility of bad actors when the Internet is less open.


Jeff Kosseff is an assistant professor in the U.S. Naval Academy’s Cyber Science Department. His book about Section 230, The Twenty-Six Words That Created the Internet, will be published in April by Cornell University Press. The views expressed in this piece are only those of the author, and do not represent the U.S. Naval Academy, the Department of the Navy, or the Defense Department.

Image: Japanexperterna.se