Everything You Need to Know About Section 230 [Deep_Dive]
Section 230 is a piece of legislation that has been heavily discussed and debated in recent years. So much so, in fact, that it has morphed into a buzzword thrown around haphazardly in debates over free speech and online censorship.
The essential facts about Section 230 seem to have been lost somewhere along the way. This becomes apparent when you step back from the current discourse and watch debates erupt over facts that should be beyond dispute. The problem boils down to a fundamental misunderstanding of what Section 230 actually is and what it entails.
This post aims to address that issue. The sections below explain what Section 230 actually says, along with supplementary information to make the law as clear as possible. Links to each section of this post are provided below for reference.
- The Differences Between Publishers, Platforms and Distributors
- Breaking Down Section 230's Protections
- Common Law Cases for Section 230
- Exploring the Volatile Discourse of Section 230
- An Outlook of Section 230's Future
The Differences Between Publishers, Platforms and Distributors
Before diving into the law itself, let's first define the parties directly affected by it and how each is typically held liable for content. This is key to understanding the current arguments surrounding Section 230.
- Publisher: A person or company that prepares and issues books, journals, music, or other works for sale.
  Examples of publishers include newspapers, traditional media companies, broadcast stations, etc. Publishers are generally held liable for all of the content they publish. This means that newspapers can be sued for the defamatory or obscene content they produce.
- Distributor: A person or company who makes products or services available to consumers or business users.
  In this context, distributors would be bookstores, libraries, retail stores, etc. Distributors are typically held liable for the content they distribute only after being put on notice about the problematic content. For example, a bookstore isn't expected to vet all of the books in its store for defamatory content. Once a case is reported, though, and the distributor is made aware of it, the distributor can be held liable if the problematic book isn't removed from its shelves.
- Platform: An environment where information is shared or exchanged between multiple people.
  Non-digital platforms include public areas and town squares, while digital platforms include things like social media and telephone networks. Platforms are generally not held liable for the content shared in their environments. This means that telephone companies aren't liable for trafficking or terrorist activity planned over their phone lines or via text, because it would be nearly impossible for these companies to monitor all communications in real time. For social media and other online platforms, Section 230 is what grants them their platform protections.
Breaking Down Section 230's Protections
Section 230 is a small part of the Communications Decency Act of 1996 (CDA) that grants broad protections to online platforms and their users. The crux of Section 230 is found in subsection (c) of the law, which covers the treatment of publishers as well as civil liability.
(C-1) Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
This section of the law states that services such as social media sites and ISPs are not considered the publishers of their user-generated content. Because these interactive computer services are not publishing anything, they are not liable for the information shared on their sites. That includes things like nasty YouTube comments, malicious reviews on apps like Yelp, and really mean tweets. If there is a case for defamation over user-generated content, then the information content provider who posted it is held liable, not the provider that hosted it. It's also worth noting that this applies only to the original poster of the defamatory information, not to any re-published copies.
- Provider: An organization that provides services such as consulting, legal, real estate, communications, storage, and processing. Relevant examples include ISPs (internet service providers), TSPs (telecommunications service providers), and SSPs (storage service providers).
- Interactive computer service: Any information service, system, or access software provider that provides or enables computer access to multiple users (examples: websites, forums, review sites, comment sections, social media, etc.).
- Information content provider: Any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet (examples: end users, shitposters, internet trolls, keyboard warriors, conspiracy theorists, memelords, etc.).
(C-2) Civil liability
No provider or user of an interactive computer service shall be held liable on account of
—(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected;
This subsection on civil liability gives online platforms the right to remove, or restrict access to, content they deem inappropriate, whether it takes the form of posts, comments, videos, images, or other user-generated content. The most important part of this passage is the final clause, which makes clear that platforms may restrict material even when that material is constitutionally protected speech. This is probably the biggest point of contention in the §230 free speech debate, as most people aren't aware of this clause.
It's also worth noting that 'good faith' in this law isn't synonymous with 'unbiased' or 'neutral', as some mistakenly believe. Good faith is a term from contract law that refers to abiding by the terms of a contract fairly and honestly, so as not to undermine the benefits due to the other parties to the contract. In this context, Section 230's protections do not cover antitrust violations or breaches of contract arising from moderation.
(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).
This subsection lets online platforms moderate users and display user-generated content however they see fit, as long as it is done in good faith. Providers therefore have the right to promote, prioritize, and advertise any user-generated content on their sites, and equally the right to de-rank, restrict, and ban that same content.
Where to find Section 230
Section 230 can be found within 'TITLE 47' of the U.S. Code. To read Section 230 (CDA §230) for yourself, it is available online at the Legal Information Institute.
- United States Code: A consolidation and codification by subject matter of the general and permanent laws of the United States.
- Title 47: One of the titles of the U.S. Code. It defines the role and structure of the Federal Communications Commission (FCC) and the National Telecommunications and Information Administration (NTIA).
Common Law Cases for Section 230
Hopefully the previous sections cleared up any doubts about what §230 is and what it entails. Now, let's go over a few key cases that have litigated §230. This is a good way to see how the law is actually interpreted in real-world scenarios and how service-provider liability has been challenged over the years.
Legal Cases Pre CDA §230
These first two examples explore how defamation cases were decided prior to §230's enactment. The outcomes of both cases, and the precedents they established, sparked the drive for §230's creation.
> Cubby, Inc. v. CompuServe Inc. (1991)
In April 1990, a daily newsletter called Rumorville published defamatory content about a competing online newsletter developed by Robert Blanchard. Rumorville, which was part of a larger news forum, was hosted by an online service provider called CompuServe Inc. In 1991, Robert Blanchard and Cubby, Inc. sued CompuServe for defamation, business disparagement, and unfair competition.
During the trial, CompuServe conceded that the content posted by Rumorville was defamatory, but argued that there was nothing to suggest it knew, or should have known, of the content's existence. The court ruled that although CompuServe did host defamatory content on one of its forums, it was merely a distributor, rather than a publisher, of that content. As a distributor, CompuServe could be held liable only if it knew of the defamatory nature of the content. Because it had made no effort to review or moderate the content on its forums, it was not held liable.
This case established the precedent that online service providers were classified as distributors of user-generated content, subject to traditional defamation law just as bookstores are. It also created a dilemma: providers were either disincentivized from moderating any of their content, for fear of becoming liable for it, or pushed toward overbearing moderation practices.
> Stratton Oakmont, Inc. v. Prodigy Services Co. (1995)
In October 1994, an unidentified user of Prodigy's Money Talk bulletin board created a post claiming that Stratton Oakmont, a Long Island securities investment banking firm, and its president Danny Porush had committed criminal and fraudulent acts in connection with the initial public offering of stock of Solomon-Page, Ltd. Stratton Oakmont sued Prodigy, as well as the unidentified poster, for defamation. The plaintiffs argued that Prodigy should be considered the publisher of the defamatory material and was therefore liable for the postings under the common-law definition of defamation. Prodigy asked to be dismissed from the case on the grounds that it could not be held liable for content posted by its users, relying on Cubby, Inc. v. CompuServe Inc. (1991), which had found CompuServe, an online service provider, not liable as a publisher for user-generated content.
The Stratton court held that Prodigy was liable as the publisher of the content created by its users because it exercised editorial control over the messages on its bulletin boards in three ways: 1) by posting Content Guidelines for users, 2) by enforcing those guidelines with "Board Leaders", and 3) by using screening software designed to remove offensive language.
The court's argument for holding Prodigy liable, in the face of the CompuServe case, was that "Prodigy's conscious choice, to gain the benefits of editorial control, has opened it up to a greater liability than CompuServe and other computer networks that make no such choice."
Legal Cases Post CDA §230
These last two cases focus on notable §230 cases decided after its enactment. They illuminate the nuances of the law, and may also reveal some of its flaws, depending on how you interpret their outcomes.
> M.A. v. Village Voice Media (2010)
Latasha Jewell McFarland was charged with several counts of sex trafficking of a minor in 2010. The evidence in the case included the trafficking of M.A., a 14-year-old runaway at the time, via several ads on backpage.com. McFarland paid Backpage for ad space to advertise pornographic photos of M.A. to potential clientele. Soon after McFarland pleaded guilty to the trafficking charges, M.A. sued Village Voice Media, the parent company of backpage.com, for also being a financial beneficiary of these sexual ads. An additional allegation of aiding and abetting was filed as part of the lawsuit.
All of the claims against Village Voice Media were dismissed. The court ruled that the company was immune from civil liability for these sexual advertisements under CDA §230, even though Backpage.com was aware that illegal advertisements circulated on its site and profited from them. The aiding and abetting claim was dismissed as well, because it couldn't be proven that Backpage.com actively and intentionally aided in the trafficking of M.A.
> Prager University v. Google (2018)
PragerU, a conservative educational media company, alleged that YouTube was engaging in selective and discriminatory censorship against it. PragerU claimed that 21 of its videos had been restricted by YouTube as of 2018. Restricted videos aren't visible to 95% of the user base and usually can't be monetized. PragerU argued that these actions contradicted YouTube's mission statement to "give people a voice" and to foster a "community where everyone's voice can be heard".
The case was dismissed on all grounds because YouTube, owned by Google, is a private entity; the First Amendment prevents only the government from abridging speech, not private parties. The court also noted that private property doesn't lose its private status merely because the public is invited to use it. Lastly, Google was protected by CDA §230 (c)(2), as outlined in its Terms of Service.
To read through more common law cases related to Section 230, check out the Electronic Frontier Foundation for more examples.
Exploring the Volatile Discourse of Section 230
Section 230 is a highly contentious topic, as it touches on concepts such as freedom of expression and individual rights. It has also become an unfortunate casualty of the current culture war. The dialogue surrounding §230 right now is what you'd expect of any moral conundrum circulating in public view.
Having followed the discourse surrounding §230 for a while now, I've identified three main viewpoints shared by most people who hold an opinion on it.
Abolish / Rewrite Section 230
One viewpoint within the §230 debate is that we should get rid of it entirely. This stance breaks down further into two camps: those who want §230 gone for good, and those who want to rebuild the law from the ground up. What is perplexing about this viewpoint is that it's shared across partisan lines, albeit for different reasons, which is very uncharacteristic of other quasi-political issues in the U.S.
One side wants to abolish §230 because they feel providers are unfairly censoring their speech, usually political speech, and especially right-wing and conservative ideas. Big-tech companies, which for the most part are the providers of these social media platforms, are overwhelmingly liberal. This leads some on this side to assume that human bias has corrupted these companies' moderation practices. Most of these arguments, though, rest on the presupposition that platforms must be neutral distributors of user-generated content; as highlighted in previous sections, the opposite is true under §230. For those on this side who do know what §230 entails, the core of their argument is that neutrality is how platforms ought to behave in the current online landscape, and that is what fuels their desire for the law's removal.
The other side of this dichotomy wants to remove §230 for the opposite reason. This side believes the law enables platforms to become breeding grounds for problematic content and social ills. One example is misinformation: this side argues that its rampant spread online is partly due to platforms not being incentivized, or forced, to stop it. Another example is extremism. Some in this camp believe moderation is too lax on certain platforms, which then become safe havens for extremists who want to propagate their problematic thinking to others. These arguments usually fail to account for the fact that providers aren't just protected by §230, but by the First Amendment as well. It would set bad precedents, and likely be unconstitutional, for the government to play a large role in deciding what speech is allowed within private entities.
Revising Some Aspects of Section 230
Another viewpoint within the §230 debate is revision. People in this camp think that scrapping §230 entirely would be too extreme, but recognize that reasonable changes could be made. The core of this argument is that the Communications Decency Act (CDA) was written a quarter of a century ago, in 1996, when the internet was still in its infancy. The senators who drafted the bill couldn't have imagined the challenges we all face online today, including the claims of biased internet censorship and the rampant spread of misinformation argued over by others. This camp believes the subsections of CDA §230 can be amended to exclude protections for things like sex trafficking, child pornography, and other illegal material.
Leave 'Section 230' Alone!
The last major viewpoint in this debate is that nothing should be done to §230. As it stands, §230 does exactly what it's supposed to do: it grants broad civil-liability protections to providers and users online. This camp argues that these protections are precisely what enabled the free and open online environment we have today. Without §230, websites like YouTube, Twitter, Reddit, Yelp, Wikipedia, Google, eBay, and Facebook simply wouldn't exist. Any modification to §230 would be extremely risky and could put the internet as we know it in jeopardy; if anything were to be done, it would need to be handled with extreme care and foresight. People in this camp also believe the current discourse on §230 has been hijacked by people who simply don't understand what the law entails, holding very strong opinions despite knowing very little about the subject. They feel this wouldn't be such a polarizing subject if most people realized their gripes with the current state of the internet are unrelated to this law.
An Outlook of Section 230's Future
As the previous sections demonstrate, public sentiment offers many competing views on how §230 ought to be handled moving forward. It's impossible to know exactly which arguments will win out in the end, but we can make some educated guesses. In my opinion, the current trajectory of §230 points toward piecemeal reform.
Stealth Amendments
CDA §230 is likely to be reformed through small amendments tucked into future laws. This seems to be the new standard, as two recent laws have already done it. The Stop Enabling Sex Traffickers Act (SESTA) and the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA) were passed in April 2018. Both laws clarified U.S. sex trafficking law while also nerfing the protections granted to service providers under §230. After these bills were passed, providers no longer had immunity for content pertaining to sex trafficking and became subject to state and federal sex trafficking laws. This is technically a blow to §230 protections, and it may pave the way for less reasonable amendments going forward.
Trump & Biden’s Calls for Repeal
Several prominent politicians have stated, on multiple occasions, that they would work on removing §230 if given the chance.
One of the more vocal politicians on this topic was Donald Trump. Throughout his presidency, Trump made it clear that he intended to repeal §230 eventually. As stated by Trump, "Section 230 was not intended to allow a handful of companies to grow into titans controlling vital avenues for our national discourse under the guise of promoting open forums for debate, and then to provide those behemoths blanket immunity when they use their power to censor …". He acted on that promise in May 2020 by issuing the Executive Order on Preventing Online Censorship. This EO laid out the administration's interpretation of §230, explored the order's legal implications, and set out a plan for Title 47 agencies to modify how the law is applied.
Another vocal politician on the subject was President Joe Biden. During his presidential campaign in 2020, Biden was very opinionated on the topic. To quote Biden, "I, for one, think we should be considering taking away [Facebook's] exemption that they cannot be sued for knowingly engaging in, in promoting something that's not true." His focus was on Facebook, as misinformation seemed to him especially prevalent on that platform. Overall, his stance on §230 seemed coherent at the time, but his more recent actions paint a different picture. Earlier this year, Biden repealed Trump's executive order on §230 after several months of deliberation. This could be read as Biden flipping sides on the issue after being elected, or simply as him continuing the trend of repealing most Trump-era executive orders. Either way, it may be safe to say that §230 is out of the executive branch's crosshairs, for now.
Sources
The Differences Between Publishers, Platforms and Distributors
Breaking Down Section 230's Protections
Common Law Cases for Section 230
- https://www.eff.org/issues/cda230/legal
- https://en.wikipedia.org/wiki/Stratton_Oakmont
- https://en.wikipedia.org/wiki/Cubby,_Inc
- https://casetext.com/case/ma-v-village-voice-media-holdings
- https://globalfreedomofexpression.columbia.edu
An Outlook of Section 230's Future