Publication Date: November 2018
Publisher: Data & Society
Research and Editorial Team: Robyn Caplan

The author interviewed representatives of 10 major platforms (websites and services that host user content and social interactions, produce little content themselves, and moderate their users' content and activities). Platforms are increasingly being called upon to make ethical decisions regarding content. Most of them, however, keep their content moderation policies hidden.

Platforms may be differentiated based on:

  • technology: search engines are different from social media, as they do not host content themselves; 
  • business model: Facebook and Google are ad-based, which arguably creates the most problems; Medium is subscription-based; Wikimedia (the owner of Wikipedia) is a nonprofit;
  • size of company: smaller companies may have much more limited resources than Google or Facebook.

The author distinguishes three approaches. The artisanal approach is used, for example, by Vimeo, Medium, and Discord, and was used in the early days of Facebook and Google. There is no automated decision-making; decisions are taken case by case, and rules are sometimes informal. Moderation is done by small in-house teams, normally of between 5 and 200 people.

The community-reliant approach is used by websites such as Wikipedia and Reddit. There is a separation of powers between the parent organisation and the communities: the parent organisation creates overarching norms and standards, and communities can then create their own rules, which must respect the general ones. This allows for localised rules, but relationships between the parent organisation and volunteers can at times be difficult. It is difficult to know the exact number of volunteer moderators.

Larger companies like Facebook and Google (which owns YouTube) use an industrial approach, with the goal of creating a “decision factory”. They have more resources and more personnel: Facebook, for example, expected to have a moderation team of 20,000 by the end of 2018. Policy development and enforcement are separated. YouTube also separates moderators based on language fluency and expertise. Companies often start with an artisanal approach, then grow, formalise their rules, and become industrial. Automated tools are increasingly used to flag and even remove content.

The quantity of content is enormous. There is an inherent tension between having consistent rules for content and being sensitive to localised contexts. Global rules can have a negative impact on marginalised groups. The necessity to regulate can be used as an excuse for censorship.

In Germany, the NetzDG extended hate speech rules from traditional media to profit-making social media sites and to platforms offering journalistic or editorial content that have over two million registered users and receive more than 100 complaints per year about unlawful content.

The law in the United States distinguishes between platforms and publishers. Under Section 230 of the Communications Decency Act, platforms retain immunity for non-illegal and non-copyrighted content and are allowed to voluntarily “restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected”. It has been claimed that this gave a competitive advantage to US internet companies, but it has also been criticised as too broad a liability shield. Representatives of some platforms said that the lack of clear guidelines is limiting. Other rules, such as the DMCA for copyright, are clearer, but while big companies can automate takedowns, others must perform them manually.

Tags: Fake news and disinformation, Social media, Freedom of expression, Censorship, Hate speech

The content of this article can be used according to the terms of Creative Commons: Attribution-NonCommercial 4.0 International (CC BY-NC 4.0). To do so, use the wording "this article was originally published on the Resource Centre on Media Freedom in Europe" including a direct active link to the original article page.