Publication Date: December 2018
Research and Editorial Team: Timothy Garton Ash, Robert Gorwa, Danaë Metaxa

The growing influence of platforms like Facebook is raising concerns about the political impact of hate speech, harassment, extremist content contributing to terrorism, polarisation, disinformation, and covert political advertising. These platforms, recognising their role and responsibilities as a major part of the global digital public square, are trying to implement processes for self-regulation and governance.

First, the report considers content policy: the process by which the rules for what speech and activity are permissible on Facebook are developed and enforced.

Moderation processes have traditionally been very opaque. In recent years, Facebook has introduced a series of changes and improvements: in particular, it has published internal guidelines for enforcing its Community Standards, released data on that enforcement, established an appeal process, and more than doubled the number of its content reviewers.

The report recommends that Facebook:

1. Tighten the wording of its Community Standards on hate speech, avoiding vague definitions to enable more consistent implementation.

2. Hire more content reviewers with relevant cultural expertise. Artificial intelligence can be a powerful tool for content review, but a human component with cultural and political expertise is essential to distinguish merely offensive speech from dangerous speech.

3. Increase ‘decisional transparency’ by providing more detail on moderation policies and procedures and sharing case studies, giving users a better understanding of the basis on which content decisions are made, as well as potential grounds for appeal.

4. Expand and improve the appeal process, so that reviewers receive more contextual information about the content under review. More information should also be made available to analysts and users on a regular basis, in order to support a global dialogue.


The report then analyses the News Feed, the assemblage of algorithms that directly mediates the information to which Facebook users are exposed.

Facebook has made numerous adjustments to its News Feed with implications for free speech and democracy, from changing the feed's features to introducing fact-checking partnerships and tools to improve transparency around political ads.

In addition, Facebook should:

5. Provide meaningful News Feed controls for users, allowing them to choose the type of information they are exposed to. Furthermore, Facebook should support digital and media literacy in a more consistent way.

6. Expand context and fact-checking facilities to reduce misinformation and increase the circulation of trusted sources of information, especially outside of the Global North.


Finally, the report focuses on governance structures and practices, noting that, despite the initiatives aimed at improving Facebook's transparency, there is still a lack of accountability to users, who have no say over important political decisions.

In 2018, Facebook began supporting collaborations with academics, providing them with data for research projects focused on democracy and elections, and submitted to three rights audits. Furthermore, in November, Mark Zuckerberg announced that Facebook would create an external moderation council to improve transparency and independence.

In addition to these steps, the report recommends that Facebook also:

7. Establish regular auditing mechanisms, involving trusted third parties that would assess practices, products, and algorithms for undesirable outcomes, and identify possible improvements.

8. Create an external content policy advisory group that would provide ongoing feedback on the standards and implementation of content policy.

9. Establish a meaningful external appeal body, seeking to include civil society and digital rights advocacy groups.

Although this report focuses on Facebook, governance and accountability issues are relevant to several other platforms (such as WhatsApp and Instagram, but also YouTube and Twitter), which should offer their users not only more information but also more active control.

Tags: Social media, Transparency, Fact-checking, Hate speech, Fake news and disinformation, Privacy

The content of this article can be used according to the terms of Creative Commons: Attribution-NonCommercial 4.0 International (CC BY-NC 4.0). To do so, use the wording "this article was originally published on the Resource Centre on Media Freedom in Europe", including a direct active link to the original article page.