On 28 September 2017, the European Commission published its Communication “Tackling Illegal Content Online”. An initial analysis of the Communication was done by EDRi and is captured in these two contributions. To avoid repeating those arguments, this article focuses on the repercussions of the Communication in the domain of private content censorship by various online platforms.

Defining illegal content

The Communication places great emphasis on the need to address an “increasing availability of terrorist material online”. While the predicament of the EU authorities is clearly articulated, the resulting efforts to counter terrorism-related speech may be detrimental to the human rights status quo. In the often-cited words of the United Nations Special Rapporteur on freedom of expression, David Kaye, efforts to counter violent extremism can be the “perfect excuse” to restrict free expression and to seek to control access to information. Any energetic initiative to curb the terrorist threat must also grapple with the delicate task of safeguarding the human rights that are inevitably part of the equation.

Of course, terrorism and hate speech are not the only examples of allegedly illegal expression online. The Communication also acknowledges such problematic areas as child pornography and sexual exploitation, copyright infringement, and the sale of counterfeit goods. These challenges are not easy to respond to, and several relatively recent IP-related fiascos are still fresh in our minds – for example, the Hadopi legislation in France or the heated SOPA/PIPA disputes in the US.

The role of private platforms

The Communication emphasises the significant societal responsibility of online platforms that mediate access to content for internet users, and aims to lay down “a set of guidelines and principles for online platforms to step up the fight against illegal content online”. At the same time, a red thread runs through the text of the Communication: the authors’ intention to shift the decision of what is legal and what is illegal in each specific case from the shoulders of the state authorities towards the private sector. The principle that online intermediaries are under no general obligation to monitor the content they transmit is mentioned in the Communication, but glossed over. Further, the Communication states that “online platforms should do their utmost to proactively detect, identify and remove illegal content online”. Apparently, at a time when 300 hours of video are uploaded to YouTube every minute, the EU regulators see responsibility for the filtering and takedown of illegal content as resting firmly in the hands of the private sector. This solution is fraught with unintended and unexpected consequences and needs to be carefully considered.

Firstly, outsourcing content takedown means that the content taken down by an online intermediary is not illegal stricto sensu – only the courts can determine that – but merely assumed to be illegal by the intermediary. The legal position of the online intermediary is therefore shaky at best, “fraught with legal uncertainty, edging into human rights law as well as commercial and telecoms law” (see “The Closing of the Net”, page 88). While it may be relatively easy to determine what is lawful under intellectual property law, defining what constitutes hate speech is not a trivial undertaking. One should bear in mind that freedom of expression also encompasses the right to impart information or ideas that offend, shock or disturb. A private actor that chooses to err on the side of caution and decides to remove certain offending content introduces friction into the online marketplace of ideas. Even if hastily undone, such decisions can prove quite damaging, given the inherent dynamics and real-time nature of online communications.

Secondly, the Communication deliberately introduces a second degree of intermediation, suggesting that online platforms cut their monitoring costs by relying on the judgement of so-called “trusted flaggers” to determine what is legal and what is not. These “trusted flaggers” are defined as specialised entities with “specific expertise in identifying illegal content, and dedicated structures for detecting and identifying such content online.” The Communication is regrettably silent on the modalities of cooperation between online platforms and “trusted flaggers”, including such contentious issues as conflicts of interest and remuneration arrangements.

Thirdly, the Communication is enthusiastic about fully automated deletion or suspension of content. In the authors’ words, this takedown modality “can be particularly effective and should be applied where the circumstances leave little doubt about the illegality of the material”. But, given the abundance of grey areas when it comes to freedom of expression, are there any circumstances in which there is so little doubt about the legality of a piece of content that its deletion can be entrusted to an algorithm? The European Union seems rather cautious about permitting self-driving cars on its roads. Why not exercise the same restraint when it comes to online content filtering?

Which limits?

Finally, the section of the Communication on safeguards against the removal of legal content is notably short and written in general, unspecific language. Clear accountability principles for online platforms engaged in over-zealous removal of content are called for, but the Communication does not introduce them.

To conclude, while it is true that “a harmonised and coherent approach to removing illegal content does not exist at present in the EU” – as the Communication puts it – any harmonisation effort in this field should raise the human rights standard as high as possible, rather than emphasising the cost- and time-saving benefits for those moderating online content. Sadly, the Communication seems to miss this point. In the author’s view, some of the proposed solutions may provoke a race to the bottom among online platforms – a race in which freedom of expression may fall victim.

*Oleg Soldatov is a PhD researcher at Bocconi University in Milan. He worked as a lawyer at the European Court of Human Rights from 2011 to 2014 and as a legal advisor at the Council of Europe from 2015 to 2016.

Tags: Digital rights, Access to information, European policies and legislation, Censorship