The authors define the Digital Influence Machine (DIM) as an infrastructure of data collection and targeting capacities developed by ad platforms, web publishers, and other intermediaries. They argue that using the DIM to identify and target the vulnerabilities of individuals or groups is a form of weaponization.
The DIM has been made possible by three connected developments in communication technologies. The first is the advance of techniques for automated online consumer surveillance and profiling. The second is the emergence of new targeting mechanisms, which make it possible not only to refine the composition of an audience, but also to decide when and where that audience will be shown an ad. The third is the automated optimisation of influence campaigns, using AI and split (A/B) testing.
Three key shifts in the media and political landscape have enabled this in the United States. The first is the decline of professional journalism, which has arguably contributed to the spread of disinformation. The second is the growth of financial resources for political influence. This includes “dark money”, where the donor’s identity is concealed, enabling astroturfing (fake grassroots movements) and foreign interference (e.g. by the Russian Internet Research Agency in the 2016 US elections). The third is the improvement of political microtargeting, with poor or absent regulation. Data-driven political campaigning is nothing new (computer-aided direct mail has been in use since the 1970s), but new technologies have led to substantially new possibilities.
Political actors use three main strategies to weaponize the DIM. The first is to mobilise supporters through threats to group identity, with microtargeting allowing the use of extreme messages that would be counterproductive if shown to, or even just known by, the general public. The second is to divide the opponent’s coalition, sowing discord between candidates in primary elections, or running voter suppression operations that push for abstention. The third is the growing use of behavioural science in political marketing to identify and target psychological vulnerabilities.
The authors propose several legislative reforms, including obliging political advertisers to disclose big donors and requiring platforms to publish reports about political ads, including their sponsors and targeting parameters. New data protection and privacy regulation could be inspired by the European Union’s GDPR. An independent public journalism trust, financed by a tax on digital platforms, could counterbalance the loss of advertising revenues.
Some improvements might come from self-regulation. Companies could refuse to work with dark money groups by requiring sponsors of political ads to disclose their major donors. Platforms could limit weaponization by “requiring explicit, non-coercive user consent for viewing any political ads that are part of a split-testing experiment”. Independent committees, including diverse communities and stakeholders, could help develop future ethical guidelines for political advertising.
The content of this article can be used according to the terms of Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0). To do so, use the wording "this article was originally published on the Resource Centre on Media Freedom in Europe", including a direct active link to the original article page.