About this report

Our research team conducted an analysis of proposed or implemented regulations and identified four broad groups of interventions. Some measures target social media platforms, requiring them to take down content, improve transparency, or tighten data protection mechanisms. Other measures focus on civil actors and media organisations, on supporting literacy and advocacy efforts, and on improving standards for journalistic content production and dissemination. A third group of interventions targets governments themselves, obligating them to invest in security and defence programs that combat election interference, or to initiate formal inquiries into such matters. Finally, a fourth group of interventions criminalises the automated generation of messages and the spread of disinformation.

There has long been a tension between allowing free speech to flourish and limiting the spread of undesirable forms of online content that promote hate, terrorism, and child pornography. Blocking, filtering, censorship mechanisms, and digital literacy campaigns have generally been the cornerstones of the regulatory frameworks introduced in most countries, but with the growing challenges surrounding disinformation and propaganda, new approaches to old problems are emerging. This paper provides an updated inventory of these new measures and interventions.

Methodology

We have created an inventory of government initiatives to tackle the multi-dimensional problems related to the malicious use of social media, such as the spread of disinformation and misinformation, automation and political bots, foreign influence operations, the malicious collection and use of data, and the weaponisation of attention-driven algorithms. We identified cases in three stages. First, we compiled a list of the 100 countries with the largest number of Internet users in 2016. Second, we conducted an analysis using keywords related to social media manipulation, including fake news, political bots, online or computational propaganda, misinformation, and disinformation. Third, we searched for these terms in combination with each of the 100 countries, as well as the keywords ‘law’, ‘bill’, ‘legislation’, ‘act’, and ‘regulation’, in order to identify instances of government responses to social media manipulation.
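To make the search procedure above concrete, the sketch below generates one query per combination of manipulation term, legal term, and country. It is an illustrative reconstruction only: the country list is truncated to three stand-ins, and the query format and the `build_queries` helper are assumptions based on the keywords named above, not the research team's actual tooling.

```python
# Illustrative sketch of the query-generation stage described above.
# The shortened country list and the query format are assumptions,
# not the research team's actual tooling.
from itertools import product

MANIPULATION_TERMS = [
    "fake news", "political bots", "computational propaganda",
    "misinformation", "disinformation",
]
LEGAL_TERMS = ["law", "bill", "legislation", "act", "regulation"]
COUNTRIES = ["Germany", "France", "Brazil"]  # stand-in for the top-100 list

def build_queries(countries, topics, legal_terms):
    """Yield one search query per country/topic/legal-term combination."""
    for country, topic, legal in product(countries, topics, legal_terms):
        yield f'"{topic}" {legal} {country}'

for query in build_queries(COUNTRIES, MANIPULATION_TERMS, LEGAL_TERMS):
    print(query)
```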

Using this approach, we identified a total of 43 cases since 2016 in which governments have introduced regulation in response to the malicious use of social media; these measures are variously complete, in progress, or dismissed. The case studies were last updated in October 2018. We then drafted a short case study for each country, which was reviewed by country-specific experts who verified the accuracy of our information and provided additional country-specific detail that we could not glean from non-English-language bills or news sources. A summary of the various legal and regulatory mechanisms we identified can be found in Appendix 1.

We limited our analysis to legal and regulatory interventions designed in response to social media manipulation, and focused strictly on new or recently updated legal measures proposed in response to allegations of foreign interference in elections around the world, starting with the US election in 2016. For example, both Ghana and The Gambia have long-standing legislation designed to tackle digital misinformation, but these countries are not included in our analysis because that legislation was not designed in response to the growing challenge of malicious use of social media. We have not reviewed measures where government legislative or executive branches have expressed interest in regulation but have yet to put forward concrete proposals. We have also excluded pre-existing security-related solutions to disinformation or online propaganda; laws around hate speech, censorship, political campaigning or advertising, and foreign intelligence; and other interventions that may have a digital aspect but are not directly aimed at the growing proliferation of social media manipulation in democracies. Finally, we have not looked into regional or transnational initiatives, but we have highlighted some of the more significant efforts initiated by the European Union, NATO, and other international organisations.

Conclusion


There is no simple blueprint for tackling the multiple challenges presented by the malicious use of social media. In the current, highly politicised environment driving legal and regulatory interventions, many proposed countermeasures remain fragmentary, heavy-handed, and ill-equipped to deal with the problem. Government regulations thus far have focused mainly on regulating speech online, ranging from the redefinition of what constitutes harmful content to measures that require platforms to take a more authoritative role in taking down information with limited government oversight. However, harmful content is only a symptom of a much broader problem underlying the current information ecosystem. Measures that attempt to redefine harmful content or place the burden on social media platforms fail to address deeper systemic challenges, and could produce unintended consequences that stifle freedom of speech online and restrict civil liberties.

As content restrictions and controls become mainstream, authoritarian regimes have begun to appropriate them in an attempt to tighten their grip on national information flows. Several authoritarian governments have introduced legislation designed to regulate social media pages as media publishers, to fine or imprison users for sharing or spreading certain kinds of information, and to enforce even broader definitions of harmful content that require government control. As democratic governments continue to develop content controls to address the malicious use of social media in an increasingly securitised environment, authoritarian governments are using this moment to legitimise suppression of, and interference in, the digital sphere.

Looking ahead, we encourage policymakers to shift away from crude measures that control and criminalise content and to focus instead on issues surrounding algorithmic transparency, digital advertising, and data privacy. Thus far, countermeasures have not addressed algorithmic transparency and platform accountability: a core issue is the unwillingness of social media platforms to engage in constructive dialogue as their technology becomes more complex. Because platforms have been protective of their innovations and reluctant to share open-access data for research, their algorithms and artificial intelligence are black-boxed to such an extent that sustained public scrutiny, oversight, and regulation demand the cooperation of the platforms themselves. Governments have put forward transparency requirements for political advertisements online, such as the Honest Ads Act in the United States. While some platforms have begun to self-regulate, their self-prescribed remedies often fall short of providing effective countermeasures and enforcement mechanisms.

Such legislation is important for addressing particular aspects of foreign interference in elections, such as the artificial inflation of hot-button issues or junk news designed to suppress voter turnout. However, many threats to the democratic process also come from within, and there is currently a lack of transparency regarding how misinformation spreads organically through likes and shares, and regarding how political parties use social media to target voting constituencies with advertising. Finally, while Europe’s GDPR helps prevent some of the challenges arising from the malicious use of social media, and could have helped avert or remedy scandals such as Cambridge Analytica, data protection laws remain highly fragmented. Like-minded democratic governments should work together to develop global standards and best practices for data protection, algorithmic transparency, and ethical product design.