About this report

This study outlines the salient developments in the malicious use of social media. The first chapters examine seven developments in the malicious use of social media through case studies supported by interviews with experts in the respective fields. These chapters also comment on the future trajectory of these developments and recommend policy changes. The final chapter provides conclusions and takeaways from this research. 

Trends and developments in social media manipulation: 

  • The current state of play is a cat-and-mouse game between malicious actors, governments, and the new media industry. As social media companies and other actors take action to counter abuse, malicious actors adapt to the new environment. This has led to, among other things, an increase in the sophistication of cyborgs and trolls as simple automated accounts are taken down. 

  • Impersonation is commonly used both for the spread of disinformation and for social engineering attacks with different degrees of sophistication, sometimes attempting to create real-life events through online activity. Continued technological development in the field of artificial intelligence and frighteningly realistic ‘deepfake’ video and audio techniques may allow impersonation attacks to become even more credible in the future. 

  • The methods and platforms used to disseminate disinformation are changing. The increased use of encrypted platforms, such as WhatsApp or closed Facebook groups, makes it increasingly difficult to identify ongoing information operations. Furthermore, malicious actors are more effective than before in covering their own tracks. 

Conclusions

The technological advancements of our time have created new ways to influence public opinion. These tools are now available to anyone, often in ways we do not yet fully comprehend. 

During 2018, there was a relative decrease in the use of social media for news consumption around the world. The Reuters Institute assessed that this might be a consequence of Facebook changing its algorithm to downgrade news content.29 Other observers have identified a partial shift from traditional social media, such as Facebook, to peer-to-peer encrypted chat applications, such as WhatsApp, Signal, and Snapchat. This change has worrying consequences for how disinformation is disseminated, as the design of these platforms makes it even more difficult for regulators to identify and counter malicious use of social media.30 As a result, individual users have a greater responsibility to critically evaluate the information they consume, and social media companies should take the necessary steps to tackle this growing threat. 

Another shift making disinformation more difficult to monitor is that information operations are increasingly moving from open pages to closed groups, seeding disinformation to ‘invisible’ groups that later disseminate it to the public. In other words, just as malign actors are now taking advantage of peer-to-peer encrypted applications, they are also leveraging the potential reach of social media in combination with the platforms’ closed group function. 

As things stand, malign actors are able to hide behind anonymous social media accounts, pages, or groups, exploiting a system designed to protect privacy rights. Much of the news we now consume is being promoted without source attribution and without advertising transparency. This provides many opportunities for malign actors to target unsuspecting audiences with disinformation and other forms of information activities without users ever knowing about it. 

The malicious use of social media has developed into a flourishing economy, and there are too few obstacles standing in the way of this malicious practice. The social media companies base their economic model on advertising. This model is being harnessed by malicious actors who can pay to promote destructive content. The bot industry has developed into a lucrative market where people make a living creating more or less advanced bots that boost views, likes, and shares in the social media space, manipulating the information environment on social media.31 Buying bots is neither demanding nor expensive. A NATO StratCom CoE publication on the black market of social media manipulation provides an in-depth analysis of this problem. 

These are the broader vulnerabilities that enable the abuse of the online information environment, through which malign actors can manipulate public opinion, trick people, and undermine trust in society. Other vulnerabilities, such as gaps in training and education and levels of trust in media and governmental actors, are contextual and vary from nation to nation. The malicious use of social media is not merely a question of abuse of the terms and policies of the social media platforms; it is as much a question of abuse of the human mind and the fundamental tenets on which our democratic societies are based. 

Social media companies themselves are also inadvertently creating vulnerabilities when they update their platforms. Recent changes to Facebook Graph Search better protect user information but also seriously hamper the ability of external researchers to identify and analyse malicious use of Facebook.32 We will need to find better answers to the trade-offs between privacy and transparency – between the right to be anonymous and the need for accountability – in the months and years to come. 
