How Social Media Companies are Failing to Combat Inauthentic Behaviour Online

At the height of the 2020 US presidential election, the social media accounts of two US Senators were easily manipulated using fake engagement bought from Russian social media manipulation outfits. We have published a new report, “Social Media Manipulation 2020: How Social Media Companies are Failing to Combat Inauthentic Behaviour Online”, which describes the experiment and evaluates the ability of social media platforms to counter manipulation.

Senators Chuck Grassley (Republican) and Chris Murphy (Democrat) agreed to this experiment to test whether their verified social media accounts were protected against manipulation by foreign actors and bots at the height of a high-stakes election cycle in the US. Interfering with voter choices and distorting democratic debate on social media remains worryingly cheap and easy. These are the stark findings of the latest study by the NATO Strategic Communications Centre of Excellence.

“We see that manipulation on social media remains too accessible. This is especially troubling during times of crisis, as these systems can be exploited to stoke up emotions and deepen vulnerabilities within our societies. This year we included TikTok as part of the experiment. The results show that we need a universal set of rules that these platforms have to follow, both in countering inauthentic manipulation and in creating a safe environment for the people who use them,” said Jānis Sarts, director of the NATO StratCom COE.

The researchers found Instagram the easiest platform to manipulate: 1,803 fake comments and 103 fake likes were delivered rapidly, and nearly in full, on the senators’ posts for all of $7.30. Facebook’s comment section was harder to manipulate with bot accounts; the manipulation service had to pay real people to post the fake comments. Manipulation of this type is harder to detect, but also harder to scale. On Twitter, 75% of the inauthentic replies were delivered within 24 hours, and none had been removed by the end of the experiment.

This study builds on the first experiment, conducted in 2019 using the accounts of European Commissioners Dombrovskis, Jourova, Katainen and Vestager. This year, the researchers themselves reported a random sample of the fake accounts used in the experiment. Despite a slight improvement in the removal of reported accounts, there was no evidence of any additional safeguards for verified accounts compared with unverified ones, and no other improvements since the 2019 experiment. The platforms’ systems failed when they were supposedly at their highest alert levels. The study concludes that social media platforms remain easily exploitable by malign actors.

The study found that while some of the American platforms are making modest improvements, newer players such as TikTok have no safeguards at all against this kind of fake engagement.

Our key recommendations include much greater transparency and new safety standards from tech platforms, independent oversight and auditing of their algorithms, and regulation of the market for social media manipulation.