
Brexit bots increased division


A RESEARCH collaboration between academics at Swansea University and the University of California, Berkeley suggests that automated software agents, or ‘bots’, were used to spread both ‘leave’ and ‘remain’ social media stories during and after the Brexit referendum, driving the two sides of the debate further apart.

The research, by Professor Talavera and PhD student Tho Pham from the School of Management, in collaboration with Professor Yuriy Gorodnichenko of the University of California, Berkeley, focused on information diffusion on Twitter in the run-up to the EU Referendum.

Professor Talavera said: “With the development of technology, social media sites like Twitter and Facebook are often used as tools to express and spread feelings and opinions. During high-impact events like Brexit, public engagement through social media platforms quickly becomes overwhelming. However, not all social media users are real. Some, if not many, are actually automated agents, so-called bots. And more often than not, real users, or humans, are deceived by bots.”

TWITTER ANALYSIS
Using a sample of 28.6 million #Brexit-related tweets collected from 24 May 2016 to 17 August 2016, the researchers found that Twitter bots accounted for approximately 20 per cent of the users in the sample. Given how frequently humans re-tweeted content originating from bots, a key question is whether human users’ opinions about Brexit were manipulated by bots.

Empirical analysis shows that information about Brexit spread quickly among users: most of the reaction to a tweet happened within 10 minutes, suggesting that for issues critically important to people, or widely covered in the media, informational rigidity is very small. Beyond the speed of information spread, an important finding is that bots appear to influence humans.
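For readers curious about what such a measurement involves, the short Python sketch below is purely illustrative and is not the authors’ code: it assumes a toy list of original-tweet and re-tweet timestamps and computes the share of reactions arriving within 10 minutes of the original post.

from datetime import datetime, timedelta

# Assumed toy data: each pair is (time of original tweet, time of the re-tweet).
retweet_pairs = [
    (datetime(2016, 6, 20, 10, 0), datetime(2016, 6, 20, 10, 3)),
    (datetime(2016, 6, 20, 10, 0), datetime(2016, 6, 20, 10, 8)),
    (datetime(2016, 6, 20, 11, 0), datetime(2016, 6, 20, 11, 45)),
]

# Reaction lag for each re-tweet.
lags = [retweet_time - original_time for original_time, retweet_time in retweet_pairs]

# Share of re-tweets occurring within 10 minutes of the original tweet.
within_10_min = sum(lag <= timedelta(minutes=10) for lag in lags) / len(lags)
print(f"Share of reactions within 10 minutes: {within_10_min:.0%}")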

However, the degree of influence depends on whether a bot provides information consistent with the views the human user already holds. More specifically, a bot supporting leaving the EU has a stronger effect on a “leave”-supporting human than on a “remain”-supporting human.

‘ECHO CHAMBER’
Further investigation shows that “leavers” were more likely to be influenced by bots than “remainers”. These results suggest that the dissemination of information is consistent with what is frequently referred to as an ‘echo chamber’: a situation in which information, ideas, or beliefs are amplified or reinforced by communication and repetition inside a defined system. The outcome is that information becomes more fragmented, rather than uniform, across people.

Professor Talavera added: “Social bots spread and amplify misinformation, thus influencing what humans think about a given issue. Moreover, social media users are more likely to believe (or even embrace) fake news that is in line with their opinions. At the same time, these users distance themselves from reliable information sources reporting news that contradicts their beliefs. As a result, information polarisation increases, which makes reaching consensus on important public issues more difficult.”

“It is now vital that policymakers and social media companies seriously consider mechanisms to discourage the use of social bots to manipulate public opinion.”
