A recent study has investigated the role that Twitter bots played in spreading misinformation in the period surrounding the 2016 United States presidential election.
Researchers at Indiana University (IU) analyzed 14 million Twitter messages that spread 400,000 articles during a 10-month period that preceded and followed the election.
Digital technology – and social media in particular – has made it easier to spread misinformation. Image: pixabay.com
They found evidence that Twitter bots “played a disproportionate role in spreading articles from low-credibility sources.”
For example, they found that:
– 31 percent of the misinformation on Twitter was spread by just 6 percent of accounts, which the researchers identified as likely bots.
– 34 percent of all articles from “low-credibility sources” also came from this same 6 percent of accounts (a minimal sketch of this arithmetic follows the list).
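To make the arithmetic behind such figures concrete, here is a minimal Python sketch, not taken from the study, showing how these shares can be computed once each tweet’s account has been labeled as a likely bot or not. The records and the labels are hypothetical.

```python
# Minimal sketch (not the authors' code): given tweets that link to
# low-credibility articles, each tagged with whether the posting account
# was flagged as a likely bot, compute the bots' share of accounts and
# of tweets. All records below are hypothetical.

# (account_id, is_likely_bot) for each tweet of a low-credibility article
tweets = [
    ("a1", True), ("a1", True), ("a2", False),
    ("a3", False), ("a4", True), ("a5", False),
]

accounts = {acct for acct, _ in tweets}
bot_accounts = {acct for acct, is_bot in tweets if is_bot}

bot_share_of_accounts = len(bot_accounts) / len(accounts)
bot_share_of_tweets = sum(is_bot for _, is_bot in tweets) / len(tweets)

print(f"{bot_share_of_accounts:.0%} of accounts are likely bots")   # 40%
print(f"{bot_share_of_tweets:.0%} of these tweets came from them")  # 50%
```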
Messages can ‘spread very quickly’
They also found that Twitter bots play a strong role in promoting low-credibility articles in the first few seconds after they appear, before they go viral.
The brevity of this window is a major challenge in countering the online spread of misinformation.
The researchers draw a parallel with the stock market, where very high-frequency trading means that problems can escalate within seconds.
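How such early amplification could be measured is easiest to see in code. The following is a rough sketch under assumed data, not the paper’s method: given timestamped shares of one article, it computes the fraction of the earliest shares made by likely bots.

```python
# Rough sketch (an illustration, not the study's method): measure how much
# of an article's earliest activity comes from likely-bot accounts.
# Timestamps are seconds since the article first appeared; all data here
# are hypothetical.

def early_bot_share(shares, window_seconds=5.0):
    """Fraction of shares in the first `window_seconds` made by likely bots."""
    early = [s for s in shares if s["t"] <= window_seconds]
    if not early:
        return 0.0
    return sum(s["is_bot"] for s in early) / len(early)

shares = [
    {"t": 0.4, "is_bot": True},    # bots often move within seconds
    {"t": 1.1, "is_bot": True},
    {"t": 2.9, "is_bot": False},
    {"t": 45.0, "is_bot": False},  # later activity, mostly human
]

print(f"Bot share in first 5 s: {early_bot_share(shares):.0%}")  # -> 67%
```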
The findings were published as a detailed paper in the journal Nature Communications.
“This study finds,” says lead investigator Filippo Menczer, who is a professor of computer science and informatics, “that bots significantly contribute to the spread of misinformation online.”
It also shows, he adds, “how quickly these messages can spread.”
He and his co-authors suggest that “curbing social bots” could be an effective way to reduce the “spread of low-credibility content” online.
Various biases working together
In their study paper they define misinformation as the daily exposure of social media users to “false or misleading news reports, hoaxes, conspiracy theories, click-bait headlines, junk science, and even satire.”
Some researchers have shown that the “massive spread of digital misinformation” has already caused harm in areas such as public health and finance.
There have also been allegations – as yet unproven, note the authors – that digital misinformation poses a global risk that threatens to interfere with elections and undermine democracies.
Scientists recognize that various “cognitive, social, and algorithmic biases” work together in a complex way to increase users’ vulnerability to online misinformation.
An example of this is “confirmation bias,” a person’s tendency to seek out or favor information that confirms a pre-existing belief. Other examples include the novelty appeal of false news, and “information overload and finite attention.”
Messages ‘more likely to be shared’
The fabrication of news has been around for a long time. Digital technology, and social media in particular, has simply made it easier to propagate misinformation faster and in a more manipulative way.
“Public opinion can be influenced thanks to the low cost of producing fraudulent websites and high volumes of software-controlled profiles, known as social bots,” write the authors.
The new study also found that Twitter bots are very good at “amplifying the volume and visibility” of messages. This makes the messages “more likely to be shared,” even though bots represent only a small proportion of the accounts whose messages go viral.
This could be because Twitter bots come across as real people. They exploit the fact that people tend to pay attention to popular topics and to trust information that appears in their chosen social settings and is relayed by social contacts.
“Bots prey upon this trust by making messages seem so popular that real people are tricked into spreading their messages for them,” says co-author Giovanni Luca Ciampaglia, who was an assistant research scientist at IU during the study.
Twitter simulation
As well as identifying a number of tactics that Twitter bots use to spread misinformation, the team also ran a Twitter simulation experiment.
This showed that removing just 10 percent of accounts – those judged most likely to be Twitter bots – led to a significant drop in the number of articles from low-credibility sources.
“This experiment suggests that the elimination of bots from social networks would significantly reduce the amount of misinformation on these networks,” Prof. Menczer concludes.
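The intuition behind this result can be reproduced in a toy model. The sketch below is purely illustrative, not the authors’ simulation: a small, aggressively resharing minority is removed from a random follower network, and the average spread of an article drops. Every parameter is an assumption.

```python
# Toy simulation (illustrative only, not the paper's model): spread an
# article through a random follower network in which "bot" accounts
# reshare far more readily than humans, then repeat after removing the
# 10 percent of accounts with the highest bot scores.

import random

random.seed(42)

N = 2000                      # number of accounts
FOLLOWERS = 8                 # followers per account
P_HUMAN, P_BOT = 0.02, 0.60   # assumed reshare probabilities

accounts = list(range(N))
bot_score = {a: random.random() for a in accounts}   # proxy bot-likeness score
cutoff = sorted(bot_score.values())[int(0.9 * N)]    # 90th-percentile score
is_bot = {a: bot_score[a] >= cutoff for a in accounts}
followers = {a: random.sample(accounts, FOLLOWERS) for a in accounts}

def cascade(active):
    """One reshare cascade from a random seed; returns how many accounts shared."""
    seed = random.choice(list(active))
    shared, frontier = {seed}, [seed]
    while frontier:
        nxt = []
        for a in frontier:
            for f in followers[a]:
                if f in active and f not in shared:
                    p = P_BOT if is_bot[f] else P_HUMAN
                    if random.random() < p:
                        shared.add(f)
                        nxt.append(f)
        frontier = nxt
    return len(shared)

def mean_spread(active, runs=200):
    active = set(active)
    return sum(cascade(active) for _ in range(runs)) / runs

pruned = [a for a in accounts if not is_bot[a]]      # drop the likely bots

print("mean spread, full network :", mean_spread(accounts))
print("mean spread, bots removed :", mean_spread(pruned))
```

As in the paper’s experiment, the reduction comes from pruning a small but highly active minority, not from removing a large share of ordinary users.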
He and his colleagues also suggest a number of strategies that social networks could use to curb the spread of misinformation. These range from better bot-detection algorithms to requiring more human involvement when messages are posted.
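For a flavor of what bot detection involves, here is a deliberately simple heuristic scoring function. Real detectors, such as the Botometer tool developed at Indiana University, use machine learning over many more features; every signal, threshold, and weight below is an assumption for illustration.

```python
# Illustrative heuristic only (real systems use supervised machine learning
# over many more behavioural features): score an account's bot-likeness
# from a few simple signals. Thresholds and weights are assumptions.

def bot_score(tweets_per_day: float, retweet_fraction: float,
              account_age_days: int) -> float:
    """Return a 0-1 heuristic score; higher means more bot-like."""
    score = 0.0
    if tweets_per_day > 100:      # inhuman posting volume
        score += 0.4
    if retweet_fraction > 0.9:    # almost never writes original content
        score += 0.4
    if account_age_days < 30:     # newly created account
        score += 0.2
    return score

# A platform might flag accounts above some threshold for human review.
print(bot_score(tweets_per_day=250, retweet_fraction=0.95, account_age_days=10))  # 1.0
```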