Concern over the use of Twitter and, more specifically, Twitter bots (accounts set up to post and share automatically) to sway political discourse is becoming a more pertinent issue as we near election season.

Some are skeptical about the efficacy or significance of such technology during an election, but few dismiss that it plays a role.

While automated accounts can help a candidate spread a positive message, the more common use, as will be discussed, is to generate outrage, manufacture trends, and push negative, sometimes borderline slanderous, messages.

This is significant because the upcoming Federal Election will likely be driven more by dislike of the opposing party than by promotion of one’s own agenda. Inevitably, this means that political action and speech will lean on negative, partisan tactics rather than positive policy support or advocacy.

As the Angus Reid Institute reported in May, “Currently, one-in-three voters (35%) say that they are planning to vote for a party because they dislike another party even more and want to prevent that party from winning.”

They further added, “This sentiment is equally high among Liberals (40%) and Conservatives (40%).”

Given this likelihood, social media outlets which provide users with the opportunity to air their grievances quickly and slam down on opinions they disagree with or find ludicrous will be more important than ever. And to that end, no other platform appears more useful for that kind of political discourse than Twitter.

So, how relevant are bots on Twitter?

A 2018 study by the Pew Research Center found that two-thirds of tweeted links to popular news websites are posted by bot accounts.

Furthermore, the 500 most-active bot accounts produced at least 22 percent of tweets linking to popular news websites, compared with six percent for the 500 most-active human-run accounts.

The study goes on to report that there is no partisan monopoly: rather than favouring websites with primarily liberal or conservative audiences, bots most frequently linked to centrist sites such as Forbes and Business Insider. They accounted for 41 percent of links to liberal-leaning sites such as NPR, the NY Times, and CNN, and 44 percent of links to conservative-leaning sites such as the NY Post and Fox News.

While some may feel relieved by this data (at least the bots are not too biased, if bots could conceivably be biased), there is a broader issue: the war for our attention and for what we believe others are thinking.

Bots are used because they’re effective, and, more importantly, time-saving. While many bots are used in a harmless fashion, such as tweeting out the daily weather, others can be used more maliciously by rapidly generating outrage through the creation of a false sense of consensus.

While John Gray, the CEO and co-founder of Mentionmapp Analytics, a social media data company, believes some of the concern over bots may be overblown, he acknowledges their relevance in shifting the debate and making it more divisive.

According to Gray, roughly 20 to 30 percent of accounts participating in Twitter conversations around certain divisive hashtags show suspicious signs of being bots, such as “tweeting more than 72 times per day on average, seven days a week,” reports CBC.
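A rate-based check like the one Gray describes can be sketched in a few lines. This is a hypothetical illustration of the general idea, not Mentionmapp’s actual method; the 72-tweets-per-day figure comes from the CBC report above, and everything else (function names, example numbers) is assumed for the sake of the example.

```python
from datetime import datetime

# Hypothetical threshold drawn from the CBC-reported heuristic:
# averaging more than 72 tweets per day is treated as bot-like.
TWEETS_PER_DAY_THRESHOLD = 72

def looks_bot_like(tweet_count: int, first_tweet: datetime, last_tweet: datetime) -> bool:
    """Return True if an account's average daily tweet rate exceeds the threshold."""
    active_days = max((last_tweet - first_tweet).days, 1)  # avoid division by zero
    return tweet_count / active_days > TWEETS_PER_DAY_THRESHOLD

# Illustrative accounts (invented numbers):
start = datetime(2019, 6, 1)
end = datetime(2019, 7, 1)
print(looks_bot_like(10_000, start, end))  # ~333 tweets/day -> True
print(looks_bot_like(500, start, end))     # ~17 tweets/day  -> False
```

Note that a rate threshold alone is crude: a very active human can exceed it, and a slow-posting bot can stay under it, which is exactly the identification problem discussed below.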

However, like all forms of AI, bots, too, are becoming more sophisticated and better at mimicking human behaviour and digital speech. This is becoming a problem for experts as well as for tech giants like Twitter.

Problems determining the real from the artificial

In a recent example, the hashtag #TrudeauMustGo was trending last month and was then suddenly taken down. Twitter claimed it removed the hashtag and related tweets because the sudden influx of simplistic #TrudeauMustGo tweets suggested they may have been bot-generated. This would be more understandable if the hashtag #NotABot had not trended immediately afterwards in what appears to be real, human indignation.

Indeed, Michele Austin, head of government and public policy for Twitter Canada, conceded that the #TrudeauMustGo trend was generated organically, not automatically, but that some fake accounts were involved.

It seems cut and dried: Twitter removed genuine political dissent. Except, if you’re in Twitter’s position, how do you determine whether there were bots posting #NotABot?

That isn’t exactly an easy question to answer. Statistically, at least some of them would have been bots. But how many?

What will be the impact of using bots?

“I don’t believe we can actually talk about impact,” Gray said. “We don’t have the research. We don’t have the data, and I don’t even know if we have the right questions to ask about the impact.”

This inability to determine the impact and genuine political dissent will be a problem in the upcoming election, for both the opposing party and the incumbent, as both will presumably utilize bots to spread their messages. And if they don’t, there’s nothing stopping sympathetic actors from doing it for them.

Moreover, this uncertainty surrounding impact and identification of Twitter bots will likely lead to more censorship, rather than less, as the inability to judge a bot from a person gives social media platforms license to censor any trend that they deem too spontaneous or outrageous to be genuine.

After all, who’s to say, and who can tell, whether they end up censoring thousands of bots or thousands of people or thousands of both?

Overall, bots will be a boon for generating outrage and criticism, and this may generate some support for one side or the other. However, their use ultimately represents a slippery slope, as it runs the risk of confounding real users, who have real concerns, with the artificial.

Thus, those who use bots should be wary of giving social media platforms even more license to censor the public sphere than they already have. Because, as recent history would suggest, they’ll more than likely take it.