Online Manipulation By Bots Is Harmful No Matter The Source
Bots are filling up the internet at an alarming rate. Online conversations, trending topics, and political narratives are increasingly being influenced by bots and their creators. Mainstream news sources report endlessly on the bot threat originating from Russia, yet recent revelations have shown just how overhyped this news beat was. Bots are a major problem for the internet, and pointing a finger solely at Russia will not fix the issue. If the internet is going to survive the bot tide, manipulation of the online world by bots must be condemned wherever it takes place, regardless of the source or goal of the operation.
Many people are still under the illusion that Russia handed the 2016 election to Donald Trump. According to this worldview, Twitter was a hotbed of Russian bots spreading election “misinformation”. Contradicting this perspective is a study conducted by New York University’s Center for Social Media and Politics. According to the study, 70% of exposure to these Russian “troll” accounts was concentrated among just 1% of Twitter users, and “highly partisan” Republicans saw nine times more of these tweets than other groups. Essentially, these Russian influence operations failed to make a dent in the electoral map.
Another piece of evidence for Russia’s supposedly unstoppable Twitter bot army that has since fallen apart is Hamilton 68. This dashboard claimed to monitor 600 Russian bots on Twitter, earning it heavy attention from mainstream media outlets looking for anything to support their preconceived notions. The Twitter Files have shown how flimsy this narrative was: the accounts were not even Russian bots, and Twitter knew it. The former head of trust and safety at Twitter, Yoel Roth, wrote in an email that Hamilton 68 “falsely accuses a bunch of legitimate right-leaning accounts of being Russian bots.”
These highly partisan Democrats do not take bots seriously. Amazingly, their unseriousness reached the point where they even used bots themselves to spread “misinformation”. During the 2017 U.S. Senate special election in Alabama between Democrat Doug Jones and Republican Roy Moore, a “group of Democratic tech experts” set out to smear Moore’s campaign. They set up a fake Facebook page for Alabama conservatives to divide the vote and then attempted to tie the Moore campaign to thousands of Russian bots that had followed Moore on Twitter. An internal report by the group explicitly says it was “[experimenting] with many of the tactics now understood to have influenced the 2016 elections” and that “[they] orchestrated an elaborate ‘false flag’ operation that planted the idea that the Moore campaign was amplified on social media by a Russian botnet.” The report does not say whether the group created the bots itself, but it is hard to believe they would not stoop that low.
The war in Ukraine has caused bot operations to explode in influence, exposing just how multifaceted the issue is. A study conducted by researchers from the University of Adelaide looked at posts relating to the war. From February 23 to March 8, 2022, they found that 60% to 80% of the 5.2 million tweets, retweets, quotes, and replies they studied came from bots. A whopping 90% of this bot activity was in favor of Ukraine. While most of the focus on bot activity has been squarely placed on Russia, it seems that Ukraine may have vastly more reach with its bot operations. No attention is paid to this fact because Russia is the bad guy in the minds of those obsessed with finding any bot activity they can, real or not.
Bots are only going to get worse with the introduction of artificial intelligence (AI). These new and highly accessible AI tools can generate realistic images for profile pictures, coherent and well-formatted text, and disturbingly accurate voice impersonations. As political commentator Mike Cernovich pointed out, “[people] will be generals with their own ‘clone armies’ of AI bots.” He rightly states that this will “break a lot of people” and that “[only] the most psychologically robust will survive, as your deepest insecurities are probed, at scale, by thousands of bots who appear real.” AI is not only going to accelerate the prevalence of bots online, but it is also going to make them indistinguishable from real humans.
The internet is already threatened by countless bots that exacerbate divisions and deceive users. AI is the gasoline that will supercharge this spreading fire. Mainstream media has polluted the airwaves with claims of Russian bot operations that did not hold up under closer scrutiny. Addressing this issue in all of its facets requires approaching it cautiously and without political bias. Bots are toxic no matter who they support, and those who ignore this truth are only contributing to the decay these bots are ushering in.