Activity from social media bots has surged due to technical inadequacy and poor content moderation. Will the rise of bots spell the end of social media platforms as we know them?
The 2024 Imperva Bad Bot Report found that 49.6 percent of all internet traffic came from bots, an increase of 2 percentage points over the previous year and the highest level since Imperva began measuring automated traffic more than a decade ago. In February 2024, ABC News Australia reported that X, formerly known as Twitter, had turned into a bot-riddled AI spam wasteland. This raises the question: will bots spell the downfall of social media?
Understanding social media bots
Social media bots serve multiple purposes, such as amplifying the popularity of a person or movement, Cloudflare notes. Bots can also be used to influence elections, which we will dive into later in this article. They are used to manipulate financial markets by spreading false information about publicly traded companies, and that distribution of false information extends to plain spam and to malicious purposes such as phishing attacks. In every case, bots spread their message through fake social media accounts.
We’ve seen instances of massive misinformation campaigns run through social media accounts by Russia’s Internet Research Agency. This foreign entity created vast networks of fake social media accounts with AI-generated profile pictures, or hijacked existing accounts, to run coordinated misinformation campaigns during the 2016 U.S. election. As we will see later, these efforts, along with subsequent campaigns, have led to increased distrust among social media users.
Social media surveillance and information distribution can also be found closer to home. In 2023, the Brennan Center uncovered that the Department of Homeland Security used thousands of fake social media accounts for mass surveillance, ranging from screening visa applications to calculating risk for incoming travelers. The scale of these operations and the data collected, the Brennan Center notes, remain largely shrouded in mystery, making it difficult for the general public to assess the privacy risks to social media users.
Swarm of Twitter Bots
In September 2023, an analysis published by associate professor Dr. Timothy Graham of the Queensland University of Technology, Australia, together with PhD candidate Kate FitzGerald, found a large increase in bot traffic on Twitter. The analysis, covering 1 million tweets, revealed that despite attempts by owner Elon Musk to drastically reduce bot traffic, automated activity had reached new highs since the acquisition.
The analysis was conducted around the first Republican primary debate and Tucker Carlson’s interview with former US President Donald Trump. Graham and FitzGerald identified 1,200 X accounts actively spreading false and disproven claims that the 2020 election had been stolen from Trump, on top of a bot network comprising 1,305 accounts. During the Republican debate, conspiracy content amassed 3 million impressions.
One such account was “MediaOpinion19”, an account operating within the bot network that churned out an average of 662 tweets per day, or roughly one every two minutes. Surprisingly, the account was able to operate for a long time, with few visible attempts from Twitter to combat the malicious activity. If there were attempts, they failed miserably.
FitzGerald told The Guardian that Twitter (X) was doing next to nothing to reduce the activity of the bots the team had identified, finding only one or two that had been suspended. She added that whatever methods the social media network employs have little to no effect. To add insult to injury, accounts spreading misinformation with a human operator behind them had received a blue checkmark, which can be easily purchased with few guardrails.
Wasteland of auto-generated content
In February 2024, Dr. Timothy Graham spoke with Australian news outlet ABC about the spam bot phenomenon running rampant across Twitter (X). ABC remarked that the internet was filling up with “zombie content” designed to manipulate algorithms and run scams. Spam bots are so pervasive, ABC notes, that they are turning the social media platform into a wasteland where mostly bots interact with each other, while search engines scour for whatever valuable content is left.
In the particular instance highlighted by ABC, professor Terry Hughes, a coral researcher at James Cook University, said he had stumbled on a bot network of crypto accounts talking about agricultural runoff, a phenomenon in which pesticide residue finds its way into water systems. Graham theorized that this network, which relied solely on AI-generated text, was created to attract followers and age the accounts so they could later be re-used for other purposes or sold.
Bot detection
In July 2018, there was still hope on the horizon in the form of a working paper demonstrating how algorithms could detect bot traffic and proposing a methodology to reduce its influence. The findings came after reports revealed the use of social media bots across Facebook and Twitter, designed by foreign actors to influence U.S. elections. The MIT Sloan School of Management commented that the need to combat these bots had become increasingly important.
Tauhid Zaman, associate professor of operations management at MIT Sloan, commented that stamping out social media bots had become the new arms race. Together with Nicolas Guenon des Mesnards, a graduate student at the Operations Research Center, he demonstrated in a working paper that an enhanced algorithm could better distinguish bot traffic from regular user-generated content, making it better suited for “modern day” bot detection.
MIT Sloan pointed out that existing algorithms were trained to detect bots by screening individual account details such as the username, the number and timing of tweets or posts, the content, and a variety of other factors. Based on these signals, Zaman explained, platform owners decide whether an account is a bot. Zaman and des Mesnards took another approach, focusing on patterns rather than individual accounts. By detecting patterns, entire networks of bots could be uncovered.
They started by separating bot behavior from human traffic. Zaman explained that humans talk to humans, and bots talk to humans, but nobody talks to bots. This first criterion formed the first detection layer. Keep in mind that the study was conducted several years ago; observant users will have since spotted bot accounts interacting with each other.
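Zaman’s heuristic can be sketched as a simple graph rule: flag accounts that send plenty of replies but never receive any. The sketch below is only an illustration of that one criterion, not the paper’s actual algorithm (which went further, using network-wide optimization); the account names and the threshold are invented.

```python
from collections import defaultdict

def flag_suspected_bots(reply_edges, min_outgoing=3):
    """Flag accounts that send many replies but never receive any,
    echoing the heuristic: humans talk to humans, bots talk to
    humans, but nobody talks to bots."""
    sent = defaultdict(int)      # replies each account has sent
    received = defaultdict(int)  # replies each account has received
    accounts = set()
    for sender, receiver in reply_edges:
        accounts.update((sender, receiver))
        sent[sender] += 1
        received[receiver] += 1
    return {a for a in accounts
            if sent[a] >= min_outgoing and received[a] == 0}

# Hypothetical reply graph: (sender, receiver) pairs.
edges = [
    ("alice", "bob"), ("bob", "alice"), ("carol", "alice"),
    ("bot1", "alice"), ("bot1", "bob"), ("bot1", "carol"),
    ("bot2", "alice"), ("bot2", "bob"), ("bot2", "carol"),
]
print(flag_suspected_bots(edges))  # flags bot1 and bot2
```

A real detector would need many more signals, since, as noted above, modern bot networks now reply to each other precisely to defeat this kind of rule.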
Back in 2018, Zaman and des Mesnards had great success distinguishing bots from humans using fewer data points than previous algorithms required. Additionally, the model used in the study was language agnostic: it did not rely on linguistic or cultural subtleties and could discriminate based on behavior alone. This was made possible by the Ising model, borrowed from statistical physics.
Detecting bots is difficult
The findings from Graham, Zaman, des Mesnards, and others in the field reveal how widespread the issue is, but they also raise a question: if bot traffic is so obvious, why is reducing it so difficult? A May 2023 study revealed that current third-party detection software wasn’t nearly as effective at detecting bots as claimed. The paper, published by MIT researchers Chris Hays, Zachary Schutzman, Manish Raghavan, Erin Walk, and Philipp Zimmer, showed that while bot detection software providers claim high accuracy, the reality is far less glamorous than promised.
Dylan Walsh of MIT Sloan points out that a lot of effort goes into developing tools that can distinguish human from bot traffic, with social media companies building their own programs to combat bot spam, albeit in secrecy. Third-party tools take a different route, training their software with machine learning on curated data sets designed to help algorithms separate bots from humans. The researchers at MIT used the same methodology to train their own programs and determine their effectiveness.
Schutzman, a postdoctoral fellow at the MIT Institute for Data, Systems, and Society, and his fellow researchers downloaded a Twitter data set hosted by Indiana University. They used an off-the-shelf machine learning model to separate bot spam from user-generated content. The initial findings were promising: the team reached an accuracy score of 99 percent. As great as those first results were, Schutzman noted, they had to be doing something wrong.
The team continued running experiments and quickly realized that while the off-the-shelf solutions were very good at detecting bots within predefined data sets, the results fell apart when faced with non-curated data. This led the team to believe that these models would struggle with the randomized interactions found across social media platforms. Furthermore, less complex models proved just as capable of detecting bot activity.
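The failure mode the MIT team describes, near-perfect scores on a curated data set that collapse on wild data, can be sketched with a toy classifier. Everything below (the two features, the numbers, and the nearest-centroid model) is invented for illustration and is not the study’s actual setup.

```python
import math

def centroid(points):
    """Mean feature vector of (posts_per_day, reply_ratio) samples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

def classify(x, centroids):
    """Nearest-centroid classification: pick the closest class."""
    return min(centroids, key=lambda label: math.dist(x, centroids[label]))

# "Curated" training data: bots post heavily, humans post lightly.
train = {
    "bot":   [(600, 0.01), (700, 0.02), (650, 0.00)],
    "human": [(10, 0.40), (20, 0.55), (15, 0.35)],
}
centroids = {label: centroid(pts) for label, pts in train.items()}

def accuracy(samples):
    hits = sum(classify(x, centroids) == y for x, y in samples)
    return hits / len(samples)

# Held-out curated samples look just like the training data...
curated_test = [((640, 0.01), "bot"), ((12, 0.45), "human")]
# ...but "wild" bots mimic human posting rates and slip through.
wild_test = [((25, 0.05), "bot"), ((18, 0.50), "human")]

print(accuracy(curated_test))  # 1.0
print(accuracy(wild_test))     # 0.5
```

The point of the sketch: a model can score near-perfectly on data that resembles its training set while missing bots that behave unlike anything in that set, which is one plausible reading of why the 99 percent figure did not survive contact with non-curated data.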
Social media users on alert
Up to this point we’ve primarily focused on how bots operate and how social media platforms try to combat them. The elephant in the room, however, is how this will impact user behavior and users’ relationship with the platform. Even without bots, platforms like Facebook have faced their fair share of scrutiny, but social media spam adds another layer of complexity that might threaten not only Meta’s umbrella of platforms, but social media in general.
A 2018 Pew Research survey among 4,581 U.S. adults, part of the Pew Research Center’s nationally representative American Trends Panel, revealed that two-thirds of Americans have heard of social media bots, with the majority believing these bots have been designed with malicious intent. The increased awareness of social media bot spam can be traced back to the 2016 U.S. Presidential election, where large quantities of misinformation had been spread across social media platforms.
Despite growing awareness among American citizens, distinguishing bot content from genuine traffic proved tricky. Pew Research noted that almost half (47 percent) of the respondents who were aware of social media bots were confident they could recognize bot accounts, with only 7 percent expressing a high degree of confidence. The presence of spam bots on social media also left many concerned: 81 percent of respondents believed that a fair amount of the news platform users consume is generated by bots.
Two-thirds of respondents (66 percent) believe that social media bots negatively shape Americans’ opinions on current events. Only 11 percent said that content generated by bot accounts could have a positive impact on public opinion. Interestingly, many believe bots can be beneficial for information distribution, primarily for governments posting emergency updates.
Twitter traffic plummets
The aforementioned Pew Research findings reveal little about actual usage behavior. Twitter (X), however, which has become the benchmark for platform misuse, can serve as a preview of the possible future of social media when bots are allowed to run rampant. In December 2022, eMarketer (formerly Insider Intelligence) forecasted that Twitter (X) would lose tens of millions of users over the course of 2023 and into 2024.
The researchers expected monthly active Twitter users to drop by 4 percent globally in 2023 and another 5 percent in 2024, which eMarketer notes would be the first drop in the platform’s history. Jasmine Enberg, principal analyst at Insider Intelligence, commented that the decline in usage won’t be a sudden event but a gradual one: disgruntled users will leave first as technical issues and hateful content grow, while the skeleton crew left at the platform will be unable to keep operations stable or moderate all the policy-violating content.
A March 2023 Pew Research survey conducted among U.S. adults showed that many Twitter users had taken a break from the platform in the past year. The survey revealed that user sentiment had drastically changed after Elon Musk’s acquisition of the platform just five months earlier: 60 percent of U.S. adult Twitter users reported having taken a break in the last 12 months, with 69 percent of women indicating they had put Twitter on hold.
Women were also more likely to remain absent as time progressed, with 30 percent of women indicating they would not use the platform a year from the time of the survey, compared to 20 percent of men. Nearly half of men (47 percent) said they were likely to use the platform a year later, compared to roughly a third of women (31 percent). User retention is one of the most valuable metrics a platform has, and any downward trend is an immediate alarm to executives.
Advertising revenue nosedives
A decline in users is disastrous for ad revenue, as a social platform’s unique selling point and raison d’être is a growing user base actively engaging on the platform. eMarketer predicted that by November 2023, ad revenue on Twitter would have plummeted by about 40 percent. This is a U-turn from earlier predictions, in which the researchers still expected ad revenue to grow to upwards of $5.58 billion over the first quarter of 2023.
Now the tide has drastically turned. Enberg noted that with revenue and staffing set to fall, Elon Musk would have trouble bringing new products to market to recover and maintain platform usage. It must be added that it’s not only bots that have soured users and advertisers on the platform; Musk’s rather flamboyant remarks and opinions haven’t helped keep them onboard.
In December 2023, Steve Inskeep, host of NPR’s Morning Edition, spoke with technology correspondent Bobby Allyn and advertising consultant Tom Hespos about the advertiser boycott that might bring Twitter to its knees. Prior to the discussion on NPR, Musk had vulgarly taunted large advertisers to stop promoting on the platform if they didn’t agree with the decisions made by executives at the company.
The remarks made by Musk during The New York Times DealBook Summit were the proverbial final straw that made many advertisers turn their backs on the platform, especially those who were already reluctantly present on Twitter. The wild west that Twitter had become had put many advertisers on edge, with Musk’s erratic decision-making turning the platform into a hostile place to safely promote a brand.
Hespos remarked that advertisers aren’t looking to answer questions about sensitive topics. Musk’s controversial tweets over this period led Apple, Disney, IBM, and Walmart to pull their ads off the platform. While the total revenue lost is hard to quantify, Allyn added, Twitter has always been reliant on its ad product, drawing 90 percent of its revenue from this single service alone. Turning your back on the customers of that segment puts the entire operation in jeopardy.
Advertising spam on Reddit
Twitter has been one of the most eye-catching examples of social media bots wreaking havoc on a platform, but it is not alone. Reddit, which underwent its own drastic strategic realignment, is falling prey to increased bot spam. Whereas Twitter saw advertising revenue plummet, owners of bots deployed on Reddit have found clever ways to create ads on the platform.
In July 2023, the third-party tool BotDefense announced its withdrawal from Reddit following the API price hikes enacted by the platform, which forced many developers to rethink how they deliver their solutions. BotDefense, which started as a volunteer project in 2019, had proven a worthy tool against rogue bot submissions and comment spam. Its creator told Ars Technica that BotDefense was leaving in response to the platform’s “antagonistic actions” against moderators and developers.
These hostile actions by Reddit against third-party developers, who are a valuable resource for maintaining some semblance of stability and safety, directly impact the content on the platform. Reddit’s 2022 transparency report revealed that mods removed over 184 million pieces of content, representing 58 percent of all content removals. Admins removed another 123 million pieces, or 39 percent, with the remainder (8.9 million) removed by the authors themselves.
In total, 316 million pieces of content, including spam, were removed from the platform. The majority of moderator removals were automated, representing 69.1 percent, or 127 million pieces of content, while 30.9 percent, or 56.8 million, were removed by mods manually. Reddit notes that most content removals fell under Content Manipulation, which includes spam, community interference, vote manipulation, and artificial content promotion. The transparency report exemplifies how widespread policy violations are across the platform.
Almost a year later, a report by 404 Media showed how bots used Reddit’s advertising system to push spam content. Tools such as ReplyGuy inject product recommendations into Reddit comments, a practice its creator labels “stealth marketing”. The tool’s presence further diluted the platform’s already tainted reputation, echoing the cuts to Twitter’s moderation team that likewise allowed bot spam to flourish.
Community destruction
Apart from the destruction of tight-knit communities, the actions taken at Twitter (X) and Reddit show how brittle the foundations of a social media platform can be when left unchecked. Advertisers must also ask how suitable these places are for brands to safely promote their products and services, while platform owners have to take the interests of their primary customer, the advertiser, to heart.
Does this mean the fates of social media platforms that fail to stop bot spam are sealed? At this stage it’s too early to tell. Billions of dollars still flow to platforms such as Facebook, Instagram, and YouTube, implying there’s still money to be made. However, Reddit and especially Twitter have shown how easy it is to destroy a vibrant, if polarizing, ecosystem and scare advertisers away to other platforms.
The fight against bot spam remains an arms race, with each side trying to come up with new ways to outmaneuver the other, whether by improving detection algorithms or, at the other end of the spectrum, by building sophisticated networks of bots designed to act like humans and promote spam. Platforms have yet to see user activity plummet, but it’s a delicate balance: leave too much activity unchecked and users will depart in an exodus, in turn triggering advertisers to abandon the platform in search of new channels to reach their target audience.