Social media bots are fake accounts that use automated or semi-automated programs to infiltrate platforms and shape the way people behave online.
Using artificial intelligence to mimic and simulate human behavior, bots wreak havoc across social media platforms. Those include Twitter, where Elon Musk has called for more transparency in a highly publicized data debacle, and Facebook, where posts from bots have influenced people's perceptions of politics (that kind of activity is also called coordinated inauthentic behavior).
Why are social media bots used?
Groups or individuals create social media bots for a variety of purposes. Companies can sell the fake accounts to other users for money, or political groups can use them to share content intended to polarize and troll viewers.
Generally, there are two types of bots: fully automated ones, like those that automatically retweet a specific hashtag every time it's posted, and semi-automated bots, which are typically fake accounts operated in part by people. Both types can be used to propagate hate speech, spread propaganda, sway public opinion, and sell goods or services.
Some bots are programmed to increase engagement or follower numbers, while others are intended to promote insidious speech and incite nefarious actions. Either way, social media bots can be a serious problem — especially because social media users are often unable to tell the difference between them and accounts operated by real people.
A peer-reviewed study from Stony Brook University published in 2021 analyzed over 3 million tweets authored by 3,000 bot accounts and compared the language of those tweets to that of 3,000 genuine accounts. Viewed individually, the bot accounts appeared to be run by humans. But when the researchers analyzed the accounts as a group, they realized the accounts were apparently clones of one another.
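The core of that finding can be illustrated with a toy example: accounts that look plausible on their own stand out when their language is compared pairwise. This is only a rough sketch of the general idea (the study's actual methods were far more sophisticated), and the account names and texts below are invented for illustration.

```python
from itertools import combinations

def jaccard(a: str, b: str) -> float:
    """Similarity of two texts as the overlap of their word sets (0.0 to 1.0)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

# Invented accounts: two near-clones and one genuine-looking account.
accounts = {
    "acct_a": "breaking news you will not believe what happened today",
    "acct_b": "breaking news you will not believe what just happened today",
    "acct_c": "enjoying a quiet afternoon reading in the park with coffee",
}

# Flag any pair of accounts whose language is suspiciously similar.
suspicious = [
    (x, y) for x, y in combinations(accounts, 2)
    if jaccard(accounts[x], accounts[y]) > 0.8
]
print(suspicious)  # acct_a and acct_b read like clones of each other
```

Each account in isolation reads like a person, which is why individual inspection failed; only the cross-account comparison exposes the duplication.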
In recent years, the use of bots has increased, and cybersecurity experts have tried to sound the alarm about the threat they pose to our digital ecosystem. In 2018, the European Commission launched its Action Plan Against Disinformation, which specifically targets social media bots as a technique used to "spread and amplify divisive content and debates on social media" and to disseminate misinformation and disinformation. Similarly, the U.S. Department of Homeland Security (DHS) has launched efforts to combat misinformation on social media, including tips for identifying bot accounts.
Why identify bots?
Identifying social media bots is important not only to prevent the spread of false information; routinely removing bot followers from your profile can also improve your account's ranking on a platform.
With few or no bot followers, your content is more likely to appear at the top of feeds, opening up opportunities for "likes," "retweets," "shares," or "comments" (i.e., "engagement"), depending on each site's algorithms.
In other words, although removing bot accounts from your follower lists may reduce your overall follower count, doing so helps ensure that those who do follow you are human and engage with your content in a meaningful way.
What are some common bot behaviors?
The DHS Office of Cyber and Infrastructure Analysis has described common methods by which social media bots influence or engage with people online (also known as "attacks"), such as:
- Click or like farming. Bots inflate the popularity of a website or account by liking or reposting content. These bots also allow people to buy similar fake accounts to boost their own follower numbers.
- Hashtag hijacking. This method uses hashtags to focus an attack on a specific audience that uses the same hashtag.
- Repost storms. A parent bot account launches an attack, and a group of bots immediately reposts the offending post.
- Sleeper bots. These bots remain dormant for long stretches, then wake up to make a series of posts or retweets over a short period of time.
- Trend jacking. This method uses trending topics to focus an attack on a target audience.
If you encounter any of the aforementioned patterns, we recommend that you report the suspicious activity to the administrators of the hosting social media platform.
What do bot accounts look like?
Spotting a bot can be a tedious task.
You could try a bot-detection tool like Botometer, which describes itself as a "machine learning algorithm trained to calculate a score where low scores indicate likely human accounts and high scores indicate likely bot accounts." But such services have limitations; some things only the human eye can perceive.
Here are some questions the Snopes engagement team considers when cleaning up our Instagram, Twitter, and Facebook follower lists and removing bot accounts:
- Does the account have a profile picture? Bot accounts often lack one, and a profile without a photo is frequently our first indicator that an account may be inauthentic.
- Does the account use a generic username? Bot accounts often have a generic username that was likely generated by an automated system; these frequently combine a common name with a string of numbers.
- How many followers does the account have? Bot accounts tend to have lower follower counts than authentic accounts, often only a few dozen.
- What are the account's privacy settings? Some bot accounts use high privacy settings, giving little or no public access to their profiles.
- How many posts has the account shared? On Instagram, for example, bot accounts often have only a handful of images in their grid.
- What is the quality of the shared content? On Twitter, for example, bots can be programmed to automatically retweet posts containing a particular hashtag. If an account seems to share only one narrow type of content, we treat it with suspicion.
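As a rough illustration, a checklist like the one above can be folded into a small screening script. Everything here is hypothetical: the `Account` fields, the username pattern, and the thresholds are invented for demonstration, and no single signal proves an account is a bot.

```python
from dataclasses import dataclass
import re

@dataclass
class Account:
    # Hypothetical fields; real platforms expose this data differently.
    username: str
    has_profile_photo: bool
    followers: int
    posts: int
    is_private: bool

def bot_signals(acct: Account) -> list:
    """Return the checklist red flags this account trips (illustrative thresholds)."""
    flags = []
    if not acct.has_profile_photo:
        flags.append("no profile picture")
    if re.fullmatch(r"[a-z]+\d{4,}", acct.username):
        flags.append("generic name-plus-numbers username")
    if acct.followers < 50:
        flags.append("very low follower count")
    if acct.is_private:
        flags.append("locked-down profile")
    if acct.posts < 5:
        flags.append("almost no posts")
    return flags

# An invented account that trips all five checks.
print(bot_signals(Account("jenny82734591", False, 12, 2, True)))
```

The point of returning the tripped flags, rather than a verdict, is that a human still has to weigh them; an account with one or two flags may be perfectly genuine.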
This guide just scratches the surface of bot behavior. For more information about bot identification and coordinated inauthentic behavior, see the Snopes media literacy collection.
This page is part of an ongoing effort by the Snopes editorial staff to teach the public the ins and outs of online fact-checking and, in doing so, to strengthen people's media literacy skills. Misinformation is everyone's problem. The more we all get involved, the better equipped we are to fight it. Have a question about how we do what we do? Tell us.
Sources
"A Collection of Tips for Fighting Online Misinformation Like a Pro." Snopes.com, https://www.snopes.com/collections/international-fact-checking/. Accessed 23 July 2022.
Botometer from OSoMe. https://botometer.iuni.iu.edu. Accessed 23 July 2022.
DHS Coordinates Efforts to Combat Misinformation on Social Media | Office of the Inspector General. https://www.oig.dhs.gov/node/6297. Accessed 23 July 2022.
“In Reversal, Twitter Plans to Comply with Musk’s Data Demands.” Washington Post. www.washingtonpost.com, https://www.washingtonpost.com/technology/2022/06/08/elon-musk-twitter-bot-data/. Accessed 23 July 2022.
Martini, Franziska, et al. "Bot, or Not? Comparing Three Methods for Detecting Social Bots in Five Political Discourses." Big Data & Society, vol. 8, no. 2, July 2021. https://doi.org/10.1177/20539517211033566.
"Snopes Tips: How to Spot Coordinated Inauthentic Behavior." Snopes.com, https://www.snopes.com/articles/385721/coordinated-inauthentic-behavior-2/. Accessed 23 July 2022.
"Snopestionary: Misinformation vs. Disinformation." Snopes.com, https://www.snopes.com/articles/386830/misinformation-vs-disinformation/. Accessed 23 July 2022.
"Snopestionary: What Is 'Coordinated Inauthentic Behavior'?" Snopes.com, https://www.snopes.com/articles/366947/coordinated-inauthentic-behavior/. Accessed 23 July 2022.
"Social Media 'Bots' Tried to Influence the U.S. Election. Germany May Be Next." Science, https://www.science.org/content/article/social-media-bots-tried-influence-us-election-germany-may-be-next. Accessed 23 July 2022.
“Study Suggests New Strategy to Detect Social Bots |.” SBU News, 30 Nov. 2021, https://news.stonybrook.edu/homespotlight/study-suggests-new-strategy-to-detect-social-bots/.
Uyheng, Joshua, and Kathleen M. Carley. "Bots and Online Hate During the COVID-19 Pandemic: Case Studies in the United States and the Philippines." Journal of Computational Social Science, vol. 3, no. 2, 2020, pp. 445–68. PubMed Central, https://doi.org/10.1007/s42001-020-00087-4.