By Regina Mihindukulasuriya
Between 60 and 80 percent of accounts tweeting about the Russia-Ukraine war may be bots, a study by researchers from the University of Adelaide, Australia, has found.
Among other things, these bot accounts may have influenced people’s decisions to flee their homes during the conflict between the two countries, the researchers added.
The researchers also found more “pro-Ukrainian” accounts than “pro-Russian” accounts.
The paper, titled “#IStandWithPutin versus #IStandWithUkraine: The interaction of bots and humans in discussing the Russia-Ukraine war,” was published on August 20.
The researchers studied 5.2 million tweets – original tweets, retweets, quote tweets and replies – shared between February 23 and March 8 this year to understand how bot activity may shape online discussion about the Russia-Ukraine conflict and how it can affect human emotions.
The posts studied carried hashtags such as “(I)StandWithPutin”, “(I)StandWithRussia”, “(I)SupportRussia”, “(I)StandWithUkraine”, “(I)StandWithZelensky” and “(I)SupportUkraine”.
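As a rough sketch of how such a dataset can be assembled, the snippet below pulls tweets carrying these hashtags from the Twitter API v2 full-archive search via the tweepy library. The bearer token, the exact query and the result handling are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch: collecting tweets that carry the studied hashtags.
# Assumes Twitter API v2 academic access and the tweepy library; the token,
# query and date window are placeholders, not the researchers' actual setup.
import tweepy

client = tweepy.Client(bearer_token="YOUR_BEARER_TOKEN", wait_on_rate_limit=True)

# Match tweets, retweets, quote tweets and replies carrying any of the hashtags.
query = ("#IStandWithPutin OR #StandWithPutin OR #IStandWithRussia OR "
         "#IStandWithUkraine OR #StandWithUkraine OR #IStandWithZelensky")

# Full-archive search requires academic-research access.
for page in tweepy.Paginator(
        client.search_all_tweets,
        query=query,
        start_time="2022-02-23T00:00:00Z",
        end_time="2022-03-08T23:59:59Z",
        tweet_fields=["created_at", "author_id", "referenced_tweets"],
        max_results=500):
    for tweet in page.data or []:
        print(tweet.id, tweet.created_at, tweet.text[:80])
```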
Bot accounts were identified using Botometer, a tool developed at Indiana University that estimates how likely a Twitter account is to be automated.
“We can say that between 60 percent and 80 percent of the accounts posting hashtags that we studied during the first two weeks of the fighting were bots, as determined using Botometer,” Joshua Watt, one of the researchers, told an Indian newspaper.
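For readers curious how Botometer is queried in practice, here is a minimal sketch using the botometer Python client. The API keys and the example handle are placeholder assumptions, and the 0.5 cut-off is purely illustrative, not the threshold the Adelaide researchers used.

```python
# Minimal sketch of scoring an account with the botometer-python client.
# Keys and handle are placeholders; the 0.5 threshold is illustrative only.
import botometer

rapidapi_key = "YOUR_RAPIDAPI_KEY"
twitter_app_auth = {
    "consumer_key": "YOUR_CONSUMER_KEY",
    "consumer_secret": "YOUR_CONSUMER_SECRET",
}

bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key=rapidapi_key,
    **twitter_app_auth,
)

# Score a single account; "cap" is the complete automation probability in [0, 1].
result = bom.check_account("@example_handle")
overall = result["cap"]["universal"]
print(f"Bot likelihood: {overall:.2f}")
if overall > 0.5:  # illustrative cut-off only
    print("Likely automated account")
```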
According to Watt, it is not clear whether the bots were influencing people to leave Ukraine or Russia.
Watt added: “We cannot conclude where this is happening due to the lack of geographical information on the origin of the accounts.”
“We can only conclude that bot accounts were more influential in discussions about moving/fleeing or staying in place.”
MORE PRO-UKRAINE BOT ACCOUNTS
According to the researchers, 90.16 percent of accounts tweeting about the Russia-Ukraine war were “pro-Ukraine” and only 6.80 percent were “pro-Russia.”
“Balanced accounts” — those that showed mixed behavior — accounted for 3.04 percent.
“The pro-Russia account group has the largest outflows of information and significant inflows to a number of other groups, with positive information flow to both the pro-Ukraine and balanced account groups,” the researchers observed.
This means that real pro-Russia users could be influencing more users on Twitter than real pro-Ukraine users.
The researchers found spikes in bot activity on March 2 and March 4.
The first spike coincided with Russia capturing Kherson, a city in Ukraine, and was also when the hashtags #(I)StandWithPutin and #(I)StandWithRussia were trending.
The research also found that noon to 1 pm was the “most popular time” to tweet across every time zone.
The most common type of bot used by both the pro-Ukraine and pro-Russia sides was the “self-declared bot” (an account that is transparent about being a bot), “suggesting that authorities have identified these bots as more useful in an information warfare campaign”.
Self-declared bot accounts have the word ‘bot’ in their username or bio.
The research also found that the pro-Ukrainian side uses more astroturf bots than the pro-Russian side.
Astroturf bots are hyperactive political bots that constantly follow other accounts to inflate follower counts and systematically delete content from their own accounts.
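For context, Botometer’s response also breaks the bot score down by type, including “self_declared” and “astroturf”. The snippet below is a hedged continuation of the earlier Botometer sketch, reusing the `bom` client built there; the handle remains a placeholder.

```python
# Hedged continuation of the earlier Botometer sketch: v4 responses also report
# per-type bot scores, including "self_declared" and "astroturf".
result = bom.check_account("@example_handle")  # bom: the client built above
type_scores = result["raw_scores"]["universal"]
for bot_type in ("self_declared", "astroturf", "fake_follower", "spammer"):
    print(f"{bot_type}: {type_scores[bot_type]:.2f}")
```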
HOW BOTS DRIVE EMOTION
The researchers studied the words appearing most frequently in posts from bot accounts, noting that “self-declared bots generate more anxiety towards governing bodies.
From a pro-Russia perspective, this could cause more division in the West, and from a pro-Ukraine perspective, it could cause more problems in Russia.”
The research paper observed that bots also stoke anxiety by using angst-related words, most of which are “surrounding fear and anxiety”.
Therefore, the researchers argued that bots and automated accounts “combine to increase fear in the general discussion of the Russia-Ukraine war.”
The bots also amplify online discussion about movement, the research paper observed, by tweeting posts with words such as “move”, “go” and “leave”, which are potentially related to staying in or leaving a place.
Combined with the increased anxiety, this suggests that bots could be influencing people’s decisions about whether or not to flee their homes, the paper claimed.