What is happening?
Misinformation on Twitter could be halved if the social network implemented a series of stricter measures, a new study finds.
Why does it matter?
Misinformation has become a threat to public health, but it is unclear whether social networks will take the necessary steps to slow the spread on their platforms.
Social media platforms such as Facebook, Instagram and Twitter are filled with misinformation that can easily go viral. A study examined millions of tweets and found that a handful of steps can be taken to slow the spread of misinformation on Twitter.
Researchers with the University of Washington’s Center for an Informed Public found that a combination of multiple measures — including deplatforming repeat misinformation offenders, removing false claims and warning people about posts containing false information — can reduce the volume of misinformation on Twitter by 53.4%. The study’s findings were published in the journal Nature last week.
Using just one of these measures can slow the spread of misinformation, but any single step on its own has diminishing returns, said Jevin West, one of the paper’s co-authors and an associate professor at the University of Washington’s School of Information. Combining multiple measures produces a significantly better outcome, the study found.
Misinformation has become a threat to Americans’ public health, US Surgeon General Vivek Murthy and Food and Drug Administration Commissioner Robert Califf have warned. Twitter, like other social media sites, has spent the past two years trying to stop false information about the 2020 presidential election and COVID-19 from spreading on its platform. The company’s content moderation efforts have been criticized by Tesla and SpaceX CEO Elon Musk, who made a deal in April to buy Twitter. Musk says he wants to make the platform more free-speech-oriented; in a meeting with Twitter employees in June, he reportedly said the company should “let people say what they want.”
To determine what steps would work to slow viral misinformation on Twitter, researchers looked at 23 million tweets related to the 2020 presidential election from September 1 to December 15 of that year. Each of the posts was linked to at least one of 544 viral events — defined as periods in which a story showed rapid growth and decay — identified by the researchers. The researchers used the data to create a model that is similar to the contagion models used by epidemiologists to predict the spread of an infectious disease.
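The paper’s actual model isn’t reproduced in this article, but the epidemiological analogy suggests a simple contagion process. Below is a minimal sketch, assuming an SIR-style model in which users “susceptible” to a false story start sharing it on exposure and eventually lose interest; the function name and all parameter values are illustrative, not taken from the study.

```python
def simulate_viral_event(population=10_000, beta=0.3, gamma=0.1,
                         steps=200, seed_sharers=10):
    """SIR-style sketch of a single viral misinformation event.

    beta is the per-step transmission rate (how readily exposure turns
    into a share) and gamma the per-step rate at which sharers lose
    interest; both values are illustrative. Returns new shares per step.
    """
    susceptible = population - seed_sharers
    sharing = seed_sharers
    shares_per_step = []
    for _ in range(steps):
        # New shares come from contact between sharers and non-sharers,
        # exactly as new infections do in an epidemic model. The result
        # is the rapid growth and decay the researchers used to define
        # a viral event.
        new_shares = beta * sharing * susceptible / population
        susceptible -= new_shares
        sharing += new_shares - gamma * sharing
        shares_per_step.append(new_shares)
    return shares_per_step
```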
With that model, the researchers were able to evaluate different measures, or interventions as the study calls them, that Twitter could apply to its platform to help stop the spread of misinformation. The most effective, according to the study, is removing misinformation from the platform, especially within the first half hour after the content is posted.
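In that framing, removal is easy to express: the simulated event is simply cut off once moderators take the post down, so the earlier the cutoff, the less of the curve survives. A hypothetical extension of the sketch above (the half-hour figure from the study would correspond to a small removal step):

```python
def simulate_with_removal(removal_step, **kwargs):
    """Truncate a simulated viral event at the moment of takedown."""
    return simulate_viral_event(**kwargs)[:removal_step]

# Illustrative comparison of early vs. late takedown (steps are abstract
# time units in this sketch, not minutes from the study).
early = sum(simulate_with_removal(removal_step=10))
late = sum(simulate_with_removal(removal_step=100))
print(f"total shares, early takedown: {early:.0f}; late takedown: {late:.0f}")
```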
Also effective is removing repeat offenders, people who regularly share misinformation. The study suggests Twitter implement a three-strikes rule, but West said he understands the controversy surrounding deplatforming individuals.
“We have to take it [deplatforming] seriously, especially with discussions about free speech,” he said.
The First Amendment to the US Constitution protects speech from government censorship, but companies may decide not to allow certain types of speech on their platforms; they can set their own standards and require users to follow them.
Twitter’s policy page lists two sets of rules for misinformation, each with different penalties. Under its crisis misinformation policy, false and misleading information about armed conflicts, public health emergencies and large-scale natural disasters can result in a seven-day timeout for repeat offenders who receive notices within a 30-day period. Twitter’s policy on misleading COVID-19 information, by contrast, uses a five-strike rule that results in a permanent suspension of the offender’s account. The platform would be better able to slow the spread of misinformation if its policies were consistent, rather than imposing different penalties for different types of false claims, the study said.
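At the implementation level, a strike policy is just a per-account counter with a penalty threshold; the study’s argument is that the threshold should be uniform rather than varying by topic. A minimal sketch, with a hypothetical class and thresholds that are not Twitter’s actual implementation:

```python
from collections import defaultdict

class StrikePolicy:
    """Count misinformation strikes per account and decide a penalty.

    One consistent threshold for all types of false claims, as the
    study recommends; the default of 3 matches its suggested
    three-strikes rule, while Twitter's COVID-19 policy uses 5.
    """
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.strikes = defaultdict(int)

    def record_violation(self, account):
        self.strikes[account] += 1
        if self.strikes[account] >= self.threshold:
            return "suspend"  # account removed at the threshold
        return "warn"         # below the threshold, a notice suffices

policy = StrikePolicy()
for _ in range(3):
    action = policy.record_violation("@repeat_offender")
print(action)  # "suspend" on the third strike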
West said that a reduction in amplification — referred to as a “circuit breaker” in the study — of a repeat offender’s account is also effective in slowing disinformation without having to ban or remove an account. This would require using Twitter’s algorithm to make posts or accounts that spread false information less visible on the platform.
Twitter has already taken several measures in this regard, including making tweets from offending accounts ineligible for recommendation, preventing offending posts from appearing in search and moving replies from offending accounts to a lower position in conversations, according to Twitter’s policy page.
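Mechanically, each of those measures reduces an account’s reach by scaling its ranking signal down rather than zeroing it out. A hedged sketch of a circuit breaker as a visibility multiplier (the function and the halving factor are assumptions for illustration):

```python
def visibility_score(base_score, offender_strikes, dampening=0.5):
    """Downrank a repeat offender's content instead of removing it.

    Each strike scales visibility by the dampening factor (0.5 here is
    illustrative), so posts stay up but are recommended less, surface
    lower in search and sit lower in conversation threads.
    """
    return base_score * (dampening ** offender_strikes)

print(visibility_score(100.0, offender_strikes=0))  # 100.0: full reach
print(visibility_score(100.0, offender_strikes=2))  # 25.0: breaker engaged
```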
The study also refers to nudges, the warnings and labels attached to tweets advising people that a post contains false information. Twitter has used these extensively throughout the COVID-19 pandemic on misinformation about the virus, treatments and vaccines.
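In the contagion sketch from earlier, a warning label doesn’t take the post down; it plausibly lowers the chance that each viewer passes the story on, which amounts to shrinking the transmission rate. An illustrative comparison (the 30% reduction is an assumption, not a figure from the study):

```python
# Reuses simulate_viral_event from the sketch above. A warning label is
# modeled as an assumed 30% cut to the transmission rate beta.
unlabeled = sum(simulate_viral_event(beta=0.3))
labeled = sum(simulate_viral_event(beta=0.3 * 0.7))
print(f"total shares, unlabeled: {unlabeled:.0f}; labeled: {labeled:.0f}")
```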
When asked for comment, a Twitter spokesperson said that many of the measures explored in the study are already part of its misinformation policies, and pointed to the company’s How we address misinformation on Twitter page.
West said the researchers looked at Twitter first because it was the easiest platform from which to collect data. He said the next big step is to apply the model to other, larger platforms, such as Facebook.