Were Facebook and Twitter Consistent in Labeling Misleading Posts During the 2020 Election?

Editor’s note: To combat election-related disinformation, social media platforms often apply tags that let users know that information from a post is misleading in some way. American University’s Samantha Bradshaw and Stanford Internet Observatory’s Shelby Grossman explore whether two major platforms, Facebook and Twitter, were internally consistent in how they applied their tags during the 2020 presidential election. Bradshaw and Grossman find that the platforms were fairly consistent, but it was still common for identical misleading information to be treated differently.

Daniel Byman

***

As the US midterm elections approach, misleading information can undermine trust in electoral processes. The 2020 presidential election saw false reports of dead people voting, ballot-stuffing schemes, and interference by partisan poll workers. These stories spread across online and traditional media channels, especially social media, contributing to an enduring sense that the election was tainted. As of July 2022, 36 percent of American citizens still believed that Joe Biden did not legitimately win the election.

One way social media platforms have tried to combat the spread of false information is by applying tags that provide contextual information. During the presidential election, platforms applied tags reading "missing context," "partially false information," and "this claim about election fraud is disputed." We know that when social media platforms flag false content, people are less likely to believe it. But are the platforms consistent in how they enforce their policies on misleading election information?

Platform consistency matters. It affects whether people trust the informational tags applied to posts and whether they perceive platforms as fair. But assessing whether platforms enforce their policies consistently is quite difficult.

Measuring the consistency of enforcement

We developed a way to measure platform consistency, leveraging the work of the 2020 Election Integrity Partnership, of which we were both a part. This coalition of research groups worked to detect and analyze misleading information that could undermine US elections, sometimes reporting it to social media platforms when it appeared to violate their policies. From the partnership's work, we created a dataset of 1,035 social media posts that spread 78 false election-related claims. The partnership identified these posts primarily through analysts running systematic cross-platform queries, and it reported the posts directly to Facebook and Twitter. We used this dataset to determine whether each platform was consistent in tagging, or not tagging, posts that share the same claim.
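To make the measurement concrete, here is a minimal sketch of how per-claim consistency could be computed from a dataset like ours. It assumes a hypothetical table of posts with columns claim_id, platform, and was_tagged; the file and column names are illustrative, not the partnership's actual data format.

```python
# Illustrative sketch: measuring per-claim labeling consistency.
# Assumes a hypothetical CSV with columns: claim_id, platform, was_tagged (0/1).
import pandas as pd

posts = pd.read_csv("election_posts.csv")  # hypothetical file name

def consistency_by_claim(df: pd.DataFrame, platform: str) -> pd.Series:
    """A platform is 'consistent' on a claim if it tagged all posts
    sharing that claim or none of them."""
    subset = df[df["platform"] == platform]
    tag_rate = subset.groupby("claim_id")["was_tagged"].mean()
    return (tag_rate == 0) | (tag_rate == 1)

for platform in ["Facebook", "Twitter"]:
    consistent = consistency_by_claim(posts, platform)
    share = consistent.mean() * 100
    print(f"{platform}: {share:.0f}% of claims handled consistently")
```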

In our study, we find that Facebook and Twitter were largely consistent in how they enforced their rules. For 69 percent of the false claims, Facebook treated every post sharing the claim the same way, either always or never adding a tag. It labeled the remaining 31 percent of claims inconsistently. The findings for Twitter are almost identical: 70 percent of claims were labeled consistently and 30 percent inconsistently. When the platforms were not consistent, we were sometimes puzzled by the patterns we saw and sometimes unable to understand why they handled seemingly identical content differently.

Let’s look at some examples. First, consider an article that circulated on Election Day implying that Michigan’s vote-counting delays were evidence of systematic voter fraud. Facebook tagged every instance of this article that we saw with the label: “Some election results may not be available for days or weeks. That means things are happening as expected.”

But consider another case from the day after the election: a chart from the website FiveThirtyEight circulated showing vote counts over time in Michigan. While the figure illustrated a standard and legitimate vote count, many observers misinterpreted it as evidence of voter fraud. Facebook often tagged these images with a “missing context” label linking to a PolitiFact statement: “No, these FiveThirtyEight graphics do not prove voter fraud.” But when a post shared a photo of a screenshot of the graph, making the image a bit blurry, Facebook did not apply a tag.

Left: An untagged Facebook post resharing the FiveThirtyEight graphic with a misleading frame. Right: A tagged Facebook post spreading the same graphic and claim.

We found many examples of Twitter handling identical tweets differently as well. On November 4, 2020, Trump tweeted: “We are up BIG, but they are trying to STEAL the Election. We will never let them do it. Votes cannot be cast after the Polls are closed!” Twitter placed the tweet behind a warning label. In response, Trump supporters shared the text of the tweet verbatim. Sometimes Twitter tagged these tweets and sometimes it did not, even though the tweets were identical and posted within minutes of each other.

Left: An untagged tweet retweeting Trump’s tweet. Right: A tagged tweet retweeting Trump’s tweet.

Explaining the discrepancies

Sometimes the discrepancy was explainable, though not necessarily in a way that matched the platforms’ stated policies. Looking at the 603 tweets in our dataset that spread false claims, we found that Twitter was 22 percent more likely to flag tweets from verified users than from unverified users. In one striking example, we observed five tweets falsely claiming that Philadelphia destroyed mail-in ballots to make an audit impossible. Twitter tagged only one of them, the one from a verified user. Surprisingly, that verified user’s tweet was a retweet of an untagged tweet in our dataset.

Left: An untagged tweet from an unverified Twitter user. Right: A tagged tweet from a verified Twitter user, retweeting the untagged tweet on the left.
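For concreteness, here is a minimal sketch of one way a relative difference in tag rates, such as the 22 percent figure above, could be computed. It assumes a hypothetical table of the reported tweets with was_tagged and verified columns; the file and column names are illustrative, and the statistical comparison used in the paper may differ.

```python
# Illustrative sketch: comparing tag rates for verified vs. unverified accounts.
# Assumes a hypothetical table with columns: was_tagged (0/1), verified (bool).
import pandas as pd

tweets = pd.read_csv("reported_tweets.csv")  # hypothetical file name

rates = tweets.groupby("verified")["was_tagged"].mean()
verified_rate = rates.get(True, 0.0)
unverified_rate = rates.get(False, 0.0)

if unverified_rate > 0:
    relative_increase = (verified_rate / unverified_rate - 1) * 100
    print(f"Verified accounts were tagged {relative_increase:.0f}% more often "
          f"({verified_rate:.1%} vs. {unverified_rate:.1%}).")
```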

On the one hand, it might make sense for Twitter to enforce its disinformation policies more often against tweets from verified accounts. Tweets from these accounts may be more likely to go viral, making their false messages more dangerous. On the other hand, a novel feature of our dataset is that we know Twitter employees saw every tweet the partnership reported. It is not clear why they would deliberately choose not to tag the tweets from unverified users.

What explains the remaining discrepancies we saw on Facebook and Twitter? Our best guess is that the platforms sent the posts we flagged to a content moderation queue, and that different moderators sometimes made different decisions about whether a post deserved a tag. If this is correct, it suggests that additional moderator training may be beneficial.

There may also be untapped opportunities for automation. We repeatedly observed inconsistent enforcement on tweets sharing the same misleading news article. In these cases, platforms could decide once whether the article deserves a tag and then apply that decision automatically, both retrospectively and prospectively, as sketched below. Despite these inconsistencies, it is notable that both Facebook and Twitter addressed misleading narratives consistently about 70 percent of the time.
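Here is a minimal sketch of the kind of automation we have in mind, assuming a hypothetical store that maps normalized article URLs to a single label decision. The function and field names are illustrative and do not correspond to any platform's actual moderation tooling.

```python
# Illustrative sketch: propagating one label decision to every post
# sharing the same news article, retrospectively and prospectively.
from urllib.parse import urlsplit

# Hypothetical store of per-article decisions, e.g. {"example.com/story": "missing context"}
article_labels: dict[str, str] = {}

def normalize_url(url: str) -> str:
    """Strip scheme, query strings, and fragments so minor URL variants match."""
    parts = urlsplit(url)
    return parts.netloc.lower() + parts.path.rstrip("/")

def record_decision(article_url: str, label: str) -> None:
    """Store a moderator's decision for an article once."""
    article_labels[normalize_url(article_url)] = label

def label_for_post(post_url: str) -> str | None:
    """Apply the stored decision to any post, new or old, sharing the article."""
    return article_labels.get(normalize_url(post_url))

# Example: one decision covers both the original link and a variant with tracking parameters.
record_decision("https://example.com/story", "missing context")
print(label_for_post("http://EXAMPLE.com/story?utm_source=tw"))  # -> "missing context"
```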

An opportunity for improvement and further study

The analysis we were able to perform highlights how collaborations between industry and academia can provide insight into typically opaque processes: in this case, the decision to tag misleading social media posts.

As the 2022 midterm elections approach, social media platforms must build their capacity to enforce policies consistently and also provide researchers with access to content that was flagged or removed before the election. This would allow a wider group of researchers to investigate content moderation decisions, improving transparency and strengthening trust in the information ecosystem, a prerequisite for a trusted electoral process.
