Facebook, Twitter release 2022 midterm policies to fight the ‘big lie’

COMMENTARY

For months, activists have urged tech companies to crack down on the spread of falsehoods claiming the 2020 presidential election was rigged, warning that such misinformation could delegitimize the 2022 midterms, in which all seats in the House of Representatives and more than one-third of the Senate are up for grabs.

Still, the social media giants are moving forward with a familiar playbook for policing disinformation this election cycle, even as false claims that the recent presidential election was rigged continue to plague their platforms.

Facebook is again choosing not to remove some allegations of election fraud, instead applying labels that redirect users to accurate information about the election. Twitter says it will apply misinformation labels to, or remove, posts that violate its rules by undermining trust in the election process, such as unverified claims that the 2020 race was rigged. (The company did not specify when it would remove offending tweets, but said labeling reduces their visibility.)

This stands in contrast to platforms such as YouTube and TikTok, which ban and remove claims that the 2020 election was rigged, according to their recently released election plans.

Disinformation experts warn that the severity of companies’ policies and how well they enforce their rules can make the difference between a peaceful transfer of power and an election crisis.

“The ‘big lie’ has entered our political discourse and become a talking point for election deniers to pre-declare that midterm elections will be rigged or filled with voter fraud,” said Yosef Getachew, media and democracy program director at the liberal-leaning government watchdog Common Cause. “What we’ve seen is that Facebook and Twitter aren’t really doing the best job, or any job, in terms of removing and combating misinformation about the ‘big lie.’”

The political stakes of these content moderation decisions are high, and the most effective path forward is not clear, especially as companies balance their desire to support free expression against their interest in preventing content on their networks from endangering people or the democratic process.

In 41 states that held nomination contests this year, more than half of the GOP winners so far — about 250 candidates in 469 races — have embraced former President Donald Trump’s false claims about his defeat two years ago, according to a recent Washington Post analysis. In the 2020 battleground states, candidates who deny the legitimacy of those elections have claimed nearly two-thirds of the GOP nominations for state and federal offices with authority over elections, according to the analysis.

And these candidates are taking to social media to spread their election lies. According to a recent report by Advance Democracy, a nonprofit organization that studies disinformation, candidates supported by Trump and those associated with the QAnon conspiracy theory have posted allegations of election fraud hundreds of times on Facebook and Twitter, attracting hundreds of thousands of interactions and retweets.

The findings follow months of revelations about the role of social media companies in facilitating the ‘stop the steal’ movement that led to the siege of the US Capitol on January 6. An investigation by The Washington Post and ProPublica earlier this year found that Facebook was hit with a barrage of posts — at a rate of 10,000 a day — attacking the legitimacy of Joe Biden’s victory between Election Day and the Jan. 6 riots. Facebook groups, in particular, became incubators for Trump’s baseless election-rigging claims before his supporters stormed the Capitol, demanding he get a second term.

“Candidates not conceding is not necessarily new,” said Katie Harbath, a former director of public policy at Facebook and a technology policy consultant. “There is an increased risk [now] because it comes with a [higher] threat of violence,” though it is unclear whether that risk is the same this year as it was during the 2020 race, when Trump was on the ballot.

Facebook spokesman Corey Chambliss confirmed that the company will not outright remove posts from ordinary users or candidates claiming that there is widespread voter fraud, that the 2020 election was rigged or that the upcoming 2022 election will be fraudulent. Facebook, which last year renamed itself Meta, does ban content that violates its rules against incitement to violence, including threats of violence against election officials.

Social media companies like Facebook have long preferred to take a hands-off approach to false political content to avoid having to make tough calls about which posts are true.

And while platforms have often been willing to ban posts intended to confuse voters about the election process, their decisions to take action on more subtle forms of voter suppression — especially by politicians — have often been politically charged.

They have often faced criticism from civil rights groups for not adopting policies against more subtle messages designed to sow doubt in the electoral process, such as claims that Black Americans’ votes won’t count or aren’t worth casting because of long lines at the polls.

During the 2020 election, civil rights groups pressured Facebook to expand its voter suppression policies to address some of those indirect attempts to manipulate the vote and to apply its rules to Trump’s comments more aggressively. For example, some groups argued that Trump’s repeated tweets questioning the legitimacy of mail-in ballots could discourage vulnerable populations from voting.

But when Twitter and Facebook applied labels to some of Trump’s posts, they faced criticism from conservatives who argued that the companies’ policies discriminated against right-leaning politicians.

These decisions are further complicated by the fact that it is not entirely clear whether labels are effective at correcting users’ misperceptions, according to experts. Warnings that posts may be false can raise doubts about the veracity of the content, or they can have a backfire effect on people who already believe those conspiracy theories, according to Joshua Tucker, a professor at New York University.

A user might look at a label and think, “Oh, I should [question] this information,” Tucker said. Or a user might see a warning label and say, “Oh, this is yet more evidence that Facebook is biased against conservatives.”

And even if labels work on one platform, they may not work on another, or they may push people annoyed by them toward platforms with more permissive content moderation standards.

Facebook said users complained that its election-related labels were overused, according to a post by Nick Clegg, the company’s president of global affairs, and said it is considering a more tailored strategy this cycle. Twitter, by contrast, said it saw positive results last year when it tested redesigned misinformation labels that redirected people to accurate information, according to a company blog post.

However, the specific policies social media giants adopt may be less important than the resources they use to catch and address offending posts, according to experts.

“There are so many unanswered questions about the effectiveness of implementing these policies,” Harbath said. “How will everything actually work in practice?”
