A year ago this month, Twitter permanently suspended an account with 340,000 followers for “repeated violations of our COVID-19 misinformation rules.” The owner of that account, former New York Times reporter and vaccine skeptic Alex Berenson, responded with a lawsuit seeking reinstatement. Suffice it to say that few observers thought he had any chance of coming out on top. An attorney went through the complaint page by page on Twitter and concluded that Berenson had hired a hopelessly inept legal team to present a doomed case.
Then, somehow, those lawyers won. Earlier this summer, Twitter put Berenson’s account back online, noting that “the parties have reached a mutually acceptable resolution.” Berenson wasted little time calling out the mainstream media for failing to cover the settlement that led to his return. “I mean, imagine being @dkthomp right now,” he wrote triumphantly, referring to my colleague Derek Thompson, who last year called Berenson “the pandemic’s wrongest man.” Now he is determined to be known as its most wronged man.
Whatever the merits of Berenson’s case, and of the specific tweet that led to his suspension, the result is significant. For years, people who have been removed from Twitter, Facebook, YouTube, and other platforms have sued to get back on, and for years, most of their cases were dismissed. Eric Goldman, a law professor at Santa Clara University School of Law, analyzed 62 such decisions for an August 2021 paper and found that internet companies had won “basically all of them.” When he read about Berenson’s lawsuit, he told me, his first reaction was that it “was doomed to fail like dozens of others” before it.
Berenson’s victory was not based on his argument that his suspension violated the First Amendment; the judge rejected that claim. Instead, his success appears to have hinged on promises made to him by a high-level Twitter employee. “The points you’re raising shouldn’t be a problem at all,” the company’s then–vice president of global communications assured Berenson at one point, according to the complaint. The suit says the same executive later told Berenson that his name had “never come up in discussions” about Twitter’s COVID-19 misinformation policies. Goldman believes the court’s decision to allow a claim based on that correspondence prompted Twitter to settle. Executives at internet services have long been instructed by their lawyers not to discuss individual accounts with users, and not to make promises about what might happen to them, Goldman said, “for reasons that should now be obvious.”
This was not the end of the drama, however. Last week, Berenson published a Substack post that included screenshots of a conversation on Twitter’s internal Slack messaging system from April 2021, obtained in the course of the lawsuit. The images show staffers discussing a recent White House meeting in which members of the Biden administration were said to have asked a “really tough question about why Alex Berenson hasn’t been kicked off the platform,” as one Slack message put it. Another claims that Andy Slavitt, at the time a senior adviser to Joe Biden on the administration’s COVID-19 response, specifically cited data showing that Berenson “was the epicenter of disinformation.” Berenson has since stated that he intends to sue the Biden administration for violating his right to free speech by pressuring Twitter to take action against his account.
Once again, legal experts say his case is unlikely to succeed. Berenson faces a “very high bar” to prove that a private company behaved like a state actor, Evelyn Douek, a contributor to The Atlantic and an assistant professor at Stanford Law School, told me. According to her and Goldman, the Slack messages that Berenson released do not constitute evidence that the government pressured Twitter to remove his account. But Douek is generally troubled by evidence of informal pressure from government officials to limit speech. “I find it troubling,” she said. “It’s certainly unusual to get records of it.”
Andy Slavitt told me he attended a meeting with Twitter but doesn’t recall bringing up Berenson by name. “Twitter sets its own policies, and I wanted to understand them, whether they were good or bad,” he said. I asked him about an MIT data visualization, widely circulated at the time, that depicted an “anti-masker network” with Berenson as an “anchor.” Had he brought this data to the meeting? He said it was possible: “I wouldn’t doubt it, because we tried to use examples.” But he denied asking Twitter to get rid of Berenson, with whom he claimed only a passing familiarity. “I think his name was in a magazine article,” he said. “I don’t remember anything else about it.”
I reached out to Berenson to request an interview, but he declined to answer questions about his legal battle with Twitter and the resulting settlement. “If you want to have a real conversation that culminates in an article discussing Derek’s article as well as my case, we can do that,” he replied, once again referring to my colleague, “but I suspect that’s impossible for you.”
Content moderation is messy by nature. Moderating health or science content can be even messier. Like other social platforms, Twitter tried early in the pandemic to implement new policies that could be applied to conversations about a rapidly changing set of public-health best practices. Twitter’s misleading-information policy for COVID-19 specifically considers a violation any “claim of fact” that is “manifestly false or misleading” and “may affect public safety or cause serious harm.” But these definitions have proved complicated.
Consider Berenson’s final tweet before his suspension last year, which made the following statements about the COVID-19 vaccines: “It doesn’t stop infection. Or transmission. Don’t think of it as a vaccine. Think of it – at best – as a therapeutic with a limited window of efficacy and terrible side effect profile that must be dosed IN ADVANCE OF ILLNESS. And we want to mandate it? Insanity.” The first two statements in the tweet are factually correct. The third does not seem to qualify as a “claim of fact.” The fourth, with its reference to a “terrible side effect profile,” is at least tendentious and possibly misleading, but the overall purpose of the tweet is to express disdain for vaccine mandates. How, exactly, did this tweet lead to Berenson’s removal from the site? A spokesperson for the company would give me only the same statement it issued in July: “Upon further review,” it said, “Twitter acknowledges that Mr. Berenson’s Tweets should not have led to his suspension at the time.”
Stephanie Alice Baker, a sociologist at City, University of London, has taken issue with the concept of “harm” as it is used in the health-misinformation policies of Twitter and Facebook. Scientific consensus and official recommendations have changed over the course of the pandemic, she argues, citing changes in early advice on face masks, as well as the retraction of prominent papers in The Lancet and The New England Journal of Medicine on the safety of various medications used by COVID-19 patients. “Part of the issue with predicating content moderation policies on the concept of harm at the start of the pandemic is that the scientific understanding of harm was uncertain and evolving,” Baker told me recently via email. “Harm is not a neutral concept,” she added. “What is considered harmful is highly dependent on partisan politics.”
Meanwhile, the mere existence of these policies serves as fodder for a culture war over platforms’ efforts to tame harmful speech, and Berenson’s victory has been good for morale among those who believe they’ve been censored. One of the attorneys who represented him, James R. Lawrence III, has tweeted about his other clients, including the Rhode Island doctor Andrew Bostom and the former combat medic Daniel Kotzin, both of whom were banned from Twitter for violating its COVID-19 misinformation policy. “Science is not about truth discovered by technocrats; it’s about discussion,” Adam Candeub, a Michigan attorney who advised President Donald Trump on his efforts to counter alleged anti-Republican bias on social media, told me. Candeub has filed lawsuits on behalf of banned Twitter users but has never found the success that Berenson and Lawrence did. “It worked for them; thank God it happened,” he said.
The next round of lawsuits may go nowhere, but they may still play a role in a growing ecosystem of “aggrieved influencers,” for whom allegations of censorship by platforms are themselves a source of influence. Goldman told me that this issue is only heating up. New efforts to regulate social media at the state level could enable more legal action, with a higher chance of success. If laws like those passed in Florida and Texas were to stand up in court, Goldman said, everything would change: “We’re going to see a massive tsunami of litigation that dwarfs what we’ve seen to date.”