TikTok’s design makes it a breeding ground for misinformation, researchers found. They wrote that videos can be easily manipulated and republished on the platform and shown alongside stolen or original content. Pseudonyms are common; parody and comedy videos are easily misinterpreted as fact; popularity affects the visibility of comments; and publication dates and other details are not clearly displayed on the mobile app.
(However, researchers at the Shorenstein Center noted that TikTok is less vulnerable than platforms like Twitter or Facebook to so-called brigading, in which groups coordinate to make a post go viral.)
During the first quarter of 2022, more than 60 percent of videos with harmful misinformation were viewed by users before being removed, TikTok said. Last year, a group of behavioral scientists who had worked with TikTok said that an effort to attach warnings to posts with inappropriate content had reduced sharing by 24 percent, but limited views by just 5 percent.
Researchers said misinformation will continue to thrive on TikTok as long as the platform refuses to release data about the sources of its videos or share insight into its algorithms. Last month, TikTok said it would provide some access to a version of its application programming interface, or API, this year, but would not say whether it would do so before the midterms.
Filippo Menczer, a professor of informatics and computer science and the director of the Observatory on Social Media at Indiana University, said he had proposed research collaborations to TikTok and been told, “Absolutely not.”
“At least with Facebook and Twitter, there is some level of transparency, but, in the case of TikTok, we don’t have any data,” he said. “Without resources, without being able to access the data, we don’t know who is suspended, what content is removed, if they act on the reports or what the criteria are. It’s completely opaque and we can’t independently assess anything.”