Twitter is best known as a popular social media platform where users engage with one another through tweets. But new research is highlighting the app’s darker side.
While the firm’s terms of service explicitly prohibit posts that glorify self-harm, researchers have shown that the app does little to enforce the rule.
The experts’ findings suggest that Twitter may be quick to announce bans on such content, but tends to look the other way when it comes to actually moderating it.
Researchers from the NCRI (Network Contagion Research Institute) estimate that hundreds of thousands of users repeatedly violate those terms while Twitter pays little attention. They back up the claim of carelessness with evidence that hashtags related to self-harm have been growing prolifically since last October.
Twitter became aware of the issue, and of how poorly it was moderating such content, last October, around the same time that a UK-based charity notified a regulator about problems with the app’s algorithm and its recommendation system.
Research by 5Rights established that when accounts set up with child-aged avatars searched for terms like ‘self-harm’, the app’s algorithm steered them toward accounts sharing images and video clips of people cutting themselves.
Twitter then responded publicly, telling The FT that promoting, encouraging, or glorifying suicide or self-harm was firmly against company policy, and that its main goal was to keep users safe and rid the platform of violent content.

The company also vowed to take strong action against anyone involved in violence or its glorification of any kind.
But a growing body of evidence from the past few months shows that Twitter has done little or nothing to address the issue, and that even when it has tried, its efforts have largely failed.
Furthermore, the NCRI report showed that even users with small followings were able to get away with promoting such content. The researchers also found that the number of users searching for these terms in hashtags has doubled since last fall.
Mentions of such terms have also increased by nearly 500% on the app, despite Twitter being alerted to the issue some time ago.
To put that in perspective: last October there were about 3,000 such posts, and by July of this year the figure had climbed to nearly 30,000. Once again, the only answer Twitter can give is that it is trying to combat what it calls a major concern.
So why is there so much neglect, and what can the app do to improve its oversight?
Researchers from NCRI point to several reasons why the platform fails to moderate this content properly.

For starters, users are deliberately evasive: they communicate in code words that Twitter’s moderators may not recognize, and some claim the blood in their pictures is fake, which keeps the content from being taken down. Twitter, meanwhile, appears far more engaged in moderating political content, which worries communities more, is reported more often, and is therefore much harder to overlook.
The issue is a serious one, and researchers warn that if it continues unchecked, it could fuel serious disorders and ultimately lead to severe or even fatal injuries.