The views expressed in the opinion columns are the author’s own.
Shortly after taking the helm at Twitter, Elon Musk started mass layoffs, including employees responsible for US elections and content moderation. The timing could not be worse, casting serious doubt on the social media platform’s ability to suppress content created by malicious actors.
During last week’s midterm elections, Twitter faced a flood of election-related disinformation and gave an unacceptably inadequate response.
Robust content moderation is critical to preventing election interference and manipulation, both foreign and domestic. And maintaining this moderation seems entirely dependent on mitigating the negative effects of Section 230 of the Communications Decency Act, a law that protects technology companies from being held liable for illegal user-generated content and, in doing so, allows for such negligence.
While the desire for immediate reform exists on both sides of the aisle, the federal government should start by requiring the companies that run social media platforms to share certain data with researchers or face the loss of their legal protections, reducing the room for negligence.
A complete repeal of Section 230 could create a host of problems, and there is plenty of reasonable criticism of previously proposed reforms. Instead, more measured changes to federal regulations would ensure that companies don’t risk catastrophic outcomes in a hasty effort to maximize profits. Exploring new legislation is also necessary as the Supreme Court considers challenges that could change Section 230 as we know it.
A content moderation team with limited capacity is less able to find and label content that falsifies or misrepresents candidates’ positions or character, an area where Twitter is already struggling. The 15 people reportedly left with access to the platform’s moderation tools cannot reasonably be expected to handle the hundreds of millions of tweets sent each day.
Intelligence agencies and technology companies have both uncovered Russian efforts, in 2020 and as recently as this month, that closely resemble the information warfare waged through social media platforms, including Twitter, during the 2016 presidential election.
And election deniers are increasingly using social media platforms to spread conspiracy theories and sow distrust in legitimate electoral processes, promoting narratives of stolen elections.
Disinformation campaigns are also known to spread inaccurate information about voting through channels that are harder to track, such as robocalls and text messages. Musk continues to run the serious risk of allowing Twitter to become another compromised sphere of influence.
Twitter’s imperative to maintain rigorous content moderation extends beyond electoral integrity; content moderation is also widely used to prevent and remove hate speech and other illegal or disturbing content.
After Election Day, Twitter also launched a paid verification service, which was halted a day later because it made it harder for users to discern the authenticity of the sources they rely on. If Twitter relaunches this service without seriously reevaluating its approach, the same issue will plague political information, with candidates, elected officials and journalists impersonated at critical moments of elections.
While Musk claims that widely available verification brings “power to the people,” it will hurt the average user more than any influential one.
Verification offers few tangible benefits for the bearer. However, it helps unverified users navigate the barrage of information they see while using Twitter, providing assurance that accounts truly represent the entity they claim to be.
Removing this safeguard would deeply damage the electoral process. For example, a user posing as a news organization could post fake election calls, reducing turnout later in the day if voters believe the outcome of a race is already decided.
What Twitter’s leadership must face is that open communication is not the service social media platforms provide – curated communication is.
Creating a social media platform where users can share ideas and express their feelings about those ideas is an elementary task. Ensuring that the platform is authentic and cannot be abused is the real challenge.
Tech companies and policymakers share a responsibility to facilitate speech that is truly free: speech that protects the unfettered exercise of voting rights and keeps all users safe. Data transparency is the first step in enabling lawmakers to address the unintended consequences of Section 230.
Dhruvak Mirani is a computer science freshman majoring in government and politics. Mirani can be reached at [email protected].