Twitter rose to fame in the Arab uprisings nearly a decade ago as a key source for real-time crisis information, but that reputation has withered since the platform’s transformation into a magnet for hate speech and misinformation under Elon Musk.
Historically, Twitter’s greatest strength has been as a tool for gathering and disseminating life-saving information and coordinating emergency relief during times of crisis. Its old-school vetting system meant that sources and news were largely reliable.
Now the platform, renamed X by new owner Musk, has gutted content moderation, reinstated previously banned extremist accounts and allowed users to simply purchase account verification, helping them profit from viral but often inaccurate posts.
The rapidly developing Israel-Gaza conflict is widely seen as the first real test of Musk’s version of the platform during a major crisis. For many experts, the results confirm their worst fears: that the changes have made it a challenge to distinguish fact from fiction.
“It’s sobering, but not surprising, to see Musk’s reckless decisions exacerbating the Twitter information crisis surrounding the already tragic Israel-Hamas conflict,” Nora Benavidez, senior adviser at Free Press, told AFP.
The platform is flooded with violent videos and images, some authentic, but many fake or mislabeled, recycled from entirely different years and countries.
Nearly three-quarters of the most viral posts promoting falsehoods about the conflict are being pushed by accounts with verified checkmarks, according to a new study by watchdog NewsGuard.
In the absence of guardrails, this has made it “very difficult for the public to separate fact from fiction,” while escalating “tension and division,” Benavidez added.
– “Information firehose” –
This became apparent on Tuesday after a deadly attack on a hospital in war-torn Gaza, as ordinary users seeking real-time information expressed frustration that the site had been rendered unusable.
Confusion reigned as fake accounts with verified checkmarks shared images of past conflicts while jumping to conclusions on unverified videos, illustrating how the platform had handed the megaphone to paying subscribers regardless of accuracy.
Accounts masquerading as official sources or news media ignited passions with inflammatory content.
Disinformation researchers warned that many users were treating an account run by an activist group called the Israel War Room, stamped with a gold symbol that X says indicates “an official organization account,” as a supposedly official Israeli source.
India-based bot accounts known for anti-Muslim rhetoric further muddied the waters by pushing false anti-Palestinian narratives, the researchers said.
Meanwhile, Al Jazeera warned it had “no connection” to a Qatar-based account falsely claiming ties to the Middle Eastern broadcaster, while urging its followers to “exercise caution”.
“It’s become extremely challenging to navigate the information firehose — there’s a relentless news cycle, push for clicks and amplification of noise,” Michelle Ciulla Lipkin, head of the National Association for Media Literacy Education, told AFP.
“It is now clear that Musk sees X not as a reliable source of information, but just another one of his business ventures.”
The chaos stands in stark contrast to the Arab uprisings of 2011 that fueled a surge of optimism in the Middle East about the platform’s potential to spread authentic information, mobilize communities and elevate democratic ideals.
– “Break the glass” –
The degradation of the site’s basic functionality threatens to hamper the humanitarian response, experts warn.
Humanitarian organizations have typically relied on such platforms to assess needs, prepare logistical plans and determine whether an area was safe enough to send in first responders. And human rights researchers use social media data to conduct investigations into possible war crimes, said Alessandro Accorsi, a senior analyst at the Crisis Group.
“The flood of misinformation and the restrictions X placed on access to their API,” which allows third-party developers to harvest the social platform’s data, had complicated those efforts, Accorsi told AFP.
X did not respond to AFP’s request for comment.
The company’s chief executive Linda Yaccarino signaled that the platform was still serious about trust and safety, insisting that users were free to adjust their account settings to enable real-time information sharing.
But researchers expressed pessimism, saying the site has abandoned efforts to elevate mainstream news sources. Instead, a new program to share ad revenue with content creators encourages extreme content designed to increase engagement, critics say.
Pat de Brun, head of Big Tech Accountability at Amnesty International, said X should use every tool available, including putting in place so-called “glass-breaking measures” aimed at preventing the spread of lies and hate speech.
“Platforms have clear responsibilities under international human rights standards,” he told AFP.
“These responsibilities increase in times of crisis and conflict.”