A key component of his argument: If his $43 billion bid succeeds, he wants to make public the computer code that determines what people see on Twitter, to ensure the platform is fair. The algorithm determines the order in which tweets are served to users, either expanding or limiting how many millions of people see them. Specifically, he said he wants to make the algorithm that decides whether a tweet is promoted or demoted “open source,” or available for the public to view and improve. This, he said, would help prevent “manipulation behind the scenes.”
If someone’s tweets are “highlighted or de-highlighted, that action should remain visible,” he said in a live stream from a TED conference on Thursday.
But Musk’s proposal probably represents a gross oversimplification of what it would take to make that information public, according to researchers who study recommendation algorithms. As social media companies have grown, the software that drives their recommendation engines has become so vast and complex that analyzing it would require access to such an immense firehose of data that most people wouldn’t even have a computer powerful enough to process it. Algorithms on Twitter, Facebook and other social networks sift through billions of pieces of content and draw on a wealth of data to determine a ranking, from the popularity of a post to who posted it.
“The algorithm is not one thing,” said Nick Seaver, an assistant professor of anthropology at Tufts University who researches the algorithms that drive recommendation engines. The systems are so complex, he said, that tech companies themselves often have trouble knowing why their software showed one user one post over another.
“People inside Twitter also want to understand how their algorithm works,” he said.
Twitter, Facebook and Musk did not respond to requests for comment.
The obscurity of the algorithms that drive what people see on social media sites has long been a fascination for conservatives, in particular, who have claimed without evidence that the platforms are biased against them. Musk’s efforts to gain influence over Twitter this week have resulted in jubilation on the right, as some speculate he may try to reduce misinformation policing and bring former President Donald Trump back to the platform.
There have been efforts to make algorithms more transparent through regulation. Last year, several bills were introduced or reintroduced in Congress that explicitly focused on software programs that decide what people see on social media platforms. Efforts are also advancing to regulate artificial intelligence and algorithms globally.
When social networks like Facebook and Twitter were brand new, there wasn’t enough content to warrant a complex algorithm, which is essentially a set of rules, similar to a mathematical formula, that helps analyze content and determine what is most relevant to an individual user.
But as hundreds of millions of users began joining and posting billions of pieces of content, companies began writing software that could learn what users were most likely to click on and then rank their feeds accordingly.
Now, companies like Facebook, video app TikTok and Twitter all use some form of algorithm to figure out what to show users. And this means that not every user on a social network sees the same thing. People obsessed with the outdoors might see posts about the best camping spots for the summer, while a basketball fan might be bombarded with posts about the NBA playoffs.
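As a rough, hypothetical sketch of what that kind of personalization can look like in code, the snippet below ranks the same posts differently for two users; the post fields, weights and interest scores are invented for illustration and are not drawn from any company’s actual system.

```python
# Hypothetical sketch of a personalized feed ranker; not any platform's real code.
# Post fields, weights and interest scores are invented for illustration.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    topic: str        # e.g. "camping" or "nba"
    likes: int
    reposts: int

def score(post: Post, interests: dict[str, float]) -> float:
    """Blend engagement signals with how interested this user is in the topic."""
    engagement = post.likes + 2 * post.reposts
    affinity = interests.get(post.topic, 0.1)   # default: mild interest
    return engagement * affinity

def rank_feed(posts: list[Post], interests: dict[str, float]) -> list[Post]:
    return sorted(posts, key=lambda p: score(p, interests), reverse=True)

posts = [
    Post("trailguide", "camping", likes=120, reposts=10),
    Post("hoopswire", "nba", likes=400, reposts=50),
]
# The same posts, ranked differently for two hypothetical users.
outdoors_feed = rank_feed(posts, {"camping": 1.0, "nba": 0.2})
basketball_feed = rank_feed(posts, {"camping": 0.2, "nba": 1.0})
```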
Facebook and Twitter originally had chronological feeds, and both have faced recent pushes to bring them back as the default, particularly amid criticism that their ranking systems help promote the spread of misinformation.
The complex math that most companies use is often called “machine learning,” which is essentially a very sophisticated form of pattern recognition. A computer program cannot tell whether a particular tweet is funny, interesting or valuable. But by analyzing millions of tweets and a range of factors, such as who liked, shared and retweeted them, it can begin to predict which tweets are likely to gain attention.
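A toy example of that kind of pattern recognition, using made-up numbers and the open-source scikit-learn library rather than anything Twitter is known to run, might look like this:

```python
# Toy illustration of engagement prediction as pattern recognition.
# Features, training data and the new tweet are invented; this is not Twitter's model.
from sklearn.linear_model import LogisticRegression

# Each row: [author follower count, likes in first hour, retweets in first hour]
X_train = [
    [150, 2, 0],
    [90_000, 300, 45],
    [1_200, 10, 1],
    [500_000, 2_000, 400],
]
y_train = [0, 1, 0, 1]  # 1 = the tweet went on to gain wide attention

model = LogisticRegression().fit(X_train, y_train)

# Estimate how likely a new tweet is to gain attention, given its early signals.
new_tweet = [[40_000, 120, 20]]
print(model.predict_proba(new_tweet)[0][1])
```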
Does Twitter control free speech, and should that be fixed? Yes.
Is the management change good for Twitter and free speech? Yes.
But does @elonmusk know what he is talking about? No.
He is super naive when he describes open-sourcing the algorithm and manually auditing decisions.
— Gilad Buchman (@giladbu) April 14, 2022
Even if Twitter made public its secret formula, including the math it uses to “train” its machine learning algorithms, an outsider looking at it wouldn’t be able to draw meaningful conclusions from it, said Michael Ekstrand, an assistant professor of computer science at Boise State University who researches recommendation engines. An outsider would also need to see the underlying data used to train those algorithms — billions of data points showing who viewed, liked or shared tweets, among many other possible factors.
Releasing that data would raise serious privacy concerns, experts said.
“The algorithm is just the tip of the iceberg,” said Robin Burke, a professor of information science at the University of Colorado at Boulder and a researcher specializing in recommendation algorithms. “The rest of the iceberg is all this data that Twitter has,” he said, most of which it can’t release publicly.
Musk is likely well versed in the complex nature of algorithms, experts said. Tesla, which he runs, uses machine learning algorithms to develop autonomous driving technology, an undertaking so massive that Tesla is building its own supercomputer and custom semiconductors to process all the data.
There are other ways to improve transparency that are more practical, experts said, some of which Twitter is already doing. Some critics have called on social media companies to simplify their algorithms so that when there are problems, such as perceived bias against specific groups of people, they can be more easily addressed. Others have called for independent audits within companies.
Twitter has an internal research team, called Machine Learning Ethics, Transparency and Accountability, that looks at potential biases in its algorithms. It has published research, for example, on whether the algorithms that automatically crop pictures contain unintended biases.
One idea that has been floated is to open Twitter up to multiple different algorithms. Twitter could select partners that would get access to its data and develop algorithms tailored to certain audiences.
Nathan Baschez, co-founder of Every, a tech-focused writers’ collective, argued in an article on Friday that Twitter should allow outsiders to create their own algorithms tailored to specific interests. Twitter co-founder Jack Dorsey responded to a tweet from Baschez, confirming that he had floated the idea while he was the company’s chief executive, a role he stepped down from in November.
In congressional testimony in November 2020, Dorsey discussed the idea at length. Algorithms “are responsible for telling us what we see or what we don’t see, and there should be more choice in their use,” he said. Twitter is also funding Bluesky, which aims to decentralize social media in part to give people more choice in how their feeds are organized.
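A bare-bones sketch of what user-selectable feed algorithms could look like in principle (the ranking functions and registry below are hypothetical and do not reflect Twitter’s or Bluesky’s actual design):

```python
# Hypothetical sketch of user-selectable feed algorithms, in the spirit of letting
# people choose how their feeds are ordered; names and logic are invented.
from __future__ import annotations
from typing import Callable

Post = dict  # e.g. {"text": ..., "timestamp": ..., "likes": ...}

def chronological(posts: list[Post]) -> list[Post]:
    return sorted(posts, key=lambda p: p["timestamp"], reverse=True)

def most_engaged(posts: list[Post]) -> list[Post]:
    return sorted(posts, key=lambda p: p["likes"], reverse=True)

# A registry that outside developers could, in principle, add their own rankers to.
ALGORITHMS: dict[str, Callable[[list[Post]], list[Post]]] = {
    "chronological": chronological,
    "most_engaged": most_engaged,
}

def build_feed(posts: list[Post], choice: str) -> list[Post]:
    """Order the feed with whichever algorithm the user has selected."""
    return ALGORITHMS[choice](posts)

posts = [
    {"text": "Hello", "timestamp": 1_650_000_000, "likes": 12},
    {"text": "Big news", "timestamp": 1_649_990_000, "likes": 480},
]
print(build_feed(posts, "chronological")[0]["text"])  # "Hello"
print(build_feed(posts, "most_engaged")[0]["text"])   # "Big news"
```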
A former Twitter employee, who spoke on condition of anonymity to discuss private matters, said the company has considered an “algorithmic marketplace” in which users can choose different ways to view their feeds. But efforts to provide more transparency have proven challenging, the person said, because of how intertwined Twitter’s algorithms are with other parts of the product. Opening it up could reveal trade secrets and invite abuse, the person said.
Burke said the idea has merit, but it would require a restructuring of how Twitter operates and how its data flows. “The fact that it’s hard to imagine happening speaks to the monolithic nature of these social media companies,” he said.
Releasing Twitter’s code to the public could also have negative side effects. Those looking to game the system, such as by spreading disinformation to influence elections, could use the information to manipulate the platform, experts say.
Even if Musk succeeds and the code is published, critics are likely to remain skeptical about whether it is the complete code and whether something was left out. Essentially, the billionaire would be asking Twitter users to trust that there is no foul play behind the scenes, much as the company asks of them today under its current ownership.
“Perhaps it is impossible to eliminate all skepticism and cynicism. There will always be people who don’t believe what’s being said,” Ekstrand said.
Rachel Lerman contributed reporting.