Yang, one of Botometer’s creators, said he hadn’t heard from Musk’s team and was surprised to see that the world’s richest man had used his tool.
“To be honest, you know, Elon Musk is really rich, right? I had assumed that he would spend money to hire people to build some sophisticated tools or methods himself,” Yang told CNN Business on Monday. Instead, Musk chose to use the Indiana University team’s free, publicly available tool.
Twitter has repeatedly argued that bots are not actually relevant to the deal, since Musk signed a binding agreement that included no bot-related conditions. But in its response to Musk’s filing, the company pushed back on his analysis, noting that Botometer uses a different method than Twitter does to classify accounts and that it “earlier this year designated Musk himself as highly likely to be a bot.”
Botometer actually looks at the issue somewhat differently, according to Yang. The tool does not determine whether an account is fake or spam, nor does it attempt to make any other judgment about the account’s purpose. Instead, it estimates how likely an account is to be automated, meaning managed with software, based on signals such as the time of day it tweets or whether it self-identifies as a bot. “There’s overlap of course, but they’re not exactly the same thing,” he said.
The difference highlights what could become a key challenge in the legal battle between Musk and Twitter: there is no single, clear definition of a “bot.” Some bots are harmless (and, in some cases, even useful) automated accounts, such as those that post weather or news updates. In other cases, a human may be behind a fake or fraudulent account, making it difficult to catch with automated systems designed to weed out bots.
Botometer gives a score from zero to five that indicates whether an account looks “human” or “bot-like.” Contrary to Twitter’s characterization, the tool has since at least June rated Musk’s account as a one out of five on its bot scale — indicating there’s almost certainly a human behind the account. It shows, for example, that Musk tweets fairly consistently on all days of the week, and his average tweeting hours reflect a human schedule. (A bot, by contrast, can tweet all night, during hours when most people are sleeping.)
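Botometer also exposes its scores through a public API. The sketch below is a minimal illustration of pulling that zero-to-five score for a single account, assuming the botometer-python package, a RapidAPI key, and Twitter API credentials (all placeholders here); exact field names such as display_scores can vary by API version.

```python
import botometer

# Placeholder credentials; real keys come from RapidAPI and the Twitter developer portal.
rapidapi_key = "<your RapidAPI key>"
twitter_app_auth = {
    "consumer_key": "<consumer key>",
    "consumer_secret": "<consumer secret>",
}

bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key=rapidapi_key,
    **twitter_app_auth,
)

# Look up a single account by handle.
result = bom.check_account("@elonmusk")

# 'display_scores' are reported on the zero-to-five scale described above;
# 'overall' is the headline number, with lower values looking more human.
print(result["display_scores"]["universal"]["overall"])
```

A result near one, like the figure cited above for Musk’s account, would suggest largely human behavior.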
But in many cases, Yang said, the line between bot and not can be blurry. A human, for example, can log in and tweet from what is usually an automated account. For that reason, the tool is not necessarily suited to definitively classifying individual accounts.
“It’s tempting to set some arbitrary threshold score and consider anything above that number a bot and everything below a human, but we don’t recommend this approach,” according to an explanation on Botometer’s website. “Binary classification of accounts using two classes is problematic because few accounts are fully automated.”
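To make that warning concrete, the kind of arbitrary cutoff rule Botometer discourages would look roughly like the sketch below; the 2.5 threshold is a hypothetical value chosen purely for illustration, not one used by Botometer or Twitter.

```python
BOT_THRESHOLD = 2.5  # hypothetical, arbitrary cutoff on the zero-to-five scale


def naive_is_bot(overall_score: float) -> bool:
    """The binary rule Botometer advises against: anything above the
    threshold is labeled a bot, anything below it a human."""
    return overall_score > BOT_THRESHOLD


# An account scoring 2.4 is labeled human and one scoring 2.6 a bot,
# even though their behavior is nearly identical and both may be only
# partially automated.
print(naive_is_bot(2.4), naive_is_bot(2.6))  # False True
```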
Additionally, Twitter’s firehose only captures accounts that tweet, so an analysis based on it would leave out bot accounts whose sole purpose is, for example, to inflate other users’ follower counts, a form of inauthentic behavior that doesn’t involve tweeting, Yang said.
Musk’s legal team did not immediately respond to a request for comment on this story. But Musk’s filing acknowledges that his analysis was “limited” by the restricted data Twitter provided and the short time he had to conduct the assessment, and it adds that he continues to request additional data from Twitter.
There is private data at Twitter, such as IP addresses and how much time a user spends viewing the app on their device, that can make it easier to assess whether an account is a bot, according to Yang. Twitter, however, maintains it has already provided Musk with more than enough information. The company may also be reluctant to hand such data, which could pose a competitive risk or undermine user privacy, to a billionaire who now says he no longer wants to buy it and has even hinted at launching a rival platform.