Researchers Are Upset That Twitter Is Dismissing Their Work On Election Interference

“I think it's dangerous that companies like Twitter are discrediting academic studies in the reckless way they are,” professor David Carroll told BuzzFeed News.

Last week, the Oxford Internet Institute published a timely paper suggesting that polarizing, sensational, or outright fake political news and information was shared disproportionately in the U.S. immediately before and after the 2016 presidential election in key battleground states. The study’s conclusion — based on an analysis of over 7 million tweets collected between November 1 and 11 — suggested a coordinated effort to target crucial voters. It was quickly picked up by major news outlets as the latest in a string of revelations about the role social media played in the spread of false information in 2016.

Twitter, however, attempted to discredit the research. The Washington Post reported that, in response to the paper — which the company received ahead of publication — Twitter “complained about the limits of research conducted using publicly available sets of tweets, as Oxford’s was, through a function called the Twitter search API, which allows developers and researchers to get certain public data from company servers.” Twitter went on to note that “Research conducted by third parties through our search API about the impact of bots and misinformation on Twitter is almost always inaccurate and methodologically flawed.” (One of Twitter’s legitimate claims is that the study was not peer reviewed.)

Later that day, Twitter reiterated that argument in a blog post summarizing its closed testimony before the joint House and Senate intelligence committees about the role it may have played in Russian interference with the 2016 election. “Studies of the impact of bots and automation on Twitter necessarily and systematically under-represent our enforcement actions because these defensive actions are not visible via our API,” Twitter said.

Twitter’s comment was a clear and pointed warning: Third-party academic research about its platform is limited in scope and shouldn’t always be trusted. Kris Shaffer, a professor and data scientist at the University of Mary Washington who has studied bots and misinformation on Twitter, summed it up this way: “You can only trust Twitter to tell you what's really going on on Twitter.”

Shaffer is not alone in that frustration. At a moment when lawmakers and citizens alike are seeking answers from Twitter, researchers who study social media say they are disappointed by the company’s lack of transparency and its dismissal of their research.

“I think it's dangerous that companies like Twitter are discrediting academic studies in the reckless way they are,” said David Carroll, an associate professor at Parsons who studies the intersection of media, politics, and data. “And that’s because these researchers are the only ones working in public with the data.”

Twitter insists it's not trying to discredit or smear researchers. In a statement provided to BuzzFeed News, a spokesperson suggested the company is looking for ways to work together. “As a company we know we have more work to do to support external research on these important issues,” Twitter told BuzzFeed News. “We look forward to more engagement with these researchers."

While it’s understandable that Twitter would want to push back with some skepticism toward critical research, some academics worry that the company's preemptive dismissal of outside analysis leaves its influence unknown and unchecked at a crucial moment of reckoning.

And lawmakers seem to agree. Thursday, after Twitter’s testimony, Sen. Mark Warner, the lead Democrat on the Senate committee, told reporters the discussion was "deeply disappointing" and described Twitter's presentation as "inadequate" in almost every way. When questioned, Warner didn’t rule out issuing subpoenas to Twitter.

To its credit, Twitter — unlike other big platforms — allows researchers and developers to plug into its API, making at least some study of the social network possible. And Twitter did reveal to Congress that it found 200 accounts that appear to be linked to the same Russian groups that purchased roughly $100,000 in ads on Facebook in an effort to influence the 2016 election. (Separately from bots, it also revealed that three accounts from the Russian media group RT spent $274,100 on ads targeting US markets in 2016.) But some researchers worry the disclosures barely scratch the surface of untoward bot activity on Twitter, which already catches more than 3.2 million suspicious accounts globally per week.

For Shaffer, Warner’s reaction to Twitter’s testimony was particularly dispiriting. “Essentially, the company is saying, ‘Don't trust the researchers, but don't put your stock in the regulators either,’” he told BuzzFeed News. “Social media companies like Twitter want to have their cake and eat it, too. They want to have politics funneled through their platforms and benefit financially, but they don’t want to deal with the safeguards.”

Shaffer likened Twitter’s responses this week to the far-right’s war on the mainstream media.

“If Twitter instills a distrust in outside experts as well as government officials, then what reliable sources of info is left? Only Twitter,” he said. “It all feels very familiar, telling people ‘don't trust government to get this right’ and making it a free speech issue. There are no checks on the company, and that’s really problematic when it’s not being forthright with Congress.”

But to some researchers, Twitter's reluctance to share more information about its platform makes some sense.

“Platforms want to be the ones who define the metrics through which engagement (attention), and thus advertising dollars, is understood on their services,” Jonathan Albright, research director of the Tow Center for Digital Journalism at Columbia University, told BuzzFeed News. “Since data on platforms forms the basis of their business model, providing this kind of public service and setting up the tech infrastructure and staffing needed for researchers and public to access it has zero economic incentive.”

David Carroll argues that Twitter has every reason to protect its bot detection and prevention data so that fraudsters can’t exploit it. “Even if an academic comes up with a really accurate bot detection scheme that gives great insight into the problem, Twitter has every motivation to discredit it despite the fact that the public has every right to know,” he said.

So, is Twitter shirking its responsibility to the public? Ultimately, Albright said, “they offload the task and costs [of making sense of these platforms] to the public and academia and then dispute it.”

Researchers who've had their work disputed by Twitter say the company is underestimating the rigor of their methods. Emilio Ferrara, an assistant research professor in the USC Department of Computer Science, argues that third-party investigations of Twitter’s platform aren’t as “inaccurate and methodologically flawed” as the company claims. For example, Twitter said in Thursday’s blog post that data gathered by researchers from the API ignores things like user keyword filtering and algorithmic ranking — Ferrara thinks that’s overstated.

He argues that when researchers study bots, they’re looking at how those bots interact with humans — number of retweets, for example. “Since our findings look at engagement, they obviously account for Quality Filter and Safe Search features that the platform enacts — differently from Twitter's claim,” Ferrara told BuzzFeed News. At heart it’s a disagreement over the nature of Twitter’s filtering. Twitter argues that because its filters stop spammy, potentially bot-propagated content from reaching users’ feeds, that content is effectively neutralized and therefore shouldn’t be counted in studies. Researchers say they do take that into consideration and argue that just because the content is filtered doesn’t mean it isn’t still on the platform.

Others hope that Twitter will find a way to include outside researchers to combat what they see as a formidable threat to democracy. Carroll, for one, is a proponent of adopting the white hat model used by the hacking world, in which select groups are given access to proprietary tools, privileges, and information to do research for the good of the platform — and society. As he sees it, it’s the only way for a company like Twitter to move out of its untenable position.

“Either they admit that they know what is going on inside their platform and give clear reasons why they need to keep it secret — or they explain that they don't understand the scope of the problem,” Carroll said. “But the current responses right now are unacceptable.”
