Right after Elon Musk took control of Twitter, hateful content rose as moderation was loosened, according to a USC computer scientist and his team.
In October, billionaire and Tesla chief executive Elon Musk purchased the polarizing social media platform Twitter for $44 billion, promising to change how the site operated.
In various statements, most of them tweets, Musk made allusions to decreased moderation on the platform, pledging to make the site a bastion of “free speech.”
In the months that followed, he implemented several new initiatives at the company and the site, including firing hundreds of employees, reinstating hundreds of previously banned accounts, stripping verification badges from most users who did not pay $8 per month for a Twitter Blue subscription, and pledging to address the site’s bot problem.
But new research shows that the site also underwent another change after Musk took over — it became more hateful.
According to data collected by researchers from USC, UCLA, UC Merced and Oregon State University, daily use of hate speech by those who previously posted hateful tweets nearly doubled after Musk finalized the sale. And the overall volume of hate speech also doubled sitewide.
The research was conducted by Keith Burghardt, Matheus Schmitz and Goran Muric of USC, UCLA’s Daniel Fessler, Daniel Hickey of Oregon State and Paul Smaldino of UC Merced.
The group studied the tweets of users who had posted hateful content in the month before and the month after Twitter was sold, and also collected a sampling from the general user pool.
The researchers first developed a “hate lexicon” of 49 racist, antisemitic, homophobic and transphobic terms. Then, they examined the pre- and post-sale postings using an artificial intelligence tool that scanned for the hateful terms and their frequency, weeding out “non-toxic,” or non-hateful, uses of the terms.
“We first had to create a set of words that we could determine as being hateful,” Burghardt, a computer scientist with the Information Sciences Institute at the USC Viterbi School of Engineering, said in a news release. “Our aim was to find words that were relatively high precision, meaning that if people are using these words, it’s unlikely they’re being used in a non-hateful manner.”
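The lexicon-matching step the researchers describe might be sketched roughly as follows. This is an illustrative approximation only: the placeholder terms stand in for the study’s actual 49-word lexicon, which is not reproduced in this article, and the AI step that filtered out non-hateful uses of matched terms is omitted.

```python
import re

# Placeholder tokens -- stand-ins for the study's unpublished 49-term lexicon.
HATE_LEXICON = {"slur_a", "slur_b", "slur_c"}

def count_lexicon_hits(tweet: str, lexicon: set) -> int:
    """Count tokens in a tweet that match the lexicon.

    The study additionally used an AI classifier to discard non-toxic
    uses of matched terms; that filtering step is not modeled here.
    """
    tokens = re.findall(r"[a-z_]+", tweet.lower())
    return sum(1 for tok in tokens if tok in lexicon)

def daily_rate(tweets: list, lexicon: set) -> float:
    """Average lexicon hits per tweet across one user's daily tweets."""
    if not tweets:
        return 0.0
    return sum(count_lexicon_hits(t, lexicon) for t in tweets) / len(tweets)
```

Comparing `daily_rate` for each flagged user over the month before the sale against the month after would yield the kind of per-user before/after measurement the researchers report.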
The volume of hate speech posted by hateful users surged after the sale was finalized, although researchers noted that hate speech on Twitter was on the rise even before Musk bought it.
At the outset of their project, the researchers hypothesized that, with Musk nodding toward less restrictive policies, hate speech would increase. But they were unsure by just how much.
“I didn’t have any expectations one way or the other,” Fessler, who is director of the UCLA Bedari Kindness Institute, said in an interview with The Times, “because it’s very difficult to gauge in advance. You don’t know what the population of users potentially producing such content is, you don’t know what the size of the population is or what their frequency of tweeting and retweeting is.”
But the results shocked Burghardt.
“What was surprising was ... that this stuff had increased so dramatically,” he told The Times. “We had not expected that hate users would actually be using more hate words after Elon Musk joined Twitter.”
Fessler noted that expressions of intolerance had been on the rise since the start of Donald Trump’s presidential campaign and that Musk “winked at those sentiments often enough that a population of active or potential Twitter users who shared those views recognized the opportunity they were being given.”
“From a kind of 30,000-foot level,” Fessler said, “the Twitter effect is really reflective of larger trends in society.”
Researchers noted that they could not “prove a causal relationship between Musk’s takeover and hate speech.” The CEO’s changes to moderation are “poorly documented,” they said.
The research is an important step in identifying how and why people can become radicalized online by what has been termed stochastic terrorism, in which hate speech is used to incite violent acts, Burghardt said.
Social media could play a role in that radicalization, he said, but more research is needed.
Once users join these hate groups, even on social media sites that are not traditionally hateful, Burghardt said, they’re immediately more hateful and more antagonistic.
“We expect that, once they join these sites, they become more likely to advocate violence,” he said, “and then some small proportion actually commit these acts.”
But despite the documented increase in hate speech, Fessler noted that it was coming from a small population.
“This is in no way a majority view,” he said. “And society as a whole is ... increasingly tolerant of difference and increasingly diverse.”
Twitter has substantial influence despite its relatively small size, so Fessler is concerned that it’s apparently subject to the whims of one person — Musk.
“It is worrisome when a platform with the reach of Twitter can be purchased by one individual and even modest attempts to turn it to more socially constructive ends ... are deconstructed and removed,” Fessler said.
This story originally appeared in Los Angeles Times.