Link found between online hate and offline violence
New research shows that hate speech on Twitter can predict the frequency of real-life hate crimes.
According to a “first of its kind” study from New York University, cities with a higher incidence of a certain kind of racist tweets report having more actual hate crimes related to race, ethnicity and national origin.
A team of researchers from NYU analysed the location and linguistic features of 532 million tweets published between 2011 and 2016, examining “two types of tweets: those that are targeted – directly espousing discriminatory views – and those that are self-narrative – describing or commenting upon discriminatory remarks or acts”.
The prevalence of each type of discriminatory tweet was then compared to the number of actual hate crimes reported during that same time period in those same cities.
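The city-level comparison described above amounts to correlating two counts per city. A minimal sketch of that idea, using made-up numbers rather than the study's data (the researchers' actual models are not described here):

```python
from statistics import mean

# Hypothetical per-city counts — illustrative only, not the study's data.
cities = {
    "City A": {"targeted_tweets": 120, "hate_crimes": 14},
    "City B": {"targeted_tweets": 45,  "hate_crimes": 5},
    "City C": {"targeted_tweets": 200, "hate_crimes": 22},
    "City D": {"targeted_tweets": 30,  "hate_crimes": 3},
}

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

tweets = [c["targeted_tweets"] for c in cities.values()]
crimes = [c["hate_crimes"] for c in cities.values()]
r = pearson(tweets, crimes)  # close to 1.0 for this toy data: more tweets, more crimes
```

A positive coefficient in a sketch like this only indicates association, not causation — which is exactly the caveat the researchers raise.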
“We found that more targeted, discriminatory tweets posted in a city related to a higher number of hate crimes,” said co-lead researcher and NYU assistant professor of computer science and engineering Rumi Chunara.
“This trend across different types of cities (for example, urban, rural, large and small) confirms the need to more specifically study how different types of discriminatory speech online may contribute to consequences in the physical world.”
The analysis included cities with a wide range of urbanisation, varying degrees of population diversity, and different levels of social media usage, the team explained, noting it limited the dataset to tweets and bias crimes describing or motivated by race-, ethnicity- or national origin-based discrimination.
They also identified a set of discriminatory terms and phrases that are commonly used on social media across the country, as well as terms specific to a particular city or region.
“These insights could prove useful in identifying groups that may be likelier targets of racially motivated crimes and types of discrimination in different places,” the researchers said.
And, while most tweets included in this analysis were generated by actual Twitter users, the team found that an average of 8 per cent of tweets containing targeted discriminatory language were generated by bots.
The researchers also found “a negative relationship between the proportion of race/ethnicity/national-origin-based discrimination tweets that were self-narrations of experiences and the number of crimes based on the same biases in cities”.
While experiences of discrimination in the real world are known psychological stressors with health and social consequences, “the implications of online exposure to different types of online discrimination – self-narrations versus targeted, for example – need further study”, Assistant Professor Chunara said.
The results represent one of the largest, most comprehensive analyses of discriminatory social media posts and real-life bias crimes in the US, although the researchers emphasise that the specific causal mechanisms between social media hate speech and real-life acts of violence still need to be explored.
The study, called “Race, Ethnicity and National Origin-based Discrimination in Social Media and Hate Crimes across 100 US Cities”, was led by Assistant Professor Chunara and co-authored by NYU assistant professor of biostatistics and social and behavioral sciences Stephanie Cook, as well as Tandon students Kunal Relia and Zhengyi Li.