‘Hate Is More Engaging’: Researchers Make Headway Measuring Antisemitic Propaganda on Social Media
The year-long lockdown brought on by the COVID-19 pandemic in 2020 cemented the place of social media as the main channel for the spread of antisemitic messages, often of the crudest and most violent kind.
As the virus enveloped the world, a set of coronavirus-related antisemitic memes rapidly took shape. Some online trolls asserted that, just like the Black Death in the 14th century, COVID-19 was a Jewish creation, while others urged that the disease — dubbed the “Holocough” — be used to kill Jews en masse.
Another innovation during this period was the phenomenon of “Zoom bombing.” As social distancing measures compelled Jewish institutions to move real-world events onto online platforms like Zoom, dozens of virtual meetings were hijacked by antisemitic rabble-rousers, pushing what one German research institute described as an “overlap of Nazi-glorifying and anti-Israel content” to a bewildered and often distressed audience.
In tandem with those outrages, established social media platforms like Facebook, TikTok, Instagram, Snapchat and Twitter were flooded with antisemitic posts. According to the Anti-Defamation League (ADL), between May 7 and May 14 of this year alone, more than 17,000 posts on Twitter used some variation of the phrase “Hitler was right.”
Quantifying these antisemitic conversations on social media and distilling their content has become a key task for academic researchers monitoring the spread of antisemitism through a range of social and professional networks. At Indiana University Bloomington, scholars at the Institute for the Study of Contemporary Antisemitism (ISCA) — which today launched a major conference on antisemitism in the US, to continue through next week — are working with colleagues from other departments, such as computer science, to sift through thousands of antisemitic tweets, some of which are written in strongly coded language, while others express their hatred of Jews in candid terms.
“Indiana University has an agreement with Twitter to obtain 10 percent of all tweets on a statistically relevant basis,” Prof. Gunther Jikeli of ISCA explained in an extensive interview with The Algemeiner. “That represents a huge database that we can run queries on.”
Jikeli said that he has been meeting twice weekly with a research team that includes historians and linguists as well as computer programmers. Various generic search terms are utilized, such as “Israel,” “Jews” and “Zionism,” as well as pejoratives like “zionazi” and “k*ke.”
“When we monitor individual tweets, we get the text of the post, the user ID, the number of retweets and responses, and other kinds of metadata that allow us to see the extent of its footprint,” Jikeli said. “We then apply a number of considerations to determine whether the posting is antisemitic.”
Those considerations are based upon the working definition of antisemitism endorsed by the International Holocaust Remembrance Alliance (IHRA), which demonstrates how antisemitic narratives work and how they can manifest in different contexts. Indiana University’s ISCA researchers analyzing Twitter posts are directed to an annotation portal, where they can add further details and perspectives. A series of prompts — Does the tweet qualify as antisemitic under the IHRA definition? How intensely is the antisemitism expressed? Is the Holocaust mentioned? Is the user intending to be sarcastic? — can then be answered in order to accurately categorize the post.
Delivering his paper to the ISCA conference on Monday, Jikeli said that his research attempted to clarify six basic questions: what does antisemitism on social media look like? How widespread is it? Who is pushing these messages? Who is countering them? What is the overall impact? And what can be done to combat it?
On the last question, the issue of censorship, or “deplatforming,” crops up constantly. Between the extremes of absolute censorship of postings deemed antisemitic or racist and an open season for bigotry online, Jikeli is trying to find a more nuanced solution.
Banning antisemites from social media raises both ethical questions about freedom of expression and practical questions about how to close down the millions of social media accounts that traffic in bigotry. Jikeli cited recent research from the University of Amsterdam showing that antisemitic accounts removed from mainstream platforms tend to reappear in fringe locations — among them 4chan, Telegram and Gab, the last being the platform used by Pittsburgh synagogue shooter Robert Bowers in 2018. Encouragingly, however, these restored accounts invariably have far fewer followers on the lesser platforms, and therefore find it harder to engage in what Jikeli called the “monetization of hatred.”
Yet while antisemites are turning to less popular platforms (as well as the dark web) as service providers clamp down on hate speech, the profile of antisemitism on mainstream social media continues to rise. Studies of TikTok and Twitter in the last year showed that despite the platforms’ commitment to enforcing community speech guidelines, antisemitic content increased sharply on both.
Paradoxically, some extremist leaders are eager to avoid incitement on their own platforms for two main reasons: showing more moderate potential supporters that they eschew violence, and avoiding being shut down by the service providers.
Ayal Feinberg, an assistant professor of political science at Texas A&M University who moderated Jikeli’s panel on Monday, quoted the observation of one leading neo-Nazi in this regard. “These Jewish corpses are going to be used as clubs in the years to come to beat down our free speech rights,” moaned Andrew Anglin — publisher of the viciously racist Daily Stormer website — in the wake of the massacre at Pittsburgh’s Tree of Life synagogue.
On both sides of this clash, however, uncertainty prevails over what steps social media companies will take in the future. While extremists fear the imposition of speech guidelines, Jikeli and others point out that, equally, social media companies have a financial interest in directing users to content that they will engage with for a longer period of time.
“Hate content is more engaging, and that’s why the social media companies don’t want a reduction from this point of view; they want more of the same,” Jikeli observed.
Jikeli emphasized that even when it is not fed directly to users, “antisemitic content is easy to find on the internet” — a state of affairs that is unlikely to be resolved overnight.
“These social media companies are very young, and our research is also in its infancy,” Jikeli noted. “We are only at the beginning of our work.”