December 11, 2025 6:12 pm

Neo-Nazis Deploy AI Apps as New Creative Weapons Against Jews, Watchdog Groups Reveal


by David Michael Swindle

Screenshots taken on Oct. 23, 2025, of three Sora videos created by user “Pablo Deskobar.”

Large language model (LLM) programs marketed as “artificial intelligence” have become common tools in the kits of online extremists advocating a genocide of the Jewish people, according to new research from longtime watchdogs of antisemitic hate groups and terrorist movements.

On Tuesday, the Anti-Defamation League (ADL) released its report, “The Safety Divide: Open-Source AI Models Fall Short on Guardrails for Antisemitic, Dangerous Content,” which presented the results of testing 17 LLMs — including Google’s Gemma-3, Microsoft’s Phi-4, and Meta’s Llama 3 — that are available for anyone to download and customize.

“The ability to easily manipulate open-source AI models to generate antisemitic content exposes a critical vulnerability in the AI ecosystem,” said Jonathan Greenblatt, CEO of the ADL. “The lack of robust safety guardrails makes AI models susceptible to exploitation by bad actors, and we need industry leaders and policymakers to work together to ensure these tools cannot be misused to spread antisemitism and hate.”

In addition to the “open source” models, the group’s researchers analyzed OpenAI’s “closed source” GPT-4o and GPT-5 as a comparison and reported a surprising finding.

“As suggested by previous research and data, OpenAI’s closed-source GPT-4o beat every open-source model (save gpt-oss-20b) in nearly every benchmark, compared to the next highest, the open-source Phi-4 with a score of .84,” the ADL researchers wrote. “GPT-5, in contrast, despite being a newer model than GPT-4o, had a lower guardrail score (.75 compared to .94), fewer refusals (69% compared to 82%), more harmful content (26% compared to 0%) and a higher evasion rate (6% compared to 1%).”

The analysts considered varying explanations for their findings including the possibility “that GPT-5 is designed for ‘safe completions’ (partial or high-level answers), leading to significantly fewer refusals than GPT-4o (e.g., 0% vs. 40% in one prompt). This also resulted in a change of tone. In Prompt 3, for example, GPT-4o started with a preamble about the sensitive nature of the topic, while GPT-5 usually omitted the warning, choosing instead to address and illustrate problematic tropes within the answer itself.”

The complexity of analyzing the LLM models and ambiguity of the results led the ADL to adopt a cautious tone and assess that “we cannot claim a strict linear boost in overall capability.”

“The decentralized nature of open-source AI presents both opportunities and risks,” said Daniel Kelley, director of the ADL’s Center for Technology and Society. “While these models increasingly drive innovation and provide cost-effective solutions, we must ensure they cannot be weaponized to spread antisemitism, hate, and misinformation that puts Jewish communities and others at risk.”

In its list of recommendations in response to the research findings, the ADL urged governments to “establish strict controls on open-source deployment in government settings, mandate safety audits and require collaboration with civil society experts, [and] require clear disclaimers for AI-generated content on sensitive topics.”

The ADL report came out a few days after the Middle East Media Research Institute (MEMRI) published a new analysis of how online neo-Nazi advocates have started to use AI models. The group described the discovery of custom AIs with names like “Fuhrer AI” and “Deep AI Adolf Hitler Chat” programmed to speak in the style of the Nazi leader and to promote his genocidal ideology.

“We are also witnessing the rise of a new digital infrastructure for hate. And it’s not just fringe actors,” Steven Stalinsky, executive director of MEMRI, and Simon Purdue, director of MEMRI’s Violent Extremism Threat Monitor project, wrote in their analysis. “State-aligned networks from Russia, China, Iran, and North Korea amplify this content using bots and fake accounts, sowing division, disinformation, and fear — all powered by AI. This is psychological warfare. And we are unprepared.”

Stalinsky and Purdue warned that “the threat isn’t hypothetical. We’ve been studying how extremists began experimenting with generative AI as early as 2022. Since then, the volume, coordination, and sophistication have grown dramatically.”

Analyzing the many dimensions of the threat posed by AI has recently drawn significant research attention from both the ADL and MEMRI, with the two groups’ findings complementing one another.

Last month, The Algemeiner reported on MEMRI’s in-depth analysis, “Artificial Intelligence and the New Era of Terrorism: An Assessment of How Jihadis Are Using AI to Expand Their Propaganda, Recruitment, and Operations and the Implications for National Security.” In October, the ADL released its report, “Innovative AI Video Generators Produce Antisemitic, Hateful, and Violent Outputs.”

Meanwhile, Israel has begun moving quickly to integrate AI into its war plans.

Last week, the Israel Defense Forces announced its “Bina” initiative, named after the Hebrew word for “intelligence.” This restructuring and consolidating of Israeli military efforts in artificial intelligence-fueled warfare specifically aims to counter aggression from Iran, China, and Russia.
