Meta’s Artificial Intelligence Chatbot Spews ‘Unacceptable’ Antisemitic Conspiracy Theory
A new artificial intelligence chatbot released by Facebook’s parent company Meta Platforms Inc. has promoted the antisemitic conspiracy theory that Jewish people control the world’s economy, Bloomberg first reported.
Meta Platforms — formerly known as Facebook before it officially changed its name in October 2021 — released a public demo of its new BlenderBot 3 in the US on Friday. The artificial intelligence chatbot converses with users, who can then provide feedback on how to improve the responses they receive. BlenderBot 3 can also search the internet to discuss various topics.
In a conversation with a Wall Street Journal reporter that was shared on Twitter on Sunday, the chatbot claimed it was “not implausible” to believe that Jewish people control the economy, and added that Jews have “been a force in American finance and are overrepresented among America’s super rich.”
Rabbi Abraham Cooper, associate dean of the Simon Wiesenthal Center and co-chair of the United States Commission on International Religious Freedom, criticized Meta regarding the incident.
“It is simply unacceptable that as Facebook, now Meta, moves into AI, that it hasn’t taken steps — from the beginning — to ensure they aren’t going to allow the migration of the haters to their powerful next generation technological platforms,” he told The Algemeiner on Tuesday. “If Meta can’t figure [it] out, maybe Congress will.”
In separate conversations with users, BlenderBot 3 described Meta CEO Mark Zuckerberg as “too creepy and manipulative” and said “his business practices are not always ethical.” It also claimed that Donald Trump is still president of the United States.
Meta Platforms did not respond to The Algemeiner’s request for comment. On Monday, however, Joelle Pineau, managing director of fundamental AI research at Meta, said that the company had already collected 70,000 conversations from the public demo. Based on feedback provided by 25 percent of participants on 260,000 bot messages, 0.11 percent of BlenderBot’s responses were flagged as inappropriate, 1.36 percent as “nonsensical,” and 1 percent as off-topic.
“When we launched BlenderBot 3 a few days ago, we talked extensively about the promise and challenges that come with such a public demo, including the possibility that it could result in problematic or offensive language,” Pineau said. “While it is painful to see some of these offensive responses, public demos like this are important for building truly robust conversational AI systems and bridging the clear gap that exists today before such systems can be productionized … We continue to believe that the way to advance AI is through open and reproducible research at scale. We also believe that progress is best served by inviting a wide and diverse community to participate.”
Chatbots have run into similar problems before. In 2016, a Microsoft chatbot called Tay was taken offline within 48 hours after it praised Nazi leader Adolf Hitler and made other racist and misogynistic comments, Bloomberg reported.