August 16, 2022 1:55 pm

Meta’s Antisemitic Chatbot Highlights Challenges of Toxic Content in Evolving AI Industry

by Shiryn Ghermezian

Entrance sign at Meta’s headquarters complex in Menlo Park, California. Photo: Wikimedia Commons.

An artificially intelligent chatbot that shared antisemitic conspiracy theories and anti-Israel messages in conversations with users highlights the challenges of bias and discrimination within the booming AI industry as it continues to grow, an Israeli AI expert told The Algemeiner on Monday.

BlenderBot 3, a chatbot released by Facebook and Instagram’s parent company Meta Platforms Inc. on Aug. 5, was found saying in one conversation with a Wall Street Journal reporter last week that it believes Jewish people control the economy and that Jews have “been a force in American finance and are overrepresented among America’s super rich.”

“This recent Meta experience underscores the need for being vigilant,” Yoav Shoham, CEO of AI21, an AI product company, told The Algemeiner. “But at the end of the day it’s solvable. Cars today are much safer than they were 20 years ago. Language models — and systems built on them — will be that way too, and sooner than 20 years from now.”

Artificial intelligence, which uses computer science, machines, and data to mimic human problem-solving and decision-making, is a rapidly growing industry. Globally, it is valued at over $65 billion and is expected to exceed a trillion dollars by 2030, according to a report cited by Yahoo in June. Seventy percent of businesses around the world are expected to use AI by the end of the decade.

But AI remains a relatively young technology facing considerable growing pains, especially with chatbots, conversational AI software designed to hold conversations with people. In 2016, Microsoft pulled its chatbot Tay offline within 48 hours after it praised Nazi leader Adolf Hitler and made other racist and misogynistic comments.

Artificial intelligence learns from humans, and chatbots are meant to mimic human interactions. In light of the rapid spread of online hatred and antisemitism on social media, Shoham was asked whether there is a long-term concern that artificial intelligence will pick up on the antisemitism circulating in society.

“No, it’s not inevitable,” he said.

Shoham, who spent decades as a professor of computer science at Stanford University, argued that AI developers and engineers must work to create a code of ethics when building AI products. He emphasized the need for better “control” and “monitoring” and said the issue of dealing with human tendencies, such as antisemitism, will only become more pressing as AI technologies become more common in the human world.

“Just as there are rules of civil discourse among people, there can and should be such rules for discourse with (and among) bots,” Shoham said. “We know a lot about how to control for it — from cleaning the training data, to monitoring the output and blocking toxic output that got through anyway, and a variety of other methods.”
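One of the control methods Shoham mentions, blocking toxic output that got through, can be illustrated with a minimal sketch. Everything here is hypothetical: the blocklist terms, function names, and fallback message are illustrative placeholders, not any real moderation system or API used by Meta or AI21.

```python
# Minimal sketch of output-side moderation for a chatbot: check each
# candidate reply against a curated blocklist before showing it to the
# user. Real deployments use learned toxicity classifiers, not a static
# word list; this only illustrates the "block toxic output" step.

BLOCKLIST = {"toxicword1", "toxicword2"}  # placeholder terms; a real list would be curated

def is_toxic(reply: str) -> bool:
    """Flag a candidate reply if it contains any blocklisted term."""
    words = {w.strip(".,!?").lower() for w in reply.split()}
    return bool(words & BLOCKLIST)

def moderate(reply: str) -> str:
    """Return the reply unchanged, or a safe fallback if it was flagged."""
    if is_toxic(reply):
        return "I'm not able to discuss that."
    return reply
```

In practice this filter would sit behind a statistical classifier and human review, and would complement the upstream step Shoham names first: cleaning the training data so fewer toxic replies are generated at all.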

In Meta’s case, the issue with its chatbot was fixed shortly after the problems surfaced in the press.

Shoham said that though he was surprised that “Meta fell into this trap,” he recognized that AI is still a relatively new technology and that more work needs to be done to prevent similar incidents.
