June 27, 2017 11:46 am

Using the Internet to Fight Extremism

by Ronn Torossian


An antisemitic drawing posted on Fatah’s official Facebook page. Photo: Palestinian Media Watch.

Internet sites and social media forums are finally trying to combat extremists online. Terrorist groups and others have been using the internet as a recruiting tool for years, and now, spurred by negative publicity and security concerns, those sites are attempting to stem the tide.

Google recently announced that it will use artificial intelligence in order to identify and remove extremist video content from its sites. As a second-tier effort, Google has said that it will add warning labels to other “objectionable” content that does not reach the threshold for removal.

Kent Walker, the general counsel at Google, blogged about the effort, saying: “While we and others have worked for years to identify and remove content that violates our policies, the uncomfortable truth is that we, as an industry, must acknowledge that more needs to be done. … There should be no place for terrorist content on our services.”

Yet the difficulty in finding and removing this content is multilayered.


First, the internet companies must establish a fair and equitable line between what is objectionable and what should be banned. This is a difficult step with a massive gray area, because the definition of what is “offensive” or “dangerous” will differ greatly depending on whom you ask.

From there, Google and others, including Facebook and Twitter, have to decide what to do about content that most would find offensive, but that doesn’t reach the previously established threshold for removal. In most cases, that line is necessarily subjective. So how can Google create an objective standard? That, indeed, is the question.

In a small step, Google has decided to disable comments and up-votes on videos that people might find offensive. The content will still be visible and shareable, but strangers won’t be able to comment on the video and join together in a common cause. As Walker explained, “We think this strikes the right balance between free expression and access to information without promoting extremely offensive viewpoints.”

These videos will also be devoid of any advertising, fixing a problem that recently created a major PR ripple effect for both Google and advertisers whose ads were promoted on offensive videos — without their knowledge or consent.
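To make the tiered approach concrete, here is a minimal sketch, in Python, of how a platform might route a video based on a classifier’s extremism score. The thresholds, score source, and function names are hypothetical illustrations for this column, not Google’s actual system.

```python
from dataclasses import dataclass

# Hypothetical thresholds; a real system would tune these and pair
# model scores with human review rather than rely on a single number.
REMOVE_THRESHOLD = 0.9      # above this, the video is taken down
RESTRICT_THRESHOLD = 0.6    # above this, engagement and ads are limited


@dataclass
class ModerationDecision:
    remove: bool             # take the video down entirely
    warning_label: bool      # show a warning before the video plays
    comments_enabled: bool   # allow comments and up-votes
    ads_enabled: bool        # allow advertising on the video


def moderate(extremism_score: float) -> ModerationDecision:
    """Map a classifier score to the tiered handling described above."""
    if extremism_score >= REMOVE_THRESHOLD:
        # Tier 1: content that violates policy is removed outright.
        return ModerationDecision(remove=True, warning_label=False,
                                  comments_enabled=False, ads_enabled=False)
    if extremism_score >= RESTRICT_THRESHOLD:
        # Tier 2: borderline content stays visible and shareable, but is
        # labeled, loses comments and up-votes, and carries no advertising.
        return ModerationDecision(remove=False, warning_label=True,
                                  comments_enabled=False, ads_enabled=False)
    # Below both thresholds: no intervention.
    return ModerationDecision(remove=False, warning_label=False,
                              comments_enabled=True, ads_enabled=True)


if __name__ == "__main__":
    for score in (0.95, 0.7, 0.2):
        print(score, moderate(score))
```

The sketch shows why the hard part is not the plumbing but the thresholds themselves: every number in it encodes a judgment about where “objectionable” ends and “banned” begins.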

So will artificial intelligence be able to bridge this gray area, and help Google solve the problem of offensive content on its sites? The proof is in the doing, so we’ll have to wait and see.

Ronn Torossian is CEO of 5WPR, one of America’s leading privately owned PR firms.

The opinions presented by Algemeiner bloggers are solely theirs and do not represent those of The Algemeiner, its publishers or editors.
