Using the Internet to Fight Extremism
Internet sites and social media forums are finally trying to combat extremists online. Terrorist groups and others have used the internet as a recruiting tool for years, and now — due to negative publicity and security concerns — those sites are attempting to stem the tide.
Google recently announced that it will use artificial intelligence to identify and remove extremist video content from its sites. As a second-tier effort, Google has said it will add warning labels to other “objectionable” content that does not reach the threshold for removal.
Kent Walker, the general counsel at Google, blogged about the effort, saying: “While we and others have worked for years to identify and remove content that violates our policies, the uncomfortable truth is that we, as an industry, must acknowledge that more needs to be done. … There should be no place for terrorist content on our services.”
Yet the difficulty in finding and removing this content is multilayered.
First, the internet companies must establish a fair and equitable line between what is objectionable and what should be banned. This is a difficult step, with a massive gray area — because the definition of what is “offensive” or “dangerous” will differ greatly depending on whom you ask.
From there, Google — and others, including Facebook and Twitter — has to decide what to do about content that most would find offensive, but that doesn’t reach the previously established threshold of removal. In most cases, that line is, necessarily, subjective. So how can Google create an objective standard? That, indeed, is the question.
In a small step, Google has decided to disable comments and up-votes on videos people might find offensive. The content will still be visible and shareable, but strangers won’t be able to comment on the video and join together in a common cause. As Walker explained, “We think this strikes the right balance between free expression and access to information without promoting extremely offensive viewpoints.”
These videos will also be devoid of any advertising, fixing a problem that recently created a major PR ripple effect for both Google and advertisers whose ads were promoted on offensive videos — without their knowledge or consent.
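The tiered approach described above — remove outright violations, place borderline content in a limited state, leave everything else alone — can be sketched as a simple decision function. This is an illustrative assumption about how such a policy might be modeled, not Google's actual implementation; the flag names and classifier inputs are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical moderation tiers inferred from the article:
#   REMOVE   - violates policy outright; taken down entirely
#   RESTRICT - offensive but below the removal threshold: stays
#              visible and shareable, but comments, up-votes,
#              and advertising are disabled ("limited state")
#   ALLOW    - no action taken

@dataclass
class ModerationDecision:
    visible: bool
    shareable: bool
    comments_enabled: bool
    upvotes_enabled: bool
    ads_enabled: bool

def moderate(violates_policy: bool, is_offensive: bool) -> ModerationDecision:
    """Map two (assumed) classifier flags onto the three tiers above."""
    if violates_policy:
        # Tier 1: remove — nothing remains available.
        return ModerationDecision(False, False, False, False, False)
    if is_offensive:
        # Tier 2: limited state — visible and shareable, but no
        # engagement features and no advertising.
        return ModerationDecision(True, True, False, False, False)
    # Tier 3: normal content, all features enabled.
    return ModerationDecision(True, True, True, True, True)

# A borderline video stays up but loses engagement features and ads.
decision = moderate(violates_policy=False, is_offensive=True)
print(decision.visible, decision.comments_enabled, decision.ads_enabled)
```

The hard part, of course, is not this mapping but producing the two input flags — which is exactly the gray area the article describes.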
So will artificial intelligence be able to bridge this gray area, and help Google solve the problem of offensive content on its sites? The proof is in the doing, so we’ll have to wait and see.