YouTube sets policies to restrict extremism
OAKLAND (California) — YouTube has struggled for years with videos that promote offensive viewpoints but do not necessarily violate the company’s guidelines for removal. Now it is taking a new approach: Bury them.
The issue has gained new prominence amid media reports that one of the London Bridge attackers became radicalised by watching YouTube videos of an American Islamic preacher, whose sermons have been described as employing extremely charged religious and sectarian language.
On Sunday (June 18), Google, YouTube’s parent company, announced a set of policies aimed at curbing extremist videos on the platform. For videos that are clearly in violation of its community guidelines, such as those promoting terrorism, Google said it would quickly identify and remove them.
The process for handling videos that do not necessarily violate specific rules of conduct is more complicated.
Under the policy change, Google said offensive videos that did not meet its standard for removal — for example, videos promoting the subjugation of religions or races without inciting violence — would come with a warning and could not be monetised with advertising, or be recommended, endorsed or commented on by users.
Such videos were already barred from carrying advertising, but they were not restricted in any other way.
“That means these videos will have less engagement and be harder to find,” Kent Walker, Google’s general counsel and senior vice president, wrote in a company blog post on Sunday. “We think this strikes the right balance between free expression and access to information without promoting extremely offensive viewpoints.”
Google, which has relied on computer-based video analysis for the removal of most of its terrorism-related content, said it would devote more engineering resources to help identify and remove potentially problematic videos.
It also said it would enlist experts from nongovernmental organisations to help determine which videos were violent propaganda and which were religious or newsworthy speech.
Google said it would rely on the specialised knowledge of groups with experts on issues like hate speech, self-harm and terrorism. The company also said it planned to work with counterextremist groups to help identify content aimed at radicalising or recruiting extremists.
By allowing anyone to upload videos to YouTube, Google has created a thriving video platform that appeals to people with a wide range of interests. But it has also become a magnet for extremist groups that can reach a wide audience for their racist or intolerant views. Google has long wrestled with how to curb that type of content while not inhibiting the freedom that makes YouTube popular.
Part of the challenge is the sheer volume of videos uploaded to YouTube. The company has said that more than 400 hours of video content is uploaded to the site every minute, and YouTube has been unable to police that content in real time. Users flag offensive videos for review, while the company’s algorithms comb the site for potential problems.
Videos with nudity, graphic violent footage or copyrighted material are usually taken down quickly.
Companies throughout the tech industry are working on how to keep platforms for user-generated content open without allowing them to become dens of extremism. Like YouTube, social media companies have found that policing content is a never-ending challenge.
Last week, Facebook said it would use artificial intelligence combined with human moderators to root out extremist content from its social network. Twitter said it suspended 377,000 accounts in the second half of 2016 for violations related to the “promotion of terrorism.”
In the aftermath of terror attacks in Manchester and London, Prime Minister Theresa May of Britain criticised large internet companies for providing the “safe space” that allows radical ideologies to spread.
According to news media reports, friends and relatives of Khuram Shazad Butt, identified as one of the three knife-wielding attackers on London Bridge, were worried about the influence of YouTube videos of sermons by Ahmad Musa Jibril, an Islamic cleric from Dearborn, Michigan.
Jibril’s sermons demonstrate YouTube’s quandary because he “does not explicitly call to violent jihad, but supports individual foreign fighters and justifies the Syrian conflict in highly emotive terms,” according to a report by the International Centre for the Study of Radicalisation and Political Violence.
A spokesman for YouTube said the new policies were not the result of any single violent episode, but part of an effort to improve its service. Google did not respond to a question about whether Jibril’s videos would fall under Google’s guidelines for videos containing inflammatory language but not violating its policies. Jibril still has videos on YouTube, but without ads.
In its blog post, Google acknowledged that “more needs to be done” to remove terrorism-related content from its service. YouTube said it would do more in “counter-radicalisation” efforts, including targeting potential Islamic State recruits with videos that could change their minds about joining the organisation.
Google said that in previous counter-radicalisation attempts, users clicked on ads at an “unusually high rate” to watch videos that debunk terrorism recruitment messages.
Google also announced a series of measures aimed at identifying extremist videos more quickly, an effort that the company started this year as YouTube tries to assure advertisers that its platform is safe for their marketing dollars.
YouTube came under fire this year when The Times of London and other news outlets found examples of brands that inadvertently funded extremist groups through automated advertising — a byproduct of YouTube’s revenue-sharing model that provides content creators a portion of ad dollars.
Brands such as AT&T and Enterprise Rent-A-Car pulled ads from YouTube. Google responded by changing the types of videos that can carry advertising, blocking ads on videos with hate speech or discriminatory content.
Google also created a system to allow advertisers to exclude specific sites and channels in YouTube and Google’s display network. THE NEW YORK TIMES