Advertisers Drop YouTube Over Anti-Semitic Videos, Extremism Concerns

LIONEL BONAVENTURE/AFP/Getty Images

YouTube and its parent company Google have been accused of continuing to host thousands of anti-Semitic hate videos in defiance of their own guidelines, as the UK government announced it was withdrawing advertising from the global video-sharing giant.

French advertising agency Havas, one of the world’s largest marketing groups, pulled hundreds of UK clients out of Google’s advertising network Friday after revelations in the Times newspaper that taxpayers and commercial brands were unknowingly funding extremists through adverts. Dozens of other brands have reportedly withdrawn their business as well, while Havas said it was considering a global freeze on YouTube and Google ads.

The Times found adverts were appearing alongside content from supporters of extremist groups, earning those who posted the videos around £6 per 1,000 views, as well as generating revenue for the company.

The Times has now revealed why the commercial retreat from YouTube has gathered pace: its analysis shows that more than 200 anti-Semitic videos are hosted on the platform. In some cases, the offensive videos were uploaded years ago and have been viewed hundreds of thousands of times. Some even carried advertising, suggesting anti-Semites may be gaining a commercial advantage from a perceived association with well-known brands, the newspaper reports.

The content of the videos varied but relied on common themes: claims that Jews start global conflicts for profit, or the ancient blood libel that Jews kill Christian children and keep others as slaves. Holocaust denial is another recurring theme.

A spokeswoman told the Times: “Google believes in the right for people to express views that we and many others find abhorrent, but we do not tolerate hate speech. We have clear policies against inciting violence or hatred and we remove content that breaks our rules or is illegal when we’re made aware of it.”

Google does not actively look for hate content on YouTube. Instead, its policy is to wait for users to flag it. It said that with 400 hours of video uploaded every minute, it would be impossible to police the platform proactively.

Meanwhile, the BBC reports the UK government has removed its adverts from YouTube amid concerns they are appearing next to “inappropriate” material. In one case, Metropolitan Police promotions appeared alongside content from Hizb ut-Tahrir, an Islamic organisation, banned in many countries, which calls for the establishment of a global caliphate under Sharia law.

The Cabinet Office said it was seeking assurances from YouTube’s owner Google that its messages would be displayed in a “safe and appropriate way” in future. The Guardian newspaper, broadcaster Channel 4 and the BBC itself have also pulled ads citing similar worries over grossly offensive material.

During a recent appearance before the Commons Home Affairs Committee, executives from Facebook, Twitter and Google were told they had a “terrible reputation” for dealing with problems and were censured for not policing their content more effectively, given the billions they make.

Yvette Cooper MP, Chair of the Committee, said:

“We’ve seen too many cases of vile online hate crimes, harassment or threats where social media companies have failed to act.

“It cannot be beyond the wit and means of multi-billion dollar social media companies like Twitter, Facebook, and Google to develop ways to better protect users from hatred and abuse. They have a duty to do so. We will be asking the companies about specific cases, why they didn’t act, and what they intend to do about it now.”

In response, Google is now directing its review teams to flag content that might come across as upsetting or offensive in search results.

As Breitbart Tech reported, the review teams – made up of contractors known as “quality raters” – already comb through websites and other content to flag questionable items such as pornography. Google added a new category, “upsetting-offensive,” to its guidelines for quality raters. For example, content with “racial slurs or extremely offensive terminology” could now be flagged as such.

Follow Simon Kent on Twitter, or e-mail him at: skent@breitbart.com
