YouTube's poor AI training led to rise of child exploitation videos

Contract workers told 'BuzzFeed' that the platform usually gives them confusing guidelines.

YouTube uses algorithms and human moderators, but it still couldn't prevent the rise of disturbing, child-exploitative videos on the platform. Why? There are likely several reasons -- one of them, according to a BuzzFeed report, is the confusing set of guidelines the company gives its contract workers for rating content. The publication interviewed search quality raters who rate videos to help train the platform's search AI to surface the best possible results for queries. It found that the workers are usually instructed to give videos high ratings based mostly on production values.

As one rater said:

"Even if a video is disturbing or violent, we can flag it but still have to say it's high quality [if the task calls for it]."

That means raters have to mark videos as "high quality" even if they contain disturbing content, which can give those links a boost in search results. The problem? Child-exploitative videos found on the platform usually have good production values: they typically take some effort to create and are professionally edited.

After the media put the spotlight on the existence of disturbing videos aimed at children, YouTube started asking raters to decide whether a video is suitable for unsupervised 9-to-12-year-old viewers. They were told to mark a video as "OK" if they think a child can watch it, or "NOT OK" if it contains sexually explicit content, violence, crude language, drug use or actions that encourage bad behavior, such as pranks.

However, the rater BuzzFeed interviewed found the examples YouTube gave confusing, at best. Taylor Swift's Bad Blood music video, for instance, is NOT OK, based on the examples the platform gave. But videos containing moderate animal violence are apparently OK.

Bart Selman, a Cornell University professor of artificial intelligence, told BuzzFeed:

"It's an example of what I call 'value misalignment.' It's a value misalignment in terms of what is best for the revenue of the company versus what is best for the broader social good of society. Controversial and extreme content -- either video, text, or news -- spreads better and therefore leads to more views, more use of the platform, and increased revenue."

YouTube will have to draw up a more concrete set of guidelines and make rating less confusing for its human workers if it wants to clean up its platform. Otherwise, enlisting 10,000 employees to help review videos won't make much of a difference. We reached out for YouTube's response to BuzzFeed's report and will update this post once we hear back.

Update: A YouTube spokesperson told Engadget:

"We use search raters to sample and evaluate the quality of search results on YouTube and ensure the most relevant videos are served across different search queries. These raters, however, do not determine where content on YouTube is ranked in search results, whether content violates our community guidelines and is removed, age-restricted, or made ineligible for ads. Those responsibilities fall to other groups working across Google and YouTube.

"We have tremendous respect for both the full-time employees and contractors who work day in and day out to help improve our platform for viewers. We strive to work with vendors that have a strong track record of good working conditions and we offer wellness resources to people who may come across upsetting content during the course of their work. We have dedicated employees working to develop and evolve the wellness resources we offer to the full-time employees and contractors we have handling these difficult jobs. When issues come to our attention, we alert these vendors about their employees' concerns and work with them to address any issues."