Disinformation —

YouTube should stop recommending garbage videos to users

Opinion: The case against YouTube's toxic "up next" algorithm.

YouTube headquarters in San Bruno, California.

In The New York Times, reporters Max Fisher and Amanda Taub explain how YouTube propelled the career of controversial Brazilian politician Jair Bolsonaro. Before his election as Brazil's president, they write, Bolsonaro "had long used the platform to post hoaxes and conspiracies."

Ostensibly, YouTube's recommendation algorithms are politically neutral. However, they're optimized to boost "engagement," and in practice that means promoting videos with extremist and conspiratorial points of view. In Brazil, that has meant bringing a cadre of far-right social media stars to prominence, ultimately helping them gain power in national politics.

According to the Times, Brazilians are also increasingly turning to "Dr. YouTube"—relying on YouTube videos rather than conventional professionals for medical advice.

"Harvard researchers found that YouTube's systems frequently directed users who searched for information on Zika, or even those who watched a reputable video on health issues, toward conspiracy channels," the reporters write.

We've seen similar developments here in the United States, where social media has boosted the profile of anti-vaccine activists.

YouTube's auto-play recommendations aren't great for kids, either. Two years ago, writer James Bridle penned a widely read essay about the increasingly bizarre and disturbing children's videos appearing on YouTube as publishers churn out automatically generated videos to game YouTube's recommendation algorithm.

YouTube's recommendation engine "isn't built to help you get what you want—it's built to get you addicted to YouTube," argued Guillaume Chaslot, an engineer who used to work on YouTube's recommendation algorithm, in a June interview with TNW.

There's a simple way to solve these problems: YouTube should stop making algorithmic video recommendations. When a video finishes playing, YouTube should show the next video in the same channel. Or maybe it could show users a video selected from a list of high-quality videos curated by human YouTube employees. But the current approach—in which an algorithm tries to recommend the most engaging videos without worrying about whether they're any good—has got to go.
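To make that proposal concrete, here is a minimal sketch in Python of the non-algorithmic fallback described above. The data structures (channel_uploads, video_channel, curated_picks) are hypothetical stand-ins for illustration, not YouTube's actual systems or APIs.

# Sketch of the proposed fallback: when a video ends, queue the next upload
# from the same channel; otherwise fall back to a human-curated pick instead
# of an engagement-optimized recommendation. All names are hypothetical.

def next_video(current_id: str,
               channel_uploads: dict[str, list[str]],   # channel id -> uploads in order
               video_channel: dict[str, str],            # video id -> channel id
               curated_picks: list[str]) -> str | None:  # human-reviewed picks
    uploads = channel_uploads.get(video_channel.get(current_id, ""), [])
    if current_id in uploads:
        idx = uploads.index(current_id)
        if idx + 1 < len(uploads):
            return uploads[idx + 1]   # next video in the same channel
    return curated_picks[0] if curated_picks else None   # curated fallback

The point of the sketch is simply that nothing about the "up next" slot requires an engagement-maximizing model; a transparent, predictable rule would do.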

YouTube isn’t using its vast power responsibly

In recent years, YouTube, Facebook, and other major platforms have been embroiled in debates about whether to ban extreme and controversial content. It's a bit of a no-win situation for them. If they leave a video up, many people see it as endorsing the video's odious views. If they take videos down, they get attacked for political bias and censorship.

But the situation isn't so complicated when a platform makes recommendations. It's not censorship if YouTube decides not to recommend your video to other users. And it should be uncontroversial that, if YouTube is going to recommend a video, the site should first verify that the video is worth watching.

What if YouTube staffers thought more like journalists? Journalists understand that they have a responsibility to do more than just run whatever story will get them the most readers, viewers, or clicks. They try to verify claims before reporting them as true, and they will spike a story if it's not newsworthy.

Of course, journalists aren't perfect. News organizations still publish clickbait and inaccurate information from time to time. But most journalists at least try to use their power responsibly, even if they don't always succeed.

YouTube has vastly more influence over people's information diets than any contemporary media organization. Yet YouTube seems far less concerned about using its power responsibly. Far from steering people away from inaccurate and conspiratorial information, YouTube's automated algorithms seem to steer people toward this kind of misinformation because (as a former Facebook employee once put it) "bullshit is highly engaging."

Defenders of online platforms argue that it's not feasible to do human content curation at the scale of a platform like YouTube. That might be true if the goal is to take down the worst content. But the argument doesn't make sense if you're talking about YouTube's recommendations.

There's no reason YouTube's recommendations need to be personalized. YouTube could recommend the same set of videos to every YouTube visitor in a particular country—just as Ars Technica recommends the same articles to everyone who visits our home page. Or it could apply limited personalization algorithms to choose among a subset of videos that have been pre-screened by human reviewers for accuracy and quality.
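As a rough illustration of what "limited personalization over a pre-screened subset" could look like, here is a minimal sketch. The pool of human-approved videos and the subscription signal are assumptions made for the example, not a description of how YouTube actually works.

# Sketch: recommendations are drawn only from videos that human reviewers
# have approved for a given country, then optionally re-ranked by a simple
# per-user signal (subscriptions). All names are hypothetical.

def recommend(country: str,
              approved_by_country: dict[str, list[str]],        # human-reviewed pool
              video_channel: dict[str, str] | None = None,      # video id -> channel id
              user_subscribed_channels: set[str] | None = None, # optional user signal
              k: int = 10) -> list[str]:
    pool = approved_by_country.get(country, [])
    if not user_subscribed_channels or not video_channel:
        return pool[:k]   # same list for every visitor in the country
    # Limited personalization: surface approved videos from subscribed channels first.
    preferred = [v for v in pool if video_channel.get(v) in user_subscribed_channels]
    rest = [v for v in pool if v not in preferred]
    return (preferred + rest)[:k]

The key design choice is that personalization only reorders a human-vetted pool; it never pulls in unreviewed content.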

Instead, YouTube recommends a lot of toxic videos to its users because YouTube has made a business decision to prioritize "engagement" over all competing values. That decision is having negative consequences in Brazil, the United States, and around the world. It's time for YouTube to rethink its approach.

YouTube says it’s working to improve its system

The Times story on Brazil was based on research done at Harvard and the University of Minas Gerais in Brazil. In a Monday tweet, YouTube disputed the team's findings.

"We had a team across product and engineering try to reproduce the recommendation results found by the University of Minas Gerais and Harvard's researchers, and it was unable to do so," YouTube wrote.

YouTube also says that since January, it has been "reducing recommendations of borderline content and videos that could misinform users in harmful ways. We're bringing this to Brazil by the end of this year."

Whether this will be enough remains to be seen. It may not be easy to predict in advance which types of content will prove harmful to users. A better approach would be to drastically scale back automated recommendations first, then resume them only if and when the company is confident it can do so responsibly.
