The Problem Isn’t Fake News, It’s Bad Algorithms—Here’s Why

Fortunately, the same algorithms that created these issues can be used to solve them.

We’d rather have the full picture than see the world through a straw. (Photo: Pexels)

If you’re reading this article, it’s probably because an algorithm picked it for you. Facebook, for instance, factors in around 100,000 variables to determine what shows up in your News Feed. It’s an extraordinarily nuanced (not to mention largely top-secret) process. At its core, however, the Facebook algorithm aims to show you updates you’ll like, based on the ones you’ve liked in the past.
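In spirit, a “show people more of what they’ve liked” ranker is simple to sketch. The toy Python below is purely illustrative; the two features and the multiplicative score are hypothetical stand-ins, not Facebook’s actual signals:

```python
# Toy sketch of an engagement-driven feed ranker. Facebook's real model
# and its ~100,000 variables are proprietary; these features are
# hypothetical stand-ins, chosen only to illustrate the idea.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_like_probability: float  # hypothetical model output, 0-1
    recency: float                     # hypothetical freshness score, 0-1

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order posts by how likely the user is to engage with them."""
    return sorted(
        posts,
        key=lambda p: p.predicted_like_probability * p.recency,
        reverse=True,
    )
```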


The role of “fake news” — much of it disseminated on social media platforms — has justifiably received a lot of attention in recent weeks. But intertwined with it, and largely overlooked, is an equally worrying issue with our social news sources: as algorithms mature, growing more complex and drawing on an ever-deeper graph of our past behavior, we increasingly see only what we want to see. Fortunately, the same algorithms that created these issues can be used to solve them.

Beating back fake news

Once upon a time, we knew to take supermarket tabloid headlines with a grain of salt. By contrast, when stories about the Pope endorsing Trump or Hillary selling weapons to ISIS circulate on platforms like Facebook, all too often these headlines are taken at face value … especially by those inclined to believe them. For many people, separating reliable sources from bunk isn’t as easy as it once was.

Here’s where Facebook and its algorithm may have a role to play. To be clear, this role isn’t that of fact-checker or content gatekeeper. In fact, the idea of Facebook “vetting” what constitutes real news brings with it its own set of problems. But Facebook can use its vast data resources to gauge the authority of the source behind a news update.

This concept of assessing “domain authority” is nothing new. It comes from the SEO world, where sites are ranked, often on a 0-100 scale, based on everything from their longevity to how many other pages link back to them. The New York Times has a domain authority of 99.79/100, according to one measure. In short, it’s trusted. Ending the Fed, by contrast, the source of some of the most widely shared fake news stories during the election, has a domain authority of 44.90/100 at the time of writing.

For Facebook to factor this into its algorithm would be neither technically difficult nor especially controversial. Articles from news sites that are patently fake or suspicious could be devalued accordingly, ensuring that they don’t show up in as many people’s feeds to begin with, and that they don’t spread if they do. The concept is tried and true. It requires minimal human meddling. And the immediate benefit is that we’ll see less fake news and more of the real thing.
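To make that concrete, here is a minimal sketch of authority weighting. The lookup table reuses the two scores cited above; the neutral default and the linear 0-1 mapping are assumptions made for illustration, not a description of any platform’s actual system:

```python
# Sketch: scaling a post's ranking score by the authority of its source.
DOMAIN_AUTHORITY = {
    "nytimes.com": 99.79,       # scores cited in the examples above
    "endingthefed.com": 44.90,
}

def authority_weight(domain: str) -> float:
    """Map a 0-100 domain-authority score to a 0-1 multiplier."""
    score = DOMAIN_AUTHORITY.get(domain, 50.0)  # assumed neutral default
    return score / 100.0

def adjusted_score(engagement_score: float, domain: str) -> float:
    """Devalue articles from low-authority sources so they surface in
    fewer feeds to begin with and spread less if they do."""
    return engagement_score * authority_weight(domain)
```

Under a scheme like this, an article from Ending the Fed would enter the feed with less than half the weight of an otherwise identical article from The New York Times.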

Overcoming our confirmation bias  

More dangerous than fake news, however, is all the real news that we don’t see. For many people, Facebook, Twitter and other channels are the primary — if not the only — source of their news. By design, network algorithms ensure you receive more and more stories and posts that confirm your existing impression of the world and fewer that challenge it.

Over time, we end up in a “filter bubble,” insulated from insights that could enlighten or challenge our perspective. In the algorithm-derived news era, we’re not seeing nearly the full picture. Global events go unnoticed. Seismic shifts in the political landscape are overlooked or dismissed … until it’s too late. Decisions are made without all the evidence on the table. In the end, it’s hard to see how this benefits anyone.

But could the same algorithms that have narrowed our content universe also be used to selectively open it up again? This concept is already familiar to users of streaming music services, for example. You can have your algorithm deliver nothing but ’80s pop if you want, or you can use it to “discover” new and different genres. Could social networks factor a greater element of “discovery” into their algorithms, infusing our feeds with new and competing ideas, rather than just holding up a mirror to our own? Alongside content we’re sure to “Like,” could a certain percentage touch on themes that are contrary, surprising or representative of distinct views?
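One hypothetical way to picture this: a feed that reserves a fixed slice of its slots for material drawn from outside a user’s usual interest graph. In the sketch below, the 20 percent “discovery rate” is an arbitrary number chosen purely for illustration:

```python
import random

def build_feed(familiar, diverse, size=20, discovery_rate=0.2):
    """Fill most of the feed with posts the user is predicted to like,
    reserving a slice for posts from outside their usual interest graph.
    Assumes both pools hold enough posts to fill their share."""
    n_discover = int(size * discovery_rate)
    feed = familiar[: size - n_discover] + random.sample(diverse, n_discover)
    random.shuffle(feed)  # interleave, so discovery items aren't bunched together
    return feed
```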

This is just one idea and it’s not without hurdles. Won’t users get sick of having their assumptions challenged? Don’t we prefer to stay inside that protective filter bubble? Maybe. But I have faith that the vast majority of people are rational. We prefer real news to the fake kind. We’d rather have the full picture than see the world through a straw.

Ryan Holmes is the CEO of Hootsuite and founder of Invoke Media.
