Feb. 2, 2017, 2:07 p.m.
Business Models

Reddit’s /r/worldnews community used a series of nudges to push users to fact-check suspicious news

“We found a method that can invite a much wider readership into the work of dealing with this problem, and at scale.”

To curb the spread of unreliable news, should news organizations and platforms turn to algorithms or rely on users themselves? One recent experiment suggests a potential solution could combine the two.

In a recent collaboration with Reddit’s /r/worldnews community, researchers at MIT found that encouraging users to fact-check potentially misleading or sensationalist stories both doubled the number of comments with links that those posts got and halved their Reddit scores (pushing them farther down the page), “a statistically significant effect that likely influenced rankings in the subreddit,” says the report.

J. Nathan Matias, a Ph.D. candidate and the experiment’s lead researcher, said the experiment demonstrated the power of what he calls the “AI nudge,” which combines human persuasion and algorithms to generate a desired effect, while not imposing any actual limitations on user behavior. (The idea is borrowed from the research of Richard Thaler and Cass Sunstein, who detailed how nudges could be used in government and other institutions.)

“Our results here showed that many of the issues we care about, such as the spread of fake news, are shaped by a combination of human and algorithmic factors, and that we can influence algorithms by persuading people to shift their behavior, even if we don’t control those algorithmic systems,” Matias said.

Here’s how the experiment worked. Subreddit moderators produced a list of news sources “that frequently receive complaints” about “sensationalized headlines and unreliable information.” (Mostly British and Australian tabloid newspapers, interestingly, plus the New York Post.) Over a two-month period, links to those sources submitted to the subreddit were randomly either left alone, appended with a sticky comment encouraging skepticism and a fact-check, or appended with a sticky comment that encouraged both fact-checking and downvoting “if you can’t independently verify these claims.” (A code sketch of this assignment logic follows the findings.) The findings:

Posting a sticky comment encouraging skepticism caused a comment to be 1.28 percentage points more likely to include at least one link. Posting a sticky comment encouraging skepticism and discerning downvotes caused a comment to be 1.47 percentage points more likely to include at least one link. Both results are statistically significant.

Within discussions of tabloid submissions on r/worldnews, encouraging skeptical links increases the incidence rate of link-bearing comments by 201% on average, and the sticky encouraging skepticism and downvotes increases the incidence rate by 203% on average.

On average, sticky comments encouraging fact-checking caused tabloid submissions to receive 49.1% the score of submissions with no sticky comment, an effect that is statistically significant. Where sticky comments include an added encouragement to downvote, I did not find a statistically significant effect.

Encouraging skepticism caused a tabloid submission’s score to grow more slowly. Encouraging skepticism and downvoting may have very slightly increased the growth curve of tabloid submissions, on average on r/worldnews.
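(The report’s incidence-rate figures read most naturally as ratios against the control condition: 201% of the baseline rate means link-bearing comments appeared roughly twice as often, which is where the “doubled” figure above comes from.)

Mechanically, the intervention is simple to picture. Below is a minimal sketch of the three-arm assignment and sticky-comment logic, written against the PRAW Reddit API wrapper; the domain list, message text, and bot name are illustrative assumptions, not the actual CivilServant code:

```python
import random

import praw  # third-party Reddit API wrapper; pip install praw

# Illustrative stand-ins: the real domain list was compiled by the
# /r/worldnews moderators, and the real messages were longer.
TABLOID_DOMAINS = {"tabloid-example.co.uk", "tabloid-example.com.au"}
SKEPTICISM_MSG = (
    "Readers often report this source for sensationalized headlines. "
    "If you can find evidence for or against these claims, please "
    "comment with links."
)
SKEPTICISM_AND_VOTING_MSG = (
    SKEPTICISM_MSG
    + " If you can't independently verify these claims, consider "
    "downvoting this post."
)

# Assumes credentials for a bot account with mod rights in praw.ini.
reddit = praw.Reddit("civilservant-sketch")

for submission in reddit.subreddit("worldnews").stream.submissions():
    if submission.domain not in TABLOID_DOMAINS:
        continue
    # Randomly assign each eligible submission to one of three arms.
    arm = random.choice(["control", "skepticism", "skepticism+voting"])
    if arm == "control":
        continue  # leave the post alone
    msg = SKEPTICISM_MSG if arm == "skepticism" else SKEPTICISM_AND_VOTING_MSG
    comment = submission.reply(msg)
    # Distinguish and pin the comment to the top of the thread.
    comment.mod.distinguish(how="yes", sticky=True)
```

In the real experiment, each assignment would also be logged so that commenting behavior and post scores in the three arms could be compared afterward.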

The experiment was a product of CivilServant, a project Matias and fellow researcher Merry Mou created to help online communities conduct their own experiments around moderation, harassment, and other topics. Matias chose Reddit for the experiment because the platform “delegates substantial power to the users,” both in the form of top-down community moderation and bottom-up community self-policing. Likewise, /r/worldnews was an ideal subject both because of the importance of the community’s content and because of its sheer influence over how millions of people get their news each day. The community, which is one of the default subreddits for new users, has over 15 million subscribers and gets around 450 article submissions a day — making it possibly “the largest single group for discussing world news anywhere in the English-speaking Internet,” Matias notes. Tweaking how it handles unreliable news stories can have significant effects on how those stories spread.

While the experiment’s results were promising, the researchers had significant concerns going in. One, for example, was whether in the process of fact-checking unreliable news stories, /r/worldnews users inadvertently helped spread them. Because Reddit’s ranking algorithm works in part by looking at how much commenting activity a post generates, a flurry of fact-checking could actually make a story more popular, regardless of whether it’s true.
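Reddit’s ranking code was open source at the time, which makes the feedback loop concrete. The widely cited “hot” sort ranks a post by its net vote score and its age; comment activity feeds into visibility and other sort modes more indirectly. Here is a simplified Python rendering of the published formula (a sketch for illustration, not Reddit’s production code):

```python
from datetime import datetime, timezone
from math import log10

# Simplified rendering of the "hot" function from Reddit's open-source
# codebase as of this experiment; the production system may differ.
REDDIT_EPOCH = datetime(2005, 12, 8, 7, 46, 43, tzinfo=timezone.utc)

def epoch_seconds(date: datetime) -> float:
    return (date - REDDIT_EPOCH).total_seconds()

def hot(ups: int, downs: int, date: datetime) -> float:
    score = ups - downs
    order = log10(max(abs(score), 1))  # diminishing returns on raw score
    sign = 1 if score > 0 else -1 if score < 0 else 0
    # Newer posts get a steadily growing time bonus, so a post's score
    # must keep climbing (logarithmically) for it to stay near the top.
    return round(sign * order + epoch_seconds(date) / 45000, 7)
```

Because score enters through a base-10 logarithm, halving a submission’s score, as the fact-checking sticky did, shaves about log10(2) ≈ 0.3 off the order term, the equivalent of making the post roughly 3.75 hours older (0.3 × 45,000 ≈ 13,500 seconds). By the same token, any burst of attention that converts into upvotes lifts a dubious post back up, which is the loop the researchers worried about.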

This concern is larger than Reddit. The so-called “backfire effect,” popularized by researchers Brendan Nyhan and Jason Reifler in a 2010 paper, posited that, in certain situations, corrections actually strengthen belief in false information rather than dispelling it. A more recent paper, published last year, downplayed the frequency of the backfire effect, but there’s still a lot of concern over whether, for example, vigorously fact-checking Donald Trump’s assertions about voter fraud actually leads more people to believe them. “This is the algorithmic wrinkle to that question,” Matias said.

It’s notable that one thing that did not work well was suggesting that users downvote these news stories in addition to fact-checking them. Matias found that encouraging people to downvote posts “essentially removed” the effect that fact-checking had on posts’ Reddit scores — though the lack of voting-level data makes it hard to determine the exact cause of the behavior. (One potential reason? “What psychologists call ‘reactance’ — it’s possible that some people disliked the idea of moderators encouraging downvoting and decided to do the opposite.”)

While many of the experiment’s findings apply only to the specific nature of Reddit, Matias said some of what he learned could also apply both to the efforts of other platforms and to those of news organizations.

“The results, I hope, encourage news organizations to think about the opportunity to trust and work with readers to address challenges like fake news,” he said. “The Reddit moderators could have come up with a policy that relied entirely on the moderation team. Instead, we found a method that can invite a much wider readership into the work of dealing with this problem, and at scale.”

The full report is here.

Photo of tabloid cover by orbakhopper used under a Creative Commons license.
