Internet hygiene —

We desperately need a way to defend against online propaganda

Despite years of fake news online, we still have no idea how to defend against it.

Image: Would you get your Internet from this van?

We've learned something from the investigation into whether Russia meddled in the US election that has nothing to do with politics. Humans are more vulnerable than ever to propaganda, and we have no clue what to do about it.

Social media as weapon

A new report in The Washington Post reveals that the Obama administration and intelligence community knew about Russian attempts to disrupt the 2016 election months in advance. But they did virtually nothing, mostly because they didn't anticipate attacks from weaponized memes and propaganda bots.

Former deputy national security adviser Ben Rhodes told the Post that the members of the intelligence community focused on more traditional digital threats like network penetration. They wanted to prevent e-mail leaks, and they also worried about Russian operatives messing with voting machines. "In many ways... we dealt with this as a cyberthreat and focused on protecting our infrastructure," he said. "Meanwhile, the Russians were playing this much bigger game, which included elements like released hacked materials, political propaganda, and propagating fake news, which they'd pursued in other countries."

Rhodes' comments dovetail with many other reports over the past two years spotlighting how Russia has been honing its social-media propaganda skills. Last year, Time published a massive report in which senior intelligence officials talked about how Russians pretending to be American voters infiltrated social media groups, spread conspiracy stories via Facebook accounts for fictional media outlets, and bought Facebook ads to spread fake news.

Anyone who has ever succumbed to the clickbait headlines on Russia Today knows that Russian media hacks are adept at crafting dank memes of legendary stickiness. The weird part is that those hacks are now working alongside state-sponsored hackers. We don't typically think of Facebook posts as a "cyberthreat," but now we have ample evidence that they are.

In 2015, The New York Times published an article by Adrian Chen about Russian "troll farms" full of people paid to post pro-Putin comments on social media. A year later, Chen discovered that many of the Russian troll accounts had become "fake conservatives" posting about Trump. In a study published the day before the election, researchers at USC revealed that 20 percent of election-related tweets came from an army of 400,000 bots that appeared to originate in the US state of Georgia.

“Don’t feed the trolls” isn’t enough

We know that these kinds of bot-driven memes fool ordinary people, in part because of two different incidents involving fake news about Ebola outbreaks. In 2014, a US nurse tried to return to her Maine home after treating Ebola patients in West Africa. That's when the joke news site Amplifying Glass ran a story claiming she was being treated in a hospital for Ebola symptoms (she was not; she was perfectly healthy). The story gained so much traction that the nurse was kicked out of her apartment by a landlord who feared exposure to the disease.

A couple of months later, the Russian troll farm that Chen followed for The New York Times tested its powers with its own fake Ebola story on Twitter. The farm used its thousands of accounts to spread disinformation about a fictional outbreak of Ebola in the state of Georgia. For a while, the story was so widely shared that the hashtag #EbolaInAtlanta was trending in Georgia.

The Ebola stories are just two examples of how regular people get taken in by fake news—sometimes with dire consequences. Whether it's a story in The Onion or fake news spread by state-sponsored trolls, people fall for it. And this has been going on for years. So why isn't there a fake-news blocking tool yet?

Let's return for a moment to the reaction that the Obama administration had when it realized that Russians were spreading fake news on US social media networks. The administration had no idea how to combat meme attacks without coming across as partisan. If Obama had come out immediately and warned people to beware of fake news that made Clinton look bad, he would have been pilloried. And for good reason: such a statement sounds exactly like the kind of propaganda he wanted to stop. So the Obama camp engaged in a tactic that dates back to the earliest days of the Web: don't feed the trolls. Instead of calling out Russia's propaganda bots, the administration said nothing.

And that's pretty much where we're at with fake news more generally. There have been weak efforts by Facebook and Google to label news as "disputed" if it might be fake. But we need more than that. We need to fundamentally change people's expectations when it comes to what they're reading online.

Stranger danger for Internet news

The problem is that most people weren't raised to expect that their social spaces would be full of bots, blabbing the results of simple algorithms and infecting human conversations with misdirection. Rarely do audiences on Twitter and Facebook pause to wonder where their information is coming from.
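How simple are those algorithms? A toy sketch in Python (entirely hypothetical: no real accounts, APIs, or network calls) shows how little logic it takes to turn a trigger keyword into a canned talking point:

    import random

    # Toy illustration of how little logic a propaganda bot needs.
    # Hypothetical: no real accounts, APIs, or network calls involved.
    TALKING_POINTS = [
        "Wake up! The media won't report this: {topic} is a hoax.",
        "My cousin saw the truth about {topic} firsthand. Share before it's deleted!",
        "Why is nobody talking about {topic}? Retweet if you want answers.",
    ]

    def canned_reply(post_text, trigger="ebola"):
        """Return a templated reply if the post mentions the trigger keyword."""
        if trigger in post_text.lower():
            return random.choice(TALKING_POINTS).format(topic=trigger.title())
        return None  # stay silent otherwise

    print(canned_reply("Is the Ebola outbreak in Atlanta real?"))

Run across thousands of accounts, even logic this thin can make a hashtag trend.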

So what is to be done? Helping Americans understand the difference between truthful information and malicious propaganda is a bipartisan issue. Plus, as I said earlier, it goes way beyond politics. Companies selling snake-oil "remedies" have everything to gain from fake health news, for example. Same goes for other hucksters. Marketing companies often hire social media teams to seed forums and comment sections with positive reviews of games and movies in order to sway public opinion and drum up business.

In a sense, social media audiences need basic "stranger danger" lessons. Every kid knows that the nice person offering candy and a ride might actually be trying to kidnap them. We need the same instincts in online public spaces, too. The friendly person tweeting at you from Georgia might actually be a bot under the control of Russian hackers. Don't trust Internet people until you know them.

One of the most hopeful responses I've seen to these problems has come from an unlikely place: the Girl Scouts of the USA. The group has just created a cybersecurity badge that girls can earn alongside more traditional badges for skills like camping, first aid, and music (apparently the "whittling" badge I was so proud of as a kid is no longer offered).

It's encouraging to see the Girl Scouts teaching cybersecurity to children, because this is the kind of basic skill that people will need more than ever in years to come.

Perhaps the next step will be encouraging teachers and librarians to teach kids defensive social-media skills. Lessons would start with the basics, like how to find the sources for an article and how to understand who has made edits on Wikipedia. More advanced students could be trained to recognize the kinds of bots that are used in propaganda campaigns. Eventually, students could learn to build tools that block known sources of malicious information, much the way Block Together works to prevent the spread of trolling and sockpuppet armies on Twitter.
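To give a flavor of what such a blocking tool might look like, here is a minimal sketch in Python. Everything in it is an assumption for illustration, not Block Together's actual code: the Post record, the thresholds, and the blocklist format are all hypothetical. It drops posts whose authors appear on a shared blocklist and filters accounts with bot-like posting patterns.

    from dataclasses import dataclass

    @dataclass
    class Post:
        author: str
        text: str
        posts_per_day: float   # author's posting rate, as reported by the platform
        account_age_days: int  # age of the author's account

    def load_blocklist(path):
        """Read one blocked username per line from a shared blocklist file."""
        with open(path) as f:
            return {line.strip().lower() for line in f if line.strip()}

    def looks_like_bot(post):
        """Crude heuristic: very new accounts posting at inhuman rates."""
        return post.posts_per_day > 100 and post.account_age_days < 30

    def filter_feed(posts, blocklist):
        """Drop posts from blocked authors and from accounts that look automated."""
        return [
            p for p in posts
            if p.author.lower() not in blocklist and not looks_like_bot(p)
        ]

The thresholds here are guesses, and real bot detection is far subtler. But the point stands: the building blocks (a shared list, a couple of heuristics) are simple enough for a classroom.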

We're in the early stages of figuring out how to defend against weaponized memes, but that doesn't mean we won't be able to do it. In the end, there is a defense for every attack. But first we have to recognize the danger.
