How Misinfodemics Spread Disease

Researchers are increasingly finding that online misinformation fuels the spread of diseases such as tooth decay, Ebola, and measles.

A dose of a measles, mumps, and rubella vaccine (Lucy Nicholson / Reuters)

They called it the Great Stink. In the summer of 1858, London was hit with a heat wave of noxious consequence. The city filled with a stench emanating from the opaque, pale-brown fluid flowing along what was once poetically known as the “Silver Thames.” Politicians whose offices overlooked the river doused their curtains with chloride of lime to mask the smell; only then were they incentivized to take real action. At the time, close-quarters living arrangements and poor hygiene were contributing to a rise in illnesses and epidemics. But residents of what was then the world’s largest city believed it was unpleasant smells that directly transmitted contagions such as the plague, chlamydia, and cholera.

Their belief, the miasma theory of disease transmission, had some truth to it—it just wasn’t precise. The smell of stagnant, contaminated water is indicative of a perfect breeding ground for microorganisms that can cause water-borne diseases. But it’s the germs in the water—not the stench emanating from it—that are really the problem, and, at the time, scientists had limited technologies and tools to understand the difference. So they found themselves focusing on solutions that couldn’t actually stop the spread of disease.

Now disease also spreads via Facebook statuses and Google results—not just the droplets from a sneeze or the particles that linger in the air when we forget to cough properly into our elbow crease—and around the world, digital health misinformation is having increasingly catastrophic impacts on physical health. Recent research found that Twitter bots were sharing content that contributed to positive sentiments about e-cigarettes. In West Africa, online health misinformation added to the Ebola death toll. In New South Wales, Australia, where conspiracy theories about water fluoridation run rampant, children suffering from tooth decay are hospitalized for mass extractions at higher rates than in regions with fluoridated water. Over the past several weeks, new cases of measles—which the Centers for Disease Control and Prevention declared eliminated from the United States in 2000—have emerged in places such as Portland, Boston, Chicago, and Michigan; researchers worry that the reemergence of preventable diseases such as this one is related to a drop in immunization rates due to declining trust in vaccines, which is in turn tied to misleading content encountered on the internet. With new tools and technologies now available to help identify where and how health misinformation spreads, evidence is building that the health misinformation we encounter online can motivate decisions and behaviors that actually make us more susceptible to disease.

You might call these phenomena “misinfodemics”—the spread of a particular health outcome or disease facilitated by viral misinformation.

Much of the origin of today’s vaccine hesitancy can be traced to a single, retracted article that met the viral power of the internet. The lead scientist of the original piece was in the process of filing a patent for an alternative measles vaccine, and he led a campaign to link the competing measles-mumps-rubella vaccine to autism. The article he published is now widely recognized to have been the result of serious financial conflicts of interest, unethical data collection (including the lead author paying children for their blood samples during his son’s 10th birthday party), and fraud. His medical license has since been revoked, but the virus his article produced has continued to infect our information channels. The fraudulent study has been referenced as a basis for health hoaxes related to flu vaccines, for misinformed advice to refuse the provision of vitamin K to newborns for the prevention of bleeding, and for modifying evidence-based immunization schedules.

Vaccines are just one part of this story. Researchers led by Brittany Seymour mapped the direct relationship between viral health misinformation and growing advocacy against water fluoridation. Their findings demonstrated that strong ties on digital social networks, galvanized around a severely flawed study of fluoridation, led people to form group identities online that continue to fuel the spread of health misinformation. Misinformation based on discredited studies continues to mutate and spread online—in memes, articles, and videos, through platforms including Pinterest, Instagram, and Facebook. Like the germs running through the River Thames, toxic information now flows through our digital channels.

In the United States, aggregate data seem to imply that vaccination rates are stable. But this optimism may be shortsighted in today’s digital age, where younger populations—future vaccine decision makers, in some states—are becoming sensitized to vaccine misinformation online. For example, diseases such as measles have long been thought to spread in communities with insufficient “herd immunity”—i.e., not enough vaccinated people to prevent the spread of highly infectious disease. Herd immunity is no longer just a matter of quality public-health ecosystems, where vaccinations and antibiotics alone can prevent the spread of disease, but also of quality public-information ecosystems. We now know, for example, that social-media-based rumors made Ebola spread faster—and that when crisis responders adapted their communications strategies, more communities began receiving vital treatment and taking action toward prevention.
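For readers who want the back-of-the-envelope math behind that herd-immunity point (these are standard textbook estimates, not figures from the studies cited here): epidemiologists derive the herd-immunity threshold from the basic reproduction number, $R_0$, the average number of people a single infected person infects in a fully susceptible population:

\[
p_c = 1 - \frac{1}{R_0}
\]

For a highly infectious disease such as measles, $R_0$ is commonly estimated at 12 to 18, giving $p_c = 1 - \tfrac{1}{12} \approx 0.92$ up to $1 - \tfrac{1}{18} \approx 0.94$. In other words, roughly 92 to 94 percent of a community needs to be immune to block sustained transmission, which is why even modest, misinformation-driven dips in vaccination rates can reopen the door to outbreaks.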

And yet our understanding of exactly how digital infections happen remains focused more on symptoms, looking at the number of shares a given vaccine-hesitancy tweet receives, than on some of the underlying causes, such as the digital infrastructure that makes some internet users more susceptible to encountering false information about immunization. Additionally, as the researchers Richard Carpiano and Nick Fitz have argued, “anti-vaxx” as a concept, describing a group or individual lacking confidence in evidence-based immunization practices, creates a stigma that focuses on the person—the parent as a decision maker or the unvaccinated child—and the community. More often, as Seymour has noted, the problem is rooted in the virality of the message and the environments in which it spreads.

Public-health authorities are not explicitly paying attention to the information ecosystem and how it may impact the spread of vaccine-preventable diseases in the near future. When 75 percent of Pinterest posts related to vaccines are discussing the false link between measles vaccines and autism, what does it mean for future herd immunity? And what about when state-sponsored disinformation campaigns exploit the vulnerabilities our systems have already created? Just last week, scientists at George Washington University found that a number of Russian bot and troll accounts on Twitter posted about vaccines 22 times more often than the average user.

To date, many public-health interventions seem to be addressing the outward signs of a misinfodemic by debunking myths and recommending that scientists collect more data and publish more papers. Much of the field, meanwhile, remains focused on providing communications guidelines and engaging in traditional broadcast-diffusion strategies, but not on search-engine optimization, viral marketing campaigns, or social-diffusion approaches for reaching populations. Research demonstrates that public-health digital outreach uses a lot of language and strategies that are inaccessible to the populations it is trying to reach. This has created what the researchers Michael Golebiewski and danah boyd call “data voids”: search terms for which “available relevant data is limited, non-existent, or deeply problematic.” In examining these environments, researchers such as Renée DiResta at Data for Democracy have documented the sorts of algorithmic rabbit holes that can lead someone into the depths of disturbing, anxiety-inducing, scientific-sounding (albeit unvalidated and potentially harmful) content that often profits by selling quick fixes, at a cost.

To its credit, Google has made important progress in this regard. Its search-related guidelines prioritize expertise, authoritativeness, and trustworthiness; now, when you search for something such as “flu symptoms,” you’ll see Harvard- and Mayo Clinic–backed knowledge-graph information panels on the right-hand side, complete with downloadable PDFs for more information. Facebook also says it’s working to address misinfodemics through a new feature that shares additional context for articles, allowing users to click on an article’s image and see links to related articles, maps visualizing where the article has been shared, source information, and related Wikipedia pages.

It’s not just the big platforms working to stop misinfodemics. Our work on the Credibility Coalition, an effort to develop web-wide standards around online-content credibility, and on PATH, a project aimed at translating and surfacing scientific claims in new ways, represents two efforts among many to think about data standards and information access across different platforms. The Trust Project, meanwhile, has developed a set of machine-readable trust indicators for news platforms; Hypothesis is a tool used by scientists and others to annotate content online; and Hoaxy visualizes the spread of claims online.

Even the CDC and the Mayo Clinic maintain Instagram presences, though their collective following is 160,000 people, or 0.1 percent of Kim Kardashian’s follower count. Health advocates such as Jennifer Gunter (“Twitter’s resident gynecologist”), who blogs about women’s health and debunks celebrity-endorsed myths for a broad audience, and the Canadian professor Timothy Caulfield, whose health-video series about extreme remedies around the world was recently picked up by Netflix, are gaining recognition online. Doctors around the world are bridging gaps by borrowing strategies from marketing, and scientists are advocating for collaboration between social influencers and public-health experts.

Misinfodemics can seem devastating. One lesson of urbanization, as 19th-century London shows, is that when people come together, the risk of disease spread increases. We still don’t completely understand why, especially because new evidence changes scientific consensus over time. After London’s Great Stink, researchers found enough evidence to develop a new understanding of disease transmission, updating the dominant idea that smells caused illness to the new germ theory of disease. Ultimately, there was not one solution but an ecosystem of solutions: People started using antiseptics to keep surgical procedures sanitary, taking antiviral medications to treat diseases such as herpes and HIV, participating in community-vaccination campaigns to protect from (and eradicate) diseases such as polio and smallpox, and creating sewage systems separate from drinking-water sources.* Today, though the Thames is still polluted, it is no longer a consistent origin of catastrophic epidemics.

Now we know that disease also spreads when people cluster in digital spaces. We know that memes—whether about cute animals or health-related misinformation—spread like viruses: mutating, shifting, and adapting rapidly until one idea finds an optimal form and spreads quickly. What we have yet to develop are effective ways to identify, test, and vaccinate against these misinfo-memes. One of the great challenges ahead is developing a memetic theory of disease that takes into account how digital virality and its surprising spread can in turn have real-world public-health effects. Until that happens, we should expect more misinfodemics that engender outbreaks of measles, Ebola, and tooth decay, where public-health practitioners must simultaneously battle the spread of disease and the spread of misinformation.


* Due to an editing error, this article previously misstated the type of medication used to treat herpes and HIV.

Nat Gyenes is a health and technology researcher at Harvard’s Berkman Klein Center for Internet & Society and a consultant for the MIT Media Lab.
An Xiao Mina is a research affiliate at the Berkman Klein Center for Internet & Society and was a 2016 Knight Visiting Fellow at the Nieman Foundation for Journalism.