The Human Toll of Protecting the Internet from the Worst of Humanity

Henry Soto worked for Microsoft’s online-safety team, in Seattle, for eight years. He reviewed objectionable material on Microsoft’s products—Bing, the cloud-storage service OneDrive, and Xbox Live among them—and decided whether to delete it or report it to the police. Each day, Soto looked at thousands of disturbing images and videos, which included depictions of killings and child abuse. Particularly traumatic was a video of a girl being sexually abused and then murdered. The work took a heavy toll. He developed symptoms of P.T.S.D., including insomnia, nightmares, anxiety, and auditory hallucinations. He began to have trouble spending time around his son, because it triggered traumatic memories. In February, 2015, he went on medical leave.

This story is laid out in a lawsuit filed against Microsoft, late last year, by Soto and a colleague named Greg Blauert, and first reported by Courthouse News Service. Soto and Blauert claim that the company did not prepare them for the stress of the job, nor did it offer adequate counselling and other measures to mitigate the psychological harm. Microsoft disputes Soto’s story, telling the Guardian in a statement that it “takes seriously its responsibility to remove and report imagery of child sexual exploitation and abuse being shared on its services, as well as the health and resiliency of the employees who do this important work.”

The lawsuit offers a rare look into a little-known field of digital work known as content moderation. Even technology that seems to exist only as data on a server rests on tedious and potentially dangerous human labor. Although algorithms and artificial intelligence have helped streamline the process of moderation, most technology companies that host user-generated content employ moderators like Soto to screen video, text, and images, to see if they violate company guidelines. But the labor of content moderators is essentially invisible, since it manifests not in flashy new features or viral videos but in a lack of filth and abuse. Often, the moderators are workers in developing countries, like the Philippines or India, or low-paid contractors in the United States. There’s no reliable figure for how many people are employed in this line of work, but it’s certainly in the tens of thousands. Content moderators are recent college graduates and stay-at-home mothers, remote workers in Morocco and employees sitting in giant outsourcing companies in Manila. Their numbers will only increase, as more of our lives are lived online, requiring more and more policing. Whenever I share an article about content moderation on Twitter, I’m struck by the number of people who tell me that they’ve done the work and how psychologically difficult they found it.

Regardless of the merits of Soto’s specific case, daily exposure to the worst of humanity takes an undeniable toll. One former moderator for Facebook described it to me: “Think like that there is a sewer channel and all of the mess/dirt/waste/shit of the world flow towards you and you have to clean it.” A former moderator for YouTube told me that constant exposure to brutal combat and animal-abuse videos sent him into a depression. Studies that examine the impact that exposure to disturbing content has on moderators are rare, but there have been a number of studies of law-enforcement officers who investigate computer crimes. One such study, conducted by the U.S. Marshals Service among six hundred employees of the Justice Department’s Internet Crimes Against Children task force, suggested that a quarter of the investigators surveyed displayed symptoms of secondary traumatic-stress disorder, which is akin to P.T.S.D. but is caused by indirect exposure to trauma.

Tech companies don’t like to talk about the details of content moderation, so it’s difficult to judge how well they’re caring for the psychological health of moderators. Silicon Valley’s optimistic brand does not fit well with frank discussions of beheading videos and child-molestation images. Social-media companies are also not eager to highlight the extent to which they set limits on our expression in the digital age—think of the recurring censorship controversies involving deleted Facebook pages and Twitter accounts. In 2013, an internal Facebook moderation document was leaked, revealing that moderators were instructed to flag all attacks on Kemal Atatürk, the founder of modern Turkey. Kurdish users critical of the Turkish government protested the company’s actions. Leaving the moderation process opaque offers a degree of flexibility and plausible deniability when dealing with politically sensitive issues.

Moderation has become an especially charged subject in the past year, as the tumult accompanying the Presidential campaign played out on social media. Both Facebook and Twitter were involved in high-profile controversies over their moderation practices, or, rather, their lack thereof. Facebook caught flak from Barack Obama for its role in spreading fake news, while Twitter was criticized for a surge in hate speech and targeted harassment, much of it perpetrated by members of the alt-right, which found a fertile breeding ground on the site.

Tech companies like to envision themselves as neutral platforms, efficiently governed by code and logic. But users want these companies to be flexible and responsive to their needs. They want something more than terms of service handed down from policy teams, or canned responses to an abuse report, which then disappears into a bureaucratic maze. They want a human relationship with the services that play such an important role in their lives. That will never be possible if those services dehumanize the workers who protect them.