Alex Chrum can easily recite the most common sexist, racist and homophobic slurs. She’s privy to some of the most hateful language you’ve ever heard. And for about a year, she read it over and over again, every single day.
Chrum, 25, became fluent in such viciousness thanks to her job moderating online commentary and trolling. As a content specialist for Debate.org, a site that invites users to discuss controversial topics, Chrum had to wade into the dark, sometimes poisonous muck of Internet comments. Each day, she sifted through 50 to 200 questionable posts, trying to decide what to publish and what to let rot.
“It was honestly the most emotionally exhausting thing I’ve ever done here at [work],” Chrum told Mashable.
Sitting at her desk in Swansea, Illinois, she alternated between relishing and loathing her role. Many of the site’s discussions involved vibrant, positive conversations about important political and social issues, and Chrum thrived on encouraging debate. Yet, as a moderator, she watched in real time as some users built their arguments around slurs, launching ad hominem attacks based on gender, race and sexuality.
Chrum, who majored in English literature and minored in gender studies, occasionally seethed with frustration. Before she took the job, she had lived in something of a bubble; she didn’t know that many people still believed Satan created homosexuals and that God made women to serve men. As a moderator, though, she couldn’t simply acknowledge this reality and move on. Her job required her to think deeply about every offensive idea, each foul word and even the veiled threats.
Most of us don’t see this version of the Internet. Unless you’re the target of an attack by so-called trolls, avoiding the dregs of social media is rather simple. You click away from a thread that turns nasty, unfollow a friend who says something reprehensible, or avoid sites infamous for their Lord of the Flies approach to social interactions. People like Chrum are the buffer between innocent users and thrill-seekers who want to test the boundaries of common decency. But trying to protect the rest of us online can be a personal sacrifice -- one that drains the mind and spirit.
“It’s definitely taxing for one single individual to do something like that all day, every day,” Chrum said. “It’s not feasible.”
The dilemma plagues social media sites of all sizes. Lately it seems that not even a week passes without a high-profile incident in which anonymous users ruthlessly harass someone on a social platform.
In August, for example, Zelda Williams, the daughter of the late Robin Williams, abandoned Twitter and Instagram after users reportedly sent her photoshopped pictures of her father’s dead body.
I'm sorry. I should've risen above. Deleting this from my devices for a good long time, maybe forever. Time will tell. Goodbye. — Zelda Williams (@zeldawilliams), August 13, 2014
Just a few weeks later, Anita Sarkeesian, the founder of a web series that analyzes the representation of women in video games, was again threatened with rape and murder after her latest episode aired. This time, though, an online attacker apparently discovered the location of her home, prompting Sarkeesian to involve the authorities.
Some very scary threats have just been made against me and my family. Contacting authorities now. — Feminist Frequency (@femfreq), August 27, 2014
On Jezebel, the Gawker site dedicated to women’s news, anonymous users relentlessly seeded the comments with images of violent pornography. The months-long problem was only recently resolved.
We don’t know much about the inner lives of those who vet the Internet. We take for granted that they try to shield us from the worst content, but we rarely wonder aloud how doing so might affect their happiness and mental health.
Their role in regulating the Internet is largely undervalued. Moderators are but a small part of the machinery that drives the web, and the infrastructure we’ve built to make online communities safe is, at best, a promising improvisation and, at worst, embarrassingly broken. The attacks launched against Williams, Sarkeesian and Jezebel in the span of a few weeks make that much clear.
And while we know the problem of online aggression and trolling is massive, we simply don’t yet understand the psychological effects of combating the onslaught.
Elizabeth Englander, a professor of psychology at Bridgewater State University who focuses on cyberbullying in her research, said that’s because we haven’t yet specifically studied what it’s like to moderate abusive behavior and speech online.
A useful comparison, she said, might be to think of moderators as the digital equivalent of police officers or emergency room nurses. Moderators don’t witness the same visceral life-and-death scenes, but the nature of their job exposes them to aspects of the human experience that most of us try to avoid.
On the Internet, that can include identifying child pornography or reviewing rape or death threats. Over time, Englander added, most professionals find a way to break their lives into pieces, separating themselves from the trauma they see, and maybe even becoming desensitized to it.
But there is also considerable research that describes what happens when first responders experience mental anguish and how to treat it. There are no such guidelines for moderators and no established standards for who should be permitted to take on this responsibility, or what kind of training is required to do so.
Though their stories are rarely heard, there is a class of tech workers whose lives are directly affected by these unanswered questions. Facebook, for example, receives about one million reports of abusive content every week. Staff members in Menlo Park, California; London; and Hyderabad, India, investigate these claims 24 hours a day. On Instagram, users upload an average of 60 million photos per day, and the company employs moderators to ensure that the service isn’t used to “defame, stalk, bully, abuse, harass [or] threaten” other users. Twitter has a “trust and safety” team that reviews the aggressive behavior of users.
At Reddit, where moderators volunteer to monitor the site’s many communities, becoming a target for harassment is not uncommon. A recent thread on the topic generated dozens of responses from moderators who regularly receive hate mail and death threats. Some moderators enjoyed taking on trolls, but the group also created its own private “therapy [subreddit]” where they could share their stories.
At Debate.org, the opinion section -- where users could casually discuss their views rather than engage in formal debate -- began as something of an experiment. Chrum, the site’s community manager at the time, received only basic training to enforce a code of conduct, which forbids certain types of behavior: no harassing or stalking; no profanity or swear words; no personal attacks against other members; no racial, sexual or religious slurs; and no threats of violence, blatant or implied.
As the opinion section grew, comments flooded one of Chrum’s two computer monitors. She would put on her headphones, play Led Zeppelin or Lil Wayne depending on her mood, and dive in.
She could quickly reject some comments using the site’s code of conduct, but others straddled a faint line between provocative commentary and offensiveness. If a user, for example, states that the world would be a better place without homosexuals, does that count as a violent threat or slur, or is it simply a moral perspective? If someone asks whether it’s ever acceptable for a man to strike a woman, and a user replies that the woman might deserve it, should that be considered appropriate for the “intellectual and thought-provoking conversation” the site says it values?
The process of interpreting a user’s intent wore Chrum down. As she tried to plumb the psyche of a stranger, Chrum would occasionally glance down at a piece of artwork taped to her computer monitor. The small card, decorated with her favorite soothing colors of purple and green, read: “Keep calm and be nice to people.”
The slogan became Chrum’s mantra, but it was hard to follow at times. She had to exercise restraint, no matter how badly she wanted to confront a user over hateful comments. The cumulative effect of poring over the comments was like an eclipse, casting a shadow over her days.
“Most of the stuff [on the site] is positive,” she said. “I see the negative more because I have to personally make the judgment calls. I’d go home at the end of the day, many days, and I’d be in a depressed mood.”
The comments rattled her during the day and then consumed her at home. She worked long days and even checked the site during the weekend. Sometimes she meditated, trying not to think about the comments at all. At work, she’d turn to a few co-workers in exasperation after reading a particularly cruel comment.
“We can kind of share that misery together,” she said. “They give me that reassurance that sometimes people say things on the Internet that they would never say to anyone in person.”
And that seems to be the crux of this problem: While we wouldn’t tolerate such exchanges in the workplace or at a school lunch table, there is a loud, unwavering contingent that defines the Internet’s more vile outbursts as a form of speech that should be protected.
Whitney Phillips, a communication lecturer at Humboldt State University and an expert in digital culture, disagrees. Unequivocally defending harassment, she said, can immediately silence the voices of those being targeted, which often belong to women and other long-marginalized groups.
“Whose side do you take?” Phillips asked. “Do you take the aggressor’s side? Or align yourself with people who could contribute to the community in a more productive way?”
As Chrum contemplated versions of these difficult questions, she also remained mindful that her personal views had the potential to subtly affect her decision-making. She describes herself as anti-racist, feminist and a vocal supporter of the lesbian, gay, transgender and queer communities.
“What’s offensive to me isn’t necessarily going to be offensive to someone else,” she said. On the other hand, she worried that users might read a borderline abusive comment about women or homosexuality or race and be motivated to harm themselves -- or someone else.
In the midst of this philosophical and ethical whiplash, she turned again and again to the idea that the comments had to somehow promote “meaningful and thoughtful” debate. “How many of these people are essentially trying to troll and get attention and get people worked up?” she would ask herself. “If you’re not bringing anything legitimate to the table, your content probably won’t pass our code of conduct and won’t be published.”
Chrum estimates that she rejected 50% of the comments on these grounds or for explicit violations of the site’s guidelines.
The emotional and mental strain of making these decisions would have persisted had Chrum’s role on the site not fundamentally changed.
Last year, the site adopted a moderation model developed by its sister company, CrowdSource.com.
CrowdSource, whose clients include Overstock, Klip and Staples, approaches the amorphous chore of moderation by using contracted workers -- and an algorithm to monitor the quality of their decisions. The company draws from a pool of available workers who receive some training in guidelines for acceptable content, and then tests them with sample scenarios. Their scores are measured against how CrowdSource’s most-trusted contractors responded, and the trainees in turn earn a ranking for the reliability of their judgment.
Multiple contractors may review a single piece of content for a client, so the decision of whether to publish it is typically made by the algorithm rather than by a single person. CrowdSource workers can also opt out of assessing items like pornography or inflammatory language.
Using an algorithm alleviates the burden on moderators while offering predictability to companies still grappling with how best to safeguard their users from offensive content.
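The exact scoring CrowdSource uses isn’t public, but the general idea described above -- measuring trainees against the answers of the most-trusted contractors -- is simple to illustrate. The Python sketch below is only an illustration built on assumed names and sample data; it does not reflect CrowdSource’s actual implementation.

```python
# Hypothetical sketch of the reliability scoring described above: a trainee's decisions
# on sample scenarios are compared against those of the most-trusted contractors.
# Function names, sample data and the scoring rule are illustrative assumptions only.

def reliability_score(trainee_answers, trusted_answers):
    """Return the fraction of test items where the trainee matched the trusted consensus."""
    matches = sum(
        1 for item, decision in trainee_answers.items()
        if trusted_answers.get(item) == decision
    )
    return matches / len(trainee_answers)

# Three made-up moderation scenarios, each answered "publish" or "reject".
trusted = {"post_1": "reject", "post_2": "publish", "post_3": "reject"}
trainee = {"post_1": "reject", "post_2": "publish", "post_3": "publish"}

print(f"Trainee reliability: {reliability_score(trainee, trusted):.2f}")  # prints 0.67
```

How such a score would translate into the ranking CrowdSource assigns is likewise an assumption; the point is only that the trainee’s judgment is graded against people the company already trusts.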
“It is rare that a client comes to us with a set of guidelines that either have been developed professionally or that they have confidence in,” said Stephanie Leffler, CEO of CrowdSource.com.
Chrum welcomed the transition. She gave the contractors guidelines for handling harassment and warned them that the debates were frequently heated. For now, Debate.org doesn’t use an algorithm but a majority-rules model, in which a few moderators assess a post and vote on whether it should be published.
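That majority-rules step reduces to a simple tally. The short Python sketch below is only an illustration with hypothetical names; it is not Debate.org’s actual code.

```python
# Illustrative majority-rules vote: a post is published only if more than half of the
# assigned moderators approve it. Function and variable names are hypothetical.

def should_publish(votes):
    """votes: one boolean per moderator, True meaning 'approve'."""
    return sum(votes) > len(votes) / 2

print(should_publish([True, True, False]))   # True: two of three approve
print(should_publish([True, False, False]))  # False: the majority rejects
```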
Now Chrum spends just five to 10 hours per week working on the site, freeing her up to work on less intense projects for CrowdSource. “I come home at the end of the day in a much better mood,” she said.
This is just one solution for a single site and moderator. It also doesn’t reflect the work that Debate.org’s core group of 50 to 100 dedicated members do to informally regulate the site, contacting users when they come dangerously close to violating the code of conduct.
Chrum argues that sites must start with clear guidelines and expectations for membership; those who don’t obey the rules should be censored when necessary or excommunicated altogether.
Of course, this is much more difficult to execute on sites where membership is global and sprawling. Twitter, for example, has 271 million monthly active users, and is known as a platform where trolling and aggressive behavior occurs with alarming frequency. Even when users are reported for abusive behavior and subsequently punished by the service, they can easily return under a different guise -- a problem that most social sites face.
Despite the sheer volume of the problem, some sites are adopting new policies aimed at creating a different Internet culture. In August, the aggregator site Fark.com added misogyny to its list of offenses that could result in a deleted post. At the same time, Ask.com acquired the international question-and-answer site Ask.fm, and immediately began overhauling its harassment policies.
For people like Chrum, new rules don’t necessarily mean moderators will see less awful content, and that’s why protecting those workers from potential harm is an essential and logical next step.
Chrum knows the job took an immeasurable toll on her, and yet she found meaning in her exhausting journey as a moderator.
“Overall, it has improved me as a person because I’ve learned not to take negative things so personally,” she said. “I would go home and dwell on a comment, and anymore, it’s channeling that into more productive avenues. Let’s have a meaningful conversation about this, and then let’s move on.”
And yet, that growth was a bittersweet bargain.
“When you see things that you feel very emotional about, it sticks with you, even if you don’t know the person,” she said. “When it’s just a faceless, person-less comment, and it’s comment after comment after comment, it makes you lose faith in humanity.”