When online harassment is routine, being online might become less of a part of women’s routine. Photograph: Alamy

If tech companies wanted to end online harassment, they could do it tomorrow

Jessica Valenti

The courts may decide that sending threats over social media isn’t threatening enough to be a crime. Silicon Valley needs to step up or lose customers

If someone posted a death threat to your Facebook page, you’d likely be afraid. If the person posting was your husband – a man you had a restraining order against, a man who wrote that he was “not going to rest until [your] body [was] a mess, soaked in blood and dying from all the little cuts” – then you’d be terrified. It’s hard to imagine any other reasonable reaction.

Yet that’s just what Anthony Elonis wants you to believe: that his violent Facebook posts – including one about masturbating on his dead wife’s body – were not meant as threats. So on Monday, in Elonis v United States, the US supreme court will start to hear arguments in a case that will determine whether threats made on social media count as protected speech.

If the court rules for Elonis, those who are harassed and threatened online every day – women, people of color, rape victims and young bullied teens – will have even less protection than they do now. Which is to say: not damn much.

For as long as people – women, especially – have been on the receiving end of online harassment, they’ve been devising mundane and occasionally creative ways to deal with it. Some call law enforcement when the threats are specific. Others mock the harassment – or, in the case of videogame reviewer and student Alanah Pearce, send a screenshot to the harasser’s mother.

But the responsibility for dealing with online threats shouldn’t fall on the shoulders of the people being harassed. And it shouldn’t have to become a question of constitutional law. If Twitter, Facebook or Google wanted to stop the harassment their users receive, they could do it tomorrow.

When money is on the line, internet companies somehow magically find ways to remove content and block repeat offenders. YouTube, for instance, already runs a sophisticated Content ID program that scans uploaded videos for copyrighted material – try to bootleg a music video or post an unofficial Daily Show clip and see how quickly it gets taken down. But one look at the comments under almost any video makes clear there is no comparable screening system for even the most abusive language.
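To make the comparison concrete, here is a minimal sketch of the kind of first-pass screening being argued for: a deny-list match over incoming comments, loosely analogous to Content ID matching uploads against a database of copyrighted works. This is an illustration only, not any company’s actual system – the pattern list, function name and review workflow are all invented for the example, and a production filter would need context, machine learning and human moderators behind it.

```python
# Hypothetical sketch -- not YouTube's, Twitter's or Facebook's real code.
# It shows the shape of a first-pass comment screen: match text against a
# deny-list and hold flagged comments for human review, much as Content ID
# matches uploads against a reference database.

import re

# Assumption: a platform would maintain a far larger, curated and regularly
# updated pattern list; these entries are placeholders for illustration.
ABUSE_PATTERNS = [
    re.compile(r"\bkill\s+you\b", re.IGNORECASE),
    re.compile(r"\brape\b", re.IGNORECASE),
]

def screen_comment(text: str) -> bool:
    """Return True if the comment should be held for human review."""
    return any(pattern.search(text) for pattern in ABUSE_PATTERNS)

if __name__ == "__main__":
    comments = [
        "Great video, thanks for posting!",
        "I am going to kill you.",
    ]
    for comment in comments:
        status = "flagged for review" if screen_comment(comment) else "published"
        print(f"{status}: {comment}")
```

Even this naive version makes the column’s point: the hard part is not detection technology, which copyright enforcement shows these companies already possess, but the decision to apply it to abuse.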

If these companies are so willing to protect intellectual property, why won’t they protect the people using their services?

Jaclyn Friedman, the executive director of Women, Action & the Media (WAM!) – who was my co-editor on the anthology Yes Means Yes – told me, “If Silicon Valley can invent a driverless car, they can address online harassment on their platforms.”

Friedman says the companies simply choose not to: “They don’t lack the talent, resources or vision to solve this problem – they lack the motivation.”

Last month, WAM! launched a pilot program with Twitter to help the company better identify gendered abuse. On a volunteer basis, WAM! collected reports of sexist harassment, and the group is now analyzing the data to help Twitter understand “how those attacks function on their platform, and to improve Twitter’s responses to it”.

But when a company that made about $1bn in ad revenue in 2014 has to rely on a non-profit’s volunteers to figure out how to deal with a growing problem like gendered harassment, that doesn’t say much about its commitment to solving the problem.

A Twitter spokesperson told me that WAM! is just one of many organizations the company works with on “best practices for user safety”. But while Twitter’s rules ban violent threats and “targeted abuse”, the company does not, I was told, “proactively monitor content on the platform”.

When WAM! and the Everyday Sexism Project put pressure on Facebook last year over pages that glorified violence against women, the company conceded that its efforts to deal with gender-specific hate speech had “failed to work as effectively as we would like” and promised to do better.

On Sunday, a Facebook representative confirmed to me that since then, the social network has followed through on some of these steps, like completing more comprehensive internal trainings and working more directly with women’s groups. Harassment on Facebook remains ubiquitous nonetheless – and even the most basic functions to report abuse are inadequate.

So if those who face everyday online harassment can’t rely on the law, and if social media companies are reluctant to invest in technologies to scrub it from their platforms, what then?

Emily May, the executive director of the anti-street harassment organization Hollaback, told me what many women feel: “I don’t want to be on YouTube or Twitter if every time I open up TweetDeck I see another rape threat.”

Do you?
