6 Experts on How Silicon Valley Can Solve Online Harassment

Silicon Valley is all about using tech to solve problems. Yet the ugly reality of online harassment has remained intractable.
From left to right: Nadia Kayyali, Chinyere Tutashinda, Adria Richards, Laura Hudson, Anil Dash, and Del Harvey. Photograph: Christie Hemm Klok

Silicon Valley is all about using tech to come up with solutions to gnarly problems. Yet the ugly reality of online harassment has remained intractable. The Internet, which has so amplified the voices of women, minorities, and LGBT folk, is still very much a free-fire zone for those who would shame, silence, or abuse them. A 2014 study by the Pew Research Center found that 25 percent of 18- to 24-year-old women have been the target of online sexual harassment. Last year the issue erupted in the mainstream media with Gamergate. The online movement targeted a female game developer, making accusations about her sexual life and publishing her address and phone number, prompting her to move out of her home. In September, WIRED convened a roundtable of people deeply involved in the issue to discuss what it would take to produce lasting change.

This conversation has been edited for clarity and space constraints.

wired: Defining harassment can be really complicated. Del, you’ve said before that’s a challenge for you and Twitter.

Del Harvey: At 140 characters, there’s a lot of context that you don’t necessarily have when you look at a tweet. Understanding what someone really meant can be challenging. You can see an account saying, “Hey, bitch,” to another account, and that could be a friend saying hello to another friend or it could be someone being abusive. And a third possibility, which I have in fact seen, is someone who’s role-playing as a dog. [Laughter.] So we look at the ways that people interact more than at the content or the words themselves.

Anil Dash: A lot of people have this knee-jerk desire to simplify the problem and think that harassment is if you do x, y, or z. That just ignores the context. To make a judgment, you need information that technology is very bad at capturing.

Anita Sarkeesian: We need to broaden the definition of online harassment and abuse. For example, someone will post a YouTube video that defames me, and then thousands of people will reply to that video and tweet at me “You liar” or “You dumb bitch.” That’s not a threat, but it’s still thousands of people coming after me, right?

Adria Richards: There’s one thing about harassment that’s clear when you monitor traffic: It’s consistent in frequency. So if you take a period of time and then look at the number of incidents or interactions within that, it’s really clear that someone’s being attacked by a lot of people. Companies should develop algorithms and automated processes for detecting, evaluating, and responding to that. I always point out that Google is very concerned about click fraud, and they have processes to identify click fraud.

Dash: It’s a business problem to them.

Richards: I’d like online harassment and abuse to become a business issue too, because people are starting to compare social networks to cities that aren’t safe to walk anymore. Let’s incorporate antiharassment features when we’re building our platforms, to deter that behavior.
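
[A minimal sketch of the frequency-based detection Richards describes: count mentions of a target account within a sliding time window and flag unusual spikes. The window size, threshold, and names below are illustrative assumptions, not any platform’s actual system.]

from collections import deque
from datetime import datetime, timedelta

class MentionSpikeDetector:
    """Flags a target account when mentions of it spike within a time window."""

    def __init__(self, window=timedelta(hours=1), threshold=50):
        self.window = window        # look-back period (assumed value)
        self.threshold = threshold  # mentions per window that counts as a pile-on (assumed)
        self.mentions = {}          # target handle -> deque of mention timestamps

    def record(self, target, when):
        """Record one mention of `target`; return True if the volume looks like an attack."""
        q = self.mentions.setdefault(target, deque())
        q.append(when)
        cutoff = when - self.window
        while q and q[0] < cutoff:  # drop mentions older than the window
            q.popleft()
        return len(q) > self.threshold

# Example: a burst of mentions within a few minutes trips the detector.
detector = MentionSpikeDetector()
now = datetime.now()
flagged = any(detector.record("@target", now + timedelta(seconds=i)) for i in range(60))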

Dash: For 15 years I’ve had the same conversation over and over, about how we can be much more mindful of the effects of the way systems are designed. You get a new app, it’s made by kids who are 22 years old, and they weren’t around 10 years ago when the same cycle happened before. There’s no body of knowledge they’re learning from. There’s no ethics curriculum in most computer science programs in this country.

Harvey: People often start with the best intentions, and then all of a sudden things get a lot bigger. How do you take your policies and philosophies and make them scale? That’s one of the biggest challenges that we’re still very much working on.

Richards: There are communities that are doing it just fine. On Metafilter, for example, people have totally elevated, helpful conversations, and there’s no name-calling.

Dash: There’s a site for programmers, Stack Overflow, and full disclosure, I’m on the board. People on Stack Overflow have religious wars about programming languages, whatever. But the site supports anonymity and pseudonymity. And there isn’t a lot of gendered abuse. You go to YouTube videos about the exact same topics and people are being horrible. You make a set of choices early on about how you build the social dynamics, and you set expectations about what’s not going to be tolerated. Stack Overflow has really good tools. Moderators are people from the communities relevant to the discussions and are elected into authority roles. There’s a high ratio of moderators to users, and rules are strictly enforced. And it’s a big site. Its network, Stack Exchange, is within the top 50 most visited sites in the US on Quantcast.

wired: What responsibility do platforms have to their users in terms of protecting them from abuse?

Richards: After a threat, I reached out to someone at a social media company. They connected me to one of their security people. I emailed them, saying, “I’m very concerned I’m going to be murdered in the next two weeks.” They asked me to file a ticket, so I did. I didn’t hear back. So I got another name to contact and sent a similar email to someone else. They also asked me to file a ticket. Thankfully I’m still here, so I wasn’t murdered. Yay.

Dash: Ten years ago when I was building social tools, when people behaved abusively, I was the guy saying, “We believe in free speech, and people are going to be jerks, and it’s not our fault.” I didn’t get it. And that understanding took me 10 years. I mean, I’ve been doxed by people using the tools that I built.

wired: Doxed, meaning someone maliciously published private information about you on the Internet. You raise an interesting point—that tools built for a good purpose can be misused.

Harvey: That is something Twitter puts a ton of time and effort into. When we first made it possible to upload photos, we decided to strip out metadata, because a ton of these images are taken via your smartphone or your digital camera. And guess what, your exact location is often in the metadata. I didn’t want us to potentially put people in danger. I refer to the work that my department does as “catastrophization”: What is the worst thing that could go wrong? Let’s work backward from that, to see what protections we can put in place to try to minimize it. Honestly, if people are doing startups or trying to get something off the ground, please don’t hesitate to reach out. I’m happy to talk to you about what I’ve learned, and what you should and shouldn’t do, and what will hurt a lot when it goes horribly wrong.
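
[As a rough illustration of the metadata stripping Harvey describes, the sketch below re-encodes an uploaded image from its raw pixels so EXIF tags, including GPS coordinates, are dropped. It uses the Pillow imaging library as an assumed example; it is not Twitter’s actual upload pipeline.]

from PIL import Image  # Pillow, assumed here as an illustrative imaging library

def strip_metadata(src_path, dst_path):
    """Write a copy of the image with no EXIF metadata (including GPS location)."""
    with Image.open(src_path) as img:
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))  # copy pixel data only, not metadata
        clean.save(dst_path)

# Example: strip_metadata("upload.jpg", "upload_clean.jpg")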

Nadia Kayyali: I think a perfect example is Facebook’s real names.

wired: You’re referring to their policy of requiring users to register the name they are known by.

Kayyali: Advocates who represent the groups that Facebook is supposedly protecting with the real-names policy have repeatedly said that, no, real names are not the most important thing when it comes to protecting vulnerable groups like trans people and domestic violence survivors from harassment and violence. In fact, for some people—like people trying to avoid a stalker—it’s not being allowed to use a pseudonym that’s the problem.

Sarkeesian: We need to protect and value pseudonymity and anonymity under certain circumstances.

wired: Online harassment disproportionately affects women, people of color, and LGBT people. Can we talk about that?

Chinyere Tutashinda: I work within the social justice movement, and there’s no one, especially in the black community, who doesn’t expect harassment online. It’s just replicating what happens in the real world, right? How do we make other people know and care?

Richards: I have been on Twitter since 2008. I was never called the N-word until the mob went after me in 2013.

Tutashinda: That’s where security plays a big role. At BlackOUT Collective, we helped coordinate a series of actions this winter. Afterward we got severely trolled, like for hours and hours and days and days. At one point an ex–police officer used our action’s hashtag to tweet a picture of himself pointing a gun at the camera with a caption that said, “Move along.” That line between what’s real and what isn’t—how far you are willing to take this—becomes really scary. The level of racism and visceral hate is astonishing.

Sarkeesian: The thing is that oppression didn’t start with the Internet. Racism and sexism and transphobia and homophobia have been around for a long time. The activist movements before us created real structural changes that forced our communities to change, so that it wasn’t acceptable to say racist slurs in front of other people. The civil rights movement didn’t persuade every white person to stop being racist; it forced people to behave differently. We can’t change everybody’s minds individually. But we can make it so that they can’t come after people as easily.

wired: What is the biggest stumbling block? Is it getting companies and platforms to recognize the severity of the problem, or is it getting them invested in the solutions?

Dash: It depends on the company. I think Twitter broadly gets it. I think the first step is literacy. The people who are in power are not the people who are marginalized enough to get attacked, and they tend to think, “Why don’t you ignore them?” They can’t understand this is an organized campaign trying to go after my income, my line of work, my family, to put me in danger. There’s a playbook: Here’s how you take somebody’s life apart.

Harvey: There’s still a strong narrative that “online is not real life.”

Kayyali: There’s still the idea, for a lot of people, that the Internet is this special place where we go and suddenly we’re not people of color, we’re not trans. But we’re now seeing that it’s a place where the differences matter. Free speech as an excuse for bad behavior has been conflated with the idea that free expression is an important value.

wired: Nadia, you’re someone who has been harassed and doxed and yet who believes very strongly in free expression. Those two things are so often in conflict. How do you reconcile them?

Kayyali: In terms of the actual structure of the Internet, these are things that EFF is yelling about all the time. Sometimes it feels like we’re off in a corner saying, “Hey, your domain registration requires people to put in their address. That doesn’t make any sense. All of this information gets sold to data brokers. That’s such a good way for people to get doxed.” We were really excited to see Twitter creating block lists. There may be less agreement on the fine points of deciding what speech is and isn’t OK. But I think that’s much smaller than the areas that we do agree on.

wired: Most of our conversations about online harassment tend to focus on people in the United States. What are we missing in terms of a broader global perspective on online harassment?

Kayyali: At Facebook, they have the platform in regional languages, but they have very limited regional language support—people who speak it—to deal with complaints.

wired: This allows authoritarian governments to hire troll armies to trick Facebook’s algorithms into taking down the accounts of political dissidents by making it seem like the dissidents are violating Facebook policies.

Kayyali: Harassment at a global level is often political. It’s the Free Syrian Army versus Bashar al-Assad’s paid Internet commenters. It’s attack squads in Vietnam that are supposed to get people kicked off Facebook for supposed violations of the real-name policy, because they’re writing unpopular things. It’s incredibly important to expand who we’re thinking of when we think about the unintended consequences of our policies.

Harvey: Yes. Twitter recently introduced a couple of different identification paths for accounts. We wanted to make sure that we weren’t unduly putting people at risk. For example, we made sure that if you couldn’t provide a phone number, there were other options. Because outside the US, if the telecom is directly connected with the government, a phone number can lead the authorities to someone who’s an activist or a dissident or a whistle-blower.

wired: Are there any other steps companies could take to discourage online harassment?

Harvey: It’s a challenge because the same sort of tool that you develop to help people can potentially be repurposed to target people. Sometimes we hear suggestions like, just give users a way to see what accounts are involved and identify the ones that are the primary drivers and that have the most clout, so they can be reported. And I’m like, yes, but what if someone took that same tool and used it against a marginalized group to identify who to target in order to harm that group the most?

Richards: Just like there are bounties for finding security flaws, there could be bounties for effective antiabuse tools. There needs to be a value on this work.

Dash: Almost nobody starts coding this stuff unless they’ve been harassed themselves, right?

Richards: There could be little pop-up warnings: “Hey, a lot of people have reported they don’t like receiving this word. Do you still want to post this?”

Kayyali: Can we shame people with pop-ups? Because shame seems to work a lot better. “Hey, did you know you’re being racist right now?” [Laughter.]

Richards: Education could work too; maybe provide a little link.

Harvey: The majority of our users are on mobile, which means that we have very limited screen space.
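
[A toy version of the pre-post warning Richards suggests: check a draft against terms that recipients have frequently reported, and ask the sender to confirm before posting. The term list and prompt text are invented for illustration; no platform’s actual word list is implied.]

# Hypothetical list of terms that many recipients have reported (placeholder values).
FREQUENTLY_REPORTED_TERMS = {"insult_a", "insult_b"}

def needs_confirmation(draft):
    """Return True if the draft contains a term many recipients have reported disliking."""
    words = {w.strip(".,!?:;").lower() for w in draft.split()}
    return bool(words & FREQUENTLY_REPORTED_TERMS)

if needs_confirmation("you insult_a"):
    print("Hey, a lot of people have reported they don't like receiving this word. "
          "Do you still want to post this?")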

Richards: I’ve come up with various app ideas—for example, helping someone who is nontechnical to document harassment.

Kayyali: There should really be one site that you can go to, just like the Do Not Call Registry, and you can fill it out. That seems like a fantasy now for anyone who’s gone through the Crash Override Network, the guide to protecting yourself from doxing.

Tutashinda: There is a lack of diversity in who’s creating platforms and tools. Too often it’s not about people, it’s about how to take this tool and make the most money off it. As long as people are using it, it doesn’t matter how they’re using it. There’s still profit to be made from it. So until those cultures really shift within the companies themselves, it’s really difficult to have structures that combat harassment.

wired: Do we need better laws?

Dash: If you talk to members of Congress, they’re like, “Yes, online abuse is bad.”

Kayyali: They have a very dangerous knee-jerk reaction.

Sarkeesian: When I think about solutions, I think about it in a three-pronged approach: a cultural shift, tech solutions, and then the legal aspect. There are already laws against this stuff. Sending someone a death threat is already illegal, so having it taken seriously is the third prong.

Tutashinda: In terms of Black Lives Matter and the broader movement, I know that the police, the FBI—they’re actively watching us on social media. So they see the threats. They see the level of harassment and are not doing anything.

Kayyali: There is some work to be done by groups like mine with law enforcement, so that they actually understand these technologies. What we’re seeing is an analogue of rape culture and a racist criminal justice system. So things that happen to women, things that happen to people of color, are not taken seriously. Until we address those bigger issues, it’s really hard to have a legal response.

Dash: The leaders of the abuse communities are very legalistic. They read every detail of exactly how far they can go and exactly how they can phrase this and how they can structure it so it’s technically legal. The things that are impacting the most people are technically allowed. That’s where the industry and activists can come together and have a lot more impact.

Tutashinda: It requires a cultural shift. We’ve been able to shift culture so that people don’t say those things out loud. We have to do the same thing online. I have friends who have gotten horribly harassed on Twitter, and the only people who say anything or respond are individuals they know. It’s not the rest of the users who are following the conversation and saying, “Oh my God, have you seen what happened on Twitter today?” As opposed to, “Hey, that’s not OK, we need to flag this person.” Or on YouTube, people don’t reply to the level of hate or body shaming or harassment that they see. We just kind of let it go and expect the platform or the person being harassed to do all the work. We have to create a society that says it’s not OK to do it.

Richards: The most significant changes would come from the tech companies that run these platforms. If they were to increase diversity on their advisory boards and their safety teams, that would help inject more ideas.

Tutashinda: Diversity plays a huge role in shifting the culture of organizations and companies. Outside of that, being able to broaden the story helps. There has been a lot of media on cyberbullying, for example, and how horrible it is for young people. And now there are whole curricula in elementary and high schools. There’s been a huge campaign around it, and the culture is shifting. The same needs to happen when it comes to harassment. Not just about young people but about the ways in which people of color are treated.

Dash: Digital direct action works, in terms of holding the social networks accountable. The people who run them, they hear it, they pay attention to it, they’re embarrassed by it. If you care about free speech, then you have to protect people from being silenced by abuse.