It's Too Easy for Trolls to Game Twitter's Anti-Abuse Tools

In a report published today, a group tapped by Twitter to handle harassment claims found that the company's systems for policing abuse are easily gamed.

Twitter CEO Dick Costolo has said that kicking trolls off the platform is one of his top priorities this year, and the company has recently changed some of its policies to accomplish that goal. But a new study from a group called Women, Action & the Media shows just how tough a task eradicating harassment will be.

Last fall, Twitter appointed Women, Action & the Media, better known as WAM!, as one of its authorized reporters. These are groups allowed to report and identify harassment on behalf of others, and, at least in theory, Twitter is supposed to prioritize their reports. Over the course of three weeks that fall, WAM! collected a total of 811 harassment reports and escalated 161 of them to Twitter. Meanwhile, the group analyzed every report it received in hopes of pinpointing some of the leading drivers of harassment on Twitter. In the lengthy report published today, WAM! found that Twitter's systems for policing harassment are easily gamed.

According to the report, one key problem is that Twitter requires users to submit a link to the offending tweet before an investigation can begin. Knowing this, harassers often delete disparaging tweets shortly after sending them, a tactic WAM! calls "tweet and delete." Giving users a way to prove they're being harassed with, say, a screenshot or some other form of authentication could help Twitter catch bad actors even after they've covered their tracks.

Another trend WAM! identified is so-called "dogpiling," in which a single target is flooded with a wave of harassment from many different accounts. According to WAM!, these users need a way to report several accounts at once and have the issues resolved together.

Complicating matters for Twitter, however, is that WAM! also detected a substantial amount of so-called "false flagging" and "report trolling," in which people file fake harassment claims just to gum up the works for reviewers and make it tougher to address legitimate complaints. Another wrinkle: some 57 percent of reports came from bystanders rather than the targets themselves, adding yet another hurdle of verification.

For Twitter, these observations are a good place to start. And yet, it's important to note that this study was far from scientific. After all, it depended on Twitter users reporting issues to WAM!, meaning it was conducted on a self-selected group. And while 811 reports is substantial, it's a fraction of the feedback that Twitter and other social networks receive overall. If WAM! could detect these trends within such a small sample, it's probably safe to say that Twitter, which has seen orders of magnitude more evidence, is already well aware of these issues, along with many others not included in the report. Now, Twitter just has to find more ways to solve them.