Twitter Has New Rules for Violent and Sexual Content


Twitter’s downward spiral into a platform where abuse thrives has been well-documented over the years, but harassment on Twitter is in the news again this week because the company suspended actress Rose McGowan after she tweeted about sexual abuse in Hollywood. In response, CEO Jack Dorsey promised to introduce stricter rules against harassment—and it looks like some of those rules just leaked to Wired.

Dorsey promised new rules covering “unwanted sexual advances, non-consensual nudity, hate symbols, violent groups, and tweets that glorifies violence,” and it looks like most of those are addressed in an email from Twitter’s head of safety policy.


The email, obtained by Wired, shows that Twitter will require deeper account reviews in response to certain types of harassment reports. Users have often complained that their reports of abusive behavior are dismissed too easily. To improve on this, the email says that, in cases where a user posts non-consensual nudity, the company will conduct “a full account review.” If the account is dedicated to posting non-consensual nudity, it will be suspended immediately.


It sounds like Twitter used to require a report from someone pictured in a non-consensual image before it would remove the content—but that’s also changing, the email says. Twitter’s definition of non-consensual nudity will be expanded to include “upskirt imagery, ‘creep shots,’ and hidden camera content” and Twitter will become less tolerant of pornography that mimics these kinds of images.


“While we recognize there’s an entire genre of pornography dedicated to this type of content, it’s nearly impossible for us to distinguish when this content may/may not have been produced and distributed consensually. We would rather error on the side of protecting victims and removing this type of content when we become aware of it,” the email states.

The details of Twitter’s new plan for policing violent content are a bit thinner. The company says it will take action against “violent groups” on the platform, as well as against content that glorifies violence. For instance, a tweet that says “Murdering makes sense. That way they won’t be a drain on social services” would now fall under Twitter’s enforcement policy.


However, the email notes that these policies are still being developed, and it doesn’t say this kind of content will be removed from the platform outright. It’s possible that violent content will instead be met with other enforcement action, such as a temporary suspension. In cases involving “hate symbols and imagery,” Twitter will simply hide the images behind a banner that marks them as sensitive media and requires users to click through to see them.

“We realize that a more aggressive policy and enforcement approach will result in the removal of more content from our service. We are comfortable making this decision, assuming that we will only be removing abusive content that violates our Rules,” the email notes.


“Although we planned on sharing these updates later this week, we hope our approach and upcoming changes, as well as our collaboration with the Trust and Safety Council, show how seriously we are rethinking our rules and how quickly we’re moving to update our policies and how we enforce them,” Twitter said in a statement.

[Wired]
