Facebook’s Hate Speech Policies Censor Marginalized Users

Opinion: Facebook needs to fix its policies to keep the platform open to the LGBTQ community and people of color.

As queer artists and activists who have challenged Facebook’s “real names” policy for three years, we’re alarmed by a new trend: Many LGBTQ people’s posts have been blocked recently for using words like “dyke,” “fag,” or “tranny” to describe ourselves and our communities.

While these words are still too often shouted as slurs, they're also frequently "reclaimed" by queer and transgender people as a means of self-expression. However, Facebook's algorithmic and human reviewers seem unable to accurately parse the context and intent of their usage.

Whether intentional or not, these moderation failures constitute a form of censorship. And just like Facebook's dangerous and discriminatory real names policy, these examples demonstrate how the company's own practices often amplify harassment and cause real harm to marginalized groups like LGBTQ people, communities of color, and domestic violence survivors—especially when reporting tools are used as a form of bullying to silence other users for their identities or political activities.

For example, we’ve received reports from several people whose posts about their LGBTQ activism were taken down. Ironically, one was attorney Brooke Oliver, who posted about a recent Supreme Court ruling related to her historic case that won Dykes on Bikes (a group of motorcycle-riding lesbians that traditionally leads gay pride parades) a trademark.

Two individuals wrote that they were reported for posting about the return of graphic novelist Alison Bechdel’s celebrated Dykes To Watch Out For comic strip. One happened to be Holly Hughes, who is no stranger to censorship: She’s a performance artist and member of the infamous NEA Four. A gay man posted that he was banned for seven days after sharing a vintage flyer for the 1970s lesbian magazine DYKE, which was recently featured in an exhibition at the Museum of the City of New York. A queer poet of color’s status update was removed for expressing excitement in finding poetry that featured the sex lives of “black and brown faggots.”

A young trans woman we heard from was banned for a day after referring to herself as a "tranny" alongside a selfie that proudly showed off her new hair style. After she regained access, she posted about the incident, only to be banned again for three more days. She also highlighted double standards in reporting, noting that in her experience men often use the term to harass her, but are rarely held accountable. Many others also shared stories of reporting genuinely homophobic, transphobic, racist, and sexist content, only to be told it didn't violate Facebook's "Community Standards."

Additionally, former RuPaul’s Drag Race contestant Honey Mahogany was unable to purchase an ad featuring the hashtag #blackqueermagic for an event that features a cast of African-American performers. It turns out that Facebook prohibits ads with “language that refers to a person’s age, gender, name, race, physical condition, or sexual orientation” (though it is easy enough to target users based on identity regardless). While such policies may rightfully prevent discrimination in legally protected areas like employment or housing, they cast too wide a net and ultimately discriminate against communities in cases like this.

And these stories are just the tip of the iceberg. Facebook, of course, has recently faced a string of public controversies for (temporarily) removing content like Nick Ut's famous photo of Kim Phúc fleeing a napalm attack and video of Philando Castile's murder by police.

Interestingly, in a recent blog post on the difficulty of moderating hate speech, Facebook vice president Richard Allan offered “dyke” and “faggot” as challenging examples, noting that, “When someone uses an offensive term in a self-referential way, it can feel very different from when the same term is used to attack them.”

However, as with its real names policy, while Facebook’s intentions may be noble, its algorithms and human-review teams still make too many mistakes. The company is also increasingly under pressure from users, groups, and now governments to improve its procedures—Germany just passed legislation requiring social media companies to remove hate speech.

We’ve identified four interrelated problems.

First, Facebook's leadership doesn't seem to understand the nuances of diverse identities. As leaked documents recently published by ProPublica indicate, its policies aim to prevent harassment of users based on "protected categories" like race, gender, and sexual orientation; however, by making exceptions for subsets of protected groups, the company's protocols paradoxically "protect white men from hate speech but not black children," as ProPublica reported. Such a color-blind and non-intersectional approach fails to acknowledge the ways in which different groups are discriminated against differently. (It is also not too surprising that Facebook ultimately protects white men, given its employee demographics.)

Second, Facebook's approach to most issues, including authentic names and hate speech, is to create one-size-fits-all policies that it claims will work for the majority of users. However, given that Facebook's user base just topped 2 billion, even something that affects 1 percent of users still affects 20 million people. Moreover, it appears that Facebook's own policies aren't applied consistently: Sometimes Facebook formally carves out exceptions to its rules, but even then the risk of reviewers' own opinions or biases interfering is huge.

Third, Facebook does not share the details of its enforcement guidelines, release data on the prevalence of hate speech, or give users an opportunity to appeal decisions and receive individualized support. With this lack of transparency and accountability, the company plays judge, jury, and executioner with its patchwork of policies, leaving many users stuck in automated customer service loops. And in cases in which users are banned, they are unable to participate in what is arguably one of our most important public forums. Like the telephone, Facebook is essentially a utility: To be abruptly cut off from one’s content and communities, based on arbitrary policies, can be annoying if not outright dangerous—especially for queer and trans people who rely on these connections for support.

Fourth, Facebook appears unwilling to invest adequate resources to address these issues. While it recently pledged to increase the size of its review team, even employing 7,500 people to moderate hundreds of thousands of reported posts each week seems paltry. Further, many reviewers may not be adequately trained or given enough time to interpret context and detail, especially with regard to diverse communities. This is exacerbated by the fact that many content reviewers are outsourced internationally and don't necessarily have cultural competence in the geographic areas they are reviewing.

Our experience with the #MyNameIs campaign—which scored an apology and several changes to the company’s “real names” policy, and continues to assist users in getting their accounts back—is that Facebook prioritizes and invests resources in rainbow-colored PR stunts and social experiments, but drags its feet when it comes to building tools that actually keep people safe.

We understand that it’s not easy to manage a platform that’s home to one-quarter of the world’s population, and that living in an increasingly digital culture comes with unanticipated challenges. And yet, if Facebook truly wants to “build community and bring the world closer together,” it has to do better. It must work with diverse communities to genuinely understand their needs, and implement policies and protocols that take into account their specificities—without doing harm, denying users’ identities, or preventing us from expressing ourselves.

When Facebook released its rainbow flag Pride reaction button earlier this summer, reviews were mixed. For us, a rainbow flag is a great symbol, but it’s not enough to wave—or click. If Facebook truly wants to be a safe place for queer, trans, and other diverse communities, it needs to fix its names and content moderation policies before LGBT users march on by to a better platform.

Dottie Lux (@redhotsburlyq) is an event producer and the creator of Red Hots Burlesque, a queer burlesque and cabaret; she is also a co-owner at San Francisco's Legacy Business The Stud Bar. Lil Miss Hot Mess (@lilmisshotmess) is a PhD student in media studies at NYU by day and a drag queen by night. Both are organizers with the #MyNameIs campaign. WIRED Opinion publishes pieces written by outside contributors and represents a wide range of viewpoints.