
Report: Facebook's Content Rules 'Favor Elites And Government' Over Activists, People Of Color

An investigation by the nonprofit journalism group ProPublica has found that Facebook's internal methods for managing hate speech come down harder on minority groups than on privileged white users.

According to the investigation, which reaches back years, Facebook's rules for moderating hate speech and other questionable content on the platform have often been applied in ways that favor those in power. Based on numerous enforcement decisions and Facebook's own internal training documents, ProPublica has determined that the social media site may go the extra mile to protect speech by prominent white users while quickly deleting or punishing potentially controversial content posted by users of color, particularly when that content gets political.

Citing a long list of cases in which charged statements by prominent white users survived scrutiny while statements by government protesters and activists of color were taken down (often despite seemingly not breaking Facebook's content rules), ProPublica concluded that the world's largest social media site has a history of unevenly enforcing its policies. Having reviewed a range of Facebook's training materials for moderators, ProPublica also wrote, "the documents suggest that, at least in some instances, the company’s hate-speech rules tend to favor elites and governments over grassroots activists and racial minorities. In so doing, they serve the business interests of the global company, which relies on national governments not to block its service to their citizens."

See also: Facebook Accidentally Revealed Moderators' Identities To Suspected Terrorists

According to ProPublica's review of internal documents, Facebook has trained its content moderators, typically contractors based around the world, to apply the company's global hate speech algorithm according to whether a targeted group is "protected" or not. As ProPublica explained, insulting or hateful speech and calls for violence are deleted only when they target Facebook's protected categories of people, defined by race, gender identity, sex, religion, national origin, ethnicity, and serious disability or disease.

Attacks on subsets of these groups, meanwhile, tend to be permitted--meaning calls to wipe out radicalized Muslims (rather than all Muslims) can endure, but accusations of widespread white racism tend to be nixed. Thus, according to Facebook training documents reviewed by ProPublica, "White men are considered a group because both traits are protected, while female drivers and black children, like radicalized Muslims, are subsets, because one of their characteristics is not protected."
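
To make the reported rule concrete, here is a minimal, hypothetical Python sketch of the subset logic ProPublica describes. Everything below (the function names, the trait labels) is invented for illustration and does not come from Facebook's documents or systems.

# Hypothetical illustration of the "protected group" rule ProPublica
# describes; this is not Facebook's actual code, and every name and
# trait label below is invented for the example.

PROTECTED_CHARACTERISTICS = {
    "race",
    "gender identity",
    "sex",
    "religion",
    "national origin",
    "ethnicity",
    "serious disability or disease",
}

def is_protected_group(traits):
    """A group is protected only if every one of its traits is protected."""
    return bool(traits) and all(t in PROTECTED_CHARACTERISTICS for t in traits)

def attack_should_be_removed(target_traits):
    """Under the described rule, an attack is deleted only when it targets
    a fully protected group; subsets that mix a protected trait with an
    unprotected one (age, occupation, radicalization) are permitted."""
    return is_protected_group(target_traits)

# "White men": race and sex are both protected, so an attack is removed.
print(attack_should_be_removed({"race", "sex"}))              # True
# "Radicalized Muslims": religion is protected, "radicalized" is not.
print(attack_should_be_removed({"religion", "radicalized"}))  # False
# "Black children": race is protected, age is not.
print(attack_should_be_removed({"race", "age"}))              # False

The asymmetry ProPublica highlights falls out of the all-or-nothing check: adding any unprotected modifier to a group strips it of protection.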

See also: Facebook Wants Users' Help With 'Hard Questions' On Content, Censorship And Safety

Stacey Patton, a journalism professor at Baltimore's historically black Morgan State University whose post questioning racial double standards in American violence was deleted and whose account was disabled for three days as a result, said the pattern of censorship places a heavy burden on users of color. “It’s such emotional violence,” Patton told ProPublica. “Particularly as a black person, we’re always having these discussions about mass incarceration, and then here’s this fiber-optic space where you can express yourself. Then you say something that some anonymous person doesn’t like and then you’re in ‘jail.’”

In the past decade, Facebook's rules for moderating speech on the platform have multiplied along with its user base, which now numbers 2 billion people worldwide. In recent months, the site has faced mounting calls for better oversight of its content, which has at times included live-streamed violence, false news stories, racist vitriol, and terrorism recruitment materials.

As part of its new "Hard Questions" series, aimed at helping users understand and weigh in on the platform's evolving security practices, Facebook explained its definition of hate speech in a blog post yesterday. Among other things, Richard Allan, Facebook's VP of Public Policy for EMEA, wrote,

Our current definition of hate speech is anything that directly attacks people based on what are known as their “protected characteristics”--race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, or serious disability or disease.

There is no universally accepted answer for when something crosses the line. Although a number of countries have laws against hate speech, their definitions of it vary significantly.

[...]

People who live in the same country--or next door--often have different levels of tolerance for speech about protected characteristics. To some, crude humor about a religious leader can be considered both blasphemy and hate speech against all followers of that faith. To others, a battle of gender-based insults may be a mutually enjoyable way of sharing a laugh.

Allan also noted that Facebook is "committed to removing hate speech any time [they] become aware of it," and has deleted an average of roughly 288,000 posts reported as hate speech per month globally over the past two months. "But it’s clear we’re not perfect when it comes to enforcing our policy. Often there are close calls--and too often we get it wrong," Allan wrote.

When asked for comment, a Facebook representative pointed to the same blog post.

ProPublica's extensive reporting on the topic, including examples of seemingly politically related deletions, can be found here.

See also: Netflix Joins Net Neutrality Coalition, Vowing 'Never To Outgrow The Fight'
