Social media blackout sets ‘troubling precedent’

The social media blackout that Sri Lanka imposed in the aftermath of the Easter Sunday bombings that killed more than 300 people illustrated the dwindling confidence of governments around the world that Silicon Valley can effectively combat the misuse of social media by terrorists.

Free speech activists are troubled not just by the blackout itself but by the lack of any public outcry in response, said Ivan Sigal, executive director of Global Voices, an advocacy group that favors free speech on digital platforms.

“That this is happening and there’s not outrage, because there’s neither trust in platforms nor trust in government, is a point of failure all around,” Sigal told the Washington Examiner. “It’s a telling moment, this idea that it’s an acceptable approach to public security to shut the media down.”

The decision by the Sri Lankan government, which affected platforms including Facebook and photo-sharing site Instagram, was an attempt to prevent confusion and further violence from false reports that were spreading rapidly in the aftermath of the explosions, according to officials in the capital, Colombo.


In March, livestreamed attacks in New Zealand killed 50 people at two mosques, prompting scrutiny of how social media platforms can be exploited by terrorists to give anyone with a smartphone access to an audience of billions. With violent attacks increasing in frequency even in developed countries, calls to regulate the internet have also grown.

“We cannot simply sit back and accept that these platforms just exist and that what is said on them is not the responsibility of the place where they are published,” New Zealand Prime Minister Jacinda Ardern told her country’s Parliament a day after the mosque shootings. “They are the publisher, not just the postman.”

Australia, where the suspect in that case lived, has called on the G-20 nations to consider new rules for social media at the group's meeting in Japan this year. In the U.S., the House Homeland Security Committee summoned executives of Facebook, Google, Twitter, and Microsoft to Washington to explain their handling of the matter.

British Prime Minister Theresa May’s government, meanwhile, has proposed establishing a new internet regulator and imposing a so-called duty of care on digital platforms to block harmful content. Enforcement powers would include not only levying fines but possible criminal prosecution of individual senior managers.

“There definitely is a degradation of trust,” Sigal said.

The government of Sri Lanka, a nation of 22 million still recovering from a 25-year civil war that ended in 2009, bears some responsibility for failing to proactively address tensions that might have rendered the blackout unnecessary, he added.

At Facebook, teams from across the company “have been working to support first responders and law enforcement as well as to identify and remove content which violates our standards,” a spokeswoman said after the Easter slayings. “Our hearts go out to the victims, their families and the community affected by this horrendous act.”

Facebook’s policies ban anything that “glorifies violence or celebrates the suffering or humiliation of others,” including images that show visible internal organs and charred or burning people, founder Mark Zuckerberg has said. Last year, the company dedicated a team of people to identify and delete content that promoted violence against Muslims in Myanmar, though it was criticized for acting too slowly.


Social media plays a significant role in the proliferation of hate speech in Myanmar, according to a 2018 report prepared for the United Nations Human Rights Council that faulted the country's government for allowing such behavior to thrive.

“Facebook has been a useful instrument for those seeking to spread hate, in a context where, for most users, Facebook is the Internet,” the council’s fact-finding mission concluded. “The extent to which Facebook posts and messages have led to real-world discrimination and violence must be independently and thoroughly examined.”

Zuckerberg, for his part, conceded in a Washington Post op-ed in late March that the time has come for increased government regulation of the internet, suggesting that four areas should be prioritized, including harmful content.

“We have a responsibility to keep people safe on our services,” he wrote. “That means deciding what counts as terrorist propaganda, hate speech and more.”

Noting that lawmakers have told him repeatedly, including during congressional hearings, that Facebook has “too much power over speech,” Zuckerberg said he agrees. “I’ve come to believe that we shouldn’t make so many important decisions about speech on our own.”

Twitter CEO Jack Dorsey says he’s open to “regulation where it makes sense,” noting that the San Francisco-based company has focused on proactive removal of content that violates its policies, which prohibit criminal activity and hate speech.

“Of all the tweets we take down every week for abusive content, 38% of them are now proactively detected by our machine learning models,” Dorsey explained. “This is a huge step, as there was 0% just a year ago.”

Twitter has been “focusing a lot of our work on making sure that we recognize that everything that happens online has off-line ramifications, and protect someone’s physical safety above all else,” he added.

Use of the platforms by white nationalists has been a particular concern. In April, the House Judiciary Committee questioned representatives of both Google and Facebook on their efforts to combat supremacist rhetoric, which Chairman Jerry Nadler, D-N.Y., said was linked to extremist attacks, including the one in New Zealand.

“In the age of instant communication with worldwide reach, white nationalists target communities of color and religious minorities through social media platforms, some of which are well-known to all Americans and some of which operate in hidden corners of the web,” he said. “These platforms are utilized as conduits to spread vitriolic hate messages into every home and country.”

Such developments are the reason technology journalist Kara Swisher says her first thought after learning of Sri Lanka's decision was that it was a wise move.

“It pains me as a journalist, and someone who once believed that a worldwide communications medium would herald more tolerance, to admit this — to say that my first instinct was to turn it all off,” she wrote in a New York Times column. “But it has become clear to me with every incident that the greatest experiment in human interaction in the history of the world continues to fail in ever more dangerous ways.”

Sigal worries such reactions might grow more widespread, prompting governments in other countries to try tactics like Sri Lanka's, especially in nations that lack free speech protections comparable to the U.S. Constitution's First Amendment.

“Blocks like that tend to decrease trust in government, and they create spaces for silence and power imbalances in who has access to media,” he said. “It’s a highly problematic approach.”
