Google's Jigsaw aims to increase the quality of online conversations
Jigsaw, a technology incubator that is a part of Google, and Google’s Counter Abuse Technology team want to rid the web of bad comments.
To this end, last week they announced the launch of Perspective, an API “that makes it easier to host better conversations.”
Perspective “uses machine learning models to score the perceived impact a comment might have on a conversation” and can be used by publishers to identify and filter out comments that are likely to be “toxic.” When fed the content of a comment, the API provides a percentage likelihood of the content being deemed toxic.
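To make the request/response flow concrete, here is a minimal sketch of how a publisher might structure a call to Perspective's Comment Analyzer endpoint. The request and response field names follow Google's published API shape; the API key is a placeholder, and the sample response score is illustrative, not real output.

```python
import json

# Comment Analyzer endpoint; "YOUR_API_KEY" is a placeholder.
API_URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           "comments:analyze?key=YOUR_API_KEY")

def build_request(comment_text):
    """Build the JSON body asking Perspective to score TOXICITY."""
    return {
        "comment": {"text": comment_text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }

def toxicity_score(response_json):
    """Extract the summary toxicity probability (0.0 to 1.0)."""
    return response_json["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# A response shaped like the API's, with an illustrative score of 0.13,
# i.e. the comment is rated 13% likely to be perceived as toxic.
sample_response = {
    "attributeScores": {
        "TOXICITY": {"summaryScore": {"value": 0.13, "type": "PROBABILITY"}}
    }
}

body = build_request("I do not agree with the point of this article.")
print(json.dumps(body))
print(toxicity_score(sample_response))  # prints 0.13
```

A publisher would POST `body` to `API_URL` and then, for example, hold any comment whose score exceeds a chosen threshold for human review.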
A toxic comment is “a rude, disrespectful, or unreasonable comment that is likely to make you leave a discussion,” and Perspective’s toxicity model has been trained by asking real people to rate real comments on a scale that ranges from “very toxic” to “very healthy.” The Perspective website offers a free tool that demonstrates how the Perspective API rates sample content.
Jigsaw does note that “it’s still early days and we will get a lot of things wrong,” but already, a number of prominent publishers are experimenting with the Perspective API. For example, The New York Times and The Guardian are working on moderation tools that aim to improve the quality of conversations within their reader communities, and Wikipedia is testing how it can better detect attacks against its editors.
Few would argue that “mak[ing] it easier to host better conversations” online is an unworthy goal. Trolling and personal attacks can quickly destroy comment sections and forums, and unfortunately they seem to be increasingly common.
But in some cases, there’s a fine line between “toxic” contributions and contributions that, while perhaps negative in tone or argumentative, are entirely legitimate and conducive to beneficial discussion.
In this author’s testing of the Perspective API, a few words can have a significant impact on how a comment is rated. For example, the comment “I do not agree. You have distorted the point of the article and are intentionally misrepresenting the facts” is deemed by the Perspective API to be 13% similar to comments people said were “toxic.”
But change the first sentence from “I do not agree” to “That’s silly”, and the percentage more than doubles to 34%.
Over time, as it collects more data and user feedback, Perspective’s model should improve. But for publishers hoping to rely heavily on this technology, the problem is that when it comes to the possibility of “censoring” contributors, it is, for obvious reasons, hard to entrust decisions to a machine.
Because of this, publishers looking to foster better conversations among members of their virtual communities would be wise to consider that ultimately, the conversation quality challenge is a problem created by humans, and must be solved by humans.