Google Researchers Are Forced To Hide The Real Nature Of AI

Dhir Acharya - Dec 28, 2020


Google's parent company, Alphabet, has been asking its scientists to make AI technology look more positive in their research papers.

According to a Wednesday report from Reuters, Alphabet, Google’s parent company, has been asking its scientists to make AI technology look more positive in their research papers.

Reportedly, a new procedure requires researchers to consult with Google’s policy, legal, or PR teams for a “sensitive topics review” before exploring areas such as face analysis and the categorization of people by gender, race, or political affiliation.

The policy cited in the report reads:

“Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly inoffensive projects raise ethical, reputational, regulatory or legal issues.”

Is artificial intelligence as positive as Google shows us?

Internal correspondence seen by Reuters shows that the company also told authors to pay attention to striking a positive tone.

Earlier this month, Google Chief Executive Officer Sundar Pichai apologized for the company’s handling of AI researcher Timnit Gebru’s departure and said it would investigate the incident. Gebru left Google on December 4 and has said she was forced out over an email she sent to co-workers.

In the email, which Platformer published in full, she criticized Google’s Diversity, Equity, and Inclusion efforts. She wrote that she had been asked to retract a research paper she had been working on after receiving feedback on it.

“You're not supposed to even know who contributed to this document, who wrote this feedback, what process was followed or anything. You write a detailed document discussing whatever pieces of feedback you can find, asking for questions and clarifications, and it is completely ignored.”


She added:

“Silencing marginalized voices like this is the opposite of the NAUWU principles which we discussed. And doing this in the context of 'responsible AI' adds so much salt to the wounds.”

NAUWU, or “nothing about us without us,” is the idea that policies should not be made without input from the people they affect.

Google did not immediately respond to a request for comment.
