If not AI ethicists like Timnit Gebru, who will hold Big Tech accountable?

Until last week, Timnit Gebru co-led the Ethical Artificial Intelligence (AI) team at Google. Dr. Gebru’s groundbreaking research with Algorithmic Justice League founder Joy Buolamwini has exposed how AI systems exhibit biases, especially against people of color. Their work prompted moratoriums on facial recognition software, and it is why Gebru’s position at Google lent credibility to the company’s efforts to responsibly integrate AI into a wide range of its products.

Now the tech world is roiling in the wake of her dismissal from Google. While Google’s Senior Vice President Jeff Dean has said that Gebru resigned, Gebru and others dispute that account. What is clear is that, following a disagreement over an AI research paper and a frustrated email from Gebru about ineffectual diversity efforts, Google’s management did not hesitate to push Gebru out.

Timnit Gebru is a Black woman with a distinguished history of diversifying AI research, and Google’s actions raise difficult and important questions about the company’s diversity initiatives. Yet there are also serious accountability concerns: Who should be overseeing the deployment of artificial intelligence systems with major societal implications? Because those systems are typically built with proprietary data and are often accessible only to the employees of large technology companies, AI ethicists at these companies represent a key—sometimes the only—check on whether they are being responsibly deployed. If a researcher like Gebru, an undisputed leader in the field of AI ethics, cannot carry out that work within a company like Google, one of the leading U.S. developers of AI, then who can?

Improving AI from the inside

Ethical AI research is hugely important. With AI deployed in a wide range of systems in both the private sector and government, ethical AI research can help make those systems safer, fairer, and more transparent. Such research can also prevent the use of a problematic application, as when Amazon scrapped an AI recruiting tool that was unable to remedy a bias against women. The paper at the heart of the dispute between Gebru and Google discusses the environmental consequences of large language models, their emerging bias and interpretability challenges, and the structural limitations of large AI models that try to understand language. These are meaningful problems for large language models, and the paper would not be the first to level such criticism.

When examining AI systems, research from within technology companies has the potential to be more precise, since it can be done by researchers who have access to, and familiarity with, the company’s data. As was the case with Gebru and her co-authors’ paper, corporate researchers are also more likely to have access to cutting-edge AI models, which can cost millions of dollars to train and are thus prohibitively expensive for many outside researchers to access.

Indeed, designing research to study internet platforms from the outside is notoriously challenging. Take for instance the potential for YouTube to radicalize users by recommending more extreme political content. A recent report examining the 2019 massacre in Christchurch, New Zealand, cited the shooter’s consumption of far-right content on YouTube as a contributing factor in his radicalization. Yet broader examinations of radicalization on the platform falter on a lack of data. Academic studies have attempted to answer narrow questions, like whether YouTube typically recommends more extreme political content. But without access to YouTube’s data, these studies are unable to account for algorithmic decisions based on personal browsing history or observe whether people often follow the recommendations toward radicalization.

Further, outside researchers have no ability to test counterfactuals—that is, what might happen if the algorithm were changed to mitigate the problems. These are tough questions even with YouTube’s data, and from the outside, “there’s no good way for external researchers to quantitatively study radicalization,” argues Arvind Narayanan, a Princeton computer science professor.

These limitations produce a consistent pattern. External researchers collect what data they can and probe AI systems from afar. When their studies criticize those systems, the technology companies typically dispute the findings, usually arguing that the study’s methodology was flawed—as YouTube did in response to the research above on the recommendation of extreme political content. These responses tend not to mention that improving the study would be impossible without the company’s collaboration.

Regarding the recommendation of extreme political content, YouTube later changed its algorithm to recommend more authoritative sources. But it is unlikely there will ever be a public understanding of how this decision, which shapes the recommendations that account for 70% of all videos watched on YouTube, came to be made. This is not to say the companies are always misleading the public. At times, academic researchers may be mistaken and misplace their criticism.

Such an outcome is the exception, not the rule—criticisms from outside researchers are generally brushed aside by technology companies. This is why the presence of ethical research teams inside major tech companies, with both database permissions and access to the C-suite, is so important. In an ideal world, ethical AI teams could build new knowledge and act as a mechanism for corporate accountability. Unfortunately, as the dismissal of Gebru demonstrates, this is hardly guaranteed. Events like this will surely dampen the spirits and temper the actions of other AI ethicists.

Big Tech is bereft of accountability

The marginalization of internal ethics teams and the struggles of outside researchers to examine AI systems are especially troubling because little else holds Big Tech accountable. Consumer boycotts have failed to change company business practices: While the 2017 #DeleteUber campaign led hundreds of thousands of people to delete the app, that was only a small fraction of Uber’s tens of millions of monthly users. What’s more, because the Big Tech companies are ubiquitous on the internet, cutting them out altogether is functionally impossible. Even advertisers struggle to make an impact. The #StopHateForProfit advertiser boycott of Facebook earned the support of more than 1,000 companies, and yet Facebook reported ad revenue growth during the month-long moratorium.

Innovative journalism can be impactful, such as when ProPublica used Facebook’s own ad targeting platform to show how easily it could be used to violate the federal Fair Housing Act. This led to letters from Congress and a series of lawsuits that eventually resolved the issue, but it took years for a recalcitrant Facebook to make the necessary changes. More typically, tech reporting leads to a brief outburst of outrage without sustained long-term improvements. This is not an indictment of journalism, but of the lack of federal government action.

There are signs this lack of governmental oversight is changing. A renewed interest in antitrust enforcement has sparked a recent lawsuit from the Federal Trade Commission and forty states accusing Facebook of anticompetitive behavior and another from the Justice Department targeting Google. Hopefully, the Biden administration will take a more active role, too—helping agencies enforce existing laws that should apply to tech companies.

These efforts are meaningful but have not structurally changed the equation. Big Tech remains well entrenched and, at best, only moderately responsive to the public will. This is why the rank-and-file developers and data scientists at tech companies matter so much. They are highly trained, relatively hard to replace, and essential to core business functions. So far, they have not turned against their employers in any systematic way. This is best exemplified by how Facebook and Uber, among the most criticized major tech companies, are still able to hire and retain talent.

Nonetheless, the big tech companies are clearly afraid of the possibility of a revolt by their engineers. This fear explains Google’s broader pattern of hostility toward employee activism and organizing. A recent federal complaint faults Google for illegally firing two employees involved in labor organizing and for spying on others. Google has also been criticized for retaliating against organizers of a mass walkout over the company’s mishandling of sexual harassment claims. Last year, Google altered its internal guidance to minimize the non-work-related conversations that were once a hallmark of the company’s ethos.

The limits of outside influence and the suppression of internal dissent frame the portrait of Timnit Gebru’s dismissal. Gebru foresaw as much herself, writing in the email that criticized Google’s diversity efforts that “if you would like to change things, I suggest focusing on leadership accountability.”

It’s good advice and someone needs to do it. But if not AI ethicists like Gebru, then who?


Facebook and Google are general, unrestricted donors to the Brookings Institution. The findings, interpretations and conclusions in this piece are solely those of the author and not influenced by any donation.