
Big Tech juggles ethical pledges on facial recognition with corporate interests

The careful wording of public pledges leaves plenty of room for oppressive uses of the technology, critics say.
A video surveillance camera hangs from the side of a building in San Francisco on May 14, 2019. Justin Sullivan / Getty Images file

Over the course of four days last week, three of America's largest technology companies — IBM, Amazon and Microsoft — announced sweeping restrictions on their sale of facial recognition tools and called for federal regulation amid protests across the United States against police violence and racial profiling.

In terms of headlines, it was a symbolic shift for the industry. Researchers and civil liberties groups who have been calling for strict controls or outright bans on the technology for years are celebrating, although cautiously.

They doubt, however, that much has changed. The careful wording of the public pledges leaves plenty of room for oppressive uses of the technology, which exacerbate human biases and infringe on people's constitutional freedoms, critics say.

"It shows that organizing and socially informed research works," said Meredith Whittaker, co-founder of the AI Now Institute, which researches the social implications of artificial intelligence. "But do I think the companies have really had a change of heart and will work to dismantle racist oppressive systems? No."

Facial recognition has emerged in recent years as a major area of investment, both in terms of developing technology and in lobbying to allow law enforcement and private companies to use it. The technology began to show up in government contracts, with some companies, like Clearview AI, scraping billions of photos of unwitting members of the public from social media to build a near-universal facial recognition system.

At the same time, critics and skeptics of the technology — including from within the companies — have pushed for transparency and regulations around its use. Some of those efforts have been successful, with cities like San Francisco, Oakland and Berkeley in California and Somerville, Massachusetts, banning the use of the software by police and other agencies.

Now, Whittaker and other technology researchers and civil rights groups, including the American Civil Liberties Union and Mijente, an immigrant rights group, say the technology companies' pledges have more to do with public relations at a time of heightened scrutiny of police powers than with any serious ethical objection to deploying facial recognition as a whole.

They seek a total ban on government use of the technology, arguing that neither companies nor law enforcement agencies can be ethically trusted to deploy such a powerful tool.

"Facial recognition technology is so inherently destructive that the safest approach is to pull it out root and stem," said Woody Hartzog, a professor of law and computer science at the Northeastern University School of Law.

While the companies make timely public calls for regulation, they have armies of lobbyists working to shape that regulation to ensure that they can continue to bid for government surveillance contracts, said Shankar Narayan, former director of the Technology and Liberty Project of the ACLU of Washington and co-founder of MIRA, a community engagement agency.

Facial recognition software is demonstrated at the Intel booth at the Consumer Electronics Show at the Las Vegas Convention Center on Jan. 10, 2019. Robyn Beck / AFP - Getty Images file

"This isn't a shift but part of the optics pivot that big tech was doing well before this," Narayan said. "These companies have been saying, 'Hey, we care so much about these issues that we will write the rules and regulations ourselves that will allow the technology to be widely embraced.'"

Microsoft and Amazon have invited some of that skepticism by calling for limits on facial recognition technology while continuing to develop and deploy it.

With last week's announcement, Microsoft said it wouldn't sell facial recognition to police in the United States until there was federal regulation. But the company has spent months lobbying state governments to pass bills to allow police to use facial recognition.

In Washington state, a Microsoft employee wrote a facial recognition bill that was signed into law in April. The law requires basic transparency and accountability mechanisms around government use of the technology, but beyond outlawing "mass surveillance," it does little to restrict how police can use it.

"The worry is that this weak regulation would be the template for a federal law and that Congress will make use of federal pre-emption to undo the laws that are stronger locally," said Liz O'Sullivan, technology director of the Surveillance Technology Oversight Project, referring to cities that have banned police use of the software.

Over the last two years, Microsoft has warned about the "sobering" applications of facial recognition technology and called for government regulation and the application of ethical principles.

"Imagine a government tracking everywhere you walked over the past month without your permission or knowledge," Microsoft President Brad Smith said in a July 2018 blog post. "Imagine a database of everyone who attended a political rally that constitutes the very essence of free speech."

Around the same time, however, the company was pitching its facial recognition technology to the Drug Enforcement Administration, hosting agents at its office in Reston, Virginia, according to emails dating from September 2017 to December 2018 obtained by the ACLU and shared with NBC News.

The company's ethical principles were also challenged in its 2019 investment in the Israeli facial recognition startup AnyVision, which field-tested its surveillance tools on a captive population of Palestinians despite having made public pledges to avoid using the technology if it encroached on democratic freedoms. After NBC News reported on the startup's activities in the West Bank, Microsoft commissioned an investigation by former Attorney General Eric Holder and eventually divested from AnyVision.

Microsoft didn't respond to a request for comment.

Amazon said in its announcement that it wouldn't sell its Rekognition tool to police for a year to give Congress time to regulate the technology.

It's not clear how many law enforcement customers Amazon had for Rekognition, but the Washington County, Oregon, Sheriff's Office has used it since late 2017 to compare mugshots to surveillance video — a contract that attracted criticism from civil rights groups.

For years, the ACLU, top AI researchers and some of Amazon's own investors and employees have urged the company to stop providing its technology to law enforcement. Studies have found the system to be less accurate at identifying dark-skinned faces. Amazon repeatedly disputed the research and continued to promote the tool to law enforcement.

"New technology should not be banned or condemned because of its potential misuse," Michael Punke, Amazon's vice president of global policy, said in a February 2019 blog post. "Instead, there should be open, honest, and earnest dialogue among all parties involved to ensure that the technology is applied appropriately and is continuously enhanced."

Even with regulation, law enforcement could still misuse facial recognition technology, said Jacinta Gonzalez, field director of Mijente.

"There's a huge crisis of accountability with policing," she said, pointing to the protests taking place across the country in the aftermath of George Floyd's death in police custody. "Until we have accountability, the continued investment in these technologies will only further the criminalization and abuse of Black and immigrant communities."

Amazon didn't respond to a request for comment.

IBM seemed to go further than Amazon and Microsoft by pledging in a letter to Congress to stop researching, developing or selling "general purpose" facial recognition.

In the letter, IBM CEO Arvind Krishna said the company "firmly opposes" the use of facial recognition for "mass surveillance, racial profiling, violations of basic human rights and freedoms."

John Honovich, founder of IPVM, an independent website that tests and reports on surveillance systems, said the timing of the announcement was curious, because the company had pulled its video analytics product that included facial recognition from the market in May 2019.

IBM had previously tried to develop less racially biased facial recognition software through the release in January 2019 of a diverse set of 1 million photos of faces of people of different skin tones, ages and genders. However, as NBC News reported in March 2019, the company took those photos from Flickr without the subjects' knowledge or informed consent.

Although the company said it was for research purposes only, IBM has a history of developing facial recognition for law enforcement. In the aftermath of the terrorist attacks of Sept. 11, 2001, the company used New York Police Department surveillance camera video to develop technology that allowed police to search video feeds for people based on attributes that included their skin color.


Eliminating bias in facial recognition technology might represent scientific progress, but it doesn't make the technology safer, critics say.

"It's incredibly bad and destructive when it's biased, but it's even worse when it's accurate, because then it becomes more attractive to those in power that wish to use it," said Hartzog, of Northeastern University. "We know that people of color bear the brunt of surveillance tools."

IBM relaunched its video analytics tool in early May, but it told IPVM that it had removed facial recognition, race and skin tone analytics based on recommendations from its AI ethics panel.

Honovich said IBM was a small player in the face surveillance industry, so its withdrawal from the market wouldn't make much of an impact on the company's bottom line.

"It's not a tough business decision, especially if there's tons of protests against it," Honovich said.

IBM declined to comment.

Honovich and others also noted that although IBM, Microsoft and Amazon are giants in the technology industry, they aren't market leaders in the police surveillance industry. They have plenty of competition in the form of startups selling facial recognition to law enforcement, including Briefcam and Clearview AI, which don't have the kinds of consumer-facing brands susceptible to public pressure. That allows the big companies to take the moral high ground without limiting police surveillance capabilities.

"The small companies will quietly continue to fly under the radar and sell products exclusively to law enforcement," O'Sullivan said. "It's a better business model, because you don't have to worry about your brand and people buying books and toilet paper from your website if all you do is sell facial recognition to law enforcement."

CORRECTION (June 22, 2020, 7:15 p.m. ET): An earlier version of this article misstated Shankar Narayan's former position. He is the former director of the Technology and Liberty Project of the ACLU of Washington, not the former director of the national ACLU's Speech, Privacy and Technology Project.