Clearview AI Harvests Private, Deleted Photos for Facial Recognition

People walk past a poster simulating facial recognition software at the Security China 2018 exhibition. © REUTERS / Thomas Peter

The New York-based tech company Clearview AI made headlines last year when it was first reported that the facial recognition firm, with its mammoth database of three billion photographs, had begun providing services to law enforcement agencies.

During a recent interview with CNN Business, Clearview cofounder Hoan Ton-That, 31, showed how Clearview's algorithm circumvents privacy settings on social media. The program easily identified a CNN staffer using several Instagram photos that had been kept behind normal privacy settings.

The CNN reporter described the bizarre experience: "As we scrolled through the images it had found, my producer noticed that Clearview had found pictures from her Instagram account, even though her account is private, accessible only to her followers".

Ton-That remarked that Clearview AI had probably downloaded the photos from her account before they were tagged private.
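
The crux is that a crawler which copies a public photo and stores a searchable face embedding keeps its copy even if the account later goes private; the privacy change never reaches back into the index. The Python sketch below illustrates that generic scrape-then-index pattern only, not Clearview's actual system: embed_face is a random-vector stand-in for a real face-embedding model, and all image names are hypothetical.

```python
# Generic "scrape once, index forever" illustration -- NOT Clearview's code.
# Embeddings are random stand-ins for the vectors a real face model would produce.
import numpy as np

rng = np.random.default_rng(0)
EMBEDDING_DIM = 128

def embed_face(image_id: str) -> np.ndarray:
    """Hypothetical stand-in for a face-embedding model; returns a unit vector."""
    vec = rng.normal(size=EMBEDDING_DIM)
    return vec / np.linalg.norm(vec)

# 1. Crawl: every photo that is public at crawl time is embedded and stored.
index = {}  # image_id -> embedding
for image_id in ["public_photo_1", "public_photo_2", "photo_later_made_private"]:
    index[image_id] = embed_face(image_id)

# 2. The source account is switched to private afterwards -- but the copies
#    already embedded stay in the index and remain searchable.

# 3. Query: compare a probe face against every stored embedding (cosine
#    similarity, since all vectors are unit length) and return the best matches.
def search(probe: np.ndarray, top_k: int = 3):
    scores = {img: float(probe @ emb) for img, emb in index.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

probe = index["photo_later_made_private"]  # the same face shows up in a query
print(search(probe))
```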

The sheer size of the Clearview database also made it possible, during another demonstration, to find a newspaper photo of the CNN reporter taken when he was 15 years old and match it to his current appearance.

"Most jarringly, he found a photo that I had probably not seen in more than a decade, a picture that ran in a local newspaper in Ireland when I was 15 years old and in high school. Needless to say, I look a lot different now than I did then; in fact, my producer, who has to spend far more time than she'd like looking at me through a camera, didn't even recognize me. But the system did", the CNN Business reporter said.

Ton-That claimed that, despite fears of privacy violation, the company uses its powerful facial recognition technology only for criminal investigations.

"You have to remember that this is only used for investigations after the fact. This is not a 24/7 surveillance system. The way we have built our system is to only take publicly available information and index it that way", Ton-That asserted, cited by CNN Business.

According to the Clearview website, the technology has "helped law enforcement track down hundreds of at-large criminals, including pedophiles, terrorists and sex traffickers".

Despite the company's declarations of goodwill and its role in crime-fighting, the use and sale of Clearview AI's product raise serious privacy concerns.

"The weaponization possibilities of this are endless. Imagine a rogue law enforcement officer who wants to stalk potential romantic partners, or a foreign government using this to dig up secrets about people to blackmail them or throw them in jail", Eric Goldman, co-director of the High Tech Law Institute at Santa Clara University in California, said, cited by The New York Times.

New Jersey Attorney General Gurbir Grewal reportedly instructed prosecutors across the state to stop using Clearview AI, despite the company's claim that its product assisted in successfully investigating a child predator ring.

"I was deeply disturbed [...] I was concerned about how Clearview had amassed its database of images that it uses with its technology. I was concerned about its data privacy and cybersecurity measures that it takes", Grewal said, cited by CNN Business.
