A controversial facial recognition database, used by police departments across the nation, was built in part with 30 billion photos the company scraped from Facebook and other social media users without their permission, the company's CEO recently admitted, creating what critics called a "perpetual police line-up," even for people who haven't done anything wrong.
The company, Clearview AI, boasts of its potential for identifying rioters at the January 6 attack on the Capitol, saving children being abused or exploited, and helping exonerate people wrongfully accused of crimes. But critics point to wrongful arrests fueled by faulty identifications made by facial recognition, including cases in Detroit and New Orleans.
Clearview took photos without users' knowledge, its CEO Hoan Ton-That acknowledged in an interview last month with the BBC. Doing so allowed for the rapid expansion of the company's massive database, which is marketed on its website to law enforcement as a tool "to bring justice to victims."
Ton-That told the BBC that Clearview AI's facial recognition database has been accessed by US police nearly a million times since the company's founding in 2017, though the relationships between law enforcement and Clearview AI remain murky and that number could not be confirmed by Insider.
Representatives for Clearview AI did not immediately respond to Insider's request for comment.
What happens when unauthorized scraping is detected
The technology has long drawn criticism for its intrusiveness from privacy advocates and digital platforms alike, with major social media companies including Facebook sending cease-and-desist letters to Clearview in 2020 for violating their users' privacy.
"Clearview AI's actions invade people's privacy which is why we banned their founder from our services and sent them a legal demand to stop accessing any data, photos, or videos from our services," a Meta spokesperson said in an email to Insider, referencing a statement Meta made in April 2020 after it was first revealed that Clearview was scraping user photos and working with law enforcement.
Since then, the spokesperson told Insider, Meta has "made significant investments in technology" and devotes "substantial team resources to combating unauthorized scraping on Facebook products."
When unauthorized scraping is detected, the company may take action "such as sending cease and desist letters, disabling accounts, filing lawsuits, or requesting assistance from hosting providers" to protect user data, the spokesperson said.
However, despite such policies, once a photo has been scraped by Clearview AI, biometric face prints are made and cross-referenced in the database, tying the individuals to their social media profiles and other identifying information forever — and people in the photos have little recourse to try to remove themselves.
Residents of Illinois can opt out of the technology (by providing another photo that Clearview AI says will be used only to identify which stored photos to remove) after the ACLU sued the company under a statewide privacy law and succeeded in banning the sale of Clearview AI's technology to private businesses nationwide. Residents of other states, however, do not have the same option, and the company is still permitted to partner with law enforcement.
'A perpetual police line-up'
"Clearview is a total affront to peoples' rights, full stop, and police should not be able to use this tool," Caitlin Seeley George, the director of campaigns and operations for Fight for the Future, a nonprofit digital rights advocacy group, said in an email to Insider, adding that "without laws stopping them, police often use Clearview without their department's knowledge or consent, so Clearview boasting about how many searches is the only form of 'transparency' we get into just how widespread use of facial recognition is."