Civil liberties activists are suing a company that provides facial recognition services to law enforcement agencies and private companies around the world, contending that Clearview AI illegally stockpiled data on 3 billion people without their knowledge or permission.

The lawsuit, filed Tuesday in Alameda County Superior Court in the San Francisco Bay Area, contends that the New York-based firm violates California’s constitution. It seeks an injunction to bar the company from collecting biometric information in California and to require it to delete data on Californians.

The lawsuit says the company has built “the most dangerous” facial recognition database in the nation, has fielded requests from more than 2,000 law enforcement agencies and private companies, and has amassed a database nearly seven times larger than the FBI’s.

The lawsuit was filed by four activists and the groups Mijente and Norcal Resist, who have supported causes such as Black Lives Matter and have been critical of the policies of U.S. Immigration and Customs Enforcement, which has a contract with Clearview AI.

“Clearview has provided thousands of governments, government agencies, and private entities access to its database, which they can use to identify people with dissident views, monitor their associations, and track their speech,” the lawsuit contends.

The lawsuit said Clearview AI scrapes dozens of internet sites, such as Facebook, Twitter, Google and Venmo, to gather facial photos. Scraping involves the use of computer programs to automatically scan and copy data. According to the lawsuit, Clearview AI analyzes the copied photos to extract individual biometrics, such as eye shape and size, which are then put into a “faceprint” database that clients can use to identify people.
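To illustrate the general technique the lawsuit describes — and only in generic terms; this is a minimal sketch using Python’s standard library, not Clearview AI’s actual code — a scraper’s first step is to scan a public page’s HTML and copy out the image addresses it finds. The sample page below is hypothetical:

```python
from html.parser import HTMLParser

class ImageURLCollector(HTMLParser):
    """Collects the src attribute of every <img> tag on a page.

    This covers only the first stage of a scraping pipeline; a system
    like the one the lawsuit describes would then download each image
    and run facial analysis on it.
    """
    def __init__(self):
        super().__init__()
        self.image_urls = []

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) pairs
        if tag == "img":
            for name, value in attrs:
                if name == "src" and value:
                    self.image_urls.append(value)

# Stand-in for HTML fetched from a public page; a real scraper
# would retrieve it with an HTTP client instead.
sample_html = """
<html><body>
  <img src="https://example.com/photos/profile.jpg" alt="profile">
  <p>Vacation photos</p>
  <img src="https://example.com/photos/beach.jpg">
</body></html>
"""

collector = ImageURLCollector()
collector.feed(sample_html)
print(collector.image_urls)
```

Run at scale across millions of pages, this kind of automated scan-and-copy loop is how, per the lawsuit, billions of photos can be gathered without the subjects’ knowledge.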

What the company says:

“TESTED AND COMPLIANT”

“Clearview AI helps law enforcement to accurately, reliably and lawfully identify criminal suspects, as well as the victims upon whom they prey.

“Clearview AI’s image search technology has been independently tested for accuracy and evaluated for legal compliance by nationally recognized authorities. It has achieved the highest standards of performance on every level.”

Reacting to the suit, the company added:

“Clearview AI complies with all applicable law and its conduct is fully protected by the First Amendment,” said a statement from attorney Floyd Abrams, representing the company.

The company has said it saw law enforcement use of its technology jump 26% following January’s deadly riot at the U.S. Capitol.


From Clearview AI website: How tech works

Public information only.

Clearview AI searches the open web. Clearview AI does not and cannot search any private or protected info, including in your private social media accounts.

Search, not surveillance.

Clearview AI is an after-the-fact research tool. Clearview AI is not a surveillance system and is not built like one. For example, analysts upload images from crime scenes and compare them to publicly available images.

Stopping criminals.

Clearview AI helps to identify child molesters, murderers, suspected terrorists, and other dangerous people quickly, accurately, and reliably to keep our families and communities safe.

Independently verified for accuracy.

An independent panel of experts reviewed and certified Clearview AI for accuracy and reliability.

Full compliance with the law.

Just like other research systems, Clearview AI results legally require follow-up investigation and confirmation. Clearview AI was designed and independently verified to comply with all federal, state, and local laws.

Protecting the innocent.

Clearview AI helps to exonerate the innocent, identify victims of child sexual abuse and other crimes, and avoid eyewitness lineups that are prone to human error.

Source: Clearview AI


Who is “scraped”?

The images scraped include those posted not only by individuals and their family and friends but also those of people who are inadvertently captured in the background of strangers’ photos, according to the lawsuit.

The company also offers its services to law enforcement even in cities that ban the use of facial recognition, the lawsuit alleges.

Several cities around the country, including the Bay Area cities of Alameda, San Francisco, Oakland and Berkeley, have limited or banned the use of facial recognition technology by local law enforcement.

Facial recognition systems have faced criticism because of their mass surveillance capabilities, which raise privacy concerns, and because some studies have shown that the technology is far more likely to misidentify Black people and other people of color than white people, which has resulted in mistaken arrests.

However, Clearview AI’s CEO, Hoan Ton-That, said in a statement that “an independent study has indicated that Clearview AI has no racial bias.”

“As a person of mixed race, having non-biased technology is important to me,” he said.

He also argued that the use of accurate facial recognition technology can reduce the chance of wrongful arrests.

The lawsuit said Facebook, Twitter, Google and other social media firms have asked Clearview AI to stop scraping images because the practice violates their terms of service with users.

Clearview AI also is facing other challenges. A lawsuit filed in Illinois alleges the company violates that state’s biometric privacy act, while privacy watchdogs in both Canada and the European Union have issued statements of concern.

Clearview stopped operations in Canada last year. But privacy commissioners this year asked the firm to remove data on Canadian citizens, with one commissioner arguing that the system puts all Canadians “continually in a police lineup.”