Clearview AI has no place in our community.

The Minneapolis Police Department uses facial recognition technology created by a company called Clearview AI. On top of the dangers and biases perpetuated by facial recognition at large, Clearview AI lacks basic transparency and honesty about its product. Below, we review the documented record of its repeated shortcomings and unethical conduct.

Let’s be transparent about this:

Minneapolis residents deserve civic infrastructure that champions transparency, respects community standards, and performs effectively. Clearview AI fails across the board.

Don’t just take our word for it. Check out the research below:

Update, January 2021: The recent attack on the United States Capitol on January 6 has prompted a sharp increase in the use of Clearview to identify participants in the attack (Hill, 2021). Effectively pursuing justice against criminal offenses should not – and need not – rely on a technology company that has itself repeatedly disregarded ethical standards. Clearview’s political ties render it especially inappropriate in this instance: the company, including its leadership, has multiple connections to figures involved in white-supremacist extremism (O’Brien, 2020). Turning to Clearview now also sets a dangerous precedent for surveilling unarmed, peaceful demonstrators of any political position in the future, without regard for the legal or ethical integrity of the surveillance mechanism.

Table of Contents

What is Clearview AI?
Lack of Transparency
Disregard for Legal and Community Standards
Failure to Perform Effectively
References

What is Clearview AI?

Clearview AI (hereafter Clearview) is a start-up company that has gained international notoriety for building facial recognition technology by training its artificial intelligence on billions of photos scraped from social media websites. Presenting its product as a superior alternative to state-run facial recognition technology, Clearview has clients across the globe and across industries, from retail to law enforcement. The company’s front-page profile in the New York Times brought its opaque practices into the national spotlight (Hill, 2020); and Clearview is operating here in Minneapolis (Haskins & Mac, 2020), as facial recognition at large is on the rise in Hennepin County (Jany, 2020).

Lack of Transparency

Evasive Practices

Clearview has a history of encouraging individual law enforcement officers to use its facial recognition technology. This behavior, as reported by Winton and Rector (2020), bypasses and undermines the formal chains of command through which law enforcement agencies acquire new technologies.

Journalists investigating Clearview’s product and ethics have rarely received responses to their inquiries. Furthermore, such investigations have prompted Clearview to use its own technology to monitor the very journalists seeking information about the company (Hill, 2020).

Any innovation or invention must hold up to external review by an independent party such as the National Institute of Standards and Technology. External review is the basis of accountability and trust, especially in emerging technologies. For all of its hype, the performance of Clearview’s product has not been independently evaluated (Haskins, Mac, & McDonald, 2020; Hill, 2020).

Misleading Claims

Not only does Clearview lack external review of its facial recognition technology; it also fabricates claims about its product’s performance. The company has taken credit for solving crimes in New York City, a claim refuted by the NYPD (Mac et al., 2020). Clearview has also claimed to have aced performance tests previously implemented by the American Civil Liberties Union (ACLU), a claim the ACLU itself disputed and discredited as an attempt to manufacture endorsements (Haskins, Mac, & McDonald, 2020).

Contradictory statements about who may use the product cast further doubt on Clearview’s integrity. The company’s founder states that the tool is strictly for law enforcement agencies, yet Macy’s, Walmart, and Best Buy are among its clients (Hatmaker, 2020; Cameron et al., 2020).

Disregard for Legal and Community Standards

Among Tech Companies

In response to Clearview’s violations of legal and community standards, major web platforms have sent cease-and-desist letters to the company. These platforms include YouTube, Facebook, Google, Microsoft, Twitter, LinkedIn, and Venmo (Hughey, 2020; Musil, 2020; Fan & Hilligoss, 2020).

Among Government and Law Enforcement Agencies

Public authorities, including New Jersey’s Office of the Attorney General, have also sent cease-and-desist letters to Clearview (Statt, 2020), and the police departments of Los Angeles and San Diego have banned the use of Clearview outright (Statt, 2020; Cooper, 2020; Hernandez, 2020).

Failure to Perform Effectively

No Independent Review

To reiterate a vital point: no independent party has rigorously evaluated the performance of Clearview’s product (Haskins, Mac, & McDonald, 2020; Hill, 2020). This lack of transparency compounds the dangers already widespread in facial recognition technology at large, which misidentifies members of marginalized demographic groups at disproportionately high rates. Such inaccuracies exacerbate racial disparities in arrests and incarceration. For more information and resources concerning these risks, please visit the SNS coalition’s primer on facial recognition here.

Security Concerns

Even if Clearview were a highly accurate facial recognition tool – and, to be clear, the company has produced no evidence to suggest that it is – its faulty security management is cause for serious concern in itself.

A security lapse exposed Clearview’s source code and granted unintended access to the company’s cloud storage (Whittaker, 2020), and that cloud storage has also been left without appropriate protections (Cameron et al., 2020).

In 2020, Clearview reported a breach of its entire customer list, exposing details of customers’ activity such as (i) how many user accounts they had created and (ii) how many searches they had conducted using Clearview (Swan, 2020). Such a breach – for any facial recognition product – severely undermines the product’s fundamental security and credibility.

In addition to Clearview’s invasive practices that threaten community members’ privacy, the company’s numerous security flaws strongly indicate that the product is neither reliable for business customers nor safe for communities.

Minneapolis residents deserve civic infrastructure that champions transparency, respects community standards, and performs effectively. Clearview AI fails across the board.

If you are interested in learning more or supporting the efforts to bring transparency to municipal technologies, please reach out to the SNS coalition here.

References

Cameron, D., Mehrotra, D., & Wodinsky, S. (2020, February 27). We found Clearview AI’s shady face recognition app. Gizmodo. https://gizmodo.com/we-found-clearview-ais-shady-face-recognition-app-1841961772

Cooper, D. (2020, November 18). LAPD bans the use of Clearview’s controversial facial recognition software. Engadget. https://www.engadget.com/lapd-ban-third-party-facial-recognition-clearview-ai-112526446.html

Fan, K., & Hilligoss, H. (Ed.). (2020, February 25). Clearview AI responds to cease-and-desist letters by claiming First Amendment right to publicly available data. Harvard Journal of Law & Technology Digest. https://jolt.law.harvard.edu/digest/clearview-ai-responds-to-cease-and-desist-letters-by-claiming-first-amendment-right-to-publicly-available-data

Haskins, C., & Mac, R. (2020, May 29). Here are the Minneapolis Police’s tools to identify protesters. BuzzFeed News. https://www.buzzfeednews.com/article/carolinehaskins1/george-floyd-protests-surveillance-technology

Haskins, C., Mac, R., & McDonald, L. (2020, February 10). The ACLU slammed a facial recognition company that scrapes photos from Instagram and Facebook. BuzzFeed News. https://www.buzzfeednews.com/article/carolinehaskins1/clearview-ai-facial-recognition-accurate-aclu-absurd

Hatmaker, T. (2020, February 27). Clearview said its facial recognition app was only for law enforcement as it courted private companies. TechCrunch. https://techcrunch.com/2020/02/27/clearview-facial-recognition-private-companies/

Hernandez, D. (2020, March 16). San Diego police, DA ban use of facial recognition app – but not before it was tested. San Diego Union-Tribune. https://www.sandiegouniontribune.com/news/public-safety/story/2020-03-16/san-diego-police-das-office-tried-out-a-facial-recognition-app

Hill, K. (2020, January 18). The secretive company that might end privacy as we know it. The New York Times. https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html

Hill, K. (2021, January 9). The facial-recognition app Clearview sees a spike in use after Capitol attack. The New York Times. https://www.nytimes.com/2021/01/09/technology/facial-recognition-clearview-capitol.html

Hughey, C. (2020, August 17). Facial recognition technology riddled with racial bias; cities are fighting back. People’s World. https://www.peoplesworld.org/article/facial-recognition-technology-riddled-with-racial-bias-cities-are-fighting-back/

Jany, L. (2020, December 4). Police use of facial recognition technology soars in Minnesota. Star Tribune. https://www.startribune.com/police-use-of-facial-recognition-technology-soars-in-minnesota/573294251/

Mac, R., Haskins, C., & McDonald, L. (2020, January 23). Clearview AI says its facial recognition software identified a terrorism suspect. The cops say that’s not true. BuzzFeed News. https://www.buzzfeednews.com/article/ryanmac/clearview-ai-nypd-facial-recognition

Musil, S. (2020, June 10). Clearview AI still backs facial recognition, despite competitors’ concerns. CNET. https://www.cnet.com/news/clearview-ai-still-backs-facial-recognition-despite-competitors-concerns/

O’Brien, L. (2020, April 7). The far-right helped create the world’s most powerful facial recognition technology. HuffPost. https://www.huffpost.com/entry/clearview-ai-facial-recognition-alt-right_n_5e7d028bc5b6cb08a92a5c48

Statt, N. (2020, January 24). Controversial facial recognition firm Clearview AI facing legal claims after damning NYT report. The Verge. https://www.theverge.com/2020/1/24/21079354/clearview-ai-nypd-terrorism-suspect-false-claims-facial-recognition

Swan, B. (2020, February 26). Facial recognition company that works with law enforcement says entire client list was stolen. The Daily Beast. https://www.thedailybeast.com/clearview-ai-facial-recognition-company-that-works-with-law-enforcement-says-entire-client-list-was-stolen

Whittaker, Z. (2020, April 16). Security lapse exposed Clearview AI source code. TechCrunch. https://techcrunch.com/2020/04/16/clearview-source-code-lapse/

Winton, R., & Rector, K. (2020, November 17). LAPD bars use of third-party facial recognition system, launches review after BuzzFeed inquiry. The Los Angeles Times. https://www.latimes.com/california/story/2020-11-17/lapd-bars-outside-facial-recognition-use-as-buzzfeed-inquiry-spurs-investigation