ACLU cries foul over government use of facial recognition

Aug. 17, 2018
Privacy advocates, civil liberties groups claim technology has a racial, gender bias problem

Recent advances in video surveillance technology, combined with the rise of artificial intelligence (AI) software, have opened the proverbial floodgates for the development of facial recognition systems. Once thought of as a crime-fighting tool reserved for the fictitious detectives of books and movies, facial recognition solutions have garnered increasing interest from real-world organizations in both the public and private sectors.

However, as with any new surveillance technology, facial recognition and the capabilities it provides have also received their fair share of criticism from privacy advocates. Chief among these critics is the American Civil Liberties Union (ACLU), which has publicly called upon Amazon to stop supplying its Rekognition product to law enforcement and other government entities.

In June, the ACLU, along with a number of other civil rights organizations, sent a letter to Amazon Founder and CEO Jeff Bezos expressing concerns that the company’s facial recognition system could be abused by the government and could unfairly target minorities.

“People should be free to walk down the street without being watched by the government. Facial recognition in American communities threatens this freedom. In overpoliced communities of color, it could effectively eliminate it,” the letter read. “The federal government could use this facial recognition technology to continuously track immigrants as they embark on new lives. Local police could use it to identify political protesters captured by officer body cameras. With Rekognition, Amazon delivers these dangerous surveillance powers directly to the government.”

Just last month, the ACLU also called into question the accuracy of Rekognition itself by conducting a test in which the system reportedly falsely matched 28 members of Congress with mugshots of people arrested for a crime. In addition, the test found that nearly 40 percent of the system’s false matches were of people of color, even though they comprise only 20 percent of Congress.

“An identification — whether accurate or not — could cost people their freedom or even their lives,” the organization wrote in a blog post. “People of color are already disproportionately harmed by police practices, and it’s easy to see how Rekognition could exacerbate that.”

Inherent Problems with Facial Recognition

Though Amazon has had to bear the brunt of this most recent round of criticism, it is clear that it will not be the only target if the industry does not address the perception that facial recognition technology has an inherent problem with racial and gender bias. In fact, researchers recently found algorithmic racial and gender bias in the facial recognition solutions of three different companies: Microsoft, IBM and Face++. The research paper, dubbed “Gender Shades,” written by Joy Buolamwini, a researcher in the MIT Media Lab’s Civic Media group, and Timnit Gebru, a graduate student at Stanford, found that the error rate for light-skinned men was never worse than 0.8 percent among the three programs evaluated. For darker-skinned women, however, the error rate jumped to more than 20 percent in one program and exceeded 34 percent in the other two.
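The kind of disaggregated audit the Gender Shades study performed can be illustrated with a short sketch: compute the error rate separately for each demographic group rather than reporting a single overall accuracy figure. The function and sample data below are purely illustrative, not the study's actual methodology or data.

```python
# Illustrative per-group error-rate audit (hypothetical data, not the
# Gender Shades dataset): overall accuracy can hide large disparities
# between demographic groups, so errors are tallied group by group.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted_label, actual_label) tuples."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy sample: one misclassification concentrated in a single group.
sample = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("darker-skinned female", "male", "female"),  # misclassified
    ("darker-skinned female", "female", "female"),
]
print(error_rates_by_group(sample))
```

In this toy sample the overall error rate is 25 percent, but it is 0 percent for one group and 50 percent for the other, which is exactly the sort of gap a single aggregate number would conceal.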

Because many of the facial recognition systems currently on the market are powered by machine learning and AI technology, the data sets used to train them matter enormously. Asem Othman, Team Lead for Biometric Science at biometric authentication company Veridium, says many of the data sets companies have used to train these systems have consisted primarily of images of white men, which can have a negative effect on the systems’ ability to correctly identify people of other races and genders.

While a misidentification may be a mere inconvenience for someone using a facial recognition system to access a building, Othman says that when a government or law enforcement database of facial images is used to identify potential terrorists or criminals, the stakes of mistaken identity couldn’t be higher.

“I think any law enforcement (agency) that wants to start using facial recognition needs to make sure these (systems) are tested and make sure they don’t have a racial or gender bias,” Othman says. “That (testing) also cannot be done only one time but should be performed on an ongoing basis because these systems keep updating.”

One way to accomplish this, according to Othman, would be to have the federal government provide funding to the National Institute of Standards and Technology (NIST) to conduct independent testing of various facial recognition platforms to see which ones are free of potential bias.

Though many biometric access control systems have historically used facial geometry as a method for uniquely identifying individuals, Othman says most of the solutions being considered for large-scale surveillance deployments in public venues rely on deep learning technology, which, while it continues to improve, still struggles with bias.

“As a public, we need to make sure this has been tested by an independent agency,” Othman adds. “I’m a big believer in biometrics and I think biometrics will help but we want to be sure these systems have no racial or gender bias and that they only know your identity. That’s the main thing for biometrics; I need to know your identity without adding any other attributes or labels to it. We need to understand if the system we deploy is a good one or not and we need to know if we’re asking the right questions.”

How to Address Bias Concerns

To alleviate some of these bias problems, Othman says developers of facial recognition technology should, first, ensure good variation in their training data with respect to race, gender and age. Second, he says that after algorithms have been trained, they should not associate racial or gender attributes with identity.

“When you start matching people, you want to make sure you’re just matching numeric representations of identity that don’t have a special distribution for African Americans or different genders,” Othman explains. “That’s something that can be done through special testing to ensure it doesn’t have racial or gender bias.”
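The "numeric representations of identity" Othman describes can be sketched in simplified form: a face is reduced to an embedding vector, and two faces are declared a match when their vectors are sufficiently similar. The code below is a minimal illustration of that idea, not any vendor's actual pipeline; the embedding values and the 0.8 threshold are assumptions for demonstration only.

```python
# Minimal sketch of embedding-based face matching (hypothetical values):
# identity is a numeric vector, and matching compares vectors by cosine
# similarity against a tuned threshold. Bias testing, as Othman suggests,
# would check that false-match rates at this threshold are uniform
# across demographic groups.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_match(probe_emb, enrolled_emb, threshold=0.8):
    # threshold is an illustrative tuning parameter; real deployments
    # calibrate it against target false-match / false-non-match rates
    return cosine_similarity(probe_emb, enrolled_emb) >= threshold

probe = [0.1, 0.9, 0.3]      # hypothetical embedding of a live capture
enrolled = [0.12, 0.88, 0.31]  # hypothetical embedding on file
print(is_match(probe, enrolled))
```

Because the decision hinges on a single threshold, the "special testing" Othman mentions amounts to verifying that the similarity-score distributions, and thus the error rates at that threshold, do not differ across racial or gender groups.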

About the Author:

Joel Griffin is the Editor-in-Chief and a veteran security journalist. You can reach him at [email protected].