Facial Recognition Software Biases

The Detriments to Modern-day Facial Recognition

Resist Facial Recognition Liberty Activist

The public frequently perceives algorithms and software as impartial, not recognizing that these tools are designed and calibrated by humans. This human element creates the opportunity to introduce bias and thereby skew the results.

Many artificial intelligence (AI) systems and algorithms are built by providing them with initial data sets to learn from. Because these programs make their predictions from that historical data, they inherit its limitations: they can only reproduce the patterns, including the biases, already present in the data they were trained on.
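
To make that limitation concrete, here is a minimal sketch in Python, using invented group names and numbers: a "model" fit on historical decisions does nothing more than reproduce whatever disparities those decisions already contained.

```python
# Minimal illustrative sketch (hypothetical data, not from any real lender):
# a "model" fit on historical decisions simply reproduces whatever
# disparities those decisions contained.
from collections import defaultdict

# Hypothetical historical records: (applicant_group, was_approved)
history = [("group_a", True)] * 80 + [("group_a", False)] * 20 \
        + [("group_b", True)] * 40 + [("group_b", False)] * 60

def fit_approval_rates(records):
    """Learn the historical approval rate for each group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in records:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: approved / total for g, (approved, total) in counts.items()}

rates = fit_approval_rates(history)
print(rates)  # {'group_a': 0.8, 'group_b': 0.4} -- the old bias, now automated
```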

The consequences of these limitations can become obvious once the technology is deployed, yet there are currently no regulations or appeal procedures for the majority of its applications.

It is important to note that the developers and deployers of this technology are largely individuals, corporations, and governing bodies that already occupy positions of power.

Ruha Benjamin: Sociology Lecture Series

“The implicit assumption that technology is race-neutral – through which Whiteness becomes the default setting for tech development” - Ruha Benjamin, Race After Technology

Alexandria Ocasio-Cortez Exposes Dangers of Facial Recognition

Alexandria Ocasio-Cortez criticizes technology companies that use facial recognition. She challenges these companies, created by white men specifically for white men, for selling data without Americans' consent.

Thesis

Algorithms and software are prevalent in everyday technology. Facial recognition software, for example, is used in applications ranging from user identification on smartphones to criminal detection in large crowds. Studies have shown that facial recognition programs are typically tested first in low-income areas and often fail to correctly classify women and dark-skinned people. More broadly, algorithms dictate everyday decisions ranging from loan approvals to resume screening. The biases in these systems target already marginalized communities; given the high-stakes applications of this technology, unregulated and unaudited programs place these groups at even greater risk in our society.

Findings and Results

Image from Nature.com

  • The image above documents how developers program technology to recognize faces.
  • The Coded Bias documentary explains how these systems are trained: developers “show” the algorithm pictures that contain faces and pictures that do not, and over time the software learns what defines a face. This means the issue comes down not only to the demographics of the developers of this software, but also to the makeup of the data sets they use (a minimal sketch of this training loop follows this list).
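
As a rough illustration of the training loop the documentary describes, the sketch below fits a placeholder classifier on labeled face and non-face examples; the images, labels, and choice of model are stand-ins, not the actual systems discussed in this project.

```python
# A minimal sketch of the "show it faces and non-faces" training loop.
# The data here is random placeholder pixels; in a real pipeline these
# would be cropped photos labeled face / not-face.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

face_images = rng.random((200, 32 * 32))     # flattened "face" images
nonface_images = rng.random((200, 32 * 32))  # flattened "non-face" images

X = np.vstack([face_images, nonface_images])
y = np.array([1] * 200 + [0] * 200)  # 1 = face, 0 = not a face

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Whatever the classifier "learns" about faces comes entirely from X:
# if the labeled face images skew toward one demographic, so does the model.
new_image = rng.random((1, 32 * 32))
print(clf.predict(new_image))  # 1 if the model thinks this looks like a face
```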

“The outsourcing of human decisions is, at once, the insourcing of coded inequity” - Ruha Benjamin, Race After Technology

Joy Buolamwini, a graduate student at MIT, researched racial and gender classification within these programs. She found that some software failed to recognize her as female, if it recognized her face at all. For this research, she created a diverse dataset and used it to test programs from three companies: IBM, Microsoft, and Face++. It is important to note that although the dataset used for this research was representative of women and racial minorities, the programs that analyzed it were trained on less diverse data sets.

Gender classification error rates for the 3 programs

Racial classification error rates for the 3 programs

Intersectional error rates: darker female error rates are substantially higher

We can see from this data that the algorithms favor male faces. Face++ interestingly performed slightly better on darker male faces, but the same overall patterns persist. This preference is clear in Figure x, where skin tone is directly correlated with accuracy in determining gender: as the skin tone in the images gets darker, correctly identifying a face as female becomes less and less likely. It is worth noting that, because the algorithms treat gender as a binary decision, even random guessing would be correct 50% of the time.

The Gender Shades Project: Auditing five face recognition technologies

The Gender Shades Project tested five separate facial recognition technologies to compare their accuracy. For every technology, accuracy was highest for lighter-skinned males and consistently lowest for darker-skinned females. The project also confirmed that gender classification was more accurate for male faces than for female faces.
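
The kind of disaggregated audit described above can be sketched as follows; the prediction records and subgroup labels are invented for illustration and are not the Gender Shades data.

```python
# A rough sketch of a disaggregated audit: error rates computed per
# intersectional subgroup rather than as one overall accuracy number.
# The records below are invented for illustration.
from collections import defaultdict

# Each record: (skin_tone, true_gender, predicted_gender)
results = [
    ("lighter", "male", "male"),
    ("lighter", "female", "female"),
    ("darker", "male", "male"),
    ("darker", "female", "male"),   # misclassification
    ("darker", "female", "female"),
    # ... many more audit records would go here
]

def error_rates_by_subgroup(records):
    tallies = defaultdict(lambda: [0, 0])  # (skin_tone, gender) -> [errors, total]
    for skin_tone, gender, predicted in records:
        key = (skin_tone, gender)
        tallies[key][0] += int(predicted != gender)
        tallies[key][1] += 1
    return {key: errors / total for key, (errors, total) in tallies.items()}

for subgroup, rate in error_rates_by_subgroup(results).items():
    print(subgroup, f"{rate:.0%}")
# A single overall accuracy figure would hide exactly the gap this breakdown exposes.
```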

Implications and Broader Impacts

Who

Everyone is affected, but racial and gender minorities, along with poorer communities, are disproportionately impacted by facial recognition software biases and surveillance-focused tools.

“The most punitive, most invasive, most surveillance-focused tools that we have, they go into poor and working communities first ... tested first in an environment where there’s low expectation that people’s rights will be respected.” - Coded Bias, a Netflix documentary

The Face: The Lasting Effect of Digital Surveillance at Black Lives Matter Protests

A more recent example: facial recognition was employed to monitor and identify Black Lives Matter protesters.

  • Additional facial recognition uses: police surveillance, credit cards, loans, Amazon recruiting, college applications, housing applications, monitoring protesters, unlocking phones, finding missing persons, forensic investigations, air travel, drones

“Tech fixes often hide, speed up, and even deepen discrimination, while appearing to be neutral or benevolent when compared to the racism of a previous era.” - Ruha Benjamin, Race After Technology

The left picture shows the locations of Project Green Light Detroit partners while the right shows data from the U.S. census, demonstrating a clear relationship between surveillance and primarily Black communities.

"A critical analysis of PGL reported in 2019 that 'surveillance and data collection was deeply connected to diversion of public benefits, insecure housing, loss of employment opportunities, and the policing and subsequent criminalization of the community members that come into contact with these surveillance systems.' PGL illustrates how systems of face monitoring can perpetuate racial inequality if their application is not regulated."

  • Project Green Light, known as PGL, began in 2016 and installed high-definition cameras across Detroit, Michigan that stream footage to the Detroit Police Department. These cameras feed facial recognition systems that compare faces against several databases, such as state ID photos, driver's licenses, and criminal databases. The cameras are not distributed equally around the city; rather, they are concentrated in predominantly African American communities and largely absent from Asian- and white-dominated communities (a toy sketch of this kind of database lookup follows this list).
  • These uses of facial recognition and surveillance harm African Americans by invading their privacy and eroding rights such as due process, freedom of expression, and freedom of association. African Americans subjected to this surveillance begin to change their behavior, including through self-censorship, out of fear of retribution. This continual self-censorship, together with the incarceration of innocent African Americans due to biased facial recognition, causes significant psychological harm.
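
To illustrate the kind of probe-versus-watchlist lookup such a system performs, here is a toy sketch; the embeddings, names, similarity measure, and threshold are all assumptions made for illustration, not a description of how PGL's actual vendor software works.

```python
# A toy sketch of a 1:N "probe vs. watchlist" lookup. The embeddings, names,
# and threshold are invented; real systems use vectors produced by a
# face-recognition model run over ID photos and driver's license images.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
# Pretend watchlist: name -> face embedding
watchlist = {f"person_{i}": rng.normal(size=128) for i in range(1000)}

def identify(probe_embedding, gallery, threshold=0.6):
    """Return the best match above threshold, else None."""
    best_name, best_score = None, -1.0
    for name, emb in gallery.items():
        score = cosine_similarity(probe_embedding, emb)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else (None, best_score)

# If the underlying model is less accurate on darker-skinned faces, this
# lookup produces more false matches for exactly the communities the
# cameras concentrate on.
probe = rng.normal(size=128)
print(identify(probe, watchlist))
```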

Broader Impacts: Disproportionate negative impact on the lives of women, people of color, and the poor.

The use of facial recognition technology poses ethical and societal challenges, such as questions of privacy and equality.

  • Induces fear, psychological harm, vulnerability, and unjust targeting
  • Enables companies to deny people jobs, healthcare, and welfare based on biased facial recognition
  • Leads people to change their behavior to avoid unwanted surveillance
  • Leads police to misidentify suspects and incarcerate innocent individuals
  • Lack of safeguards and regulation at the federal level has led to violations of privacy, forcing states to create and pass state-level policies

Why does it matter?

The inequalities within facial recognition disproportionately harm the poor and gender and racial minorities. Collective societal awareness of the biases within AI, and a proactive focus on mitigating algorithmic disparities, is imperative, as our future will only involve more decision-making driven by facial recognition software and AI. The algorithms that companies and governments currently use for surveillance, hiring, applications, and recruiting are discriminatory. Biased facial recognition software has created unjust, racist, and classist barriers, and if it continues to go unchecked behind algorithms and built technology, it will increasingly harm the lives of the poor and of gender and racial minorities.
