Face Recognition and Racial Prejudice: The Pitfalls of "Smart" Technology

We unlock our iPhones with our faces and wonder how Facebook can tag us in photos.

Facial recognition, the technology behind these features, is more than just a shortcut. It is used for law enforcement surveillance, airport passenger screening, and employment and housing decisions.

Despite widespread adoption, facial recognition has recently been banned for use by police and local agencies in several U.S. cities, including Boston and San Francisco. Why? Of the major biometrics in use (fingerprint, iris, palm, voice, and face), facial recognition is the least accurate, and it raises significant privacy concerns.

Police use facial recognition to compare photos of suspects against mug shots and driver's license images; an estimated half of American adults (more than 117 million people as of 2016) have photos in a facial recognition network used by law enforcement. These entries are made without consent, or even awareness, and the practice persists amid a lack of legislative oversight.
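
To make the mechanics concrete, here is a minimal Python sketch of the one-to-many search described above. It assumes a hypothetical upstream model has already turned each face image into a fixed-length embedding vector; the function names, the 128-dimension size, and the 0.6 similarity threshold are all illustrative, not taken from any deployed system.

```python
# Minimal sketch of one-to-many face matching, as in a mug shot search.
# A hypothetical upstream model is assumed to have converted each face
# image into a fixed-length embedding; the 0.6 threshold is illustrative.
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search_gallery(probe, gallery, threshold=0.6):
    """Return gallery identities scoring above the threshold, best first."""
    scores = [(name, cosine_similarity(probe, emb)) for name, emb in gallery.items()]
    matches = [(name, s) for name, s in scores if s >= threshold]
    return sorted(matches, key=lambda pair: pair[1], reverse=True)

# Toy usage: random vectors stand in for real embeddings.
rng = np.random.default_rng(0)
gallery = {f"record_{i}": rng.normal(size=128) for i in range(1000)}
probe = rng.normal(size=128)
print(search_gallery(probe, gallery)[:5])  # top candidate matches, if any
```

Every record scoring above the threshold becomes a candidate "match," so in a gallery covering 117 million people, even a very small per-comparison error rate produces a long list of false leads.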

The current implementation of these technologies involves significant racial bias, particularly against African Americans. Even if perfectly accurate, facial recognition would empower a law enforcement system with a long history of racist and anti-activist surveillance, and it may widen pre-existing inequalities.

Inequity in facial recognition algorithms

Facial recognition algorithms boast high classification accuracy (over 90 percent), but these results are not universal. A growing body of research exposes divergent error rates across demographic groups, with the poorest accuracy consistently found in subjects who are female, black, and 18 to 30 years old.
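
As a concrete illustration of what "divergent error rates" means, the following Python sketch disaggregates a classifier's mistakes by demographic group. The data is invented to show how a high overall accuracy can hide a high error rate for one subgroup.

```python
# Hypothetical sketch: disaggregating a classifier's error rate by group.
# The labels are invented; real evaluations use annotated benchmarks.
from collections import defaultdict

def error_rates_by_group(y_true, y_pred, group):
    """Misclassification rate for each demographic group."""
    totals, errors = defaultdict(int), defaultdict(int)
    for truth, pred, g in zip(y_true, y_pred, group):
        totals[g] += 1
        errors[g] += int(truth != pred)
    return {g: errors[g] / totals[g] for g in totals}

# Overall accuracy is 80%, yet one group absorbs every error.
y_true = ["F", "F", "F", "F", "M", "M", "M", "M", "M", "M"]
y_pred = ["M", "F", "M", "F", "M", "M", "M", "M", "M", "M"]
group = ["group_a"] * 4 + ["group_b"] * 6
print(error_rates_by_group(y_true, y_pred, group))
# {'group_a': 0.5, 'group_b': 0.0}
```

Reporting only the aggregate figure hides exactly this kind of disparity, which is why disaggregated evaluation matters.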

The seminal Gender Shades project (2018) took an intersectional approach to evaluating three gender classification algorithms, including those developed by IBM and Microsoft. Subjects were grouped into four categories: darker-skinned females, darker-skinned males, lighter-skinned females, and lighter-skinned males. All three algorithms performed worst on darker-skinned females, with error rates up to 34 percent higher than for lighter-skinned males.
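
The intersectional grouping can be made concrete with a short sketch: compound labels are formed from skin type and gender, and the headline figure is the spread between the best- and worst-served subgroups. The error rates below are illustrative placeholders, not the study's measured values.

```python
# Sketch of intersectional grouping: compound labels from skin type and
# gender, with the headline figure as the spread between subgroups.
# The error rates below are illustrative, not the study's measurements.
def intersectional_gap(error_rates):
    """Worst-minus-best subgroup error rate, in percentage points."""
    return (max(error_rates.values()) - min(error_rates.values())) * 100

groups = [f"{skin}_{gender}"
          for skin in ("darker", "lighter")
          for gender in ("female", "male")]
print(groups)  # the four Gender Shades categories

illustrative_errors = {"darker_female": 0.34, "darker_male": 0.12,
                       "lighter_female": 0.07, "lighter_male": 0.01}
print(f"gap: {intersectional_gap(illustrative_errors):.0f} percentage points")
```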

An independent evaluation by the National Institute of Standards and Technology (NIST), covering 189 algorithms, confirmed these findings: facial recognition technologies were least accurate on black women.

Source: https://sitn.hms.harvard.edu/flash/2020/racial-discrimination-in-face-recognition-technology/
