Algorithmic bias in facial recognition technology
by Cody Pan

Photo from: Bestman, Owanate. “Ethnic Minority Report! A Look into Facial Recognition Technology and Its Implications Today.” LinkedIn, https://www.linkedin.com/pulse/ethnic-minority-report-look-facial-recognition-its-today-bestman/.
What is Facial Recognition Technology and Algorithmic Bias?
Algorithmic bias in facial recognition technology has become a prominent issue in recent years.[1] Facial recognition technology and the issues surrounding it used to exist only in dystopian, futuristic movies like Minority Report. However, these fictional futures may be closer and more real than we think. So, what exactly is facial recognition technology? It is technology that uses artificial intelligence (AI) to identify people based on their facial features. Algorithmic bias, in turn, refers to the way a facial recognition system can show systematically different error rates for different demographic groups.
What is this technology used for and are there issues with it?
Facial recognition technology is currently used in many applications, including security and law enforcement as well as targeted advertising on social media. However, studies have shown that facial recognition algorithms can exhibit significant disparities in error rates between demographic groups.[2] For example, one study found that error rates were higher for people with darker skin tones, which in a law enforcement setting could lead to false arrests or wrongful convictions. Another study found that facial recognition algorithms are less accurate at identifying transgender and gender nonconforming individuals.[3] There are documented real-life examples of these failures, such as in 2015 when Google Photos misclassified photos of a Black couple as photos of "gorillas".[5]
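To see what an error-rate disparity actually looks like in numbers, here is a minimal Python sketch of a per-group audit. Everything in it is illustrative: the group, true_id, and predicted_id fields and the toy data are made up for this example, not drawn from any real facial recognition system.

# Minimal per-group error-rate audit (illustrative fields, toy data).
from collections import defaultdict

def error_rates_by_group(results):
    # results: iterable of dicts with "group", "true_id", "predicted_id" keys.
    totals = defaultdict(int)
    errors = defaultdict(int)
    for r in results:
        totals[r["group"]] += 1
        if r["predicted_id"] != r["true_id"]:
            errors[r["group"]] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy data: the model looks fine overall but fails group "B" far more often.
test_results = (
    [{"group": "A", "true_id": i, "predicted_id": i} for i in range(95)]
    + [{"group": "A", "true_id": i, "predicted_id": -1} for i in range(5)]
    + [{"group": "B", "true_id": i, "predicted_id": i} for i in range(70)]
    + [{"group": "B", "true_id": i, "predicted_id": -1} for i in range(30)]
)
print(error_rates_by_group(test_results))  # {'A': 0.05, 'B': 0.3}

An overall error rate of 12.5% would hide the fact that group "B" fails six times as often as group "A", which is exactly the kind of disparity the studies above measured.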
What causes these issues?
Several factors can cause algorithmic bias in facial recognition technology. One of the most commonly cited is the diversity of the data used to train the algorithms: if the training data is not diverse enough, the system will not be able to accurately recognize people from underrepresented groups. The way the algorithm is designed and coded can also contribute to bias. For example, an algorithm built around a fixed set of facial features will be less accurate for people whose features are less common in its training data.[4]
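As a rough illustration of the training-data factor, the sketch below simply measures how well each group is represented in a dataset and flags groups that fall below a chosen share. The group labels and the 10% threshold are assumptions made up for this example, not a standard.

# Flag demographic groups that are underrepresented in the training data.
from collections import Counter

def flag_underrepresented(group_labels, threshold=0.10):
    # group_labels: one demographic label per training image.
    counts = Counter(group_labels)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items() if n / total < threshold}

# Toy dataset: group "C" makes up only 4% of the training faces.
training_groups = ["A"] * 600 + ["B"] * 360 + ["C"] * 40
print(flag_underrepresented(training_groups))  # {'C': 0.04}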
However, there are efforts to address algorithmic bias in facial recognition technology. Researchers are working on algorithms that better account for differences in facial features across groups. Some AI researchers are also training models on larger and more diverse datasets, which can improve both fairness and accuracy. There is also a push for increased regulation of facial recognition technology to make sure it is used responsibly and ethically. For example, in 2019 San Francisco became the first city in the United States to ban the use of facial recognition technology by law enforcement outright, and Maryland passed a law in 2021 that requires a warrant or judicial review before law enforcement can use facial recognition technology to identify a person.[6] These examples can be seen as the first steps in a broader trend toward regulating facial recognition technology.
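One of the data-side fixes mentioned above can be sketched with simple oversampling: duplicating examples from underrepresented groups until every group matches the largest one. This is a deliberately crude illustration on made-up data; real efforts rely on collecting genuinely more diverse images rather than just duplicating existing ones.

# Crude oversampling sketch: balance group sizes before training (toy data).
import random

def oversample_to_balance(samples, seed=0):
    # Group samples by their "group" label, then duplicate randomly chosen
    # samples from each smaller group until it matches the largest group.
    rng = random.Random(seed)
    by_group = {}
    for s in samples:
        by_group.setdefault(s["group"], []).append(s)
    target = max(len(v) for v in by_group.values())
    balanced = []
    for group_samples in by_group.values():
        balanced.extend(group_samples)
        balanced.extend(rng.choices(group_samples, k=target - len(group_samples)))
    return balanced

# Toy dataset: 90 images from group "A", only 10 from group "B".
faces = [{"group": "A", "img": i} for i in range(90)] + \
        [{"group": "B", "img": i} for i in range(10)]
balanced = oversample_to_balance(faces)
print(sum(1 for f in balanced if f["group"] == "B"))  # 90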
What are the impacts for the future?
Algorithmic bias in facial recognition technology is a complex and concerning issue with powerful implications for the future. Addressing it will require collaboration among researchers, industry experts, and policymakers to develop fairer and more accurate facial recognition systems. Again, the last thing anyone wants is to wade into a Minority Report-style future.
References:
[1] Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, 77-91.
[2] Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research, 81, 1-15.
[3] Wang, Y., & Kosinski, M. (2018). Deep neural networks are more accurate than humans at detecting sexual orientation from facial images. Journal of Personality and Social Psychology, 114(2), 246-257.
[4] Raji, I. D., & Buolamwini, J. (2019). Actionable auditing: Investigating the impact of publicly naming biased performance results of commercial AI products. Proceedings of the Conference on Fairness, Accountability, and Transparency, 64-73.
[5] BBC News. "Google Apologises for Photos App's Racist Blunder." BBC News, 1 July 2015, https://www.bbc.com/news/technology-33347866.
[6] National Conference of State Legislatures. "Facial Recognition Technology." NCSL, 17 Feb. 2022, www.ncsl.org/research/telecommunications-and-information-technology/facial-recognition-technology.aspx.