Are AI and Facial Recognition a Risk in Hiring?
AI is advancing rapidly, and so is the use of biometrics. What do you know about AI and facial recognition? We are seeing the two combined across a range of industries, including human resources and employment screening. It's worth sharing this article from The New York Times, which takes a close look at the risks of putting too much faith in these systems when assessing human beings. Facial recognition and the evaluation of physical gestures, for example, can lead not only to erroneous conclusions but to prejudiced ones.
Here’s an excerpt from the story.
When I was a college student using A.I.-powered facial detection software for a coding project, the robot I programmed couldn’t detect my dark-skinned face. I had to borrow my white roommate’s face to finish the assignment. Later, working on another project as a graduate student at the M.I.T. Media Lab, I resorted to wearing a white mask to have my presence recognized.
My experience is a reminder that artificial intelligence, often heralded for its potential to change the world, can actually reinforce bias and exclusion, even when it’s used in the most well-intended ways.
A.I. systems are shaped by the priorities and prejudices — conscious and unconscious — of the people who design them, a phenomenon that I refer to as “the coded gaze.” Research has shown that automated systems that are used to inform decisions about sentencing produce results that are biased against black people and that those used for selecting the targets of online advertising can discriminate based on race and gender.