Artificial intelligence (AI) promises new efficiencies in making employment decisions: instead of human eyes having to review stacks of resumes, an algorithm-based selection process makes a “rough cut” based on objectively desirable characteristics. This ought to reduce the opportunity for human bias (read: discrimination) to enter the process. For the same reason, an employer’s use of AI to identify candidates based purely on objective standards minimizes a candidate’s ability to allege that the decision considered a protected status such as race, religion, or national origin. In theory, at least.
Regulators have asked a legitimate question, however: what if the AI algorithm screens for characteristics that disproportionately, even if unintentionally, impact members of one legally protected class more than others? Consider this example: during a Zoom interview, AI reads facial expressions to capture information about mood, personality traits, and even honesty. (Yes, this is a thing.) What if an applicant has limited facial movement because of a stroke? Would that potentially impact the AI’s assessment of the candidate’s “mood”? (Hint: yes, it would.)