The New Regulatory Frontier: Using AI Tools Is About to Become More Difficult

Employers can be forgiven for diverting their attention during the past three years to pressing pandemic-related employment issues—vaccine mandates, return-to-work challenges, managing hybrid workforces, and all the novel and thorny legal issues that emerged from a transformed workplace. But in an ever-changing employment law landscape, a new compliance challenge has emerged: federal, state, and local regulations governing the use of artificial intelligence (“AI”) in the hiring process. These new laws and regulations are a perfect storm for liability. They are new and unfamiliar, and they are easy to violate despite employers’ best intentions.

The rise of single-click job application programs like “Easy Apply” on LinkedIn or “1 Click Apply” on ZipRecruiter has made it extraordinarily easy for applicants to submit job applications. But as any recruiting manager knows, the task of filtering resumes and job applications for hundreds of applicants per position ranges from difficult to almost impossible. So how does a hiring manager make a “rough cut” from the piles of electronically submitted resumes and job applications? Happily, AI offers a powerful tool for making that rough cut; unhappily, AI can result in employment decisions, literally without human intervention, that may violate anti-discrimination laws—or at least, that’s the regulatory concern, as an increasing number of jurisdictions have articulated it.

The idea of using AI in making hiring decisions is simple: machine-based algorithms can identify certain objectively desirable characteristics or experience in candidates, and in theory, those algorithms (precisely because they are supposedly objective) actually reduce the opportunity for human bias. The principal concern of regulators is that, at least so far, AI technology is a black box. To date, there has been little to no meaningful transparency into exactly what the technology considers and evaluates in that algorithmic process of making the rough cut. The plethora of new regulations is aimed at exactly this perceived lack of transparency.
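To make the transparency concern concrete, the sketch below (in Python, purely for illustration) shows the kind of keyword-weighted scoring an automated screening tool might apply when making the rough cut. The criteria, weights, and cutoff here are hypothetical and are not drawn from any actual product; the point is that values like these typically sit inside a vendor’s tool, where neither the employer nor the applicant can readily inspect them.

```python
# Hypothetical illustration of an automated "rough cut" resume screen.
# The keywords, weights, and cutoff below are stand-ins for criteria a
# vendor's model might encode or learn; in practice they are often not
# visible to the employer or the applicant, which is the transparency
# concern regulators have identified.

SCREENING_WEIGHTS = {          # assumed, vendor-defined criteria
    "python": 3.0,
    "project management": 2.0,
    "sql": 1.5,
}
CUTOFF = 3.0                   # assumed threshold for advancing a candidate


def score_resume(resume_text: str) -> float:
    """Sum the weights of every criterion that appears in the resume."""
    text = resume_text.lower()
    return sum(weight for keyword, weight in SCREENING_WEIGHTS.items()
               if keyword in text)


def rough_cut(resumes: dict[str, str]) -> list[str]:
    """Return the applicants whose scores meet the cutoff, best first."""
    scored = {name: score_resume(text) for name, text in resumes.items()}
    return sorted((name for name, score in scored.items() if score >= CUTOFF),
                  key=scored.get, reverse=True)
```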

REGULATORS TAKE NOTICE

The concern with transparency in automated hiring technologies has resulted in a flurry of new regulation and legislation. In October 2021, Charlotte Burrows, the Chair of the U.S. Equal Employment Opportunity Commission (“EEOC”), announced a new focus at the EEOC on ensuring that the use of AI does not run afoul of anti-discrimination laws. As part of that initiative, in May 2022, the EEOC issued guidance, “The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees,” which sets out guidelines and standards for using AI in hiring decisions in ways that avoid intentional or unintentional bias against disabled applicants. This is likely to be just the first of many guidance documents from the EEOC concerning the intersection of AI and the various laws the EEOC enforces.

Of course, in the U.S. the EEOC is not the last—or in this case, even the first—authority to regulate the use of AI in hiring decisions. Employers operating in multiple jurisdictions are accustomed to this complexity, but the novelty of AI regulation makes compliance in multiple jurisdictions that much more complex and difficult.

Illinois was first to the scene with its Artificial Intelligence Video Interview Act, which took effect in 2020 and regulates the interview process for jobs that are “based” in Illinois. The Illinois law applies to employers who use AI to analyze video interviews of job candidates. If the employer uses AI in this manner, it must notify the applicant in advance of the interview, explain how the AI is used, and obtain the candidate’s consent. The employer must also limit distribution of the video and destroy it within 30 days. Illinois later amended the law to add a demographic reporting requirement: employers who rely solely on an AI analysis of a video interview must collect and report the race and ethnicity of applicants who are (or are not) given an in-person interview, and who are ultimately hired.

New York City has also legislated in the area of the use of AI in hiring decisions, with a new law becoming effective on January 1, 2023. Unlike the Illinois law, which focuses solely on video interviews, New York City’s law is far broader and applies to all automated decision tools that are used to assist with hiring or promotion decisions. The new law prohibits the use of automated decision tools unless the tool has been subject to an annual independent bias audit (which the law does not define), and the employer (or third-party agency) makes the results of this audit publicly available. In addition, employers using automated decision tools must notify employees or candidates of their use of the decision tool (and allow the candidate to request an alternative process or accommodation), and must explain the job qualifications or characteristics that the tool uses to assess the candidate.
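Although the New York City law does not spell out what a bias audit must contain, auditors often start from established disparate-impact benchmarks such as the “four-fifths rule” in the EEOC’s Uniform Guidelines on Employee Selection Procedures, under which a group’s selection rate below 80% of the highest group’s rate is conventionally treated as evidence of adverse impact. The Python sketch below illustrates that calculation with hypothetical numbers; it is one plausible starting point for an auditor, not a statement of what the statute requires.

```python
# Hedged sketch of the EEOC "four-fifths rule" for adverse impact, one
# plausible starting point for a bias audit. The group names and counts
# below are hypothetical; the New York City law itself does not
# prescribe this particular calculation.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (number selected, number of applicants)."""
    return {group: selected / applicants
            for group, (selected, applicants) in outcomes.items()}


def four_fifths_check(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Return each group's selection rate as a ratio of the highest rate.

    Ratios below 0.8 are conventionally treated as evidence of adverse impact.
    """
    rates = selection_rates(outcomes)
    top_rate = max(rates.values())
    return {group: rate / top_rate for group, rate in rates.items()}


# Hypothetical audit data: 60 of 200 applicants in group A advanced,
# versus 20 of 150 in group B.
ratios = four_fifths_check({"group_a": (60, 200), "group_b": (20, 150)})
flagged = [group for group, ratio in ratios.items() if ratio < 0.8]
print(ratios, flagged)  # group B's ratio is roughly 0.44, so it is flagged
```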

Finally, following suit with its peers, in March 2022, California’s Department of Fair Employment and Housing (“DFEH”) issued proposed regulations regarding the use of automated decision tools in the employment sphere. Specifically, these proposed regulations would make it unlawful for an employer or a covered entity to use “qualification standards, employment tests, automated decision systems, or other selection criteria that screen out or tend to screen out an applicant or employee or a class of applicants or employees on the basis of a characteristic protected by this [law], unless the standards, tests, or other selection criteria, as used by the covered entity, are shown to be job-related for the position in question and are consistent with business necessity.” (Cal. Code Regs. § 11009 (proposed)). These regulations are not final, but employers in California should prepare for these regulations to be implemented in one form or another in the coming months.

These new laws and regulations are not the last word on AI regulation—other jurisdictions are considering similar legislation. For example, the “Stop Discrimination in Algorithms Act,” pending legislation in Washington, D.C., would prohibit employers from using certain types of data in algorithmic decision-making technology that would tend to result in discrimination. Similarly, New Jersey is considering the “New Jersey Algorithmic Accountability Act,” which would require certain businesses to reduce the use of “high-risk” automated decision systems in their everyday business. Significantly, the New Jersey law would require the business using AI to record any racial or other bias that the technology creates in practice. Similar laws in other jurisdictions are sure to follow.

WHAT SHOULD EMPLOYERS DO?

It’s no coincidence that the three jurisdictions leading the charge on AI in the employment context are New York City, Illinois, and California—three places with strong ties to the international business community. In passing these laws, these jurisdictions likely seek to influence policy beyond their borders, knowing that any employer that wants to hire individuals there will have to abide by these requirements, and hoping that employers will adopt the requirements more broadly for the sake of administrative ease. In practice, New York City’s law will effectively ensure that any automated decision tool an employer uses passes the required bias audit, since an employer is unlikely to deploy a different set of automated decision tools in New York City than it uses in other locations.

Employers operating in these jurisdictions should immediately assess whether they are using AI that would be covered by these laws. If so, they should engage with counsel to ensure they are in compliance, and if they engage third-party vendors to provide this AI, they must ensure the vendors are in compliance as well.

Of critical importance, employers will not be able to deflect liability to their third-party vendors if those vendors run afoul of the law. Instead, government agencies will likely seek to hold both the employer and the third-party vendor liable. To that end, employers must carefully review their vendor contracts to confirm that they include appropriate indemnification from the vendor in case of non-compliance, and that the vendor represents and warrants that it is in compliance with all applicable laws. In the event of an investigation by a government agency, this documentation will be significant.

Finally, employers should monitor AI legislation in all of the jurisdictions where they have employees. If the pending legislation discussed above is any indication, the regulatory landscape regarding AI will become far more difficult before it becomes easier. As always, employers should consult with their counsel regarding these issues.