Q. Has the U.S. Equal Employment Opportunity Commission (EEOC) issued any recent guidance regarding employers’ use of artificial intelligence (AI)?
A. Yes. On May 18, 2023, the EEOC released new guidance explaining how employers’ use of AI could trigger a federal employment law violation. This development makes the government’s position clear: Employers using AI in the workplace run the risk of violating antidiscrimination law — specifically, Title VII of the Civil Rights Act of 1964.
It goes without saying that AI is a major buzzword today, and with good reason. The growing technology promises increased ease, speed, and productivity in the employment space. But to comply with antidiscrimination law, employers need to take measures to ensure their AI doesn’t run afoul of Title VII.
This is not the first time the EEOC has issued AI-related guidance.
On May 12, 2022, the EEOC released guidance identifying several ways in which an employer’s AI use may run afoul of the Americans with Disabilities Act (ADA). Here are some things to avoid, according to the EEOC:
- AI-driven “screen outs” that affect disabled employees or applicants — even if a vendor has marketed the algorithmic tool as “bias free.” In other words, employers are still on the hook for a vendor’s mistake if its AI violates the ADA.
- A lack of reasonable accommodations for employees and applicants being evaluated with AI tools.
- Disability-related questions or medical examinations prior to a conditional offer of employment.
The EEOC’s new guidance does not focus on the ADA. Rather, it provides insight into how the EEOC interprets whether an employer’s selection procedures have a disparate impact under Title VII.
When might an employer’s use of AI technology have a disparate impact on a protected class in violation of Title VII?
An employment decision violates Title VII under a disparate impact theory when a facially neutral process disproportionately excludes individuals in a protected class.
While the EEOC made it clear that not all AI technology is inherently unlawful, it suggested that it will be increasingly watchful of the following employment-related tools:
- Resume scanners that prioritize applicants with certain key words.
- Employee monitoring software that counts keystrokes or other factors and rates employees.
- Virtual assistants or chatbots that interact with job candidates and reject those without certain qualifications or requirements.
- Video interview software that evaluates candidates’ facial expressions and speech patterns.
- Employment testing software that generates “job fit” scores from factors like personality, aptitudes, cognitive skills, or perceived “cultural fit.”
The EEOC also noted that its Uniform Guidelines on Employee Selection Procedures — a decades-old regulatory framework — applies to AI-driven hiring, promoting, and firing in the workplace. In other words, AI that causes “a selection rate for individuals in the protected group that is ‘substantially’ less than the selection rate for individuals in another group” will be evaluated under the guidelines.
Key Takeaways for Employers
- Employers will be held responsible for Title VII violations caused by third-party AI software vendors. The EEOC recommends that employers reach out to their AI vendor to determine whether measures have been taken to avoid any disparate impact.
- The EEOC’s demonstrated interest in preventing AI-driven employment discrimination is not going anywhere. Again, this newly released guidance comes on the heels of another EEOC document released last year, explaining how employers’ use of AI can run afoul of the ADA. The agency’s concerns have only become more apparent and should not be ignored.
- Companies should perform ongoing assessments of their AI tools to ensure legal compliance.
Anna Cincotta, a 2023 summer associate with Troutman Pepper, is a co-author of this blog post. Anna is not admitted to practice law in any jurisdiction.