The EEOC issued technical guidance on how employers’ use of software that relies on algorithmic decision-making may violate the Americans with Disabilities Act.
The EEOC technical assistance focuses on three primary concerns under the ADA:
- Employers should have a process in place to provide reasonable accommodations when using algorithmic decision-making tools;
- Without proper safeguards, workers with disabilities may be “screened out” from consideration in a job or promotion even if they can do the job with or without a reasonable accommodation; and
- If the use of AI or algorithms results in applicants or employees having to provide information about disabilities or medical conditions, it may result in prohibited disability-related inquiries or medical exams.
The EEOC emphasized that “screening out” can occur when a disability lowers a job applicant’s or employee’s performance on a selection criterion, or prevents the individual from meeting that criterion, and the applicant or employee loses a job opportunity as a result.
Employers could be liable even when an algorithmic decision-making tool that discriminates against individuals with disabilities was developed by an outside vendor. In addition, employers may be held responsible for the actions of agents who are given authority to act on the employer’s behalf. The EEOC would likely extend this approach to other anti-discrimination laws.
The EEOC suggested several steps to comply with the ADA and minimize the chances that algorithmic decision-making tools will disadvantage individuals with disabilities, including:
- training staff to recognize and process requests for reasonable accommodation as quickly as possible;
- using algorithmic decision-making tools that have been designed to be accessible to individuals with as many different kinds of disabilities as possible; and
- ensuring that algorithmic decision-making tools measure only abilities or qualifications that are truly necessary for the job.
The Commission recently sued an employer that programmed its online recruiting software to automatically reject older applicants based on age, in violation of the Age Discrimination in Employment Act. “This case is an example of why the EEOC recently launched an Artificial Intelligence and Algorithmic Fairness Initiative,” said EEOC Chair Charlotte Burrows. “Workers facing discrimination from an employer’s use of technology can count on the EEOC to seek remedies.”
More to come: The technical guidance is the first effort in a planned multi-year initiative. EEOC Vice Chair Samuels tweeted that employers should “look out for more work from the AI initiative,” and an EEOC official said in a briefing with employers that the technical assistance is the “first of many things to aid employers.”