As new technologies continue to emerge, changes to the ways companies hire and work are a natural consequence. Artificial intelligence (AI), and the software applications that utilize it, play an increasingly large role in the workplace. Employers can use algorithms and other AI tools to help sort through potential job candidates, select new employees, monitor the performance of existing employees, and determine compensation. These tools, however, can also create legal exposure for employers who do not properly monitor their AI-aided technologies for unlawful discrimination under federal or state law. Davenport Evans Litigation attorney Shane Eden explains.
The Americans with Disabilities Act (ADA) prohibits companies with 15 or more employees from discriminating on the basis of disability and requires such employers to provide reasonable accommodations to individuals with disabilities, unless doing so would create an undue hardship. On May 12, 2022, the U.S. Equal Employment Opportunity Commission (EEOC) and the U.S. Department of Justice (DOJ) each released technical assistance documents concerning disability discrimination associated with an employer’s use of AI in making employment decisions. The EEOC document, titled “The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees,” provides guidance to employers on how to prevent discrimination against job seekers and existing employees with disabilities. This publication is part of an ongoing effort by the EEOC to educate employers about the application of anti-discrimination laws when using software and applications that rely on algorithmic decision-making. The DOJ’s guidance document, “Algorithms, Artificial Intelligence, and Disability Discrimination in Hiring,” similarly provides a general overview of rights and responsibilities, written to be accessible to employers without a strong background in technical issues.
One primary concern highlighted by the EEOC and DOJ involves the use of AI technology that has the potential to “screen out,” or exclude, individuals with disabilities who are nevertheless qualified for a position. For example, tools that reject applications based on certain keywords, or that rank or score candidates against certain data sets, can be problematic. Regardless of an employer’s intent, an overly restrictive algorithm has the potential to violate the ADA by screening out individuals on the basis of a disability. Additional problems can arise where the use of AI or algorithms results in applicants or employees being required to answer legally prohibited disability-related questions.
Employers are advised to be conscious of the types of technology they are using and to consider how such tools could impact individuals with different disabilities. The EEOC provides employers with a list of “promising practices” to help ensure compliance with the ADA when using algorithmic decision-making software. By recognizing the ways that new technologies can discriminate, employers can take steps to prevent such outcomes while still benefiting from these increasingly helpful tools.