HR Newsletter
Posted on September 28, 2022 | Compliance
New Laws and Guidance Address Algorithm Bias
Employers are increasingly relying on intelligent software to save time and resources when assessing applicants and employees. This technology typically leverages algorithms and artificial intelligence (AI) to read, collect, process and analyze data and present the results to the employer. While these tools may seem neutral, they can result in violations of nondiscrimination laws when proper safeguards are not in place. As a result, lawmakers and regulators have begun to enact legislation and issue guidance addressing these tools.
Examples of the types of tools covered by these laws and guidance include:
- Resume scanners that prioritize applications using certain keywords (a simplified sketch of this approach appears after this list)
- Employee monitoring software that rates employees on the basis of their keystrokes or other factors
- "Virtual assistants" or "chatbots" that ask job candidates about their qualifications and reject those who don't meet pre-defined requirements
- Video interviewing software that evaluates candidates based on their facial expressions and speech patterns
- Testing software that provides "job fit" scores for applicants or employees regarding their personalities, aptitudes, cognitive skills or perceived "cultural fit," based on their performance on a game or on a more traditional test
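To make the first item concrete, the sketch below shows how a simple keyword-based resume scanner might rank applications. It is a minimal illustration, not any vendor's actual product; the keywords, weights and names are assumptions chosen for the example.

```python
# Hypothetical sketch of a keyword-based resume scanner. The keywords
# and weights are illustrative assumptions, not a real product's.

KEYWORD_WEIGHTS = {
    "python": 3,
    "project management": 2,
    "sql": 2,
    "leadership": 1,
}

def score_resume(resume_text: str) -> int:
    """Sum the weights of every keyword found in the resume."""
    text = resume_text.lower()
    return sum(weight for keyword, weight in KEYWORD_WEIGHTS.items()
               if keyword in text)

def prioritize(resumes: dict[str, str], top_n: int = 10) -> list[str]:
    """Rank applicants by keyword score and keep only the top N."""
    ranked = sorted(resumes, key=lambda name: score_resume(resumes[name]),
                    reverse=True)
    return ranked[:top_n]

candidates = {
    "Applicant 1": "Led a team using Python and SQL on data projects.",
    "Applicant 2": "Built and queried relational databases for analytics.",
}
# Prints ['Applicant 1']: Applicant 2 describes equivalent database
# experience in different words, matches no keywords and scores zero.
print(prioritize(candidates, top_n=1))
```

Even a scanner this simple can skew outcomes: candidates who describe the same experience in different words never match, and if the chosen keywords correlate with a protected characteristic, the ranking can disadvantage a protected group without anyone intending it.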
New York City has enacted legislation that regulates automated employment decision tools like the ones above. The law takes effect on January 1, 2023. Among other things, the law requires covered employers to:
- Provide notice of their use of such tools;
- Have an independent auditor assess each year whether the tool's selection criteria result in any disparate impact based on race, ethnicity or sex (a simplified illustration of such a check follows this list); and
- Retain information about the source and type of data collected for the tool.
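At the heart of a bias audit is a comparison of the tool's selection rates across demographic groups. The sketch below is a simplified illustration of that comparison, not the methodology the law or its implementing rules prescribe; the sample data and the 0.8 flag threshold (borrowed from the EEOC's informal "four-fifths" rule of thumb) are assumptions.

```python
from collections import defaultdict

# Simplified disparate-impact check on a tool's selection outcomes.
# The data, group labels and 0.8 threshold are illustrative assumptions;
# an actual audit under the law must be done by an independent auditor.

def selection_rates(records):
    """records: iterable of (group, was_selected) pairs."""
    selected, total = defaultdict(int), defaultdict(int)
    for group, was_selected in records:
        total[group] += 1
        selected[group] += int(was_selected)
    return {group: selected[group] / total[group] for group in total}

def impact_ratios(records):
    """Each group's selection rate relative to the highest group's rate."""
    rates = selection_rates(records)
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

outcomes = [("Group A", True), ("Group A", True), ("Group A", False),
            ("Group B", True), ("Group B", False), ("Group B", False)]
for group, ratio in impact_ratios(outcomes).items():
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

In this toy data, Group B is selected at half the rate of Group A and is flagged for review; a real audit would run on the tool's actual selection data and follow whatever methodology the city's rules require.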
At the federal level, the U.S. Equal Employment Opportunity Commission (EEOC) has released guidance on how an employer's use of algorithmic decision-making tools can violate the Americans with Disabilities Act (ADA). The ADA:
- Prohibits employers with 15 or more employees from discriminating against applicants and employees because of a disability.
- Requires these employers to provide reasonable accommodations to qualified applicants and employees with disabilities.
Note: A reasonable accommodation is a change that helps a job applicant or employee apply for a job, perform a job, or enjoy equal benefits and privileges of employment.
Most common violations:
In the guidance, the EEOC says the most common ways that an employer's use of algorithmic decision-making tools could violate the ADA are:
1. The employer fails to provide a "reasonable accommodation" that is necessary for a job applicant or employee to be rated fairly and accurately by the algorithm.

Example: A job applicant who has limited manual dexterity because of a disability may report that they would have difficulty taking a knowledge test that requires the use of a keyboard, trackpad or other manual input device. This kind of test might not accurately measure the applicant's knowledge. In this situation, the employer should provide an accessible version of the test (for example, one in which the applicant can provide responses orally rather than manually) as a reasonable accommodation, or an alternative test, unless doing so would cause undue hardship.
2. The employer relies on an algorithmic decision-making tool that intentionally or unintentionally "screens out" an individual with a disability, even though that individual is able to do the job with a reasonable accommodation. A "screen out" occurs when a disability prevents a job applicant or employee from meeting a selection criterion, or lowers their performance on it, and the applicant or employee loses a job opportunity as a result. A disability could have this effect by, for example, reducing the accuracy of the assessment, creating special circumstances that have not been taken into account, or preventing the individual from participating in the assessment altogether.

Example: A chatbot might be programmed with a simple algorithm that rejects all applicants who, during their "conversation" with the chatbot, indicate that they have significant gaps in their employment history. If a particular applicant's employment gap was caused by a disability (for example, if the individual needed to stop working to undergo treatment), the chatbot may screen out that person because of the disability (a sketch of such a rule appears after this list).
3. The employer adopts an algorithmic decision-making tool for use with its job applicants or employees that violates the ADA's restrictions on disability-related inquiries and medical examinations.

Example: An employer might violate the ADA if it uses an algorithmic decision-making tool that poses "disability-related inquiries" or seeks information that qualifies as a "medical examination" before giving the candidate a conditional offer of employment. This type of violation may occur even if the individual does not have a disability. Once employment has begun, disability-related inquiries may be made and medical examinations may be required only if they are justified under the ADA.
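Returning to the chatbot example under the second item above, the sketch below shows the kind of simple, facially neutral rule the guidance describes. The six-month threshold, data layout and function names are assumptions made for illustration.

```python
from datetime import date

# Hypothetical sketch of a chatbot's employment-gap screening rule.
# The 6-month threshold and data layout are illustrative assumptions.

MAX_GAP_MONTHS = 6

def months_between(start: date, end: date) -> int:
    return (end.year - start.year) * 12 + (end.month - start.month)

def has_significant_gap(jobs: list[tuple[date, date]]) -> bool:
    """jobs: (start, end) date ranges, assumed sorted by start date."""
    for (_, prev_end), (next_start, _) in zip(jobs, jobs[1:]):
        if months_between(prev_end, next_start) > MAX_GAP_MONTHS:
            return True
    return False

def chatbot_decision(jobs: list[tuple[date, date]]) -> str:
    # The rule never mentions disability, yet it rejects an applicant
    # whose gap was caused by one (e.g., time off for treatment) just
    # as readily as any other applicant -- the "screen out" at issue.
    return "reject" if has_significant_gap(jobs) else "advance"

history = [(date(2015, 1, 1), date(2018, 3, 1)),   # left work for treatment
           (date(2019, 6, 1), date(2022, 9, 1))]   # 15-month gap
print(chatbot_decision(history))  # prints "reject"
```

Nothing in the rule refers to disability, yet it rejects an applicant whose gap resulted from treatment for one. Under the guidance, the employer remains responsible for that outcome, which is why the promising practices discussed below include providing reasonable accommodations and asking vendors how their tools work.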
Best practices:
In the guidance, the EEOC identifies several promising practices for employers, including:
- Providing reasonable accommodations;
- Reducing the chances that the tools will disadvantage individuals with disabilities;
- Minimizing situations in which the tools assign poor ratings to individuals who can perform the essential functions of the job with a reasonable accommodation; and
- Asking vendors questions about the tools they offer.
Conclusion:
Employers that intend to use algorithmic tools to make employment decisions should review the EEOC's guidance in full and ensure compliance with federal, state, and local nondiscrimination laws. Employers in jurisdictions with laws that expressly address such tools, such as New York City, should ensure compliance with those laws as well.