April 27th, 2022 | Chris Christian, Director of Compliance, Sterling
The Emergence of Federal and State Focus on Employer Use of Artificial Intelligence and Algorithmic Automation in Hiring
The use of artificial intelligence (A.I.) to evaluate candidates is part of a growing trend to remove bias and increase objectivity in the hiring process. However, the emergence of A.I. and algorithmic automation in hiring has steadily gained the attention of both federal agencies and state lawmakers. Some state and local lawmakers are concerned that, instead of removing bias, hiring tools that use A.I. may unintentionally embed a discriminatory bias. At the federal level, while no federal law yet regulates A.I., several federal agencies have turned their focus to A.I. and algorithmic automation and their impact on discrimination bias. If and when employers begin to use A.I.-based tools, it is critical that they stay abreast of the laws and regulations governing their use.
Federal Trade Commission
Over the past several years, various federal agencies have shown an interest in the use of artificial intelligence and algorithmic decision-making. In 2016, the Federal Trade Commission (FTC) issued its report titled Big Data: A Tool for Inclusion or Exclusion? The report focused on the benefits and risks created by the use of big data analytics; the consumer protection and equal opportunity laws that currently apply to big data; research in the field of big data; and lessons that companies should take from that research. Subsequently, in 2018, the FTC held a hearing titled The Competition and Consumer Protection Issues of Algorithms, Artificial Intelligence, and Predictive Analytics. In 2020, the FTC posted its guidance on Using Artificial Intelligence and Algorithms, and building off that guidance, it posted additional guidance in 2021: Aiming for truth, fairness, and equity in your company’s use of AI.
Equal Employment Opportunity Commission
In 2021, the U.S. Equal Employment Opportunity Commission (EEOC) launched its Initiative on Artificial Intelligence and Algorithmic Fairness. The EEOC indicated its initiative would examine how existing and developing technologies change the ways employment decisions are made. The initiative’s goal is to guide employers, employees, job applicants, and vendors to ensure that these technologies are used fairly and consistently with federal equal employment opportunity laws. Additionally, the EEOC’s systemic investigators received extensive training in 2021 on the use of A.I. in employment practices.
State and Local Focus
State and local jurisdictions have also focused their efforts on regulating employer use of A.I. and algorithmic decision-making. Below are a few recent examples of state and local laws addressing employers’ use of A.I. in the hiring process.
Illinois
In 2019, Illinois established the Artificial Intelligence Video Interview Act (the “Act”). The Act regulates the use of A.I. in the hiring process and imposes requirements on Illinois employers who use A.I. video interview processes to evaluate candidates. The Act took effect on January 1, 2020, and employers in the state who use A.I. interviewing must comply with it.
New York City
On December 11, 2021, New York City enacted a new law amending the city’s administrative code to regulate employers’ use of automated employment decision tools for hiring or promotion decisions within the city. Local Law Int. No. 1894-A, which takes effect on January 1, 2023, applies to employers and employment agencies. The law makes it unlawful for an employer or an employment agency to use an automated employment decision tool to screen a candidate or employee for an employment decision unless: 1) the tool has been the subject of a bias audit no more than one year prior to its use, and 2) a summary of the results of that bias audit has been made publicly available on the employer’s website.
California
On March 15, 2022, the California Fair Employment & Housing Council released draft revisions which, if adopted, would expand the regulations implementing the state’s existing employment discrimination law. The proposed draft regulations, Employment Regulations Regarding Automated-Decision Systems, would expand both the liability risks and the obligations of employers and their vendors that use, sell, or administer employment-screening tools or services that leverage artificial intelligence, machine learning, or other data-driven statistical processes to automate decision-making.
The proposed draft defines “Automated-Decision System” as “a computational process including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that screens, evaluates, categorizes, recommends, or otherwise makes a decision or facilitates human decision making that impacts employees or applicants.” An “Automated-Decision System” includes, but is not limited to, the following:
- Algorithms that screen resumes for particular terms or patterns;
- Algorithms that employ face and/or voice recognition to analyze facial expressions, word choices, and voices;
- Algorithms that employ gamified testing that include questions, puzzles, or other challenges used to make predictive assessments about an employee or applicant, or to measure characteristics including but not limited to dexterity, reaction time, or other physical or mental abilities or characteristics; and
- Algorithms that employ online tests meant to measure personality traits, aptitudes, cognitive abilities, and/or cultural fit.
As the use of A.I. to evaluate candidates grows, it is critical that employers stay abreast of the laws and regulations which govern these tools. Employers should review their background screening policies and practices and consult legal counsel before implementing A.I. and automated algorithmic decision tools. Sterling’s compliance experts regularly post timely compliance updates to help you stay ahead of ever-changing regulations.
This blog post is part of a Compliance blog series, diving into compliance trends, best practices, and updates.
Sterling is not a law firm. This publication is for informational purposes only and nothing contained in it should be construed as legal advice. We expressly disclaim any warranty or responsibility for damages arising out of this information. We encourage you to consult with legal counsel regarding your specific needs. We do not undertake any duty to update previously posted materials.