October 13th, 2023 | Chris Christian, Director of Compliance, Sterling
New Updates in Employer Use of Artificial Intelligence and Algorithmic Automation Tools
Since Sterling published The Emergence of Federal and State Focus on Employer Use of Artificial Intelligence and Algorithmic Automation in Hiring, regulatory focus on the use of AI and algorithmic automation for employment decisions has intensified. Federal and state regulators have published additional resources for employers with details and insight into how employers are expected to assess the use of AI in employment decision-making practices. Regulators have also defined automated employment decision tools (AEDTs) and clarified their appropriate use in helping to make hiring decisions.
Read on for some of the key federal and state/local regulatory and legislative updates since last year on the use of AI and algorithmic automation by employers. Employers, especially compliance and hiring teams, should take note of these updates to ensure compliant use of hiring and screening tools that enhance the background check experience for themselves and their candidates.
Federal Focus Update
As previously reported, since 2020, federal entities such as the Federal Trade Commission (FTC) and the U.S. Equal Employment Opportunity Commission (EEOC) have launched numerous initiatives to provide oversight, assess adverse impacts, and regulate the use of AI by employers. Recently, EEOC guidance has been updated for employers to reference when they’re considering using AI to assist them in making hiring decisions.
In 2021, the EEOC launched its Initiative on Artificial Intelligence and Algorithmic Fairness, which examines how existing and developing technologies change the ways employment decisions (including AI-enabled employment decisions) are made. The goal of the initiative is to guide employers, employees, job candidates, and vendors to use these technologies fairly and consistently with federal equal employment opportunity laws. Additionally, the EEOC’s systemic investigators received extensive training in 2021 on the use of AI in employment practices.
On May 18, 2023, the EEOC issued new technical guidance on how to measure adverse impact when employment selection tools use AI, titled Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964. The new technical guidance document addresses whether and how to monitor algorithmic decision-making tools that may cause disproportionately large negative effects on the basis of race, color, religion, sex, or national origin under Title VII of the Civil Rights Act of 1964.
To summarize, the new EEOC guidance consists of three sections:
Section one explains the meaning of central terms used in the document — “software,” “algorithm,” and “artificial intelligence” — and how, when used in a workplace, they relate to each other and to basic Title VII principles. For example, the document addresses the term “artificial intelligence” by calling out that while the public usage of this term is evolving, Congress defined “AI” (in the National Artificial Intelligence Initiative Act of 2020 at section 5002(3)) as a “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.”
Section two of the document recaps certain elements of Title VII of the Civil Rights Act of 1964 related to prohibited disparate impact or adverse impact discrimination on employees through the use of employment tests or selection procedures. This section goes on to cover disparate impact cases and typical questions used to determine if an employer’s practices have a disparate impact by excluding a person based on race, color, sex, or national origin.
Section three consists of questions and answers which cover a variety of topics. One question pertains to whether employers can assess their use of an algorithmic decision-making tool for adverse impact in the same way that they assess more traditional selection procedures for adverse impact. Another question asks (under Title VII) who is responsible for the use of algorithmic decision-making tools if they are designed or administered by another entity. The EEOC’s answers to most of the questions in the guidance reference the EEOC-adopted Uniform Guidelines on Employee Selection Procedures (“Guidelines”) under Title VII. These Guidelines provide guidance from the EEOC for employers about how to determine if their employment tests and selection procedures are lawful for purposes of Title VII disparate impact analysis.
It should be noted that the EEOC indicates the guidance is limited in scope. For example, the guidance is limited to the assessment of whether an employer’s “selection procedures” — the procedures it uses to make employment decisions such as hiring (including AI hiring), promotion, and firing — have a disproportionately large negative effect on a basis that is prohibited by Title VII. This document does not address other stages of the Title VII disparate impact analysis, such as whether a tool is a valid measure of important job-related traits or characteristics. The document also does not address Title VII’s prohibitions against intentional discrimination (called “disparate treatment”) or the protections against discrimination afforded by other federal employment discrimination statutes. Furthermore, the EEOC indicates that the guidance contents do not have the force and effect of law and are not meant to bind the public in any way, and that the document is only intended to provide clarity to the public regarding existing requirements under the law.
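The Uniform Guidelines referenced in the EEOC’s guidance are commonly associated with the “four-fifths rule”: a selection rate for any group that is less than four-fifths (80%) of the rate for the group with the highest rate is generally regarded as evidence of adverse impact. As a simplified illustration only (not a legal methodology, and with hypothetical group labels and applicant counts), the basic arithmetic can be sketched as:

```python
def selection_rate(selected, applicants):
    """Selection rate = number selected / number of applicants."""
    return selected / applicants

def four_fifths_check(rates):
    """Flag groups whose selection rate falls below 4/5 (80%) of the
    highest group's rate -- the Uniform Guidelines' general rule of
    thumb for potential evidence of adverse impact."""
    highest = max(rates.values())
    return {group: rate / highest < 0.8 for group, rate in rates.items()}

# Hypothetical applicant pools (illustrative numbers only)
rates = {
    "Group A": selection_rate(48, 80),  # 0.60
    "Group B": selection_rate(12, 40),  # 0.30
}
flags = four_fifths_check(rates)
# Group B's rate (0.30) is 50% of Group A's (0.60), below the 80% threshold
```

Note that the four-fifths rule is only a rough screening heuristic; the EEOC guidance makes clear that a lower ratio is not conclusive proof of a Title VII violation, nor does passing the check guarantee compliance.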
However, employers seeking to use (or currently using) automated or AI-powered tools to make hiring decisions for their organization can benefit by reviewing the new EEOC guidance before implementing such tools further. The guidance can also help employers create company policies regarding their use of AI and similar technological tools. After all, without an AI policy, employers cannot set and maintain organizational standards defining exactly what constitutes acceptable (and unacceptable) use of AI tools and automation.
State and Local Focus Update: NYC
In order to better understand the current state of AI regulation and what it looks like in practice, let’s now shift our focus from the federal developments to the state and local level. Fortunately, New York City legislators have recently answered many employers’ questions about AI with a new FAQ on the subject.
On June 29, 2023, the New York City Department of Consumer and Worker Protection (DCWP) published Frequently Asked Questions (FAQs) related to Local Law 144 of 2021, which regulates employers’ use of automated employment decision tools (AEDTs). The FAQs provide general information and guidance for employers. For background, on December 11, 2021, New York City enacted a new law amending its administrative code to regulate employers’ use of AEDTs for hiring or promotion purposes within the city limits. Local Law 144 of 2021 took effect on January 1, 2023, but concerns around the proposed rules were raised by employers and other stakeholders, resulting in extended rule review and public comment. On April 6, 2023, the DCWP published the long-awaited Notice of Adoption of Final Rules related to Local Law 144. Enforcement of the law was extended to July 5, 2023, to provide employers and others impacted with some additional time to come into compliance.
New York-based employers should consider reviewing all of the FAQs; several notable highlights are summarized below:
FAQ I.2: What is an AEDT?
An AEDT is a computer-based tool that:
- Uses machine learning, statistical modeling, data analytics, or artificial intelligence. AND
- Helps employers and employment agencies make employment decisions. AND
- Substantially assists or replaces discretionary decision-making.
Location of the Job
FAQ I.4: What are the Law’s requirements and how do they apply to an AEDT used “in the city”?
“In the city” means that:
- The job location is an office in NYC, at least part-time. OR
- The job is fully remote, but the location associated with it is an office in NYC. OR
- The location of the employment agency using the AEDT is in NYC or, if the location of the employment agency is outside NYC, one of the bullets above is true.
FAQ II.3: Do employers and employment agencies have to publicly share the results of a bias audit?
The law defines “bias audit” to mean an impartial evaluation by an independent auditor. Employers and employment agencies must publish the bias audit results. The FAQ goes on to state that the published bias audit must include a summary of the results of the most recent bias audit AND the distribution date of the AEDT. The distribution date is the date employers and employment agencies began using the AEDT. The summary of the information must include:
- The date of the most recent bias audit of the AEDT. AND
- The source and explanation of the data used to conduct the bias audit. AND
- The number of individuals the AEDT assessed that fall within an unknown category. AND
- The number of applicants or candidates, the selection or scoring rates, as applicable, and the impact ratios for all categories.
The employer can publish the bias audit on the employment section of their website AND/OR provide an active hyperlink to a website with the information.
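The DCWP final rules define an impact ratio (for tools that select candidates) as a category’s selection rate dividedced by the selection rate of the most selected category. As a minimal sketch of that calculation, with hypothetical category names and rates (illustrative only, not a compliant bias audit):

```python
def impact_ratios(selection_rates):
    """Impact ratio = a category's selection rate divided by the
    selection rate of the most selected category, per the DCWP
    final rules' definition used in bias audit summaries."""
    top = max(selection_rates.values())
    return {cat: rate / top for cat, rate in selection_rates.items()}

# Hypothetical selection rates by category (illustrative numbers only)
rates = {"Category 1": 0.50, "Category 2": 0.40, "Category 3": 0.25}
ratios = impact_ratios(rates)
# Category 1: 1.0, Category 2: 0.8, Category 3: 0.5
```

An actual bias audit must be performed by an independent auditor and must also report applicant counts and data sources, as the FAQ above describes; this sketch shows only the ratio arithmetic.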
FAQ IV.2: Who can be an independent auditor?
An independent auditor is someone who exercises objective and impartial judgment in the performance of a bias audit. The FAQ notes that auditors are not independent if they:
- Work for the employer or employment agency using the AEDT, or for the vendor that developed or distributes the AEDT. OR
- Were involved in using, developing, or distributing the AEDT. OR
- Have a direct or indirect financial interest in the employer or employment agency that uses the AEDT, or in the vendor that developed or distributes the AEDT.
FAQ VI.1: How must employers and employment agencies provide notice of AEDT use?
Employers and employment agencies must notify employees and job candidates who are residents of New York City that they are using an AEDT and the job qualifications or characteristics the AEDT will assess. Employers and employment agencies must:
- Include in the notice instructions to request a reasonable accommodation under other laws. AND
- Provide the notice ten business days before using an AEDT. AND
- Provide the notice in a job posting or by mail or email. Note:
- For job applicants: As an alternative, employers and employment agencies can provide notice on the employment section of their website. Notice on a website does not have to be position-specific.
- For candidates for promotion: As an alternative, employers and employment agencies can include notice in a written policy or procedure. Notice provided in this way does not have to be position-specific.
Laws addressing the use of AI in the workforce, specifically for employment purposes, are still a relatively new concept, and the regulatory and legislative landscape is still unsettled and evolving.
What are possible next steps for employers considering using AI tools and automation to help make hiring decisions?
- Review and Assess: Employers who are considering using (or who are currently using) automated or AI-powered employment tools in their workplace should carefully review and assess those tools prior to implementing them for employment decision-making purposes.
- Create Policies: Employers should also consider developing company policies on the use of AI (for example, concerning background checks with AI) and technology-enabled tools. Without AI policies in place, employers cannot establish and enforce business-wide standards on the acceptable uses of automation and AI tools.
- Leverage Legal Counsel: In addition, employers should consider leveraging legal counsel to review policies and practices.
- Monitor AI Developments: Lastly, employers should examine the regulatory and legislative landscape for new laws and changes in enforcement and guidance. Sterling closely monitors developments in AI for our clients, providing updates on this and many other timely compliance topics.
In these ways, employers can help to leverage the emerging efficiency of AI tools while also helping to stay compliant with the rapidly evolving regulatory framework. For ongoing compliance blogs, updates, webinars, and more, visit Sterling’s Compliance Hub, and sign up for our Compliance Roundup newsletter.
Sterling is not a law firm. This publication is for informational purposes only and nothing contained in it should be construed as legal advice. We expressly disclaim any warranty or responsibility for damages arising out of this information. We encourage you to consult with legal counsel regarding your specific needs. We do not undertake any duty to update previously posted materials.