
Illinois Restricts Use of AI in Interviews


Article updated as of June 28, 2019

The same state that brought us the Biometric Information Privacy Act, 740 ILCS 14/1 et seq., and its resulting flood of litigation is once again blazing a new trail in protecting workers from emerging technologies. On May 29, 2019, the Illinois legislature passed the Artificial Intelligence Video Interview Act, which will go into effect once it is signed by the governor. In taking this step, Illinois became the first state to legislate the use of Artificial Intelligence ("AI") in the employment context. The new law targets employers that require applicants to provide a video interview and thereafter use AI technology to analyze the candidate's body language, speech patterns and other characteristics to score and predict the candidate's likelihood of success at that organization. Employers utilizing such technology must now: (i) notify each applicant that AI may be used to analyze the applicant's facial expressions and other characteristics as part of evaluating the applicant's fitness for the position; (ii) provide each applicant with information explaining how the AI technology works and what characteristics it uses to evaluate applicants; and (iii) obtain the applicant's consent based on the description provided. This notice and consent must occur before the applicant is required to provide the video interview. The law applies to all applicants for an Illinois-based position, regardless of whether the candidate lives in Illinois or elsewhere.

Once the video is created, the law prohibits the employer from sharing it with anyone other than persons whose expertise is necessary to evaluate the applicant's fitness for the position. While not entirely clear, this prohibition presumably applies to disclosures outside the company rather than barring unnecessary disclosures within the organization. Out of an abundance of caution, employers should limit disclosure of such videos to only those individuals in recruiting and management who are necessary to the hiring process.

The law further gives applicants the right to request the destruction of their video. Companies receiving such a request must destroy the video, including any copies that were shared and any electronically generated backup copies, within thirty (30) days of receipt. The requirement to destroy electronically generated backup copies could create headaches for employers whose backup technologies are not conducive to targeted deletion of specific files. The thirty (30) day destruction requirement may also conflict with (i) Equal Employment Opportunity Commission record retention requirements, which presumably require the retention of such videos for one year, and (ii) any data preservation obligations that arise when litigation is threatened.

Another potential problem with the new AI-focused law is that it does not define what constitutes AI. AI is a term frequently applied to a wide range of technologies that attempt to leverage big data for better decision-making, but some would argue that true AI is a much narrower subset involving actual machine learning, in which the system learns from the data it is "taught." Rather than debate whether your technology constitutes "real" AI, the better course is to comply with this law for any technology that reviews and assesses video interviews.

Additional Considerations for AI Technologies that Impact Employees

Employers wishing to implement AI technologies in HR and recruiting must also thoroughly vet such technologies and continuously monitor their use for implicit (and sometimes hidden) biases against individuals based on their race, gender, age, national origin, disability or other protected characteristic. For example, an AI technology that assesses candidate video interview submissions by analyzing speech patterns, body language and similar characteristics must be able to recognize and account for situations where a candidate's disability affects his or her speech patterns or body language, rather than rejecting or downgrading that candidate's score. Employers must also recognize that AI-based technologies constantly learn and evolve and, in some instances, may learn to prefer one group over another, such as preferring men over women or preferring younger candidates. Employers must therefore continually monitor and audit how the AI technology works and how it scores candidates, both to uncover obvious and hidden biases and to correct any disparate impact on a protected group.

Practical Advice for Companies Considering AI Technologies

Companies utilizing AI technology to evaluate candidate video interviews in Illinois should:

  • Create a written document: (i) notifying the candidate that AI may be used to evaluate his/her fitness for the position; (ii) describing the AI technology to be used, how it works and the characteristics it uses to evaluate applicants; and (iii) providing for written consent to the use of this technology consistent with the description.
  • Revise any recruiting checklists and procedures to note the new notice and consent requirements, as well as the need to destroy the video within thirty (30) days of an applicant's request; and
  • Create a policy identifying the individuals with whom the videos can be shared and a protocol for (i) retrieving and destroying any videos shared with those individuals and (ii) ensuring any electronically generated backup copies are destroyed as well.

With regard to companies looking to incorporate other AI technologies into the human resources or recruiting function, recommendations include:

  • Thoroughly investigating the potential for obvious and hidden biases against individuals in certain protected groups (e.g. race, gender, age, national origin or disability) before implementing the new technology;
  • Test-driving the new technology separately from your normal human resources and recruiting processes to evaluate how the AI technology performs compared with your normal methods and to discover any implicit biases in the technology;
  • Assigning the appropriate stakeholder with responsibility for the initial investigation of the technology and the continuous monitoring and auditing of the technology throughout its use to make sure the necessary oversight does not fall through the cracks;
  • Evaluating whether the vendor providing the AI technology is willing to offer indemnification for any lawsuits arising from its use (e.g. a disparate impact class action) and the value of any such commitment when considering potential exposure;
  • Ensuring your company's digital ethics program has a strong compliance framework that addresses: (i) your company's ethics regarding how AI will and will not be used; (ii) transparency and open communication regarding how AI is used with regard to your employees and applicants; and (iii) the internal processes and governance related to ensuring that your use of AI complies with your organization's code of ethics and the law, including discovering and rooting out implicit biases that directly or indirectly discriminate on account of race, sex, age, national origin and other protected characteristics.

For more information on the legal and ethical implications of utilizing AI in the employment context, please contact your local Quarles & Brady attorney.
