Abstract: |
The use of artificial intelligence (AI) in recruitment decision-making is increasing (Guler and Cahalane, 2022). The growing development and use of these technologies may exacerbate existing biases or give rise to new ones. Delivering unbiased judgments in recruiting is critical because companies whose decisions are shown to be biased can be held legally liable. This paper discusses bias in AI recruiting, examines legal considerations applicable to potentially biased and discriminatory outcomes, and concludes with recommendations to help companies deploying AI manage or mitigate bias.
Proponents of AI in recruiting contend that the recommendations it generates are “efficient, cost effective and impartial” (Guler and Cahalane, 2022, p. 3, emphasis added). However, gender bias (Carpenter, 2015; Larson, 2016; Tatman, 2016) and racial bias (Angwin et al., 2016; Crawford, 2016) have been uncovered in AI applications. A classic example of bias in an AI recruiting system is the one developed by Amazon.com, which was reportedly never used (Dastin, 2018). In analyzing the system's development, Lauret (2019) determined that the sample of software engineering resumes used for training did not have the same statistical distribution as the overall population. Thus, if human experts are biased, an algorithm trained on data about the candidates those experts hired will learn to replicate their biased hiring decisions (Lauret, 2019).
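To make Lauret's point concrete, the following minimal sketch (with entirely synthetic data; the coefficients, group sizes, and variable names are illustrative assumptions, not details of Amazon's system) trains a simple classifier on historical hiring labels that favor one group, and shows that the learned model then scores two candidates of identical skill differently based on group membership alone.

```python
# Illustrative sketch: a model trained on historically biased hiring
# labels learns to replicate that bias. All data here are synthetic
# assumptions, not drawn from any real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (e.g., gender), encoded 0/1; skill is independent of it.
group = rng.integers(0, 2, size=n)
skill = rng.normal(size=n)

# Biased historical labels: past human screeners favored group 1,
# so "hired" depends on group membership as well as skill.
hired = (skill + 1.0 * group + rng.normal(scale=0.5, size=n)) > 0.8

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Two candidates with identical skill, differing only in group:
same_skill = np.array([[0.0, 0], [0.0, 1]])
print(model.predict_proba(same_skill)[:, 1])  # group-1 probability is higher
```

The disparity comes from the labels, not the learner: any model fit to decisions generated by a biased screening process will tend to reproduce that process, regardless of how much training data is collected from it.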
Bias could also be introduced by the employees who use the application. The concept of heuristics and biases in human judgment was introduced by Tversky and Kahneman (1974; 1986), who theorize that decision-makers rely on five main heuristics: representativeness, availability, anchoring and adjustment, framing, and overconfidence. Considerations in AI development and human biases in use are summarized in Figure 1.
Companies may be exposed to legal liability if adverse employment decisions are made in reliance on AI (Dattner, 2019). Legal systems have not kept pace with the development of AI; bias and privacy are the most relevant legal considerations. The Algorithmic Accountability Act of 2019 was the first federal legislative effort in the United States (USA) to regulate AI in response to concerns about biased and discriminatory outcomes; introduced in 2019 and updated in 2022 to require audits of AI systems (Algorithmic Accountability Act of 2022), the act was not passed. However, a recent AI bias law, Automated Employment Decision Tools (2021), was enacted as Local Law 144 in New York City, requiring companies to conduct audits to assess bias in AI used in hiring. Taking a lead role globally, the European Commission (2021) proposed the first legal framework on AI, the Proposal for a Regulation laying down harmonised rules on artificial intelligence, which addresses the risks of specific uses of AI by categorizing them into four levels: unacceptable, high, limited, and minimal risk. Other legal initiatives addressing the employment-related use of AI have been proposed in other countries and states. Challenges in these efforts include the lack of a definition of AI in either EU or USA legislation; legal rights issues (i.e., what grants a person or organization rights and responsibilities under the law); regulatory guidance for complying with required bias audits; legal liability; and privacy. Recommendations are offered in Figure 2.
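As a rough illustration of what such bias audits measure, the sketch below compares selection rates across groups in the spirit of the four-fifths rule of thumb used in USA disparate-impact analysis. The candidate counts are hypothetical, and this simplified calculation is an assumption for illustration, not the methodology prescribed by Local Law 144 or any other statute.

```python
# Illustrative selection-rate ("impact ratio") check of the kind bias
# audits typically report (cf. the EEOC four-fifths rule of thumb).
# All counts below are hypothetical.
selected = {"group_a": 48, "group_b": 27}   # candidates advanced by the tool
applied  = {"group_a": 100, "group_b": 90}  # candidates screened

rates = {g: selected[g] / applied[g] for g in applied}
best = max(rates.values())

for g, rate in rates.items():
    ratio = rate / best  # impact ratio relative to the highest-rate group
    flag = "REVIEW" if ratio < 0.8 else "ok"  # four-fifths threshold
    print(f"{g}: selection rate {rate:.2f}, impact ratio {ratio:.2f} [{flag}]")
```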