Abstracts Track 2024

Area 1 - Artificial Intelligence and Decision Support Systems

Nr: 269

Risk in the Use of GenAI


Janice C. Sipior, Burke T. Ward, Danielle R. Lombardi and Renata Gabryelczyk

Abstract: Generative artificial intelligence (GenAI) is expected to revolutionize the workforce (Thormundsson, 2024). About one-third of organizations are using GenAI regularly in at least one business function (McKinsey & Company, 2023). The potential of GenAI is demonstrated by ChatGPT-4 passing the CPA, CMA, CIA, and EA accounting exams (Eulerich et al., 2023), as well as the bar exam and the legal ethics exam (Ambrogi, 2023). While seemingly impressive, caution is warranted, as GenAI may have unexpected limitations, disrupt practices, threaten privacy and security, and bring many consequences from biases, misuse, and misinformation (Dwivedi et al., 2023). Our research question is: What are the risks of using GenAI, and how can those risks be managed? To answer it, we follow a taxonomy development approach (Nickerson et al., 2013), similar to that of Petrik et al. (2023), drawing upon operational risk, which provides a solid theoretical foundation. We identify risks in using GenAI based upon reported example uses, presented in Table 1, and a review of the literature. Our taxonomy makes two main contributions. First, it offers a categorization of possible risks of using GenAI and identifies the types of operational risks to manage, providing a basis for future research on these risks. Second, it proposes risk management considerations specific to GenAI, which can be utilized in practice. Operational risk events, as defined by the Basel Committee on Banking Supervision (BCBS, 2017), are those "result[ing] from inadequate or failed internal processes, people, and systems or from external events" (p. 128). Firms experiencing such events may incur not only reputational loss but also significant declines in market value (Benaroch et al., 2012).
Our focus is on information technology (IT) operational risk, which has received little attention within the Information Systems (IS) literature (Bauer and Bernroider, 2013; Goldstein et al., 2011; Hsu et al., 2014; Sipior et al., 2019), and specifically on GenAI operational risk. Its importance is underscored by the World Economic Forum's recognition of technological instabilities as the fourth biggest global risk (World Economic Forum, 2019). Jordan and Silcock (2005, p. 48) define IT risk as "something that can go wrong with IT and cause a negative impact on the business." IT-related risks have traditionally been viewed as only a specialized subset of operational risk, but they in fact span all seven types of operational risk event categories (Goldstein et al., 2011; Osken and Onay, 2016). Data-related risks and function-related risks have been distinguished as two types of IT operational risk events (Goldstein et al., 2011), to which we add use-related IT operational risks. In our taxonomy, depicted in Figure 1, we conceptualize IT operational risk as a component, rather than a subset, of overall operational risk, spanning all seven types of risk events. We formulate GenAI operational risk as a component of IT operational risk, comprising eight areas of risk events (see Table 2) that span data-, function-, and use-related risks. The arrival of GenAI is expected to give rise to "unforeseen legal, ethical, and cultural questions" (Pazzanese, 2023, fourth paragraph). GenAI was identified as a top concern for risk executives for the first time in 2023 (Deloitte, 2023). Our taxonomy is intended to build awareness and help create a proactive IT risk culture, in line with Bauer and Bernroider (2013), with a focus on the risks of using GenAI.