Abstracts Track 2023


Area 1 - Databases and Information Systems Integration

Nr: 7
Title:

Integration of Knowledge and Metadata for Complex Data Warehouses and Big Data

Authors:

Razafindraibe Fabrice, Ralaivao J. Christian and Rakotonirainy Hasina

Abstract: This paper resumes work carried out in the field of complex data warehouses (DW) on the management and formalization of knowledge and metadata. It offers a methodological approach for integrating the two concepts, knowledge and metadata, within a complex DW architecture. The approach relies on knowledge representation with description logics and on extending the Common Warehouse Metamodel (CWM) specifications, which is expected to improve the performance of a complex DW. Three essential contributions are expected: the representation of knowledge in description logics, the translation of this knowledge into consistent UML diagrams that respect or extend the CWM specifications, and the use of XML as a pivot format. The field of application is broad but is best suited to systems with heterogeneous, complex, and unstructured content that require extensive (re)use of knowledge, such as medical data warehouses.
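
As a purely illustrative sketch (the concept and role names below are hypothetical, not taken from the paper), description logics let such an architecture state domain knowledge as TBox axioms that a reasoner can check against the warehouse metadata, e.g. in a medical DW:

    \mathit{MedicalReport} \sqsubseteq \mathit{Document} \sqcap \exists\,\mathit{hasMetadata}.\mathit{CWMElement}
    \mathit{AnonymizedReport} \sqcap \exists\,\mathit{contains}.\mathit{PatientIdentifier} \sqsubseteq \bot

The first axiom requires every medical report to carry at least one CWM metadata element; the second makes any "anonymized" report that still contains a patient identifier logically inconsistent, so a reasoner can flag such violations automatically.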

Area 2 - Artificial Intelligence and Decision Support Systems

Nr: 9
Title:

Bias in Using AI for Recruiting: Legal Considerations

Authors:

Janice C. Sipior, Burke T. Ward, Cathy A. Rusinko and Danielle R. Lombardi

Abstract: The use of artificial intelligence (AI) in recruitment decision making has been increasing (Guler and Cahalane, 2022). The increased development and use of these technologies may exacerbate bias or give rise to new biases. Delivering unbiased judgments in recruiting is critical because, if bias is proven, companies can be held legally liable. This paper discusses bias in AI recruiting, examines legal considerations applicable to potentially biased and discriminatory outcomes, and concludes by offering recommendations intended to assist companies deploying AI with insights on how to manage or mitigate bias. Proponents of AI in recruiting contend that the recommendations generated are “efficient, cost effective and impartial” (Guler and Cahalane, 2022, p. 3, emphasis added). However, gender bias (Carpenter, 2015; Larson, 2016; Tatman, 2016) and racial bias (Angwin et al., 2016; Crawford, 2016) have been uncovered in AI applications. A classic example of bias in an AI recruiting system is the one developed by Amazon.com, which was reportedly never used (Dastin, 2018). In analysing the system development, Lauret (2019) determined that the sample of software engineering resumes used for training did not have the same statistical distribution as the overall population. Thus, if human experts are biased, an algorithm trained on the hiring decisions of those biased experts will learn to replicate them (Lauret, 2019). Bias could also be introduced by employees using the application. The concept of heuristics and biases in human judgment was introduced by Tversky and Kahneman (1974; 1986), who theorize that decision-makers rely on five main heuristics: representativeness, availability, anchoring and adjustment, framing, and overconfidence. Considerations in AI development and human biases in use are summarized in Figure 1. Companies may be vulnerable to legal exposure if adverse employment decisions are made by relying on AI (Dattner, 2019). Legal systems have not kept pace with the development of AI. Bias and privacy are the most relevant legal considerations, as reflected in the Algorithmic Accountability Act of 2019, the first federal legislative effort in the United States (USA) to regulate AI in response to concerns about biased and discriminatory outcomes. Introduced in 2019 and updated in 2022 (Algorithmic Accountability Act of 2022) to require audits of AI systems, this act was not passed. However, a recent AI bias law, Automated Employment Decision Tools (2021), was enacted as Local Law 144 in New York City, requiring companies to conduct audits to assess biases in AI used in hiring. Taking a lead role globally, the European Commission (2021) proposed the first legal framework on AI, the Proposal for a Regulation laying down harmonised rules on artificial intelligence, which addresses the risks of specific uses of AI, categorizing them into four levels: unacceptable, high, limited, and minimal risk. Other legal initiatives have been proposed in other countries and states to address the employment-related use of AI. Challenges in these efforts include the definition of AI, which does not yet exist in either EU or USA legislation; legal rights issues (i.e., what grants a person or organization rights and responsibilities under the law); regulatory guidance to comply with required bias audits; legal liability; and privacy issues. Recommendations are offered in Figure 2.

Area 3 - Information Systems Analysis and Specification

Nr: 4
Title:

Cybersecurity Education in Smaller Academic Institutions Using Private Cloud and Nested Virtualization

Authors:

Glenn Papp Jr. and Petter Lovaas

Abstract: For smaller academic institutions, developing and offering effective cybersecurity education can be challenging due to various technological requirements. Pedagogically, classroom activities and coursework are difficult to develop until supporting technologies have been approved and implemented, yet the choice and acquisition of those technologies can be challenging due to cost, both financial and in terms of the expertise required to manage them. Providing and maintaining physical computer labs can be just as costly in money and resources, with the added burden of replacing hardware every few years. This research offers a model that smaller academic institutions could use to provide sufficient technologies to support cybersecurity education amid academic, financial, bureaucratic, and risk management concerns. In response, the researchers propose a private cloud for technology delivery in cybersecurity education programs at smaller academic institutions. We define private cloud in this context as the use of several complementary technologies that allow students to securely access a remote desktop environment through virtual desktop infrastructure (VDI) provided by the institution. The environment provided through the VDI offers the same configurable services as public cloud, from infrastructure-as-a-service (compute, network, and storage) to software-as-a-service (applications) (Bhardwaj et al., 2010). Those technologies and their purposes are: virtual private networks (VPN) for secure remote access to the dedicated cloud network; a virtual machine manager (VMM) for virtualization automation and network virtualization; and self-service cloud portal (SSCP) web services for automated provisioning of virtual machines (VMs). Additionally, the student VMs can use nested virtualization features, which give students the ability to run other VMs using their primary VM as the hypervisor. Although nested virtualization was problematic in terms of performance and usability years ago (Wannous et al., 2012), recent advances have made it usable in numerous environments by passing host virtualization extensions to the guest VM (Ren et al., 2017); a sketch of this prerequisite check follows this abstract. Although providing cloud infrastructure is beneficial in terms of cost, access, and management, it still requires competent design and configuration. One analysis of infrastructure-as-a-service vulnerabilities and countermeasures identified numerous threat vectors (monitoring of VMs from the host, communication between VMs and the host, monitoring of VMs from other VMs, communication between VMs, VM mobility, and denial of service) along with countermeasures (logical network segmentation, firewalls, traffic encryption, network monitoring, and protection of computing and storage resources) (Dawoud et al., 2010). Although these risks seem numerous, the benefits of cloud computing could dramatically improve the educational experience, impact, and finances of a cybersecurity program. For future research, we propose testing the hypotheses that adoption of private cloud for technology delivery in cybersecurity education programs at smaller academic institutions: (1) increases the capability and effectiveness of cybersecurity education programs; (2) optimizes the cost of technology and its management; (3) decreases downtime and maintenance costs of technology; and (4) correlates with stronger institutional cybersecurity.
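
As a minimal sketch of the nested-virtualization prerequisite mentioned above (the helper names are our own; the paths and module parameters are standard on Linux/KVM hosts, and other hypervisors differ), a provisioning script might first verify that a host can pass virtualization extensions through to student VMs:

    # Sketch: verify that a Linux/KVM host can expose nested virtualization
    # to guest VMs. Assumes the standard kvm_intel/kvm_amd module parameters;
    # function names are illustrative, not from the paper.
    from pathlib import Path

    def hardware_virt_flag():
        """Return 'vmx' (Intel) or 'svm' (AMD) if the CPU advertises it."""
        for line in Path("/proc/cpuinfo").read_text().splitlines():
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                for flag in ("vmx", "svm"):
                    if flag in flags:
                        return flag
        return None

    def nested_kvm_enabled():
        """Return True if the loaded KVM module exposes nesting to guests."""
        for module in ("kvm_intel", "kvm_amd"):
            param = Path(f"/sys/module/{module}/parameters/nested")
            if param.exists():
                # The parameter reads 'Y' (or '1') when nesting is enabled.
                return param.read_text().strip() in ("Y", "1")
        return False

    if __name__ == "__main__":
        print("hardware virtualization:", hardware_virt_flag() or "absent")
        print("nested KVM enabled:", nested_kvm_enabled())

On a host where nesting is enabled, the VMM can then present the vmx/svm extension to the primary student VM (for example, via libvirt's host-passthrough CPU mode), letting that VM act as a hypervisor for further guests.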

Area 4 - Human-Computer Interaction

Nr: 6
Title:

Optimal User Design for Career Websites: The Juvigo Case Study

Authors:

Juergen Bluhm

Abstract: Career websites are becoming increasingly important: attracting the right candidates is a critical task at a time when the pool of suitable applicants is shrinking in many countries. The University of Munich, together with the Berlin-based company Juvigo, conducted an eye-tracking study to better understand how Juvigo's career website is used and understood by its users, i.e. students looking for an internship or a full-time job, and how to optimise interaction with the website to increase the number of applications for these offerings. One major finding concerned the time it takes to apply for a job: the process must be short, and the whole application should be as easy as possible. Eye tracking also helped to identify specific problem areas in the website and the application process, which were eliminated after the research. Our presentation will outline best practices derived from the case study with Juvigo.