ICEIS 2025 Abstracts


Area 1 - Databases and Information Systems Integration

Full Papers
Paper Nr: 25
Title:

Exploring Trust in Blockchain Technology: A Critical Review of the Theoretical Acceptance Models

Authors:

Betül Aydogdu and Irina Rychkova

Abstract: Although Blockchain Technology (BCT) is widely acknowledged for its disruptive potential in reshaping industries through decentralization and enhanced security, its adoption has been slower than anticipated. To address this gap, this secondary study examines 21 recently published surveys on BCT acceptance. While existing literature confirms the critical role of trust in BCT acceptance, our study examines its effect in detail, focusing on the role it plays in the theoretical acceptance models — whether as a predictor, mediator, or moderator. This research contributes to a deeper understanding of the BCT adoption process, which is essential for effective policy-making and defining and implementing digital transformation strategies within organizations.
Download

Paper Nr: 47
Title:

Business Impacts of Data Analytics in the Service Sector: A Systematic Literature Review

Authors:

Maria Madlberger and Mykhailo Yesaulov

Abstract: The service sector is one of the industries most affected by digital technology and data analytics. Despite a large body of literature on the effects of various data analytics techniques, a comprehensive review of academic insights into the impacts of data analytics in the service sector is missing. The goal of this paper is a systematic literature review of the impacts that data analytics techniques exert on the performance of service provision. A sample of 70 scholarly articles has been identified and analyzed. A majority of the analyzed articles address data analytics in general, big data analytics techniques, or artificial intelligence, whereas fewer studies investigate the impacts of specific data analytics techniques. The impacts of data analytics can be categorized into factors that relate to customer responses, management and decision-making, and long-term indirect effects on competitiveness and monetary impacts. The findings further show that data analytics based on big data analytics techniques yields different outcomes than analytics approaches based on artificial intelligence.
Download

Paper Nr: 51
Title:

SEALM: Semantically Enriched Attributes with Language Models for Linkage Recommendation

Authors:

Leonard Traeger, Andreas Behrend and George Karabatis

Abstract: Matching attributes from different repositories is an important step in the process of schema integration to consolidate heterogeneous data silos. In order to recommend linkages between relevant attributes, a contextually rich representation of each attribute is essential, particularly when more than two database schemas are to be integrated. This paper introduces the SEALM approach to generate a data catalog of semantically rich attribute descriptions using Generative Language Models, based on a new technique that employs six variations of available metadata information. Instead of using raw attribute metadata, we generate SEALM descriptions, which are used to recommend linkages with an unsupervised matching pipeline that involves a novel multi-source Blocking algorithm. Experiments on multiple schemas yield a 5% to 20% recall improvement in recommending linkages with SEALM-based attribute descriptions generated by the smallest Llama3.1:8B model compared to existing techniques. With SEALM, only the small fraction of attributes to be integrated needs to be processed, rather than exhaustively inspecting all combinations of potential linkages.
Download

Paper Nr: 84
Title:

Warehousing Data for Brand Health and Reputation with AI-Driven Scores in NewSQL Architectures: Opportunities and Challenges

Authors:

Paulo Siqueira, Rodrigo Dias, João Silva-Leite, Paulo Mann, Rodrigo Salvador, Daniel de Oliveira and Marcos Bedo

Abstract: This study explores the use of NewSQL systems for brand health and reputation analysis, focusing on multidimensional modeling and Data Warehouses. While row-based and relational OLAP systems (ROLAP) struggle to ingest large volumes of data and NoSQL alternatives rely on physically coupled models, NewSQL solutions enable Data Warehouses to maintain their multidimensional schemas, which can be seamlessly implemented across various physical models, including columnar and key-value structures. Additionally, NewSQL provides ACID guarantees for data updates, which is instrumental when data curation involves human supervision. To address these challenges, we propose a Star schema model to analyze brand health and reputation, focusing on the ingestion of large volumes of data from social media and news sources. The ingestion process also includes rapid data labeling through a large language model (GPT-4o), which is later refined by human experts through updates. To validate this approach, we implemented the Star schema in a system called RepSystem and tested it across four NewSQL systems: Google Spanner, CockroachDB, Snowflake, and Amazon Aurora. An extensive evaluation revealed that NewSQL systems significantly outperformed the baseline ROLAP (a multi-sharded PostgreSQL instance) in terms of: (i) data ingestion time, (ii) query performance, and (iii) maintenance and storage. Results also indicated that the primary bottleneck of RepSystem lies in the classification process, which may hinder data ingestion. These findings highlight how NewSQL can overcome the drawbacks of row-based systems while maintaining the logical model, and suggest the potential for integrating AI-driven strategies into data management to optimize both data curation and ingestion.
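
As a rough illustration of the multidimensional modeling the paper builds on, the sketch below creates a minimal star schema for brand-mention facts. Table and column names are hypothetical, not RepSystem's actual schema, and stdlib sqlite3 merely stands in for a NewSQL engine.

```python
# Illustrative star schema for brand-mention analysis (hypothetical names,
# not RepSystem's actual schema); sqlite3 stands in for a NewSQL engine.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_source   (source_id INTEGER PRIMARY KEY, name TEXT, kind TEXT);
CREATE TABLE dim_brand    (brand_id  INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE dim_date     (date_id   INTEGER PRIMARY KEY, day DATE);
CREATE TABLE fact_mention (
    mention_id INTEGER PRIMARY KEY,
    source_id  INTEGER REFERENCES dim_source(source_id),
    brand_id   INTEGER REFERENCES dim_brand(brand_id),
    date_id    INTEGER REFERENCES dim_date(date_id),
    sentiment_score REAL,   -- AI-assigned label, later refined by humans
    reach INTEGER
);
""")

# A typical OLAP-style rollup: average sentiment per brand and source kind.
cur = conn.execute("""
    SELECT b.name, s.kind, AVG(f.sentiment_score)
    FROM fact_mention f
    JOIN dim_brand  b ON b.brand_id  = f.brand_id
    JOIN dim_source s ON s.source_id = f.source_id
    GROUP BY b.name, s.kind
""")
```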
Download

Paper Nr: 96
Title:

An Approach for Product Record Linkage Using Cross-Lingual Learning and Large Language Models

Authors:

Andre Luiz Firmino Alves, Cláudio de Souza Baptista, José Itallo Martins Silva Diniz, Francisco Igor de Lima Mendes and Mateus Queiroz Cunha

Abstract: Organizations increasingly rely on data for the decision-making process. Nevertheless, significant challenges arise from poor data quality, leading to incomplete, inconsistent, and redundant information. As dependency on data grows, it becomes essential to develop techniques that integrate information from various sources while dealing with these challenges in the context of product matching. Our work investigates information retrieval and entity resolution approaches to product matching problems related to short and varied product descriptions in commercial data, such as those found in electronic invoices. Our proposed approach, STEPMatch, employs deep learning models alongside cross-lingual learning techniques, enhancing adaptability in contexts with limited or incomplete data and identifying products accurately and consistently.
Download

Paper Nr: 137
Title:

Implementing the Perturbation Approach for Reliability Assessment: A Case Study in the Context of Flight Delay Prediction

Authors:

Simon Staudinger, Christoph Großauer, Pascal Badzura, Christoph G. Schuetz and Michael Schrefl

Abstract: Organizations employ prediction models as a foundation for decision-making. A prediction model learned from training data is often only evaluated using global quality indicators, e.g., accuracy and precision. These global indicators, however, do not provide guidance regarding the reliability of the prediction for a specific input case. In this paper, we instantiate a generic reference process for implementing reliability assessment methods for specific input cases on the real-world use case of flight delay prediction. We specifically implement the perturbation approach to reliability assessment for this use case and then describe the steps that were taken to train the prediction model, with an emphasis on the activities required to implement the perturbation approach. The perturbation approach consists of slightly altering feature values for an individual input case, e.g., within the margins of error of a sensed value, and observing whether the prediction of the model changes, which would render the prediction unreliable. The implementation of the perturbation approach requires decisions and documentation along the various stages of the data mining process. A generic tool can be used to document and perform reliability assessment using the perturbation approach.
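
A minimal sketch of the perturbation idea as the abstract describes it, under assumed per-feature error margins (this is not the authors' generic tool):

```python
# Minimal sketch of perturbation-based reliability assessment: perturb one
# input case within per-feature margins and flag the prediction as
# unreliable if the predicted class ever changes. Margins, sample count,
# and model interface are illustrative assumptions.
import numpy as np

def is_reliable(model, x, margins, n_samples=200, rng=None):
    rng = rng or np.random.default_rng(0)
    base = model.predict(x.reshape(1, -1))[0]
    for _ in range(n_samples):
        noise = rng.uniform(-1.0, 1.0, size=x.shape) * margins
        if model.predict((x + noise).reshape(1, -1))[0] != base:
            return False  # prediction flips under a small perturbation
    return True
```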
Download

Paper Nr: 164
Title:

Multilevel Hypergraphs: A Conceptual Approach for Complex System Database Modelling

Authors:

José Ribas and Orlando Belo

Abstract: Graphs are very specialized structures for modelling and representing data objects and their relationships in real-world applications. The number and diversity of graph-based applications existing today are clear testimonies of the importance and relevance of the application of graphs in solving real-world problems. However, more conventional graph structures have difficulty keeping up with the evolving complexity of problems, particularly when they involve n-ary relationships between data objects. This can be overcome using hypergraphs, which allow for representing complex relationships between finite sets of data objects. However, their implementation still has some difficulties, such as the establishment of efficient algebras and computing mechanisms to deal with relational content between entities of a dataset. In this paper, we present an extension to conventional hypergraph-based models for modelling real-world problems, proposing a new functional abstraction based on a graph structure with several levels of abstraction. Relationships between data objects are established at each level in a traditional way, while relationships between levels are defined by “levelled” virtual data objects, allowing for the establishment of inheritance relationships between data objects of sequential levels, through a logical governance structure defining the relational flow between the various levels of the established model. We name this structure a multilevel hypergraph.
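
The paper's formal model is richer than any snippet can show, but a toy data structure, under our reading of the abstract, might hold level-local hyperedges plus virtual links between consecutive levels:

```python
# Toy rendering of a multilevel hypergraph, as we read the abstract:
# n-ary hyperedges connect objects within a level, while "levelled"
# virtual links relate objects of consecutive levels. Purely illustrative,
# not the authors' model or governance structure.
from dataclasses import dataclass, field

@dataclass
class MultilevelHypergraph:
    levels: dict = field(default_factory=dict)         # level -> set of objects
    hyperedges: dict = field(default_factory=dict)     # level -> list of frozensets
    virtual_links: list = field(default_factory=list)  # (obj, level, obj, level + 1)

    def add_hyperedge(self, level, members):
        self.levels.setdefault(level, set()).update(members)
        self.hyperedges.setdefault(level, []).append(frozenset(members))

    def link_levels(self, obj, level, child):
        # inheritance relationship between objects of sequential levels
        self.virtual_links.append((obj, level, child, level + 1))
```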
Download

Paper Nr: 166
Title:

Data Governance Capabilities Model: Empirical Validation for Perceived Usefulness and Perceived Ease of Use in Three Case Studies of Large Organisations

Authors:

Jan R. Merkus, Remko W. Helms and Rob J. Kusters

Abstract: A Data Governance Capabilities (DGC) model for measuring the status quo of Data Governance (DG) in an organisation has been validated in practice. After DG experts gained experience with the operationalised DGC model, we evaluated its perceived usefulness (PU) and perceived ease of use (PEOU) in case studies of three large organisations in the Netherlands. PU and PEOU are evaluated positively, but a moderator and knowledgeable participants remain necessary to make a meaningful contribution.
Download

Paper Nr: 188
Title:

A Framework Model for Supporting Transparent Polyglot Persistence with a Unified API and Extensible for Different Database Types

Authors:

Fernando de Oliveira Pereira, Eduardo Martins Guerra and Reinaldo Roberto Rosa

Abstract: This work introduces the Transparent Polyglot Persistence Framework Model (TPPFM) for supporting polyglot persistence through a unified API for extension. The framework employs the Esfinge Query Builder as its basis, restructuring it to provide polyglot functionality in alignment with the proposed framework model. A real database case study is conducted to demonstrate the viability of the proposed framework and its reference implementation. The ease of implementation for the developer and the transparency concerning the utilization of several databases within the same domain model are demonstrated.
Download

Paper Nr: 277
Title:

Sustainability in Product Models: Leveraging Adjacent Information for CO₂ Profiling in Configurations

Authors:

Anders Jakobsen and Torben Tambo

Abstract: This paper introduces the concept of Sustainability Adjacency as a framework for integrating adjacent information into CO₂ profiling and product configuration systems. By leveraging supplementary data, such as supplier emissions, logistics, and lifecycle assessments, the framework enables a comprehensive evaluation of a product’s sustainability impact. Current sustainability initiatives often operate in silos, neglecting broader trade-offs like transportation emissions in refurbishment or end-of-life scenarios. The proposed framework addresses these gaps by centralizing critical data, ensuring its propagation across organizational functions to prioritize low-emission configurations. Through an action research approach, the study highlights systemic barriers, including data quality issues, supplier transparency, and misaligned workflows, that hinder CO₂ profiling efforts. The findings emphasize the importance of dynamic data integration and cross-functional collaboration in aligning sustainability with operational and financial goals. This paper contributes to advancing sustainable product models and outlines actionable steps for organizations to embed sustainability into product lifecycle management effectively.
Download

Paper Nr: 291
Title:

Knowledge Management in Sustainable Supply Chains in a Developing Field: Case Natural Products

Authors:

Markus Heikkilä and Jyri Vilko

Abstract: Collaboration networks allow different actors inside an industry to exchange knowledge. This knowledge exchange plays an important role in innovation and industry development. Companies join collaboration networks to gain competitive advantages and to gather knowledge from other network members. Acquired knowledge can support innovation without requiring additional investments from the companies. The Finnish natural product sector is an immature industry field where the knowledge exchange inside collaboration networks has not yet been characterized. The study identifies and presents the different collaboration networks and the explicit and tacit knowledge flows between the actors. We found that collaboration between the actors is common and that there are both formal and informal networks where knowledge is exchanged. However, informal networks are more popular, and the exchanged knowledge is mostly in a tacit format. This reflects the underdevelopment of the sector, characterized by the informality of its networks and the reliance on tacit knowledge.
Download

Paper Nr: 293
Title:

Influence of Quantization of Convolutional Neural Networks on Image Classification of Pollen-Bearing Bees

Authors:

Tiago Mesquita Oliveira, José Maria Monteiro, José Wellington Franco and Javam Machado

Abstract: Automatic recognition of pollen-carrying bees can provide important information about the operating conditions of bee colonies. It can also show the intensity of pollination activity, a fundamental aspect for many plant species and of great commercial interest for agriculture. This work analyzes fourteen deep convolutional neural network models for classifying pollen-carrying and non-pollen-carrying bees in images obtained at the hive entrance. We also analyze how the quantization process influences these results. Quantization reduces inference time and model size because it performs calculations and stores numbers in a lower-precision format. Because these images are commonly obtained by embedded systems with memory and processing restrictions, quantized models can be advantageous. We show that improving the inference time and/or the model’s size is possible without decreasing the accuracy, precision, recall, F1-score, and false positive rate performance metrics.
Download

Paper Nr: 325
Title:

Business Process Modeling Techniques for Data Integration Conceptual Modeling

Authors:

Ana Ribeiro, Bruno Oliveira and Óscar Oliveira

Abstract: Data pipelines play a crucial role in analytical systems by managing the extraction, transformation, and loading of data to meet decision-making needs. Despite their importance, and owing to the inherent complexity of data management, the development of data pipelines is frequently carried out in an ad-hoc manner, lacking standardized practices that ensure consistency and coherence across implementations. In recent years, the Business Process Model and Notation (BPMN) has emerged as a powerful tool for conceptual modeling in diverse analytical and operational scenarios. BPMN offers an expressive framework capable of representing a wide range of data processing requirements, enabling structured and transparent design. This work explores the application of BPMN to data integration pipeline modeling, analyzing existing methodologies and proposing a standardized set of guidelines to enhance its use.
Download

Short Papers
Paper Nr: 19
Title:

On the Text-to-SQL Task Supported by Database Keyword Search

Authors:

Eduardo R. Nascimento, Caio Viktor S. Avila, Yenier T. Izquierdo, Grettel M. García, Lucas Feijó L. Andrade, Michelle S. P. Facina, Melissa Lemos and Marco A. Casanova

Abstract: Text-to-SQL prompt strategies based on Large Language Models (LLMs) achieve remarkable performance on well-known benchmarks. However, when applied to real-world databases, their performance is significantly lower than on these benchmarks, especially for Natural Language (NL) questions requiring complex filters and joins to be processed. This paper therefore proposes a strategy to compile NL questions into SQL queries that incorporates a dynamic few-shot examples strategy and leverages the services provided by a database keyword search (KwS) platform. The paper details how the precision and recall of the schema-linking process are improved with the help of the examples provided and the keyword-matching service that the KwS platform offers. Then, it shows how the KwS platform can be used to synthesize a view that captures the joins required to process an input NL question and thereby simplify the SQL query compilation step. The paper includes experiments with a real-world relational database to assess the performance of the proposed strategy. The experiments suggest that the strategy achieves an accuracy on the real-world relational database that surpasses state-of-the-art approaches. The paper concludes by discussing the results obtained.
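
The paper's exact prompts are not reproduced here; a generic sketch of the dynamic few-shot shape it describes might look as follows, with the example retrieval and the KwS-synthesized view left as placeholders:

```python
# Generic shape of a dynamic few-shot text-to-SQL prompt; the functions
# retrieve_similar_examples() and llm.complete() are placeholders for the
# platform services the paper relies on, not its actual API.
def build_prompt(question, schema_ddl, examples):
    shots = "\n\n".join(
        f"Question: {ex['nl']}\nSQL: {ex['sql']}" for ex in examples
    )
    return (
        "You translate natural-language questions into SQL.\n"
        f"Database schema:\n{schema_ddl}\n\n"
        f"Examples:\n{shots}\n\n"
        f"Question: {question}\nSQL:"
    )

# examples = retrieve_similar_examples(question, k=5)   # dynamic few-shot
# sql = llm.complete(build_prompt(question, schema_ddl, examples))
```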
Download

Paper Nr: 36
Title:

SIR SQL for Logical Navigation and Calculated Attribute Free Queries to Base Tables

Authors:

Witold Litwin

Abstract: SIR SQL stands for SQL with Stored and Inherited Relations (SIRs). Every SIR SQL Create Table makes definable any base attributes one could have in an SQL Create Table at present. In addition, one can define inherited attributes (IAs), definable in SQL queries or views only up to now. One may also define foreign keys (FKs) that are SQL ones or logical pointers in Codd’s original sense. IAs in SIRs with Codd’s FKs usually provide for logical navigation free (LNF) queries, i.e., without equijoins on FKs and referenced keys. SQL queries with the same outcome over the same base tables without IAs must include the LN thus avoided. A SIR SQL Create Table may in particular include IAs definable through value expressions that are also possible in SQL queries or views only up to now, usually referred to as calculated attributes (CAs). CAs may involve, e.g., attributes from different tables, aggregate functions, or sub-queries. CAs in SIRs provide for CAF queries, addressing any CAs in SIRs by name only. In contrast, every SQL query to base tables needing CAs has to fully define each of these. The end result is that most SQL base-table queries requiring LN or CA schemes at present become LNF or CAF queries in SIR SQL. The latter are usually substantially less procedural, i.e., by dozens of characters. They also become quasi-natural, i.e., with a Select clause only naming the selected attributes, a From clause naming a single base table, and a Where clause, if any, with short Boolean formulae over usual constraints on some attribute values, at worst. SIR SQL should accordingly significantly boost SQL clients’ productivity, especially since most clients are data analysts or application developers, not SQL geeks. While the problematic of LNF and CAF queries is four decades old, our solution is, to the best of our knowledge, the first practical one. Below, we illustrate the problem of LN and of CAs in queries to SQL base tables using Codd’s original Supplier-Part DB. We then present SIR SQL. We show in depth how SIR SQL LNF and CAF queries to base tables become possible. We show in particular that Create Table statements defining an SQL DB at present usually also define a SIR SQL DB, providing for LNF queries to base tables as a free bonus. We discuss the front-end for SIR SQL, which should require, for any popular SQL DBS, only a few months of implementation effort, as validated by a proof-of-concept prototype for SQLite3. We accordingly postulate upgrading every popular SQL DBS to SIR SQL. The 7+ million SQL clients worldwide of the dominant DB language, providing for a 31B+ US$ market size of SQL apps, will benefit from it.
Download

Paper Nr: 67
Title:

Connection Is all You Need! Mining and Linking Disparate Data Sources for Collaboration Network Analysis

Authors:

Benjamin Vehmeyer and Michaela Geierhos

Abstract: Business networks are a key driver of innovation and economic growth. However, a major challenge is how to discover these network relationships in heterogeneous data sources. In this paper, we present an IT artifact that unifies different data types, including patent, research funding, and publication information, into a unified graph database. This allows a comprehensive analysis of cooperation patterns. Community detection algorithms are used to identify research clusters, while centrality measures reveal key players. Visualizations facilitate the interpretation of research results and provide a user-friendly way to display data about research communities and institutional behavior. A prototype visualization of these results provides a proof of concept for the practicality of the method. The proposed design provides a robust framework for understanding the dynamics of collaborative networks.
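
Community detection and centrality analysis of the kind described are standard graph-analytics steps; here is a minimal sketch with networkx on a toy collaboration graph (the edges are assumptions):

```python
# Minimal sketch of the graph-analytics steps named in the abstract,
# using networkx on a toy collaboration graph (edges are assumptions).
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
G.add_edges_from([
    ("OrgA", "OrgB"), ("OrgB", "OrgC"),   # e.g., joint patents
    ("OrgC", "OrgD"), ("OrgA", "OrgC"),   # e.g., co-funded projects
])

clusters = list(greedy_modularity_communities(G))   # research clusters
key_players = nx.betweenness_centrality(G)          # reveals brokers
print(clusters, max(key_players, key=key_players.get))
```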
Download

Paper Nr: 69
Title:

A Survey of Evaluating AutoML and Automated Feature Engineering Tools in Modern Data Science

Authors:

Dinesha Dissanayake, Rajitha Navarathna, Praveen Ekanayake and Sumanaruban Rajadurai

Abstract: This survey provides a comprehensive comparison of several AutoML tools, along with an evaluation of three feature engineering tools: Featuretools, Autofeat, and PyCaret. We conducted a benchmarking analysis of four AutoML tools (TPOT, H2O-AutoML, PyCaret, and AutoGluon) using seven datasets sourced from OpenML and the UCI Machine Learning Repository, covering binary classification, multiclass classification, and regression tasks. Key metrics such as F1-score for classification and RMSE for regression were used to assess performance. The tools are also compared in terms of execution time, memory usage, and optimization success. AutoGluon consistently demonstrated strong predictive performance, while H2O-AutoML showed reliable results but was limited by long optimization times. PyCaret was the most efficient, showing notably shorter execution times and lower memory usage across all datasets compared to other tools, though it had slightly lower accuracy. TPOT frequently struggled to complete optimization within the set time limit, achieving successful completion in only 42.86% of total cases. Overall, this survey provides insights into which AutoML tools are best suited for different task requirements.
Download

Paper Nr: 103
Title:

Author Beta-Liouville Multinomial Allocation Model

Authors:

Faiza Tahsin, Hafsa Ennajari and Nizar Bouguila

Abstract: Conventional topic models usually presume that topics are evenly distributed among documents. Sometimes, this presumption may not be true for many real-world datasets characterized by sparse topic representation. In this paper, we present the Author Beta-Liouville Multinomial Allocation Model (ABLiMA), an innovative approach to topic modeling that incorporates the Beta-Liouville distribution to better capture the variability and sparsity of topic presence across documents. In addition to this prior flexibility, our model also leverages authorship information, leading to more coherent topic diversity. ABLiMA can represent topics that may be entirely absent or only partially present in specific documents, offering enhanced flexibility and a more realistic depiction of topic proportions in sparse datasets. Experimental results on the 20 Newsgroups and NIPS datasets demonstrate superior performance of ABLiMA compared to conventional models, suggesting its ability to model complex topics in various textual corpora. This model is particularly advantageous for analyzing text with uneven topic distributions, such as social media or short-form content, where conventional assumptions often fall short.
Download

Paper Nr: 108
Title:

Optimizing Edge-Based Query Processing for Real-Time Applications

Authors:

Kalgi Gandhi and Minal Bhise

Abstract: The rapid growth of edge devices in large-scale systems presents challenges due to limited processing power, memory, and bandwidth. Efficient resource utilization and data management during query processing are critical, especially for costly join operations. The Column Imprint-Hash Join (CI-HJ) accelerates hash joins using equi-height binning but lacks real-time efficiency and scans unnecessary cachelines. This paper introduces the Workload Aware Column Imprint-Hash Join (WACI-HJ), a novel approach that leverages workload prediction to optimize hash joins for real-time edge query processing. WACI-HJ comprises two phases: the WACI-HJ Generation Phase predicts query workloads and pre-processes data into bins using blocking and hashing techniques, reducing overhead before query arrival. The Query Processing and Resource Utilization Phase efficiently utilizes CPU, RAM, and I/O resources for runtime processing. Evaluations using benchmark and real-world datasets demonstrate significant improvements in the Percentage of Cachelines Read (PCR), Query Execution Time (QET), and resource utilization. PCR and QET show 18% and 5% improvements, respectively. The proposed technique has been demonstrated to work well for scaled and skewed data. Although PCR is an indirect measure of energy consumption, direct energy-efficiency experiments reveal gains of 1%, 23%, and 18% in CPU, RAM, and I/O utilization, respectively. WACI-HJ provides an optimal and sustainable solution for edge database management.
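
WACI-HJ's internals are not spelled out in the abstract; the sketch below only illustrates the generic two-phase pattern it describes, pre-binning the build side via blocking and hashing before queries arrive, then probing bin-by-bin at runtime (not the authors' exact algorithm):

```python
# Generic two-phase hash-join pattern: pre-bin the build side before
# queries arrive, then probe only the bin a key hashes to at runtime.
from collections import defaultdict

def build_bins(rows, key, n_bins=8):
    # Generation phase: block and hash the build side ahead of queries.
    bins = [defaultdict(list) for _ in range(n_bins)]
    for row in rows:
        bins[hash(row[key]) % n_bins][row[key]].append(row)
    return bins

def probe(bins, rows, key):
    # Query-processing phase: touch a single bin per probe key.
    n_bins = len(bins)
    for row in rows:
        k = row[key]
        for match in bins[hash(k) % n_bins].get(k, []):
            yield {**match, **row}

left = [{"id": 1, "a": "x"}, {"id": 2, "a": "y"}]
right = [{"id": 1, "b": "z"}]
print(list(probe(build_bins(left, "id"), right, "id")))
```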
Download

Paper Nr: 110
Title:

Process Mining for Demographic Insights: A Subpopulation Analysis in Healthcare Pathways

Authors:

Priya Naguine, Faiza Bukhsh, Jeewanie Jayasinghe Arachchige and Rob Bemthuis

Abstract: Demographic variations in healthcare pathways are key for delivering effective and equitable patient care. Examining pathway differences across age and gender groups can help uncover demographic-specific disparities in care delivery. In this paper, we demonstrate the use of the Process Mining Project Methodology in Healthcare (PM2HC) for the subpopulation-based analysis of treatment pathways, using process mining techniques. We validate this methodology through a case study on frozen shoulder treatment using the MIMIC-IV dataset. Key findings reveal distinct procedural sequences for male and female patients, as well as notable age-based variations in treatment choices and timelines. These insights underscore the influence of demographic factors on healthcare processes. Expert evaluations further highlight the practicality of the methodology and its potential to guide targeted interventions that address various patient needs, thus enhancing personalized care. This work contributes to clinical research and practice by identifying inefficiencies and informing tailored interventions. Future efforts will extend the methodology to other medical conditions and integrate multi-institutional data for broader applicability. By advancing process mining in healthcare, this research provides insight into improving patient care and addressing demographic diversity.
Download

Paper Nr: 125
Title:

Handling Inconsistent Government Data: From Acquisition to Entity Name Matching and Address Standardization

Authors:

Davyson S. Ribeiro, Paulo V. A. Fabrício, Rafael R. Pereira, Tales P. Nogueira, Pedro A. M. Oliveira, Victória T. Oliveira, Ismayle S. Santos and Rossana M. C. Andrade

Abstract: The integration of Data Science and Big Data is essential for managing large-scale data, but challenges such as heterogeneity, inconsistency, and data enrichment complicate this process. This paper presents a flexible architecture designed to support municipal decision-making by integrating data from multiple sources. To address inconsistencies, an entity matching algorithm was implemented, along with an address standardization library, optimizing data processing without compromising quality. The study also evaluates data acquisition methods (APIs, Web Crawlers, HTTPS requests), highlighting their trade-offs. Finally, we demonstrate the system’s practical impact through a case study on health data monitoring, showcasing its role in enhancing data-driven governance.
Download

Paper Nr: 155
Title:

Knowledge Reinjection Policies and Machine Learning Integration in CWM-Based Complex Data Warehouses

Authors:

Fabrice Razafindraibe, Jean Christian Ralaivao, Angelo Raherinirina and Hasina Rakotonirainy

Abstract: This article discusses the growing complexity of data warehousing systems and the need for enhanced frameworks that can effectively manage metadata and knowledge simultaneously. While the Common Warehouse Metamodel (CWM) provides a standardized method for metadata management, its semantic limitations hinder its use in complex environments. To overcome these shortcomings, the paper proposes an extended CWM framework that incorporates ontologies, machine learning, and knowledge re-injection policies. This new framework introduces additional layers and components, such as a 'learning package' and advanced knowledge mapping, to improve semantic interoperability, adaptability, and usability. The research also explores the integration of hybrid AI systems that use both inductive and deductive methods to facilitate knowledge discovery and improve decision-making.
Download

Paper Nr: 179
Title:

Prompt-Driven Time Series Forecasting with Large Language Models

Authors:

Zairo Bastos, João David Freitas, José Wellington Franco and Carlos Caminha

Abstract: Time series forecasting with machine learning is critical across various fields, with ensemble models and neural networks commonly used to predict future values. LSTM and Transformer architectures excel in modeling complex patterns, while Random Forest has shown strong performance in univariate time series forecasting. With the advent of Large Language Models (LLMs), new opportunities arise for their application in time series prediction. This study compares the forecasting performance of Gemini 1.5 PRO against Random Forest and LSTM using 40 time series from the Retail and Mobility domains, totaling 65,940 time units, evaluated with SMAPE. Results indicate that Gemini 1.5 PRO outperforms LSTM by approximately 4% in Retail and 6.5% in Mobility, though it underperforms Random Forest by 5.5% in Retail and 1% in Mobility. In addition to this comparative analysis, the article contributes a novel prompt template designed specifically for time series forecasting, providing a practical tool for future research and applications.
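
For reference, the SMAPE metric used for evaluation is commonly defined as follows (a standard formulation; the paper may use a variant):

```latex
\mathrm{SMAPE} = \frac{100\%}{n} \sum_{t=1}^{n} \frac{\lvert F_t - A_t \rvert}{\left(\lvert A_t \rvert + \lvert F_t \rvert\right)/2}
```

where F_t denotes the forecast and A_t the actual value at time t.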
Download

Paper Nr: 181
Title:

Using Graph Convolutional Networks to Rank Rules in Associative Classifiers

Authors:

Maicon Dall’Agnol, Veronica Oliveira de Carvalho and Daniel Carlos Guimarães Pedronette

Abstract: Associative classifiers are a class of algorithms that have been used in diverse domains due to their inherent interpretability. For models to be induced, a sequence of steps is necessary, one of which is aimed at ranking a set of rules. This sorting usually occurs through objective measures, more specifically through confidence and support. However, as many measures exist, new ranking methods have emerged with the aim of (i) using a set of them simultaneously, so that each measure can contribute to identify the most important rules and (ii) inducing models that present a good balance between performance and interpretability in relation to some baseline. This work also presents a method for ranking rules considering the same goals ((i);(ii)). This new method, named AC.RANKGCN, is based on ideas from previous works to improve the results obtained so far. To this end, ranking is performed using a graph convolutional network in a semi-supervised approach and, thus, the importance of a rule is evaluated not only in relation to the values of its OMs, but also in relation to its neighboring rules (neighborhood) considering the network topology and a set of features. The results demonstrate that AC.RANKGCN outperforms previous results.
Download

Paper Nr: 241
Title:

A Hybrid Music Recommendation System Based on K-Means Clustering and Multilayer Perceptron

Authors:

Rafael Cintra de Araújo, Victor Moisés Silveira Santos, João Fausto Lorenzato de Oliveira and Alexandre M. A. Maciel

Abstract: Music recommendation systems have become indispensable tools for enhancing user experiences by offering personalized playlists tailored to individual preferences. However, traditional recommendation approaches often struggle with challenges such as accurately capturing user tastes, maintaining diversity in recommendations, and addressing the cold-start problem, where limited user data hampers effective predictions. To address these issues, this study presents a hybrid recommendation model that integrates K-Means clustering and a Multilayer Perceptron (MLP) neural network to deliver coherent and diverse music recommendations. The model utilizes the all-MiniLM-L6-v2 embedding, a powerful sentence-transformer, to analyze semantic similarities in textual data such as song titles, artist names, and lyrics, encoding them into a dense vector space. Combined with normalized audio features, these embeddings enable clustering and similarity-based recommendations. Extensive experiments, conducted on datasets from Spotify and Kaggle, employed key metrics such as accuracy, F1 score, silhouette score, and cosine similarity to evaluate performance. The results highlight the system’s ability to maintain genre coherence and acoustic feature consistency, minimize track repetition, and foster user engagement. Addressing challenges like the cold-start problem and diverse user preferences, the proposed model demonstrates its potential for real-world applications. Future extensions include incorporating user feedback and supporting multi-session recommendations to adapt to evolving music trends, offering a robust and innovative approach to music recommendation systems.
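
A minimal sketch of the clustering stage as described, encoding track text with all-MiniLM-L6-v2 and clustering alongside normalized audio features (the feature values are toy assumptions, and the MLP ranking stage is omitted):

```python
# Sketch of the clustering stage described in the abstract: encode song
# text with all-MiniLM-L6-v2, combine with normalized audio features, and
# cluster with K-Means. Feature columns are illustrative assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
from sklearn.preprocessing import MinMaxScaler

texts = ["Song Title A - Artist A - first lyric line",
         "Song Title B - Artist B - first lyric line"]
audio = np.array([[0.8, 0.4, 0.7],      # e.g., energy, valence, tempo
                  [0.2, 0.9, 0.3]])

emb = SentenceTransformer("all-MiniLM-L6-v2").encode(texts)
features = np.hstack([emb, MinMaxScaler().fit_transform(audio)])
labels = KMeans(n_clusters=2, n_init="auto").fit_predict(features)
```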
Download

Paper Nr: 242
Title:

Text-to-SQL Experiments with Engineering Data Extracted from CAD Files

Authors:

Júlio G. Campos, Grettel M. García, Jefferson A. de Sousa, Eduardo T. L. Corseuil, Yenier T. Izquierdo, Melissa Lemos and Marco A. Casanova

Abstract: The development of Natural Language (NL) interfaces to access relational databases attracted renewed interest with the use of Large Language Models (LLMs) to translate NL questions to SQL queries. This translation task is often referred to as text-to-SQL, a problem far from being solved for real-world databases. This paper addresses the text-to-SQL task for a specific type of real-world relational database storing data extracted from engineering CAD files. The paper introduces a prompt strategy tuned to the text-to-SQL task over such databases and presents a performance analysis of LLMs of different sizes. The experiments indicated that GPT-4o achieved the highest accuracy (96%), followed by Llama 3.1 70B Instruct (86%). Quantized versions of Gemma 2 27B and Llama 3.1 8B had a very limited performance. The main challenges faced in the text-to-SQL task involved SQL complexity and balancing speed and accuracy when using quantized open-source models.
Download

Paper Nr: 264
Title:

Towards Big OLAP Data Cube Classification Methodologies: The ClassCube Framework

Authors:

Alfredo Cuzzocrea and Mojtaba Hajian

Abstract: Focusing on the emerging big data analytics scenario, this paper introduces ClassCube, an innovative methodology that combines OLAP analysis and classification algorithms for improving effectiveness, expressive power and accuracy of the main classification task over big datasets shaped in the form of big OLAP data cubes. The key idea of ClassCube relies on dimensionality reduction tools, which are deeply investigated in this paper.
Download

Paper Nr: 272
Title:

Economic Token Models in ReFi Projects: Token Design and Incentive Mechanisms Analysis

Authors:

Julia Staszczak, Mariusz Nowostawski and Patrick Mikalef

Abstract: As cryptocurrency markets continue to captivate attention promising quick financial gains, it becomes increasingly important to critically examine blockchain-based projects that attract significant investments. This study provides insights into evaluating the viability of projects by analyzing and categorizing token attributes through the application of a token morphological framework. This serves as a structured examination of key parameters - purpose, governance, functional, and technical - to understand how different design aspects interact within each project and influence long-term success. We explore sustainability-oriented projects within the field of Regenerative Finance (ReFi) being a growing dimension of blockchain innovation that integrates financial systems with ecological and social regeneration. The focused approach of limiting the scope to three case studies ensures a deeper analysis and provides clarity in understanding the nuances of token design while also identifying possible patterns across projects. Hence, we define token archetypes offering valuable insights into how variations in token structure influence governance, user incentives, and economic viability, extending micro-level perspective to broader economic dynamics. This study sheds light on ownership and governance structures, token supply models, mechanisms for incentivizing participation while limiting and mitigating speculative behavior, and mechanisms for token removal from circulations. Understanding these aspects allow for shaping more impactful and resilient token economies and provides actionable insights that can inform future projects, making it relevant for both academic and practical implications. This comparative analysis contributes to the theoretical development of tokenomics by offering a clearer understanding of how different token structures align with organizational goals and community dynamics. In doing so, it bridges theoretical insights with practical applications.
Download

Paper Nr: 278
Title:

Data Governance in Education: Addressing Challenges and Unlocking Opportunities for Effective Data Management

Authors:

Thiago Medeiros, André Araújo, José Silva and Alenilton Silva

Abstract: This study investigates the pivotal role of data governance in driving digital transformation within the education sector. It highlights the importance of improving data quality, security, interoperability, and integration to establish efficient and transparent educational data ecosystems. The analysis reveals significant challenges, including fragmented adoption of governance practices, the absence of tailored public policies, and a lack of standardized metrics to measure governance impacts. Additional barriers, such as compatibility issues with legacy systems and insufficient technical training, hinder the effective implementation of data governance strategies. This research emphasizes the need for collaborative and interdisciplinary efforts to address these challenges, advocating for developing practical, scalable, and context-specific solutions. By tackling these issues, data governance can be firmly established as a cornerstone for innovation, improved decision-making, and enhanced transparency and equity in the education sector, ultimately supporting its digital transformation and long-term sustainability.
Download

Paper Nr: 298
Title:

Examining the Impact of Cloud Computing on Organizational Performance: A Systematic Literature Review

Authors:

Vincent Donat, Christian Haertel, Daniel Staegemann, Christian Daase, Matthias Pohl, Dirk Dreschel, Damanpreet Singh Walia and Klaus Turowski

Abstract: Cloud computing has taken a pivotal role in modern business operations, offering convenient and flexible access to IT resources. Accordingly, this study investigates the impact of cloud computing on organizational performance. A systematic literature review identified 31 relevant papers. The analysis underscores the diverse benefits of cloud computing adoption across various facets, including financial and product market performance, organizational agility, productivity, innovation, sustainability, and supply chain performance. This review further discusses challenges and gaps, highlighting the need for future research in this area.
Download

Paper Nr: 315
Title:

Evaluating the Use of Open-Source and Standalone SAST Tools for Detecting Vulnerabilities in C/C++ Projects

Authors:

Valdeclébio Farrapo, Emanuel Rodrigues, José Maria Monteiro and Javam Machado

Abstract: Detecting security vulnerabilities in the source code of software systems is one of the most significant challenges in the field of information security. In this context, the Open Web Application Security Project (OWASP) defines Static Application Security Testing (SAST) tools as those capable of statically analyzing the source code, without executing it, to identify security vulnerabilities, bugs, and code smells during the coding phase, when it is relatively inexpensive to detect and resolve security issues. However, most well-known SAST tools are commercial and web-based, requiring the upload of the source code to a “trusted” remote server. In this paper, our goal is to investigate the viability of using open-source standalone SAST tools for detecting security vulnerabilities in C/C++ projects. To achieve our goal, we conduct an empirical study in which we examine 30 large and popular C/C++ projects using two different state-of-the-art open-source and standalone SAST tools. The results demonstrate the potential of using open-source standalone SAST tools as a means to evaluate the security risks of a software product without manually reviewing all the warnings.
Download

Paper Nr: 316
Title:

EM-Join: Efficient Entity Matching Using Embedding-Based Similarity Join

Authors:

Douglas Rolins Santana, Paulo Henrique Santos Lima and Leonardo Andrade Ribeiro

Abstract: Entity matching in textual data remains a challenging task due to variations in data representation and the computational cost. In this paper, we propose an efficient pipeline for entity matching that combines text preprocessing, embedding-based data representation, and similarity joins with a heuristic-driven method for threshold selection. Our approach simplifies the matching process by concatenating attribute values and leveraging specialized language models for generating embeddings, followed by a fast similarity join evaluation. We compare our method against state-of-the-art techniques, namely Ditto, Ember, and DeepMatcher, across 13 publicly available datasets. Our solution achieves superior performance in 3 datasets while maintaining competitive accuracy in the others, and it significantly reduces execution time—up to 3x faster than Ditto. The results obtained demonstrate the potential for high-speed, scalable entity matching in practical applications.
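
A condensed sketch of the described pipeline, concatenating attribute values, embedding them, and applying a thresholded cosine-similarity join; the embedding model and the fixed threshold here stand in for the paper's specialized models and heuristic-driven threshold selection:

```python
# Sketch of the EM-Join pipeline shape: concatenate attribute values per
# record, embed them, and run a thresholded cosine-similarity join. The
# model name and threshold are placeholder assumptions.
from sentence_transformers import SentenceTransformer, util

def concat(record):                       # attribute concatenation
    return " ".join(str(v) for v in record.values())

a = [{"name": "iPhone 13 128GB", "brand": "Apple"}]
b = [{"title": "Apple iPhone13 128 GB Black"}]

model = SentenceTransformer("all-MiniLM-L6-v2")
ea = model.encode([concat(r) for r in a])
eb = model.encode([concat(r) for r in b])
sims = util.cos_sim(ea, eb)
matches = [(i, j) for i in range(len(a)) for j in range(len(b))
           if sims[i][j] >= 0.8]          # placeholder threshold
```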
Download

Paper Nr: 322
Title:

From Collection to Analysis: A Blockchain Solution for Transparent and Reliable Chain of Custody in the O&G Sector

Authors:

Theo Caldas, Ana Lara Mangeth, Yang Ricardo Miranda, Paulo Henrique Alves, Rafael Nasser, Gustavo Robichez, Gil Marcio Silva and Fernando Pellon de Miranda

Abstract: The oil and gas (O&G) sector relies on robust chain of custody mechanisms to ensure the transparency, integrity, and traceability of materials and environmental evidence. Traditional custody systems often suffer from inefficiencies, data fragmentation, and vulnerabilities to unauthorized alterations. This paper presents CustódiaBR, a blockchain-based solution designed to enhance the registration and monitoring of oily waste and oiled fauna samples collected during Petrobras’s Beach Monitoring Projects (Projetos de Monitoramento de Praias - PMPs). CustódiaBR integrates real-time data from a centralized monitoring system and leverages the Brazilian Blockchain Network (RBB) to provide a transparent, immutable, and auditable custody record. The proposed system employs a hybrid on-chain/off-chain architecture, composed of five major components, ensuring data integrity while preserving confidentiality through cryptographic hash verification. Through a comparative analysis with existing blockchain-based forensic solutions, this study highlights the advantages of public-permissioned blockchain in industrial applications, demonstrating how CustódiaBR can serve as a model for digital chain of custody systems.
Download

Paper Nr: 26
Title:

Uniting McDonald’s Beta and Liouville Distributions to Empower Anomaly Detection

Authors:

Oussama Sghaier, Manar Amayri and Nizar Bouguila

Abstract: In this paper, we examine the McDonald’s Beta-Liouville distribution, a new distribution that combines the key features of the Liouville and McDonald’s Beta distributions, in order to address the issue of anomaly identification in proportional data. Its primary advantages over the standard distributions for proportional data, including the Dirichlet and Beta-Liouville, are its flexibility and explanatory capacity when working with this type of data, thanks to its wider variety of parameters. We provide two discriminative methods: a feature mapping approach to improve Support Vector Machines (SVMs) and normality scores based on choosing a specific distribution to approximate the softmax output vector of a deep classifier. We illustrate the advantages of the proposed methods with several tests on image and non-image datasets. The findings show that the suggested anomaly detectors, which are based on the McDonald’s Beta-Liouville distribution, perform better than baseline methods and classical distributions.
Download

Paper Nr: 50
Title:

Digital Platform-Based Value Creation in Micro-Enterprise Networks

Authors:

Kaisa Liukkonen, Otto Nokkala and Jyri Vilko

Abstract: Digital platform-based value creation allows micro-enterprises to cost-effectively share information within their value networks. However, these enterprises face challenges due to limited resources and skills, making network participation crucial for value creation. Digital platforms are generally accessible, offering micro-enterprises practical tools for enhancing business operations and value creation. This study aims to examine the value networks in the non-timber forest sector, exploring how sector participants create value and the digital platforms they use. Conducted as qualitative research, the study combines a literature review with empirical data. Findings reveal that digital platforms, including websites, social media, and communication platforms, were primarily used for sales, marketing, and communication. However, digital platform use and perceived value creation were low; companies attributed this to resource constraints and satisfaction with current customer numbers.
Download

Paper Nr: 70
Title:

Exploring Links Between Social Media Habits, Loneliness, and Sleep: A Formal Concept Analysis Approach

Authors:

Fernanda M. Gomes, Julio C. V. Neves, Luis Enrique Zárate Gálvez and Mark Alan Junho Song

Abstract: Social media platforms have reshaped personal interactions, allowing engagement with diverse audiences. However, growing evidence suggests that these platforms may also contribute to mental health challenges. This paper investigates the associations between social media usage patterns, loneliness, and sleep quality, using Formal Concept Analysis (FCA) on data from a sample in Bangladesh. The dataset includes information on social media habits, loneliness, anxiety, depression, and sleep disturbances, using metrics from validated psychological scales. Through FCA, this study extracted implication rules that describe how specific social media usage behaviors relate to feelings of loneliness and sleep issues. Findings show that individuals with high levels of social media engagement report shorter sleep durations and heightened symptoms of loneliness. FCA is used in this study to uncover non-obvious relationships within complex datasets, making it a valuable approach for analyzing patterns between social media behaviors and mental health outcomes.
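
FCA derives such implication rules from a binary object-attribute context; below is a minimal sketch of the underlying derivation operators on a toy context (the attributes are illustrative, not the study's actual variables):

```python
# Minimal formal-context derivation operators, the building blocks FCA
# uses to mine implication rules. The toy context (respondents x
# attributes) is an assumption, not the study's data.
context = {
    "r1": {"heavy_use", "lonely", "short_sleep"},
    "r2": {"heavy_use", "lonely"},
    "r3": {"light_use"},
}

def extent(attrs):   # objects sharing all given attributes
    return {o for o, a in context.items() if attrs <= a}

def intent(objs):    # attributes shared by all given objects
    sets = [context[o] for o in objs]
    return set.intersection(*sets) if sets else set()

# Implication "heavy_use -> lonely" holds iff the closure of the premise
# contains the conclusion:
premise = {"heavy_use"}
print("lonely" in intent(extent(premise)))   # True on this toy context
```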
Download

Paper Nr: 83
Title:

Domain Ontology for Semantic Mediation in the Data Science Process

Authors:

Silvia Lucia Borowicc and Solange Nice Alves-Souza

Abstract: The integration of heterogeneous data sources is a persistent challenge in public health. As regards dengue and other arboviral diseases, data collected over many years by various organizations are fragmented across heterogeneous, siloed databases, lacking semantically consistent integration for effective decision-making in health crises. These organizations operate autonomously but must collaborate to integrate data for a unified understanding of the domain of knowledge. However, standardizing infrastructure or unifying systems is often unfeasible. This study proposes the incorporation of semantic mediation into the data science process, introducing an innovative approach for data mapping that preserves the autonomy of data providers while avoiding interference with their existing infrastructure or systems. The goal is to streamline the integration and analysis of distributed, heterogeneous datasets by applying domain ontologies within an iterative data science process. Unlike traditional approaches, which perform data mapping in later stages, our approach advances this step to the definition phase, providing benefits such as early standardization, greater efficiency and error reduction. The methodology includes a collaborative workflow for constructing a modular domain ontology that will support the data mapping from data sources to a global RDF ontology-based data model. This approach fosters expert involvement and accommodates evolving domain knowledge. The results demonstrate that semantic mediation enables the consolidation of semantically enriched data, enhancing the understanding of outcomes in the decision-making process, in a scalable process for integrating public health data in epidemiological monitoring and response contexts.
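
A small sketch of the mediation step, mapping a heterogeneous source record into a global RDF model with rdflib; the namespace and property names are illustrative assumptions, not the authors' domain ontology:

```python
# Sketch of ontology-based mediation: map a source record to a global RDF
# model using rdflib. Namespace and properties are hypothetical.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/arbovirus#")
g = Graph()
record = {"case_id": "c42", "disease": "dengue", "city": "Sao Paulo"}

case = EX[record["case_id"]]
g.add((case, RDF.type, EX.Case))
g.add((case, EX.disease, Literal(record["disease"])))
g.add((case, EX.reportedIn, Literal(record["city"])))
print(g.serialize(format="turtle"))
```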
Download

Paper Nr: 102
Title:

Unsupervised Motif and Discord Discovery in ECG

Authors:

Lucas Peres, Livia Almada Cruz, Ticiana Coelho da Silva, Regis Pires Magalhães, João Paulo Madeiro and José Macêdo

Abstract: Cardiovascular disease stands as the leading global cause of morbidity and mortality. Electrocardiograms (ECGs) are among the most effective tools for detecting arrhythmia and other cardiovascular diseases, and they also serve other applications such as emotion recognition and stress level stratification. ECG-based diagnosis relies on specialized physicians manually exploring the whole signal. This paper presents an unsupervised solution for ECG analysis, obviating the need for specialists to manually run over the entire dataset to identify representative segments (motifs) or non-repeated patterns (discords). The method was evaluated on an open dataset and showed promising results.
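
The abstract does not name its algorithm; the matrix profile is one standard unsupervised route to motifs and discords, sketched here with stumpy on a synthetic stand-in signal:

```python
# Matrix-profile-based motif and discord discovery via stumpy, shown as a
# common unsupervised approach (not necessarily the paper's method); the
# signal is a synthetic stand-in for an ECG.
import numpy as np
import stumpy

ecg = np.random.randn(2000)          # stand-in for an ECG signal
m = 200                              # subsequence length (~one beat)
mp = stumpy.stump(ecg, m)            # matrix profile

profile = mp[:, 0].astype(float)
motif_idx = int(np.argmin(profile))     # most repeated pattern
discord_idx = int(np.argmax(profile))   # most anomalous pattern
```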
Download

Paper Nr: 114
Title:

Proposal for a Common Conceptual Schema for Metadata Integration and Reuse Across Diverse Scientific Domains

Authors:

Marcos Sfair Sunye, Karolayne Costa Rodrigues de Lima, Alberto Abelló and Elisabete Ferreira

Abstract: This paper addresses challenges in reusing scientific data, highlighting the necessity for enhanced metadata standards to facilitate data integration across various scientific domains. The primary contribution is a standardized conceptual schema for metadata, designed to improve data integration and reuse. This schema outlines the phenomena and methods in scientific research to enable better identification and integration of datasets. The schema was synthesized by analyzing four well-established metadata standards: DataCite, Darwin Core, Ecological Metadata Language, and Dublin Core. This analysis aimed to identify semantic correspondences among these standards to create a unified conceptual schema. The analysis validated the practicality of the proposed schema. More than 11% of the 926 metadata elements analyzed were successfully integrated, demonstrating significant potential to improve data integration across different scientific domains. The proposed common conceptual schema for metadata improves data integration and reuse in scientific research. By introducing structured descriptions of phenomena and methods, it facilitates the discovery of relationships between datasets, promoting integrated and interdisciplinary reuse without additional metadata creation costs.
Download

Paper Nr: 147
Title:

Development of a Big Data Mechanism for AutoML

Authors:

Roberto Sá Barreto Paiva da Cunha, Jairson Barbosa Rodrigues and Alexandre M. A. Maciel

Abstract: This paper introduces the development of an AutoML mechanism explicitly designed for large-scale data processing. First, the paper presents a comprehensive technological benchmark of current AutoML frameworks. Based on the gaps found, the paper proposes integrating consolidated Big Data technologies into an open-source AutoML framework, emphasizing enhanced usability and scalability in processing capabilities. The entire methodology of this paper was based on Design Science Research (DSR), commonly used in studies that seek to develop innovative artifacts, such as systems, methods, or theoretical models, to address practical challenges. The developed architecture enhances the FMD (Framework of Data Mining) AutoML tool. This integration allowed the efficient management of large datasets and supported distributed training of machine learning algorithms. An expert opinion evaluation demonstrated the effectiveness in reducing the learning curve for non-experts and improving scalability and data handling. Integration tests were adopted to validate all FMD components. This work significantly advanced FMD by broadening its applicability to large datasets and various domains while making open-source collaboration and ongoing innovation possible.
Download

Paper Nr: 217
Title:

Empowering Pharmaceutical Retail Storefronts: An Exploratory Study on Classification and Association Techniques

Authors:

Humberto Tozetti Carlos, Luciana Lee and Mateus Barcellos Costa

Abstract: This work presents a study of association and classification algorithms to support sales in retail stores through recommendation systems. The study aimed to evaluate these algorithms in terms of their ability to provide contextual information relevant to sales in retail storefronts. To achieve this goal, two primary objectives were defined. The first was to explore methods for relating sales items. For this approach, experiments were conducted using association rule and clustering algorithms. The second objective was to evaluate the capability of classification algorithms to identify classes of interest present within the data universe. The experiments utilized a dataset from the pharmaceutical sector. In the case of association rule algorithms, given the absence of data to enable recommendations based on collaborative filtering, the purpose was to identify patterns of item associations derived from customer shopping basket data. For the classification algorithms, the goal was to identify sales with and without medical prescriptions, a fundamental aspect for monitoring consumer behavior regarding drug use. For identifying sales with medical prescriptions, the Multilayer Perceptron algorithm achieved the best results. For predicting items based on the shopping basket, the best results were obtained by the combined use of the K-Means, K-Prototype, and FP-Growth algorithms.
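
A minimal sketch of the basket-mining step with FP-Growth via mlxtend (the baskets are invented, and the K-Means/K-Prototype stages the paper combines it with are omitted):

```python
# FP-Growth association-rule mining on toy pharmacy baskets via mlxtend;
# basket contents and thresholds are illustrative assumptions.
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import fpgrowth, association_rules

baskets = [["analgesic", "vitamin_c"], ["analgesic", "antacid"],
           ["analgesic", "vitamin_c", "antacid"]]

te = TransactionEncoder()
df = pd.DataFrame(te.fit(baskets).transform(baskets), columns=te.columns_)
itemsets = fpgrowth(df, min_support=0.5, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.6)
```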
Download

Paper Nr: 313
Title:

TGAN and CTGAN: A Comparative Analysis for Augmenting COVID-19 Tabular Data

Authors:

Eman Kamal Al-Bwana, Mohammad Alauthman, Ikbel Sayahi and Mohamed Ali Mahjoub

Abstract: The discovery of COVID-19 has drawn attention to the need for relatively fast and accurate diagnostic solutions for clinical applications. However, the creation of high-quality AI systems is often hampered by the lack of sufficient amounts of suitable reference datasets. Therefore, GANs have emerged as useful tools to address this challenge through synthetic data. Building on our previous work on conditional tabular GANs (CTGANs), this study proposes a novel TGAN architecture for augmenting tabular COVID-19 data. To evaluate the performance of TGAN-based augmentation, we conduct extensive tests to compare its performance with CTGAN while using several machine learning classifiers for prediction. The results on evaluation criteria such as precision, accuracy, recall, F-measure, and ROC AUC show that the proposed TGAN outperforms CTGAN. It is worth noting that the logistic regression classifier achieves a test accuracy of 0.746, precision of 0.734, and recall of 0.928 when trained on the TGAN-augmented dataset, which is higher than those on the original and CTGAN-augmented datasets. In addition, the augmentation range was optimal at 100%, balancing performance against the risk of overfitting. The developed TGAN method provides an effective tool for generating synthetic samples that describe the data distribution and improve COVID-19 diagnostic models. This study demonstrates the feasibility of TGAN-based data augmentation in overcoming data shortage issues by creating efficient and reliable AI systems to support clinical decisions regarding upcoming pandemics.
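
For the CTGAN baseline, the ctgan package offers a reference implementation; a sketch of 100% augmentation as evaluated in the study follows (file and column names are hypothetical, and the paper's proposed TGAN architecture is not shown):

```python
# Tabular augmentation with the CTGAN baseline via the ctgan package;
# dataset file and column names are hypothetical.
import pandas as pd
from ctgan import CTGAN

df = pd.read_csv("covid_cases.csv")            # hypothetical dataset
discrete = ["sex", "outcome"]                  # categorical columns

model = CTGAN(epochs=300)
model.fit(df, discrete_columns=discrete)
synthetic = model.sample(len(df))              # 100% augmentation
augmented = pd.concat([df, synthetic], ignore_index=True)
```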
Download

Area 2 - Artificial Intelligence and Decision Support Systems

Full Papers
Paper Nr: 15
Title:

Enhancing Post-Incarceration Support: A Custom Chatbot Solution for the Brazilian Prison System

Authors:

Geovana Ramos Sousa Silva, Lurian Correia Lima, Guilherme Pereira Paiva and Edna Dias Canedo

Abstract: This paper presents the development and implementation of a dedicated chatbot to assist former inmates of the Brazilian prison system, integrated into the ESVirtual application. This initiative addresses the multifaceted challenges faced by this vulnerable group by providing a comprehensive and sustainable approach to support their social reintegration. The proposed architecture leverages Docker Compose, Rasa NLU, Stories, and Actions, creating a robust and scalable framework capable of understanding natural language, adapting to diverse interaction scenarios, and executing customized actions. By integrating the chatbot into ESVirtual, we enhance its utility and accessibility, offering former inmates a reliable and accessible channel for obtaining information and support. The decision to manually craft the chatbot’s content, rather than using generative AI, ensures the accuracy, relevance, and reliability of the provided information, allowing for quick adaptation to changes in policies, legislation, and available services. Socially, this project aims to significantly contribute to the reintegration of former inmates into society, reducing recidivism rates and fostering a more just and inclusive community. By providing essential support and resources, the chatbot empowers individuals to overcome the challenges they face after leaving the prison system and to build dignified and productive lives. This work underscores the potential of technology and innovation to promote social well-being and justice, marking a significant step towards a more humanized approach to the reintegration of former inmates in Brazil.
Download

Paper Nr: 46
Title:

Deep Reinforcement Learning for Selecting the Optimal Hybrid Cloud Placement Combination of Standard Enterprise IT Applications

Authors:

André Hardt, Abdulrahman Nahhas, Hendrik Müller and Klaus Turowski

Abstract: Making the right placement decision for large IT landscapes of enterprise applications in hybrid cloud environments can be challenging. In this work, we concentrate on deriving the best placement combination for standard enterprise IT landscapes with the specific use case of SAP-based systems, based on real-world performance metrics. The quality of the placement decision is evaluated on the basis of required capacities, costs, various functional and business requirements, and constraints. We approach the problem through the use of deep reinforcement learning (DRL) and present two possible environment designs that allow the DRL algorithm to solve the problem. In the first proposed design, the placement decision for all systems in the IT landscape is performed at the same time, while the second solves the problem sequentially by placing one system at a time. We evaluate the viability of both designs with three baseline DRL algorithms: DQN, PPO, and A2C. The algorithms were able to successfully explore and solve the designed environments. We discuss the potential performance advantages of the first design over the second but also note its challenges of scalability and compatibility with various types of DRL algorithms.
Download

Paper Nr: 62
Title:

Optimizing Truck Flow in Ship Unloading: A Real-Time Simulation Approach for the Port of Itaqui

Authors:

Carlos Eduardo V. Gomes, Victor José B. A. Martinez, João Augusto F. N. de Carvalho, Francisco Glaubos Nunes Clímaco, Geraldo Braz Júnior, Tiago Bonini Borchatt and João Dallyson Sousa de Almeida

Abstract: Truck congestion in port yards is a significant issue that impacts the operational efficiency and competitiveness of port terminals. This study examines the logistics flow dynamics in the ship unloading area of the Port of Itaqui. Through modeling and simulation methodologies, it aims to identify bottlenecks and optimize operations. We collected and analyzed port data to establish simulation parameters, which were categorized into various groups, including system administration, truck flows, and required resources. Based on the conceptual modeling of truck and ship movements, we developed a simulation system that visualizes the effects of different operational decisions. The results of the simulation are crucial for identifying critical points and formulating strategies to reduce congestion.
Download

Paper Nr: 63
Title:

Automatic Lead Qualification Based on Opinion Mining in CRM Projects: An Experimental Study Using Social Media

Authors:

Victor Hugo Ferrari Canêdo Radich, Tania Basso and Regina Lucia de Oliveira Moraes

Abstract: Lead qualification is one of the main procedures in Customer Relationship Management (CRM) projects. Its main goal is to identify potential consumers who have the ideal characteristics to establish a profitable and long-term relationship with a certain organization. Social networks can be an important source of data for identifying and qualifying leads, since interest in specific products or services can be identified from the users’ expressed feelings of (dis)satisfaction. In this context, this work proposes the use of machine learning techniques and sentiment analysis as an extra step in the lead qualification process in order to improve it. In addition to machine learning models, sentiment analysis, also called opinion mining, can be used to understand the evaluation that the user makes of a particular service, product, or brand. The results indicated that sentiment analysis derived from social media data can serve as an important calibrator for the lead score, representing a significant competitive advantage for companies. By incorporating consumer sentiment insights, it becomes possible to adjust the lead score more accurately, enabling more effective segmentation and more targeted conversion strategies.
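
One plausible way to wire sentiment into a lead score, sketched here with NLTK's VADER analyzer; the weighting scheme and score range are assumptions for illustration, not the authors' calibration.

from nltk.sentiment import SentimentIntensityAnalyzer
# Requires a one-time: nltk.download("vader_lexicon")

sia = SentimentIntensityAnalyzer()

def adjusted_lead_score(base_score, posts, weight=10.0):
    """Shift a 0-100 lead score by the mean VADER compound sentiment."""
    if not posts:
        return base_score
    mean_compound = sum(sia.polarity_scores(p)["compound"] for p in posts) / len(posts)
    return max(0.0, min(100.0, base_score + weight * mean_compound))

print(adjusted_lead_score(70, ["Loving this product!", "Support was a bit slow."]))
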
Download

Paper Nr: 77
Title:

Convolutional Neural Networks Enriched by Handcrafted Attributes (Enriched-CNN): An Innovative Approach to Pattern Recognition in Histological Images

Authors:

Luiz Fernando Segato dos Santos, Leandro Alves Neves, Alessandro Santana Martins, Guilherme Freire Roberto, Thaína Aparecida Azevedo Tosta and Marcelo Zanchetta do Nascimento

Abstract: This paper presents a novel method called Enriched-CNN, designed to enrich CNN models using handcrafted features extracted from multiscale and multidimensional fractal techniques. These features are incorporated directly into the loss function during model training through specific strategies. The method was applied to three important histological datasets for studying and classifying H&E-stained samples. Several CNN architectures, such as ResNet, InceptionNet, EfficientNet, and others, were tested to understand the enrichment behavior in different scenarios. The best results achieved accuracy rates ranging from 93.75% to 100% for enrichment situations involving only 3 to 5 features. This paper also provides significant insights into the conditions that most contributed to the process and allowed competitiveness compared to the specialized literature, such as the possibility of composing models with minimal or no structural changes. This unique aspect enables the method to be applied to other types of neural architectures.
Download

Paper Nr: 98
Title:

Data Smells Are Sneaky

Authors:

Nicolas Hahn and Afonso Sales

Abstract: Data is the primary source for developing AI-based systems, and poor-quality data can lead to technical debt and negatively impact performance. Inspired by the concept of code smells in software engineering, data smells have been introduced as indicators of potential data quality issues, and can be used to evaluate data quality. This paper presents a simulation aimed at identifying specific data smells introduced in unstructured data and detected in tabular form. By introducing and analyzing specific data smells, the research examines the challenges in their detectability. The results underscore the need for robust detection mechanisms to address data smells across different stages of a data pipeline. This work expands the understanding of data smells and their implications, providing new foundations for future improvements in data quality assurance for AI-driven systems.
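
To make the idea concrete, here is a toy detector for a few common tabular data smells in pandas; the smell names and checks are illustrative and far narrower than the taxonomy the paper examines.

import pandas as pd

def detect_smells(df):
    """Flag a handful of simple data smells in a DataFrame."""
    smells = []
    if df.duplicated().any():
        smells.append("duplicate rows")
    for col in df.columns:
        if df[col].nunique(dropna=False) == 1:
            smells.append(f"constant column: {col}")
        if df[col].astype(str).isin(["?", "N/A", "-999"]).any():
            smells.append(f"suspicious placeholder values: {col}")
    # Mixed Python types in one object column often signal upstream parsing issues.
    for col in df.select_dtypes(include="object").columns:
        if df[col].map(type).nunique() > 1:
            smells.append(f"mixed types: {col}")
    return smells
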
Download

Paper Nr: 100
Title:

Improving Large Language Models Responses with Retrieval Augmented Generation in Animal Production Certification Platforms

Authors:

Pedro Bilar Montero, Jonas Bulegon Gassen, Glênio Descovi, Vinícius Maran, Tais Oltramari Barnasque, Matheus Friedhein Flores and Alencar Machado

Abstract: This study explores the potential of integrating Large Language Models (LLMs) with Retrieval-Augmented Generation (RAG) to enhance the accuracy and relevance of responses in domain-specific tasks, particularly within the context of animal health regulation. Our proposed solution incorporates a RAG system on the PDSA-RS platform, leveraging an external knowledge base to integrate localized legal information from Brazilian legislation into the model’s response generation process. By combining LLMs with an information retrieval module, we aim to provide accurate, up-to-date responses grounded in relevant legal texts for professionals in the veterinary health sector.
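
A minimal sketch of the retrieval step in such a RAG pipeline, using sentence-transformers for embedding search; the model name, passages, and prompt template are placeholders, not the PDSA-RS implementation.

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Stand-in legal snippets; the real knowledge base holds Brazilian legislation.
passages = [
    "Suspected outbreaks must be notified to the state veterinary service within 24 hours.",
    "Transit permits for poultry are issued only after sanitary certification.",
]
corpus_emb = model.encode(passages, convert_to_tensor=True)

def build_prompt(question, k=2):
    """Retrieve the k most similar passages and splice them into the prompt."""
    hits = util.semantic_search(model.encode(question, convert_to_tensor=True),
                                corpus_emb, top_k=k)[0]
    context = "\n".join(passages[h["corpus_id"]] for h in hits)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
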
Download

Paper Nr: 117
Title:

Optimizing 2D+1 Packing in Constrained Environments Using Deep Reinforcement Learning

Authors:

Victor Ulisses Pugliese, Oséias Faria de Arruda Ferreira and Fabio A. Faria

Abstract: This paper proposes a novel approach based on deep reinforcement learning (DRL) for the 2D+1 packing problem with spatial constraints. This problem is an extension of the traditional 2D packing problem, incorporating an additional constraint on the height dimension. Therefore, a simulator using the OpenAI Gym framework has been developed to efficiently simulate the packing of rectangular pieces onto two boards with height constraints. Furthermore, the simulator supports multidiscrete actions, enabling the selection of a position on either board and the type of piece to place. Finally, two DRL-based methods (Proximal Policy Optimization (PPO) and Advantage Actor-Critic (A2C)) have been employed to learn a packing strategy and demonstrate its performance compared to a well-known heuristic baseline (MaxRect-BL). In the experiments carried out, the PPO-based approach proved to be a good solution for solving complex packing problems and highlighted its potential to optimize resource utilization in various industrial applications, such as the manufacturing of aerospace composites.
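
The multidiscrete action design described above can be sketched as a Gym-style environment skeleton; the observation encoding, reward, and sizes below are placeholder assumptions, not the paper's simulator.

import gymnasium as gym
import numpy as np

class PackingEnv(gym.Env):
    """Toy 2D+1 packing skeleton: pick a board, a position, and a piece type."""

    def __init__(self, n_positions=10, n_pieces=3):
        self.action_space = gym.spaces.MultiDiscrete([2, n_positions, n_pieces])
        self.observation_space = gym.spaces.Box(0.0, 1.0, shape=(2, n_positions),
                                                dtype=np.float32)
        self.heights = np.zeros((2, n_positions), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.heights[:] = 0.0
        return self.heights.copy(), {}

    def step(self, action):
        board, pos, piece = action
        self.heights[board, pos] += 0.1 * (piece + 1)  # toy height update
        violated = self.heights[board, pos] > 1.0      # height constraint
        reward = -1.0 if violated else 0.1 * (piece + 1)
        return self.heights.copy(), reward, bool(violated), False, {}

Such an environment can then be trained directly with stable-baselines3, e.g. PPO("MlpPolicy", PackingEnv()).learn(100_000), since PPO and A2C both support multidiscrete action spaces.
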
Download

Paper Nr: 118
Title:

Pancreatic Mass Segmentation Using TransUNet Network

Authors:

Fael Faray de Paiva, Alexandre de Carvalho Araujo, João Dallyson Sousa de Almeida and Anselmo C. de Paiva

Abstract: Currently, one of the major challenges in computer vision applied to medical imaging is the automatic segmentation of organs and tumors. Pancreatic cancer, in particular, is extremely lethal, primarily due to the major difficulty in early detection, resulting in the disease being identified only in advanced stages. Recently, new technologies, such as deep learning, have been used to identify these tumors. This work uses the TransUNet network for the task, as convolutional neural networks (CNNs) are extremely effective at capturing features but present limitations in tasks that require greater context. On the other hand, transformer blocks are designed for sequence-to-sequence tasks and have a high capacity for processing large contexts; however, they lack spatial precision due to the lack of detail. TransUNet uses the Transformer as an encoder to enhance the capacity to process content globally, while convolutional neural networks are employed to minimize the loss of features during the process. Among the experiments presented herein, one used image pre-processing techniques and achieved an average Dice score of 42.60±1.97%. In the second experiment, a crop was applied to the mass region, reaching an average Dice score of 79.67±2.31%.
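
The Dice score reported above is a standard overlap metric; for reference, a NumPy version for binary segmentation masks:

import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice = 2 * |P & T| / (|P| + |T|) for boolean masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
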
Download

Paper Nr: 122
Title:

A Solution Procedure for Fixed Mammography Unit Location-Allocation and Mobile Mammography Unit Routing Problems

Authors:

Romário dos S. L. de Assis, Marcos V. A. de Campos, Marcone J. F. Souza, Maria A. L. Souza, Eduardo C. de Siqueira, Elizabeth F. Wanner and Sérgio R. de Souza

Abstract: This paper addresses the Mammography Unit Location-Allocation and Mobile Mammography Unit Routing problems. The objective is to maximize coverage of the target population and cover unmet demand with fixed mammography units by using mobile units. A sequential solution procedure is proposed, in which the first problem is solved using an exact method and the second through a heuristic algorithm that takes the uncovered municipalities from the first problem as input. This proposal was tested in three scenarios from the State of Minas Gerais, Brazil. The results show that the coverage of this state can be fully met with 84 additional mobile units, considering the current location of the fixed equipment and the restriction of the municipalities’ service to their healthcare micro-regions. However, if this requirement is not imposed, 42 units are sufficient. Finally, by allowing the equipment to be relocated, only nine units are needed.
Download

Paper Nr: 124
Title:

Leveraging Transfer Learning to Improve Convergence in All-Pay Auctions

Authors:

Luis Eduardo Craizer, Edward Hermann and Moacyr Alvim Silva

Abstract: In previous research on Multi-Agent Deep Deterministic Policy Gradient (MADDPG) in All-Pay Auctions, we identified a key limitation: as the number of agents increases, the tendency for some agents to bid 0.0, resulting in a local equilibrium, grows, leading to suboptimal bidding strategies. This issue diminishes the effectiveness of traditional reinforcement learning in large, complex auction environments. In this work, we propose a novel transfer learning approach to address this challenge. By training agents in smaller N auctions and transferring their learned policies to larger N settings, we significantly reduce the occurrence of local equilibrium. This method not only accelerates training but also enhances convergence toward optimal Nash equilibrium strategies in multi-agent settings. Our experimental results show that transfer learning successfully overcomes the limitations observed in previous research, yielding more robust and efficient bidding strategies in all-pay auctions.
Download

Paper Nr: 136
Title:

On the Effectiveness of Large Language Models in Automating Categorization of Scientific Texts

Authors:

Gautam Kishore Shahi and Oliver Hummel

Abstract: The rapid advancement of Large Language Models (LLMs) has led to a multitude of application opportunities. One traditional task for Information Retrieval systems is the summarization and classification of texts, both of which are important for supporting humans in navigating large bodies of literature, such as scientific publications. Due to this rapidly growing body of scientific knowledge, recent research has been aiming at building research information systems that not only offer traditional keyword search capabilities, but also novel features such as the automatic detection of research areas that are present at knowledge-intensive organizations in academia and industry. To facilitate this idea, we present the results obtained from evaluating a variety of LLMs in their ability to sort scientific publications into hierarchical classification systems. Using the FORC dataset as ground truth data, we have found that recent LLMs (such as Meta’s Llama 3.1) are able to reach an accuracy of up to 0.82, which is up to 0.08 better than traditional BERT models.
Download

Paper Nr: 203
Title:

Improving Underwater Ship Sound Classification with CNNs and Advanced Signal Processing

Authors:

Pedro Guedes, José Franco Amaral, Thiago Carvalho and Pedro Coelho

Abstract: The identification of underwater sound patterns has become an area of great relevance, both in marine biology, for studying species, and in the identification of ships. However, the significant presence of noise in the underwater environment poses a technical challenge for the accurate classification of these signals. This work proposes the use of signal analysis techniques, such as Mel Frequency Cepstral Coefficients (MFCCs) and the Wavelet Transform, combined with Convolutional Neural Networks (CNNs), for classifying ship audio captured in a real-world environment strongly influenced by its surroundings. The developed models achieved improved accuracy in signal classification, demonstrating robustness in the face of adverse underwater conditions. The results indicate the effectiveness of the proposed approach, contributing to advances in the application of neural network techniques to underwater sound signals.
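
As a small illustration of the MFCC front end named above, using librosa; the file path and coefficient count are placeholders.

import librosa

y, sr = librosa.load("ship_recording.wav", sr=None)  # keep native sample rate
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # shape: (13, n_frames)
# The coefficients-by-frames matrix can then be fed to a CNN as a 1-channel image.
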
Download

Paper Nr: 225
Title:

Using Transformers for B2B Contractual Churn Prediction Based on Customer Behavior Data

Authors:

Jim Ahlstrand, Anton Borg, Håkan Grahn and Martin Boldt

Abstract: In the competitive business-to-business (B2B) landscape, retaining clients is critical to sustaining growth, yet customer churn presents substantial challenges. This paper presents a novel approach to customer churn prediction using a modified Transformer architecture tailored to multivariate time-series data. We suggest that analyzing customer behavior patterns over time can indicate potential churn. Our findings suggest that while uncertainty remains high, the proposed model performs competitively against existing methods. The Transformer architecture achieves a top-decile lift of almost 5 and an AUC of 0.77. We assess the model’s confidence by employing conformal prediction, providing valuable insights for targeted anti-churn campaigns. This work highlights the potential of Transformers to address churn dynamics, offering a scalable solution to identify at-risk customers and inform strategic retention efforts in B2B contexts.
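
For readers unfamiliar with the headline metric, top-decile lift measures how concentrated actual churners are among the top 10% of model scores; a standard NumPy computation:

import numpy as np

def top_decile_lift(y_true, scores):
    """Churn rate in the top 10% of scores divided by the overall churn rate."""
    n_top = max(1, len(scores) // 10)
    top = np.argsort(scores)[::-1][:n_top]
    return y_true[top].mean() / y_true.mean()
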
Download

Paper Nr: 238
Title:

Predicting B2B Customer Churn and Measuring the Impact of Machine Learning-Based Retention Strategies

Authors:

Victória Emanuela Alves Oliveira, Amanda Cristina da Costa Guimarães, Arthur Rodrigues Soares de Quadros, Reynold Navarro Mazo, Rickson Livio de Souza Gaspar, Alessandro Vieira and Wladmir Cardoso Brandão

Abstract: Acquiring new customers often costs five times more than retaining existing ones. Customer churn significantly threatens B2B companies, causing revenue loss and reduced market share. Analyzing historical customer data, including product usage frequency, allows us to predict churn and implement timely retention strategies to prevent this loss. We propose using Support Vector Machines (SVMs) to predict at-risk customers, retraining the model when necessary: by monitoring its recall over 15-day periods, we retrain whenever recall on new data falls below 60%. Our research focuses on feature selection to identify key churn factors. Our experiments show that by constantly retraining the model, we avoid accuracy loss by updating the customer context, providing valuable insights on how to reduce churn rates and increase revenue.
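
A minimal sketch of the recall-monitored retraining loop, assuming an already-fitted scikit-learn SVC; the 60% threshold comes from the abstract, while the data handling is an assumption.

import numpy as np
from sklearn.metrics import recall_score

def monitor_and_retrain(model, X_hist, y_hist, X_new, y_new, threshold=0.60):
    """Refit the model when recall on the latest 15-day window drops too low."""
    if recall_score(y_new, model.predict(X_new)) < threshold:
        X_hist = np.vstack([X_hist, X_new])      # fold the new window into history
        y_hist = np.concatenate([y_hist, y_new])
        model.fit(X_hist, y_hist)
    return model, X_hist, y_hist
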
Download

Paper Nr: 239
Title:

Predicting Employee Turnover Using Personality Assessment: A Data-Driven Approach

Authors:

Reynold Navarro Mazo, Maurício Pereira Nogueira Júnior, Arthur Rodrigues Soares de Quadros, Alessandro Vieira and Wladmir Cardoso Brandão

Abstract: Employee turnover represents a persistent challenge for organizations seeking to maintain stability, retain institutional knowledge, and control costs. Traditional predictive models often rely on static employee records and demographic variables, providing limited insight into the nuanced behavioral patterns that precede workforce attrition. This study leverages the PACE Behavioral Profile Mapping (BPM) framework to integrate behavioral features into a machine learning–based turnover prediction pipeline. Clustering techniques were employed to ensure model generalization for specific company clusters, and hyperparameter optimization was performed using Optuna. The resultant CatBoost models demonstrated notable improvements in predicting turnover risk, particularly for employees at higher risk of departure, when PACE-based behavioral indicators were incorporated. These findings suggest that a more comprehensive characterization of employee tendencies, beyond conventional demographic and historical measures, can enhance the identification of at-risk individuals. By adopting behaviorally informed analytics, organizations may achieve more targeted and effective retention strategies, ultimately supporting more stable workforce management.
Download

Paper Nr: 248
Title:

Cost-Effective Strabismus Measurement with Deep Learning

Authors:

Luis Felipe Araujo de Oliveira, João Dallyson Sousa de Almeida, Thales Levi Azevedo Valente, Jorge Antonio Meireles Teixeira and Geraldo Braz Junior

Abstract: This article presents a new methodology for detecting and measuring strabismus. Traditional diagnostic methods in the medical field often require patients to visit a specialist, which can present challenges in regions with limited access to strabismus experts. An accessible and automated approach can, therefore, support ophthalmologists in making diagnoses. The proposed methods use images from Hirschberg Test exams and employ techniques based on Convolutional Neural Networks (CNNs) and image processing to detect the limbus region and measure the brightness reflected in patients’ eyes from the camera’s flash. The method calculates the distance between the center of the limbus and the center of the reflected brightness, converting this distance from pixels to diopters. The results show the potential of these approaches, achieving promising effectiveness.
Download

Paper Nr: 255
Title:

Mathematical Modeling and Simulation for Optimizing Truck Dispatch in Bulk Unloading Operations: A Case Study at the Port of Itaqui

Authors:

Victor José Beltrão Almajano Martinez, Carlos Eduardo V. Gomes, João Augusto F. N. de Carvalho, Francisco Glaubos Nunes Clímaco, João Dallyson Sousa de Almeida, Geraldo Braz Júnior and Tiago Bonini Borchatt

Abstract: This paper addresses the optimization of truck pulling for bulk unloading operations at the Port of Itaqui, a critical logistics hub in Brazil. The current manual process often leads to inefficiencies such as congestion, delays, and increased emissions. To tackle these challenges, we propose a mathematical model for responsive truck pulling to minimize queue imbalances considering emissions while maintaining operational efficiency. A port activity simulator was developed to evaluate the model under various demand and supply scenarios, comparing its performance against a benchmark algorithm replicating operator behavior. Results demonstrate that the proposed model reduces truck congestion in the primary area by up to 50% without increasing unloading times, offering a more balanced and sustainable approach. The findings enhance port logistics and provide a framework for automating truck dispatch processes in bulk cargo operations. Future work involves integrating the model into real-world applications and extending its capabilities to multi-terminal environments.
Download

Paper Nr: 256
Title:

Dynamization of Retail Pricing: From Traditional Price Determinants to Automation Based on Artificial Intelligence

Authors:

Christian Daase, Seles Selvan, Dominic Strube, Daniel Staegemann, Jennifer Schietzel-Kalkbrenner and Klaus Turowski

Abstract: Setting product prices poses both challenges and opportunities for retailers, as higher prices per stock keeping unit might lead to lower customer volume, while lower prices might result in insufficient turnover in relation to costs. In the age of digitalization and artificial intelligence, understanding price determinants becomes even more important as customer preferences shift and alternatives for purchasing products, such as online, are within easy reach. Based on a systematic literature review, this study aims to build a comprehensive model of traditional factors influencing customers’ price perception as fair, with an extension towards AI-driven data integration and use case design to ultimately realize dynamic pricing models such as real-time demand pricing, personalized pricing and further machine learning-based approaches. The final visualization is intended as guidance for practitioners to evaluate their pricing strategies to determine if factors are currently being overlooked and to consider how they could be incorporated into future decisions. Researchers can also use the insights gained to build upon and expand the potential of AI integration into pricing automation.
Download

Paper Nr: 273
Title:

Towards an Ontology-Based Approach for Enhancing Animal Sanitary Event Management

Authors:

Felipe Amadori Machado, Jonas Bulegon Gassen, Matheus Flores, Francisco Lopes, Fernando Groff, Alencar Machado and Vinicius Maran

Abstract: This study presents the development and integration of a contextual modeling approach for sanitary events, specifically focusing on outbreaks affecting animal populations. A specialized ontology for sanitary events was developed using Protégé and integrated into the PDSA-RS platform, which supports animal health regulation in Brazil. The platform aids in the certification of poultry and swine farming in Rio Grande do Sul, ensuring compliance with Brazilian animal health regulations. The system’s effectiveness was demonstrated during foot-and-mouth disease (FMD), avian influenza, and Newcastle disease (ND) outbreak scenarios, where it significantly reduced analysis time and improved field team management through real-time task allocation and monitoring. The system’s usability was evaluated using the System Usability Scale (SUS), resulting in a score of 75.52, reflecting positive feedback from users. Future developments will focus on refining the ontology and enhancing the embedded rules within the system to better align with real-world sanitary management processes and improve adaptability to various scenarios.
Download

Paper Nr: 279
Title:

OptiGuide+: An Interactive Recommender System for Virtual Things

Authors:

Tahani Almanie, Xu Han, Bhavana Posani and Alexander Brodsky

Abstract: This paper is concerned with markets of Virtual Things, which are parameterized specifications of products or services that can be realized and delivered on demand. Reported in this paper is the design and development of OptiGuide+, an interactive recommender system designed to guide users in choosing and optimally instantiating Virtual Things. OptiGuide+ enhances decision-making by dynamically integrating user preferences with multi-objective optimization. It supports (1) real-time interactive utility extraction from user-selected preferences on Pareto-optimal alternatives; (2) seamless integration of virtual things’ specifications; (3) cross-platform web-based deployment capability; and (4) an intuitive user interface for visualizing trade-offs to explore Pareto-optimal alternatives. OptiGuide+ is unique in its ability to leverage utility-driven decision guidance methodology for markets of parameterized products and services and demonstrate it in real-world applications.
Download

Paper Nr: 283
Title:

Assessing the Attention Layers in Convolutional Neural Networks for Penile Cancer Detection in Histopathological Images

Authors:

Joana Kuelvia de Araújo Silva, Geraldo Braz Júnior, Anselmo Cardoso de Paiva, Italo Francyles Santos da Silva and Alexandre César Pinto Pessoa

Abstract: Penile cancer, with its high incidence in Brazil, stands out due to the need for early diagnosis and avoiding invasive surgical procedures with physical and psychological implications. Although histopathological analysis is the standard approach, its complexity and delay motivate the search for faster and more accurate alternatives to aid the process. This study proposes a methodology for classifying penile cancer in histopathological images using Convolutional Neural Networks (CNNs) coupled with Attention Mechanisms. Experiments were conducted using a data set of 194 samples at magnifications of 40× and 100×. As a result, the method achieved an accuracy of 95% for cancer detection.
Download

Paper Nr: 310
Title:

Automated Detection of Fake Biomedical Papers: A Machine Learning Perspective

Authors:

Ahmar K. Hussain, Bernhard A. Sabel, Marcus Thiel and Andreas Nürnberger

Abstract: In order to address the issue of fake papers in scientific literature, we propose a study focusing on the classification of fake papers based on certain features, by employing machine learning classifiers. A new dataset was collected, where the fake papers were acquired from the Retraction Watch database, while the non-fake papers were obtained from PubMed. The features extracted for classification included metadata, journal-related features, as well as textual features from the respective abstracts, titles, and full texts of the papers. We used a variety of different models to generate features/word embeddings from the abstracts and texts of the papers, including TF-IDF and different variations of BERT trained on medical data. The study compared the results of different models and feature sets and revealed that the combination of metadata, journal data, and BioBERT embeddings achieved the best performance with an accuracy and recall of 86% and 83% respectively, using a gradient boosting classifier. Finally, this study presents the most important features acquired from the best-performing classifier.
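
One of the simpler feature/classifier combinations evaluated above (TF-IDF text features with gradient boosting) can be sketched in scikit-learn; parameters are illustrative, and the paper's best variant used BioBERT embeddings plus metadata instead.

from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import GradientBoostingClassifier

clf = make_pipeline(
    TfidfVectorizer(max_features=5000, stop_words="english"),
    GradientBoostingClassifier(),
)
# abstracts: list of abstract texts; labels: 1 = retracted, 0 = PubMed control
# clf.fit(abstracts, labels); clf.predict(new_abstracts)
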
Download

Paper Nr: 324
Title:

IoT-AID: Leveraging XAI for Conversational Recommendations in Cyber-Physical Systems

Authors:

Mohammad Choaib, Moncef Garouani, Mourad Bouneffa and Adeel Ahmad

Abstract: The rapid evolution of Industry 4.0 has introduced transformative technologies such as the Internet of Things (IoT), Artificial Intelligence (AI), and big data, facilitating real-time data collection, processing, and decision-making. At the heart of this revolution lie Cyber-Physical Systems (CPS), which integrate computational algorithms with physical components to create intelligent, resilient, and adaptive systems. However, CPS deployment remains complex due to the need for extensive domain expertise. This paper introduces IoT-AID, a novel Explainable AI (XAI)-driven Cyber-Physical Recommendation System (CPRS) that enhances transparency, trust, and efficiency in CPS design. IoT-AID integrates traditional machine learning models, deep learning architectures, and fine-tuned transformer-based models with XAI techniques to automate and improve CPS configuration. Our approach ensures that AI-driven recommendations are interpretable, thereby increasing adoption across industries.
Download

Short Papers
Paper Nr: 29
Title:

XARF: Explanatory Argumentation Rule-Based Framework

Authors:

Hugo Eduardo Sanches, Ayslan Trevizan Possebom and Linnyer Beatrys Ruiz Aylon

Abstract: This paper introduces the Explanatory Argumentation Rule-based Framework (XARF), a new approach in Explainable Artificial Intelligence (XAI) designed to provide clear and understandable explanations for machine learning predictions and classifications. By integrating a rule-based system with argumentation theory, XARF elucidates the reasoning behind machine learning outcomes, offering a transparent view into the otherwise opaque processes of these models. The core of XARF lies in its innovative utilization of the Apriori algorithm for mining rules from datasets and using them to form the foundation of arguments. XARF further innovates by detailing a unique methodology for establishing attack relations between arguments, allowing for the construction of a robust argumentation structure. To validate the effectiveness and versatility of XARF, this study examines its application across seven distinct machine learning algorithms, utilizing two different datasets: a basic Boolean dataset for demonstrating fundamental concepts and methodologies of the framework, and the classic Iris dataset to illustrate its applicability to more complex scenarios. The results highlight the capability of XARF to generate transparent, rule-based explanations for a variety of machine learning models.
Download

Paper Nr: 38
Title:

Modelling and Clustering Patterns from Smart Meter Data in Water Distribution Systems

Authors:

Mariaelena Berlotti, Sarah Di Grande, Salvatore Cavalieri and Roberto Gueli

Abstract: In recent years, water utilities have increasingly required a deeper understanding of users’ water demand across their distribution networks to optimize resource management and meet customers' needs. With the adoption of smart metering solutions, it has become possible to investigate water usage at a finer resolution, enabling the collection of more detailed consumption data. In the present study, the authors present an innovative methodology for identifying water usage patterns using data from smart meters. First, a Multiple Seasonal-Trend Decomposition algorithm is applied to extract seasonality from the raw time-series data. Next, the Bootstrap sampling technique is used to train an optimized Time Series K-means algorithm on multiple data configurations. Finally, the clustering results are interpreted graphically and validated, providing valuable insights into consumption habits and a comprehensive assessment of the methodology's effectiveness and stability.
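
The two core steps, seasonal extraction and time-series clustering, might look as follows with statsmodels' MSTL and tslearn; the periods and cluster count are illustrative assumptions, not the study's configuration.

import numpy as np
from statsmodels.tsa.seasonal import MSTL
from tslearn.clustering import TimeSeriesKMeans

def daily_profile(hourly_series):
    """Extract the daily (24 h) seasonal component of one meter's series."""
    res = MSTL(hourly_series, periods=(24, 24 * 7)).fit()
    return np.asarray(res.seasonal)[:, 0]  # column 0 = period-24 component

# profiles: array (n_meters, n_hours), one daily seasonal curve per meter
# labels = TimeSeriesKMeans(n_clusters=4, metric="dtw").fit_predict(profiles)
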
Download

Paper Nr: 44
Title:

Evaluating Network Intrusion Detection Models for Enterprise Security: Adversarial Vulnerability and Robustness Analysis

Authors:

Vahid Heydari and Kofi Nyarko

Abstract: Machine learning (ML) has become essential for securing enterprise information systems, particularly through its integration in Network Intrusion Detection Systems (NIDS) for monitoring and detecting suspicious activities. Although ML-based NIDS models demonstrate high accuracy in detecting known and novel threats, they remain vulnerable to adversarial attacks—small perturbations in network data that mislead the model into classifying malicious traffic as benign, posing serious risks to enterprise security. This study evaluates the adversarial robustness of two machine learning models—a Random Forest classifier and a Neural Network—trained on the UNSW-NB15 dataset, which represents complex, enterprise-relevant network traffic. We assessed the performance of both models on clean and adversarially perturbed test data, with adversarial samples generated via Projected Gradient Descent (PGD) across multiple epsilon values. Although both models achieved high accuracy on clean data, even minimal adversarial perturbations led to substantial declines in detection accuracy, with the Neural Network model showing a more pronounced degradation compared to the Random Forest. Higher perturbations reduced both models’ performance to near-random levels, highlighting the particular susceptibility of Neural Networks to adversarial attacks. These findings emphasize the need for adversarial testing to ensure NIDS robustness within enterprise systems. We discuss strategies to improve NIDS resilience, including adversarial training, feature engineering, and model interpretability techniques, providing insights for developing robust NIDS capable of maintaining security in enterprise environments.
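
For reference, the PGD evaluation named above follows the standard projected-gradient loop; a PyTorch sketch over normalized flow features, with illustrative epsilon and step settings:

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.05, alpha=0.01, steps=10):
    """L-infinity PGD: ascend the loss, then project back into the eps-ball."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project to the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)             # stay in valid feature range
    return x_adv.detach()
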
Download

Paper Nr: 56
Title:

A Learning Approach for User Localization and Movement Prediction with Limited Information

Authors:

Quang-Vinh Tran and Quang-Diep Pham

Abstract: In the 5G network system, users continuously travel among areas managed by different User Plane Functions (UPFs), leading to the need for efficient handover between UPFs. Conventional handover relies on signal measurements between user devices and neighboring base stations, so it is a "reactive" scheme. This procedure results in a long response time for Packet Data Unit (PDU) session establishment and affects data service quality. Another approach is a "proactive" scheme, in which the positions of users are estimated, so the decision of UPF handover can be made earlier. We propose a solution using machine learning techniques to model user movement behavior in the network and predict user positions in advance. The UPF predicted to manage the next location is notified accordingly to take preparatory steps for serving the incoming users, thereby reducing new PDU session establishment latency, increasing processing speed, and improving the quality of experience. We propose a model combining the K-means clustering algorithm and a Gated Recurrent Unit deep learning network for time-series data. The solution was tested with Viettel’s 5G network data and demonstrated its feasibility on a real-world dataset.
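
The cluster-then-predict idea can be sketched as follows: positions are first quantized into K-means cells (e.g., with sklearn.cluster.KMeans), and a GRU predicts the next cell from the recent sequence; all sizes are illustrative assumptions.

import torch
import torch.nn as nn

class NextCellGRU(nn.Module):
    """Predict the next K-means cell id from a sequence of visited cells."""

    def __init__(self, n_cells=50, emb=16, hidden=32):
        super().__init__()
        self.emb = nn.Embedding(n_cells, emb)
        self.gru = nn.GRU(emb, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_cells)

    def forward(self, cell_ids):            # (batch, seq_len) of cluster ids
        h, _ = self.gru(self.emb(cell_ids))
        return self.head(h[:, -1])          # logits over the next cell
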
Download

Paper Nr: 57
Title:

A Mixed-Integer Linear Programming Model for Repeaters and Routers Location-Allocation Problem in Open-Pit Mines

Authors:

Jéssica Cristina Teixeira da Costa, Arthur Francisco Emanuel Borges Pereira, Higor Cassiano Sousa Milanês, Tatianna Aparecida Pereira Beneteli and Luciano Perdigão Cota

Abstract: In open-pit mines, communication network coverage is required throughout the operating area to ensure continuous operation of equipment such as drills, trucks, shovels, and loaders, in addition to communication between teams. Although location-allocation problems have been widely studied in various contexts, there is a significant gap in their application to open-pit mines. This study proposes a mixed-integer linear programming (MILP) formulation based on the p-median problem to optimize the location-allocation of repeaters and routers. The objective is to minimize the number of network equipment installed and reduce distances between operating points and network equipment, increasing efficiency and coverage in mining environments. We use nine large instances to validate the mathematical formulation. These instances vary the number of candidate locations for installation and operation points, reflecting scenarios from large open-pit mines. The results demonstrate that the proposed method can find optimal solutions with low computational time, less than 5 minutes, ensuring efficient coverage of the operation area.
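
For orientation, a compact p-median model of the kind the formulation builds on, written with PuLP; distances, set sizes, and p are toy values, and the paper's mine-specific constraints are omitted.

import pulp

d = {(i, j): abs(i - j) for i in range(4) for j in range(3)}  # toy distances
p = 2  # number of network devices to install

x = pulp.LpVariable.dicts("assign", d, cat="Binary")       # point i served by site j
y = pulp.LpVariable.dicts("open", range(3), cat="Binary")  # site j opened

m = pulp.LpProblem("p_median", pulp.LpMinimize)
m += pulp.lpSum(d[i, j] * x[i, j] for (i, j) in d)
for i in range(4):
    m += pulp.lpSum(x[i, j] for j in range(3)) == 1        # each point served once
for (i, j) in d:
    m += x[i, j] <= y[j]                                   # only open sites serve
m += pulp.lpSum(y[j] for j in range(3)) == p
m.solve(pulp.PULP_CBC_CMD(msg=False))
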
Download

Paper Nr: 61
Title:

Domain Expertise and AI Adoption: Insights into HR Managers’ Unified Perspectives Across Roles and Contexts

Authors:

Guangming Cao

Abstract: The integration of artificial intelligence (AI) offers transformative potential for human resource management (HRM), yet a significant majority of organizations have yet to adopt AI in HRM practices. While much research focuses on individual-level factors in technology adoption, limited attention has been given to the role of domain-specific expertise in shaping HR managers’ perceptions of AI. This study addresses this gap by exploring HR managers’ attitudes and intentions toward AI adoption and examining whether these perceptions differ by gender, job role, organizational size, or industry. Survey data from 279 HR managers in China, analyzed using ANOVA, reveal a largely positive, uniform view of AI adoption, with no significant differences in demographic or organizational factors. These results suggest that shared expertise within HR may drive a cohesive understanding of AI’s benefits, challenging conventional models that emphasize individual or contextual variability in technology adoption. This study contributes to the theoretical framework of technology adoption by highlighting the role of functional expertise in developing uniformity and provides practical insights for designing AI training and implementation strategies that resonate across diverse organizational settings.
Download

Paper Nr: 86
Title:

A Data Annotation Approach Using Large Language Models

Authors:

Carlos Rocha, Jonatas Grosman, Fernando A. Correia, Venicius Rego and Hélio Lopes

Abstract: Documents are crucial for the economic and academic systems, yet extracting information from them can be complex and time-consuming. Visual Question Answering (VQA) models address this challenge using natural language prompts to extract information. However, their development depends on annotated datasets, which are costly to produce. To face this challenge, we propose a four-step process that combines Computer Vision Models and Large Language Models (LLMs) for VQA data annotation in financial reports. This method starts with Document Layout Analysis and Table Structure Extraction to identify document structures. Then, it uses two distinct LLMs for the generation and evaluation of question and answer pairs, automating the construction and selection of the best pairs for the final dataset. As a result, we found Mixtral-8x22B and GPT-4o mini to be the most cost-benefit for generating pairs, while Claude 3.5 Sonnet performed best for evaluation, aligning closely with human assessments.
Download

Paper Nr: 87
Title:

Association of Fractal Geometry and Data Augmentation Through GANs and XAI for Classification of Histology Images

Authors:

Vinicius Augusto Toreli Borgue, Bianca Lançoni de Oliveira Garcia, Sérgio Augusto Pelicano Júnior, Guilherme Freire Roberto, Guilherme Botazzo Rozendo, Leandro Alves Neves, Alessandro Santana Martins, Thaína Aparecida Azevedo Tosta and Marcelo Zanchetta do Nascimento

Abstract: In computer vision, one of the main challenges in the classification of histopathology images lies in the low number of samples available in public image datasets. In recent years, the most common approaches to handle this problem have consisted of using geometric data augmentation to increase the dataset size. Recently, the use of GANs to generate artificial images to increase the size of the training set for the classification of histology images has been proposed. Despite obtaining promising results in the deep learning context, there has not yet been much research regarding the use of these approaches in the context of handcrafted features. In this paper, we propose the use of handcrafted features based on fractal geometry and GANs for data augmentation for classifying four histology image datasets. The GANs were assisted by explainable artificial intelligence (XAI) to enhance the quality of the generated images. The fractal features obtained from the original and artificial images were given as input to six classifiers. After analyzing the results, we verified that, despite obtaining the best overall performance, our method was only able to provide a slight improvement in two datasets.
Download

Paper Nr: 88
Title:

Zero-Shot Product Description Generation from Customers Reviews

Authors:

Bruno Gutierrez, Jonatas Grosman, Fernando A. Correia and Hélio Lopes

Abstract: In e-commerce, product descriptions have a great influence on the shopping experience, informing consumers and facilitating purchases. However, creating good descriptions is labor-intensive, especially for large retailers managing daily product launches. To address this, we propose an automated method for product description generation using customer reviews and a Large Language Model (LLM) in a zero-shot approach. Our three-step process involves (i) extracting valuable sentences from reviews, (ii) selecting informative and diverse content using a graph-based strategy, and (iii) generating descriptions via prompts based on these selected sentences and the product title. For the evaluation of our proposal, 30 evaluators collaborated by comparing the generated descriptions with those given by the sellers. As a result, our method produced descriptions preferred over those provided by sellers, rated as more informative, readable, and relevant. Additionally, a comparison with a literature method demonstrated that our approach, supported by statistical testing, results in more effective and preferred descriptions.
Download

Paper Nr: 115
Title:

Resource-Efficient Monitoring of Energy Storage Systems During Transport and Storage: A Data-Driven Approach to Early Short Circuit Detection

Authors:

Christoph Schrade, Theo Zschörnig, Leonard Kropkowski and Bogdan Franczyk

Abstract: Due to national and international laws and regulations, the number of energy storage systems has risen sharply in recent years. While battery systems in operation can often be monitored by installed battery management systems to ensure safe operation, there are still no standardized monitoring methods for batteries during transport or storage. Consequently, this article proposes a solution for monitoring such batteries in the typical logistic processes of storage and transport. Particular attention is paid to a resource-efficient implementation of a data-driven algorithm that is adopted from existing literature and enables the early detection of internal short circuits, which are the main cause of thermal runaways of battery storage systems. As the transmission frequency of an external monitoring device is a particularly resource-critical variable, the extent to which different data frequencies influence the detection performance is also investigated.
Download

Paper Nr: 116
Title:

Selection of Retransmitter Nodes for Alert Message Transmission in VANETs Using a Multicriteria Decision-Making Approach Based on Vehicle Credibility

Authors:

Santiago Cardoso and Adriano Fiorese

Abstract: Adverse situations that occur on public traffic roads, such as traffic accidents, severe traffic jams, among others, are considered critical traffic events. Such events occur relatively frequently and need to be dealt with quickly by public authorities to maintain the proper functioning of cities and highways. The main challenges for efficient handling lie in the random nature of the event and the speed and accuracy of its notification to the authorities. Thus, the large number of vehicles on the roads, together with their communication and monitoring capabilities, allows the detection and alerting of such event occurrences. However, transmitting such detections to the destinations can be difficult due to the not entirely reliable nature of those involved, especially when there is a need for retransmission of the alert message between the detecting vehicle and the destination. In this sense, choosing the most suitable retransmitter vehicle, among the possible ones, becomes an issue. Therefore, this work proposes the development and use of a Vehicle Credibility Factor (VCF) in Vehicular Ad Hoc Networks (VANETs), generated by means of the use of several criteria that represent traffic behavior, as input parameters for the AHP multicriteria decision-making method. The result of the method is the VCF, which is used to determine, by ranking, the most reliable vehicles to transmit sensitive information for alerting critical traffic events.
Download

Paper Nr: 121
Title:

Pattern Recognition in Biosequences Using Artificial Immune System

Authors:

Luiz Paulo Liberato, Álvaro Magri Nogueira da Cruz, Anderson Rici Amorim, Vitoria Zanon Gomes, Bruno Rodrigues da Silveira, Gabriel Augusto Prevato, Luiza Guimarães Cavarçan, Carlos Roberto Valêncio and Geraldo Francisco Donegá Zafalon

Abstract: Advances in genomic studies have made it possible to better understand genetic inheritance, protein synthesis, and the mutations that occur in living beings. With the increase in DNA sequencing capacity and storage, advanced biological studies have become possible. To ensure timely execution and greater precision in the pattern recognition process, heuristic methods are used, since deterministic methods are infeasible for large volumes of data. Heuristic methods have the characteristic of seeking the best possible solution within the search space that is explored. Among the known heuristics is the Artificial Immune System (AIS), which falls under the category of bioinspired methods that simulate biological behavior. In this work, CLONALG (Clonal Selection Algorithm), an AIS approach, was implemented with a Hidden Markov Model (HMM) as its affinity function, in order to obtain stochastic patterns with biological relevance in acceptable computational time. As a result, a 50% improvement in execution time was obtained when compared to CLONALG with the Hamming affinity function. Finally, it was also validated that CLONALG with the HMM implementation was able to recognize the same patterns as similar tools.
Download

Paper Nr: 127
Title:

A Knowledge Discovery Pipeline to Describe the High Cholesterol Profile in Young People Using GA for Feature Selection

Authors:

Daniel Rocha Franca, Caio Davi Rabelo Fiorini, Ligia Ferreira de Carvalho Gonçalves, Marta Dias Moreira Noronha, Mark Alan Junho Song and Luis Enrique Zárate Galvez

Abstract: Understanding the risk factors associated with hypercholesterolemia in young individuals is crucial for developing preventive strategies to combat cardiovascular diseases. This study proposes a data mining pipeline employing machine learning techniques to profile high cholesterol in Brazilian youth aged 15 to 25, utilizing the 2019 National Health Survey (PNS) dataset. The PNS-2019 database has 1,088 attributes organized into 26 modules and 293,726 anonymized records. The Knowledge Discovery in Databases (KDD) process was implemented, incorporating a novel CAPTO-based conceptual attribute selection followed by feature selection using a Non-dominated Sorting Genetic Algorithm II (NSGA-II). A decision tree classifier was optimized and evaluated, achieving an F1 Score of 66%, demonstrating reasonable predictive power despite data limitations. The results highlight the significant impact of dietary habits, particularly high sugar and fat intake, on hypercholesterolemia risk. The study emphasizes the potential for early identification and targeted interventions, contributing to public health improvements and laying the groundwork for future research with advanced models and additional data sources.
Download

Paper Nr: 154
Title:

Integration of Data Science in Institutional Management Decision Support System

Authors:

Scăunașu Monica-Teodora and Mocanu Mariana Ionela

Abstract: This article explores the integration of data science into Decision Support Systems (DSS) as a transformative framework for institutional management. Using advanced analytics such as Random Forest classifiers, ARIMA models, and optimization algorithms, the research demonstrates how organizations can transition from static decision-making frameworks to adaptive, data-driven systems. Case studies, including IT risk management and group decision-making frameworks, illustrate the practical application and benefits of these methodologies. The study compares the proposed DSS with traditional systems, underscoring the advancements in predictive analytics, resource optimization, and collaborative decision-making. By aligning predictive insights with institutional priorities, the proposed framework fosters operational efficiency, strategic foresight, and inclusivity, setting a new standard for modern management practices.
Download

Paper Nr: 171
Title:

Towards IT Workload Hybrid-Cloud Placement Advisory in Enterprise

Authors:

André Hardt, Abdulrahman Nahhas, Hendrik Müller and Klaus Turowski

Abstract: Placement of IT workloads in a cloud or hybrid-cloud environment is not always straightforward and requires taking into account various requirements, cloud offering capabilities, and costs. This fact has led researchers and industry practitioners to develop various automation solutions to support this decision process. However, the exact procedure for applying these solutions in practice, especially in the enterprise environment, is typically not discussed. In this work, we propose a formalized systematic business-centric process for delivering a service that relies on a data-driven automation solution, as a tool for experts, for relevant data management and placement optimization in a hybrid-cloud. We performed preliminary field testing of the proposed approach on real-world enterprise IT landscapes running SAP enterprise applications with the application of a user-friendly placement optimization automation solution. Finally, the stakeholder feedback and key takeaways from the field testing are summarized, noting the feasibility and potential usefulness of the presented formalized process.
Download

Paper Nr: 177
Title:

Digitalization in Small-Load-Carrier Management

Authors:

Alexander Dobhan, Lars Eberhardt, Markus Haseneder, Heiko Raab, Steffen Rabenstein, Axel Treutlein, Vincent Wahyudi and Martin Storath

Abstract: In this article, we describe our research on digitalization in the field of returnable small load carrier (SLC) management. Our findings are the result of a collaboration between three companies and an academic institution. We apply various methods for modeling and analyzing digitalization measures that are already being prototypically implemented and discuss them in terms of transparency, data quality, resource consumption and costs. Our research enables academic researchers to build on real-world data and problems. For practitioners, we offer concrete solutions to increase the level of digitalization in their organizations. Unlike most other academic work to date, we focus on SLCs with their specific characteristics. This article could be the starting point for a higher impact and a growing number of research activities on returnable SLCs to make SLC cycles more efficient, which in turn will increase the sustainability of industrial packaging in general.
Download

Paper Nr: 189
Title:

Towards a Standardized Business Process Model for LLMOps

Authors:

Maria Chernigovskaya, Damanpreet Singh Walia, Ksenia Neumann, André Hardt, Abdulrahman Nahhas and Klaus Turowski

Abstract: The generalization and standardization of the Large Language Model Operations (LLMOps) life cycle is crucial for the effective adoption and management of Large Language Models (LLMs) in a business context. Researchers and practitioners propose various LLMOps processes; however, they all tend to lack formalization in their design. In this paper, we address the absence of a standard LLMOps model for enterprises and propose a generalized approach to adopting LLMOps into existing enterprise system landscapes. We start by identifying the state-of-the-art LLMOps processes through a systematic literature review of peer-reviewed research literature and gray literature. Considering the scarcity of relevant publications and research in the area discovered during the initial stage of the research, we propose a generic, use-case-agnostic, and tool-agnostic LLMOps business process model. The proposed model is designed using the Business Process Model and Notation (BPMN) and aims to contribute to the effective adoption of LLM-powered applications in the industry. To the best of our knowledge, this paper is the first attempt to systematically address the identified research gap. The presented methods and proposed model constitute the initial stage of the research on the topic and should be regarded as a starting point toward the standardization of the LLMOps process.
Download

Paper Nr: 190
Title:

Large-Scale Group Brainstorming and Deliberation Using Swarm Intelligence and Generative AI

Authors:

Louis Rosenberg, Hans Schumann, Christopher Dishop, Gregg Willcox, Anita Woolley and Ganesh Mani

Abstract: Conversational Swarm Intelligence (CSI) is a GenAI-based method for enabling real-time conversational deliberations among networked human groups of potentially unlimited size. Based on the biological principle of Swarm Intelligence and modelled on the decision-making dynamics of fish schools, CSI has been shown in prior studies to enable thoughtful conversations among hundreds of real-time participants while amplifying group intelligence. It works by dividing a large population into a set of subgroups that are woven together by real-time AI agents called Conversational Surrogates. The present study focuses on the use of a CSI platform called Thinkscape to enable real-time brainstorming and prioritization among groups of 75 networked users. The study employed a variant of a common brainstorming intervention called an Alternative Use Task (AUT) and compared brainstorming using a CSI platform to a traditional text-chat environment. This comparison revealed that participants significantly preferred using CSI, reporting that it felt (i) more collaborative, (ii) more productive, and (iii) was better at surfacing quality answers. In addition, participants using CSI reported (iv) feeling more ownership and more buy-in in the top answers the group converged on and (v) reported feeling more heard as compared to a traditional chat environment. Overall, the results suggest that CSI is a promising GenAI-based method for brainstorming and prioritization at large scale.
Download

Paper Nr: 194
Title:

Synthetic Data-Driven Object Detection for Rail Transport: YOLO vs RT-DETR in Train Loading Operations

Authors:

Thiago Leonardo Maria, Saul Delabrida and Andrea Gomes Campos

Abstract: Efficient wagon loading plays a crucial role in logistics efficiency and in supplying essential raw materials to various industries. However, ensuring the cleanliness of the wagons before loading is a critical aspect of this process, as it directly impacts the quality and integrity of the transported item. Early detection of objects inside empty wagons before loading is a key component in this logistics puzzle. This study proposes a computer vision approach for object detection in train wagons before loading and performs a comparison between two models: YOLO (You Only Look Once) and RT-DETR (Real-Time Detection Transformer), which are based on Convolutional Neural Networks (CNNs) and Transformers, respectively. Additionally, the research addresses the generation of synthetic data as a strategy for model training, using the Unity platform to create virtual environments that simulate real conditions of wagon loading. The findings highlight the potential of combining computer vision and synthetic data to improve the safety, efficiency, and automation of train loading processes, offering valuable insights into the application of advanced vision models in industrial scenarios.
Download
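
Both detector families compared here ship in the ultralytics Python package, so the core comparison can be reproduced at small scale. A minimal sketch, assuming pretrained checkpoints and a placeholder image name; the authors' training on Unity-generated synthetic data is not shown:

    # Compare a CNN-based and a Transformer-based detector on one image.
    # "wagon.jpg" and the checkpoint files are illustrative placeholders.
    from ultralytics import YOLO, RTDETR

    detectors = {"YOLO": YOLO("yolov8n.pt"),
                 "RT-DETR": RTDETR("rtdetr-l.pt")}

    for name, model in detectors.items():
        results = model("wagon.jpg")   # run inference
        boxes = results[0].boxes       # objects detected inside the wagon
        print(name, "->", len(boxes), "objects")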

Paper Nr: 205
Title:

Mapping and Predicting Crimes in Small Cities Using Web Scraping and Machine Learning

Authors:

Pedro Arthur P. S. Ortiz and Leandro O. Freitas

Abstract: This paper presents an approach to municipal crime analysis and prediction through the integration of web scraping techniques and artificial intelligence. Focusing on Alvorada, Brazil, we address the challenge of limited crime data availability in small cities by developing an automated system that extracts and processes crime-related information from local news sources. Our methodology employs the Anthropic Claude AI API for structured data extraction and implements a machine learning model (Random Forest) for crime prediction. The research demonstrates the feasibility of creating crime prediction systems for small cities while identifying temporal and spatial patterns in criminal activity. Additionally, we provide a framework for future improvements through potential law enforcement partnerships and dataset expansion. This study contributes to the growing field of smart city development by offering a replicable methodology for municipalities lacking standardized crime data collection systems.
Download
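
The two stages described, LLM-based extraction followed by Random Forest prediction, can be sketched as below; the model name, prompt, and tabular features are illustrative assumptions, not the authors' exact configuration:

    import anthropic
    from sklearn.ensemble import RandomForestClassifier

    client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

    def extract_crime_record(article_text: str) -> str:
        # Ask the model for structured fields to be parsed and stored.
        msg = client.messages.create(
            model="claude-3-haiku-20240307",
            max_tokens=512,
            messages=[{"role": "user",
                       "content": "Extract crime type, date, and neighborhood "
                                  "as JSON from this news article:\n" + article_text}])
        return msg.content[0].text

    # Hypothetical features (e.g. hour, weekday, neighborhood id) and labels
    # built from the extracted records would then train the predictor:
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    # clf.fit(X_train, y_train); clf.predict(X_new)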

Paper Nr: 222
Title:

An Adaptive Neuro-Fuzzy Inference Approach of AOA/AOS Data Fusion for Small Fixed-Wing UAV

Authors:

Bowen Duan, Yiming Wang, Heng Wang, Yunxiao Liu, Han Li and Jianliang Ai

Abstract: Accurate measurement of angle of attack (AOA) and angle of sideslip (AOS) is crucial for ensuring the safe operation of fixed-wing unmanned aerial vehicles (UAVs) and conducting reliable flight performance evaluations. Given the limited payload capacity of small-sized UAVs, lightweight wind vane probes are commonly employed. Although installing wind vane sensors at the nose typically yields accurate measurements, this placement is impractical for UAVs with front-mounted propellers. An alternative is to position the sensors beneath the wing, but this configuration introduces measurement inaccuracies due to propeller-induced slipstream and fuselage obstruction. To address these challenges, estimating AOA and AOS using inertial data through the unscented Kalman filter (UKF) offers a more robust solution, as it is less affected by external disturbances. This study introduces an adaptive network-based fuzzy inference system (ANFIS) for AOA/AOS data fusion, which compensates for inaccuracies in sensor measurements by integrating UKF-estimated AOA and AOS values. Flight test results demonstrate that the proposed ANFIS model achieves an average relative error of less than 15%, with the average relative errors being 10.26% for AOA and 12.77% for AOS. This fusion approach significantly enhances the accuracy of AOA and AOS measurements, providing a valuable reference for small-sized fixed-wing UAVs.
Download
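
For reference, one plausible reading of the reported average relative error, over N flight-test samples with the hatted symbol denoting the fused estimate and the plain symbol the reference value (the same form applies to AOS, here beta):

    % mean relative error of the fused AOA estimate (assumed definition)
    \bar{e}_{\alpha} = \frac{100\%}{N} \sum_{i=1}^{N}
        \left| \frac{\hat{\alpha}_i - \alpha_i}{\alpha_i} \right|,
    \qquad \bar{e}_{\alpha} = 10.26\%, \quad \bar{e}_{\beta} = 12.77\%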

Paper Nr: 223
Title:

A Framework and Method for Intention Recognition to Counter Drones in Complex Decision-Making Environments

Authors:

Yiming Wang, Bowen Duan, Heng Wang, Yunxiao Liu, Han Li and Jianliang Ai

Abstract: With the large-scale application of drones in various industries and the rapid development of a low-altitude economy, a complex decision-making environment for countering drones has been formed. Accurate drone intention recognition is of great significance for the defense of core assets such as power facilities. Therefore, based on the proposed intention space description model, this paper establishes a drone intention recognition framework in a complex decision-making environment and defines the key modules and processing procedures. In addition, to further address the uncertainty of the information-acquisition time window in drone intention recognition in complex decision-making scenarios, this paper optimizes the algorithm by introducing a dynamic time window adaptive adjustment mechanism based on the BiLSTM (Bi-directional Long Short-Term Memory) network. The described method was validated through simulation experiments, confirming the effectiveness of the framework presented in this paper. It is capable of classifying four types of intention, assessing threats, and scoring offensive targets. The optimized BiLSTM method demonstrates high recognition accuracy. For drone targets with varying intentions, the recognition accuracy exceeds 96% after applying the time window division.
Download
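
A minimal Keras sketch of a BiLSTM intention classifier over fixed windows; the window length, feature count, and layer width are assumptions, and the paper's dynamic time-window adaptation mechanism is not reproduced:

    import tensorflow as tf

    T, F = 32, 6  # assumed: 32 time steps of 6 track features per window
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(T, F)),
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
        tf.keras.layers.Dense(4, activation="softmax"),  # four intention classes
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])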

Paper Nr: 243
Title:

Anomaly Detection Techniques in the Service of Data Labeling for Fault Diagnosis in Manufacturing

Authors:

Aldonso Martins de O. Junior, Emmanuel A. de B. Santos, Denis Leite and Alexandre M. A. Maciel

Abstract: The lack of labeled fault data in industrial environments presents a major challenge for developing effective fault detection and diagnosis models. This study investigates the application of unsupervised anomaly detection techniques to identify abnormal machine behavior without relying on labeled data. By enabling the early detection of anomalous conditions, these techniques assist in distinguishing normal from faulty instances, supporting the labeling process for improved fault diagnosis. Ten different techniques are evaluated across multiple performance metrics to determine their effectiveness in industrial fault detection. Experimental results demonstrate that Angle-Based Outlier Detection (ABOD) outperformed other methods, achieving a higher F1-score and improved accuracy in recognizing unseen normal data. These findings highlight the potential of unsupervised learning for enhancing industrial fault detection, facilitating the transition to data-driven maintenance strategies, and optimizing data collection processes. The study provides valuable insights into model selection, dataset structuring, and cost-efficient implementation strategies for industrial applications, contributing to the broader adoption of anomaly detection in manufacturing environments.
Download
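
The best-performing technique, ABOD, is implemented in the PyOD library; a minimal sketch with synthetic sensor windows standing in for machine data:

    import numpy as np
    from pyod.models.abod import ABOD

    rng = np.random.default_rng(0)
    X_normal = rng.normal(size=(200, 4))                      # presumed-normal windows
    X_stream = np.vstack([rng.normal(size=(20, 4)),
                          rng.normal(loc=6.0, size=(2, 4))])  # injected anomalies

    detector = ABOD()
    detector.fit(X_normal)
    labels = detector.predict(X_stream)            # 0 = normal, 1 = anomaly candidate
    scores = detector.decision_function(X_stream)  # higher = more anomalous
    # Flagged windows can then be reviewed and labeled for fault diagnosis.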

Paper Nr: 246
Title:

Integrating Satellite Images Segmentation and Electrical Infrastructure Data to Identify Possible Urban Irregularities in Power Grid

Authors:

Álisson Alves, Luísa Souza, Luiz Cho-Luck, Raniere Lima, Carlos Augusto, Wesley Marinho, Rafael Capuano, Bruno Costa, Marina Siqueira, Jesaías Silva, Raul Paradeda and Pablo Javier Alsina

Abstract: Managing urban expansion and its impact on electrical infrastructure presents significant challenges, necessitating innovative methodologies to address irregular settlements and commercial losses in the electricity sector. This paper proposes an approach integrating convolutional neural networks and geospatial data to detect urban areas lacking electrical infrastructure. High-resolution Google Earth images and low-resolution Landsat 8 data were processed using advanced semantic segmentation architectures, LinkNetB7 and D-LinkNet50, to analyze land use patterns. The segmentation outputs were combined with data from the Brazilian Geographic Database of the Distribution System to generate comprehensive maps of electrical infrastructure coverage. The study focused on the SBAU substation in Sabará, Minas Gerais, which demonstrated commercial losses of up to 47.5% in specific feeders. Results demonstrated the effectiveness of deep learning models in identifying mismatches between urban development and infrastructure coverage, highlighting areas with potential irregular connections. This study contributes to advancing artificial intelligence applications in urban energy management by providing a scalable framework for analyzing land use and electrical infrastructure.
Download

Paper Nr: 249
Title:

Emotionalyzer: Player's Facial Emotion Recognition ML Model for Video Game Testing Automation

Authors:

Rebeca Bravo-Navarro, Luis Pineda-Knox and Willy Ugarte

Abstract: In video game development, the play testing phase is crucial for evaluating and optimizing user perception before launch. These tests are often costly and require significant time investment, as they are conducted by experts observing gameplay sessions, which makes capturing real-time data, such as facial and bodily expressions, challenging. Additionally, many independent studios lack the necessary resources to conduct professional testing. Therefore, smaller developers need more cost-effective and time-efficient alternatives to improve their products and streamline the development process. This project aims to develop a real-time facial emotion recognition model using machine learning, which will be integrated into an application that records the player’s emotions during the gameplay session. It seeks to benefit Peruvian indie companies by reducing costs and time associated with traditional testing and providing a more precise and detailed evaluation of the user experience. Additionally, the use of machine learning technology ensures continuous adaptation and progressive improvements in the model over time.
Download

Paper Nr: 284
Title:

An LLM-Powered Agent for Summarizing Critical Information in the Swine Certification Process

Authors:

Gabriel Rodrigues da Silva, Alencar Machado and Vinícius Maran

Abstract: Animal production farms play an essential role in sanitary control by serving as the first line of defense against disease outbreaks, thus safeguarding both the national food supply and public health. In the state of Rio Grande do Sul, one of Brazil’s leading regions for livestock production, swine farming holds particular economic importance and requires rigorous oversight to maintain herd health and compliance with regulatory standards. Recognizing this critical need, this paper presents the development of a virtual assistant aimed at supporting swine certification processes within the Animal Health Defense Platform of Rio Grande do Sul (PDSA-RS), a system integral to monitoring and preserving swine health in the region. The virtual assistant was implemented in Java using Spring Boot and the Spring AI library, with large language models (LLMs) executed locally through Ollama to ensure data privacy and provide contextualized responses. To improve response accuracy and relevance, retrieval-augmented generation (RAG) was employed, enriching user queries with external data on swine health regulations, standard operating procedures, and relevant certifications. A case study was conducted to evaluate the effectiveness of the prototype in real-world swine certification scenarios. Results indicated that the virtual assistant showed promise in improving the speed and accuracy of the certification process, offering timely and relevant information to users. This highlights the system’s potential to streamline workflows and facilitate better decision-making among technicians and veterinarians involved in sanitary control measures.
Download
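
The authors' implementation is Java with Spring Boot, Spring AI, and Ollama; purely to illustrate the retrieve-then-generate flow they describe, a conceptual sketch using the Python ollama client instead, with placeholder documents and model names:

    import numpy as np
    import ollama

    docs = ["Swine certification requires ...",   # placeholder regulation texts
            "Standard operating procedure for ..."]

    def embed(text):
        return np.array(ollama.embeddings(model="nomic-embed-text",
                                          prompt=text)["embedding"])

    doc_vecs = [embed(d) for d in docs]

    def answer(question):
        q = embed(question)
        sims = [q @ v / (np.linalg.norm(q) * np.linalg.norm(v)) for v in doc_vecs]
        context = docs[int(np.argmax(sims))]       # retrieval step
        reply = ollama.chat(model="llama3", messages=[{
            "role": "user",
            "content": f"Context:\n{context}\n\nQuestion: {question}"}])
        return reply["message"]["content"]         # generation step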

Paper Nr: 290
Title:

Human Activity Recognition on Embedded Devices: An Edge AI Approach

Authors:

Graziele de Cássia Rodrigues and Ricardo Augusto Rabelo Oliveira

Abstract: Human Activity Recognition (HAR) is a technology aimed at identifying basic movements such as walking, running, and staying still, with applications in sports monitoring, healthcare, and supervision of the elderly and children. Traditionally, HAR data processing occurs in cloud servers, which presents drawbacks such as high energy consumption, high costs, and reliance on a stable Internet connection. This study explores the feasibility of implementing human activity recognition directly on embedded devices, focusing on three specific movements: walking, jumping, and staying still. The proposal uses machine learning models implemented with LiteRT (formerly known as TensorFlow Lite), enabling efficient execution on hardware with limited resources. The developed proof of concept demonstrates the potential of embedded systems for real-time activity recognition. This approach highlights the efficiency of edge AI, enabling local inferences without the need for cloud processing.
Download
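
On-device inference with a LiteRT/TensorFlow Lite model can be sketched as follows; the model file and input window shape are assumptions, while the three classes follow the paper:

    import numpy as np
    import tensorflow as tf

    interpreter = tf.lite.Interpreter(model_path="har_model.tflite")
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    window = np.zeros(inp["shape"], dtype=np.float32)  # accelerometer window
    interpreter.set_tensor(inp["index"], window)
    interpreter.invoke()
    probs = interpreter.get_tensor(out["index"])[0]
    print(["walking", "jumping", "still"][int(np.argmax(probs))])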

Paper Nr: 302
Title:

A Predictive Greenhouse Digital Twin for Controlled Environment Agriculture

Authors:

Abdellah Islam Kafi, Antonio P. Sanfilippo, Raka Jovanovic and Sa'd Abdel-Halim Shannak

Abstract: Controlled environment agriculture offers significant advantages for the efficient use of resources in food production, especially in hot desert climate regions due to the scarcity of arable land and water. However, farming practices such as hydroponics and aquaponics have high energy requirements for temperature control and present higher operational complexity when compared to traditional forms of farming. This study describes a Predictive Greenhouse Digital Twin (PGDT) that addresses these challenges through a dynamic crop yield assessment. The PGDT uses greenhouse measurements gathered through an IoT sensor network and a regression approach to multivariate time series forecasting to develop a model capable of predicting final crop yield as a function of the gathered measurements at any point in the crop cycle. The performance of the PGDT is evaluated with reference to forecasting algorithms based on deep and ensemble learning methods. Overall, deep learning methods show superior performance, with Long Short-Term Memory (LSTM) providing a marginal advantage compared to Deep Neural Networks (DNN). Furthermore, the models were deployed on an edge device (a Raspberry Pi-based gateway), where DNN demonstrated faster inference while delivering better performance than LSTM.
Download

Paper Nr: 306
Title:

Using Large Language Models to Support the Audit Process in the Accountability of Interim Managers in Notary Offices

Authors:

Myke Valadão, Natalia Freire, Mateus de Paula, Lucas Almeida and Leonardo Marques

Abstract: The auditing process in notary offices in Brazil is hindered by inefficiencies, high costs, and the complexity of manual procedures. To address these challenges, we propose a system that leverages the capabilities of Large Language Models (LLMs), specifically LLaMA2-7B and Falcon-7B, to automate critical information extraction from diverse document types. The system detects anomalous monetary values and unauthorized services, linking them to corresponding dates and beneficiaries to provide a detailed overview of financial discrepancies. Integrating advanced Natural Language Processing (NLP) techniques into auditing workflows enhances fraud detection, reduces operational costs, and improves accuracy. With a BLEU score above 0.67, the proposed system demonstrates significant potential to streamline auditing operations. Key benefits include assisting court analysts in identifying fraud cases, optimizing public resource management by eliminating unjustified expenses, and potentially increasing court revenues to reinvest in public services.
Download
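
The BLEU evaluation mentioned can be reproduced with NLTK; the reference and candidate sentences below are placeholders for annotated and extracted summaries:

    from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

    reference = "payment of 1500 BRL to beneficiary X on 2023-05-10".split()
    candidate = "payment of 1500 BRL to X on 2023-05-10".split()

    score = sentence_bleu([reference], candidate,
                          smoothing_function=SmoothingFunction().method1)
    print(f"BLEU = {score:.2f}")  # the paper reports scores above 0.67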

Paper Nr: 307
Title:

Feature Selection for Stock Market Prediction: A Comparison of Relief and Information Gain Methods

Authors:

Humberto O. Bragança, Rafael A. Berri, Bruno L. Dalmazo, Eduardo N. Borges, Viviane L. D. de Mattos, Richard F. Pinto, Fabian C. Cardoso and Giancarlo Lucca

Abstract: This study explores an approach to predictive analysis in the financial market, using a data set composed of financial information from different companies listed on the stock market, which provides a more detailed and contextualized view of the behavior of shares. Based on these indicators, feature selection methods, such as Relief and Information Gain, are applied to identify the most relevant variables for building predictive models. One of the main contributions of this work is the use of cross-validation to evaluate attribute selection, a technique that has not yet been explored in this context with this dataset. The results show that the combination of new financial indicators and cross-validation offers a solid basis for more accurate analysis, with important implications for investors, financial analysts and policymakers in the stock market. This work expands the boundaries of the literature on feature selection and opens possibilities for future research in emerging markets.
Download
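
A sketch of the two feature-selection methods compared; Information Gain is approximated here by scikit-learn's mutual information, and the financial indicators are synthetic placeholders:

    import numpy as np
    from skrebate import ReliefF
    from sklearn.feature_selection import mutual_info_classif

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 10))                        # 10 hypothetical indicators
    y = (X[:, 0] + rng.normal(size=300) > 0).astype(int)  # price-movement label

    relief = ReliefF(n_neighbors=10)
    relief.fit(X, y)
    print("top feature (Relief):", int(np.argmax(relief.feature_importances_)))
    print("top feature (IG):    ",
          int(np.argmax(mutual_info_classif(X, y, random_state=0))))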

Paper Nr: 308
Title:

Conference Management System Utilizing an LLM-Based Recommendation System for the Reviewer Assignment Problem

Authors:

Vaios Stergiopoulos, Michael Vassilakopoulos, Eleni Tousidou, Spyridon Kavvathas and Antonio Corral

Abstract: One of the most important tasks of a conference organizer is to assign reviewers to papers. The peer review process of the submitted papers is a crucial step in determining the conference agenda, quality, and success. However, this is not an easy task; large conferences often assign hundreds of papers to hundreds of reviewers, making it impossible for a single person to complete the task under hard time constraints. We propose a Conference Management System that embodies a Large Language Model (LLM) at its core. The LLM is utilized as a Recommendation System which applies Content-based Filtering and automates the task of reviewers-to-papers assignment for a conference. The LLM we selected is the Bidirectional Encoder Representations from Transformers (BERT), in two specific variants: BERT-tiny and BERT-large.
Download
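
Content-based matching of papers to reviewers with BERT embeddings can be sketched as below; the checkpoint name stands in for BERT-tiny, and the texts are placeholders:

    import torch
    from transformers import AutoModel, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("prajjwal1/bert-tiny")
    bert = AutoModel.from_pretrained("prajjwal1/bert-tiny")

    def embed(text):
        enc = tok(text, return_tensors="pt", truncation=True, max_length=512)
        with torch.no_grad():
            vec = bert(**enc).last_hidden_state.mean(dim=1)  # mean pooling
        return torch.nn.functional.normalize(vec, dim=-1)

    paper = embed("Abstract of a submission on spatial databases ...")
    reviewers = {"r1": embed("Expertise: spatial indexing, query processing"),
                 "r2": embed("Expertise: requirements engineering, agile")}
    ranked = sorted(reviewers,
                    key=lambda r: float(paper @ reviewers[r].T), reverse=True)
    print("suggested reviewers:", ranked)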

Paper Nr: 319
Title:

Artificial Intelligence Harm and Accountability by Businesses: A Systematic Literature Review

Authors:

Michael Dzigbordi Dzandu, Sylvester Tetey Asiedu, Buddhi Pathak and Sergio De Cesare

Abstract: This study reviews the literature on artificial intelligence (AI) harms caused by businesses, their impact on stakeholders, and the available remedial mechanisms. Using the PRISMA method, relevant articles were sourced from the Scopus database and critically analysed. The data revealed that only 38 articles were published on the topic between 2012 and 2024, with 21 of these in 2024 alone. Key AI harms identified include economic and employment displacement, user harm, bias and discrimination, the digital divide, and environmental harm. While an explicit AI harm accountability framework was not found, related frameworks were derived from six cognate areas: data governance, decision-making, ethical AI, legal frameworks, responsible AI, and AI implementation. Five themes—AI transparency, accountability, decision-making, ethics, and risk—emerged as central to the literature. The study concludes that accountability for AI harms by businesses has been an afterthought relative to the rapid adoption of AI during the review period. Developing a robust AI accountability framework to guide businesses in mitigating AI harm is therefore imperative.

Short Papers
Paper Nr: 49
Title:

Ontology and AI Integration for Real-Time Detection of Cyberbullying Among University Students

Authors:

Khaliq Ahmed, Ashley Mathew and Shajina Anand

Abstract: With the increasing use of the internet, smartphones, and social media, nearly everyone is a potential target for cyberbullying. Our research introduces an AI-driven approach to detect and address cyberbullying among college students, with a focus on its impact on mental health. We developed a context-specific ontology, drawing from real-time data, publicly available data, surveys, academic literature, and social media interactions to categorize information into domains such as victims, causes, types, environments, impacts, and responses. We collected real-time data from college students through surveys, interviews, and social media, leveraging advanced NLP (Natural Language Processing) techniques and BERT for accurate and efficient detection. By integrating this ontology with AI, our system dynamically adapts to emerging cyberbullying patterns, offering more precise detection and response strategies. Experimental results show that the proposed model achieves 96.2% accuracy, with 95.8% precision, 95.5% recall, and an F1-score of 95.6%. This performance surpasses traditional methods, emphasizing its capability to identify both explicit and implicit forms of abusive behavior. The approach not only introduces a tailored ontology for college students' unique social dynamics but also offers solutions to evolving cyberbullying trends. This research significantly enhances online safety and fosters a healthier digital environment for university students.
Download

Paper Nr: 82
Title:

Evaluating Serverless Function Deployment Models on AWS Lambda

Authors:

Gabriel Duessmann and Adriano Fiorese

Abstract: With the advancement of computing and serverless services in the last couple of years, this area has been growing rapidly. Currently, most cloud providers offer serverless services; in particular, Amazon offers AWS Lambda for creating Functions as a Service (FaaS). There are at least two ways to deploy a function: compressing the source code and its files into a ZIP archive, or packaging the running application and its dependencies into a container image. Depending on the approach selected, the function’s performance, cost, and initialization time may vary. This paper takes these metrics into account and compares the aforementioned deployment models. Furthermore, it aims to discover which approach is the most adequate. Experiments conducted on AWS Lambda show that functions created from compressed ZIP archives present advantages regarding initialization time during cold starts, as well as cost.
Download
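
The two deployment models under comparison can be created with boto3 as sketched below; the role ARN, archive, and image URI are placeholders:

    import boto3

    lam = boto3.client("lambda")
    role_arn = "arn:aws:iam::123456789012:role/lambda-exec"  # placeholder

    # 1) ZIP-based deployment: source code compressed into an archive.
    with open("function.zip", "rb") as f:
        lam.create_function(FunctionName="zip-fn", Runtime="python3.12",
                            Handler="app.handler", Role=role_arn,
                            Code={"ZipFile": f.read()})

    # 2) Container-image deployment: application plus dependencies in an image.
    lam.create_function(FunctionName="image-fn", PackageType="Image",
                        Role=role_arn,
                        Code={"ImageUri": "123456789012.dkr.ecr.us-east-1"
                                          ".amazonaws.com/fn:latest"})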

Paper Nr: 104
Title:

Retrieval of Similar Behaviors of Human Postural Control from the Center of Pressure in Elderly People with Sarcopenia

Authors:

Thales Vinicius de Brito Uê, Danilo Medeiros Eler and Iracimara de Anchieta Messias

Abstract: Human postural control acquired by a force plate is an object of study in different Healthcare areas. However, researchers without much experience in other areas of knowledge, such as Statistics and Information Technology, observe their data only quantitatively. Therefore, as a complement to these biomechanical analyses, this paper aims to compare, retrieve, and visualize similar behaviors of the Center of Pressure (COP) measured according to static positions performed by elderly people with sarcopenia, before and after the application of a muscular training intervention. For this purpose, the medial-lateral (ML) and anterior-posterior (AP) directions of the COP's oscillations are used, respectively, as coordinates on the x- and y-axes, to which the Fourier Transform is applied to extract, from each set of coordinates, the features that represent each data collection during comparisons with the Euclidean distance metric. The acquisitions are ranked based on the similarity they share with the one defined as the query. As a result, only acquisitions of interest are retrieved. Case studies involved comparisons of pre- and post-intervention data collections from 4 subjects performing different static positions on the force plate. Scatter plot visualizations, combined with comparisons and retrievals of similar behaviors among COP’s oscillations, facilitate analyses and insights regarding the subjects' postural balance performance during force plate data collections.
Download
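
The retrieval pipeline described, FFT-based features compared by Euclidean distance, reduces to a few lines; the signals below are synthetic placeholders for force-plate acquisitions:

    import numpy as np

    def features(ml, ap):
        # magnitude spectra of the ML and AP oscillations as the feature vector
        return np.concatenate([np.abs(np.fft.rfft(ml)), np.abs(np.fft.rfft(ap))])

    rng = np.random.default_rng(0)
    collection = {f"acq{i}": (rng.normal(size=256), rng.normal(size=256))
                  for i in range(5)}

    q = features(*collection["acq0"])          # acquisition chosen as the query
    ranking = sorted(((name, float(np.linalg.norm(q - features(ml, ap))))
                      for name, (ml, ap) in collection.items()),
                     key=lambda t: t[1])
    print(ranking)                             # most similar acquisitions first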

Paper Nr: 133
Title:

Towards the Automated Selection of ML Models for Time-Series Data Forecasting

Authors:

Yi Chen and Verena Kantere

Abstract: Analyzing and forecasting time-series data is challenging, since such data often come with characteristics, such as seasonality, that may impact model performance but are frequently unknown before models are implemented. At the same time, the abundance of ML models makes it difficult to select a suitable model for a specific dataset. To solve this problem, research is currently exploring the creation of automated model selection techniques. However, the characteristics of the datasets have yet to be considered. Toward this goal, this work aims to explore the appropriateness of models concerning the features of time-series datasets. We collect a wide range of models and time-series datasets and choose some of them to conduct experiments exploring how different elements affect the performance of the selected models. Based on the results, we formulate several outcomes that are helpful in time-series data forecasting. Further, we design a decision tree based on these outcomes, which can be used as a first step toward creating an automated model-selection technique for time-series forecasting.
Download

Paper Nr: 176
Title:

Fuzzy Logic for Neonatal EEG Analysis: A Systematic Review

Authors:

Samuel Cardoso, Juliano Buss, Javier Gomez, Helida Santos, Giancarlo Lucca, Adenauer Yamin and Renata Reiser

Abstract: Machine learning has advanced in healthcare, aiding diagnostics, treatment, and monitoring. In neonatal health, it helps to classify and predict conditions such as hypoxic-ischemic encephalopathy, which requires early detection. Thus, EEG pattern analysis is key in improving the neonatal prognosis. In this work, we present a systematic review of the literature to identify strategies currently employed to classify and predict neonatal EEG patterns using fuzzy logic. Fuzzy logic is particularly valuable for handling uncertainties in biological signals and improving interpretability. Five studies were selected and analyzed, focusing on applying fuzzy systems to detect epileptic events. The reviewed studies highlight techniques involving EEG data, emphasizing the role of fuzzy logic in advancing the understanding and management of neonatal neurological conditions, contributing to the state of the art in this critical field.
Download

Paper Nr: 202
Title:

Optimizing Musical Genre Classification Using Genetic Algorithms

Authors:

Caio Grasso, Thiago Carvalho, José Franco Amaral, Pedro Coelho, Robert Oliveira and Giomar Olivera

Abstract: Classifying music into genres is a challenging yet fascinating task in audio analysis. By leveraging deep learning techniques, we can automatically categorize music based on its acoustic characteristics, opening up new possibilities for organizing and understanding large music collections. The main objective of this study is to develop and evaluate deep learning models for the classification of different musical styles. To optimize the models, we utilized Genetic Algorithms (GA) to automatically determine optimal hyperparameters and select model architectures, including Convolutional Neural Networks and Transformers. The results demonstrated the effectiveness of GAs in exploring the hyperparameter space, leading to improved performance across multiple architectures, with EfficientNet models standing out for their consistent and robust results. This work highlights the potential of automated optimization techniques in enhancing audio analysis tasks and emphasizes the importance of integrating deep learning and evolutionary algorithms for tackling complex music classification problems.
Download
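
A toy genetic algorithm over two hyperparameters illustrates the optimization loop; the fitness function is a synthetic stand-in for the validation accuracy that, in the paper, would come from training the audio models:

    import random

    random.seed(0)

    def fitness(lr, width):
        # hypothetical surrogate peaking near lr = 1e-3, width = 128
        return -abs(lr - 1e-3) * 1000 - abs(width - 128) / 128

    pop = [(10 ** random.uniform(-5, -1), random.choice([32, 64, 128, 256]))
           for _ in range(10)]

    for generation in range(20):
        pop.sort(key=lambda ind: fitness(*ind), reverse=True)
        parents = pop[:4]                            # truncation selection
        children = []
        while len(children) < len(pop) - len(parents):
            a, b = random.sample(parents, 2)
            child = (a[0], b[1])                     # crossover
            if random.random() < 0.3:                # mutation
                child = (child[0] * random.uniform(0.5, 2.0), child[1])
            children.append(child)
        pop = parents + children

    print("best configuration:", max(pop, key=lambda ind: fitness(*ind)))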

Paper Nr: 220
Title:

Automatic Exam Correction System Involving XAI for Admission to Public Higher Education Institutions: Literature Review

Authors:

Joaquim João Nsaku Ventura, Cleyton Mário de Oliveira Rodrigues and Ngombo Armando

Abstract: The process of correcting entrance exams is an essential procedure for assessing the academic performance of student candidates and ensuring fairness and accuracy in the awarding of marks for their future selection. Most lecturers at Angolan higher education institutions carry out the corrections manually, especially subjective corrections. Due to the high number of students, ensuring a high-quality correction process while meeting institutional deadlines becomes challenging. In this context, this article aims to identify the techniques and metrics used for the automated correction of assessments with discursive questions, involving Explainable Artificial Intelligence (XAI). This literature review follows the PRISMA 2020 methodology and includes studies from three bibliographic databases: ACM Digital Library, IEEE Xplore, and Science Direct. The results obtained show that the use of a combination of similarity measures and Natural Language Processing (NLP) provides greater efficiency for the automated correction of discursive questions.
Download

Paper Nr: 252
Title:

Comparative Study of Large Language Models Applied to the Classification of Accountability Documents

Authors:

Pedro Vinnícius Bernhard, João Dallyson Sousa de Almeida, Anselmo Cardoso de Paiva, Geraldo Braz Junior, Renan Coelho de Oliveira, Lúis Jorge Enrique Rivero Cabrejos and Darlan Bruno Pontes Quintanilha

Abstract: Public account oversight is crucial, facilitated by electronic accountability systems. Through those systems, audited entities submit electronic documents related to government and management accounts, categorized according to regulatory guidelines. Accurate document classification is vital for adhering to court standards. Advanced technologies, including Large Language Models (LLMs), offer promise in optimizing this process. This study examines the use of LLMs to classify documents pertaining to annual accounts received by regulatory bodies. Three LLM models were examined: mBERT, XLM-RoBERTa and mT5. These LLMs were applied to a dataset of extracted texts specifically compiled for the research, based on documents provided by the Tribunal de Contas do Estado do Maranhão (TCE/MA), and evaluated based on the F1-score. The results strongly suggested that the XLM-RoBERTa model achieved an F1-score of 98.99% ± 0.12%, while mBERT achieved 98.65% ± 0.29% and mT5 showed 98.71% ± 0.75%. These results highlight the effectiveness of LLMs in classifying accountability documents and contributing to advances in natural language processing. These approaches can potentially be exploited to improve automation and accuracy in document classifications.
Download

Paper Nr: 263
Title:

The Changing Importance of Technology Skills for Accountants in the Context of Artificial Intelligence

Authors:

Yangchun Xiong and Yang Peng

Abstract: The goal of this study is to demonstrate the impact of the changing importance of technology skills, under the evolution of artificial intelligence, on the job requirements for accountants. The analysis is based on data from the Chinese employment market from 2012 to 2022 under different educational backgrounds. The research objectives are achieved through multiple regression and relative importance analysis. The analysis indicates that the changing importance of technology skills has significant effects on the job requirements of accountants. Trends show that from 2012 to 2020, the relative importance of technology skills decreased. However, this trend was reversed in 2020. Differences exist in both overall characteristics and trend features for job seekers with different educational backgrounds. The research findings provide insights and recommendations on how job seekers and educational institutions should act in the context of AI to promote employment and personal development.
Download

Paper Nr: 265
Title:

Bridging AutoML and LLMs: Towards a Framework for Accessible and Adaptive Machine Learning

Authors:

Rafael Duque, Cristina Tîrnăucă, Camilo Palazuelos, Abraham Casas, Alejandro López and Alejandro Pérez

Abstract: This paper introduces a framework architecture that integrates Automated Machine Learning with Large Language Models to facilitate machine learning tasks for non-experts. The system leverages natural language processing to help users describe datasets, define problems, select models, refine results through iterative feedback, and manage the deployment and ongoing maintenance of models in production environments. By simplifying complex machine learning processes and ensuring the continued performance and usability of deployed models, this approach empowers users to effectively apply machine learning solutions without deep technical knowledge.
Download

Area 3 - Information Systems Analysis and Specification

Full Papers
Paper Nr: 37
Title:

Which Factors Influence the Success of Communities of Practice in Large Agile Organizations, and How Are They Related?

Authors:

Franziska Tobisch, Johannes Schmidt, Ahmet Şentürk and Florian Matthes

Abstract: Agile software development methods are intended to allow quick reactions to frequent changes. The success of these methods in small settings has motivated organizations to scale them. However, dependencies, collaboration, and alignment become challenging in this context. Communities of Practice (CoPs) can support addressing the mentioned problems, but organizations have struggled with their implementation. Also, existing research lacks empirical studies on factors influencing CoPs’ success across organizations. Thus, we ran an expert interview study investigating factors hindering and supporting the success of CoPs in scaled agile settings and explored how they influence each other. Our findings highlight that establishing and cultivating CoPs should be aligned with organizations’ and communities’ contexts. Key barriers are a lack of (attending) members, limited time due to daily work, and difficulties in the CoP organization. Especially value for organization and members, a suitable organization of CoP internal activities, and regular adaption and improvement foster success.
Download

Paper Nr: 131
Title:

Using Multicriteria Decision Method to Score and Rank Serverless Providers

Authors:

Leandro Ribeiro Rittes and Adriano Fiorese

Abstract: As technology advances, it becomes increasingly challenging to identify the best approach or method to develop and distribute software that meets the ultimate goals of its creators and users, without becoming economically unfeasible and technically complex. Recognizing the relevance of third-party infrastructure (cloud computing) and of the serverless paradigm for such an approach, this study proposes an approach for selecting serverless platforms using a multicriteria decision-making method. The criteria used to model the solution were extracted from serverless service providers as well as from the analysis of benchmarking reports of serverless providers. Experiments assessing the accuracy and performance of the solution were carried out, including a comparison with an implementation of the multicriteria method available in a software library. As a result, both implementations of the multicriteria decision-making algorithm achieved 100% accuracy in a controlled environment. However, the algorithm implemented in this work presented better computation-time performance in scenarios with more than 500 serverless providers.
Download

Paper Nr: 152
Title:

A Process to Compare ATAM and Chapter 9 of ISO/IEC/IEEE 42020:2019

Authors:

Gustavo S. Melo and Michel S. Soares

Abstract: The evaluation of software architecture is a critical activity for ensuring system quality and alignment with business goals. The Architecture Tradeoff Analysis Method (ATAM) offers a systematic approach to identifying, prioritizing, and resolving tradeoffs within architectural decisions. In contrast, the ISO/IEC/IEEE 42020:2019 standard provides a structured framework for the design, evaluation, and documentation of software architectures in various domains. This paper presents a comparative analysis of ATAM and ISO/IEC/IEEE 42020:2019, highlighting their strengths and limitations. One conclusion is that it is important to note that the broader scope of ISO/IEC/IEEE 42020:2019 does not diminish the value of specialized methods like ATAM. Rather, it suggests a complementary relationship in which targeted evaluation techniques can be integrated into a more comprehensive framework. By examining their different approaches to architectural evaluation, this study aims to provide insights into their applicability to different contexts and implications for software architecture practices.
Download

Paper Nr: 185
Title:

User Stories: Does ChatGPT Do It Better?

Authors:

Reine Santos, Gabriel Freitas, Igor Steinmacher, Tayana Conte, Ana Carolina Oran and Bruno Gadelha

Abstract: In agile software development, user stories play a central role in defining system requirements, fostering communication, and guiding development efforts. Despite their importance, they are often poorly written, exhibiting quality defects that hinder project outcomes and reduce team efficiency. Manual methods for creating user stories are time-consuming and prone to errors and inconsistencies. Advancements in Large Language Models (LLMs), such as ChatGPT, present a promising avenue for automating and improving this process. This research explores whether user stories generated by ChatGPT, using prompting techniques, achieve higher quality than those created manually by humans. User stories were assessed using the Quality User Story (QUS) framework. We conducted two empirical studies to address this. The first study compared manually created user stories with those generated by ChatGPT through a free-form prompt. This study involved 30 participants and found no statistically significant difference between the two methods. The second study compared the free-form prompt with a meta-few-shot prompt, demonstrating that the latter outperformed both alternatives, achieving higher consistency and semantic quality, with an efficiency of 88.57% calculated from the success rate. These findings highlight the potential of LLMs with prompting techniques to enhance user story generation, offering a reliable and effective alternative to traditional methods.
Download

Paper Nr: 201
Title:

Using Historical Information for Fuzzing JavaScript Engines

Authors:

Bruno Gonçalves de Oliveira, Andre Takeshi Endo and Silvia Regina Vergilio

Abstract: JavaScript is a programming language commonly used to add interactivity and dynamic functionality to websites. It is a high-level, dynamically-typed language, well-suited for building complex, client-side applications and supporting server-side development. JavaScript engines are responsible for executing JavaScript code and are a significant target for attackers who want to exploit vulnerabilities in web applications. A popular approach adopted to discover vulnerabilities in JavaScript is fuzzing, which involves generating and executing large volumes of tests in an automated manner. Most fuzzing tools are guided by code coverage, but they usually treat all code parts equally, without prioritizing any code area. In this work, we propose a novel fuzzing approach, namely JSTargetFuzzer, designed to assess JavaScript engines by targeting specific source code files. It leverages historical information from past security-related commits to guide the input generation in the fuzzing process, focusing on code areas more prone to security issues. Our results provide evidence that JSTargetFuzzer hits these specific areas from 3.61% to 16.17% more than a state-of-the-art fuzzer, and covers from 1.46% to 15.09% more branches. In the end, JSTargetFuzzer also uncovered one vulnerability not found by the baseline approach within the same time frame.
Download
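
The history-mining step that drives the targeting can be sketched as follows: rank source files by how often past security-related commits touched them. The keyword list, file handling, and extensions are assumptions, not the authors' exact heuristic:

    import collections
    import subprocess

    KEYWORDS = ("security", "cve", "overflow", "use-after-free")

    # One subject line per commit, followed by the files it touched.
    log = subprocess.run(
        ["git", "log", "--name-only", "--pretty=format:__COMMIT__ %s"],
        capture_output=True, text=True, check=True).stdout

    counts = collections.Counter()
    relevant = False
    for line in log.splitlines():
        if line.startswith("__COMMIT__"):
            relevant = any(k in line.lower() for k in KEYWORDS)
        elif line and relevant:
            counts[line] += 1   # file touched by a security-related commit

    print(counts.most_common(10))  # candidate target files for the fuzzer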

Paper Nr: 254
Title:

Event Modeling for Reasoning of Consequences

Authors:

Haroldo R. S. Silva, Fabrício H. Rodrigues and Mara Abel

Abstract: The modeling of events is crucial in several domains in which the temporal evolution of data supports decision-making, but the representation limitations in the state of the art in conceptual modeling are still a barrier to software application development. Current solutions fail to reconcile behavior expressiveness, reuse, and technological compatibility. This work considers event modeling under the approach of ontologies and focuses on the reasoning behind inferring the consequences of events. We propose the use of rule description languages to improve traditional ontology reasoning with interpretation capabilities of specific semantics, preserving the utility of current technologies (by not depending on non-analyzable descriptions, either by representational, modeling, or technological choice) while inferring in ways that are not possible with conventional axioms. During this work, we explore solutions compatible with the Semantic Web to represent the behavior of events, resulting in an OWL representation of an event model supported by SHACL-SPARQL inference and consistency check. We demonstrate our proposition by importing the resulting model to a domain ontology of the O&G industry and showing how the event consequences inferred affect a query over the oil flow.
Download

Paper Nr: 326
Title:

Elicitation and Documentation of Explainability Requirements in a Medical Information Systems Context

Authors:

Christian Kücherer, Linda Gerasch, Denise Junger and Oliver Burgert

Abstract: [Context and motivation] Ongoing research indicates the importance of explainability, as it provides the rationale for the results and decisions of information systems to users. Explainability must be considered and implemented in software at an early stage, in requirements engineering (RE). For the completeness of software requirements specifications, the elicitation and documentation of explainability requirements is essential. [Problem] Although there are existing studies on explainability in RE, it is not yet clear how to elicit and document such requirements in detail. Current software development projects lack clear guidance on how explainability requirements should be specified. [Solution Idea] Through a review of the literature, existing works for the elicitation and documentation of explainability requirements are analyzed. Based on these findings, a template and additional guidance for capturing explainability requirements are developed. Following a design science approach, the template is applied and improved in a research project in the medical information domain. [Contribution] The overview of related work presents the current state of research on the documentation of explainability requirements. The template and additional guidance can be used in other information systems contexts for RE elicitation and documentation. The application of the template and the elicitation guidance in a real-world case shows the refinement and improved completeness of existing requirements.
Download

Short Papers
Paper Nr: 16
Title:

Pipeline for Ontology Construction Using a Large Language Model: A Smart Campus Use Case

Authors:

Daniel Lichtnow, Ana Marilza Pernas Fleischmann, Leonardo Vianna do Nascimento, Guilherme Medeiros Machado and José Palazzo Moreira de Oliveira

Abstract: Developing Semantic Web ontologies is a complex endeavor that necessitates a deep understanding of a specific domain, proficiency with Semantic Web patterns, the use of ontology editors, and the exploration and reuse of relevant existing ontologies. This paper presents a pipeline for ontology construction, leveraging a Large Language Model (LLM). Our work intends not to create a new methodology for ontology construction, but to explore how these tools can assist in the ontology-building process, acknowledging that they may not fully automate it. The pipeline was designed through an experience report, following the steps outlined in a recognized ontology construction guide to ensure a degree of reproducibility. A complex use case, a Smart Campus, was chosen to illustrate this process. This experience paper aims to highlight new possibilities while addressing the challenges encountered.
Download

Paper Nr: 33
Title:

Cybersecurity Risk Assessment Through Analytic Hierarchy Process: Integrating Multicriteria and Sensitivity Analysis

Authors:

Fernando Rocha Moreira, Edna Dias Canedo, Rafael Rabelo Nunes, André Luiz Marques Serrano, Cláudia Jacy Barenco Abbas, Marcelo Lopes Pereira Júnior and Fábio Lúcio Lopes de Mendonça

Abstract: Context: Cybersecurity is increasingly critical for public institutions, particularly as digital transformations expose them to a wide range of cybersecurity risks. Managing these risks effectively requires a structured approach that aligns with recognized standards and frameworks. Methods: This study presents the process of cybersecurity risk management within a Brazilian public agency, utilizing the cybersecurity incident detection controls proposed by the NIST Cybersecurity Framework (NIST-CSF). To assess and prioritize these controls, the Analytic Hierarchy Process (AHP) was applied as a multicriteria decision-making method. Expert judgments were collected and integrated into the AHP model to determine the relative importance of each control. Results: The application of the AHP method resulted in a prioritized list of cybersecurity controls. This list outlines the sequence in which controls should be implemented, enabling decision-makers to direct resources effectively and make informed choices in mitigating cybersecurity risks. Conclusion: The findings underscore the value of adopting multicriteria methods such as AHP in cybersecurity risk management. This paper contributes to the literature by encouraging the use of such methods as best practices for improving cybersecurity risk assessment and management in public sector organizations.
Download
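
The core AHP computation, a priority vector from the principal eigenvector of a pairwise-comparison matrix plus a consistency check, fits in a few lines; the 3x3 judgment matrix is a hypothetical example, not the study's data:

    import numpy as np

    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 3.0],
                  [1/5, 1/3, 1.0]])  # expert pairwise judgments (placeholder)

    eigvals, eigvecs = np.linalg.eig(A)
    k = int(np.argmax(eigvals.real))
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                     # priority weights of the controls

    n = A.shape[0]
    CI = (eigvals.real[k] - n) / (n - 1)
    CR = CI / 0.58                   # Saaty's random index RI = 0.58 for n = 3
    print("weights:", np.round(w, 3), "consistency ratio:", round(CR, 3))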

Paper Nr: 41
Title:

PhishCapture: An App to Detect Phishing Websites

Authors:

Willy Sotelo, Alvaro Roque and Pedro Castañeda

Abstract: This article introduces a desktop application for phishing detection, using a machine-learning object detection model based on YOLO (YOLOv5) combined with RPA. The research explains how the app works, presents its architecture, and discusses the performance of the model. It focuses on university students, since many phishing victims come from that sector. The app also automates the process of taking the screenshot that the model analyses to determine whether the visited website is a phishing page. The results show significant effectiveness in phishing detection, with a precision of 75.5%, which is valuable information for research in the phishing area. The solution offers another way of dealing with phishing and a useful contribution for universities and students who do not know which pages contain malicious content or attempt to steal their personal information. All results were obtained in Google Colab, training and testing the model with a dataset from two authors on Kaggle.

Paper Nr: 45
Title:

Toward an Ontology-Based Framework for Textual System Requirements Extraction and Analysis

Authors:

Zakaria Mejdoul, Gaëlle Lortal and Myriam Lamolle

Abstract: This paper introduces the context, objectives, and expectations of an ontology-driven framework designed to support engineers in the analysis of textual system requirements. The primary goals are twofold: (i) to keep the formal System Engineering (SE) processes that satisfy industrial constraints, and (ii) to provide a semantic representation of textual requirements, enabling consistent semantic analysis through the logical properties of ontologies. Formalization and semantic analysis of system requirements provide early evidence of adequate specification, reducing validation tasks and high-cost corrective measures during later system development phases. Integrating ontologies into the SE process enhances system engineers’ ability to understand and manage requirements, leading to a smoother design and more accurate operation.
Download

Paper Nr: 54
Title:

AI-Based Approaches for Software Tasks Effort Estimation: A Systematic Review of Methods and Trends

Authors:

Bruno Budel Rossi and Lisandra Manzoni Fontoura

Abstract: Accurate measurement of task effort in software projects is essential for effective management and project success in software engineering. Conventional methods often face limitations in both accuracy and their ability to adapt to the complexities of contemporary projects. This systematic analysis examines the use of ensemble learning methods and other artificial intelligence strategies for estimating task effort in software projects. The review focuses on methods that employ machine learning, neural networks, large language models, and natural language processing to improve the accuracy of effort estimation. The use of expert opinion is also discussed, along with the metrics utilized in task effort estimation. A total of 826 empirical and theoretical studies were analyzed using a comprehensive search across the ACM Digital Library, IEEE Digital Library, ScienceDirect, and Scopus databases, with 66 studies selected for further analysis. The results highlight the effectiveness, current trends, and benefits of these techniques, suggesting that adopting AI could lead to substantial improvements in effort estimation accuracy and more efficient software project management.
Download

Paper Nr: 55
Title:

Analyzing User Story Quality: A Systematic Review of Common Issues and Solutions

Authors:

João Vitor Oliveira and Lisandra Manzoni Fontoura

Abstract: User stories are widely adopted in agile development, serving as a fundamental technique for capturing and communicating software requirements. This paper aims to conduct a systematic literature review (SLR) to identify and analyze studies that address issues found in user stories, as well as possible ways to solve them. The primary motivation for this study was the advancement in the use of Large Language Models (LLMs), particularly after the launch of ChatGPT in 2022 by OpenAI. The research identified the main issues and solutions related to user stories between 2020 and 2024, focusing on issues related to user story quality. The results indicate that the most common issue in user stories is quality problems, cited in 15 articles, followed by requirements management and task assignment (12) and the derivation and generation of the conceptual model (8). Estimation is the least mentioned issue, appearing only three times. Regarding solution methods, researchers most frequently used Natural Language Processing, Machine Learning, and other Artificial Intelligence techniques, citing them in 15 articles. This demonstrates the well-established application of AI methods to address these challenges.
Download

Paper Nr: 60
Title:

Enhancing Continuous Integration Workflows: End-to-End Testing Automation with Cypress

Authors:

Maria Eduarda S. Vieira, Vitor Reiel M. de Lima, Windson Viana, Michel S. Bonfim and Paulo A. L. Rego

Abstract: In Agile Software Development, adopting Continuous Integration (CI) practices enables the continuous delivery of high-quality software through frequent code deployments and automated testing. Automated tests play a crucial role in this process by reducing manual effort and increasing reliability. Among available tools, Cypress is particularly notable for executing end-to-end (E2E) tests efficiently and reliably directly within the browser environment. This paper proposes a structured approach to integrating Cypress into CI/CD pipelines, utilizing the Page Object pattern to enhance the robustness and maintainability of E2E test suites. We apply this approach in a case study in which 155 E2E tests were developed for a web-based internal system of a multinational corporation with a globally distributed user base. By detailing the methodology and results of our study, we demonstrate how this approach optimizes test execution, expands test coverage, and facilitates rapid feature deployment without compromising system stability.
Download

Paper Nr: 76
Title:

SWeeTComp: A Framework for Software Testing Competency Assessment

Authors:

Nayane Maia, Ana Carolina Oran and Bruno Gadelha

Abstract: The quality of the process and product is critical for competitiveness in the software industry. Software testing, which spans all development phases, is essential to assess product quality. This requires testing professionals to master various technical and general skills. To address the competency gap in testing teams, a competency assessment of all team members is necessary. In response, SWeeTComp (A Framework for Software Testing Competency Assessment) was developed as a self-assessment tool to identify competency gaps. A study with 22 participants from a Software Engineering course at the Federal University of Amazonas evaluated SWeeTComp’s effectiveness in identifying competencies and gaps. Participants also provided feedback on its usability and effectiveness. Results show that SWeeTComp helped participants identify their strengths and weaknesses. Feedback was positive, though areas for improvement, such as clearer instructions and more detailed feedback, were noted.
Download

Paper Nr: 93
Title:

A Generic Process Model for Supporting Requirements Engineering in IoT Systems

Authors:

Mohamad Mjalled, Khin Than Win and Elena Vlahu Gjorgievska

Abstract: Internet of Things (IoT) has gained significant traction in both academic and industrial spheres, leading to extensive research and practical applications that promise to transform various aspects of society and the economy. Despite the proliferation of IoT concepts and solutions around initiation and requirements analysis, the field suffers from fragmented terminology and a lack of interoperability, impeding effective knowledge sharing. This paper addresses this gap by developing a comprehensive metamodel for requirements engineering (RE) in IoT systems, following the guidelines of design science research. The proposed metamodel integrates common concepts and processes from existing literature to create a standardized, context-agnostic framework for IoT RE. The expressiveness and utility of the metamodel are demonstrated through two case studies, highlighting its potential to harmonize and enhance IoT development practices. The paper concludes by discussing the implications, benefits, and limitations of this approach, paving the way for more cohesive and efficient IoT systems development.

Paper Nr: 95
Title:

Predictive Regression Models of Machine Learning for Effort Estimation in Software Teams: An Experimental Study

Authors:

Wilamis K. N. Silva, Bernan R. Nascimento, Péricles Miranda and Emanuel P. Vicente

Abstract: Estimating the effort required by software teams remains complex, with numerous techniques employed over the years. This study presents a controlled experiment in which machine learning techniques were applied to predict software team effort. Seven regression techniques were tested using eight PROMISE datasets, with their performance evaluated across five metrics. The findings indicate that the XGBoost technique yielded the best results. These results suggest that XGBoost is highly competitive compared to other established techniques in the field. This paper lays a foundation to guide future research in the field of software team effort estimation.
Download
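
The best-performing setup reported, XGBoost regression under cross-validation, can be sketched as below; the features and effort values are synthetic placeholders for a PROMISE-style dataset:

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from xgboost import XGBRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 8))            # e.g. size/complexity metrics
    y = 3 * X[:, 0] + rng.normal(size=100)   # hypothetical effort values

    model = XGBRegressor(n_estimators=300, learning_rate=0.05, max_depth=4)
    mae = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_absolute_error")
    print("MAE per fold:", np.round(mae, 2))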

Paper Nr: 112
Title:

Big Data Fortaleza Platform: Quality Improvement with Testing Process

Authors:

Amanda K. B. Cavalcante, Ícaro S. de Oliveira, Victória T. Oliveria, Pedro Almir M. Oliveira, Tales P. Nogueira, Ismayle S. Santos and Rossana M. C. Andrade

Abstract: In July 2022, the City Planning Institute of Fortaleza (Iplanfor), in collaboration with the Computer Networks, Software and Systems Engineering Group (GREat) from the Federal University of Ceará, launched a project to develop a platform utilizing Big Data for data analysis and predictive modeling. This initiative aimed to support strategic planning and create solutions that would foster the development of the city of Fortaleza, ultimately guiding public policies based on solid evidence. The platform was named Big Data Fortaleza. Given its focus on government applications, it was essential to validate the platform through various testing methods. This article outlines the adopted testing process and highlights critical outcomes, including improved prediction accuracy and enhanced system and data security efficiency. Additionally, it discusses valuable lessons learned, such as the importance of effective team communication and the necessity for ongoing adjustments to maintain the platform’s quality and reliability.
Download

Paper Nr: 123
Title:

Multiple Sequence Alignment Using Ant Colony Optimization with Chaotic Jump

Authors:

Matheus Lino de Freitas, Matheus Carreira Andrade, Anderson Rici Amorim, Vitoria Zanon Gomes, Bruno Rodrigues da Silveira, Gabriel Augusto Prevato, Luiza Guimarães Cavarçan, Carlos Roberto Valêncio and Geraldo Francisco Donegá Zafalon

Abstract: Multiple sequence alignment is one of the most relevant techniques in bioinformatics. Next-generation sequencing technologies produce a large volume of data that is later analyzed by biologists, biomedical scientists, and geneticists. Due to this huge volume, computational support is necessary to aid in the data analysis, for example, in sequence alignment. This work aims to introduce a novel method that combines the KAlign and Clustal Omega tools in order to produce a seed alignment that is later refined by Ant Colony Optimization with Chaotic Jump. The results showed that for every test the ACO produced better alignments than the MSA-GA tool, and in at least 50% of the tests the proposed method was able to improve the initial alignments produced by the KAlign and Clustal Omega tools.
Download
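
As background for readers unfamiliar with the chaotic-jump idea mentioned above, the fragment below is a generic illustration of the technique, not the paper's algorithm: a logistic map is assumed as the chaos source, and a shuffle stands in for a real gap-shift perturbation operator.

    import random

    def logistic_map(x, r=4.0):
        # Classic chaotic sequence on (0, 1), often used to drive "jumps"
        # that help population-based metaheuristics escape local optima.
        return r * x * (1.0 - x)

    def chaotic_jump(alignment_columns, chaos_state, jump_prob=0.1):
        # When the chaos value falls below a threshold, perturb the current
        # solution; a real implementation would shift gaps, not shuffle.
        chaos_state = logistic_map(chaos_state)
        if chaos_state < jump_prob:
            random.shuffle(random.choice(alignment_columns))
        return alignment_columns, chaos_state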

Paper Nr: 161
Title:

Perception of Professionals Regarding Behavior-Driven Development (BDD): A Descriptive and Statistical Study

Authors:

Shexmo Richarlison Ribeiro dos Santos, Gustavo S. Melo, Michel S. Soares and Fabio Gomes Rocha

Abstract: Context: Analyze the perception of professionals who use the Behavior-Driven Development (BDD) framework in software development. Problem: Identify the aspects inherent to adopting BDD in the software industry. Solution: Through descriptive and statistical analysis, advance the understanding of the characteristics of BDD use. Method: Carry out a survey to characterize BDD, targeting professionals who use this framework in their work activities. Summary of results: The survey carried out in this study was answered by 43 professionals, characterizing how BDD has been adopted in software development. Contributions and impact: The main contribution of this article is the finding that lack of experience in using BDD directly impacts the performance of work activities. Thus, experience in adopting BDD is necessary to achieve the potential expected from this framework.
Download

Paper Nr: 178
Title:

Improving Clarity and Completeness in User Stories: Insights from a Multi-Domain Analysis with Developer Feedback

Authors:

Maria Regina Araújo Souza and Tayana Conte

Abstract: The clarity and completeness of requirements are crucial in agile software development, where user stories are widely used to capture user needs. However, poorly written user stories can introduce ambiguities, leading to inefficiencies in the development process. This paper presents a detailed analysis of 30 user stories from five different domains, along with feedback from 50 developers gathered through a questionnaire. The analysis, based on the INVEST criteria (Independent, Negotiable, Valuable, Estimable, Small, and Testable), identified common issues such as vague acceptance criteria, insufficient technical details, and overly broad stories. Based on these findings, we present targeted recommendations for improving user story quality, including refining acceptance criteria, breaking down large stories into smaller components, and incorporating adequate technical details. The feedback from developers reinforced the value of these practices, highlighting the importance of collaboration in refining user stories. This study offers actionable insights and practical strategies to enhance user story quality and promote continuous improvement in agile software development.
Download

Paper Nr: 191
Title:

Transformation of Cyclic Process Models with Inclusive Gateways to Be Executable on State-of-the-Art Engines

Authors:

Thomas M. Prinz, N. Long Ha and Yongsun Choi

Abstract: One aim of business process management is to automate business process models. Since process models shall reflect the processes occurring in companies, such models can be complex and contain non-trivial behavior with inclusive semantics and loops formed by sequence flows. This paper shows on a test set that state-of-the-art BPMN execution engines do not fully support inclusive gateways, especially if they are within loops. This circumstance prevents the one-to-one automation of process models. As any transformation of process models with inclusive semantics into models without them risks exponential growth of the models, this paper presents a transformation that decomposes cyclic process models into a set of message-exchanging acyclic process models. The transformed models are directly executable on most investigated engines. The transformation itself is achievable in quadratic time complexity, increases the size of the model only quadratically in the worst case, and can be fully automated as a pre-processing step before execution, thus avoiding changes to execution engines.
Download

Paper Nr: 193
Title:

Exploring the Use of ChatGPT for the Generation of User Story Based Test Cases: An Experimental Study

Authors:

Felipe Sonntag Manzoni, Rávella Rodrigues and Ana Carolina Oran Rocha

Abstract: CONTEXT: The rapid advancement of Artificial Intelligence (AI) technologies has introduced new tools and methodologies in software engineering, particularly in test case generation. Traditional methods for generating test cases are often time-consuming and rely on manual input, limiting efficiency and coverage. The ChatGPT 3.5 model, developed by OpenAI, represents a novel approach to automating this process, potentially transforming software testing. OBJECTIVE: This article aims to explore the application of ChatGPT 3.5 in generating test cases based on user stories from a course in software engineering, evaluating the effectiveness, user acceptance, and challenges associated with its implementation. METHOD: The study involved generating test cases using ChatGPT 3.5, which were then executed by students from the Practice in Software Engineering (PES) course at the Federal University of Amazonas (UFAM); data were collected through surveys and qualitative feedback, focusing on TAM model perceptions and students’ self-perceptions. RESULTS and CONCLUSIONS: Results indicate a generally positive reception of ChatGPT 3.5 for this purpose, with participants praising it for enhancing several aspects of test case creation, which resulted in high intention of future use and perceived value. However, some challenges were raised, meaning users should validate and review generated results. Furthermore, the results highlight the importance of integrating AI tools while keeping human expertise in the loop to maximize their effectiveness.
Download

Paper Nr: 211
Title:

Data Privacy in Educational Contexts: Analyzing Perceptions, Practices and Challenges in Personal Data Protection

Authors:

Yuri Correia de Barros and Jéssyka Vilela

Abstract: This study aims to investigate the increasing relevance of data privacy in the educational context as digital processes and the use of technology become more prevalent in educational institutions. The protection of personal data, especially in academic environments, is a sensitive and challenging topic due to the large volume of information shared between students, teachers, and administrators, making the adoption of efficient and secure practices essential. The study analyzes current data security practices and the challenges faced by educational institutions in safeguarding personal information. Focusing on the guidelines and requirements established by data protection laws such as Brazil’s LGPD and the European Union’s GDPR, the research examines both the legal implications and ethical issues related to the treatment of personal data in the educational field. Alongside a detailed review of best practices and regulatory demands, the study is based on field research conducted through a survey with students and teachers from various institutions, including public universities, private institutions, and technical schools. The survey’s goal is to understand users’ perceptions of data protection and to assess their knowledge of the relevant legislation. This approach provides a critical insight into how prepared students and teachers are to address data privacy challenges in academic settings. The analysis of the research conducted with educators and students from educational institutions revealed key insights into the treatment of personal data. The results indicate concerns about transparency and data security, highlighting the need to improve education on privacy and promote more transparent practices within institutions, in line with the LGPD, to foster a safer and more ethical environment for students.
Download

Paper Nr: 228
Title:

Micro4Delphi: A Process for the Modernization of Legacy Systems in Delphi to Microservice Architecture

Authors:

Lucas Fernando Fávero, Gabriel Soares Mário and Frank José Affonso

Abstract: The modernization of legacy systems to microservice architecture (MSA) has been a subject of interest in both academic and industrial areas. This architectural style has facilitated the development of software systems by composing them as a collection of small and loosely coupled services, each running in its own process and communicating with lightweight mechanisms. In parallel, Delphi is an integrated development environment (IDE), based on the Object Pascal programming language, that enables the rapid application development of software for desktop, mobile, web, and console applications. Although the software systems developed in Delphi have considerable relevance in contemporary software, there is a lack of documented processes that facilitate the modernization of legacy systems in Delphi to MSA. This paper presents Micro4Delphi, a modernization process based on six well-defined activities. Each activity is composed of a set of steps, which may vary in number and content, allowing the activities to be carried out flexibly. A case study was conducted to show the applicability of the process proposed in this paper. The results provide important evidence that enables a clear perspective on the process’s contribution to software modernization.
Download

Paper Nr: 230
Title:

Agile Project Management in Government Software Development: Addressing Challenges in Education Public Policy

Authors:

José Silva, Alenilton Silva, André Araújo and André Silva

Abstract: This article explores the adoption of Agile project management practices in the development of government software solutions, specifically focusing on Brazil’s National Textbook Program (PNLD). The PNLD is a cornerstone public policy initiative that ensures the distribution of millions of educational resources to public schools, addressing dynamic requirements and engaging diverse stakeholders. This study identifies the complexities of managing public policy-driven software projects through comprehensive case study research involving document analysis and interviews with project managers and stakeholders. Key challenges include aligning functionalities with user needs, improving communication between developers and users, and fostering iterative feedback processes. The findings reveal that while Agile practices have positively influenced the project’s efficiency and adaptability, critical gaps remain in addressing requirements volatility and stakeholder collaboration. Based on these insights, the article proposes a set of good practices tailored to enhance Agile project management in similar contexts. These practices aim to improve responsiveness, stakeholder engagement, and process scalability, contributing to the successful implementation of dynamic and multifaceted government policies.
Download

Paper Nr: 237
Title:

Advancing Cyberbullying Detection: A Hybrid Machine Learning and Deep Learning Framework for Social Media Analysis

Authors:

Bishal Shyam Purkayastha, Md. Musfiqur Rahman, Md. Towhidul Islam Talukdar and Maryam Shahpasand

Abstract: Social media platforms have led to the prevalence of cyberbullying, seriously challenging the mental health of individuals. This research examines how effectively different machine learning and deep learning techniques can detect cyberbullying in online communications. Using two different tweet datasets obtained from Mandalay and Kaggle, we developed a balanced framework for binary classification. The research emphasizes comprehensive data preprocessing, applying text normalization and class balancing by random oversampling to increase the dataset’s quality. The models include several traditional machine learning classifiers (Random Forest, Extra Trees, AdaBoost, MLP, and XGBoost) and advanced deep learning architectures such as Bidirectional LSTM, BiGRU, and BERT. The results confirm that deep learning models, especially BERT, yield outstanding performance, with an accuracy rate of 92%, demonstrating their capability to detect, and thereby help prevent, cyberbullying through automated detection.
Download
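
To make the class-balancing step concrete, here is a minimal sketch of random oversampling followed by a classical classifier. The toy tweets, labels, and classifier choice are illustrative assumptions, and the deep models (BiLSTM, BiGRU, BERT) are out of scope here.

    import numpy as np
    from sklearn.utils import resample
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.ensemble import RandomForestClassifier

    tweets = np.array(["you are awful", "nice game today", "great job", "loved it"])
    labels = np.array([1, 0, 0, 0])  # 1 = bullying (toy, imbalanced)

    # Randomly duplicate minority-class samples until the classes are balanced.
    minority = tweets[labels == 1]
    extra = resample(minority, replace=True,
                     n_samples=int((labels == 0).sum() - (labels == 1).sum()),
                     random_state=0)
    X = np.concatenate([tweets, extra])
    y = np.concatenate([labels, np.ones(len(extra), dtype=int)])

    clf = RandomForestClassifier(random_state=0)
    clf.fit(TfidfVectorizer().fit_transform(X), y)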

Paper Nr: 267
Title:

Integration of User-Centered Design into the Scrumban Framework: A Case Study on the Groovoo Platform

Authors:

Thiago Luiz de Souza Gomes, Victor Samuel dos Santos Lucas, Rejane Figueiredo, Glauco Pedrosa and Elaine Venson

Abstract: This paper explores the integration of User-Centered Design (UCD) guidelines into the Scrumban framework in the development of the Groovoo platform, an application for ticket sales and issuance developed by the startup Atena Solutions. While agile methodologies, such as Scrum and Kanban, promote flexibility and efficiency, the speed of iterative cycles can compromise a deeper understanding of user needs. In this context, the hybrid Scrumban framework, which combines elements of Scrum and Kanban, was complemented with UCD practices to ensure that users remained at the center of design decisions, delivering a more satisfying and intuitive experience. Using a qualitative case study, data were collected through interviews, focus groups, and usability testing with end users. The applied UCD techniques, such as personas, customer journey mapping, and usability testing, helped identify areas for improvement in user experience and enabled continuous adjustments during the agile process. The results indicate that the integration of UCD into Scrumban brought tangible benefits to usability and user satisfaction without compromising development agility. This work contributes to the literature by demonstrating how combining UCD with agile methodologies can strengthen user-centered development in dynamic and consumer-driven environments.
Download

Paper Nr: 270
Title:

Reviewing Reproducibility in Software Engineering Research

Authors:

André F. R. Cordeiro and Edson Oliveira Jr

Abstract: This paper presents a Systematic Mapping Study (SMS) on reproducibility in Software Engineering (SE), analyzing definitions, procedures, investigations, solutions, artifacts, and evaluation assessments. The research explores how reproducibility is defined, applied, and investigated, identifying several approaches and solutions. The final set comprised 25 primary studies, with the definitions of reproducibility grouped into categories such as method, repeatability, probability, and ability. The application procedures are categorized into method, architecture, container, technique, framework, environment, notebook, toolkit, and benchmark. The investigation of reproducibility is analyzed through workflows, approaches, prototypes, methods, measures, methodologies, technologies, case studies, and frameworks. Solutions to reproducibility problems include environments, tools, benchmarks, initiatives, methodologies, notebooks, and containers. The artifacts considered include tools, environments, scenarios, datasets, models, diagrams, notebooks, algorithms, codes, representation structures, methodologies, containers, repositories, sequences, and workflows. Reproducibility assessment is performed using methods, experiments, measurements, processes, and factors. We also discuss future research opportunities. The results aim to benefit SE researchers and practitioners by providing an overview of how to organize and provide reproducible research projects and artifacts, in addition to pointing out research opportunities related to reproducibility in the area.
Download

Paper Nr: 32
Title:

Micro-Frontend Architecture in Software Development: A Systematic Mapping Study

Authors:

Giovanni Cunha de Amorim and Edna Dias Canedo

Abstract: Context: The integration of new technologies into monolithic frontend projects poses significant challenges, including code redundancy, inconsistency, and limited scalability. The micro-frontend architecture has emerged as a promising solution, offering a more modular and independent approach to frontend development. Goal: This study aims to explore the impacts and challenges of adopting micro-frontend architecture in software projects. Method: We conducted a systematic mapping study to identify common architectural patterns and strategies used in micro-frontends. Results: Our findings underscore the potential of micro-frontend architecture in modernizing frontend development, particularly for large, complex projects. However, successful implementation depends on the project’s scale and requires careful methodological and technological planning.

Paper Nr: 72
Title:

Use of Knowledge Management in IDiAL

Authors:

Emine Bilek

Abstract: The paper deals with knowledge management and organizational learning in general, and with their possible application at the Institute for the Digital Transformation of Application and Living Domains (IDiAL) at Fachhochschule Dortmund University of Applied Sciences and Arts in particular. Firstly, the different forms of knowledge management and organizational learning are discussed, followed by a description of IDiAL, its development, and its focus. In addition to the main areas of research and the transdisciplinary collaboration between the institute's scientists, this article describes how organizational learning is practiced at IDiAL by means of collaborative software with sample contents, and how this has improved communication in general and administrative processes at the institute's head office.
Download

Paper Nr: 75
Title:

Acceptance Criteria Validation in Agile Projects Using AI and NLP Techniques

Authors:

Ana Carla Gomes da Silva, Afonso Sales and Fabio Gomes Rocha

Abstract: In agile software development, user stories and their acceptance criteria play a critical role in ensuring alignment between stakeholder expectations and system functionality. However, the manual validation of these criteria is often labor-intensive and prone to bias. This study investigates the application of Artificial Intelligence (AI) techniques, particularly Natural Language Processing (NLP) and Machine Learning (ML), to automate the analysis and validation of user stories. Using a dataset of user stories collected from academic and industry projects, we trained and evaluated four ML algorithms: Multilayer Perceptron (MLP), Support Vector Machine (SVM), Naive Bayes, and Random Forest. The models were assessed for their ability to classify acceptance criteria accurately and efficiently. Our findings demonstrate the potential of AI to enhance the validation process, achieving over 60% accuracy in certain cases, with SVM standing out as the most robust algorithm. This research highlights the transformative role of AI in improving software requirements analysis and lays the foundation for future innovations in automated validation and quality assurance in agile environments.
Download
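
For readers who want a concrete picture of such a classification setup, the following minimal sketch pairs a TF-IDF representation with an SVM, the algorithm the abstract reports as most robust. The toy texts and labels are assumptions, not the study's dataset or tuning.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline

    # Toy acceptance-criterion texts; 1 = well-formed, 0 = vague (assumed labels).
    texts = ["Given a logged-in user, when saving, the record is stored",
             "The system should be fast"]
    labels = [1, 0]

    clf = make_pipeline(TfidfVectorizer(), SVC(kernel="linear"))
    clf.fit(texts, labels)  # real studies use proper train/test splits
    print(clf.predict(["When the user clicks save, the data is stored"]))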

Paper Nr: 80
Title:

Prototyping Smart City Solutions with Metaverse and Digital Twins: A Systematic Literature Mapping

Authors:

Márcio Roberto Rizzatto, Alexandre L’Erario and Eduarda Maganha de Almeida

Abstract: The concept of Smart Cities (SC) has emerged as a strategic approach to address contemporary urban challenges. The integration of innovative technologies, such as Digital Twins (DT or DTs), the Metaverse, prototyping, and stakeholder participation, offers promising solutions for improving urban management. This paper presents a systematic mapping of the main approaches and research on prototyping solutions in Smart Cities, exploring how the metaverse and Digital Twins contribute to these advancements. Based on a detailed analysis of relevant articles, this work investigates key trends and challenges, providing a consolidated view of the state-of-the-art in these emerging areas.
Download

Paper Nr: 92
Title:

Usability Evaluation of Requirement Collaboration Features in Requirements Management Tools

Authors:

Oana Rotaru, Silviu Vert and Radu Vasiu

Abstract: In the automotive industry, system requirements, derived from environmental contexts and project goals, evolve from abstract concepts to detailed specifications and, of course, increase in complexity. This complexity continues into the project’s design, implementation, and integration stages, highlighting the importance of robust, tool-supported Requirements Management (RM). Requirements Management tools (RMT) are characterized by specific features meant to facilitate these processes, which also leads to complex user interfaces used by requirements engineers, managers, developers, and testers. This paper presents the usability evaluation results of the collaboration features of a Requirements Management Tool, DOORS Next Generation (DNG), recently introduced in an automotive company department.
Download

Paper Nr: 126
Title:

The Impact of AI Tools on Software Development: A Case Study with GitHub Copilot and Other AI Assistants

Authors:

Sergio Cavalcante, Erick Ribeiro and Ana Carolina Oran

Abstract: Background - With the increasing complexity of software projects and the demand for rapid and high-quality deliveries, Generative Artificial Intelligence (GenAI) tools have emerged as powerful allies in software development. Objective - This study aims to evaluate the impact of using Code Generation Assistants—such as GitHub Copilot, ChatGPT, and Gemini—in software development environments. Method - We conducted a satisfaction survey with 57 volunteers in an R&D organization, including developers, test analysts, and product owners, collecting quantitative and qualitative data on the use of these tools. Results - The results indicate that the use of these tools significantly increased productivity, improved code quality, and accelerated professional learning. Additionally, it facilitated the automation of repetitive tasks, allowing focus on more complex challenges. However, challenges such as the need for constant review of generated code and the risk of excessive dependency were identified. Conclusion - We conclude that, despite the challenges, GenAI tools have a significant positive impact on software development, and organizational support is crucial to maximize their benefits.
Download

Paper Nr: 159
Title:

Assessing Cybersecurity Readiness Among SME

Authors:

Bjarne Lill, Clemens Sauerwein, Alexander Zeisler, Carina Hochstrasser and Nico Mexis

Abstract: Information security is a critical issue for small and medium-sized enterprises (SMEs) around the world. These organisations face an increasing number of security incidents and increasingly sophisticated attacks. In order to remain competitive and protect their own and their customers’ critical information, it is essential that SMEs can manage their cybersecurity risks appropriately. Accordingly, it is important that these SMEs can rely on tailored information security assessments and frameworks. However, there is a scarcity of knowledge regarding their specific needs and the practical implementation of cybersecurity within these organisations. To address this knowledge gap, an exploratory study was conducted on the SME cybersecurity situation, with a particular focus on the implementation level of cybersecurity controls within SMEs in Austria and Germany. We surveyed 30 SMEs regarding their cybersecurity implementation situation in 2023. Our findings show, among other things, a very heterogeneous picture regarding the implementation level of cybersecurity controls and outline areas for action.
Download

Paper Nr: 195
Title:

The Effects of Digital Twins Development on System's Long-Term Performance, Potential Capabilities, and Possible Benefits

Authors:

Ahmed Habib and Michael W. Grenn

Abstract: Practitioners working in Digital Engineering applications, especially applications involving Digital Twins, are concerned with maintaining the twinning state between the cyber and physical entities throughout the system’s life cycle. Although this level of granularity during the operation mode is required to maintain the state of the Digital Twin, in many cases it negatively impacts the emergent behavior of the system in the long run. This effort explores the benefits of the system’s architecture interfaces, assuming the preservation of the twinning state, to uncover the convergence of the latent system behavior, which can offer insights to systems engineers and decision makers for guiding current twinning arrangements toward the desired system behavior in the long run. The effort explores the Hastings-Metropolis Markov chain Monte Carlo algorithm at the interface sampling level and discusses the potential for expansion beyond systems’ interface architecture through an empirical analysis example and a discussion of future research directions.
Download

Paper Nr: 219
Title:

Bridging the Gap in Agricultural Sharing Economy: A Systematic Review for Evaluating Information Systems for Machinery Efficiency

Authors:

Reinaldo Wendt, Eduardo Tiadoro, Fabio Basso and Maicon Bernardino

Abstract: The sharing economy is rapidly transforming various industries, including agriculture, where there is growing demand for systems that facilitate machinery rental and sales. Agricultural machinery is often expensive and is used primarily during specific periods, such as harvests. This limited utilization leads to high depreciation costs, imposing substantial and scalable financial burdens on owners. This study investigates how a sharing economy model can improve the efficiency of agricultural machinery use. By allowing equipment owners to maximize utilization and providing small-scale farmers with affordable access to machinery, such a model reduces the need for significant upfront investments. We conducted a qualitative analysis to evaluate the effectiveness of current information systems that support this approach. The methodology involved exploring grey literature to identify relevant tools, defining evaluation criteria, and conducting a qualitative assessment of existing platforms. Among 14 evaluated platforms, we rated only four as acceptable, with only one achieving a good rating. None fully met all the criteria, revealing a gap between user needs and the solutions currently available in the market. This study highlights the inadequacies in existing platforms and offers valuable insights for advancing the sharing economy in agriculture. By identifying specific needs and challenges, the findings provide a foundation for future research and the development of more effective technologies and practices in this domain.
Download

Paper Nr: 292
Title:

Reproducibility Practices of Software Engineering Controlled Experiments: Survey and Prospective Actions

Authors:

André F. R. Cordeiro and Edson Oliveira Jr

Abstract: Reproducibility can be described as a characteristic that contributes to expanding knowledge in science. This paper investigates the reproducibility of experiments in Software Engineering (SE) in a context where the literature points to challenges in verifying experimental results. The central problem addressed is the difficulty of reproducing experiments in SE due to different factors, such as artifact sharing and management. We then aimed to identify the factors necessary to achieve reproducibility in SE experiments, characterizing these factors in terms of the reproducibility crisis, experimental workflows, research practices, the application of FAIR principles, and reproducibility improvements. We planned and conducted a survey with 16 participants who answered a questionnaire with 33 questions. The results show that most participants perceive a reproducibility crisis in the field and point to factors such as lack of public data and incomplete information on methods and experimental setups as the main causes. Furthermore, the results highlight the importance of sharing data, metadata, and information about research teams. We also point out possible actions to improve reproducibility in SE experiments. The contributions include a detailed analysis of the challenges to reproducibility in SE, as well as the identification of practices and measures that can improve it.
Download

Paper Nr: 312
Title:

Applying Prototyping and Exploratory Testing to Ensure Software Quality in an Information System for Power Tampering Detection: An Experience Report

Authors:

Sabryna Araujo, Ramille Santana, Joana Silva, Arthur Passos, Matheus Menezes, Felipe Feyh, Carlos Moura, Lucas Pinheiro, Auriane Santos, Aristofanes Silva, João Dallyson, Italo Francyles and Luis Rivero

Abstract: This paper presents an experience report on the application of exploratory testing in the development of a system aimed at detecting illegal connections in the electricity supply, a critical problem that causes financial losses and compromises the safety and efficiency of power grids. The research utilized a high-fidelity prototype developed in the Figma tool as a basis for planning and executing tests, allowing the identification of functional and usability defects in an agile and collaborative manner. The adopted methodology involved the use of iterative meetings for continuous validation, ensuring alignment between requirements and implementation. During the process, 49 defects were recorded and categorized, enabling significant system improvements and ensuring higher quality in the final product. The results highlight the effectiveness of integrating prototypes and exploratory testing to reduce validation time, identify critical issues, and promote team alignment. As future work, it is proposed to expand the system’s prioritization criteria and conduct user tests in real scenarios. This study contributes to the literature by reinforcing the role of agile methodologies and modern testing techniques in the development of robust and effective technological solutions.
Download

Area 4 - Software Agents and Internet Computing

Full Papers
Paper Nr: 141
Title:

Enhancing IoT Interactions with Large Language Models: A Progressive Approach

Authors:

Daniela Timisica, Radu Boncea, Mariana Mocanu, Bogdan Dura and Sebastian Balmus

Abstract: This paper explores the development and implementation of an Intelligent Virtual Assistant (IVA) leveraging Large Language Models (LLMs) to enhance interactions with Internet of Things (IoT) systems. Our work demonstrates the initial success in enabling the IVA to perform telemetry readings and basic interpretations, showcasing the potential of LLMs in transforming Natural Language Processing (NLP) applications within smart environments. We discuss the future enhancements planned for the IVA, including the ability to sequentially call multiple tools, perform readings from various sources, and execute robust data analysis. Specifically, we aim to fine-tune the LLM to translate human intentions into Prometheus queries and integrate additional analytical tools like MindDB to extend the system’s capabilities. These advancements are expected to improve the IVA’s ability to provide comprehensive responses and deeper insights, ultimately contributing to more intelligent and intuitive virtual assistants. Our ongoing research highlights the potential of integrating advanced NLP, IoT, and data analytics technologies, paving the way for significant improvements in smart home and vehicle environments.
Download

Paper Nr: 250
Title:

A Modularized and Reusable Architecture for an Embedded MAS IDE

Authors:

Gabriel Ramos Alves Lima, Elaine Maria Pereira Siqueira, Thácito R. Costa Medeiros, Carlos Eduardo Pantoja, Nilson Mori Lazarin and José Viterbo

Abstract: The rise of the Embedded Multi-Agent Systems (MAS) field brings challenges regarding technologies specifically tailored for development in this area. Among these, chonIDE, as a new Integrated Development Environment (IDE) dedicated to this newly explored scenario, has gaps in its code that can be improved to meet market demands. In this regard, this paper outlines a new architecture through the restructuring of the components of this IDE to make it more scalable and, consequently, mitigate challenges in the Embedded MAS scenario. Additionally, changes in file structure are suggested to promote code versioning and enhance interoperability with other IDEs at the project reuse level, thus making it better adapted to market demands.
Download

Short Papers
Paper Nr: 48
Title:

A LoRaWAN Multi-Network Server Application for Smart Cold Chain Tracking in Remote Areas

Authors:

Alex Fabiano Garcia, Wanderley Lopes de Souza and Luís Ferreira Pires

Abstract: This paper describes the design and implementation of a multi-network server application for smart cold chain tracking based on the LoRaWAN technology. The system aims to solve logistics challenges in remote areas by using IoT technologies to monitor and manage the conditions of perishable goods during transportation, ensuring their quality and safety. The proposed solution encompasses various sensor types and integrates multiple network providers to improve coverage, aiming to support decision-makers in public-private partnerships when addressing social issues in outlying regions. The study compares the simulation of antenna coverage with the effective distance from the gateway that receives the signal. Field tests in the Netherlands demonstrate the system’s effectiveness in real-world scenarios, showing features such as GPS-free geolocation by multilateration, long-range communication, and the potential for applying our solution in other domains beyond cold chain logistics.
Download

Paper Nr: 59
Title:

Recommending Points of Interest with a Context-Aware Dual Recurrent Neural Network

Authors:

Lucas Silva Couto, Gislaine Camila Lapasini Leal and Marcos Aurélio Domingues

Abstract: The advent of location-based social networks (LBSNs) has reshaped how users engage with their surroundings, facilitating personalized connections with nearby points of interest (POIs) such as restaurants, tourist attractions, and so on. To help users find points that match their interests, recommender systems can be used to filter a large number of POIs according to the users’ preferences. However, the context in which the users make their check-ins must be taken into account, which justifies the usage of context-aware recommender systems. The goal of this work is to use a Context-Aware Dual Recurrent Neural Network to acquire contextual information (represented by embeddings) for each POI, given the sequence of points that each user has checked in. Then, the contextual information (i.e., the embeddings) is used by context-aware recommenders to suggest POIs. We evaluated the contextual information by using four context-aware recommender systems on two datasets. The results showed that the contextual information obtained by our proposed method presents better results than the state-of-the-art method proposed in the literature.
Download
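
A minimal sketch of the final recommendation step, assuming the dual RNN has already produced one embedding per POI (the toy vectors below stand in for those); the network itself is not reproduced here.

    import numpy as np

    # Toy POI embeddings; in the paper's setting these come from the RNN.
    poi_embeddings = {"cafe": np.array([0.9, 0.1]),
                      "museum": np.array([0.2, 0.8]),
                      "bar": np.array([0.8, 0.3])}

    def recommend(checkins, k=1):
        # Average the embeddings of the user's recent check-ins, then rank
        # unvisited POIs by dot-product similarity to that profile.
        profile = np.mean([poi_embeddings[p] for p in checkins], axis=0)
        scores = {p: float(v @ profile)
                  for p, v in poi_embeddings.items() if p not in checkins}
        return sorted(scores, key=scores.get, reverse=True)[:k]

    print(recommend(["cafe"]))  # -> ['bar'], the nearest embedding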

Paper Nr: 142
Title:

A Mobile App for Food Purchase Decision and Waste Minimizing Using IoT, Social Tools, ML and Chatbots

Authors:

Robin Faro, Angelo Fortuna and Giuseppe Di Dio

Abstract: Chatbots and conversational systems are increasingly emerging as technologies to support decision-making systems and to improve human-machine interaction. Our paper aims to demonstrate how social media and chatbots can improve a consumer's food purchase decisions and reduce food waste, while simplified conversational systems are adopted to facilitate the interaction between users and the application. In particular, the paper presents a mobile app for food purchase decision and waste minimizing in which social tools and chatbots play an important role in supporting the implementation of an electronic pantry that optimizes food purchase and consumption. This smart pantry keeps a memory of all the foods present in the home pantry. This allows the app to recommend the use of products that are about to expire, to provide, with the help of a chatbot, advice on purchasing products useful for making recipes that take the pantry's contents into account, and to highlight gastronomic events worth attending for the tastings they offer.
Download

Paper Nr: 285
Title:

Towards a RAG-Based WhatsApp Chatbot for Animal Certification Platform Support

Authors:

Gabriel Vieira Casanova, Pedro Bilar Montero, Alencar Machado and Vinicius Maran

Abstract: Ensuring compliance with animal production certification requirements is often a complex and time-consuming task. This paper presents a domain-specific chatbot designed to assist users in requesting certifications within the PDSA-RS framework. By leveraging Retrieval-Augmented Generation (RAG) and large language models (LLMs), the proposed system retrieves relevant information from specialized documents and generates accurate, context-driven responses to user queries. The chatbot’s performance was evaluated on two Brazilian certification platforms, demonstrating its potential to streamline certification requests, reduce errors, and enhance user experience.
Download
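
For orientation, the retrieval-augmented flow the abstract describes typically looks like the sketch below: embed the domain documents, retrieve the chunks most similar to the query, and hand them to an LLM as context. The model name, documents, and the final generate() call are placeholders, not the PDSA-RS system.

    import numpy as np
    from sentence_transformers import SentenceTransformer

    docs = ["Certification type A requires a veterinary inspection report.",
            "Vaccination records must be uploaded before submission."]
    embedder = SentenceTransformer("all-MiniLM-L6-v2")
    doc_vecs = embedder.encode(docs, normalize_embeddings=True)

    def retrieve(query, k=1):
        q = embedder.encode([query], normalize_embeddings=True)[0]
        scores = doc_vecs @ q  # cosine similarity, since vectors are normalized
        return [docs[i] for i in np.argsort(scores)[::-1][:k]]

    context = "\n".join(retrieve("What do I need for a type A certification?"))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: ..."
    # response = llm.generate(prompt)  # placeholder call to any LLM backend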

Area 5 - Human-Computer Interaction

Full Papers
Paper Nr: 31
Title:

From Legislation to Human Flourishing: Unveiling the Characteristics of Digital Well-Being by Taxonomy Development from an EU Perspective

Authors:

Katharina-Maria Illgen and Oliver Thomas

Abstract: With pervasive digitalization, human well-being is intimately connected with the condition of the information environment and the digital technologies that shape human interaction with it. With the increased exposure to technologies like Artificial Intelligence, concerns about well-being grow. However, there is no thorough understanding of the conditions necessary to enhance digital well-being, particularly from a legislative perspective. The European Union (EU) addresses this through various guidelines and regulations for a more trustworthy and human-centered approach. This study translates EU directives into practical, holistic advice via taxonomy development, helping practitioners assess their adherence to digital well-being characteristics and as a dynamic resource encouraging innovation and creation in promoting digital well-being goals. By advancing awareness and supporting human flourishing in the digital age, this research contributes to the ongoing Information Systems research discourse on critical challenges like human-technology symbiosis and well-being, especially in Human-Computer Interaction and Human-Centered AI research.
Download

Paper Nr: 40
Title:

Yearning for Love: Exploring the Interplay of Parasocial Romantic Attachment, Loneliness, and Purchase Behavior Within Dating Simulation Games

Authors:

Jeanette Buhleier, Benjamin Engelstätter and Omid Tafreschi

Abstract: Female-oriented dating simulation games (i.e., games centered around the romantic relationships between a female player and its game characters) have grown increasingly popular internationally and developed into a profitable business model. The genuine feelings of love players develop for these virtual characters (i.e., parasocial love), particularly the motifs behind such attachments, have garnered rapid curiosity. Applying the parasocial compensation hypothesis, this study conducted an online survey among female players of the free-to-play dating simulation game Mystic Messenger to explore romantic loneliness as a motivator for players’ parasocial love and its impact on players’ purchasing behavior. The correlation analysis revealed a weak negative relationship between romantic loneliness and parasocial love, indicating a complementary rather than compensatory function of such attachments. Further, while the strength of para-romantic feelings did not drive in-game spending, romantic loneliness was negatively associated with willingness to invest money. These findings suggest that other motivations drive real-money investments in romance-themed games, highlighting the complexity of player behavior in this context.
Download

Paper Nr: 91
Title:

Strategic Placement of Branding Elements in Digital Marketing: Insights from Eye-Tracking Data

Authors:

Mohamed Basel Almourad, Emad Bataineh, Mohammed Hussein and Zelal Wattar

Abstract: In today's media landscape, where consumers are overloaded with information and have shorter attention spans, digital marketers face significant difficulty in grabbing and holding customers' attention. This research examines how visual attention affects the processing of advertising stimuli. It does this by using eye-tracking technology to determine where branding components should be placed in digital ads to maximize processing efficiency and perceptual salience. The research shows that placing branding features strategically in the top central part of the advertisement can greatly increase visual attention and subsequent recall by analyzing fixation patterns and saccadic behavior. This result is consistent with well-known theories of visual attention, such as the zoom lens model, which holds that processing and memory can be enhanced by focused visual attention. The findings of the study provide marketers with important information on how to maximize the impact of their advertising campaigns by using the principles of visual attention to convey clear, powerful messages in a media landscape that is changing quickly.
Download
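
As a concrete illustration of this kind of area-of-interest (AOI) analysis, the sketch below counts fixations and dwell time landing in a top-central region of an ad; the coordinates and AOI boundaries are assumptions, not the study's setup.

    # Each fixation: (x, y, duration in ms) on a 1024x768 advertisement.
    fixations = [(512, 80, 210), (300, 600, 180), (540, 120, 350)]

    def in_top_center(x, y, width=1024, height=768):
        # "Top central" here means the upper quarter, middle third of the ad.
        return y <= height * 0.25 and width / 3 <= x <= 2 * width / 3

    hits = [d for x, y, d in fixations if in_top_center(x, y)]
    print(len(hits), sum(hits))  # fixation count and total dwell time in the AOI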

Paper Nr: 105
Title:

Dynamic Integration of 3D Augmented Reality Features with AI-Based Contextual and Personalized Overlays in Asset Management

Authors:

Kessel Okinga Koumou and Omowunmi E. Isafiade

Abstract: This study addresses the challenges of manual implementation of 3D models in AR and the scalability limitations of AR applications in asset management. It proposes a framework for the dynamic integration of 3D models into the AR environment, incorporating AI to enhance textual content and personalized user engagement. The study presents a system architecture comprising three layers: (i) The bottom layer, which handles the interactive capabilities of 3D models, including collision detection, mesh manipulation, dataset preparation, and model training; (ii) The middle layer, which facilitates communication between the web asset management platform and the mobile application developed; and (iii) The topmost layer, which focuses on user interaction with the 3D models via the web platform. To evaluate the framework, two 3D models (microscope and centrifuge) were used as case studies for dynamic integration. The AI component was trained using a dataset based on the microscope information obtained with web scraping. The model was trained using both Standard LSTM and BiLSTM architectures, with the dataset split into 60% for training, 20% for testing, and 20% for validation, over 50 epochs with a batch size of 64. The BiLSTM outperformed the Standard LSTM, achieving a test accuracy of 94.35% and a test loss of 0.51. This research is significant in revolutionizing asset management and promoting personalized content for quality education through technological innovation.
Download
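
To make the training setup tangible, here is a minimal Keras sketch of a BiLSTM classifier with the abstract's 60/20/20 split, 50 epochs, and batch size of 64. The random toy data stands in for the scraped microscope dataset, and the layer sizes are assumptions, not the paper's architecture.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from tensorflow.keras import layers, models

    X = np.random.randint(0, 1000, size=(500, 40))  # toy token sequences
    y = np.random.randint(0, 2, size=500)           # toy binary labels

    # 60% train, then split the remaining 40% evenly into test and validation.
    X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4,
                                                        random_state=0)
    X_test, X_val, y_test, y_val = train_test_split(X_rest, y_rest,
                                                    test_size=0.5, random_state=0)

    model = models.Sequential([
        layers.Embedding(1000, 32),
        layers.Bidirectional(layers.LSTM(64)),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    model.fit(X_train, y_train, validation_data=(X_val, y_val),
              epochs=50, batch_size=64)
    print(model.evaluate(X_test, y_test))  # [test loss, test accuracy]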

Paper Nr: 130
Title:

Emotions and Experiences on the Road: Unveiling UX in Automotive Infotainment Through YouTube Comments

Authors:

Lígia Teixeira, Yago Alencar, Lorena Bastos, Pollyana Rodrigues, Raquel Pignatelli da Silva and Adriana Lopes Damian

Abstract: Automotive technologies have been advancing, and infotainment systems have become a key component in the User Experience (UX). Given the complexity of these systems and the diversity of user preferences, consumer opinions are crucial to analyze satisfaction and overall experience. This paper presents an investigation of the UX of infotainment systems based on consumer opinions. We started our investigation on the YouTube platform, collecting comments with consumer opinions from review videos of several kinds of infotainment systems. We analyzed the comments with the support of sentiment analysis and UX dimensions to characterize user perceptions of these systems. We adopted a hybrid approach that combined Natural Language Processing support and human analysis. Our findings reveal that performance, connectivity, and functionality issues often result in negative perceptions, while intuitive interfaces and device integration foster positive experiences. This investigation points to research opportunities for the UX of infotainment systems, such as proposals to support the reduction of negative perceptions, including recommendations for the evolution of these systems.
Download

Paper Nr: 138
Title:

Enhancing Usability in Large Map Interactions: A Novel Magnifying Lenses Technique for Tabletops

Authors:

Fernando Pedrazzi Pozzer, Gustavo Machado de Freitas, Gabriel Fronza Schuster, Felipe Carvalho de Marques de Oliveira, Cesar Tadeu Pozzer and Lisandra Manzoni Fontoura

Abstract: Tabletops are large interactive displays that enable users to interact for collaborative analysis and planning on datasets. These devices allow communication using pointing and gestures and interactive zooming, searching, and modifying data. However, when interacting with large maps, the user must analyze details that can only be viewed by enlarging the map region. Furthermore, it is interesting to visualize the context (zoom out) and the details of a specific area (zoom in). Magnifying lenses are often used for this purpose, but these lenses have the disadvantage of losing context. In this paper, we present a new technique for interaction using magnifying lenses for tabletops to improve interaction usability in large maps. We analyzed several techniques discussed in the literature for magnifying lenses. We performed experimental implementations in the context of the SIS-ASTROS GMF Project. We validated the work through objective and subjective analyses, and the results demonstrated significant improvements in objective metrics, such as time and accuracy, as well as in subjective metrics, including frustration, effort, and mental demand. The NASA-TLX Questionnaire was used to evaluate the subjective metrics.
Download

Paper Nr: 169
Title:

Exploring Stakeholders’ Practical Needs for GDPR Compliance

Authors:

Ana Ferreira, Pedro Vieira-Marques and Rute Almeida

Abstract: In a time when various regulations and directives are enforced within the European cyberspace regarding cybersecurity and data protection, General Data Protection Regulation (GDPR) requirements are still far from being completely understood and integrated into the practice of processing individuals' personal and sensitive data. Having clear directions on what is needed to protect the privacy of personal data is essential, but even more so is the availability of tools and mechanisms that can provide easy, structured and, hopefully, more automated ways to implement those requirements in practice. After more than six years of GDPR enforcement, how aware, knowledgeable and prepared are people to comply with GDPR in their daily practice? Moreover, what still needs to be done to improve this process? This work presents the results of a survey aimed at collecting the perceptions, preferences and needs regarding interactive and assistive tools, together with their content, to support GDPR compliance in practice. Participants (n=62) from varied backgrounds and experiences agreed that such tools are much needed and can have a beneficial impact in terms of Privacy, Knowledge, Efficiency and Productivity, but also in terms of Safety. Results also show that stakeholders who frequently need to perform personal data processing often do not have the knowledge, experience or required support to put compliance procedures into practice within their context. Our study contributes to understanding what content and functionalities a GDPR compliance tool must include to support those stakeholders.
Download

Paper Nr: 186
Title:

Dark Patterns in Games: An Empirical Study of Their Harmfulness

Authors:

Emerson Veiga, Nabson Silva, Bruno Gadelha, Horácio Oliveira and Tayana Conte

Abstract: Dark patterns (DPs) are manipulative design strategies that exploit players’ cognitive biases, often at their expense. DP in games can negatively affect players’ experiences by coercing them into unwanted behaviors, often without informed consent. While previous research has categorized DPs and explored their impacts, an empirical evaluation of their perceived harmfulness remains unexplored. This study aims to create a catalog of DP and evaluate players’ perceptions of them to gather insights into how they are experienced and understood by players. We extracted DPs and their definitions from prior academic work, refining them with examples from community forums. To evaluate players’ perceptions, we developed a survey to assess each DP’s harmfulness, problematic nature, and prevalence. We surveyed 30 participants representing a range of gaming engagement levels. Statistical tests were conducted to compare harmfulness scores across different patterns, identifying significant differences among them. Additionally, qualitative analysis provided insights into players’ experiences and perceptions, highlighting key concerns regarding specific Dark Patterns. The results provide valuable insights into players’ perceptions of DPs and how they may be unaware of these patterns, aiming to raise awareness and reduce their use in game design.
Download

Paper Nr: 208
Title:

Pilot Study on the Effects of Gamification and Virtual Reality on the Shopping Experience

Authors:

Ruben Grande, Diego Cordero, David Vallejo, Carlos González, Santiago Schez-Sobrino, Jose Jesús Castro-Schez and Javier Albusac

Abstract: E-commerce has embraced emerging technologies such as Virtual Reality (VR) and Augmented Reality (AR), which are transforming consumer interaction. In particular, VR enables the creation of immersive environments that simulate and enhance physical experiences, offering advantages such as gamification of the shopping process, thereby boosting user engagement and interaction. This pilot study addresses two main questions: how does the exploration of digitalized products in VR influence purchase intention? And, does a gamified experience with a specific product have an additional impact? In collaboration with a local business, an experiment involving 48 participants was designed. A portion of the store, its surroundings, and several products were digitalized using advanced techniques such as Neural Radiance Fields (NeRF) and Gaussian Splatting, achieving realistic models integrated into a virtual store accessible through Meta Quest 3 headsets. One specific product, a comic book, was gamified, allowing users to interact with its narrative, solve challenges, and be incentivized to purchase the product to discover the ending. Preliminary results, including a conversion rate of 41.9%, suggest that VR, especially when it incorporates gamification, can increase purchase intention and interest in local products, highlighting its potential in digital commerce.
Download

Paper Nr: 305
Title:

Applying Checklist and Design Patterns for Evaluating and Redesigning a Dashboard Interface of a Decision Support Information System

Authors:

Kennedy Nunes, Arthur Passos, Matheus Menezes, Felipe Feyh, Carlos Moura, Lucas Pinheiro, Auriane Santos, Aristofanes Silva, João Dallyson, Italo Francyles and Luis Rivero

Abstract: Well-designed dashboards synthesize complex data, allowing users to quickly identify trends and patterns. To achieve their goals, these dashboards should be easy to use, improving the users’ ability to understand, interact with, and derive insights from the presented data. This paper highlights the importance of dashboards in supporting decision-making, emphasizing the crucial role of UX and usability in the effectiveness of these systems. The main goal of the paper is to propose quality attributes related to usability and user experience that can be incorporated during the development process of dashboards. Following a literature review on the quality attributes of dashboards, a checklist was developed to evaluate the usability aspects of these systems. The checklist facilitates the structured and easy identification of usability issues, even by inexperienced users, while being a robust evaluation tool built on validated quality attributes from prior literature. Also, an aggregated set of Design Patterns was identified and paired with the verification items of the checklist. Both the inspection checklist and the design patterns were applied for the evaluation and redesign of dashboards proposed within an information system for decision-making purposes at the Equatorial Energia Multinational Power Company. The results from this experience suggest the feasibility of considering these quality attributes for improving the ease of use of dashboards.
Download

Short Papers
Paper Nr: 27
Title:

Customer-Facing Social Robots in the Grocery Store: Experiences from a Field Trial

Authors:

Niklas Eriksson, Kristoffer Kuvaja Adolfsson, Christa Tigerstedt and Minna Stenius

Abstract: Customer-facing technology provided by retailers is becoming increasingly common in retail stores. In this study the focus is on customer-facing social robots (i.e. embodied robots that interact with humans) in a grocery store. Based on workshops, learning via making and a customer survey (n=39) during a field trial, this study explores potential roles for robots in a grocery store, how well the robots can perform the roles assigned to them, and customers’ perception of the robots. Seven main roles that social robots could take on in a grocery store were identified: store guide, sales promoter, shopping assistant, entertainer, store chef, product supervisor, and experience evaluator. The two robots that were field trialled performed their tasks reasonably well. The results from the customer survey confirm previous research that customers perceive social robots primarily positively. This study, however, also indicates that a notable share of the customers may find social robots unpleasant in a store setting. Limitations and further research are also discussed.
Download

Paper Nr: 85
Title:

A Framework for Agile and UX Integration in Healthcare Software Development: A Kanban Approach

Authors:

Emanuel P. Vicente, Wilamis K. N. da Silva, Geraldo T. G. Neto and Gustavo H. S. Alexandre

Abstract: The expansion of medical applications increasingly requires quality in their interfaces to mitigate usability problems and improve the patient experience. The development of healthcare applications presents challenges in the field of usability due to the risk of impact on patient safety. This article addressed these issues by integrating User-Centered Design and Agile Software Development (ASD), creating several Kanban boards to organize the flow of tasks between designers and developers. This study evaluated the integration of User-Centered Design (UCD) practices and the use of Kanban as an ASD method during the development of an electronic appointment scheduling subsystem in a Hospital Information System (SIS). Semi-structured interviews were carried out with 10 project members, covering fourteen questions about the challenges and opportunities of integrating UCD practices with the Kanban method in the healthcare context. Through qualitative analysis, we concluded that the integration of these approaches expanded software engineering knowledge in the areas of usability, UX, and agile development; in addition, it helped organize the flow of activities, improve the relationship between interaction designers and development teams, and reduce the number of usability problems in healthcare software products.
Download

Paper Nr: 107
Title:

Development and Preliminary Evaluation of a Technology for Assessing Hedonic Aspects of UX in Text-Based Chatbots

Authors:

Pamella A. de L. Mariano, Ana Paula Chaves and Natasha M. C. Valentim

Abstract: For text-based chatbots to achieve a desired level of quality, it is essential to evaluate their performance, particularly focusing on the hedonic aspects of User Experience (UX), which is a crucial quality attribute. A comprehensive evaluation must consider the specifics of the context being assessed. A Systematic Mapping Study (SMS) revealed that no existing UX evaluation technologies address the hedonic aspects of UX in text-based chatbots. The Guidelines to Assess Hedonic Aspects in Chatbots (GAHAC) were developed to address this gap. The guidelines were formulated by selecting and evaluating the hedonic aspects of UX and the evaluation technologies identified in the SMS. Relevant questions from these technologies were filtered and adapted to the context of text-based chatbots. GAHAC aims to provide a context-specific evaluation technology in the form of guidelines encompassing the hedonic aspects of UX. Its primary contribution is providing a structured and accessible method for evaluating hedonic aspects, which have been largely overlooked in UX studies of text-based chatbots. This enables developers and researchers to qualitatively identify opportunities to improve UX in chatbot interactions. A preliminary evaluation conducted with two Human-Computer Interaction experts led to refinements in the guidelines. By offering a dedicated UX evaluation technology for text-based chatbots, GAHAC contributes to improving the quality of such systems.
Download

Paper Nr: 153
Title:

Mixed Reality-Based Platform for Remote Support and Diagnosis in Primary Care: A Position Paper

Authors:

Francisco M. García, Mohamed Essalhi, Samuel Espejo, Santiago Sánchez-Sobrino, Javier A. Albusac and David Vallejo

Abstract: This position paper proposes the design of MRP-5G, a mixed reality-based platform for remote support and diagnosis in primary care that integrates 5G communication technologies and Artificial Intelligence. MRP-5G will facilitate real-time communication between primary care staff and medical specialists, providing functionalities such as real-time videoconferencing, virtual annotation, and intelligent session indexing for medical training purposes. The proposed architecture is modular and scalable. It includes functional layers for networked communication, integration of LLM-based chatbots, and secure data management. Our approach aims to provide low-latency, high-quality interactions and the integration of augmented 3D information into clinical workflows. MRP-5G will positively impact remote healthcare by improving clinical decision-making, enhancing medical education and addressing inequalities in access to healthcare in rural regions. In the next few years, we intend to address key healthcare challenges such as limited access to specialists in rural areas and the need for technological solutions that enable efficient, interactive, and equitable care. Our work, currently in the design and implementation phase of the first functional prototypes, aims to stimulate critical discussion and collaboration in the scientific community to refine and scale this innovative approach.
Download

Paper Nr: 157
Title:

VR-ADAPT: An Immersive Learning and Training Environment for Wheelchair Users with Recent Spinal Cord Injuries

Authors:

Javier Albusac, Diego Cordero, Mario Jiménez, Rubén Grande, Vanesa Herrera, Raquel Perales, M. Eugenia San Felix and Ana De Los Reyes

Abstract: Each year, thousands of people worldwide suffer injuries that limit their mobility, affecting not only their ability to walk but also, in many cases, the functionality of their upper limbs. These conditions represent a drastic life change for patients, who must undergo an initial process of learning and adaptation to their new circumstances. To address this challenge, we present VR-ADAPT, an innovative virtual reality-based platform designed to facilitate the transition to a more autonomous life. VR-ADAPT integrates an advanced simulator for learning to operate electric wheelchairs and digitized environments based on domestic and workplace settings. These environments are gamified through serious games, allowing users to practice and develop essential skills for confidently navigating their daily lives. Additionally, the platform includes a kinematic recording and analysis module that collects detailed data during exercises. This functionality provides clinical teams with a valuable tool for objectively evaluating patients’ progress, enhancing the personalization and effectiveness of therapies.
Download

Paper Nr: 192
Title:

Usability in Software for People with Disabilities: Systematic Mapping

Authors:

Luiz Felipe Cirqueira dos Santos, Edmir Queiroz, Igor Rafael Eloi dos Santos, Elisrenan Barbosa da Silva, Mariano Florencio Mendonça and Fabio Gomes Rocha

Abstract: This study presents a systematic mapping of usability analysis tools focused on accessibility, aiming to identify technologies, methods, and challenges related to improving inclusive interfaces. Tools such as DUXAIT-NG, Guideliner, and MUSE were analyzed, standing out for integrating automated evaluations and specific adaptations. However, they exhibited technical limitations in customization and application to different contexts and types of disabilities. The results demonstrated the positive impact of these tools on the development of accessible software while also highlighting research gaps, such as the lack of empirical studies and the absence of real-time dynamic analyses. Based on this analysis, the study contributes by organizing and systematizing knowledge on accessibility tools, identifying research gaps that emphasize the need for greater flexibility in solutions and validations, and suggesting technological and methodological advancements. It reinforces the importance of expanding research to other databases and developing more robust and dynamic tools.
Download

Paper Nr: 253
Title:

Unveiling the Expanding Landscape of Attention-Capture Damaging Patterns

Authors:

Tales Guarisa Gomes, António Correia, Jano de Souza and Daniel Schneider

Abstract: This paper aims to investigate, define, and classify a comprehensive set of Attention-Capture Damaging Patterns (ACDPs) in the context of social media apps and platforms. A new taxonomy is proposed to categorize ACDPs based on their mechanisms and psychological impacts on users. Building on the concept of “dark patterns” and examining how they contribute to social polarization, this study explores the intersection between digital interface design, digital well-being, and polarization. The paper analyzes several examples of ACDPs present in popular platforms such as Instagram, TikTok, WhatsApp, and Facebook, proposing a new categorization based on three main categories. In addition, it discusses alternative design strategies that promote healthier interactions on digital platforms, aiming to mitigate the negative effects of these patterns and promote a more balanced digital environment.
Download

Paper Nr: 260
Title:

StreamVis: An Analysis Platform for YouTube Live Chat Audience Interaction, Trends and Controversial Topics

Authors:

Gabriela B. Kurtz, Stéfano de P. Carraro, Carlos R. G. Teixeira, Leonardo D. Bandeira, Bernardo L. Müller, Roberto Tietzmann, Milene S. Silveira and Isabel H. Manssour

Abstract: This paper presents StreamVis, an easy-to-use platform that provides statistics and visual representations to analyze live chat data from YouTube. StreamVis uses Python and Google’s YouTube Data API for data gathering, combined with libraries such as NLTK for natural language processing, Pandas for data analysis, and Matplotlib for visualization. Its interactive dashboard facilitates real-time data visualization through frequency charts, word clouds, and sentiment analysis, providing deep insights into audience engagement patterns. A case study analyzing the NFL’s first game in Brazil, broadcast on Cazé TV, demonstrates how StreamVis reveals trends in audience interactions during critical moments, like game highlights and performances. StreamVis differs from previous tools in its user-friendly interface, enabling non-technical users (such as journalists and other media professionals) to perform complex data analysis over a large volume of content, helping them understand how live chat dynamics influence media consumption.
Download
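Since the abstract names the full stack (YouTube Data API, NLTK, Pandas, Matplotlib), a minimal sketch of the analysis leg may help. The `fetch_live_chat` stub below is hypothetical and stands in for the API's `liveChatMessages.list` call; the sentiment step uses NLTK's VADER analyzer, one plausible choice for this kind of pipeline.

```python
# Minimal sketch of a StreamVis-style chat analysis step (illustrative only).
# fetch_live_chat() is a hypothetical stand-in for the YouTube Data API call
# (youtube.liveChatMessages().list) the real platform would page through.
import pandas as pd
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

def fetch_live_chat():
    # Hypothetical stub returning chat messages as dictionaries.
    return [
        {"author": "fan1", "text": "What a touchdown! Amazing game"},
        {"author": "fan2", "text": "The stream keeps buffering, terrible"},
    ]

def analyze(messages):
    sia = SentimentIntensityAnalyzer()
    df = pd.DataFrame(messages)
    # Compound score in [-1, 1] summarizes each message's sentiment.
    df["sentiment"] = df["text"].map(lambda t: sia.polarity_scores(t)["compound"])
    return df

df = analyze(fetch_live_chat())
print(df[["author", "sentiment"]])
# Token frequencies, the raw material for frequency charts and word clouds.
print(df["text"].str.lower().str.split().explode().value_counts().head(10))
```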

Paper Nr: 287
Title:

Accident Prevention in Industry 4.0 Using Retrofit: A Proposal

Authors:

Paulo Henrique Mariano, Bruno Pinho Campos, Frederico Augusto Cardozo Diniz, Carlos Frederico Cavalcanti and Ricardo Augusto Rabelo Oliveira

Abstract: In this work, we present an Industry 4.0 retrofit solution to prevent accidents in industrial environments, specifically focusing on the operation of bandsaw machines. We examine a real-world scenario in which a company aims to enhance worker safety by implementing an integrated solution. The proposed solution involves a pattern recognition system that monitors the work area and sends commands to stop the machine in case of dangerous movements near the bandsaw. The system adheres to Industry 4.0 principles, demonstrating how this methodology can create a safer industrial environment by connecting information technology (IT) and operational technology (OT).
Download
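As a rough illustration (not the authors' system), camera-based danger-zone monitoring of this kind is often built from background subtraction over a region of interest. `DANGER_ZONE`, `MOTION_THRESHOLD`, and `stop_machine()` below are hypothetical placeholders for the vision setup and the OT-side command.

```python
# Illustrative sketch of camera-based danger-zone monitoring with OpenCV.
import cv2

DANGER_ZONE = (200, 100, 150, 150)   # hypothetical x, y, w, h near the blade
MOTION_THRESHOLD = 500               # foreground pixels that count as movement

def stop_machine():
    # Hypothetical hook for the command sent to the bandsaw controller.
    print("STOP command sent to machine controller")

cap = cv2.VideoCapture(0)
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    x, y, w, h = DANGER_ZONE
    roi = frame[y:y + h, x:x + w]
    mask = subtractor.apply(roi)          # foreground mask = moving pixels
    if cv2.countNonZero(mask) > MOTION_THRESHOLD:
        stop_machine()
        break
cap.release()
```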

Paper Nr: 295
Title:

PriPoCoG: Empowering End-Users’ Data Protection Decisions

Authors:

Jens Leicht, Julien Lukasewycz and Maritta Heisel

Abstract: The General Data Protection Regulation (GDPR) requires data controllers to provide transparent information about data processing to data subjects. This information is mostly provided in the form of textual privacy policies. These policies have many disadvantages, such as their inconsistent structure and terminology, their large scope, and their high complexity. For this reason, data subjects are likely to accept an agreement even if they do not fully agree with the data processing contained in it; this phenomenon is known as the privacy paradox. To overcome these disadvantages, we propose a user interface based on the results of a thorough literature review and a group interview. By not relying on a completely textual approach, we reduce the mental effort required from data subjects and increase transparency. We utilize the Prolog - Layered Privacy Language (P-LPL), which allows data subjects to customize privacy policies. Our work extends the compliance checks of P-LPL with compatibility checks for customized privacy policies. The proposed interface provides graphical representations of privacy policies, aligning with different mental models of data subjects. We provide a prototype to demonstrate the proposed theoretical concepts.
Download

Paper Nr: 296
Title:

Development of a Solution for Identifying Moral Harassment in Ubiquitous Conversational Data

Authors:

Gabriel Valentim, João Carlos D. Lima, Fernando Barbosa, João Víctor B. Marques and Fabrício André Rubin

Abstract: This study presents the development and evaluation of a moral harassment detection system focusing on mobile and pervasive computing, leveraging artificial intelligence, textual similarity analysis, and ubiquitous data generated from recorded audio. Implemented as a mobile application, the system allows users to record audio and identify inappropriate behaviors using models like Mistral AI and Cohere, while integrating a collaborative database that evolves with user contributions. Tests conducted ranged from simple phrases to complex dialogues and colloquial expressions, demonstrating the hybrid approach’s effectiveness in capturing cultural and linguistic nuances. By combining advanced technologies and user participation, the system adaptively identifies moral harassment, enhancing detection accuracy and continuous learning. This work underscores the potential of mobile devices and pervasive systems to monitor daily interactions in real-time, contributing to moral harassment prevention, fostering ethical environments, and advancing the innovative use of ubiquitous data for social well-being.
Download
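A minimal sketch of the textual-similarity leg of such a hybrid detector follows, assuming a small collaborative database of confirmed phrases. The LLM calls to Mistral and Cohere that the abstract mentions are omitted, and TF-IDF cosine similarity stands in for the embeddings a production system would likely use.

```python
# Illustrative similarity check against a collaborative phrase database.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical database of previously confirmed harassment phrases.
known_harassment = [
    "you are useless and will never amount to anything here",
    "nobody on this team wants you around",
]

def similarity_flag(utterance, threshold=0.35):
    # Fit on the database plus the new utterance so vocabularies align.
    vectorizer = TfidfVectorizer().fit(known_harassment + [utterance])
    vectors = vectorizer.transform(known_harassment + [utterance])
    # Compare the utterance (last row) against every known phrase.
    scores = cosine_similarity(vectors[-1], vectors[:-1])
    return bool(scores.max() >= threshold)

print(similarity_flag("nobody here wants you around anymore"))
```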

Paper Nr: 317
Title:

Towards a VR-BCI Based System to Evaluate the Effectiveness of Immersive Technologies in Industry

Authors:

Mateus Nazario Coelho, João Victor Jardim, Mateus Coelho Silva, Flávia Silvas and Saul Delabrida

Abstract: Industry 4.0 demands of its operators knowledge and mastery of modern technologies, such as the Internet of Things and Virtual Reality, as these offer the Operator 4.0 intelligent tools to improve daily operations and practices. Recent research shows promising results for immersive technologies, which provide a safe and effective way to represent hazardous environments that are often difficult to replicate in the real world. Nevertheless, there is a gap in research on behavioral changes in users of these technologies, on evaluating the effectiveness of industrial processes and training, and on the challenges of implementation in current industry. This work addresses these questions by combining modern technologies such as VR, BCI, Eye Tracking, and xAPI to capture the user’s behavior and physiological data inside a Virtual Environment, examining the perspectives of attention and fatigue; a future user test will validate the approach and support reflection on the effectiveness of using virtual reality in industry.
Download

Paper Nr: 81
Title:

Jetson’s View: Designing Trustworthy Air Taxi Systems

Authors:

Isadora Ferrão, José Cezar de Souza Filho, Káthia Oliveira, David Espes, Catherine Dezan, Mohand Hamadouche, Rafik Belloum, Bruna Cunha and Kalinka Branco

Abstract: As the world’s population grows and urbanization accelerates, the need for sustainable urban mobility solutions becomes increasingly important. Smart cities, considered the answer to urban challenges, are ready to integrate innovative modes of transport, such as electric vertical take-off and landing vehicles (eVTOLs), into their fabric. This paper discusses challenges and opportunities presented by eVTOLs, with a particular focus on safety, security, and user experience. Based on a resilient architecture proposed by STRAUSS, which integrates safety and fault tolerance measures, we characterize a framework for safe and reliable eVTOL operations in smart cities. In addition, we investigate the design of user interfaces for autonomous eVTOL systems by employing personas and use case scenarios. An interface prototype illustrates the adaptability and functionality of the interface in real-life scenarios, meeting the diverse needs of users and promoting trust in future urban transport systems. Through this interdisciplinary approach, this research aspires to advance the adoption of eVTOLs and enhance urban mobility at the dawn of the smart city’s future.
Download

Paper Nr: 89
Title:

Remote Emotional Interactions via AI-Enhanced Brain-to-Body Neurophysiological Interface

Authors:

Geovanna Evelyn Espinoza Taype, Maria Cecília Calani Baranauskas and Julio Cesar Dos Reis

Abstract: The rapid growth of Artificial Intelligence (AI) has led to the emergence of Human-AI Interaction. This area explores how humans and AI systems can effectively collaborate and communicate. Recent studies have shown that traditional approaches might not be adequate to capture issues arising from the combination of methods of these disciplines. A recent approach emerging in human-computer interaction (HCI), the socioenactive approach, represents a new possibility for capturing aspects in the confluence of AI and HCI due to its focus on the social-physical-digital coupling. Socioenactivity studies consider the brain, body, senses, perception, cognition, sensorimotor processes, and emotions in interactions with people, physical objects, and computational systems. This study investigates and develops a socioenactive system empowered with AI that is designed to foster and enhance socio-emotional interactions between participants who are connected remotely. Our solution has the potential to significantly impact the field of Human-AI Interaction by providing a deeper understanding of the interaction and coupling between humans and AI through the socioenactive system. The socioenactive scenario involves a socioenactive system based on BCI (Brain Computer Interface) composed of several components: a mind wave device, smartwatch, parrot robot, and Aquarela Virtual system (which involves physical QR toys). These components are connected to share data remotely. The mind wave device and smartwatch collect neurophysiological information, and AI algorithms process this data to recognize emotions evoked by the parrot robot and the Aquarela Virtual. The AI component uses a machine learning technique to recognize emotions in brain wave (EEG) data, and our solution explores tree-based algorithms to recognize emotions in heart rate (ECG) data. Our evaluation, conducted in a workshop with participants of different nationalities and ages, demonstrates that the socioenactive system with embedded AI is a key driver of socio-emotional interactions. The system’s ability to interpret and utilize neurophysiological information to facilitate dynamic coupling between humans and technological processes might significantly advance Human-AI Interaction.
Download
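For the machine learning step, a hedged sketch of tree-based emotion classification over featurized signals is shown below; the band-power features and labels are synthetic stand-ins for the EEG/ECG streams the paper actually collects.

```python
# Illustrative tree-based emotion recognition over featurized signals
# (synthetic data; the real system uses EEG from a mind wave device and
# heart rate from a smartwatch).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Hypothetical features: mean band powers (alpha, beta, theta) per window.
X = rng.normal(size=(200, 3))
# Toy labels: 1 = "aroused", 0 = "calm", derived from the features.
y = (X[:, 1] - X[:, 0] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print(f"toy accuracy: {clf.score(X_te, y_te):.2f}")
```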

Paper Nr: 129
Title:

Challenges and Approaches to Enhance Usability in Healthcare Applications: A Systematic Literature Review

Authors:

Emanuel P. Vicente, Wilamis K. N. da Silva, Geraldo T. G. Neto and Gustavo H. S. Alexandre

Abstract: The expansion of medical-hospital applications requires increasingly high quality in their interfaces so that usability problems do not encourage errors and adverse events that could impact patient safety. The ecosystem of healthcare applications faces challenges in the field of usability due to the dynamic medical environment in which these systems are designed, developed and operated, as they are affected by social and technological factors and by compliance with legislation. Through a systematic literature review, this work identifies and maps the challenges software engineering has faced in guaranteeing the usability of applications in the healthcare context. The review followed a research protocol to identify relevant studies in the IEEE Xplore, ACM, Scopus, Science Direct and PubMed databases, applying established practices of search, critical appraisal, data extraction and synthesis, and yielded 43 relevant articles. From these, we mapped the main challenges found in the literature when applying existing approaches to increase the usability of healthcare applications and reduce the impact of violations that could result in adverse health events for patients, as well as the factors that contribute to these challenges. Furthermore, this work aims to encourage the use of approaches that guarantee better usability for the applications of this ecosystem and consequently reduce the risk of adverse events to patients’ health.
Download

Paper Nr: 165
Title:

Development of a Context-Free Data Ingestion Mechanism for AutoML

Authors:

Gabriel Mac’Hamilton and Alexandre M. A. Maciel

Abstract: Automated Machine Learning (AutoML) is a technology that simplifies complex data processing and analysis for strategic decision-making by automating machine learning tasks and enhancing the user experience. Data ingestion is a crucial AutoML step that involves collecting external data for machine learning workflows. Typically, AutoML systems include data input modules; however, the lack of a user interface limits the number of users who can utilize them. This work presents the development of a data ingestion mechanism that streamlines and simplifies this machine learning stage within an AutoML framework called FMD. The mechanism underwent three validations: experimentation in a real-world scenario with two databases from different contexts, evaluation based on expert opinions, and usability assessment through a questionnaire using the AttrakDiff method. Following the validations, successful results were achieved in all assessments, demonstrating ingestion in various contexts.
Download
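A minimal sketch of what context-free ingestion can look like, assuming dispatch on file extension; the FMD mechanism itself is richer and includes a user interface, so this only shows the general pattern.

```python
# Sketch of a format-agnostic ingestion entry point (illustrative only).
from pathlib import Path
import pandas as pd

READERS = {
    ".csv": pd.read_csv,
    ".json": pd.read_json,
    ".parquet": pd.read_parquet,
    ".xlsx": pd.read_excel,
}

def ingest(path):
    """Load any supported source into a DataFrame, regardless of context."""
    suffix = Path(path).suffix.lower()
    try:
        reader = READERS[suffix]
    except KeyError:
        raise ValueError(f"unsupported source format: {suffix}") from None
    return reader(path)

# df = ingest("sales_2024.csv")  # hypothetical source file
```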

Paper Nr: 180
Title:

Rethinking Usability Assessment: Integrating UX and Information Architecture

Authors:

Jo Santos, Márcia Lima and Tayana Conte

Abstract: Evaluating software usability in the early stages of development is important, thus avoiding costs associated with future changes and dissatisfied users. Current usability inspection techniques may be limited in scope as they do not embrace concepts such as User Experience (UX) and Information Architecture (IA). This paper presents a new set of heuristics based on Garrett’s elements of UX to be used in usability inspections, aiming to create systems that prioritize UX and IA aspects as an alternative to existing heuristic sets. The set was developed based on Garrett’s planes of UX, resulting in the creation of 14 heuristics organized according to each UX plane. We conducted an empirical study to analyze the technique’s feasibility. The results indicate that the new heuristics set can detect a reasonable amount of defects within an appropriate time frame. Additionally, we received feedback on the heuristics themselves, allowing for slight modifications. Finally, the paper concludes by discussing future directions for the new heuristics set.
Download

Paper Nr: 227
Title:

Method for Evaluating the Quality of Serious Games in Medical Education

Authors:

Francisco Anderson Mariano da Silva, Wellington Candeia de Araújo, Thiago Prado de Campos and Tiago Silva da Silva

Abstract: The integration of serious games into medical education, particularly in surgical training, has proven to be a promising approach for enhancing skills and knowledge acquisition. This study introduces the MAQJSEM (Method for Evaluating the Quality of Serious Games in Medical Education), a comprehensive evaluation method designed to address critical dimensions such as motivation, user experience, usability, and knowledge acquisition. The development process involved a systematic literature review, followed by validation through a pilot study and expert evaluations. MAQJSEM was applied to a mobile application focusing on surgical training, and its evaluation revealed the method’s robustness and practicality in assessing serious games within this context. Notable findings include the importance of incorporating emotional and immersive elements, as well as clear instructions and intuitive usability features. Expert feedback led to the refinement of dimensions and items, enhancing the clarity and relevance of the method. MAQJSEM contributes significantly by offering a validated and adaptable tool for improving serious games in medical education. The method supports developers and educators in creating engaging and pedagogically effective tools, fostering skill development and preparation for real-world challenges in the medical field.
Download

Paper Nr: 231
Title:

UX4ALL: A Repository of User Experience Evaluation Methods

Authors:

Ana Clarissa Barroso Beleza, Suzan Evellyn Nascimento da Mota, Márcia Sampaio Lima and Tayana Uchôa Conte

Abstract: The evolution of the User Experience (UX) area is crucial for the success of any design or system development process. Although numerous UX evaluation methods exist, understanding and utilizing these methods can be challenging for interested parties. In this context, the Experience Research Society (EXPRESSO) platform aims to support the comprehension of UX by providing data on over 80 UX evaluation methods. However, the platform’s content has limitations that hinder understanding and application. This research therefore proposes a new repository model for UX evaluation methods called UX4ALL, with the intent of democratizing access to UX evaluation knowledge. We developed UX4ALL using data collected from the EXPRESSO platform, which underwent analysis, selection, enrichment, and classification before being included in the UX4ALL prototype. Furthermore, we used the System Usability Scale (SUS) method to evaluate the prototype, and a UX expert assessed UX4ALL. We used the results to support the development of the second version of UX4ALL. The main contributions of this research are: (1) the democratization of understanding of UX evaluation practices; and (2) the creation of an easy-to-use repository prototype for UX evaluation methods, named UX4ALL. Future studies aim to evolve the prototype into a final product, making it accessible to all interested parties and contributing to the popularization of UX practices.
Download

Paper Nr: 262
Title:

Evaluating Performance and Acceptance of the UUXE-ToH Questionnaire for Touchable Holographic Solutions

Authors:

Thiago Prado de Campos, Saul Delabrida, Eduardo Filgueiras Damasceno and Natasha M. C. Valentim

Abstract: Touchable Holographic Solutions (THS) enable natural hand interactions with virtual objects in augmented and mixed reality environments, presenting unique challenges for usability and user experience (UX) evaluation. Traditional tools, such as the System Usability Scale (SUS) and User Experience Questionnaire (UEQ), do not adequately address critical aspects of THS, including immersion and presence. The UUXE-ToH questionnaire was developed to bridge this gap, integrating usability and UX dimensions into a single instrument tailored to THS contexts. This paper presents the results of a performance and acceptance study conducted during a workshop at a conference on Human-Computer Interaction (HCI). The study compared UUXE-ToH v4 with a combination of established instruments, using the Cubism game as a case study on Meta Quest 2 and Meta Quest 3 devices. Fourteen participants evaluated the game using one of the two approaches, providing feedback on effectiveness, efficiency, and technology acceptance. Results show that UUXE-ToH v4 enabled the identification of a greater number of unique usability and UX issues and scored higher in ease of use and future intention to use compared to the combined instruments. These findings highlight the robustness and applicability of UUXE-ToH v4 in evaluating THS, offering significant insights for improving evaluation methodologies and the design of interactive holographic solutions.
Download

Paper Nr: 288
Title:

Mapping Open Design and Participation in Smart City Solutions: A Systematic Literature Review

Authors:

Flávio Henrique Alves, Maria Cecilia Calani Baranauskas and Alexandre L’Erario

Abstract: This article presents a systematic review on the adoption of open design practices in smart cities, considering how Information and Communication Technologies (ICT) enhance sustainable and inclusive urban solutions. After applying the PRISMA protocol in databases such as ACM, IEEE, Springer and Scopus, 74 articles published from 2013 to 2024 were selected. The results of the analysis reveal that, although there have been advances in the application of IoT, in platforms for citizen engagement, and in environmental sensor technology, there are still gaps in the standardization of definitions of “smart city” and a lack of evaluation methods. There is a strong concentration of research in Europe and North America, suggesting the need to expand research to other continents and regions of the world. The analysis is conducted through the Semiotic Framework and contextual factors, showing that the acceptance of solutions depends on a balance between the social world, the digital world and the infrastructure offered. In conclusion, open design emerges as a promising strategy for the development of truly smart cities, demanding more multidisciplinary cooperation, robust evaluation methodologies and greater inclusion of diverse social contexts.
Download

Paper Nr: 297
Title:

On the Imperative of Interdisciplinarity in Defining Digital Exclusion?

Authors:

Sylvie Michel and Magalie Duarte

Abstract: Digital inclusion is a central concept in information systems (IS) management, in a context of social and environmental transitions and the emergence of technologies disruptive to society, such as artificial intelligence or blockchain. When digital inclusion is mobilised in the literature, the aim is mainly to provide solutions to digital inequalities (digital divide and literacy). However, situations of digital inclusion and exclusion can coexist. To assess the impact of digital technologies on society, we argue for the imperative of defining the complementary concept of “digital exclusion”, i.e. the social mechanisms that keep individuals from fully participating in a world structured by technological spheres. Our article proposes to anchor this definition in an interdisciplinary approach, drawing on philosophy and sociology, in order to envisage and operationalize future research on digital exclusion in IS.
Download

Area 6 - Enterprise Architecture

Full Papers
Paper Nr: 101
Title:

Utilizing ChatGPT as a Virtual Team Member in a Digital Transformation Consultancy Team

Authors:

Tim de Wolff and Sietse Overbeek

Abstract: This study aims to design and evaluate a method that leverages ChatGPT for efficiency improvement in digital transformation projects, specifically while designing target business architecture products. The main research question is stated as follows: ‘How can a large language model tool be utilized to support the development of target business architecture products?’ The resulting method, GenArch, enables utilization of ChatGPT throughout business architecture design processes. This method is validated by means of expert interviews and an experiment. The perceived ease of use, perceived usefulness, and intention to use of the method are analyzed to assess the perceived efficacy, which serves as an indicator for efficiency. The results show that GenArch possesses at least a moderately high level of perceived efficacy.
Download

Paper Nr: 149
Title:

DynaSchema: A Library to Support the Relational Data Schema Evolution for the Self-Adaptive Software Domain

Authors:

Gabriel Nagassaki Campos and Frank José Affonso

Abstract: The development of self-adaptive software (SaS) represents a significant challenge, as this type of software enables structural, behavioral, and context changes at runtime. Among the range of SaS, this paper focuses on a specific type of SaS that enables data schema evolution (DSE) at runtime. This type of SaS requires data storage while preserving the integrity between the logical model (i.e., SaS) and the data model (i.e., data schema). Regarding DSE, a solution must encompass not only the migration of the original data model to a new one but also the migration of data from the old schema to the new one without affecting the SaS regarding incompatibility and/or lack of data integrity. Although relevant to the SaS domain, DSE is a research topic that still needs further investigation to develop a comprehensive and robust solution. The objective of this paper is to contribute to this research topic by presenting DynaSchema, a library that enables the evolution of relational data schemas at runtime through a non-intrusive approach. To demonstrate the applicability of the DynaSchema library, a case study was conducted. The findings suggest that the library has the potential to make a significant and efficient contribution to the SaS domain.
Download
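Runtime relational schema evolution of this kind generally follows an expand-copy-swap pattern; the sqlite3 sketch below illustrates that generic pattern only and is not DynaSchema's code.

```python
# Illustrative expand-copy-swap migration preserving data integrity.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('ada'), ('alan')")

# 1. Expand: create the target schema side by side with the old one.
conn.execute(
    "CREATE TABLE users_v2 (id INTEGER PRIMARY KEY, name TEXT,"
    " email TEXT DEFAULT '')"
)
# 2. Copy: migrate existing rows, filling new columns with defaults.
conn.execute("INSERT INTO users_v2 (id, name) SELECT id, name FROM users")
# 3. Swap: retire the old table only after the copy is verified.
conn.execute("DROP TABLE users")
conn.execute("ALTER TABLE users_v2 RENAME TO users")
conn.commit()

print(conn.execute("SELECT id, name, email FROM users").fetchall())
```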

Paper Nr: 172
Title:

Business Process Design Support with Automated Interviews

Authors:

Danielle Silva de Castro, Marcelo Fantinato and Mateus Barcellos Costa

Abstract: Interviews are widely used to collect and organize decentralized, unstructured, and undocumented information, serving as a valuable tool for business process design and re-design. Conversely, they present several challenges, including high costs related to planning, preparation, and execution, as well as the need for strong engagement from interviewees. Additionally, interviewees often lack a clear and comprehensive understanding of the processes, which complicates their ability to fully articulate them. The advent of intelligent conversational tools presents new opportunities for process elicitation but also introduces challenges in managing the subjective and tacit nature of business process knowledge while ensuring the generation of consistent and high-quality models. Considering these issues, this paper discusses an automated approach for conducting business process modeling interviews. This approach is designed to capture process behavior using the so-called Situation-Based Modeling Notation (SBMN), which further enables the automated generation of alternative imperative business process models. To evaluate the proposed approach, a conversational agent prototype and a generator of possible solution models were developed. The experiments conducted demonstrate the ability to construct high-quality models according to the metrics of recall, precision, generalization, and simplicity. The results also indicate that the generated models maintain consistency with process constraints and uncover alternative models effectively.
Download

Paper Nr: 212
Title:

Outpacing the Competition: A Design Principle Framework for Comparative Digital Maturity Models

Authors:

Maximilian Breitruck

Abstract: Digital maturity models (DMMs) already have a long history of providing organizations with structured approaches for assessing and guiding their digital transformation initiatives. While descriptive and prescriptive DMMs have seen extensive development, comparatively few models focus on benchmarking digital maturity internally as well as externally across multiple organizations. Moreover, existing literature frequently highlights persistent shortcomings, including limited theoretical grounding, methodological inconsistencies, and inadequate empirical validation. This study addresses these gaps by synthesizing insights from a systematic literature review of 58 publications into a cohesive set of design principles for comparative DMMs. We differentiate between “usage design principles,” which adapt established descriptive and prescriptive DMM components to comparative contexts, and newly formulated principles developed specifically to accommodate implicit data sources and support ongoing benchmarking. The resulting framework provides researchers and practitioners with a foundation for designing, evaluating, and selecting comparative DMMs that are more conceptually robust, methodologically sound, and empirically viable. Ultimately, this work aims to enhance the overall maturity and applicability of comparative DMMs in advancing organizational digital transformation.
Download

Paper Nr: 224
Title:

Stakeholder Engagement in Enterprise Architecture: Enablers and Barriers in the Private Sector

Authors:

Maryam Alshehri, Rod Dilnutt, Sherah Kurnia and Abm Nayeem

Abstract: Enterprise architecture (EA) is a growing discipline involving IT and business perspectives in organizations. While EA involves various stakeholders across different levels, challenges persist, particularly in stakeholder engagement. Most literature focuses on government contexts, but this paper delves into EA in private organizations, specifically the financial industry. Through in-depth interviews, the study identifies 24 factors influencing engagement between EA stakeholders and architects, categorized into organizational goals, organizational structure, and EA users. The study provides a detailed analysis of these factors, offering insights into both the barriers and enablers of effective EA stakeholder engagement in the private sector. It offers several implications for research and practice.
Download

Paper Nr: 235
Title:

Unveiling Business Processes Control-Flow: Automated Extraction of Entities and Constraint Relations from Text

Authors:

Diogo de Santana Candido, Hilário Tomaz Alves de Oliveira and Mateus Barcellos Costa

Abstract: Business process models have increasingly been recognized as critical artifacts for organizations. However, process modeling, i.e., the act of creating accurate and meaningful models, remains a significant challenge. As a result, many processes continue to be informally described using natural language text, leading to ambiguities and hindering precise modeling. To address these issues, more formalized models are typically developed manually, a task that requires substantial time and effort. This study proposes a transcription approach that leverages Natural Language Processing (NLP) techniques for the preliminary extraction of entities and constraint relations. A dataset comprising 133 documents annotated with 5,395 expert labels was utilized to evaluate the effectiveness of the proposed method. The experiments focused on two primary tasks: Named Entity Recognition (NER) and relation classification. For NER, the BiLSTM-CRF model, enhanced with GloVe and Flair embeddings, delivered the best performance. In the relation classification task, the RoBERTa-Large model achieved superior results, particularly in managing complex dependencies. These findings highlight the potential of NLP techniques to automate and enhance business process modeling.
Download
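For readers unfamiliar with the winning NER setup, a sketch of a BiLSTM-CRF tagger with stacked GloVe and Flair embeddings follows. Flair's APIs shift between versions, so treat this as the shape of the workflow rather than a drop-in script; the `data/` corpus path and column format are hypothetical.

```python
# Sketch of a BiLSTM-CRF tagger with stacked embeddings (Flair-style).
from flair.datasets import ColumnCorpus
from flair.embeddings import FlairEmbeddings, StackedEmbeddings, WordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# Hypothetical CoNLL-style corpus of process descriptions annotated with
# entities such as activities and actors.
corpus = ColumnCorpus("data/", {0: "text", 1: "ner"})
tag_dictionary = corpus.make_tag_dictionary(tag_type="ner")

embeddings = StackedEmbeddings([
    WordEmbeddings("glove"),            # static GloVe vectors
    FlairEmbeddings("news-forward"),    # contextual character-level LM
    FlairEmbeddings("news-backward"),
])
tagger = SequenceTagger(
    hidden_size=256,
    embeddings=embeddings,
    tag_dictionary=tag_dictionary,
    tag_type="ner",
    use_crf=True,                       # CRF decoding layer on the BiLSTM
)
ModelTrainer(tagger, corpus).train("ner-model/", max_epochs=10)
```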

Short Papers
Paper Nr: 11
Title:

Exploring State Chief Information Officers Involvement in Information Technology Strategic Planning for Remote Collaboration

Authors:

Shawn Na, Darlene Russ-Eft, Linda Naimi, Scott Hutcheson and Omar Diaz

Abstract: State Chief Information Officers (CIOs) have a vital role in information technology (IT) organizations; this role leads and sponsors information system (IS) programs, ensures operations, and provides technologies and digital capabilities for their organizations. Previous studies (Eiras, 2010; Haffke et al., 2016; Mitchell, 2015; Muller, 2011; Roberts et al., 2014) have discussed CIOs’ effectiveness in organizational management, the skillset and credentials for the role, and the responsibilities involved in leading the IT organization. Compliance with Presidential Executive Orders 13571 and 13576 requires the federal government to undertake appropriate steps to streamline and improve digital services and to deliver an efficient, effective, and accountable federal government. At the state level, the CIO position is established in each of the 50 U.S. states and is tasked with overseeing and managing the state’s information technology (IT) and information systems (IS). Investigating CIOs’ involvement in dealing with IT initiatives in their organizations can identify practices leading to successful implementations (Porfírio et al., 2021). This research sought to contribute to the body of knowledge by highlighting State CIOs’ involvement in IT strategic planning.
Download

Paper Nr: 22
Title:

Extending BPMN to Enable the Pre-Modelling of Flexibility for the Control Flow of Business Processes

Authors:

Thomas Bauer

Abstract: Pre-modelling already known flexibility requirements of business processes (BP) at build-time has the advantage that the resulting run-time deviations can be reviewed and approved. Furthermore, this results in less effort for the end users compared to completely dynamic changes at run-time. Corresponding concepts have been developed in previous work. In this paper, we present an extension of the BPMN standard that allows corresponding BPs to be modelled with existing BP modelling tools. Scientific literature is analysed in order to identify suitable methods for BPMN extensions; these methods are then used to develop a BPMN extension that allows the creation of BP models containing the mentioned pre-modelled flexibility aspects.
Download

Paper Nr: 34
Title:

Personalized Task Reassignment in Industry 5.0: A MILP-Based Solution Approach

Authors:

Claudia Diamantini, Ornella Pisacane, Domenico Potena and Emanuele Storti

Abstract: Industry 5.0 involves a transformation towards human-centric and green-aware industrial ecosystems. The sustainable, safe and efficient allocation of process activities to workers is crucial in this context, as excessive workloads can have detrimental effects on workers, potentially causing long-term harm and reducing overall productivity. This paper addresses the problem of reassigning activities to workers, balancing efficiency and sustainability through a flexible and periodic negotiation process in which workers can refuse assigned activities if these exceed a sustainable stress level, which is monitored through wearable devices. We model the problem through Mixed Integer Linear Programming (MILP) with a hierarchical objective function, aimed first at maximizing the number of assignments and then at minimizing the cost due to reassignments, levels of stress and possible overtime. As experiments show, the solution time of our MILP model makes dynamic negotiation feasible in realistic settings.
Download
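The hierarchical objective can be read as a two-stage (lexicographic) MILP. The toy PuLP sketch below shows that structure with hypothetical workers, activities, costs, and capacities, and omits the paper's stress and overtime terms.

```python
# Toy lexicographic assignment MILP: stage 1 maximizes accepted assignments,
# stage 2 minimizes cost while keeping that optimum (illustrative only).
from pulp import (LpBinary, LpMaximize, LpMinimize, LpProblem,
                  LpVariable, lpSum, value)

workers, activities = ["w1", "w2"], ["a1", "a2", "a3"]
cost = {("w1", "a1"): 2, ("w1", "a2"): 4, ("w1", "a3"): 3,
        ("w2", "a1"): 3, ("w2", "a2"): 1, ("w2", "a3"): 5}
capacity = {"w1": 2, "w2": 1}   # hypothetical sustainable workload limits

x = LpVariable.dicts("x", cost, cat=LpBinary)

def base_model(sense):
    prob = LpProblem("reassign", sense)
    for a in activities:                      # each activity: at most one worker
        prob += lpSum(x[w, a] for w in workers) <= 1
    for w in workers:                         # respect each worker's capacity
        prob += lpSum(x[w, a] for a in activities) <= capacity[w]
    return prob

# Stage 1: maximize the number of assignments.
p1 = base_model(LpMaximize)
p1 += lpSum(x.values())
p1.solve()
best = value(p1.objective)

# Stage 2: fix that count, then minimize total cost.
p2 = base_model(LpMinimize)
p2 += lpSum(cost[k] * x[k] for k in cost)
p2 += lpSum(x.values()) == best
p2.solve()
print([k for k, v in x.items() if v.value() == 1])
```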

Paper Nr: 42
Title:

Implementing IT Enterprise Architecture to Improve the Provision of IT Resources for Public Sector

Authors:

Karoll Haüssler Carneiro Ramos, Andressa de Souza Cardozo, Tiago Ianuck Chaves, Bruno de Jesus Viana and Jackson Pertusatti

Abstract: This research explores the implementation of IT Enterprise Architecture (ITEA) within a Brazilian public administration office, focusing on enhancing the provision of IT resources and services. Utilising the Action Research methodology, the study developed and implemented an ITEA framework and platform tailored to the specific needs of an IT organisation managing shared services across multiple federated agencies. The findings highlight the potential of ITEA to systematically integrate and optimise organisational processes, improve collaboration, and enhance information flow. Despite the benefits, challenges such as the traceability of business and IT alignment and a significant skills gap among IT professionals were identified. The study underscores the need for competency development programmes and a review of recruitment and training policies to ensure the sustainability and effectiveness of EA practices in the public sector.
Download

Paper Nr: 74
Title:

Behaviour and Execution Semantics of Extended Sequence Edges in Business Processes

Authors:

Thomas Bauer

Abstract: In business processes (BP), activities are usually considered atomic units. This results in unnecessary restrictions, e.g. when modelling sequences of activities. Here, flexibility can be increased by allowing a sequence edge to refer arbitrarily to the start and end events of its source and target activities. This permits additional execution orders at the runtime of the BP, i.e. the end users have more flexibility during BP execution. Nevertheless, we respect all modelled control flow conditions, as well as time constraints defined between activities (e.g. minimum time intervals). A process engine requires a formal execution semantics to be able to control such a BP automatically. Therefore, in this paper, we develop corresponding execution rules. Furthermore, we present measures that enable the process engine to delay and to speed up the start and the completion of activities in order to respect the modelled time constraints.
Download

Paper Nr: 94
Title:

Benchmarking Efficiency in Mediterranean Ports: A DEA-Based Analysis of Connectivity and Operational Performance

Authors:

Chariton Tsakalidis, Eirini Liani, George Tsakalidis, Kostas Vergidis and Michael Madas

Abstract: This study investigates the operational performance of major Mediterranean ports through a tailored Data Envelopment Analysis (DEA) framework. Recognizing the underrepresentation of these ports in existing benchmarking studies, this research emphasizes both connectivity and efficiency. Utilizing advanced DEA methodologies—Constant Returns to Scale (CCR), Variable Returns to Scale (BCC) and Window Analysis—the study evaluates efficiency trends over time, providing actionable insights for enhancement. Key input variables such as terminal size, berth length and equipment count are analyzed alongside output metrics like annual container throughput to ensure a comprehensive assessment of port performance. The findings reveal significant efficiency disparities among Mediterranean ports, with transshipment hubs like Tanger Med and Piraeus achieving optimal efficiency scores due to strategic investments and infrastructure upgrades. Conversely, many ports operate below optimal levels, indicating opportunities for technical and managerial improvements. This research contributes substantially to the field by introducing a novel benchmarking framework tailored to the unique geopolitical dynamics of the Mediterranean region. It highlights the critical role of connectivity, infrastructure and technology in driving efficiency while offering a valuable foundation for policymakers and port authorities to implement targeted strategies that enhance competitiveness and foster sustainable growth.
Download
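The CCR model behind such efficiency scores is a small linear program solved once per port. A sketch with scipy's linprog follows; the input/output numbers (berth length, equipment count, throughput) are made up for illustration, not the paper's dataset.

```python
# Input-oriented CCR (envelopment form) efficiency per port, via linprog:
#   min theta  s.t.  X @ lam <= theta * x_o,  Y @ lam >= y_o,  lam >= 0.
import numpy as np
from scipy.optimize import linprog

X = np.array([[50, 120, 30],            # input 1 per port (e.g. berth length)
              [800, 1500, 600]], float) # input 2 per port (e.g. equipment)
Y = np.array([[400, 900, 250]], float)  # output per port (e.g. kTEU throughput)

def ccr_efficiency(o):
    n = X.shape[1]
    # Decision vector z = [theta, lambda_1 .. lambda_n]; minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    # Inputs:  sum_j lambda_j x_ij - theta * x_io <= 0
    A_in = np.c_[-X[:, o], X]
    # Outputs: -sum_j lambda_j y_rj <= -y_ro
    A_out = np.c_[np.zeros(Y.shape[0]), -Y]
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(X.shape[0]), -Y[:, o]],
                  bounds=[(0, None)] * (n + 1))
    return res.fun  # theta* in (0, 1]; 1 means the port is efficient

for port in range(X.shape[1]):
    print(f"port {port}: efficiency = {ccr_efficiency(port):.3f}")
```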

Paper Nr: 151
Title:

Critical Characteristics of Enterprise Architects Influencing Stakeholder Engagement Effectiveness

Authors:

Rod Dilnutt, Abm Nayeem, Maryam Alshehri, Sherah Kurnia and William Yeoh

Abstract: Enterprise architecture (EA) aims to enhance business performance through effective IT deployment. Aligning business strategy with IT requires artefacts for business operations and decision-making. Engagement between enterprise architects and stakeholders is crucial for success, yet the characteristics of successful architects have been understudied. This paper explores these characteristics using resource- and capacity-based theories. It seeks to identify traits that influence engagement effectiveness and presents a theoretical model. The study involved two phases: a literature review creating a descriptive model and an in-depth case study with 17 interviews to refine it. The research identifies 11 generic engagement factors and five potentially specific to the studied organization. The resulting model, focusing on the banking industry, is the first to highlight the traits of effective enterprise architects. Further empirical research is needed to validate and calibrate these factors across various contexts, industries, and economic environments.
Download

Paper Nr: 244
Title:

Proposal for Formalization Using Description Logic of Undesirable Models in Business Process Management

Authors:

Jean Elder Santana Araújo and Cleyton Mário de Oliveira Rodrigues

Abstract: This paper proposes a method to detect and correct errors in BPMN (Business Process Model and Notation) models, using ontologies and Description Logic reasoning to formalize and identify common errors related to the use of gateways, the elements that control flow and decisions in a process. The research highlights how the misuse of gateways can lead to inefficiencies and failures in process execution. Gateways serve as the experimental focus of the article, with a view to extending the application of the method to other elements of BPM modeling.
Download
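To give a flavor of the formalization style (this axiom is illustrative, not taken from the paper), an exclusive split whose outgoing edges all lack conditions could be declared undesirable, so that a standard reasoner flags every such gateway during ontology classification:

$\mathit{ExclusiveGateway} \sqcap (\geq 2\ \mathit{outgoing}.\mathit{SequenceFlow}) \sqcap \lnot \exists\, \mathit{outgoing}.\mathit{ConditionalFlow} \sqsubseteq \mathit{UndesirableGateway}$

Error detection then reduces to subsumption checking: any model element the reasoner classifies under the undesirable concept is reported for correction.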

Paper Nr: 281
Title:

The Impact of Innovation Management Systems on Firms’ Innovation Performance: The Mediating Role of Openness to Innovation

Authors:

Rita Giordano, Gian Marco Miele, Filippo Frangi, Antonio Ghezzi and Andrea Rangone

Abstract: This study examines the impact of Innovation Management Systems (IMS) maturity on companies' Innovation Performance, specifically emphasizing the ISO 56002 standard as a guiding framework. The present investigation explores the mediating role of Open Innovation (OI) in this relationship, investigating how openness to external collaboration affects the effectiveness of structured innovation processes. A Systematic Literature Review (SLR) identifies significant gaps, notably the scarcity of empirical evidence regarding the integration of IMS with OI techniques and their collective impact on performance outcomes. Empirical data were gathered via a survey of 139 medium-to-large Italian enterprises spanning several sectors. The study assesses organizations' IMS maturity, their openness to innovation, and the interaction between these factors in influencing Innovation Performance. Structural Equation Modeling (SEM) demonstrates that an established Innovation Management System (IMS) enhances Innovation Performance both directly and indirectly by promoting openness to external knowledge transfer and collaboration. The results enhance the current IMS literature by illustrating that a systematic approach to innovation management, in conjunction with Open Innovation methods, can yield exceptional innovation results. These findings provide practical guidance for managers and decision-makers aiming to improve their organizations' innovation capacities and attain durable competitive advantages in progressively interconnected markets.
Download

Paper Nr: 294
Title:

The Future of BPM in the Era of Industry 4.0: Exploring New Opportunities for Innovation

Authors:

Hadjer Khider, Abdelkrim Meziane and Slimane Hammoudi

Abstract: In today's digital age, the fourth industrial revolution has given rise to Industry 4.0. This new paradigm has brought new challenges for organizations through digital transformation. This digital transformation has profoundly impacted the way businesses operate, leading to a fundamental shift in Business Process Management (BPM) and affecting business models, processes, products, relationships and competencies. The transformation is based on the use of cyber-physical systems and information and communication technologies, in particular artificial intelligence and the Internet of Things. This paper aims to identify and define the main challenges, limitations, and opportunities of BPM in the era of Industry 4.0, and to identify potential future research directions, in addition to analyzing the impact of Industry 4.0 concepts and related technologies on the management of organizations and their business processes.
Download

Paper Nr: 311
Title:

Industry 4.0 Information Systems for Materials Circularity in Supply Chains: Industry Issues and Research Directions

Authors:

Soujanya Mantravadi and Brian Vejrum Wæhrens

Abstract: The purpose of this paper is to explore the role of information systems in manufacturing to support material circularity practices in the supply chain. The paper studies the usefulness of manufacturing operations management (MOM) systems, particularly manufacturing execution systems (MES), in enabling traceability and supply chain integration for tracking product material details. Theoretical propositions on MOM systems for materials circularity (based on a literature study) were empirically examined using needs assessments from two case companies with complex product material requirements. Based on the qualitative analysis of the propositions and empirical findings, the paper identifies traceability-enabled methods, supply chain integration, and the adoption of Industry 4.0 technologies as potential enablers for achieving materials circularity goals. On this basis, priorities were established for a research agenda on designing factories of the future and achieving an Industry 4.0 vision that supports the circular economy. Future research directions are put forward; future work will include an in-depth case study analysis to explore the role of Industry 4.0-compliant MOM systems in meeting evolving regulatory demands and operational scalability across the supply chain.
Download

Paper Nr: 21
Title:

Critical Success Factors for Enterprise Architecture: Survey, Taxonomy, and Solutions

Authors:

Peter Hillmann, Lovis Justin Immanuel Zenz and Andreas Karcher

Abstract: Enterprise architecture (EA) is a critical key competence in the organization, adaptation and improvement of companies. The objective of this study is to identify and analyze critical success factors associated with EA projects and EA management. In particular, the interrelationships of critical factors as key components were examined. We present the first taxonomy of key factors in EA, providing an overview of the major areas to be addressed. The assessment revealed five major challenges: communication problems, limited top management support, insufficient EA expertise, ineffective knowledge management and inadequate requirements management. Based on these findings, a comprehensive compilation of strategies was developed, encompassing preventative guidelines and reactive approaches, and providing practical recommendations for overcoming the identified obstacles. The measures were integrated along the project life cycle with reference to organizational processes, in particular with a focus on change management and controlling. The practical recommendations were tested for their effectiveness in expert interviews and a business game.
Download

Paper Nr: 109
Title:

Effort Estimation of Large-Scale Enterprise Application Mainframe Migration Projects: A Case Study

Authors:

Sascha Roth

Abstract: How do you migrate an enterprise application with decades-old legacy code running on an IBM Z-series mainframe? What options do you have, and how do you best estimate the efforts? In this paper, we present a model developed during a real-world case study of a migration endeavour for the worldwide warranty system at a major premium automotive manufacturer. We present a pragmatic approach to ballparking migration efforts that allows similar endeavours to be estimated in a similar fashion.
Download

Paper Nr: 145
Title:

Diagnosing BPM Governance: A Case Study of Facilitators, Barriers, and Governance Elements in a Hierarchical Public Institution

Authors:

Giovanni Correa, Jéssyka Vilela and Mariana Peixoto

Abstract: Context: BPM initiatives improve processes and adaptability, with governance as a key factor. Problem: BPM governance in hierarchical public organizations faces structural challenges. Objective: This study examines BPM governance in a Brazilian public institution. Method: A case study evaluated objectives, roles, decision-making, facilitators, and barriers. Results: We identified 8 facilitators and 8 barriers. Core areas showed more process maturity, while key challenges included lacking prioritization methodology and inconsistent performance indicators. Conclusions: This research expands BPM knowledge by analyzing governance in hierarchical institutions.
Download

Paper Nr: 146
Title:

Guidelines for the Application of Event Driven Architecture in Micro Services with High Volume of Data

Authors:

Marcus V. S. Silva, Luiz F. C. dos Santos, Michel S. Soares and Fabio Gomes Rocha

Abstract: Event-Driven Architecture (EDA) has proven itself as a transformative strategy within microservices, celebrated for its role in enabling scalable, responsive, and decoupled interactions among system components. This paper draws on insights from diverse domains such as e-commerce, healthcare, IoT, and data processing to showcase how EDA can revolutionize system agility and responsiveness, particularly under high communication loads and real-time processing demands. We underscore EDA’s effectiveness in optimizing critical processes like order management, payment processing, and real-time anomaly detection through 13 case studies. These enhancements not only boost operational efficiency but also foster more informed decision-making. Moreover, the burgeoning interest in applying EDA to complex systems that necessitate dynamic adaptation to environmental changes, such as climate risk management and intelligent manufacturing, underscores its potential. However, adopting EDA is not without its challenges, particularly in state management, consistency, and testing, which necessitate further exploration. This paper contributes to the discourse on EDA by reviewing its current state, challenges, and future directions, offering a comprehensive perspective on its role and potential in modern software architecture.
Download
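A toy in-process event bus conveys the decoupling the paper credits EDA with: the producer emits an event without knowing which consumer reacts. The services below are hypothetical; production systems would use a broker such as Kafka or RabbitMQ rather than an in-memory queue.

```python
# Minimal in-process event bus illustrating producer/consumer decoupling.
import asyncio

async def order_service(bus):
    # Producer: emits an event, unaware of who consumes it.
    await bus.put({"type": "OrderPlaced", "order_id": 42, "amount": 99.0})

async def payment_consumer(bus):
    # Consumer: reacts to events it cares about.
    event = await bus.get()
    if event["type"] == "OrderPlaced":
        print(f"charging order {event['order_id']} for {event['amount']}")

async def main():
    bus = asyncio.Queue()
    await asyncio.gather(order_service(bus), payment_consumer(bus))

asyncio.run(main())
```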

Paper Nr: 209
Title:

Digital Transformation Framework Inspired by Organisational Semiotics: An Analysis Based on a Chinese SOE Manufacturer

Authors:

Wenxuan Li, Qi Li and Yixuan Liu

Abstract: This study examines the process of digital transformation (DT) in a Chinese state-owned enterprise (SOE) using the Organisational Onion Model (OOM) alignment framework inspired by Organisational Semiotics (OS). The research explores how alignments among technical, formal, and informal layers contribute to successful DT. It proposes an OOM alignment model and applies it in the case of a Chinese SOE manufacturer, where strategic priorities initiated a top-down approach to adopting new digital systems and reengineering business processes. These changes subsequently influenced organisational culture and employee engagement. Key findings highlight the role of iterative adjustments and feedback loops in achieving alignment, emphasising the interplay between strategy, culture, technology, and process. Three propositions are proposed: alignments can occur at any stage of DT, can be led by different layers in both top-down and bottom-up directions, and are facilitated by digital champions. While the study primarily focuses on the initial stages of DT, future research is encouraged to explore complete DT journeys and identify additional elements in each organisational layer to deepen understanding of alignment dynamics and their impacts.
Download

Paper Nr: 213
Title:

Why Digital Maturity Models Fail: An Exploratory Interview Study Within the Digital Transformation Steering Process

Authors:

Maximilian Breitruck

Abstract: Digital Maturity Models (DMMs) are widely used tools to assess and guide organizational digital transformation (DT). However, their practical contribution to the transformation process often fails due to insufficient stakeholder involvement, inadequate adaptability, or unsuitable assessment tools. This study explores these shortcomings through a socio-technical lens, analyzing why DMMs fail to deliver value in transformation processes. Drawing on an exploratory interview study with experts from industry, eight key dimensions of failure were identified, such as misalignment with organizational strategies, cultural resistance, and inadequate iterative usage practices. These initial results reveal that, beyond the design of DMMs, systemic organizational and procedural barriers significantly hinder DMM utility. Building on this, a comprehensive framework of utility barriers and derived requirements for building and integrating DMMs should ultimately be developed.
Download

Paper Nr: 280
Title:

5G for the Future of Telecommunications: How Innovation Platforms Redefine the Mobile Network Operators' Role

Authors:

Lucrezia Mancini, Edoardo Meraviglia, Mattia Magnaghi, Antonio Ghezzi and Andrea Rangone

Abstract: The paper addresses the transition process that Mobile Network Operators (MNOs) can undertake after the advent of 5G technology and the unique opportunities it introduces. It focuses on how seeing 5G as an innovation platform can support the strategic repositioning of these actors, enabling them to regain influence in the market, especially in the enterprise market. An exploratory study of 8 cases was conducted, including different players in the 5G Italian telecommunication industry. Results show that industrial 5G has much in common with innovation platforms, being a promising technology that enables additional services and applications on top of it. The association between 5G and innovation platforms offers a new perspective on the challenges and ongoing dynamics for MNOs, highlighting the complexity of platform consolidation and outlining potential future scenarios. The MNO is identified as a potential candidate to orchestrate the ecosystem, although this remains a prospective view. Finally, a framework is presented to picture the 5G landscape, offering strategic insights to maximize MNOs’ competitive advantage.
Download