ICEIS 2019 Abstracts


Area 1 - Databases and Information Systems Integration

Full Papers
Paper Nr: 25
Title:

MediBot: An Ontology based Chatbot for Portuguese Speakers Drug’s Users

Authors:

Caio S. Avila, Anderson B. Calixto, Tulio V. Rolim, Wellington Franco, Amanda P. Venceslau, Vânia P. Vidal, Valéria M. Pequeno and Francildo Felix De Moura

Abstract: Brazil is one of the countries with the highest levels of drug consumption in the world. By 2012, about 66% of the population claimed to practice self-medication. Such activity can lead to a wide range of risks, including death from drug intoxication. Studies indicate that a lack of knowledge about drugs and their dangers is one of the main aggravating factors in this scenario. This work aims to universalize access to information about medications and their risks for different user profiles, especially Brazilian and lay users. In this paper, we present the construction process of a Linked Data Mashup (LDM) integrating the following datasets: consumer drug prices, government drug prices and drug risks during pregnancy from ANVISA, and SIDER from BIO2RDF. In addition, this work presents MediBot, an ontology-based chatbot capable of responding to requests in natural language in Portuguese through the instant messenger Telegram, easing the process of querying the data. MediBot acts as a natural language query interface over an LDM that works as an abstraction layer providing an integrated view of multiple heterogeneous data sources.

Paper Nr: 43
Title:

SQL for Stored and Inherited Relations

Authors:

Witold Litwin

Abstract: A stored and inherited relation (SIR) is a stored relation (SR) extended with inherited attributes (IAs) calculated as in a view. Without affecting the normal form of the SR, IAs can make queries free of logical navigation or of value expressions. A view of the SR can do the same. The virtual (dynamic, computed...) attributes (VAs) that may extend SRs in major DBSs can do the same for the value expressions defining them. VAs are less procedural to declare than any alternate view. Likewise, altering any attribute of an SR with VAs, which would otherwise lead to view altering, is less procedural. We propose extensions to SQL generalizing the latter two properties to SIRs. In particular, one may define IAs through value expressions not supported as VAs at present. Also, defining an IA instead of a VA is at most as procedural. We motivate our proposals through the "biblical" Supplier-Part DB. We postulate that SIRs should become standard on SQL DBSs.

Paper Nr: 46
Title:

Enhancing Knowledge Graphs with Data Representatives

Authors:

André Pomp, Lucian Poth, Vadim Kraus and Tobias Meisen

Abstract: Due to the digitalization of many processes in companies and the increasing networking of devices, there is an ever-increasing number of data sources and corresponding data sets. To make these data sets accessible, searchable and understandable, recent approaches focus on the creation of semantic models by domain experts, which enable the annotation of the available data attributes with meaningful semantic concepts from knowledge graphs. To simplify the annotation process, recommendation engines based on the data attribute labels can support this process. However, as soon as the labels are incomprehensible, cryptic or ambiguous, the domain expert will not receive any support. In this paper, we propose a semantic concept recommendation for data attributes based on the data values rather than on the labels. To this end, we extend knowledge graphs with data instances in order to learn dedicated data representations. Using different approaches, such as machine learning, rules or statistical methods, enables us to recommend semantic concepts based on the content of data points rather than on the labels. Our evaluation with publicly available data sets shows that the accuracy improves when using our flexible and dedicated classification approach. Further, we present shortcomings and extension points derived from the analysis of our evaluation.
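The core idea of recommending semantic concepts from data values rather than labels can be illustrated with a minimal sketch. The features, concept names and thresholds below are invented for illustration; the paper's engine uses learned representations over knowledge-graph instances, not hand-written rules.

```python
import re
from statistics import mean

def column_features(values):
    """Simple statistics computed from a column's raw string values."""
    return {
        "numeric_ratio": mean(1.0 if re.fullmatch(r"-?\d+(\.\d+)?", v) else 0.0
                              for v in values),
        "avg_length": mean(len(v) for v in values),
        "at_sign_ratio": mean(1.0 if "@" in v else 0.0 for v in values),
    }

def recommend_concept(values):
    """Rule-based stand-in for a classifier mapping value features to concepts."""
    f = column_features(values)
    if f["at_sign_ratio"] > 0.8:
        return "EmailAddress"
    if f["numeric_ratio"] > 0.8:
        return "Number"
    return "Text"
```

A learned model would replace the hand-written rules, but the pipeline shape (featurize the values, then classify) stays the same even when the column label is cryptic.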

Paper Nr: 53
Title:

Anonylitics: From a Small Data to a Big Data Anonymization System for Analytical Projects

Authors:

Alexandra Pomares-Quimbaya, Alejandro Sierra-Múnera, Jaime Mendoza-Mendoza, Julián Malaver-Moreno, Hernán Carvajal and Victor Moncayo

Abstract: When a company requires analytical capabilities using data that might include sensitive information, it is important to use a solution that protects the sensitive portions while maintaining their usefulness. An analysis of existing anonymization approaches found that some of them only permit disclosing aggregated information about large groups or require knowing in advance the type of analysis to be performed, which is not viable in Big Data projects; others have low scalability, which makes them unsuitable for large data sets. Another group of works is only presented theoretically, without any evidence of evaluations or tests in real environments. To fill this gap, this paper presents Anonylitics, an implementation of the k-anonymity principle for small and Big Data settings, intended for contexts where it is necessary to disclose small or large data sets for applying supervised or unsupervised techniques. Anonylitics improves on available implementations of k-anonymity by using a hybrid approach during the creation of the anonymized blocks, maintaining the data types of the original attributes, and guaranteeing scalability on large data sets. Considering the diverse infrastructures and data volumes managed by current companies, Anonylitics was implemented in two versions: a centralized version, for companies that have small data sets, or large data sets but good vertical infrastructure capabilities, and a Big Data version, for companies with large data sets and horizontal infrastructure capabilities. Evaluation on different data sets with diverse protection requirements demonstrates that our solution maintains the utility of the data, guarantees its privacy, and has good time-complexity performance.
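The k-anonymity principle the system implements can be sketched minimally: every combination of quasi-identifier values must occur at least k times, typically achieved by generalizing attributes. This is the generic principle only, not Anonylitics' hybrid block-creation algorithm.

```python
from collections import Counter

def is_k_anonymous(rows, quasi_ids, k):
    """True if every quasi-identifier combination occurs at least k times."""
    counts = Counter(tuple(row[q] for q in quasi_ids) for row in rows)
    return all(c >= k for c in counts.values())

def generalize_age(rows, width=10):
    """A typical generalization step: coarsen exact ages into ranges."""
    out = []
    for row in rows:
        lo = (row["age"] // width) * width
        out.append({**row, "age": f"{lo}-{lo + width - 1}"})
    return out
```

For example, three records with ages 23, 25 and 27 are not 2-anonymous on (age, zip), but become 3-anonymous once ages are generalized to the range "20-29".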

Paper Nr: 67
Title:

Metadata Management for Textual Documents in Data Lakes

Authors:

Pegdwendé N. Sawadogo, Tokio Kibata and Jérôme Darmont

Abstract: Data lakes have emerged as an alternative to data warehouses for the storage, exploration and analysis of big data. In a data lake, data are stored in a raw state and bear no explicit schema. Hence, an efficient metadata system is essential to keep the data lake from turning into a so-called data swamp. Existing works on managing data lake metadata mostly focus on structured and semi-structured data, with little research on unstructured data. Thus, we propose in this paper a methodological approach to build and manage a metadata system that is specific to textual documents in data lakes. First, we make an inventory of usual and meaningful metadata to extract. Then, we apply specific techniques from the text mining and information retrieval domains to extract, store and reuse these metadata within the COREL research project, in order to validate our proposals.

Paper Nr: 76
Title:

Optimization of Gaps Resolution Strategy in Implementation of ERP Systems

Authors:

Jānis Grabis

Abstract: Enterprise Resource Planning (ERP) systems are packaged applications developed by their vendors. Their functionality is not specifically tailored to the particular companies implementing these systems. Differences between the provided functionality and a company’s needs are identified using fit-gap analysis. The paper develops a novel optimization model for fit-gap analysis. The model yields an optimal gaps resolution strategy, which defines the type and timing of the customizations made to resolve the gaps; decisions are made with respect to the vendor’s software evolution roadmap. Thus, the model highlights trade-offs between in-house customization and the adoption of standard features yet to be released. The optimization results are analysed depending on the company’s customization preferences, and an application example is also provided. The model allows for understanding and evaluating the relationships between the company implementing the ERP system and the vendor of the ERP system.

Paper Nr: 114
Title:

Detecting Influencers in Very Large Social Networks of Games

Authors:

Leonardo P. Moraes and Robson F. Cordeiro

Abstract: Online games have become a popular form of entertainment, reaching millions of players. Among these players are the game influencers, that is, players with a high influence in creating new trends by publishing online content (e.g., videos, blogs, forums). Other players follow the influencers to enjoy their game content. In this sense, game companies invest in influencers to market their products. However, how can the game influencers be identified among millions of players of an online game? This paper proposes a framework to extract temporal aspects of the players’ actions and then detect the game influencers by performing a classification analysis. Experiments with the well-known Super Mario Maker game, from Nintendo Inc., Kyoto, Japan, show that our approach is able to detect game influencers of different nations with high accuracy.

Paper Nr: 133
Title:

Pattern-based Method for Anomaly Detection in Sensor Networks

Authors:

Ines Kraiem, Faiza Ghozzi, Andre Peninou and Ines Kraiem

Abstract: The detection of anomalies in real fluid distribution applications is a difficult task, especially when we seek to accurately detect different types of anomalies and possible sensor failures. Resolving this problem is increasingly important in building management and supervision applications. In this paper we introduce CoRP (Composition of Remarkable Points), a configurable approach based on pattern modelling for the simultaneous detection of multiple anomalies. CoRP evaluates a set of user-defined patterns in order to tag remarkable points with labels, and then detects the anomalies among them by composition of labels. Compared with algorithms from the literature, our approach is more robust and accurate in detecting all types of anomalies observed in real deployments. Our experiments are based on real-world data and data from the literature.
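The tag-then-compose idea can be sketched as follows. The pattern used here (a jump up immediately followed by a jump down marks a spike) is an illustrative stand-in for CoRP's user-defined patterns, not the approach itself.

```python
def label_points(series, threshold):
    """Tag 'remarkable' points where the series jumps by more than a threshold."""
    labels = []
    for i in range(1, len(series)):
        delta = series[i] - series[i - 1]
        if delta > threshold:
            labels.append((i, "up"))
        elif delta < -threshold:
            labels.append((i, "down"))
    return labels

def detect_spikes(labels):
    """Compose labels: an 'up' immediately followed by a 'down' marks a spike."""
    return [i for (i, l), (j, m) in zip(labels, labels[1:])
            if l == "up" and m == "down" and j == i + 1]
```

Separating the labeling step from the composition step is what makes this style of detector configurable: new anomaly types are added by defining new label compositions, not by rewriting the detector.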

Paper Nr: 165
Title:

The Gold Tree: An Information System for Analyzing Academic Genealogy

Authors:

Gabriel Madeira, Eduardo N. Borges, Matheus Barañano, Prícilla K. Nascimento, Giancarlo Lucca, Maria F. Maia, Helida Salles and Graçaliz Dimuro

Abstract: Academic genealogy investigates the relationships between student researchers and academy professionals. In recent years, it has proved to be a powerful technique to help analyze the spread of scientific knowledge. Tools that make it easier to visualize these relationships among academics are potentially useful and have been proposed. This work specifies and describes the development of a Web information system for creating and visualizing academic genealogy trees from a set of metadata extracted and integrated from multiple sources. The proposed system allows a researcher to query and track information about his or her advisers and graduate students at any level. A case study using data from more than 570 thousand theses and dissertations was explored to validate the system.

Paper Nr: 188
Title:

Towards a Runtime Standard-based Testing Framework for Dynamic Distributed Information Systems

Authors:

Moez Krichen, Roobaea Alroobaea and Mariam Lahami

Abstract: In this work, we are interested in testing dynamic distributed information systems; that is, we consider a decentralized information system which can evolve over time. For this purpose, we propose a runtime standard-based test execution platform built upon the normalized TTCN-3 specification and implementation testing language. The proposed platform ensures the execution of test cases at runtime. Moreover, it considers both structural and behavioral adaptations of the system under test. In addition, it is equipped with a test isolation layer that minimizes the risk of interference between business and testing processes. The platform also generates a minimal subset of test scenarios to execute after each adaptation. Finally, it proposes an optimal strategy to place the TTCN-3 test components among the system execution nodes.

Short Papers
Paper Nr: 39
Title:

k-means Improvement by Dynamic Pre-aggregates

Authors:

Nabil El malki, Franck Ravat and Olivier Teste

Abstract: The k-means algorithm is one of the best-known clustering algorithms. k-means requires iterative and repeated accesses to the data, sometimes performing the same calculations several times on the same data. However, intermediate results that are difficult to predict at the beginning of the k-means process are not recorded, so some calculations are needlessly repeated in subsequent iterations. These repeated calculations can be costly, especially when clustering massive data. In this article, we propose to extend the k-means algorithm by introducing pre-aggregates. These aggregates can then be reused to avoid redundant calculations during successive iterations. We show the interest of the approach through several experiments, which demonstrate that the larger the data volume, the more the pre-aggregates speed up the algorithm.
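The kind of saving that aggregates can provide is sketched below in one dimension: per-cluster sums and counts are maintained incrementally as points change cluster, so each centroid update reads the aggregates instead of rescanning every point. This is a simplified illustration of the general idea, not the paper's exact pre-aggregation scheme.

```python
def assign(points, centroids):
    """Assign each 1-D point to its nearest centroid."""
    return [min(range(len(centroids)), key=lambda c: abs(p - centroids[c]))
            for p in points]

def kmeans_with_aggregates(points, centroids, iters=10):
    """k-means where per-cluster sums/counts are kept up to date incrementally,
    so centroids come from the aggregates rather than a full rescan."""
    k = len(centroids)
    labels = assign(points, centroids)
    sums, counts = [0.0] * k, [0] * k
    for p, c in zip(points, labels):
        sums[c] += p
        counts[c] += 1
    for _ in range(iters):
        centroids = [sums[c] / counts[c] if counts[c] else centroids[c]
                     for c in range(k)]
        new_labels = assign(points, centroids)
        for p, old, new in zip(points, labels, new_labels):
            if old != new:  # only moved points touch the aggregates
                sums[old] -= p; counts[old] -= 1
                sums[new] += p; counts[new] += 1
        if new_labels == labels:
            break
        labels = new_labels
    return centroids, labels
```

On massive data the centroid update becomes O(k) instead of O(n), which is consistent with the abstract's observation that the benefit grows with data volume.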

Paper Nr: 41
Title:

MDA Process to Extract the Data Model from Document-oriented NoSQL Database

Authors:

Amal A. Brahim, Rabah T. Ferhat and Gilles Zurfluh

Abstract: In recent years, the need to use NoSQL systems to store and exploit big data has been steadily increasing. Most of these systems are characterized by the "schema-less" property, i.e., the absence of a data model when creating a database. This property brings undeniable flexibility by allowing the model to evolve while the database is in use. However, expressing queries requires a precise knowledge of the data model. In this article, we propose a process to automatically extract the physical model from a document-oriented NoSQL database. To do this, we use Model Driven Architecture (MDA), which provides a formal framework for automatic model transformation. From a NoSQL database, we propose formal transformation rules with QVT to generate the physical model. The extraction process was experimentally validated on the case of a medical application.

Paper Nr: 59
Title:

Detecting Multi-Relationship Links in Sparse Datasets

Authors:

Dongyun Nie and Mark Roantree

Abstract: Application areas such as healthcare and insurance see many patients or clients with their lifetime records spread across the databases of different providers. Record linkage is the task of using algorithms to identify the same individual in different datasets. In cases where unique identifiers are found, linking the records is trivial. However, a very high number of individuals cannot be matched, as common identifiers do not exist across datasets and their identifying information is not exact or is often quite different (e.g. after a change of address). In this research, we provide a new approach to record linkage which also includes the ability to detect relationships between customers (e.g. family). A validation is presented which highlights the best parameter and configuration settings for the types of relationship links that are required.
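Where no common identifier exists, linkage must fall back on approximate matching of identifying fields. A minimal sketch follows; the field names, weights and threshold are illustrative parameters, not the paper's configuration.

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Character-level similarity in [0, 1], case-insensitive."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def link_records(left, right, name_w=0.7, addr_w=0.3, threshold=0.75):
    """Link records across datasets by a weighted similarity of fields."""
    links = []
    for i, l in enumerate(left):
        for j, r in enumerate(right):
            score = (name_w * similarity(l["name"], r["name"])
                     + addr_w * similarity(l["address"], r["address"]))
            if score >= threshold:
                links.append((i, j, round(score, 2)))
    return links
```

A fuzzy score like this links "John Smith, 12 Oak Rd" to "Jon Smith, 12 Oak Road" even though no exact field matches; tuning the weights and threshold is exactly the kind of parameter validation the abstract describes.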

Paper Nr: 71
Title:

Generalized Dirichlet Regression and other Compositional Models with Application to Market-share Data Mining of Information Technology Companies

Authors:

Divya Ankam and Nizar Bouguila

Abstract: We explore the idea that the market share of any given company has a linear relationship with the number of times the company or product is searched for on the internet. This relationship is critical in deducing whether the funds spent by a firm on advertisements have been fruitful in increasing its market share. As a proxy for advertisement expenditure, we use Google Trends data. We propose a novel regression algorithm, generalized Dirichlet regression, and apply it to data from three different information-technology fields: internet browsers, mobile phones and social networks. Our algorithm is compared to Dirichlet regression and ordinary-least-squares regression with compositional transformations. Our results show both the relationship between market shares and Google Trends and the efficiency of the generalized Dirichlet regression model.
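One of the baselines the abstract mentions, ordinary least squares after a compositional transformation, can be sketched as follows: the additive log-ratio (alr) transform maps a market-share vector to unconstrained space, each coordinate is regressed on the search-interest predictor, and predictions are mapped back. This is a minimal illustration of that baseline; generalized Dirichlet regression itself is considerably more involved.

```python
import math

def alr(composition):
    """Additive log-ratio transform: map shares (summing to 1) to
    unconstrained coordinates, using the last part as reference."""
    ref = composition[-1]
    return [math.log(p / ref) for p in composition[:-1]]

def alr_inv(coords):
    """Inverse transform back to a composition summing to 1."""
    ex = [math.exp(z) for z in coords] + [1.0]
    s = sum(ex)
    return [e / s for e in ex]

def fit_ols(xs, ys):
    """Closed-form slope and intercept for a single predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b
```

Each alr coordinate gets its own (intercept, slope) pair fitted against the search-interest series; a predicted share vector is then `alr_inv` of the per-coordinate predictions, guaranteeing the output still sums to 1.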

Paper Nr: 72
Title:

A Real-time Big Data Framework for Network Security Situation Monitoring

Authors:

Guanyao Du, Chun Long, Jianjun Yu, Wei Wan, Jing Zhao and Jinxia Wei

Abstract: In this paper, we provide a real-time computation and visualization framework for network security situation monitoring based on big data technology; it realizes real-time dynamic display of massive, multi-dimensional network attacks with Data-Driven Documents (D3). Firstly, we propose an integration and storage management mechanism for massive heterogeneous multi-source data for network security data fusion. Then, we provide a general real-time data computation and visualization framework for massive network security data. Based on this framework, we use real security data from the network security cloud service platform of the Chinese Academy of Sciences (CAS) to realize visualization monitoring of dynamic network attacks nationwide and worldwide, respectively. Experimental results are given to analyze the performance of our proposed framework in terms of the efficiency of the data integration and computation stages.

Paper Nr: 74
Title:

Adapting Linear Hashing for Flash Memory Resource-constrained Embedded Devices

Authors:

Andrew Feltham, Spencer MacBeth, Scott Fazackerley and Ramon Lawrence

Abstract: Linear hashing provides constant time operations for data indexing and has been widely implemented for database systems. Embedded devices, often with limited memory and CPU resources, are increasingly collecting and processing more data and benefit from fast index structures. Implementing linear hashing for flash-based embedded devices is challenging both due to the limited resources and the unique properties of flash memory. In this work, an implementation of linear hashing optimized for embedded devices is presented and evaluated. Experimental results demonstrate that the implementation has constant time performance on embedded devices, even with as little as 8 KB of memory, and offers benefits for several use cases.
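The core mechanism of linear hashing, splitting one bucket at a time as load grows so the table never rehashes everything at once, can be sketched as below. This is an in-memory illustration of the classic scheme; the paper's contribution is adapting it to flash memory constraints, which this sketch does not model.

```python
class LinearHashTable:
    """Linear hashing: buckets split one at a time, in order, keeping
    the load bounded without ever rehashing the whole table."""

    def __init__(self, initial_buckets=2, max_load=0.75):
        self.n0 = initial_buckets   # buckets at level 0
        self.level = 0              # current doubling round
        self.split = 0              # next bucket to split
        self.max_load = max_load
        self.buckets = [[] for _ in range(initial_buckets)]
        self.count = 0

    def _addr(self, key):
        n = self.n0 * (2 ** self.level)
        a = hash(key) % n
        if a < self.split:          # bucket already split this round
            a = hash(key) % (2 * n)
        return a

    def insert(self, key, value):
        self.buckets[self._addr(key)].append((key, value))
        self.count += 1
        if self.count / len(self.buckets) > self.max_load:
            self._split_one()

    def _split_one(self):
        n = self.n0 * (2 ** self.level)
        self.buckets.append([])             # sibling of the split bucket
        items = self.buckets[self.split]
        self.buckets[self.split] = []
        self.split += 1
        if self.split == n:                 # round finished: double the level
            self.level += 1
            self.split = 0
        for key, value in items:            # redistribute only one bucket
            self.buckets[self._addr(key)].append((key, value))

    def get(self, key):
        for k, v in self.buckets[self._addr(key)]:
            if k == key:
                return v
        return None
```

Because each overflow triggers only one bucket split, the per-insert work stays small and predictable, which is why the structure suits devices with as little as a few kilobytes of memory.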

Paper Nr: 87
Title:

Manipulating Triadic Concept Analysis Contexts through Binary Decision Diagrams

Authors:

Kaio A. Ananias, Julio V. Neves, Pedro B. Ruas, Luis E. Zárate and Mark J. Song

Abstract: Formal Concept Analysis (FCA) is an approach based on the mathematization and hierarchization of formal concepts. Nowadays, with the increasing use of social networks for personal and professional purposes, more and more applications of data analysis in high-dimensional environments (Big Data) have been discussed in the literature. Through Formal Concept Analysis and Triadic Concept Analysis, it is possible to extract knowledge from databases in a hierarchical and systematized representation. The size of the data set commonly turns the extraction of this knowledge into a problem of high computational cost. Therefore, this paper aims to evaluate the behavior of the TRIAS algorithm for extracting triadic concepts in high-dimensional contexts. We used a synthetic context generator known as SCGaz (Synthetic Context Generator a-z). Based on this analysis, we propose a representation of triadic contexts using a structure known as a Binary Decision Diagram (BDD).

Paper Nr: 95
Title:

Extraction and Multidimensional Analysis of Data from Unstructured Data Sources: A Case Study

Authors:

Rui Lima and Estrela F. Cruz

Abstract: This paper proposes an approach to detect and extract data about a subject under study from unstructured data sources available online and spread across several Web pages, and to aggregate and store the data in a Data Warehouse properly designed for it. The Data Warehouse repository serves as the basis for Business Intelligence and Data Mining analysis. The extracted data may be complemented with information provided by other sources in order to enrich it, enhance the analysis and draw new and more interesting conclusions. The proposed process is then applied to a case study comprising the results of athletics events held in Portugal over the last 12 years. The files containing competition results are available online, spread across the websites of the several athletics associations. Almost all files are published in Portable Document Format (PDF), and each association provides files in its own internal format. The case study also proposes a mechanism for integrating the results of athletics events with their geographic location and the atmospheric conditions of the events, allowing an assessment and analysis of how atmospheric and geographical conditions affect the results achieved by the athletes.

Paper Nr: 100
Title:

Exploring Data Value Assessment: A Survey Method and Investigation of the Perceived Relative Importance of Data Value Dimensions

Authors:

Rob Brennan, Judie Attard, Plamen Petkov, Tadhg Nagle and Markus Helfert

Abstract: This paper describes the development and execution of a data value assessment survey of data professionals and academics. Its purpose was to explore more effective data value assessment techniques and to better understand the perceived relative importance of data value dimensions for data practitioners. This is important because, despite the current deep interest in data value, there is a lack of data value assessment techniques and no clear understanding of how individual data value dimensions contribute to a holistic model of data value. A total of 34 datasets were assessed in a field study of 20 organisations in a range of sectors from finance to aviation. It was found that in 17 of the 20 organisations contacted, no data value assessment had previously taken place. All the datasets evaluated were considered valuable organisational assets, and the operational impact of data was identified as the most important data value dimension. These results can inform the community’s search for data value models and assessment techniques. They also assist the further development of capability maturity models for data value assessment and monitoring. This is, to our knowledge, the first publication of the underlying data for a multi-organization data value assessment, and as such it represents a new stage in the evolution of evidence-based data valuation.

Paper Nr: 101
Title:

Schema Matching with Frequent Changes on Semi-Structured Input Files: A Machine Learning Approach on Biological Product Data

Authors:

Oliver Schmidts, Bodo Kraft, Ines Siebigteroth and Albert Zündorf

Abstract: For small to medium-sized enterprises, matching schemas is still a time-consuming manual task. Even expensive commercial solutions perform poorly if the context is not suitable for the product. In this paper, we provide an approach based on concept name learning from known transformations to discover correspondences between two schemas, solving schema matching as a classification task. Additionally, we provide a named entity recognition approach to analyze how the classification task relates to named entity recognition. Benchmarking against other machine learning models shows that, when a good learning model is chosen, schema matching based on concept name similarity can outperform other approaches and complex algorithms in terms of precision and F1-measure. Hence, our approach can build the foundation for improved automation of complex data integration applications for small to medium-sized enterprises.
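A minimal sketch of concept-name-based matching: split labels into tokens and score their overlap against candidate concepts. The paper learns concept names from known transformations with a classifier; the Jaccard scorer and the 0.3 threshold below are simplified stand-ins for that learned model.

```python
import re

def tokens(label):
    """Split a label like 'cust_name' or 'OrderDate' into lowercase tokens."""
    spaced = re.sub(r"([a-z])([A-Z])", r"\1 \2", label)
    return set(t for t in re.split(r"[_\s]+", spaced.lower()) if t)

def match_schema(source_cols, target_concepts, threshold=0.3):
    """Map each source column to the best-scoring target concept."""
    mapping = {}
    for col in source_cols:
        best, best_score = None, 0.0
        for concept in target_concepts:
            a, b = tokens(col), tokens(concept)
            score = len(a & b) / len(a | b)  # Jaccard similarity of token sets
            if score > best_score:
                best, best_score = concept, score
        if best_score >= threshold:
            mapping[col] = best
    return mapping
```

Framing the problem this way makes the classification view of schema matching concrete: each (column, concept) pair gets a score, and the matcher picks the highest-scoring class above a confidence cutoff.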

Paper Nr: 134
Title:

REMS.PA: A Complex Framework for Supporting OLAP-based Big Data Analytics over Data-intensive Business Processes

Authors:

Alfredo Cuzzocrea, Salvatore Cavalieri, Orazio Tomarchio, Giuseppe Di Modica, Concetta Cantone and Angela Di Bilio

Abstract: In this paper, we present the architecture and functionalities of REMS.PA, a complex framework for supporting OLAP-based big data analytics over data-intensive business processes, with particular regard to business processes of the Public Administration. The framework has been designed and developed in the context of a real-life project. In addition to the anatomy of the framework, we describe some case studies that highlight the benefits of our proposed framework.

Paper Nr: 180
Title:

Template-Driven Documentation for Enterprise Recruitment Best Practices

Authors:

Saleh Alamro, Huseyin Dogan, Raian Ali and Keith Phalp

Abstract: Recruitment Best Practices (RBPs) are useful when building complex Enterprise Recruitment Architectures (ERAs). However, they have some limitations that reduce their reusability. A key limitation is the lack of capturing and documenting recruitment problems and their solutions from an enterprise perspective. To address this gap, a template for Enterprise Recruitment Best Practice (ERBP) documentation is defined. This template provides a model-driven environment and incorporates all elements that must be considered for a better documentation, sharing and reuse of ERBPs. For this purpose, we develop a precise metamodel and five UML diagrams to describe the template of the ERBPs. This template will facilitate the identification and selection of ERBPs and provide enterprise recruitment stakeholders with the guidelines of how to share and reuse them. The template is produced using design science method and a detailed analysis of three case studies. The evaluation results demonstrated that the template can contribute to a better documentation of ERBPs.

Paper Nr: 197
Title:

A Quality-based ETL Design Evaluation Framework

Authors:

Zineb El Akkaoui, Alejandro Vaisman and Esteban Zimányi

Abstract: The Extraction, Transformation and Loading (ETL) process is a crucial component of a data warehousing architecture. ETL processes are usually complex and time-consuming. Particularly important (although overlooked) in ETL development is the design phase, since it impacts on the subsequent ones, i.e., implementation and execution. Addressing ETL quality at the design phase allows taking actions that can have a positive and low-cost impact on process efficiency. Using the well-known Briand et al. framework (a theoretical validation framework for system artifacts), we formally specify a set of internal metrics that we conjecture to be correlated with process efficiency. We also provide empirical validation of this correlation, as well as an analysis of the metrics that have stronger impact on efficiency. Although there exist proposals in the literature addressing design quality in ETL, as far as we are aware of, this is the first proposal aimed at using metrics over ETL models to predict the performance associated to these models.

Paper Nr: 208
Title:

The Use of Persuasive Strategies in Systems to Achieve Sustainability in the Fields of Energy and Water: A Systematic Review

Authors:

Un H. Schiefelbein, William B. Pereira, Renan Lírio de Souza, Joao D. Lima and Cristiano Cortez da Rocha

Abstract: The use of persuasive applications to change behavior has shown effective results in the most varied areas. In healthcare, for example, applications send notifications reminding users of daily exercises and, in addition to burning calories, users can accumulate points. In domains involving sustainability linked to the use of electricity and water, persuasive applications have shown promise and constitute a good field to pursue. In this sense, this work investigates which persuasive strategies are most used, and which can still be explored, in applications that seek to induce sustainable behavior in the use of water and electricity. The investigation was conducted through a systematic review.

Paper Nr: 214
Title:

An Extended Data Object-driven Approach to Data Quality Evaluation: Contextual Data Quality Analysis

Authors:

Anastasija Nikiforova and Janis Bicevskis

Abstract: This research is an extension of a data object-driven approach to data quality evaluation that allows analysing data object quality in the scope of multiple data objects. The previously presented approach was used to analyse one particular data object, mainly focusing on syntactic analysis. With the extension, the quality of the primary data object can be analysed against an unlimited number of secondary data objects. This makes possible a more comprehensive, in-depth contextual analysis of a data object. The analysis was applied to open data sets, comparing previously obtained results with those of the extended approach, and underlining the importance and benefits of the extension.

Paper Nr: 110
Title:

ETL Development using Patterns: A Service-Oriented Approach

Authors:

Bruno Oliveira, Óscar Oliveira, Vasco Santos and Orlando Belo

Abstract: Extract-Transform-Load (ETL) workflows are commonly developed using frameworks and tools that provide a set of useful pre-configured components to develop complete ETL packages. The pattern concept for ETL development is being studied as a way to simplify and improve the ETL development lifecycle. Patterns are independent composite tasks that can be changed without affecting the ETL structure. The pattern implementation reveals several challenges when used with existing ETL tools, mainly due to the monolith architectural style usually followed. The use of small and loosely-coupled components provided by the microservices architectural style can improve the way ETL patterns are used. In this paper, we present an analysis for the use of microservices for ETL application development using patterns.

Paper Nr: 139
Title:

A Legacy ERP System Integration Framework based on Ontology Learning

Authors:

Chuangtao Ma and Bálint Molnár

Abstract: Over the past decades, various legacy ERP systems have come to exist in different departments or sub-organizations within enterprises. The majority of these legacy ERP systems are heterogeneous, possibly developed by different software companies under different development frameworks. This creates a big challenge for organizations that want to develop and implement centralized and integrated management systems on top of their existing legacy ERP systems in order to respond to the dynamic business environment with agility. Ontologies are viewed as an effective technology for integrating data from multiple heterogeneous sources, and ontology learning methods have been proposed to achieve (semi-)automated construction of ontologies. This paper proposes a general framework for legacy ERP system integration based on ontology learning to tackle this challenge. Initially, the related literature is reviewed from the perspectives of system integration and ontology learning; then an integration framework based on ontology learning is given, and the basic workflow and ontology learning process are analysed and illustrated.

Paper Nr: 213
Title:

Online Consumers’ Opinions Analysis for Marketing Strategy Evaluation

Authors:

Elena Ektoros, Andreas Gregoriades and Michael Georgiades

Abstract: With over two billion users having access to social media accounts, people increasingly choose to express themselves online. Electronic word of mouth generates large amounts of data, making it a valuable source for big data analytics. This provides organisations with key capabilities for improved decision-making through mining insights directly from online sources. In this work we gathered and analysed the sentiment of consumers’ tweets regarding the release of two smartphone products. Twitter data was collected using a custom-made Android application. The research question addressed in this study focused on whether the marketing positioning strategy of the company under investigation was successful after the release of two of its new products. To evaluate this, we compared the product positioning strategy of the firm before and after the release of the products. Consumers’ opinions were analysed to identify possible discrepancies between the consumer reactions and sentiments planned by the company and how these were altered with the release of the products.
Download

Area 2 - Artificial Intelligence and Decision Support Systems

Full Papers
Paper Nr: 63
Title:

Smart General Variable Neighborhood Search with Local Search based on Mathematical Programming for Solving the Unrelated Parallel Machine Scheduling Problem

Authors:

Marcelo F. Rego and Marcone F. Souza

Abstract: This work addresses the Unrelated Parallel Machine Scheduling Problem in which machine- and job-sequence-dependent setup times are considered. The objective is to minimize the makespan. To solve it, a Smart General Variable Neighborhood Search (SGVNS) algorithm is proposed. It explores the solution space through five strategies: swapping jobs on the same machine, inserting a job on the same machine, swapping jobs between machines, inserting jobs into different machines, and applying a Mixed Integer Linear Programming formulation to obtain an optimal schedule on each machine. The first four strategies are used as a shaking mechanism, while the last three are applied as local search through the Variable Neighborhood Descent method. The proposed algorithm was tested on a set of 810 instances available in the literature and compared to three state-of-the-art algorithms. Although the SGVNS algorithm did not statistically outperform them on these instances, it was able to outperform them in 79 instances.
Download

Paper Nr: 75
Title:

NETHIC: A System for Automatic Text Classification using Neural Networks and Hierarchical Taxonomies

Authors:

Andrea Ciapetti, Rosario Di Florio, Luigi Lomasto, Giuseppe Miscione, Giulia Ruggiero and Daniele Toti

Abstract: This paper presents NETHIC, a software system for the automatic classification of textual documents based on hierarchical taxonomies and artificial neural networks. This approach combines the advantages of highly-structured hierarchies of textual labels with the versatility and scalability of neural networks, thus bringing about a textual classifier that displays high levels of performance in terms of both effectiveness and efficiency. The system has first been tested as a general-purpose classifier on a generic document corpus, and then applied to the specific domain tackled by DANTE, a European project that is meant to address criminal and terrorist-related online contents, showing consistent results across both application domains.
Download

Paper Nr: 83
Title:

MultiMagNet: A Non-deterministic Approach based on the Formation of Ensembles for Defending Against Adversarial Images

Authors:

Gabriel R. Machado, Ronaldo R. Goldschmidt and Eugênio Silva

Abstract: Deep Neural Networks have been increasingly used in decision support systems, mainly because they are the state-of-the-art algorithms for solving challenging tasks such as image recognition and classification. However, recent studies have shown that these learning models are vulnerable to adversarial attacks, i.e. attacks conducted with images maliciously modified by an algorithm to induce misclassification. Several works have proposed methods for defending against adversarial images; however, these defenses have proven ineffective, since attackers have been able to understand their internal operation. Thus, this paper proposes a defense called MultiMagNet, which randomly incorporates multiple defense components at runtime, in an attempt to introduce an expanded form of non-deterministic behavior so as to hinder evasion by adversarial attacks. Experiments performed on the MNIST and CIFAR-10 datasets show that MultiMagNet can protect classification models from adversarial images generated by the main existing attack algorithms.
Download

Paper Nr: 90
Title:

An Iterated Local Search Algorithm for Cell Nuclei Detection from Pap Smear Images

Authors:

Débora N. Diniz, Marcone F. Souza, Claudia M. Carneiro, Daniela M. Ushizima, Fátima M. Sombra, Paulo C. Oliveira and Andrea C. Bianchi

Abstract: In this work, we propose an Iterated Local Search (ILS) approach to detect cervical cell nuclei in digitized Pap smear slides. The problem consists of finding the best parameter values for identifying where the cell nuclei are located in the image. This is an important step in building a computational tool to help pathologists identify cell alterations in Pap tests. Our approach is evaluated using the ISBI Overlapping Cervical Cytology Image Segmentation Challenge (2014) database, which has 945 synthetic images and their respective ground truth. The precision achieved by the proposed heuristic approach is among the best in the literature; however, the recall still needs improvement.
Download

Paper Nr: 118
Title:

Decision Support for Planning Maritime Search and Rescue Operations in Canada

Authors:

Irène Abi-Zeid, Michael Morin and Oscar Nilo

Abstract: In this project we constructed and evaluated research artifacts to support Search and Rescue (SAR) mission coordinators in planning searches for missing persons or objects at sea. An iterative heuristic-based optimization model was formulated and implemented in a prototype integrated into a Decision Support System. Using representative examples, we show that the new planning method can help coordinators with the complex task of allocating search resources to search areas in a way that maximizes the chances of finding survivors quickly. Although developed for the Canadian Coast Guard, our method can be used in other countries. We followed Design Science Research guidelines, and our design process adhered to the Design Science Research Methodology. The research entry point was client- and context-initiated, and beta testing with users is planned for the spring of 2019. It is expected that our innovative artifacts will contribute to improving the SAR system and saving more lives.
Download

Paper Nr: 137
Title:

A New Labelling Algorithm for Generating Preferred Extensions of Abstract Argumentation Frameworks

Authors:

Samer Nofal, Katie Atkinson, Paul E. Dunne and Ismail Hababeh

Abstract: The field of computational models of argument aims to provide support for automated reasoning through algorithms that operate on arguments and attack relations between them. In this paper we present a new labelling algorithm that lists all preferred extensions of an abstract argumentation framework. The new algorithm is enhanced by a new pruning strategy. We verified our new labelling algorithm and showed that it enumerates preferred extensions faster than the old labelling algorithm.
Download

Paper Nr: 171
Title:

FREEController: A Framework for Relative Efficiency Evaluation of Software-Defined Networking Controllers

Authors:

Eduardo A. Klosowski and Adriano Fiorese

Abstract: A Software-Defined Network (SDN) requires a controller that is responsible for defining how the network will behave, since it has the responsibility of installing the flow rules that forward data streams through the network devices. Thus, the controller must perform well enough to meet the network’s needs. However, given the diversity of existing controllers, some offering more facilities to the developer while others offer higher performance, the question arises of which controller can meet the network demand, or how much performance can be traded off for more facilities. To answer these questions, this work presents FREEController, an SDN controller evaluation framework based on relative efficiency obtained by means of the Data Envelopment Analysis (DEA) multicriteria decision-making method. The proposed framework encompasses several stages, including evaluating the controllers’ performance, creating a performance database, and using this database to identify, via the DEA method, which controllers meet the network demand. Results from the evaluation of the proposed framework indicate the viability of the relative efficiency approach and its relation to the resources used by the controllers.
Download

Paper Nr: 174
Title:

Fuzzy Cooperative Games Usage in Smart Contracts for Dynamic Robot Coalition Formation: Approach and Use Case Description

Authors:

Alexander Smirnov, Leonid Sheremetov and Nikolay Teslya

Abstract: The paper describes an approach to the dynamic formation of coalitions of independent robots based on the integration of fuzzy cooperative games and smart contracts. Each member of the coalition is represented as an independent agent that negotiates, during coalition formation, over the distribution of the joint winnings. A cooperative game with a fuzzy core is used to form a coalition, allowing the actions of individual members to be coordinated towards a common goal and the overall benefit to be evaluated and distributed. To implement the negotiation process and store the responsibilities of individual participants, it is proposed to use smart contract technology, which has now become part of blockchain technology. Smart contracts serve as the entities in which the requirements and expected winnings of each participant are stored. The final agreement is also stored in the form of a smart contract that contains the distribution coefficients of the winnings, given all the conditions of participation in the coalition. The availability of smart contracts to all coalition participants provides joint control over the fulfilment of the task assigned to the coalition. The paper describes a use case based on precision farming to illustrate the main concepts of the proposed approach.
Download

Short Papers
Paper Nr: 92
Title:

A Fuzzy Approach for Data Quality Assessment of Linked Datasets

Authors:

Narciso Arruda, J. Alcântara, V. P. Vidal, Angelo Brayner, M. A. Casanova, V. M. Pequeno and Wellington Franco

Abstract: For several applications, an integrated view of linked data, called a linked data mashup, is a critical requirement. Nonetheless, the quality of linked data mashups depends heavily on the quality of the data sources. It is thus essential to analyze data source quality and to make this information explicit to consumers of such data. This paper introduces a fuzzy ontology to represent the quality of linked data sources. Furthermore, the paper shows the applicability of the fuzzy ontology in the process of evaluating the quality of the data sources used to build linked data mashups.
Download

Paper Nr: 105
Title:

A New Process Model for the Comprehensive Management of Machine Learning Models

Authors:

Christian Weber, Pascal Hirmer, Peter Reimann and Holger Schwarz

Abstract: The management of machine learning models is an extremely challenging task. Hundreds of prototypical models are being built, and just a few are mature enough to be deployed into operational enterprise information systems. The lifecycle of a model includes an experimental phase in which the model is planned, built and tested. After that, the model enters the operational phase, which includes deploying, using, and retiring it. The experimental phase is well covered by established process models like CRISP-DM or KDD. However, these models do not detail the interaction between the experimental and operational phases of machine learning models. In this paper, we provide a new process model that shows the interaction points of the experimental and operational phases of a machine learning model. For each step of our process, we discuss the corresponding functions that are relevant to managing machine learning models.
Download

Paper Nr: 138
Title:

Layout of Routers in Mesh Networks with Evolutionary Techniques

Authors:

Pedro G. Coelho, J. F. M. do Amaral, K. P. Guimarães and Matheus C. Bentes

Abstract: Wireless Mesh Networks offer cost-efficient and fast deployment, but their major problem is mesh router placement. Optimal mesh router placement ensures the desired network performance with respect to network connectivity and coverage area. As the problem is NP-hard, a motivation for solving the mesh router placement problem and seeking an optimal solution with suitable performance is to follow a heuristic approach using evolutionary techniques involving genetic algorithms, including fuzzy aggregation. Two case studies are considered in this paper. The first deals with a genetic algorithm applied to the spatial layout of routers in a two-dimensional, obstacle-free wireless mesh network model. The second considers a hybrid fuzzy-genetic scheme based on a fuzzy aggregation system that assesses the fitness of a genetic algorithm. The hybrid system evolves the router layout within an area with localization constraints where router placements are costly. The results indicate the feasibility of the proposed method for this type of application.
Download

Paper Nr: 141
Title:

An Evaluation Model for Dynamic Motivational Analysis

Authors:

Aluizio H. Filho, Simone Sartori, Hércules Antônio do Prado, Edilson Ferneda and Paulo I. Koehntopp

Abstract: Over the past decades, a significant body of research has sought to determine which factors make a worker satisfied and productive. Currently, there are intensive efforts to develop efficient systems for motivational analysis and performance evaluation. Current approaches to measuring motivation rely heavily on questionnaires and periodic interviews, with periods most often greater than six months and, in most cases, annual. With today's communication dynamics, employees can be influenced at any time by external market supply and demand factors, as well as by communications with peers and colleagues in the device mesh. It is becoming increasingly important to obtain real-time information so that preventive or corrective measures can be taken in a timely manner. This paper proposes a framework for real-time motivational analysis that uses artificial intelligence techniques to evaluate employees’ motivation at work. Motivation is evaluated from different groups of indicators: a static, periodic group (interviews and questionnaires) and two dynamic groups that collect information in real time. With the results generated by the system, it is possible to make important decisions, such as understanding the emotional interactions among employees, improving working conditions, identifying indicators of dissatisfaction and lack of motivation, and supporting promotions, salary adjustments and other situations.
Download

Paper Nr: 163
Title:

The Influence of Various Text Characteristics on the Readability and Content Informativeness

Authors:

Nina Khairova, Anastasiia Kolesnyk, Orken Mamyrbayev and Kuralay Mukhsina

Abstract: Currently, businesses increasingly use various external big data sources for extracting and integrating information into their own enterprise information systems in order to make correct economic decisions, understand customer needs, and predict risks. A necessary condition for obtaining useful knowledge from big data is analysing high-quality data, including quality textual data. In this study, we focus on the influence of readability, and of particular features of texts written for a global audience, on text quality assessment. In order to estimate the influence of different linguistic and statistical factors on text readability, we reviewed five text corpora: two containing texts from Wikipedia, a third containing texts from Simple Wikipedia, and the last two comprising scientific and educational texts. We show the linguistic and statistical features of a text that have the greatest influence on text quality for business corporations. Finally, we propose some directions towards automatically predicting the readability of texts on the Web.
Download

Paper Nr: 169
Title:

An Architectural Blueprint for a Multi-purpose Anomaly Detection on Data Streams

Authors:

Christoph Augenstein, Norman Spangenberg and Bogdan Franczyk

Abstract: Anomaly detection is an umbrella term for all kinds of applications that find unusual patterns or unexpected behaviour, such as identifying process patterns, network intrusions, or utterances with different meanings in texts. Among the various algorithms, artificial neural nets, and deep learning approaches in particular, tend to perform best in detecting such anomalies. A current drawback is the amount of data needed to train such net-based models. Moreover, data streams make the situation even more complex, as streams cannot be fed directly into a neural net, and the challenge of producing stable model quality remains, owing to the potentially infinite nature of data streams. In this setting of data streams and deep learning-based anomaly detection, we propose an architecture and present how to implement its essential components in order to turn raw input data into high-quality information in a continuous manner.
Download

Paper Nr: 186
Title:

Hierarchical Ontology Graph for Solving Semantic Issues in Decision Support Systems

Authors:

Hua Guo and Kecheng Liu

Abstract: In the context of the development of AI algorithms for natural language processing, tremendous progress has been made in knowledge abstraction and semantic reasoning. However, for answering questions with complex logic, AI systems are still at an early stage. A hierarchical ontology graph is proposed to establish analysis threads for complex questions, in order to enable AI systems to further support business decision making. The study of selecting appropriate corpora is intended to improve the data asset management of enterprises.
Download

Paper Nr: 190
Title:

Towards a Framework for Classifying Chatbots

Authors:

Daniel Braun and Florian Matthes

Abstract: From sophisticated personal voice assistants like Siri or Alexa to simplistic keyword-based search bots, today, the label “chatbot” is used broadly for all kinds of systems that use natural language as input. However, the systems summarized under this term are so diverse, that they often have very little in common with regard to technology, usage, and their theoretical background. In order to make such systems more comparable, we propose a framework that classifies chatbots based on six categories, which allow a meaningful comparison based on features which are relevant for developers, scientists, and users. Ultimately, we hope to support the scientific discourse, as well as the development of chatbots, by providing an instrument to classify and analyze different groups of chatbot systems regarding their requirements, possible evaluation strategies, available toolsets, and other common features.
Download

Paper Nr: 68
Title:

A Probabilistic Approach based on a Finite Mixture Model of Multivariate Beta Distributions

Authors:

Narges Manouchehri and Nizar Bouguila

Abstract: Model-based approaches, specifically finite mixture models, are widely applied as an inference engine in machine learning, data mining and related disciplines. They have proven to be an effective and advanced tool for the discovery, extraction and analysis of critical knowledge from data, providing better insight into the nature of the data and uncovering the hidden patterns we are looking for. Recent research has demonstrated that some distributions, such as the Beta distribution, offer more flexibility in modeling asymmetric and non-Gaussian data. In this paper, we introduce an unsupervised learning algorithm for a finite mixture model based on the multivariate Beta distribution, which can be applied to various challenging real-world problems such as texture analysis, spam detection and software module defect prediction. Parameter estimation is one of the crucial challenges when deploying mixture models. To tackle this issue, deterministic and efficient techniques such as maximum likelihood (ML), expectation maximization (EM) and Newton-Raphson methods are applied. The feasibility and effectiveness of the proposed model are assessed by experimental results involving real datasets. The performance of our framework is compared with the widely used Gaussian Mixture Model (GMM).
Download

Paper Nr: 85
Title:

Human-centered Artificial Intelligence: A Multidimensional Approach towards Real World Evidence

Authors:

Bettina Schneider, Petra M. Asprion and Frank Grimberg

Abstract: This study indicates the significance of a human-centered perspective in the analysis and interpretation of Real World Data. As an exemplary use case, the construct of perceived ‘Health-related Quality of Life’ is chosen to show, firstly, the significance of Real World Data and, secondly, the associated ‘Real World Evidence’. We settled on an iterative methodology and used hermeneutics for a detailed literature analysis to outline the relevance of, and need for, a forward-thinking approach to dealing with Real World Evidence in the life science and health care industry. The novelty of the study is its focus on human-centered artificial intelligence, which can be achieved by using ‘System Dynamics’ modelling techniques. The outcome, a human-centered ‘Indicator Set’, can be combined with results from data-driven, AI-based analytics. With this multidimensional approach, human intelligence and artificial intelligence can be intertwined towards an enriched Real World Evidence. The developed approach considers three perspectives: the elementary, the algorithmic and, as a novelty, the human-centered evidence. In conclusion, we claim that Real World Data are more valuable and applicable for achieving patient-centricity and personalization if the human-centered perspective is considered ‘by design’.
Download

Paper Nr: 88
Title:

Multi-Agent Analysis Model of Resource Allocation Variants to Ensure Fire Safety

Authors:

Andrey Smirnov, Renat Khabibulin, Nikolay Topolski and Denis Tarakanov

Abstract: The theoretical principles of multi-agent analysis of resource allocation variants for ensuring fire safety were algorithmized and implemented in software. An informational decision support system was developed that offers variants of resource allocation in a multi-agent management system. What distinguishes the developed system from similar ones is its ability to approximate expert opinion by means of a multi-level procedure for analysing variants in a multi-agent management system. This multi-level procedure approximates the preferences of the management centre more completely and therefore reduces the subjectivity of decision making on resource allocation to ensure fire safety. The procedure includes two main stages: in the first stage, component goals are distributed into sets; in the second stage, a ranking is obtained according to the preferences of the management centre. Using quantitative measures of Shannon entropy, it is proved that the proposed multi-level procedure for analysing variants in a multi-agent management system approximates the preferences of the management centre more completely than known methods for analysing resource allocation variants in long-term planning tasks.
Download

Paper Nr: 102
Title:

A Semantic Approach for Handling Probabilistic Knowledge of Fuzzy Ontologies

Authors:

Ishak Riali, Messaouda Fareh and Hafida Bouarfa

Abstract: Today, there is a critical need to develop new solutions that enable classical ontologies to deal with uncertain knowledge, which is inherent in most real-world problems. Several solutions have been proposed for this need; one of them, fuzzy ontologies, is based on fuzzy logic. Fuzzy ontologies offer a formal representation of, and reasoning in the presence of, vague and imprecise knowledge in classical ontologies. Despite their indubitable success, they cannot handle probabilistic knowledge, which is present in most real-world applications. To address this problem, this paper proposes a new solution based on fuzzy Bayesian networks, which aims to enhance the expressivity of fuzzy ontologies to handle probabilistic knowledge and benefits from the strengths of fuzzy Bayesian networks to provide fuzzy probabilistic reasoning based on the vague knowledge stored in fuzzy ontologies.
Download

Paper Nr: 115
Title:

A Software Assistant to Provide Notification to Users of Missing Information in Input Texts

Authors:

Mandy Goram

Abstract: In this paper a software assistant is presented that supports the users of a small community app in creating ads in its social marketplace by pointing out missing information and useful additions. For this purpose, questions about potentially missing aspects of the content create incentives to supplement the missing information. An insight into the prototypical development of the software assistant shows that automated support functions can be provided for users with machine learning procedures and natural language processing, even under data protection restrictions and with little data. The focus of this paper is on the presentation of text creation support. Its implementation reveals problems with the use of German language models and their language processing, and counteracts these with a rule-based approach. The learning ability of the system through automated learning procedures enables the software assistant to react to, and categorize, linguistic and content-related changes in users’ input texts.
Download

Paper Nr: 125
Title:

Evaluating Use Cases Suitability for Conversational User Interfaces

Authors:

Pedro Ferreira and André Vasconcelos

Abstract: Developments in Natural Language Understanding (NLU) are enabling tasks that were typically performed by interacting with humans to be performed by interacting with dialog systems, using the same natural language. Dialog systems can also be used as an alternative to more traditional graphical user interface (GUI) applications. We review the intrinsic differences and benefits of humans interacting with dialog systems as an alternative to other humans or GUI applications, as well as the types of use cases now being performed by chatbots. This paper aims to identify the factors that influence the selection of use cases suitable for conversational user interfaces, enabling organizations to make more informed decisions regarding chatbot implementations. The factors identified are grouped into three categories: (i) general factors; (ii) factors to be considered when implementing a chatbot instead of a human operator; and (iii) factors to be considered when implementing a chatbot instead of a traditional GUI application. Finally, the medical appointment scheduling use case is assessed using the defined factors and is found to be suitable for a conversational user interface.
Download

Paper Nr: 148
Title:

Towards a Privacy Compliant Cloud Architecture for Natural Language Processing Platforms

Authors:

Matthias Blohm, Claudia Dukino, Maximilien Kintz, Monika Kochanowski, Falko Koetter and Thomas Renner

Abstract: Natural language processing in combination with advances in artificial intelligence is on the rise. However, compliance constraints while handling personal data in many types of documents hinder various application scenarios. We describe the challenges of working with personal and particularly sensitive data in practice with three different use cases. We present the anonymization bootstrap challenge in creating a prototype in a cloud environment. Finally, we outline an architecture for privacy compliant AI cloud applications and an anonymization tool. With these preliminary results, we describe future work in bridging privacy and AI.
Download

Paper Nr: 184
Title:

Multi-agent Manufacturing Execution System (MES): Concept, Architecture & ML Algorithm for a Smart Factory Case

Authors:

Soujanya Mantravadi, Chen Li and Charles Møller

Abstract: The smart factory of the future is expected to support interoperability on the shop floor, where information systems are pivotal in enabling interconnectivity between its physical assets. In this era of digital transformation, the manufacturing execution system (MES) is emerging as a critical software tool to support production planning and control while accessing shop floor data. However, the application of MES as an enterprise information system still lacks decision support capabilities on the shop floor. As an attempt to design an intelligent MES, this paper demonstrates an artificial intelligence (AI) application in the manufacturing domain by presenting a decision support mechanism for MES aimed at production coordination. Machine learning (ML) was used to develop an anomaly detection algorithm for a multi-agent based MES to facilitate autonomous production execution and process optimization (in this paper, switching a machine off after anomaly detection on the production line). Thus, the MES executes the ‘turning off’ of the machine without human intervention. The contribution of the paper is a concept for a next-generation MES with embedded AI, i.e., an MES system architecture combined with an ML technique for a multi-agent MES. Future research directions are also put forward in this position paper.
Download

Paper Nr: 187
Title:

Analytics Applied to the Study of Reputational Risk through the Analysis of Social Networks (Twitter) for the El Dorado Airport in the City of Bogotá (Colombia)

Authors:

Luis Gabriel Moreno Sandoval and Liliana María Pantoja Rojas

Abstract: In a society that is increasingly technological and immersed in the possibilities offered by the Internet as a channel of interaction and communication, private and state entities need to explore the information contained in social networks, an opportunity that implies new challenges associated with emerging risks such as reputational risk. For this reason, drawing on the study of social networks, reputational risk and computational linguistics, an analysis of the strategic Twitter account of the El Dorado Airport of the city of Bogotá (Colombia) (@bog_eldorado) is carried out, taking into account the most relevant mentions and hashtags, to identify the polarity of the comments through a Bag of Words (BoW) model and to infer user experience, customer satisfaction, and how these influence reputational risk.
Download

Paper Nr: 212
Title:

The Effects of Digitalisation on Accounting Service Companies

Authors:

Tommi Jylhä and Nestori Syynimaa

Abstract: Rapidly expanding digitalisation will profoundly affect many jobs and businesses in the coming years; some jobs are expected to disappear altogether. Expanding digitalisation can be seen as an example of the diffusion of innovations, and the world has witnessed similar developments since the early years of industrialisation. Among the sectors facing the most disruptive changes are accounting, bookkeeping, and auditing: as much as 94 to 98 per cent of these jobs are at risk. The purpose of this study was to find out how digitalisation, the automation of routines, robotics, and artificial intelligence are expected to affect the business structure, organisations, tasks, and employees in Finland in the coming years. In this study, 11 of the biggest companies providing outsourced accounting services in Finland were interviewed. According to the results, the development of the technology will lead to a substantial loss of routine jobs in the industry in the next few years. The results of the study will help estimate the changes that rapidly developing technology will bring to the industry in focus. They will also help organisations in the industry learn from the experiences of other organisations, see the potential benefits, and prepare for the forthcoming change through strategic choices, management, and personnel training.
Download

Area 3 - Information Systems Analysis and Specification

Full Papers
Paper Nr: 3
Title:

Business Process Support in the Context of Records Management

Authors:

João Pires, André Vasconcelos and José Borbinha

Abstract: For an organization to achieve benefits from records management, it needs to adopt several practices, such as the implementation of a records management system in alignment with the organization's reality and business processes. The focus of this work is on framing records management within business process management by identifying the role that records management plays in business. This research is applied to the replacement of a records management system's workflow module, proposing a flexible and open source solution composed of an enterprise service bus and two workflow engines, accessed through a single API. The solution proves to be a flexible and scalable replacement for the workflow module, one that can even be used by systems other than records management systems. The implementation of records management in organizations is analysed, including the alignment between business processes, records management and workflows. This research shows that records management acts as an essential component of business that supports business processes and all the phases of their management, allowing for improvements in the storage, management and monitoring of records while also optimizing the execution of business activities.
Download

Paper Nr: 8
Title:

Managing Scope, Stakeholders and Human Resources in Cyber-Physical System Development

Authors:

Filipe P. Palma, Marcelo Fantinato, Laura Rafferty and Patrick K. Hung

Abstract: Cyber-Physical Systems (CPS) represent the convergence of physical processes and computational platforms into technological solutions composed of sensors, actuators and software. As with other types of projects, project management practices can benefit a CPS development project. Due to some particularities of CPS, such as multidisciplinary teams and a highly innovative nature, generic project management practices may not be enough to enhance project success. Therefore, specific practices are proposed for better managing CPS projects, called the CPS-PMBOK approach. CPS-PMBOK is based on the Project Management Institute's PMBOK body of knowledge. It is focused on the integration, scope, human resource and stakeholder knowledge areas, which were chosen based on a systematic literature review conducted to identify the main CPS challenges. Managers and developers of an R&D organization evaluated the approach. According to the practitioners consulted, the proposed practices can improve several aspects of CPS projects.
Download

Paper Nr: 21
Title:

Production-Aware Analysis of Multi-disciplinary Systems Engineering Processes

Authors:

Lukas Kathrein, Arndt Lüder, Kristof Meixner, Dietmar Winkler and Stefan Biffl

Abstract: The Industry 4.0 vision of flexible manufacturing systems depends on the collaboration of domain experts coming from a variety of engineering disciplines and on the explicit representation of knowledge on relationships between products and production systems (PPR knowledge). However, in multi-disciplinary systems engineering organizations, process analysis and improvement has traditionally focused on one specific discipline rather than on the collaboration of several workgroups and their exchange of knowledge on product/ion, i.e., product and production processes. In this paper, we investigate requirements for the product/ion-aware analysis of engineering processes to improve the engineering process across workgroups. We introduce a product/ion-aware engineering processes analysis (PPR EPA) method, to identify gaps in PPR knowledge needed and provided. For representing PPR knowledge, we introduce a product/ion-aware data processing map (PPR DPM) by extending the BPMN 2.0 standard, adding PPR knowledge classification. We evaluate the contribution in a case study at a large production systems engineering company. The domain experts found the PPR EPA method using the PPR DPM usable and useful to trace design decisions in the engineering process as foundation for advanced quality assurance analyses.
Download

Paper Nr: 30
Title:

Improving Reproducibility whilst Maintaining Accuracy in Function Point Analysis

Authors:

Marcos D. Freitas Jr., Marcelo Fantinato, Violeta Sun, Lucinéia H. Thom and Vanja Garaj

Abstract: Existing proposals to improve the measurement reproducibility of Function Point Analysis (FPA) oversimplify its standard rules, threatening its measurement accuracy. We introduce a new artifact called the Function Point Tree (FPT), which allows for the full data collection required to count function points, reducing the experts' personal interpretation and thus the size variation. The new measurement method, called FPT-based FPA (FPT-FPA), extends FPA standardization and systematization. Using this method makes it possible to improve measurement reproducibility whilst maintaining accuracy. Preliminary results of an empirical study show coefficients of variation for FPT-FPA lower than the maximum expected for both reproducibility and accuracy in some scenarios.
Download
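The reproducibility criterion referred to above is typically quantified by the coefficient of variation across independent counts of the same system; a minimal sketch of that statistic (the sample counts below are illustrative, not the study's data):

```python
import statistics

def coefficient_of_variation(counts):
    """CV = sample standard deviation / mean, for function point counts
    produced by independent measurers on the same system."""
    return statistics.stdev(counts) / statistics.mean(counts)

# Illustrative: five measurers counting the same system.
counts = [102, 98, 100, 101, 99]
cv = coefficient_of_variation(counts)
```

A lower CV across measurers indicates higher reproducibility; the study compares such coefficients against a maximum expected threshold.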

Paper Nr: 33
Title:

Work Processes in Virtual Teams: A Matching Algorithm for Their Technological Facilitation

Authors:

Birgit Großer and Ulrike Baumöl

Abstract: Virtual teams have almost become the norm, especially in larger organizations. Often globally dispersed project teams work together in a virtual setting, but we also find organizations that are fully organized following a virtual design. New technology facilitates the implementation of virtual teamwork in the organization. However, new technology steadily evolves and its adoption by organizations is not always considered successful. We therefore propose an algorithm for matching technology to the work processes of virtual teams. The results are evaluated through interviews and derived at a generalizable level, making them transferable to changing work environments and to technologies yet to be invented.
Download

Paper Nr: 35
Title:

Records Management Support in the Interoperability Framework for the Portuguese Public Administration

Authors:

Catarina Viegas, André Vasconcelos, José Borbinha and Zaida Chora

Abstract: The Portuguese public administration has a core technological infrastructure for interoperability, which assures reliable core transactions but treats all information objects as equals, leaving any necessary specialization to the applications. However, public administrations are highly regulated environments, which implies that business processes involving entities of that domain are subject to strong requirements for information management. Records management in particular is a specific concern, meaning metadata for that purpose must be produced alongside the regular business information objects. In that sense, when two or more entities of a domain of this kind engage in transactions, it is helpful for all those involved if the metadata created for that purpose can also be shared, which requires it to be commonly understood. In Portugal, national guidelines have been developed to support that goal, the remaining challenge now being their implementation. This is a classic problem of interoperability in distributed information systems, with particular challenges when scoped to the domain of a large public administration involving thousands of local systems. This paper describes the results of a research project intended to provide a proof of concept for the case of the Portuguese public administration, which resulted in an application of the Canonical Data Model method. The metadata schema produced is assessed using the Bruce-Hillman metadata quality framework, which made it possible to conclude on its effectiveness, along with suggestions for future improvements.
Download

Paper Nr: 40
Title:

Automated Measurement of Technical Debt: A Systematic Literature Review

Authors:

Ilya Khomyakov, Zufar Makhmutov, Ruzilya Mirgalimova and Alberto Sillitti

Abstract: Background: Technical Debt (TD) is a quite complex concept that includes several aspects of software development. Often, people talk about TD as the amount of postponed work, but this is just a basic approximation of a concept with many technical and managerial aspects. If TD is managed properly, it can provide a huge advantage, but if not, it can make projects unmaintainable. Therefore, being able to measure TD is very important for proper management of the development process. However, due to the complexity of the concept and the different aspects involved, such measurement is not easy and there are several different approaches in the literature. Goals: This work aims at investigating the existing approaches to the measurement and analysis of TD, focusing on quantitative methods that could also be automated. Method: The Systematic Literature Review (SLR) approach was applied to 331 studies obtained from the three largest digital libraries and databases. Results: After applying all filtering stages, 21 papers out of 331 were selected and analyzed in depth. The majority of them suggested new approaches to measuring TD using different criteria, not built on top of existing ones. Conclusions: Existing studies related to the measurement of TD were observed and analyzed. The findings show that the field is not mature and that several models have almost no independent validation. Moreover, few tools exist to help automate the evaluation process.
Download

Paper Nr: 54
Title:

Using Model Scoping with Expected Model Elements to Support Software Model Inspections: Results of a Controlled Experiment

Authors:

Carlos G. Neto, Amadeu A. Neto, Marcos Kalinowski, Daniel C. Moraes de Oliveira, Marta Sabou, Dietmar Winkler and Stefan Biffl

Abstract: Context: Software inspection represents an effective way to identify defects in early phase software artifacts, such as models. Unfortunately, large models and associated reference documents cannot be thoroughly inspected in one inspection session of typically up to two hours. Considerably longer sessions have shown a much lower defect detection efficiency due to cognitive fatigue. Goal: The goal of this paper is to propose and evaluate a Model Scoping approach to allow inspecting specific parts of interest in large models. Method: First, we designed the approach, which involves identifying Expected Model Elements (EMEs) in selected parts of the reference document and then using these EMEs to scope the model (i.e., remove unrelated parts). These EMEs can also be used to support inspectors during defect detection. We conducted a controlled experiment using industrial artifacts. Subjects were asked to conduct UML class diagram inspections based on selected parts of functional specifications. In the experimental treatment, Model Scoping was applied and inspectors were provided with the scoped model and the EMEs. The control group used the original model directly, without EMEs. We measured the inspectors’ defect detection effectiveness and efficiency and collected qualitative data on the perceived complexity. Results: Applying Model Scoping prior to the inspection significantly increased the inspector defect detection effectiveness and efficiency, with large effect sizes. Qualitative data allowed observing a perception of reduced complexity during the inspection. Conclusion: Being able to effectively and efficiently inspect large models against selected parts of reference documents is a practical need, in particular in the context of incremental and agile process models. The experiment showed promising results for supporting such inspections using the proposed Model Scoping approach.
Download

Paper Nr: 70
Title:

Reinforcing Diversity Company Policies: Insights from StackOverflow Developers Survey

Authors:

Karina K. Silveira, Soraia Musse, Isabel Manssour, Renata Vieira and Rafael Prikladnicki

Abstract: Diversity is being intensively discussed in different knowledge areas of society, and discussions in Software Engineering are increasing as well. There is unconscious bias and a lack of representativeness when we talk about characteristics such as ethnicity and gender, to mention a few. How can tech companies support diversity, minimizing unconscious bias in their teams? Studies say that diversity builds better teams and delivers better results, among other benefits. Cognitive diversity is linked to better outcomes and is influenced by identity diversity (e.g., gender, race, etc.), mainly when tasks are related to problem-solving and prediction. In this work, we are interested in understanding the pain points in software engineering regarding diversity and in providing insights to support attraction, hiring and retention policies for more diverse software engineering environments. StackOverflow is a popular community question-and-answer forum with a high engagement of software developers. Yearly, they run a survey, present straightforward results, and make the anonymized results available for download, so it is possible to perform additional analyses beyond the original ones. Using data visualization techniques, we analyzed the 2018 data, deriving insights and recommendations. Results show that diversity in companies is not yet a conscious decision-making factor for developers assessing a new job opportunity, and that respondents from underrepresented groups are more likely to believe they are not as good as their peers. We also propose a discussion about unconscious bias, stereotypes, and impostor syndrome and how to provide support on these issues.
Download

Paper Nr: 94
Title:

Some Reflections on the Discovery of Hyponyms between Ontologies

Authors:

Ignacio Huitzil, Fernando Bobillo, Eduardo Mena, Carlos Bobed and Jesús Bermúdez

Abstract: Using intelligent techniques to automatically compute semantic relationships across ontologies is a challenging task that is necessary in many real-world applications requiring the integration of semantic information coming from different sources. However, most of the work in the field is restricted to the discovery of synonymy relationships. Hyponymy relationships, although more frequent in the real world than synonymy, have not received similar attention. In this paper, we evaluate a technique based on shared properties used in the discovery of hyponymy relationships and identify some limitations of ontology sets commonly used as benchmarks. We also argue that new lexical similarity measures are needed and discuss a preliminary proposal.
Download

Short Papers
Paper Nr: 5
Title:

Data Governance and Information Governance: Set of Definitions in Relation to Data and Information as Part of DIKW

Authors:

Jan Merkus, Remko Helms and Rob Kusters

Abstract: Chaos emerges with the ever-growing amounts of data and information within organisations. But it is problematic to manage these valuable assets and to remain accountable and compliant for them, because there is no agreement about even their definitions. Our objective is to propose a coherent set of definitions for data governance and information governance within and across organisations, in relation to data and information as underlying concepts. As a research method, we explore elements from existing definitions in the literature about the Data-Information-Knowledge-Wisdom pyramid and about data governance and information governance. Classifying these elements and coding them into concepts during discussions among peers resulted in a new vocabulary. This forms the basis for the formulation and design of an original, coherent set of definitions for data, information, meaning, data governance and information governance. This research is grounded, goal oriented and uses multiple accepted literature review methods, but it is limited to the literature found and to the IS domain.
Download

Paper Nr: 7
Title:

Using ABE for Medical Data Protection in Fog Computing

Authors:

Abdelghani Krinah, Yacine Challal, Mawloud Omar and Omar Nouali

Abstract: Fog is an extension of the cloud computing paradigm, developed to address the cloud's latency, especially for applications requiring a very short response time, such as e-health applications. However, these applications also require a high level of data confidentiality, hence the need to apply appropriate encryption techniques that can meet security needs while respecting the characteristics of the infrastructure's devices. In this article, we focus on Attribute-Based Encryption (ABE), reviewing the work done to study its applicability in the cloud and the Internet of Things, as well as the improvements that can be made to adapt it to the fog computing environment.
Download

Paper Nr: 61
Title:

Agile ERP Implementation: The Case of a SME

Authors:

Sarra Mamoghli and Luc Cassivi

Abstract: The definition of system requirements is a critical management issue during an ERP project. The traditional and plan-driven approaches to define these system requirements during an ERP implementation create challenges for small organizations, notably difficulties in the expression of their requirements, often related to a lack of expertise on the subject. This research investigates the implementation of an agile project management approach to deal with this issue in the context of an ERP project. Due to the complexity of adopting agile methods in practice, few organizations are fully agile and are hence more comfortable adopting hybrid approaches by selecting a set of practices among the agile and traditional methods. Even though the adoption of these methods is rapidly growing in the case of real-life ERP projects, few research initiatives have looked into this subject. This paper analyses the case of a Canadian SME that recently implemented an ERP with an agile mindset. Based on the data collected through interviews, observations and documentation, a set of agile practices applied during the ERP project in the SME is highlighted and analyzed. Finally, the impact of this approach on requirements definition and possible improvements for the organization are also presented.
Download

Paper Nr: 80
Title:

Integrating SPL and MDD to Improve the Development of Student Information Systems

Authors:

A. Cunha, S. Fernandes and A. P. Magalhães

Abstract: Software development has become increasingly complex in recent years, with the growing multiplicity of development platforms, the integration of components in heterogeneous environments and platforms, and frequent changes in requirements. Academic systems usually integrate various subsystems, such as student enrolment and class planning, which can change almost every semester. To address these issues, different development approaches can be used, for example, Model-Driven Development (MDD) and Software Product Lines (SPL). This paper presents an approach that integrates MDD with SPL for the development of evaluation criteria in a family of educational systems. The solution comprises a modeling language, called DSCHOLAR, for creating the models, and a transformation for C# code generation. This article details the transformation responsible for generating the code of the evaluation criteria components for student evaluations according to different universities' scenarios. The transformation was validated using proofs of concept in which evaluation criteria from three public and private universities were modeled using DSCHOLAR and subsequently converted into C# code.
Download

Paper Nr: 81
Title:

Aligning Software Requirements with Strategic Management using Key Performance Indicators: A Case Study for a Telephone Sales Software

Authors:

Lucas R. Conceição

Abstract: Companies are increasingly dependent on tailor-made software to achieve their organizational goals. Much is already known about how to specify software from an idea or concept; however, predicting the impact of building it on a company's results is still little studied, and often the impact is measured only after construction, sometimes resulting in a misuse of resources relative to the result obtained. This paper presents a way of relating and measuring the impact of software requirements on strategic KPIs, in order to extract quantitative and qualitative analyses of these relationships, providing relevant information for decision making regarding prioritization against business value. Through a case study, it is shown how to use Goal Modelling techniques to extract and relate requirements from the KPIs of a Balanced Scorecard. It is possible to extract, from the described techniques, qualitative and quantitative results that show the impact of each of the requirements on the mapped KPIs.
Download

Paper Nr: 91
Title:

An Effective RF-based Intrusion Detection Algorithm with Feature Reduction and Transformation

Authors:

Jinxia Wei, Chun Long, Wei Wan, Yurou Zhang, Jing Zhao and Guanyao Du

Abstract: Intrusion detection systems are essential in the field of network security. To improve the performance of detection models, many machine learning algorithms have been applied to intrusion detection. Higher-quality data is critical to the accuracy of a detection model and can greatly improve its performance. In this paper, an effective random forest-based intrusion detection algorithm with feature reduction and transformation is proposed. Specifically, we implement correlation analysis and the logarithm marginal density ratio to reduce and strengthen the original features respectively, which can greatly improve the accuracy of the classifier. The proposed classification system was evaluated on the NSL-KDD dataset. The experimental results show that it achieves better results than other related methods in terms of false alarm rate, accuracy, detection rate and running time.
Download
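The feature-strengthening step named in the abstract can be read as replacing each feature value with the logarithm of the ratio of its class-conditional densities. A minimal sketch under that reading, with per-class densities estimated from equal-width histogram bins (the binning scheme and smoothing constant are illustrative assumptions, not the paper's exact setup):

```python
import math
from collections import Counter

def log_density_ratio_transform(values, labels, bins=10, eps=1e-6):
    """Replace each feature value x with log p(x|attack)/p(x|normal),
    estimating both densities with equal-width histograms over `values`.
    Labels: 1 = attack (positive), 0 = normal (negative)."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0
    def bin_of(x):
        return min(int((x - lo) / width), bins - 1)
    pos = Counter(bin_of(x) for x, y in zip(values, labels) if y == 1)
    neg = Counter(bin_of(x) for x, y in zip(values, labels) if y == 0)
    n_pos = sum(pos.values()) or 1
    n_neg = sum(neg.values()) or 1
    return [
        math.log((pos[bin_of(x)] / n_pos + eps) / (neg[bin_of(x)] / n_neg + eps))
        for x in values
    ]
```

After this transformation, values typical of attacks map to large positive scores and values typical of normal traffic to negative scores, which tends to make the classes easier for a classifier such as a random forest to separate.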

Paper Nr: 97
Title:

A Smart Product Co-design and Monitoring Framework Via Gamification and Complex Event Processing

Authors:

Spyros Loizou, Amal Elgammal, Indika Kumara, Panayiotis Christodoulou, Mike P. Papazoglou and Andreas S. Andreou

Abstract: In the traditional software development cycle, requirements gathering is considered the most critical phase. Getting the requirements right early has become a dogma in software engineering because the correction of erroneous or incomplete requirements in later software development phases becomes overly expensive. For product-service systems (PSS), this dogma and standard requirements engineering (RE) approaches are not appropriate because classical RE is considered concluded once a product service is delivered. This paper proposes a novel framework that enables the customer and the product engineer to co-design smart products by integrating three novel and advanced technologies to support: view-based modelling, visualization and monitoring, i.e., Product-Oriented Configuration Language (PoCL), gamification and Complex Event Processing (CEP), respectively. These create a “digital-twin” model of the connected ‘smart’ factory of the future. The framework is formally founded on the novel concept of manufacturing blueprints, which are formalized knowledge-intensive structures that provide the basis for actionable PSS and production “intelligence” and a move toward more fact-based manufacturing decisions. Implementation and validation of the proposed framework through real-life case studies are ongoing to validate the applicability, utility and efficacy of the proposed solutions.
Download

Paper Nr: 99
Title:

A Field Research on the Practices of High Performance Software Engineering Teams

Authors:

Alessandra S. Dutra, Rafael Prikladnicki and Tayana Conte

Abstract: This paper presents the results of a field research aimed at identifying the practices adopted by high-performance software engineering teams. The field research was developed through interviews with project managers from several companies, with the following objectives: to evaluate the knowledge of professionals in relation to the characteristics of high-performance teams found in the literature; to understand and identify which practices companies use to develop each high-performance characteristic; and to identify the training approaches used to improve professionals in each practice.
Download

Paper Nr: 104
Title:

Managing Discipline-Specific Metadata Within an Integrated Research Data Management System

Authors:

Marius Politze, Sarah Bensberg and Matthias S. Müller

Abstract: Our university intends to improve central IT support for the management of research data. A core demand is supporting the FAIR guiding principles. In order to make research data findable for future research projects, an application for the creation and storage of structured metadata for research data was developed. The created metadata repository enables creating, maintaining and querying research data based on discipline-specific properties. Since a large number of metadata standards exist for different scientific domains, technologies from the areas of Linked Data and the Semantic Web are used to process and store metadata. This work describes the requirements, design and implementation of the metadata application, which can be integrated into existing research workflows, and gives an overview of the technical background used for creating the metadata repository.
Download

Paper Nr: 117
Title:

Application of Methodologies and Process Models in Big Data Projects

Authors:

Rosa Quelal, Luis E. Mendoza and Mónica Villavicencio

Abstract: The concept of Big Data is being used in different business sectors; however, it is not certain which methodologies and process models have been used for the development of these kinds of projects. This paper presents a systematic literature review of studies reported between 2012 and 2017 related to agile and non-agile methodologies applied in Big Data projects. For validating our review process, a text mining method was used. The results reveal that since 2016 the number of articles that integrate the agile manifesto in Big Data projects has increased, with Scrum being the most commonly applied agile framework. We also found that 44% of the articles obtained from a manual systematic literature review were automatically identified by applying text mining.
Download

Paper Nr: 119
Title:

A Food Value Chain Integrated Business Process and Domain Models for Product Traceability and Quality Monitoring: Pattern Models for Food Traceability Platforms

Authors:

Estrela F. Cruz and António D. Miguel Rosado Cruz

Abstract: Traceability of product lots in perishable products' value chains, such as food products, is driven by increasing quality demands and customers' awareness. Products' traceability is related to the geographical origin and location of products and their transport and storage conditions. These properties must be continuously measured and monitored, enabling product lots' traceability concerning location and quality throughout the value chain. This paper proposes pattern integrated business-process and domain models for food product lots traceability in the inter-organizational space inside a food value chain, allowing organizations to exchange information about the quality and location of product lots, from their production and first sale until the sale to the final customer, passing through the transportation, storage, transformation and sale of each lot. The paper also presents the process followed for obtaining these two pattern models. Three exploratory case studies are used, towards the end of the paper, for validating the proposed business-process and domain pattern models.
Download

Paper Nr: 130
Title:

Rough Logs: A Data Reduction Approach for Log Files

Authors:

Michael Meinig, Peter Tröger and Christoph Meinel

Abstract: Modern scalable information systems produce a constant stream of log records to describe their activities and current state. This data is increasingly used for online anomaly analysis, so that dependability problems such as security incidents can be detected while the system is running. Due to the constant scaling of many such systems, the amount of processed log data is a significant aspect to be considered in the choice of any anomaly detection approach. We therefore present a new idea for log data reduction called ‘rough logs’. It utilizes rough set theory for reducing the number of attributes being collected in log data for representing events in the system. We tested the approach in a large case study - the experiments showed that data reduction possibilities proposed by our approach remain valid even when the log information is modified due to anomalies happening in the system.
Download
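The core reduction idea can be sketched as dropping log attributes whose removal never makes two differently-labelled records indiscernible, in the rough set sense. The greedy procedure below is an illustrative approximation of a reduct computation, not the authors' exact algorithm:

```python
def consistent(projected, decisions):
    """True if no two records with identical remaining attribute values
    disagree on the decision (e.g., the log event class)."""
    seen = {}
    for p, d in zip(projected, decisions):
        if seen.setdefault(p, d) != d:
            return False
    return True

def greedy_reduct(rows, decisions):
    """Greedily drop attributes whose removal keeps the decision table
    consistent; returns the indices of the attributes to keep logging."""
    keep = list(range(len(rows[0])))
    for a in list(keep):
        trial = [i for i in keep if i != a]
        if consistent([tuple(r[i] for i in trial) for r in rows], decisions):
            keep = trial
    return keep
```

Attributes outside the returned set are redundant for distinguishing the decision classes, so they are candidates for omission from the collected log records.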

Paper Nr: 136
Title:

An Innovative Framework for Supporting Social Network Polluting-content Detection and Analysis

Authors:

Alfredo Cuzzocrea, Fabio Martinelli and Francesco Mercaldo

Abstract: In recent years we have witnessed a growing interest in tools for analyzing big data gathered from social networks in order to find common opinions. In this context, content polluters on social networks make it difficult for the opinion mining process to browse valuable content. In this paper we propose a method aimed at discriminating between polluted and real information from a semantic point of view. We exploit a combination of word embedding and deep learning techniques to categorize semantic similarities between (polluted and real) linguistic sentences. We evaluate the proposed method on a data set of real-world sentences, obtaining interesting results in terms of precision and recall.
Download
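The semantic-similarity step can be sketched as averaging word vectors into a sentence vector and comparing sentences with cosine similarity. The toy two-dimensional vectors below are illustrative stand-ins for trained embeddings (e.g., word2vec), not the authors' model:

```python
import math

# Toy word vectors: first dimension loosely "promotional", second "content".
EMBEDDINGS = {
    "cheap": [0.9, 0.1], "discount": [0.8, 0.2],
    "great": [0.1, 0.9], "movie": [0.2, 0.8],
}

def sentence_vector(sentence):
    """Average the vectors of known words: a common baseline sentence embedding."""
    vecs = [EMBEDDINGS[w] for w in sentence.lower().split() if w in EMBEDDINGS]
    return [sum(c) / len(vecs) for c in zip(*vecs)]

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)
```

A polluted (e.g., promotional) sentence then scores closer to other polluted sentences than to genuine content, which is the signal a downstream deep classifier can learn from.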

Paper Nr: 152
Title:

Methodological Approach of Integrating the Internet of Things with Enterprise Resource Planning Systems

Authors:

Danijel Sokač and Ruben Picek

Abstract: The ubiquity of the term "digital transformation" brings into focus the need to analyze this concept in industry today. Specifically, the paper describes how organizations view the Internet of Things (IoT), one of the digital transformation's technologies, and whether they would integrate it with their existing information system or whether this should accompany the implementation of an Enterprise Resource Planning (ERP) system. The digital transformation has surely raised the bar, and organizations are researching new options and innovative ways of doing business in order to survive in the global market. On the other side, ERP vendors offer methodologies for implementing their product packages, and the digital transformation has brought the idea of integrating IoT with the ERP system. While studying these methodologies, it became obvious that no segments are dedicated to integrating IoT with an ERP system. Therefore, we suggest a generic set of steps applicable within existing ERP implementation methodologies.
Download

Paper Nr: 159
Title:

Architecture for Mapping Relational Database to OWL Ontology: An Approach to Enrich Ontology Terminology Validated with Mutation Test

Authors:

Cristiane G. Huve, Alex M. Porn and Leticia M. Peres

Abstract: Ontologies are structures used to represent a specific domain. One well-known method to simplify ontology building is to extract domain concepts from a relational database. This article presents an architecture that enables an automatic mapping process from a relational database to an OWL ontology. It proposes to enrich the terminology of ontology elements and is validated with mutation tests. The architecture's mapping process makes use of new and existing mapping rules and overcomes gaps not previously addressed, such as the use of the database logical model to eliminate duplicated ontology elements and the mapping of inheritance relationships from tables and records. We highlight the structure of element mapping, which allows maintaining source-to-target traceability for verification. We validate our approach with two experiments: the first focuses on architecture validation through an experiment with three scenarios, and the second uses a testing engine applying a mutation test methodology to OWL ontology validation.
Download

Paper Nr: 164
Title:

Issue Reports Analysis in Enterprise Open Source Systems

Authors:

Lerina Aversano

Abstract: In many organizations, Enterprise Resource Planning (ERP) systems can be considered the backbone of business process management. Therefore, understanding their maintenance processes is a relevant topic for practitioners. As for many open source projects, change requirements for ERP software are managed through issue tracker systems, which collect requests for change in the form of issue reports. However, issue reports very often lack relevant information. Consequently, the time to resolution is strongly influenced by the quality of the reporting. In this paper, we investigate the quality of issue reports for enterprise open source systems. We examined some relevant metrics impacting the quality of issue reports, such as the presence of itemization, the presence of attachments, comments, and readability. The evaluation of issue report quality was then conducted on enterprise open source software.
Download

Paper Nr: 177
Title:

Towards Automated Modelling of Large-scale Cybersecurity Transformations: Potential Model and Methodology

Authors:

Artur Rot and Bartosz Blaicke

Abstract: The purpose of this paper is to propose a proprietary methodology and model to generate a “cybersecurity transformation workplan” for large organizations that can improve their cybersecurity posture. The key input is based on risk-based assessments or maturity-based questionnaires, depending on existing governance processes and available information. The original scoring can then be used to prioritize a portfolio of all possible initiatives by selecting the ones that are missing from typical foundation elements or would have high potential impact relative to the required investment and effort. Additional constraints such as budget limitations and FTE availability, logical sequencing, and time requirements could be added to ensure effective use of company resources and the actionability of the recommendations. The Gantt-like output would ease the burden on security teams by providing an individualized set of activities to be implemented to improve risk posture.
Download

Paper Nr: 199
Title:

Visualizing Business Ecosystems: Results of a Systematic Mapping Study

Authors:

Anne Faber, Maximilian Riemhofer, Dominik Huth and Florian Matthes

Abstract: Researchers and practitioners increasingly recognize the relevance of the complex business environment in which companies develop, produce, and distribute their services and products, which we refer to as business ecosystems. In scientific research, the characteristics of business ecosystems, including the changing relations between ecosystem entities, are often visualized. We conducted a systematic mapping study analyzing 136 papers in total to identify the types of visualizations used in the business ecosystem context. We provide an overview of 17 visualization types and their frequency of application in the scientific literature. In addition, we collected visualization tool requirements, which we enriched with our own experience of visualizing business ecosystems with practitioners, leading to nine tool requirements in total.
Download

Paper Nr: 20
Title:

How to Manage Privacy in Photos after Publication

Authors:

Srinivas Madhisetty, Mary-Anne Williams, John Massy-Greene, Luke Franco and Mark El Khoury

Abstract: Photos and videos, once published, may remain available for people to view unless they are deleted by the publisher. If the content is downloaded and re-uploaded by others, it loses all the privacy settings once afforded by the publisher of the photograph or video via social media settings. This means that it could be modified or, in some cases, misused by others. Photos also contain tacit information, which cannot be completely interpreted at the time of publication. Sensitive information may be revealed to others because it is encoded as tacit information. Tacit information allows different interpretations and creates difficulty in understanding loss of privacy. The free flow and availability of tacit information embedded in a photograph could cause serious privacy problems. Our solution, discussed in this paper, illuminates the difficulty of managing privacy due to the tacit information embedded in a photo. It also provides an offline solution such that a photograph cannot be modified or altered and is automatically deleted after a period of time. This is achieved by extending the Exif data of a photograph with a built-in automatic deletion feature, and by controlling access to the image by scrambling it with an added hash value. Only a customized application can unscramble the image, thereby making it available. This is intended to provide a novel offline solution to manage the availability of the image after publication.
Download

Paper Nr: 45
Title:

A Service Definition for Data Portability

Authors:

Dominik Huth, Laura Stojko and Florian Matthes

Abstract: Data portability is one of the new provisions that were introduced with the General Data Protection Regulation (GDPR) in May 2018. Within certain limitations, the data subject can request a digital copy of her own personal data. Practical guidelines describe how to handle such data portability requests, but do not support identifying which data has to be handed out. We apply a rigorous method to extract the information properties necessary to fulfill data portability requests. We then use these properties to define an abstract service for data portability. This service is evaluated through seven expert interviews.
Download

Paper Nr: 51
Title:

Recommendation Framework for on-Demand Smart Product Customization

Authors:

Laila Esheiba, Amal Elgammal and Mohamed E. El-Sharkawi

Abstract: Product-service systems (PSSs) are being revolutionized into smart, connected products, which is changing the industrial and technological landscape and unlocking unprecedented opportunities. The intelligence that smart, connected products embed paves the way for more sophisticated data gathering and analytics capabilities, ushering in a new era of smarter supply and production chains, smarter production processes, and even end-to-end connected manufacturing ecosystems. This vision requires a new technology stack to support smart, connected products and services. In previous work, we introduced a novel customization PSS lifecycle methodology with underpinning technological solutions that enable collaborative on-demand PSS customization, supporting companies in evolving their product-service offerings by transforming them into smart, connected products. This is enabled by the lifecycle through formalized knowledge-intensive structures and associated IT tools that provide the basis for actionable production “intelligence” and a move toward more fact-based manufacturing decisions. This paper contributes a recommendation framework that supports the different processes of the PSS lifecycle by analysing and identifying the recommendation capabilities needed to support and accelerate the different lifecycle processes while accommodating different stakeholders’ perspectives. The paper analyses the challenges and opportunities of the identified recommendation capabilities, drawing a road map for R&D in this direction.
Download

Paper Nr: 82
Title:

A Model-based Framework to Automatically Generate Semi-real Data for Evaluating Data Analysis Techniques

Authors:

Guangming Li, Renata Medeiros de Carvalho and Wil P. van der Aalst

Abstract: As data analysis techniques progress, the focus shifts from simple tabular data to more complex data at the level of business objects. Therefore, the evaluation of such data analysis techniques is far from trivial. However, due to confidentiality, most researchers face problems collecting real data to evaluate their techniques. One alternative is to use synthetic data instead of real data, which leads to unconvincing results. In this paper, we propose a framework to automatically operate information systems (supporting operational processes) to generate semi-real data (i.e., “operations-related data” excluding images, sound, video, etc.). These data have the same structure as real data and are more realistic than traditionally simulated data. A plugin is implemented to realize the framework for automatic data generation.
Download

Paper Nr: 96
Title:

Smart Shop-floor Monitoring via Manufacturing Blueprints and Complex-event Processing

Authors:

Michalis Pingos, Amal Elgammal, Indika Kumara, Panayiotis Christodoulou, Mike P. Papazoglou and Andreas S. Andreou

Abstract: Nowadays, Product-Service Systems (PSSs) are being modernized into smart, connected products that aim to transform the industrial landscape and unlock unique prospects. This concept requires a new technology stack and lifecycle models to support smart, connected products. The intelligence that smart, connected products embed paves the way for more sophisticated data gathering and analytics capabilities, ushering in a new era of smarter supply and production chains, smarter production processes, and even end-to-end connected manufacturing ecosystems. The main contribution of this paper is a smart shop-floor monitoring framework and underpinning technological solutions that enable the proactive identification and resolution of shop-floor disruptions. The proposed monitoring framework is based on the synergy between the novel concept of Manufacturing Blueprints and Complex Event Processing (CEP) technologies, and it encompasses a middleware layer that enables loose coupling and adaptation in practice. The framework provides the basis for actionable PSS and production “intelligence” and facilitates a shift toward more fact-based manufacturing decisions. Implementation and validation of the proposed framework are performed through a real-world case study, which demonstrates its applicability and assesses the usability and efficiency of the proposed solutions.
Download

Paper Nr: 108
Title:

Systematic Review of Bibliography on Social Interactions using the Meta-analytical Approach

Authors:

William B. Pereira, Renan L. Souza, Un H. Schiefelbein, João D. Lima, Bolívar Menezes da Silva and Cristiano Cortez da Rocha

Abstract: The general objective of this systematic review was to evaluate the evolution of studies on social interactions up to 2018, in addition to proposing a diagnostic tool covering social interactions, the sensors used, and related aspects. The methodology was exploratory bibliographic research; from the data collected, it was possible to see that there is significant growth in the number of articles on this subject. The analysis identified the different countries that conducted research on this subject and the most cited articles along with their authors. Different applications were found, such as care for the elderly, interactions in vehicular networks, and social interactions in public environments, among others.
Download

Paper Nr: 111
Title:

A Practical Guide to Support Change-proneness Prediction

Authors:

Cristiano S. Melo, Matheus M. Lima da Cruz, Antônio F. Martins, Tales Matos, José M. Filho and Javam C. Machado

Abstract: During the development and maintenance of a software system, changes can occur due to new features, bug fixes, code refactoring, or technological advancements. In this context, software change prediction can be very useful in guiding the maintenance team to identify change-prone classes in early phases of software development, in order to improve their quality and make them more flexible for future changes. A myriad of related works use machine learning techniques to deal with this problem based on different kinds of metrics. However, inadequate descriptions of the data source or modeling process make the research results reported in many works hard to interpret or reproduce. In this paper, we first propose a practical guideline to support change-proneness prediction for optimal use of predictive models. Then, we apply the proposed guideline in a case study using a large imbalanced data set extracted from a large commercial software system. Moreover, we analyze some papers that deal with change-proneness prediction and discuss their missing points.
Download

Paper Nr: 160
Title:

A Structured Approach to Guide the Development of Incident Management Capability for Security and Privacy

Authors:

Luis Tello-Oquendo, Freddy Tapia, Walter Fuertes, Roberto Andrade, Nicolay S. Erazo, Jenny Torres and Alyssa Cadena

Abstract: The growth and evolution of threats, vulnerabilities, and cyber-attacks increase security incidents and generate adverse impacts on organizations. Nowadays, organizations have strengthened their information security through the implementation of various technological solutions. Nevertheless, defined processes for the proper handling and coordinated management of security incidents still need to be established. In this paper, we propose an incident management framework that is adaptable to educational organizations and allows them to improve their management processes in the face of computer incidents. We introduce a coordination network with three levels of decision-making that defines interfaces and communication channels, with supporting policies and procedures for coordination across processes and process actors. It enables different organizations to maintain focus on different objectives, to work jointly on common objectives, and to share information that supports them all in case of security incidents. Our model enables the examination of incident management processes that cross organizational boundaries, both internally and externally. This can help CSIRTs improve their ability to collaborate with other business units and other organizations when responding to incidents.
Download

Paper Nr: 196
Title:

Adoption of Machine Learning Techniques to Perform Secondary Studies: A Systematic Mapping Study for the Computer Science Field

Authors:

Leonardo S. Cairo, Glauco F. Carneiro and Bruno C. da Silva

Abstract: Context: Secondary studies such as systematic literature reviews (SLRs) have been used to collect and synthesize empirical evidence from relevant studies in several areas of knowledge, including Computer Science. However, secondary studies are time-consuming and require a significant effort from researchers. Goal: This paper aims to identify contributions derived from the adoption of machine learning (ML) techniques in Computer Science SLRs. Method: We performed a systematic mapping study querying well-known repositories and first found 399 studies as a result of applying the search string in each of the selected search engines. Following the research protocol, we analyzed titles and abstracts and applied inclusion, exclusion, and quality criteria to finally obtain a set of 17 studies for further analysis. Results: The selected papers provided evidence of relevant contributions of machine learning usage in performing secondary studies. We found that ML techniques have not yet been applied to all the stages of an SLR. Typically, the preferred stage to apply ML in an SLR is the study selection phase (typically the initial phase). For assessing the effectiveness of ML support in performing SLRs, researchers have provided a comparison either across different ML techniques tested or between manual and ML-supported SLRs. Conclusion: There is significant evidence that the use of machine learning applied to SLR activities (especially the study selection activity) in Computer Science is feasible and promising, and the findings can potentially be extended to other research fields. Also, there is a lack of studies exploring ML techniques for stages other than study selection.
Download

Paper Nr: 204
Title:

A Functional Model of Information System for IT Education Company

Authors:

Snezana Savoska, Blagoj Ristevski and Aleksandra Bogdanoska

Abstract: Object-Oriented Analysis and Design uses diagrams for three types of modeling: structural, dynamic, and functional. From an external point of view, functional modelling is used to define business requirements through users’ requirements, to refine system functionality, and to define the processes that have to be enabled by the objects, which contain attributes and methods. The functional model specifies the meanings of operations over the objects and the actions of the dynamic modelling. The acquired functional models are visualized by UML use case diagrams and use case scenarios according to software engineering principles. As a result of using use case diagrams, activity diagrams have to be produced as part of the functional model. These components of the model define the basic building blocks, roles, and activities, as well as the rules. They are used in the subsequent software development phases so that developers have a clear understanding of the whole project. The paper presents the practical application of functional modelling using use case diagrams through scenario methods, with the highlighted processes shown in use case and activity diagrams, for an information system for IT education company management.
Download

Area 4 - Software Agents and Internet Computing

Full Papers
Paper Nr: 44
Title:

Effect of Item Representation and Item Comparison Models on Metrics for Surprise in Recommender Systems

Authors:

Andre Paulino de Lima and Sarajane M. Peres

Abstract: Surprise is a property of recommender systems that has been receiving increasing attention owing to its links to serendipity. Most metrics for surprise agree poorly with definitions employed in research areas that conceptualise surprise as a human factor, and because of this, their use in the task of evaluating recommendations may not produce the desired effect. We argue that metrics with the characteristics presumed by models of surprise from Cognitive Science may be more successful in that task. Moreover, we show that a metric for surprise is sensitive to the choices of how items are represented and compared by the recommender. In this paper, we review metrics for surprise in recommender systems and analyse the extent to which they align with two competing cognitive models of surprise. For the metric with the highest agreement, we conducted an off-line experiment to estimate the effect exerted on surprise by the choices of item representation and comparison. We explore 56 recommenders that vary in recommendation algorithm, item representation, and item comparison. The results show a large interaction between item representation and item comparison, which suggests that new distance functions can be explored to promote serendipity in recommendations.
Download

Paper Nr: 112
Title:

A Mobility Restriction Authoring Tool Approach based on a Domain Specific Modeling Language and Model Transformation

Authors:

Adalberto T. Azevedo Jr., Fernando Benedito, Luciano R. Coutinho, Francisco E. Silva, Marcos P. Roriz Junior and Markus Endler

Abstract: There are many situations in which there is a need to monitor the location and behavior of people and/or vehicles in order to detect possible irregularities and control where they are located and how they move, such as in companies, public transportation, and public security. In this paper, we present MobCons-AT (Mobility Constraints Authoring Tool), an authoring tool that allows the specification of mobility restriction rules that must be followed by mobile nodes. Rules are specified through a Domain-Specific Modeling Language (DSML) called MobCons-SL (Mobility Constraints Specification Language). Once specified in MobCons-SL, these rules are automatically transformed into software artifacts that perform the detection of mobility restriction violations by mobile nodes. This approach allows faster delivery times and lowers the cost of developing software systems aimed at detecting mobility restrictions. This paper also describes the use of MobCons-AT in two case studies, showing its applicability to diverse mobility scenarios.
Download

Paper Nr: 123
Title:

IoT Semantic Interoperability: A Systematic Mapping Study

Authors:

Amanda P. Venceslau, Rossana C. Andrade, Vânia P. Vidal, Tales P. Nogueira and Valéria M. Pequeno

Abstract: The Internet of Things (IoT) is a paradigm in which the Internet connects people and the environment using devices and services that are spread throughout users’ daily routines. In this scenario, different agents, devices, and services are able to exchange data and knowledge using a common vocabulary or mappings that represent and integrate heterogeneous sources. This semantic interoperability is facilitated by the Semantic Web, which provides consolidated technologies, languages, and standards, offering data and platform interoperability. In this context, this work reviews and analyzes the state of the art of IoT semantic interoperability, investigating and presenting not only which Semantic Web technologies are employed but also the challenges that drive the studies in this area of research.
Download

Short Papers
Paper Nr: 29
Title:

Secure Endpoint Device Agent Architecture

Authors:

Kevin Foltz and William R. Simpson

Abstract: Software agents are installed on endpoint devices to monitor local activity, prevent harmful behavior, allow remote management, and report back to the enterprise. The challenge in this environment is the security of the agents and their communication with the enterprise. This work presents an agent architecture that operates within a high-security Enterprise Level Security (ELS) architecture that preserves end-to-end integrity, encryption, and accountability. This architecture uses secure hardware for sensitive key operations and device attestation. Software agents leverage this hardware security to provide services consistent with the ELS framework. This enables an enterprise to manage and secure all endpoint device agents and their communications with other enterprise services.
Download

Paper Nr: 32
Title:

Towards Integration between OPC UA and OCF

Authors:

Salvatore Cavalieri, Salvatore Mulè, Marco G. Salafia and Marco S. Scroppo

Abstract: The paper deals with Industry 4.0, presenting a solution to improve interoperability between industrial applications and IoT ecosystems. In particular, the proposal aims to achieve interoperability between OPC UA and the emerging Open Connectivity Foundation (OCF) standard through a mapping between the relevant information models. A novel OPC UA information model is presented as an extension of the standard one, in order to allow any information produced by an OCF device to be mapped into the information model of an OPC UA server. The paper fills the existing gap in integration between OPC UA and the OCF specifications, as no other solution is present in the literature at the moment.
Download

Paper Nr: 47
Title:

No More Hiding! WALDO: Easily Locating with a Wi-Fi Opportunistic Approach

Authors:

Bolívar Silva, João C. Lima, Celio Trois, William Pereira and Cristiano da Rocha

Abstract: The popularization of mobile devices and the increasing number of sensors and embedded resources have boosted a large amount of research in the area of context awareness. Among the most relevant contextual information is location. In outdoor environments, GPS technology is already widespread and widely used. However, people tend to spend most of their time indoors, such as in universities, hospitals, malls, and supermarkets, where GPS localization is compromised. Several approaches, mainly using radio frequency technologies, have been proposed to solve the problem of indoor localization; so far, no widely accepted solution exists. Accordingly, this work uses an opportunistic approach, making use of the Wi-Fi infrastructure available in the environment, to provide the location of mobile stations. Based on this objective, we developed the WALDO architecture, which unites the characteristics of different approaches produced in recent years, adopting the techniques that present the best results at each stage, in conjunction with a zone-based and ranking approach in the online phase of the fingerprint technique, which allows noisy RSS readings to be ignored.
Download

Paper Nr: 64
Title:

MobileECG: An Ubiquitous Heart Health Guardian

Authors:

José M. Monteiro, João V. Madeiro, Angelo Brayner and Narciso Arruda

Abstract: The electrocardiogram (ECG) is a widespread and efficient medical procedure for monitoring heart health. ECG is a fast, low-cost, and non-invasive examination. Its output allows anomaly analysis by health experts. Despite its application in clinical environments, ECG acquisition and analysis as a daily routine is far from being a reality for a large part of the world’s population. In this context, we present here a mobile and pervasive platform, named MobileECG, which provides ECG signal acquisition, automatic feature extraction, and real-time prediagnosis. Furthermore, MobileECG implements ubiquitous computing features. Hence, it runs on mobile devices (smartphone or tablet), thus assuring anytime, anywhere access to its functionalities for anyone. MobileECG is in fact a ubiquitous heart health guardian. Besides, MobileECG supports ECG data integration and publication using Linked Data technology, providing a public knowledge base, which may be used to support complex queries, run mining algorithms, and foster collaboration among experts.
Download

Paper Nr: 121
Title:

Impact of Personality Traits (BFI-2-XS) on using Cloud Storage

Authors:

Antonin Pavlicek and Frantisek Sudzina

Abstract: Cloud storage is a trending issue, shifting away from computing as a product that is purchased to computing as a service that is delivered to consumers over the Internet from large-scale data centres, or "clouds". The research focused on the impact of Big Five Inventory personality traits on the use of cloud storage services. The research was conducted in the Czech Republic. The respondents were 478 university students. Gender, age, and type of student job were used as control variables. The results indicate that openness to experience and gender influence the acceptance rate of cloud storage services.
Download

Paper Nr: 140
Title:

Improving Decision-making in Virtual Learning Environments using a Tracing Tutor Agent

Authors:

Aluizio H. Filho, Jeferson M. Thalheimer, Rudimar S. Dazzi, Vinícius Santos and Paulo I. Koehntopp

Abstract: The quality of care in a Virtual Learning Environment (VLE) is often compromised by large numbers of students, which presents a difficult task for human tutors. On the other hand, Intelligent Tutoring Systems are evolving towards decision support systems. One vision of Artificial Intelligence in education is to produce a tutor for every student, or a “community of tutors for every student”. Here we present a model of an intelligent tracing tutor agent responsible for tracking students in the virtual learning environment. We have designed the Tracing Tutor Agent (TTA) as one of the agents of a collaborative organization of intelligent tutor agents, in which each agent has its own role, responsibilities, and permissions. The main focus of this work is to present the model of the TTA, one of the organization’s agents, which has the following responsibilities: (i) monitor students’ actions in the VLE; (ii) monitor the actions of the human tutor in the VLE; and (iii) collaborate and interact with the organization’s other agents to supply the human tutor with information in order to improve decision-making and performance, increasing attendance and avoiding dropout.
Download

Paper Nr: 143
Title:

The Future of Data-driven Personas: A Marriage of Online Analytics Numbers and Human Attributes

Authors:

Joni Salminen, Soon-gyo Jung and Bernard J. Jansen

Abstract: The massive volume of online analytics data about customers has led to novel opportunities for user segmentation. However, getting real value from data remains challenging for many organizations. One of the recent innovations in online analytics is data-driven persona generation that can be used to create high-quality human representations from online analytics data. This manuscript (a) summarizes the potential of data-driven persona generation for online analytics, (b) characterizes nine open research questions for data-driven persona generation, and (c) outlines a research agenda for making persona analytics more useful for decision makers.
Download

Paper Nr: 154
Title:

A Fog-enabled Smart Home Analytics Platform

Authors:

Theo Zschörnig, Robert Wehlitz and Bogdan Franczyk

Abstract: Although the usage of smart home devices such as smart speakers, light bulbs, and thermostats has increased rapidly in the past years, their added value, compared to conventional devices, is mostly limited to simple control and automation logic. In order to provide adaptive smart home environments, it is necessary to gain deeper insights into the data generated by these devices and use it in sophisticated data processing pipelines. Providing such analytics to a multitude of consumers requires specialised architectures, which must be able to overcome various challenges identified in the scientific literature. Currently available smart home analytics architectures are not designed to tackle all of these issues, specifically fault tolerance, network usage, latency, and external regulations. In this paper, we propose an architectural solution to address these challenges based on the concept of Fog computing. Furthermore, we provide insight into the motivation for this research as well as an overview of the current state of the art in this field.
Download

Paper Nr: 155
Title:

Methods and System for Cloud Parallel Programming

Authors:

Victor N. Kasyanov and Elena V. Kasyanova

Abstract: In this paper, a cloud parallel programming system, CPPS, under development at the Institute of Informatics Systems, is considered. The system is intended to be an interactive visual environment for functional and parallel programming to support computer science teaching and learning. The system will support the development, verification, and debugging of architecture-independent parallel programs and their correct conversion into efficient code for parallel computing systems to be executed in clouds. In the paper, the CPPS system itself, its input functional language, and its internal graph representation of functional programs are described.
Download

Paper Nr: 158
Title:

A Proposal for an Integrated Smart Home Service Platform

Authors:

Robert Wehlitz, Theo Zschörnig and Bogdan Franczyk

Abstract: The growing popularity of Internet-connected smart devices in consumers’ homes has led to a steady increase in the number of smart homes worldwide. In order to provide meaningful added value for consumers and businesses alike, smart devices need to be accompanied by services and applications that go beyond the uses of conventional devices. However, integrated solutions to support the development and operation of smart home services for, and across, different devices and application scenarios are still missing in research as well as in industry. In this paper, we present the motivation, requirements, and an initial concept for a fully integrated smart home service platform (SHSP), which may serve as a basis for further discussion.
Download

Paper Nr: 167
Title:

“It’s Modern Day Presidential! An Evaluation of the Effectiveness of Sentiment Analysis Tools on President Donald Trump’s Tweets”

Authors:

Ann M. Perry, Terhi Nurmikko-Fuller and Bernardo P. Nunes

Abstract: This paper reports on an evaluation of five commonly used, lexicon-based sentiment analysis tools (MeaningCloud, ParallelDots, Repustate, RSentiment for R, and SentiStrength), tested for accuracy against a collection of Trump’s tweets spanning from election day in November 2016 to one year post-inauguration (January 2018). Repustate was found to be the most accurate, at 67.53%. Our preliminary analysis suggests that this percentage reflects Trump’s frequent inclusion of both positive and negative sentiments in a single tweet. In addition to an evaluative comparison of sentiment analysis tools, we also provide a summary of the shared features of a number of existing datasets containing Twitter content, along with a comprehensive discussion.
Download

Paper Nr: 36
Title:

A Data Traffic Reduction Approach Towards Centralized Mining in the IoT Context

Authors:

Ricardo Brandão, Ronaldo Goldschmidt and Ricardo Choren

Abstract: The use of Internet of Things (IoT) technology is growing every day. Its capacity to gather information about the behavior of things, humans, and processes is drawing researchers’ attention to the opportunity to use data mining technologies to automatically detect these behaviors. Traditionally, data mining technologies were designed to run in single, centralized environments, requiring data to be transferred from IoT devices, which increases data traffic. This problem becomes even more critical in an IoT context, in which sensors and devices generate huge amounts of data while having processing and storage limitations. To deal with this problem, some researchers argue that IoT data mining must be distributed. Nevertheless, this approach seems inappropriate precisely because IoT devices have limited processing and storage capacity. In this paper, we tackle the data traffic load problem through summarization. We propose a novel approach based on grid-based data summarization that runs on the devices and sends the summarized data to a central node. The proposed solution was evaluated on a real dataset and achieved a data reduction on the order of 99% without compromising the shape of the original dataset’s distribution.
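The general idea behind grid-based summarization can be sketched compactly. This is only an illustration of the technique's principle; the paper's actual grid parameters, data model, and transmission protocol are not specified in the abstract:

```python
from collections import Counter

def summarize(points, cell=1.0):
    """Snap each point to its grid cell and count occurrences.

    Instead of transmitting every raw point to the central node,
    a device need only send (cell, count) pairs, which preserves
    the rough shape of the distribution at a fraction of the volume.
    """
    return Counter(
        tuple(int(coord // cell) for coord in p) for p in points
    )

# Four sensor readings collapse to two occupied cells.
readings = [(0.1, 0.2), (0.3, 0.1), (5.2, 7.9), (5.4, 7.7)]
summary = summarize(readings, cell=1.0)
```

The larger the cell size, the greater the reduction and the coarser the preserved distribution, which is the trade-off such an approach must tune.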
Download

Paper Nr: 73
Title:

Cybersecurity by Design for Smart Home Environments

Authors:

Pragati Siddhanti, Petra M. Asprion and Bettina Schneider

Abstract: The Internet of Things (IoT) is being increasingly adopted by businesses to improve operations, processes, and products. While it opens endless opportunities, it also presents tremendous challenges, especially in the area of cyber risks and related security controls. With billions of interconnected devices worldwide, how do we ensure that they are sufficiently secure and resilient? As a reasonable solution, ‘Cybersecurity by Design’ seems a promising approach. In this research, ‘Smart Homes’ – as IoT-containing products – are selected as the unit of analysis because they are exposed to numerous cyber threats with corresponding adverse consequences for the life, safety and health of residents. Aiming to secure Smart Home Environments (SHEs) against cyber threats, we adopted ‘design science’ as our methodology and developed a holistic approach highlighting ‘good practices’ that can be applied in every phase of the SHE product lifecycle. In addition to these good practices, a ‘Cyber Security Maturity Assessment’ tool for SHEs has been developed. Both artefacts have already been validated and incrementally improved, and are now awaiting future application and further enhancement.
Download

Paper Nr: 162
Title:

Deployment of MOOCs in Virtual Joint Academic Degree Programs

Authors:

George Sammour, Abdallah Al-Zoubi and Jeanne Schreurs

Abstract: This research paper investigates the readiness of students to opt for MOOC courses in universities offering international joint master’s degree programmes. A study was conducted on two joint academic study programs offered by the University of Hasselt in Belgium and Princess Sumaya University for Technology in Jordan. The study examines the readiness of students to take MOOC courses and the acceptance of such courses by universities’ management staff and professors. The results are promising, suggesting that such virtual study programs are readily accepted in both universities by professors and students. On the other hand, management staff and some professors expressed concerns about approving the equivalence of MOOCs to regular courses.
Download

Area 5 - Human-Computer Interaction

Full Papers
Paper Nr: 56
Title:

Using Demographic Features for the Prediction of Basic Human Values Underlying Stakeholder Motivation

Authors:

Adam Szekeres, Pankaj S. Wasnik and Einar A. Snekkenes

Abstract: Human behavior plays a significant role within the domain of information security. The Conflicting Incentives Risk Analysis (CIRA) method focuses on stakeholder motivation to analyze risks resulting from the actions of key decision makers. In order to enhance the real-world applicability of the method, it is necessary to characterize relevant stakeholders by their motivational profile without relying on direct psychological assessment methods. Thus, the main objective of this study was to assess the utility of demographic features, which are observable in any context, for deriving stakeholder motivational profiles. To this end, the study utilized the European Social Survey, a high-quality international database comprising representative samples from 23 European countries. The predictive performances of a pattern-matching algorithm and a machine-learning method are compared to establish the findings. Our results show that demographic features are marginally useful for predicting stakeholder motivational profiles. These findings can be utilized in settings where interaction between a stakeholder and an analyst is limited, and they provide a solid baseline for other methods that focus on different classes of observable features for predicting stakeholder motivational profiles.
Download

Paper Nr: 77
Title:

Bringing Life to the Classroom: Engaging Students through the Integration of HCI in SE Projects

Authors:

Milene S. Silveira and Alessandra S. Dutra

Abstract: Integration is a key aspect of teaching and learning processes. Integrating theory and practice based on knowledge from different courses, and even from different programs, is fundamental and strategic so that students can better understand not only their specific fields of study and work but also their relationship within research and market contexts. This study presents integration experiences between theory and practice from a Software Engineering program. We focus on the application of methods studied in a Human-Computer Interaction course within courses on software development practices, through projects involving real clients and developed by the students themselves. We discuss the lessons learned, challenges, and prospects for change in the courses involved, including the opinions of students, who highlighted, among other points, the importance of such integration in bringing theory closer to their real universe of action.
Download

Paper Nr: 93
Title:

Assessment User Interface: Supporting the Decision-making Process in Participatory Processes

Authors:

Lars Schütz and Korinna Bade

Abstract: We introduce a novel intelligent user interface for assessing contributions submitted in participatory planning and decision processes. It assists public administrations in decision making by recommending ranked contributions that are similar to a reference contribution based on their textual content. This allows the user to group contributions in order to treat them consistently, which is crucial in this domain. Presently, the assessment process is done manually with no sophisticated computer-aided support. The assessment user interface provides a two-column layout with a basic list of contributions in the left column and a list of similar contributions in the right column. We present the results of a user study that we conducted with 21 public administration workers to evaluate the proposed interface. We found that the assessment user interface is well suited to the assessment task and the related decision-making process. However, there are also unclear elements in the ranking visualization, as well as some distrust of the ranked contributions and intelligent methods among the participants.
Download

Paper Nr: 142
Title:

Design Process for Human-Data Interaction: Combining Guidelines with Semio-participatory Techniques

Authors:

Eliane Z. Victorelli, Julio D. Reis, Antonio S. Santos and Denis J. Schiozer

Abstract: The complexity of the analytical reasoning required to extract and identify useful knowledge from large masses of data demands that the design of visual analytics tools address the challenges of facilitating human-data interaction (HDI). Designing data visualisation based on guidelines is fast and low-cost, but does not favour the engagement of people in the process. In this paper, we propose a design process that integrates guideline-based design with participatory design practices. We investigate, and where necessary adapt, existing practices for each step of our design process. The process was evaluated on a design problem involving a visual analytics tool that supports decisions related to production strategy in oil reservoirs, with the participation of key stakeholders. The generated prototype was tested with adapted participatory evaluation practices. The results indicate participants’ satisfaction with the design practices used and show that users’ needs were met. The design process and the associated practices may serve as a basis for improving HDI in other contexts.
Download

Short Papers
Paper Nr: 98
Title:

Real-time Operational Dashboards for Facilitating Transparency in Supply Chain Management: Some Considerations

Authors:

Siv M. Magnus and Amit Rudra

Abstract: The real-time sharing of data has created a unique opportunity to design software applications for improving operations in a supply chain (SC), both horizontally and vertically. As a result of these developments, dashboards have been designed to facilitate transparency by providing a better overview of a specific operation. In this paper, we outline our research to show that most operational dashboard designs are data-driven, and only very few are designed from a user’s perspective. Further, few design processes tap into the benefits of building a dashboard based on the principles of cognition. We argue that building dashboards based on how our brain is wired will enhance decision-making processes in a supply chain.
Download

Paper Nr: 103
Title:

mHealth Applications: Can User-adaptive Visualization and Context Affect the Perception of Security and Privacy?

Authors:

Joana Muchagata, Pedro Vieira-Marques and Ana Ferreira

Abstract: Through mobile applications, patients and health professionals are able to access and monitor health data. But even with user-adaptive systems, which can adjust interface content according to an individual’s needs and context (e.g., physical location), data privacy can be at risk, as these techniques do not aim to protect it or even to identify the presence of vulnerabilities. The main goal of this paper is to test adaptive visualization techniques with end-users, together with the context in which they are used, to understand how these may influence users’ perception of security, and to decide which techniques can be applied to improve the security and privacy of visualized data. An online survey was used to test two different use-cases and contexts, comparing traditional access and access using visualization techniques in terms of security characteristics. Preliminary results with 27 participants show that, when accessing personal data from a patient’s perspective, the context has a greater influence on the perception of confidentiality (authorized access) and integrity (authorized modification) of visualized data, while from a health professional’s perspective, independently of the context, the visualization techniques seem to primarily influence participants’ choices for those security characteristics. For availability (data available to authorized users whenever necessary), both visualization techniques and context have little or no influence on the participants’ choices.
Download

Paper Nr: 124
Title:

Visual Schedule: A Mobile Application for Autistic Children - Preliminary Study

Authors:

Joana Muchagata and Ana Ferreira

Abstract: Children with autism often experience considerable challenges, one of which is difficulty in understanding, structuring and predicting their daily activities and routines. Several methodologies have been studied and implemented to help autistic children with these routine activities and tasks; one of them is the use of visual schedules. Mobile apps and related technology have been considered an excellent tool for supporting autistic children’s development. Yet despite the technological resources and the variety of mobile apps available today, the authors could not find such resources for the Portuguese-speaking autistic children population, especially in relation to visual schedules/routines, which are considered very important for a child’s development. Therefore, based on the literature and on some apps available in other countries for autistic children, the authors propose a set of mock-ups of a visual schedule application for smartphones. The mock-ups represent the idea of the app that we intend to implement in the near future, to be used by Portuguese autistic children aged 4 to 10 years to support them in their daily routines and the performance of related tasks.
Download

Paper Nr: 145
Title:

Impacts of Personal Characteristics of Students on Their Acceptance of ERP Solutions in Learning Process

Authors:

Ruben Picek, Samo Bobek and Simona S. Zabukovšek

Abstract: Enterprise Resource Planning (ERP) solutions are the most frequently used software tools in companies across all industries. The growing body of scientific literature on the acceptance of ERP solutions by users in companies reflects the growing perceived importance of ERP solutions for business management. The labour market requires graduates – future employees – to have the knowledge and skills to use ERP solutions. The main objective of our paper is therefore to identify the important factors that contribute to the acceptance of ERP solutions by students of economics and business and that shape their intentions to use this knowledge in the future. The conceptual model of our research is based on the Technology Acceptance Model (TAM), extended by previously identified important multidimensional external factors relating to students’ personal characteristics and information literacy. The conceptual model was tested using structural equation modelling. The results revealed that only two of the external factors play an important role in shaping students’ attitudes towards the acceptance of ERP solutions. The results have important implications for higher education institutions reforming and updating their study programs, as well as for educators in the field of information science.
Download

Paper Nr: 22
Title:

The Determination of Customer Location as Human-Computer Interaction in the Retail Trade: Potentials and New Business Models

Authors:

Matthias Wißotzki, Philipp Schweers and Johannes Wichmann

Abstract: The Customer Journey has to be better grasped and understood so that brick-and-mortar retailers can survive in the face of digital competition. The seamless transition from outdoor to indoor navigation and the analysis of movement streams enable many new fields of application that improve the Customer Journey. Deriving new fields of application and business models requires knowledge of a market’s needs and potentials. The aim of this work is to determine the needs and potentials for the determination of indoor location, as well as to derive new application cases and potential business models. For this purpose, interviews were conducted with experts from leading software and technology companies as well as retail specialists with many years of professional experience.
Download

Paper Nr: 207
Title:

Usability Testing of MOOC: Identifying User Interface Problems

Authors:

Olga Korableva, Thomas Durand, Olga Kalimullina and Irina Stepanova

Abstract: In the modern world, more and more information systems are actively used in the educational process. Examples of such systems are platforms for hosting massive open online courses (MOOCs). However, different MOOCs are perceived differently by users and have varying completion rates. It has been shown that existing problems of such systems are related to the usability of their user interfaces. A number of techniques are used to investigate user satisfaction with an interface; most of them evaluate, first of all, the user satisfaction index after course completion or at the stage of prototype creation and testing. The authors of this article reviewed existing approaches and proposed their own methodology for evaluating user satisfaction with interface design, based on the UMUX-Lite and SUS questionnaires, the Testbirds company’s approach, and ISO standards. The study identified gaps in the design of each of the analyzed platforms and in their perception by users.
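Of the questionnaires mentioned, SUS has a well-known standard scoring rule that can be stated compactly. The sketch below shows only that standard rule; how the authors combined it with UMUX-Lite and the other approaches is not described in the abstract:

```python
def sus_score(responses):
    """Standard System Usability Scale scoring.

    Takes the ten Likert responses (each 1-5). Odd-numbered items
    are positively worded and contribute (response - 1); even-numbered
    items are negatively worded and contribute (5 - response).
    The sum (0-40) is scaled by 2.5 to a 0-100 score.
    """
    assert len(responses) == 10
    total = sum(r - 1 if i % 2 == 0 else 5 - r
                for i, r in enumerate(responses))
    return total * 2.5
```

A respondent who strongly agrees with every positive item and strongly disagrees with every negative one scores 100; uniform neutral answers score 50.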
Download

Area 6 - Enterprise Architecture

Full Papers
Paper Nr: 26
Title:

Towards a Conceptual Framework for Decomposing Non-functional Requirements of Business Process into Quality of Service Attributes

Authors:

Camila F. Castro, Marcelo Fantinato, Ünal Aksu, Hajo A. Reijers and Lucinéia H. Thom

Abstract: Non-functional Requirements (NFRs) of web services are defined by IT teams at the implementation level, often as Quality of Service (QoS) attributes. Orchestrating web services to run business processes requires a rigorous definition of the NFRs of those web services. The definition of QoS attributes should take the business process NFRs into account, since misinterpretations of web service NFRs may affect the behavior of the web services and hence the achievement of business goals. The approaches proposed so far remain heavily dependent on an IT expert’s knowledge to identify the QoS attributes required to meet particular business process NFRs. Defining appropriate QoS attributes without reference to business process-level NFRs can be a costly, time-consuming task. We propose a conceptual framework for the hierarchical decomposition of NFRs from the business process level to the web service level. This framework seeks to reduce the dependence on a particular IT expert’s knowledge by simplifying the dialog between the business and IT areas. The proposed framework relies on a structure of NFR interdependence. The main reference was the ISO/IEC 25010 Product Quality Model, extended by additional software quality models and particular QoS attributes.
Download

Paper Nr: 55
Title:

A Taxonomy for Enterprise Architecture Analysis Research

Authors:

Amanda O. Barbosa, Alixandre Santana, Simon Hacks and Niels von Stein

Abstract: Enterprise Architecture (EA) practitioners and researchers have put a lot of effort into formalizing EA model representation by defining sophisticated frameworks and meta-models. Because EA modelling is a costly and time-consuming effort, it is reasonable for organizations to expect to extract value from EA models in return. Due to the plethora of models, techniques, and stakeholder concerns in the literature, the task of choosing an analysis approach can be challenging when no guidance is provided. Even worse, analysis efforts may be designed redundantly if the analysis techniques are not systematized, owing to the inefficient dissemination of practices and results. This paper contributes one important step towards overcoming those issues by screening the existing EA analysis literature and defining a taxonomy to classify EA research according to the analysis concerns, analysis techniques, and modelling languages employed. The coverage of the proposed taxonomy was tested with a set of 46 papers collected from the literature. Our work thus identifies and systematizes the state of the art of EA analysis and, further, establishes a common language for researchers, tool designers, and EA subject matter experts.
Download

Paper Nr: 79
Title:

Innovating in Digital Platforms: An Integrative Approach

Authors:

Fábio Neves da Rocha and Neil Pollock

Abstract: Increasing competitive pressures are leading companies to innovate through digital platforms. The dominant theme within extant research on innovation in these platforms conceptualises two different processes: generativity (Tilson et al., 2010; Yoo et al., 2012) and generification (Hanseth and Bygstad, 2015; Pollock et al., 2007). Each conceptualisation gives an extensive account separately, but they have questionable ability to provide a full understanding of innovation in digital platforms when these processes occur together (Sørensen and Williams, 2002). Drawing on an analysis of rich archival data, complemented by interviews covering a five-year relationship between a platform owner and its customer, we revisit the underlying assumptions of these processes. We argue that generativity and generification are related to each other in a constant flux in which one fuels the other. In this relation, control takes on new roles beyond being a key factor for innovation productivity (cf. Eaton et al., 2015; Yoo et al., 2012), and it is subordinated to the purpose of innovation. As a consequence, innovation purpose seems to constrain the ‘control vs autonomy’ paradox (Lyytinen et al., 2017).
Download

Paper Nr: 179
Title:

Automatic Decomposition of IoT Aware Business Processes with Data and Control Flow Distribution

Authors:

Francisco Martins, Dulce Domingos and Daniel Vitoriano

Abstract: The Internet of Things (IoT) is generally seen as a distributed information gathering platform when used in business processes (BP). IoT devices have computational capabilities that can and should be used to execute fragments of BP that present benefits for both the devices and the BP execution engine. In fact, executing parts of the BP in the IoT devices may result in the reduction of the number of messages exchanged between the IoT network and the BP execution engine, increasing the battery lifespan of the IoT devices; also, it reduces the workload of the BP execution engine. However, processes are still defined following a centralised approach, making it difficult to use the full capabilities of these devices. In this paper, we present an automatic decomposition solution for IoT aware business processes, described using the Business Process Model and Notation (BPMN). We start from a BP model that follows a centralised approach and apply our decomposition method to transfer to the IoT devices the operations that can be performed there. This transformation preserves the control and the data flows of the original process and reduces the central processing and the number of messages exchanged in the network. The code that IoT devices execute is automatically generated from the BPMN process being decentralised.
Download

Paper Nr: 185
Title:

Using Enterprise Modeling in Development of New Business Models

Authors:

Ilia Bider and Azeem Lodhi

Abstract: In today’s dynamic world, enterprises need to be innovative not only in their current lines of products and services, but also in their business models. One of the challenges in Business Model Innovation (BMI) is introducing radical changes to the current business model when entering new markets. Ideas for new models can come from various sources, but each such idea needs to be analysed from the sustainability and implementation perspectives. This paper evaluates whether enterprise modelling can help in analysing hypotheses for radical BMI changes. The evaluation is carried out on a particular practice of an organization. The analysis of a new idea has been done using a so-called Fractal Enterprise Model (FEM). FEM ties various enterprise business processes together and connects them to the enterprise assets (resources) that are used and/or managed by the processes. FEM has been used to understand which new assets and processes should be acquired, and which existing ones can be reused, when planning the implementation of a new business model.
Download

Paper Nr: 201
Title:

Measuring Architecture Principles and Their Sets in Practice: Creating an Architecture Principle Measurement Instrument Challenged in a Case Study at the Dutch Tax Agency

Authors:

Michiel Borgers and Frank Harmsen

Abstract: Although architecture principles are important in the implementation of information systems requirements, empirical evidence of their effect is lacking. To find this empirical evidence, we first need an instrument to measure architecture principles. This paper reports on the creation of an architecture principle measurement instrument, challenged in a case study. We describe both the measurement instrument and the related measurement method, including a test in a real-life case. Based on the outcome, we extended the instrument with additional architecture principle characteristics and attributes, and we made some improvements to the measurement method as well.
Download

Short Papers
Paper Nr: 62
Title:

Monitoring of Non-functional Requirements of Business Processes based on Quality of Service Attributes of Web Services

Authors:

Evando S. Borges, Marcelo Fantinato, Ünal Aksu, Hajo A. Reijers and Lucinéia H. Thom

Abstract: Business monitoring approaches usually address indicators associated with processes only at the service level, i.e., related to the services implementing the processes. Monitoring at the service level yields technical measures geared towards Information Technology (IT) managers. The monitoring of Key Performance Indicators (KPIs) is usually carried out at a higher level, but transversely to the organization’s processes, i.e., decoupled from the processes. We present a component designed to aid strategic alignment between business and IT by monitoring the Non-Functional Requirements (NFRs) of processes based on Quality of Service attributes. This component allows business managers to monitor process executions by focusing on the indicators that truly respond to the execution of those processes. We evaluated the component via a proof of concept.
Download

Paper Nr: 116
Title:

ticAPP – Digital Transformation in the Portuguese Government

Authors:

Francisco L. Santos, André Vasconcelos, José Tribolet and Pedro Viana

Abstract: IT is fundamental to digital transformation. Digital transformation focuses on driving the organization to a new level, exposing and extending its processes beyond the organization. Enterprise Architecture provides the tools and methodologies to manage the complexity of digital transformation. A Digital Center of Excellence, named ticAPP, is being established to support the Public Administration’s digital transformation process. This paper focuses on building the future-state business architecture of ticAPP and on enabling its continuous evolution. We need to ensure that the future state of ticAPP is not confined by the technology used and that it takes into account both the Government’s and ticAPP’s strategic goals. To accomplish this, we follow a top-down design approach to developing ticAPP’s business architecture, based on the TOGAF ADM methodology. In the final steps, we evaluate ticAPP’s maturity level using the Architecture Capability Maturity Model framework included in TOGAF and calculate its maturity rating.
Download

Paper Nr: 131
Title:

Technology Architecture as a Driver for Business Cooperation: Case Study - Public Sector Cooperation in Finland

Authors:

Nestori Syynimaa

Abstract: The current premise in the enterprise architecture (EA) literature is that business architecture defines all other EA architecture layers: information architecture, information systems architecture, and technology architecture. In this paper, we study the ICT cooperation between eight small and mid-sized municipalities and cities in Southern Finland. Our case demonstrates that ICT cooperation is possible without business cooperation and that ICT cooperation can be a driver for future business cooperation. The findings challenge the current premise of the guiding force of business architecture and encourage organisations’ ICT functions to boldly seek cooperation with other organisations.
Download

Paper Nr: 132
Title:

What Roles Do Decisions Play in Context-aware Business Process Management?

Authors:

Rongjia Song, Jan Vanthienen, Weiping Cui, Ying Wang and Lei Huang

Abstract: Recently, Business Process Management (BPM) has been moving towards the separation-of-concerns paradigm by externalizing decisions from the process flow. Most notably, the introduction of the Decision Model and Notation (DMN) standard provides a solution and technique to model decisions and the process flow separately while keeping them consistently integrated. In the area of context-aware BPM, however, decisions are still considered within business processes in the traditional way. In this paper, we examine how context affects business processes at design time and at run time. Different types of decisions influence the context-aware effect on business processes in their own ways. Through analyzing these effects, we observe that decisions play key roles in the ecosystem of context-aware BPM, including identifying the need for context-awareness, anticipating possible context-dependent variants, and contextualizing a business process. We also examine the opportunity to apply the DMN technique in context-aware BPM.
Download

Paper Nr: 156
Title:

Literature Review Studies in Public Sector’s Enterprise Architecture

Authors:

Karoll C. Ramos, Gabriel D. Andrade Souza and André F. Rosa

Abstract: Recognizing the relevant bibliography is important for researchers: it enables better research performance and the consolidation of academic knowledge. This study presents a bibliometric analysis of the main articles, on a worldwide scale, focusing on Enterprise Architecture in public administration. The method uses bibliometric indicators to show existing correlations, based on data from 82 articles cited in the world’s main journals between 2013 and 2018. A multidisciplinary approach identified contributions in technology, science, and government programs. The analysis shows that research on enterprise architecture in the public sector is correlated with e-government strategies and with interoperability for achieving goals and solving problems in this area.
Download

Paper Nr: 203
Title:

A Framework for the Consistent Management of eHealth Interoperability in Greece

Authors:

Dimitrios G. Katehakis and Angelina Kouroubali

Abstract: This work presents an approach for the organized development of a countrywide framework to address the ever-growing demand for acquiring, exchanging and exploiting patient information to support high quality and cost-effective healthcare delivery. The national electronic health (eHealth) landscape in Greece is examined within the context of the recent recommendation on a European electronic health record (EHR) exchange format. Improving quality of life and well-being, in a secure and safe manner that respects the patients’ privacy, is the key challenge. Interoperability of information and communication technology (ICT) systems is central for reliable and efficient collaboration between the involved stakeholders, including the patient and associated caretakers. In order to accelerate transformation towards citizen empowerment and a more sustainable health system, national authorities need to address issues relevant to mutually beneficial goals in a coherent manner. Practical implications are related to the sustainability of the underlying national infrastructure required to support reliable and secure exchange of meaningful EHR data, for both primary and secondary use, and by defining technical specifications for well-defined use cases, in a legitimate and standardized manner, under a highly regulated environment.
Download

Paper Nr: 206
Title:

Towards Risk-aware Scheduling of Enterprise Architecture Roadmaps

Authors:

Christophe Ponsard, Fabian Germeau and Gustavo Ospina

Abstract: Enterprises need to keep their organisation aligned with their business objectives. Enterprise Architecture provides a way to identify the current and target states as well as to define the evolution roadmap, which takes the form of a complex project portfolio. Conducting changes requires dealing with the occurrence of several risks at the project level which can affect strategic decisions. This paper investigates how to optimally deal with such risks in the global scheduling process. We start by identifying and structuring risk and project concepts based on a domain model. We then identify a set of risks and related management scenarios. Experiments in risk control are carried out using two local search optimisation tools able to consider risk information in addition to inter-project and resource dependencies. We show the feasibility of efficiently anticipating risks and even dynamically adapting the schedule. A dedicated focus is set on specific characteristics of IT projects.
Download

Paper Nr: 12
Title:

Pre-Modelled Flexibility for Business Processes

Authors:

Thomas Bauer

Abstract: In process-aware information systems (PAIS), a flexible deviation from the rigidly designed process sometimes becomes necessary; otherwise the users would be restricted too much. This paper presents an approach that allows the expected flexibility requirements to be defined only once, already at build-time, and applied at run-time in the PAIS. Compared to dynamic changes during run-time, this has the advantage that using the pre-defined information reduces the effort for the end users at each deviation. In addition, applying flexibility becomes safer, e.g., since user rights can be defined. This paper presents the corresponding requirements, with a special focus on the kind of information that has to be pre-defined at build-time. All relevant process aspects are considered, and the necessity of the requirements is illustrated with examples from practice.
Download

Paper Nr: 23
Title:

A Survey on Dynamic Business Processes and Dynamic Business Processes Modelling

Authors:

Bouafia Khawla and Bálint Molnár

Abstract: Organizations can be seen as sets of business processes (BP) that can be analyzed and improved by approaches such as BP modelling. Many books, articles, theses, and previous research efforts focus on static BP and workflows but neglect the need for dynamic BP, which is of great interest. For that reason, the aim of this paper is to discuss and review various sources related to dynamic BP: their definition, their requirements, and the changes they may undergo over time under the effect of various factors. Business is rapidly changing today to meet the requirements of a changing environment and other factors, and this gap has triggered significant research efforts to solve dynamic BP modelling problems and to ensure process dynamicity. The main goal motivating this work is to survey a group of currently active researchers and investigate fundamental concepts in this domain that strongly support dynamic BP, in order to guide the search for a representative model that can express all their properties and answer the questions presented later in the introduction.
Download

Paper Nr: 31
Title:

Impact of Business Rule Management on Enterprise Architecture

Authors:

Marlies van Steenbergen, Koen Smit and Martijn Zoet

Abstract: Business Rule Management (BRM) is a means to make decision-making within organizations explicit and manageable. BRM functions within the context of an Enterprise Architecture (EA). The aim of EA is to enable the organization to achieve its strategic goals. Ideally, BRM and EA should be well aligned. This paper explores, through a study of case study documentation, the BRM design choices that relate to EA and hence might influence the organization’s ability to achieve a digital business strategy. We translate this exploration into five propositions relating BRM design choices to EA characteristics.
Download

Paper Nr: 50
Title:

An Enterprise Architecture Planning Process for Industry 4.0 Transformations

Authors:

Emmanuel Nowakowski, Matthias Farwick, Thomas Trojer, Martin Häusler, Johannes Kessler and Ruth Breu

Abstract: Industry 4.0 is changing the manufacturing industry significantly. In order to stay competitive, companies need to develop new business capabilities and business models that are enabled by Industry 4.0 concepts. However, companies are currently struggling with the expensive and risky IT transformation projects that are needed to implement such concepts. We observed a lack of research on the planning and modeling part of IT transformations towards Industry 4.0. Therefore, we conducted a series of expert interviews on the topic of enterprise architecture in the context of modeling and planning Industry 4.0 transformations. As a result, we were able to develop a metamodel that can be used as a target model for planning endeavors, together with a planning process that serves as a guideline for such planning projects.
Download

Paper Nr: 58
Title:

A Framework to Evaluate Architectural Solutions for Ubiquitous Patient Identification in Health Information Systems

Authors:

Zuhaib Memon, Ovidiu Noran and Peter Bernus

Abstract: The lack of accurate, reliable and consistent patient information is a major issue in healthcare, despite a relatively high Health Information System (HIS) adoption level worldwide. The main reason for this appears to be patient records lacking accurate particulars, including links to associated care programs, disease classification and treatment plans. The causes for this are multiple, including incompatibility of healthcare standards between version releases, inconsistent HIS implementation, lack of effective data input / validation, and the rapid evolution of and absence of a single ‘universal’ technological solution. Sustainable, stable and long-term architectural solutions are required. This research builds on previous work identifying major challenges and root causes of the problem and proposing essential non-functional requirements for HIS architectures. The paper elaborates on non-functional requirements and proposes an evaluation framework (based on a new international standard) that can be used to assess aspiring HIS architectures for long term stability and self-evolution and thus to support strategic decision making from within the evolving HIS.
Download

Paper Nr: 109
Title:

Business Model as a Cloud Services-based Mobility Strategy That Allows to Diminish the Number of PYMES Closures in Ecuador

Authors:

Cristina Pesántez, Freddy Tapia and Jorge E. Lascano

Abstract: Nowadays, enterprises face big problems in a global market. Small and medium-sized enterprises (PYMES) in particular are required to innovate the way they offer their products and services without affecting their limited financial resources. Consequently, they need to find new business opportunities; the cloud services trend could complement the business models promoted by PYMES, providing them with competitive advantages. Here we propose the development of a business model based on data mobility and ease of access. This model helps reduce the causes of PYMES mortality and, at the same time, increase their growth projection rates.
Download

Paper Nr: 113
Title:

Analysis of Data Warehouse Architectures: Modeling and Classification

Authors:

Qishan Yang, Mouzhi Ge and Markus Helfert

Abstract: With decades of development and innovation, data warehouses and their architectures have been extended into a variety of derivatives in various environments to meet different organisations’ requirements. Although there are some ad-hoc studies on data warehouse architecture (DWHA) investigation and classification, limited research systematically models and classifies DWHAs. Especially in the big data era, data is generated explosively, and more emerging architectures and technologies are leveraged to manipulate and manage big data in this domain. It is therefore valuable to revisit and investigate DWHAs in light of these innovations. In this paper, we collect 116 publications and model 73 disparate DWHAs using ArchiMate; 9 representative DWHAs are then identified and summarised into a "big picture". Furthermore, we propose a new classification model aligned with state-of-the-art DWHAs. This model can guide researchers and practitioners to identify, analyse and compare differences and trends of DWHAs from component and architectural perspectives.
Download

Paper Nr: 149
Title:

Categories of Research Methods and Types of Research Results Illustrated with a Continuous Project

Authors:

Ella Roubtsova

Abstract: Research projects are often continuous: they are initiated by one researcher and continued by another. Each researcher needs to understand the continuous project and their part in it. This part we call a mini project. Explanations of research methods and project results in the literature do not help in understanding continuous research projects: descriptions of research methods are often full of details and do not show the relations between projects supported by different research methods. In this paper, we put all research methods into three categories and define two types of results. We suggest defining a mini project by a research method category and its result type. From such mini projects we build continuous projects. This way of structuring is illustrated with a continuous project investigating a view on changes of enterprise architecture. The relations of mini projects of different categories are generalized into guidance for researchers, supervisors and reviewers.
Download

Paper Nr: 157
Title:

On Designing a Content Management System for the Documents Related to Past Civil Engineering Projects or Call-for-Tender Responses

Authors:

Christian Esposito and Oscar Tamburis

Abstract: The progressive dematerialization of paper-based documents in favour of digital ones, held within servers and processed and exchanged by means of ICT, has massively revolutionized several aspects of our daily lives and of industrial practice. The large volume of data handled by companies has an undeniable value for their business and must be exploited so as to strengthen their competitiveness. In certain domains, such as healthcare or manufacturing, this is nowadays an accepted practice within the context of big data analytics, but in other domains this is not yet the reality. Consider, as an example, civil engineering companies, where a large volume of digital documents on past projects or responses to public/private tender procedures is stored but not efficiently used. The main obstacle to the effective use of such data is the storage approach, which treats them mainly as an archive rather than as company knowledge to be inferred and effectively used in the business. The driving idea of this paper is to devise a content management system within the context of civil engineering to pave the way for a better use of the data held by such companies.
Download

Paper Nr: 161
Title:

Integration of Monitoring and Alarm Management in Power Plants

Authors:

Vincenza Carchiolo, Alessandro Longheu, Michele Malgeri, S. Sorbello and A. Torcetta

Abstract: A power plant monitoring system is crucial for ensuring normal operation of the whole power plant as well as for initiating alarms to avoid further development of unattended faults within the power generating system. In this work, a computer vision-based solution is introduced in order to quickly detect anomalies and possible failures affecting the monitored infrastructure.
Download

Paper Nr: 166
Title:

Towards an Easy Transition for Legacy Systems Evolution with the Extension for BPTrends Methodology

Authors:

Gabriel D. Favera, Vinicius Maran, Jonas B. Gassen, Tamires S. Giffoni and Alencar Machado

Abstract: With the growing demand for products and services, organizations are considering ways to better manage their processes, business process management (BPM) being the most widely used approach in this context. However, traditional BPM methodologies do not yet encompass all the different needs: some stages of the BPM lifecycle do not provide the information needed to meet the requirements of some organizations. Some of those organizations still use legacy systems that do not directly support business process management systems (BPMS); as a consequence, it is difficult, for instance, to measure performance or to update processes. In this paper we present an extension based on the BPTrends methodology. It is intended to address problems such as the aforementioned ones, which are not covered by traditional methodologies. To evaluate the extension, tests were carried out in a real scenario. It was possible to observe, e.g., that the extension allowed BPM to be adopted more easily than with traditional methodologies, which do not offer the possibility of modelling a legacy system and require organizations to invest in systems compatible with a BPMS.
Download

Paper Nr: 172
Title:

Use Cases of the Application Reference Model in IRAN

Authors:

Hassan Haghighi, Maziar Mobasheri, Farhoud J. Kaleibar and Faezeh Hoseini

Abstract: In this article, the main elements of the Iranian Application Reference Model, called INARM, are briefly introduced. This model comprises three levels: systems, application components, and interfaces. The “systems” section of the model contains 11 system groups, 74 systems and more than 250 modules. The application components section contains 4 application component groups, 36 application components and more than 100 modules. Finally, the interfaces section contains 16 interfaces. The mere provision of an application reference model is not very helpful by itself; it is necessary to specify the use cases of the model, and to make clear the considerations and risks of using it for government agencies. In this regard, this paper describes 10 use cases for INARM. As a specific use case, the government’s participation in procuring public software for the agencies (based on INARM, with the aim of reducing the cost of system procurement and maintenance and increasing system quality) is explained.
Download

Paper Nr: 182
Title:

ICT for Advanced Manufacturing

Authors:

Štefan Kozák, Eugen Ružický, Alena Kozáková, Juraj Štefanovič and Vladimir Kozák

Abstract: Information and communication technologies (ICT), automation, and robotics remain key sciences of the 21st century. Currently, manufacturing enterprises are facing challenges with regard to new concepts such as the Internet of Things, the Industrial Internet of Things, Cyber-physical Systems and Cloud-based Manufacturing. The Industrial Internet of Things (IIoT) is an emerging paradigm in today’s control industry comprising Internet-enabled cyber-physical devices with the ability to link to new interconnection technologies. Under this perspective, new industrial cyber-physical “things” become accessible and available from remote locations; information on them can be processed and stored in distributed locations, favouring cooperation and coordination to achieve high performance in real time. The paper presents the state of the art in research, development and education in new information and communication technologies for advanced manufacturing based on intelligent modelling and control methods, and their applications, with a focus on the new trends declared in Industry 4.0.
Download