ICEIS 2018 Abstracts


Area 1 - Databases and Information Systems Integration

Full Papers
Paper Nr: 23
Title:

Eliminating Redundant and Irrelevant Association Rules in Large Knowledge Bases

Authors:

Rafael Garcia Leonel Miani and Estevam Rafael Hruschka Junior

Abstract: Large, growing knowledge bases have been an actively explored topic in the past few years. Most approaches focus on developing techniques to grow the knowledge base. Association rule mining algorithms can also be used for this purpose. A main problem in extracting association rules is the effort spent on evaluating them. In order to reduce the number of association rules discovered, this paper presents the ER component, which eliminates extracted rules in two ways at the post-processing step. The first introduces the concept of super antecedent rules and prunes the redundant ones. The second introduces the concept of super consequent rules, eliminating the irrelevant ones. Experiments showed that both methods combined can decrease the number of rules by more than 30%. We also compared ER to the FP-Growth, CHARM and FPMax algorithms. ER generated more relevant and efficient association rules to populate the knowledge base than FP-Growth, CHARM and FPMax.

Paper Nr: 67
Title:

SQL Query Performance on Hadoop: An Analysis Focused on Large Databases of Brazilian Electronic Invoices

Authors:

Cristiano Cortez da Rocha, Márcio Parise Boufleur, Leandro da Silva Fornasier, Júlio César Narciso, Andrea Schwertner Charão, Vinícius Maran, João Carlos D. Lima and Benhur O. Stein

Abstract: Hadoop clusters have established themselves as a foundation for various applications and experiments in the field of high-performance processing of large datasets. In this context, SQL-on-Hadoop emerged as a trend that combines the popularity of SQL with the performance of Hadoop. In this work, we analyze the performance of SQL queries on Hadoop, using the Impala engine, comparing it with an RDBMS-based approach. The analysis focuses on a large set of electronic invoice data, representing an important application to support fiscal audit operations. The experiments performed included frequent queries in this context, which were implemented with and without data partitioning in both RDBMS and Impala/Hadoop. The results show speedups from 2.7x to 14x with Impala/Hadoop for the queries considered, on a lower-cost hardware/software platform.

Paper Nr: 105
Title:

Visual Support to Filtering Cases for Process Discovery

Authors:

Luiz Schirmer, Leonardo Quatrin Campagnolo, Sonia Fiol González, Ariane M. B. Rodrigues, Guilherme G. Schardong, Rafael França, Mauricio Lana, Simone D. J. Barbosa, Marcus Poggi and Hélio Lopes

Abstract: Working with average-sized event logs is still a major task in process mining, where the main goal is to gain process-related insights based on event logs created by a wide variety of systems. An event log contains a sequence of events for every case that was handled by the system. Several discovery algorithms have been proposed and work well in specific cases but fail to be generic strategies. Moreover, there is no evidence that the existing strategies can handle events with a large number of variants. For this reason, a generic approach is needed to allow experts to explore event log data and decompose information into a series of smaller problems, to identify outliers and relations between the analyzed cases. In this paper we present a visual filtering approach for event logs that makes process analysis tasks more feasible and tractable. To evaluate our approach, we have developed a visual filtering tool and used it with the event log from BPI Challenge 2017.

Paper Nr: 163
Title:

An Approach to Extract Proper Implications Set from High-dimension Formal Contexts using Binary Decision Diagram

Authors:

Phillip Santos, Julio Neves, Paula Silva, Sérgio M. Dias, Luis Zárate and Mark Song

Abstract: Formal concept analysis (FCA) is currently used in a large number of applications in different areas. However, in some applications the volume of information that needs to be processed may become infeasible. Thus, demand for new approaches and algorithms that enable the processing of large amounts of information is increasing substantially. This paper presents a new algorithm for extracting proper implications from high-dimensional contexts. The proposed algorithm, ProperImplicBDD, is based on the PropIm algorithm. Using a data structure called binary decision diagram (BDD), it is possible to simplify the representation of the formal context and to improve the performance of extracting proper implications. In order to analyze the performance of the ProperImplicBDD algorithm, we performed tests using synthetic contexts, varying the number of attributes and the context density. The experiments showed that ProperImplicBDD has better performance – up to 8 times faster – than the original algorithm, regardless of the number of attributes, objects and densities.

Paper Nr: 173
Title:

Querying Heterogeneous Document Stores

Authors:

Hamdi Ben Hamadou, Faiza Ghozzi, André Péninou and Olivier Teste

Abstract: NoSQL document stores offer support to store documents described using various structures. Hence, the user has to formulate queries using the possible representations of the desired information from different schemas. In this paper, we propose a novel approach that enables query operators over a collection of documents with structural heterogeneity. Our work introduces an automatic query rewriting mechanism based on combinations of elementary operators: project, restrict and aggregate. We generate a custom dictionary that tracks all representations of the attributes used in the documents. Finally, we discuss the results of our approach through a series of experiments.
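
As an illustration of the kind of dictionary-driven query rewriting this abstract describes, the following is a minimal sketch (not the authors' implementation): a hand-built attribute dictionary maps each logical attribute to its known structural variants, and a project-plus-restrict query is evaluated against every variant. The dictionary contents and attribute names are hypothetical.

```python
# Illustrative sketch: "project + restrict" over documents whose attributes
# appear under several alternative paths, resolved via an attribute dictionary.

def get_path(doc, path):
    """Follow a dotted path such as 'contact.email' inside a nested dict."""
    for key in path.split("."):
        if not isinstance(doc, dict) or key not in doc:
            return None
        doc = doc[key]
    return doc

# Hypothetical dictionary: one logical attribute -> known structural variants.
ATTRIBUTE_DICTIONARY = {
    "name":  ["name", "full_name", "person.name"],
    "email": ["email", "contact.email"],
}

def project_restrict(collection, wanted, predicate):
    """Project the wanted logical attributes, trying every known variant,
    then keep only the rows satisfying the predicate."""
    results = []
    for doc in collection:
        row = {}
        for attr in wanted:
            for variant in ATTRIBUTE_DICTIONARY[attr]:
                value = get_path(doc, variant)
                if value is not None:
                    row[attr] = value
                    break
        if predicate(row):
            results.append(row)
    return results

docs = [
    {"name": "Ana", "contact": {"email": "ana@example.org"}},
    {"full_name": "Bob", "email": "bob@example.org"},
]
print(project_restrict(docs, ["name", "email"],
                       lambda r: r.get("email", "").endswith("example.org")))
```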

Paper Nr: 230
Title:

Gathering and Combining Semantic Concepts from Multiple Knowledge Bases

Authors:

Alexander Paulus, André Pomp, Lucian Poth, Johannes Lipp and Tobias Meisen

Abstract: In the context of the Industrial Internet of Things, annotating data sets with semantic models enables the automatic interpretability and processing of data values and their context. However, finding meaningful semantic concepts for data attributes cannot be done fully automatically, as background information, such as expert knowledge, is often required. In this paper, we propose a novel modular recommendation framework for semantic concepts. To identify the best-fitting concepts for a given set of labels, our approach queries, weights and aggregates the results of arbitrary pluggable knowledge bases. The framework design is based on an intensive review of labels that were used in real-world data sets. We evaluate our current approach regarding correctness and speed, and state the problems we found.

Paper Nr: 252
Title:

Building Contextual Implicit Links for Image Retrieval

Authors:

Hatem Aouadi, Mouna Torjmen Khemakhem and Maher Ben Jemaa

Abstract: In context-based image retrieval, the textual information surrounding the image plays a main role in retrieval. Although text-based approaches outperform content-based retrieval approaches, they can fail when query keywords do not match the document content. Therefore, using only keywords in the retrieval process is not sufficient to obtain good results. To improve retrieval accuracy, researchers have proposed exploiting other contextual information, such as hyperlinks, that reflects a topical similarity between documents. However, hyperlinks are usually sparse and do not guarantee document content similarity (advertising and navigational hyperlinks). In addition, many links between similar documents are missing, as only a few semantic links are created manually. In this paper, we propose to automatically create implicit links between images by computing the semantic similarity between the textual information surrounding those images. We studied the effectiveness of the automatically generated links in the image retrieval process. Results showed that combining different textual representations of the image is more suitable for linking similar images.

Short Papers
Paper Nr: 8
Title:

JSON-based Interoperability Applying the Pull-parser Programming Model

Authors:

Leandro Pulgatti and Marcos Didonet Del Fabro

Abstract: The JSON format has been applied in a variety of applications: it is established as the de-facto standard for representing document stores; it is widely used to achieve interoperability and as the exchange format in RESTful web APIs. For these reasons, it is necessary to provide interoperability between JSON and other NoSQL formats. There are several approaches that aim to translate between different NoSQL formats; however, most of them attempt to be generic and do not focus on JSON. They aim at providing an abstract and generic representation capturing all the data model constructs and wrapper-like structures, or at developing pairs of translators. In this paper, we present an approach that uses the JSON data model as the driving format for interoperability with distinct NoSQL data models. We take advantage of its nested textual structure to apply the pull-parser programming model to process it and to develop translators between JSON and a set of representative NoSQL formats. We focus on the JSON extraction and on the development and application of the data transformations. We validate our approach through an implementation handling a large number of data representation strategies.

Paper Nr: 11
Title:

Extending CryptDB to Operate an ERP System on Encrypted Data

Authors:

Kevin Foltz and William R. Simpson

Abstract: Prior work demonstrated the feasibility of using partial homomorphic encryption as part of a database encryption scheme in which standard SQL queries are performed on encrypted data. However, this work involved only translating raw SQL queries to the database through the CryptDB proxy. Our work extends the prior work to an Oracle application. The goal for this work was to determine feasibility for a full-scale implementation on a real Oracle Enterprise Resource Planning (ERP) system. This requires accommodating extra features such as stored procedures, views, and multi-user access controls. Our work shows that these additional functionalities can be practically implemented using encrypted data, and they can be implemented in a way that requires no code changes to the ERP application code. The overall request latency and computational resource requirements for operating on encrypted data are under one order of magnitude and within a small factor of those for unencrypted data. These results demonstrate the feasibility of operating an Oracle ERP on encrypted data.
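
The partial homomorphic encryption mentioned in this abstract is, in CryptDB-style designs, typically the additively homomorphic Paillier scheme used to compute SUMs over encrypted columns. The following toy sketch only illustrates that additive property with insecure, tiny parameters; it is an assumption for exposition, not the authors' implementation.

```python
# Toy illustration of Paillier's additive homomorphism: multiplying two
# ciphertexts yields an encryption of the sum of the plaintexts.
# Tiny, insecure parameters -- for exposition only.
import math, random

p, q = 17, 19                      # toy primes (real deployments use ~1024-bit primes)
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)       # Carmichael function lambda(n)
g = n + 1                          # standard choice of generator
mu = pow(lam, -1, n)               # modular inverse of lambda mod n

def encrypt(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    L = (pow(c, lam, n2) - 1) // n
    return (L * mu) % n

a, b = 42, 58
c = (encrypt(a) * encrypt(b)) % n2   # multiply ciphertexts ...
print(decrypt(c))                    # ... decrypts to a + b = 100
```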

Paper Nr: 50
Title:

Managing Enterprise Resource Planning System Customisation Post-Implementation - The Case of an African Petroleum Organisation

Authors:

Sharif Gool and Lisa F. Seymour

Abstract: Implementing an Enterprise Resource Planning (ERP) system is a challenging endeavour. One dominant challenge is to determine when to customise the ERP system to match organisational requirements and when instead to change business processes to fit standard ERP-delivered functionality. However, there is agreement that ERP customisation needs to be managed. While research has been done on understanding the drivers of ERP customisation during an implementation, little research has focused on the post-implementation journey. This paper describes factors impacting ERP customisation post-implementation. The study is an interpretive single organisational case study in a multinational African petroleum organisation. The study identifies multiple reasons for the need for customisation post-implementation and describes practices that organisations can employ to manage customisation, including staff training interventions, systematically removing modifications, and approval processes. This paper contributes to our understanding of ERP customisation and should be of value to practitioners trying to manage customisation post-implementation.

Paper Nr: 59
Title:

An Approach for Modeling Polyglot Persistence

Authors:

Cristofer Zdepski, Tarcizio Alexandre Bini and Simone Nasser Matos

Abstract: The emergence of NoSQL databases has greatly expanded database systems in both storage capacity and performance. To make use of these capabilities, many systems have integrated these new data models into existing applications, making use of multiple databases at the same time, forming a concept called "Polyglot Persistence". However, the lack of a methodology capable of unifying the design of these integrated data models makes design a difficult task. To address this gap, this paper proposes a modeling methodology capable of unifying design patterns for these integrated databases, providing an overview of the system as well as a detailed view of each database design.

Paper Nr: 112
Title:

Classification Analysis of NDVI Time Series in Metric Spaces for Sugarcane Identification

Authors:

Lucas Felipe Kunze, Thábata Amaral, Leonardo Mauro Pereira Moraes, Jadson José Monteiro Oliveira, Altamir Gomes Bispo Junior, Elaine Parros Machado de Sousa and Robson Leonardo Ferreira Cordeiro

Abstract: In Brazil, agribusiness is an important sector of the economy, since it provides a substantial part of the country's Gross Domestic Product (GDP). Besides that, interest in biofuels has grown, considering that they enable the use of renewable energy. Brazil is the world's largest producer of sugarcane, which enables a large ethanol production. Thus, monitoring agricultural areas is important to support decision making. However, the amount of data generated and stored about these areas has been increasing in a way that far exceeds the human capacity to manually analyze it and extract information from it. That is why automatic and scalable data mining approaches are necessary. This work focuses on the sugarcane classification task, taking as input NDVI time series extracted from remote sensing images. Existing related works propose to analyze non-metric feature spaces using the DTW distance function as a basis. Here we demonstrate that analyzing the multidimensional space with the Minkowski distance provides better results, considering a variety of classifiers. XGBoost and kNN, both using the L2 distance, performed similarly to or better than the DTW-based classifiers in terms of accuracy.
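
To make the two distances being compared concrete, here is a minimal sketch of a 1-nearest-neighbour classifier on toy "NDVI-like" series, once with plain Euclidean (L2, a Minkowski distance) and once with DTW. The toy data and labels are hypothetical and are not the authors' data or pipeline.

```python
# 1-NN classification of toy seasonal profiles under L2 vs. DTW distance.
import numpy as np

def l2(a, b):
    return float(np.linalg.norm(a - b))

def dtw(a, b):
    """Classic O(len(a)*len(b)) dynamic-time-warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

def knn_1(train_X, train_y, x, dist):
    idx = int(np.argmin([dist(x, t) for t in train_X]))
    return train_y[idx]

# Toy training set: two "sugarcane-like" and two "other" seasonal profiles.
t = np.linspace(0, 2 * np.pi, 24)
train_X = [np.sin(t), np.sin(t + 0.2), np.cos(t), np.cos(t + 0.2)]
train_y = ["sugarcane", "sugarcane", "other", "other"]
query = np.sin(t + 0.1)

print(knn_1(train_X, train_y, query, l2))   # -> sugarcane
print(knn_1(train_X, train_y, query, dtw))  # -> sugarcane
```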

Paper Nr: 179
Title:

Integration of Decision-Making Components in ERP Systems

Authors:

Jānis Pekša and Janis Grabis

Abstract: Enterprise resource planning (ERP) systems are large modular enterprise applications intended for the execution of the majority of enterprise business processes, with a focus on transaction processing. However, the business processes often also require complex decision-making. Data processing logic is deemed complex decision-making logic if it involves complex analytical calculations and requires domain-specific knowledge. This paper reviews existing research on the decision-making capabilities of ERP systems and identifies different approaches for integrating decision-making logic in ERP systems. This review leads to an initial framework for integration, which is used to evaluate current integration solutions. The research findings suggest that decoupling decision-making logic from ERP systems enables the use of advanced decision-making techniques for the execution of decision-intensive business processes in real time, though the logical integration between decision-making components and business processes should be improved.

Paper Nr: 208
Title:

SEPA Files Transmission: Implementing Security Guarantees in Enterprise Resource Planning Systems

Authors:

Diogo Gonçalves and Isabel Seruca

Abstract: The SEPA regulation has defined a set of technical and business requirements and common standards that any payment system must respect to be considered compatible with the Single Euro Payments Area (SEPA) project. The technical requirements and the mandatory nature, set by the EU, of joining the SEPA project require a set of adaptations to be made by companies in their business relationship with Payment Service Providers (PSPs), with particular emphasis on adapting their Enterprise Resource Planning systems (ERPs), often referred to as "ERP SEPA compliance", and integrating secure C2B file transmission solutions, since an XML file is readable and editable. This paper describes a project developed at SBX Consulting targeting the implementation of security guarantees for the sending of SEPA files between a client company and the banking entities with which the company works. The security software solution developed addresses the encryption and hashing of the SEPA files and was integrated into the existing SAP system used by the company.

Paper Nr: 209
Title:

Evaluating Open Source E-commerce Tools using OSSpal Methodology

Authors:

Tânia Ferreira, Isabel Pedrosa and Jorge Bernardino

Abstract: E-commerce presents several advantages over traditional retail, which translate into a better competitive position. Open source tools have the main advantage of not increasing costs for companies, although it is necessary to choose an appropriate tool to meet their specific needs. For a more precise evaluation of open source e-commerce tools, the OSSpal assessment methodology was applied, which combines quantitative and qualitative evaluation measures. Using the OSSpal methodology, this paper compares three of the top e-commerce tools: Magento, OpenCart, and PrestaShop.

Paper Nr: 229
Title:

A Comprehensive Framework for Detecting Sybils and Spammers on Social Networks

Authors:

Lixin Fu

Abstract: Social media has become a common platform for millions of people to communicate with one another online. However, some fake accounts and computer-generated robots can greatly disrupt normal communication. For example, fake accounts can simultaneously "like" or "dislike" a tweet and therefore distort the true nature of the attitudes of real human beings. They collectively respond with similar or identical automated messages to influence sentiment towards a certain subject or tweet. They may also generate large amounts of unwanted spam messages, including irrelevant advertisements for products and services. Even worse, some messages contain harmful phishing links that steal people's sensitive information. We propose a new system that can detect these disruptive behaviours on OSNs. Our method integrates several sybil detection models into one prediction model based on account profiles, social graph characteristics, comment content, and user feedback reports. Specifically, we give two new detection algorithms that have better prediction accuracy than state-of-the-art systems, together with real-time performance. In addition, we describe a prototype system, including the software modules and real and synthetic data sets, on which comprehensive experiments may confirm our hypothesis. Currently, most sybil detection algorithms are based on structural connections, such as the few connections between densely connected Sybil communities and normal nodes. Their detection accuracy is mixed and often poor. Some algorithms are based on machine learning, but the different approaches remain separate. We expect our new model to detect the disruptive behaviour of fake identities more accurately, with high true positive rates and low false negative rates.

Paper Nr: 232
Title:

Towards Semi-automatic Generation of R2R Mappings

Authors:

Valéria M. Pequeno, Vânia M. P. Vidal and Tiago Vinuto

Abstract: Translating data from linked data sources to the vocabulary that is expected by a linked data application requires a large number of mappings and can require a lot of structural transformations as well as complex property value transformations. The R2R mapping language is a language based on SPARQL for publishing expressive mappings on the web. However, the specification of R2R mappings is not an easy task. This paper therefore proposes the use of mapping patterns to semi-automatically generate R2R mappings between RDF vocabularies. In this paper, we first specify a mapping language with a high level of abstraction to transform data from a source ontology to a target ontology vocabulary. Second, we introduce the proposed mapping patterns. Finally, we present a method to semi-automatically generate R2R mappings using the mapping patterns.

Paper Nr: 237
Title:

SSV: An Interactive Visualization Approach for Social Media Stock-related Content Analysis

Authors:

Felipe Lodur and Wladmir Cardoso Brandão

Abstract: User interactions in social media have proven to be highly correlated with changes in the Stock Market, and the large volume of data generated every day in this market makes manual analytical processing impractical. Data visualization tools are powerful means of enabling this analysis, generating insights to support decisions. In this article we present SSV, our data visualization approach to analyze social media stock-related content. In particular, we present the SSV architecture, as well as the techniques it uses to provide data visualization. Additionally, we show that the visualizations displayed by SSV are not arranged arbitrarily; on the contrary, SSV uses a ranking system based on visualization entropy. Moreover, we perform experiments to evaluate the ranking system, and the results show that SSV is effective at ranking data visualizations. We also conducted a case study with finance specialists to capture the usefulness of our proposed approach, which points out room for improvements.

Paper Nr: 247
Title:

ISE: Interactive Image Search using Visual Content

Authors:

Mohamed Hamroun, Sonia Lajmi, Henri Nicolas and Ikram Amous

Abstract: CBIR (Content-Based Image Retrieval) is an image retrieval method that exploits the feature vector of the image as the retrieval index, based on content such as colors, textures, shapes and distributions of objects in the image. The implementation of the image feature vector and the searching process have a great influence on the efficiency and results of CBIR. In this paper, we introduce a new CBIR system called ISE, based on the optimum combination of color and texture descriptors, in order to improve the quality of image retrieval using the Particle Swarm Optimization (PSO) algorithm. Our system also employs an Interactive Genetic Approach (GA) for better retrieval output. The performance analysis shows that the suggested 'DC' method improves the average precision metric from 66.6% to 89.50% for the color histogram on the Food category, from 77.7% to 100% for CCV on the Flower category, and from 44.4% to 67.65% for the co-occurrence matrix on the Building category, using the Corel data set. Besides, our ISE system shows an average precision of 95.43%, which is significantly higher than other CBIR systems presented in related works.

Paper Nr: 249
Title:

From ETL Conceptual Design to ETL Physical Sketching using Patterns

Authors:

Bruno Oliveira and Orlando Belo

Abstract: The development of ETL systems has been the focus of many research works, addressing the complexity and effort required for their implementation and maintenance, and proposing several techniques that represent valuable contributions to improving the final quality of ETL. In the last few years, we presented a pattern-oriented approach for developing these systems, based on patterns that encapsulate well-known design techniques. Basically, patterns embed common practices using abstract components that can be configured to enable their instantiation according to each pattern rule. However, each ETL system is unique, dealing with very specific data structures and decision-making requirements. Thus, several operational requirements need to be considered and system correctness is hard to validate, which can result in several implementation problems. In this paper, we present a conceptual approach based on patterns covering the main ETL phases, ranging from the conceptual design to its enrichment at the logical phase, which can be used for the generation of executable programs.

Paper Nr: 259
Title:

PP-OMDS: An Effective and Efficient Framework for Supporting Privacy-Preserving OLAP-based Monitoring of Data Streams

Authors:

Alfredo Cuzzocrea, Assaf Schuster and Gianni Vercelli

Abstract: In this paper, we propose PP-OMDS (Privacy-Preserving OLAP-based Monitoring of Data Streams), an innovative framework for supporting the OLAP-based monitoring of data streams, which is relevant for a plethora of application scenarios (e.g., security, emergency management, and so forth), in a privacy-preserving manner. The paper describes the motivations, principles and achievements of the PP-OMDS framework, along with technological advancements and innovations. We also include a detailed comparative analysis with competing frameworks, along with a trade-off analysis.

Paper Nr: 63
Title:

MPT: Suite Tools to Support Performance Tuning in NoSQL Systems

Authors:

M. El Malki, H. Ben Hamadou, N. El Malki and A. Kopliku

Abstract: NoSQL databases are considered a serious alternative for processing data whose volume reaches limits that are difficult to manage with relational DBMSs. So far, they have been praised for their capability to scale, their replication, and their ability to deal with new, flexible data models. Most of these systems are compared in terms of read/write throughput and their ability to scale. However, there is a need to go deeper and monitor more precise metrics related to RAM, CPU and disk usage. In this paper, we propose a benchmark tool suite that enables data generation, monitoring and comparison. It supports several NoSQL systems, including column-oriented and document-oriented stores as well as multistores. We present some experimental results that show its utility.

Paper Nr: 84
Title:

Predicting the Success of NFL Teams using Complex Network Analysis

Authors:

Matheus de Oliveira Salim and Wladmir Cardoso Brandão

Abstract: The NFL (National Football League) is the most popular sports league in the United States and has the highest average attendance of any professional sports league in the world, moving billions of dollars annually through licensing agreements, sponsorships, television deals, ticket and product sales. In addition, it moves a billion-dollar betting market, which heavily consumes statistical data on games to produce forecasts. Moreover, game statistics are also used to characterize player performance, dictating salaries. Thus, the discovery of implicit knowledge in NFL statistics becomes a challenging problem. In this article, we model the behavior of NFL players and teams using complex network analysis. In particular, we represent quarterbacks and teams as nodes in a graph and labor relationships among them as edges, compute metrics from the graph, and use them to discover implicit properties of the NFL social network and predict team success. Experimental results show that this social network is a scale-free, small-world network. Furthermore, node degree and clustering coefficient can be effectively used to predict team success, outperforming the usual passer rating statistic.
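
For readers unfamiliar with the two graph metrics highlighted in this abstract, the toy sketch below computes node degree and clustering coefficient on a small, entirely hypothetical quarterback-team graph (the edges are invented, not NFL records).

```python
# Degree and clustering coefficient on a toy quarterback/team graph.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("QB_A", "Team_1"), ("QB_A", "Team_2"),
    ("QB_B", "Team_1"), ("QB_C", "Team_2"),
    ("QB_A", "QB_B"),   # hypothetical labor relationship between players
])

degree = dict(G.degree())            # node degree
clustering = nx.clustering(G)        # local clustering coefficient per node
print(degree)
print(clustering)
print(nx.average_clustering(G))      # network-wide average
```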

Paper Nr: 86
Title:

TendeR-Sims - Similarity Retrieval System for Public Tenders

Authors:

Guilherme Q. Vasconcelos, Guilherme F. Zabot, Daniel M. de Lima, José F. Rodrigues Jr., Caetano Traina Jr., Daniel dos S. Kaster and Robson L. F. Cordeiro

Abstract: TendeR-Sims (Tender Retrieval by Similarity) is a system that helps search a database for requests for tender whose lots a company can satisfy, filtering out irrelevant lots so that companies can easily discover the contracts they can win. The system implements the Similarity-aware Relational Division Operator in a commercial Relational Database Management System (RDBMS), and compares products by combining a path distance in a preprocessed ontology with a textual distance. TendeR-Sims focuses on answering the following query: select the lots for which a company has a similar enough item for each of the required items. We evaluated our proposed system employing a dataset composed of product catalogs of Brazilian companies in the food market and real requests for tenders with known results. In the presented experiments, TendeR-Sims achieved up to 66% cost reduction at 90% recall when compared to the ground truth.

Paper Nr: 104
Title:

Mobility Service Platforms - Cross-Company Cooperation for Transportation Service Interoperability

Authors:

Markus C. Beutel, Sevket Gökay, Fabian Ohler, Werner Kohl, Karl-Heinz Krempels, Thomas Rose, Christian Samsel, Felix Schwinger and Christoph Terwelp

Abstract: The growing number of modes of transportation with diverse characteristics and situational suitability would allow multifaceted mobility behavior. Unfortunately, the usage of a combination of heterogeneous modes of transportation -- specifically during a complex travel chain with multiple changeovers -- is hindered in various ways. Users have to query, compare, combine, book and use multiple specialized mobility services individually, which results in inefficiencies on both the demand and supply side. Centralized mobility service platforms can form a technological bridge to deliver service interoperability. At the intersection of competition and cooperation, the need arises for suitable, profitable, and sustainable market forms to provide complex service configurations. As a result of interdisciplinary workshops with domain experts, we describe a role relationship model and identify relevant market forms. To do so, we present a conceptual tool to analyze, characterize and differentiate various mobility service platforms and apply it to a set of platforms currently being developed.

Paper Nr: 155
Title:

Linking Environmental Data Models to Ecosystem Services’ Indicators for Strategic Decision Making

Authors:

Jurijs Holms, Irina Arhipova and Gatis Vitols

Abstract: The quality of decision making mostly correlates with the quality of source data and data models. The aims of decision making influence the decisions. In turn, sustainable land management must ensure the growth of humanity in a confined space without negative consequences for the environment and future generations. Uniting existing environmental data models with Ecosystem Services assessment practices makes it possible to build an Information System that supports decision making for territory planning specialists. The architecture of this Information System will be partially based on Web Services technologies, which ensure the accessibility of input data from many sources/stakeholders and provide the availability of the output data at any step of the distributed decision-making process. The purpose of the research is to highlight processes which make it possible to link the data from environmental data models with Ecosystem Services indicators. The task is to formulate a proposal for facilitating the data exchange process in distributed strategic decision-making information systems for land management. This allows making Ecosystem Services (human benefits) assessments as an input using existing standardized (ISO/INSPIRE) and machine-readable (XML) data. Moreover, these assessments ensure feedback for strategic/sustainable land management based on distributed decision making.

Paper Nr: 169
Title:

An Architecture for Efficient Integration and Harmonization of Heterogeneous, Distributed Data Sources Enabling Big Data Analytics

Authors:

Andreas Kirmse, Vadim Kraus, Max Hoffmann and Tobias Meisen

Abstract: We present a lightweight integration architecture as an enabler for the application of process optimization via Big Data analytics and machine learning in large scale, multi-site manufacturing companies by harmonizing heterogeneous data sources. The reference implementation of the architecture is entirely based on open-source software and makes use of message queuing techniques in combination with Big Data related storage and extraction technologies. The approach specifically targets challenges related to different network zones and security levels in enterprise information architectures and across divergent production sites.

Paper Nr: 201
Title:

A Study on Persuasive Applications for Electric Energy Saving

Authors:

Un Hee Schiefelbein, William B. Pereira, Renan L. Souza, João C. D. Lima, Alencar Machado, Eduardo C. Stabel and Cristiano C. da Rocha

Abstract: The growing development of persuasive technologies has led to the creation of systems that help society in a variety of sectors, one of which is the electric power sector, where applications seek to persuade users to change behavior and save electricity. In this sense, this article presents concepts and techniques of persuasion applied in systems with this objective, and presents a prototype application that shows the user a prediction of electric energy consumption without requiring smart sensors, using only data to which the user has easy access.

Paper Nr: 202
Title:

The Social Media Perception and Reality – Possible Data Quality Deficiencies between Social Media and ERP

Authors:

Mirona Ana-Maria Popescu, Mouzhi Ge and Markus Helfert

Abstract: With the increase of digitalisation, data in social media are often seen as more up to date and realistic than the representations in information systems. Due to the fast changes in the real world and the increasing volume of Big Social Media data, there is usually a certain misalignment between social media and enterprise information systems such as ERP; therefore, there can be data deficiencies or data quality problems in the information systems, caused by the differences between the external social media and the internal information system. In this paper, underpinned by the work on ontological data quality by Wand and Wang (1996), we investigate a set of data quality problems between two representations - Social Media and ERP. We further discuss how the ERP system can be improved from the data quality perspective.

Paper Nr: 211
Title:

Sensitivity Analysis in OLAP Databases

Authors:

Emiel Caron and Hennie Daniels

Abstract: This paper deals with the theoretical underpinnings under which sensitivity analysis is valid in OLAP databases. Sensitivity analysis is considered to be the reverse of explanation generation in diagnostic reasoning. Our exposition differentiates between sensitivity analysis in systems of purely drill-down equations and in mixed systems of equations that also contain business model equations. It is proven that there is a unique additive drill-down measure defined on all cubes of the aggregation lattice. This proof is the basis for sensitivity analysis in OLAP databases, where a change in some base cell in the lattice is propagated to all descendants in its upset. For sensitivity analysis in mixed systems of equations, a matrix notation is presented and the conditions for solvability are discussed. Because such systems are typically overdetermined in OLAP databases, the implicit function theorem cannot be applied directly. Therefore, we propose a method to reduce the number of equations in the system and apply the implicit function theorem to a subsystem of the original system. We conclude with an alternative method for what-if analysis in mixed systems of equations.
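
The following minimal LaTeX rendition illustrates the additive drill-down relation and the propagation described in this abstract; the notation is assumed for illustration and is not taken from the paper.

```latex
% Additive drill-down relation between an aggregate cell c and its base cells b_i,
% and the resulting sensitivity of c to a change in one base cell b_j.
\begin{align}
  c &= \sum_{i \in \mathrm{children}(c)} b_i
      && \text{(drill-down equation for an additive measure)} \\
  \frac{\partial c}{\partial b_j} &= 1, \qquad \Delta c = \Delta b_j
      && \text{(a change in base cell } b_j \text{ propagates unchanged to every ancestor)}
\end{align}
```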

Paper Nr: 250
Title:

The Portuguese iAP Services Platform as a Building Block for User’s Centric Information Systems - The Case of the Higher Education Institutions

Authors:

Arsénio Reis, Jorge Borges, Paulo Martins and João Barroso

Abstract: Higher education is a complex business including very different processes. At its core are the teaching and research processes, and along the edge of the business model are the administrative processes, targeted at students and alumni. Some of these processes translate into services that must be available during the whole life of the users and the institutions. For example, a higher education institution is expected to issue diplomas during the whole lifetime of its former students. In this context, we have been working to use the current e-government infrastructure of electronic services as building blocks for some of the features of higher education institutions' electronic services. This work proposes the adoption of a set of those services. We have concluded a successful testing stage and expect to deploy a full production system very soon.

Paper Nr: 258
Title:

Age Classification from Spanish Tweets - The Variable Age Analyzed by using Linear Classifiers

Authors:

Luis G. Moreno-Sandoval, Joan Felipe Mendoza-Molina, Edwin Alexander Puertas, Arturo Duque-Marín, Alexandra Pomares-Quimbaya and Jorge A. Alvarado-Valencia

Abstract: Text classification or text categorization in social networks such as Twitter has gained great importance with the growth of applications of this process in diverse domains of society. The literature on text classifiers is significantly wide, especially in languages such as English; however, this is not the case for age classification, whose studies have mainly focused on image recognition and analysis. This paper presents the results of testing the performance of linear classifiers in the task of identifying Twitter users' age from their profile descriptions and tweets. For this purpose, a Spanish lexicon of 45 words around the concept “cumpleaños” was created and a gold standard of 1541 users with correctly identified age was obtained. The experiments are presented with a description of the algorithms used, yielding the best seven models, which identify the user's age with accuracy between 66% and 69%. Considering the information-retrieval layer, the new results showed that accuracy increased from 69.09% to 72.96%.

Paper Nr: 260
Title:

Yet Another Automated OLAP Workload Analyzer: Principles, and Experiences

Authors:

Alfredo Cuzzocrea, Rim Moussa and Enzo Mumolo

Abstract: In order to tune a data warehouse workload, we need automated recommenders on when and how to (i) partition data, (ii) deploy summary structures such as derived attributes and aggregate tables, and (iii) build OLAP indexes. In this paper, we share our experience implementing an OLAP workload analyzer that exhaustively enumerates all materialized view, index and fragmentation schema candidates. As a case study, we consider the TPC-DS benchmark, the de-facto industry standard benchmark for measuring the performance of decision support solutions.

Area 2 - Artificial Intelligence and Decision Support Systems

Full Papers
Paper Nr: 66
Title:

A Variable Neighborhood Search Algorithm for the Long-term Preventive Maintenance Scheduling Problem

Authors:

Roberto D. Aquino, Jonatas B. C. Chagas and Marcone J. F. Souza

Abstract: In this work we propose a Variable Neighborhood Search (VNS) approach for the long-term maintenance programming of an iron ore processing plant of a company in Brazil. The problem is a complex maintenance programming problem in which we have to assign the preventive maintenance orders for the machines to the available work teams over a 52-week planning horizon. In order to evaluate our solution, we developed a general mixed integer programming model and used its numerical results as the benchmark. The proposed VNS approach improved most of the instances, leading to new benchmarks.

Paper Nr: 109
Title:

A Bio-inspired Approach in Decision-making of Multiple Robots Applied on Partitioned Surveillance Task

Authors:

Bruno Massaki Emori and Rodrigo Calvo

Abstract: This paper proposes a robot coordination strategy for surveillance task execution. The strategy is based on artificial ant behavior, both for unexplored regions and for areas already discovered by the robots. In a previously known strategy, the robots are not capable of distinguishing their own pheromone from the pheromone of the others. In the proposed strategy, the ability to distinguish the substances leads to a partitioning of the environment. Experimental results show the typical behavior of each strategy applied to different environments and show the superiority of the proposed strategy due to the environment partitioning.

Paper Nr: 111
Title:

Reducing Empty Truck Trips in Long Distance Network by Combining Trips

Authors:

Bárbara da Costa Rodrigues and André Gustavo dos Santos

Abstract: Brazilian import and export activities at ports are subject to considerably slow queues and congestion, revealing a lack of medium- and/or short-term logistic planning. One of the causes is the number of trucks traveling with empty containers, performing one-way trips from inland cities to the port or from the port to the cities. This issue may be reduced by combining trips, i.e., after bringing goods to the port (export trip), a truck should, when possible, carry goods from the port to the origin or a nearby city (import trip). In this paper we investigate a combinatorial optimization problem in which a set of import/export/inland trips should be combined in order to reduce total traveling time, which in turn reduces the number of empty trucks traveling to/from the port. Individual trips and combined trips must obey national regulations on resting time, as typical road trips in Brazil cover hundreds, even thousands, of kilometers. We also consider operating hours at each location (time windows), which may force a driver to wait upon arriving. We test exact and heuristic approaches, and present the total travel time and number of trucks needed for each solution, considering instances based on real freight data.

Paper Nr: 150
Title:

Composite Alternative Pareto Optimal Recommendation System with Individual Utility Extraction (CAPORS-IUX)

Authors:

William Jeffries and Alexander Brodsky

Abstract: We propose a methodology and develop a system for generating composite alternative recommendations combining user-guided continuous improvement with Pareto optimal trade-off considerations and for extracting individual utility. The methodology describes a way to (1) construct a set of Pareto optimal recommendations given a selected metric and the user’s current utility, (2) explore the feasibility space by relaxing the Pareto optimal constraint in a given dimension, and (3) extract the utility for an individual user by capturing the interactions between the user and the system. The system itself consists of (1) a mechanism for generating feasible recommendations, (2) implementation of the key algorithms of the methodology, and (3) user interface for enabling interaction with the user.

Paper Nr: 241
Title:

Online Surgery Rescheduling - A Data-driven Approach for Real-time Decision Support

Authors:

Norman Spangenberg, Moritz Wilke, Christoph Augenstein and Bogdan Franczyk

Abstract: The operating room area is still one of the most expensive sections in the hospital due to its high and cost-intensive resource requirements. Further, several uncertainties, such as complications, cancellations and emergencies, as well as the need to monitor and control interventions during execution, distinguish the operational planning tasks of surgery scheduling from more tactical and strategic planning activities. However, there are few solutions that support monitoring and decision-making in operating room management at this level, since most focus on the creation of initial schedules or on efficient resource allocation. In this paper we describe a solution approach for supporting online surgery scheduling with a real-time decision support system. It allows rescheduling based on intra-surgical information about the current surgical phases and predictions of remaining intervention times, and further allows replanning due to emergency or canceled patients.

Paper Nr: 245
Title:

Proposed Solutions to the Tripper Car Positioning Problem

Authors:

Felipe Novaes Caldas and Alexandre Xavier Martins

Abstract: Trippers are pieces of equipment often found in mineral processing plants. Their role is to distribute ore coming from previous stages of the process into a silo with several hoppers. Positioning trippers is a scheduling problem defined by determining the position of the equipment across the bins and over time. The silo-tripper system was modeled as a combinatorial linear optimization program aiming to obtain the optimal tripper positioning. Two paradigms were used to find an exact solution: mixed integer linear programming and dynamic programming.

Paper Nr: 263
Title:

Fuzzy Analogical Reasoning in Cognitive Cities - A Conceptual Framework for Urban Dialogue Systems

Authors:

Stefan Markus Müller, Sara D'Onofrio and Edy Portmann

Abstract: This article presents a conceptual framework for urban dialogue systems that lets them emulate human analogical reasoning by using cognitive computing and particularly soft computing. Since creating analogies is crucial for humans to learn unknown concepts, this article proposes bringing urban applications closer to human cognition by introducing analogical reasoning as a sound component of their fuzzy reasoning process. Pursuing an approach derived from (transdisciplinary) design science research, two experiments were conducted to reinforce the theoretical foundation.

Short Papers
Paper Nr: 3
Title:

Incremental TextRank - Automatic Keyword Extraction for Text Streams

Authors:

Rui Portocarrero Sarmento, Mário Cordeiro, Pavel Brazdil and João Gama

Abstract: Text Mining and NLP techniques are a hot topic nowadays. Researchers strive to develop new and faster algorithms to cope with larger amounts of data. In particular, interest in text data analysis has been increasing due to the growth of social media networks. Given this, the development of new algorithms and/or the upgrade of existing ones is now a crucial task to deal with text mining problems under this new scenario. In this paper, we present an update to TextRank, a well-known method for automatic keyword extraction from text, adapted to deal with streams of text. In addition, we present results for this implementation and compare them with the batch version. The major improvements are lower computation times for processing the same text data in a streaming environment, in both sliding-window and incremental setups. The speedups obtained in the experimental results are significant; therefore, the approach was considered valid and useful to the research community.
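
For orientation, here is a minimal sketch of the batch TextRank idea this abstract builds on: score words by PageRank on a co-occurrence graph built over a sliding window. The incremental, streaming variant that is the paper's contribution is not reproduced here, and the tokenization is deliberately naive.

```python
# Batch TextRank-style keyword extraction via PageRank on a word graph.
import networkx as nx

def textrank_keywords(tokens, window=3, top_k=5):
    G = nx.Graph()
    # Connect words that co-occur within the sliding window.
    for i in range(len(tokens)):
        for j in range(i + 1, min(i + window, len(tokens))):
            G.add_edge(tokens[i], tokens[j])
    scores = nx.pagerank(G)              # TextRank scores == PageRank on the word graph
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

text = ("text mining and keyword extraction from text streams "
        "require fast keyword extraction algorithms").split()
print(textrank_keywords(text))
```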

Paper Nr: 20
Title:

Hurst Exponent and Trading Signals Derived from Market Time Series

Authors:

Petr Kroha and Miroslav Škoula

Abstract: In this contribution, we investigate whether it is possible to use chaotic properties of time series in forecasting. Time series of market data have components of white noise without any trend, and they have components of brown noise containing trends. We constructed a new technical indicator, MH (Moving Hurst), based on the Hurst exponent, which describes chaotic properties of time series. Further, we stated and proved the hypothesis that this indicator can bring more profit than the well-known MACD (Moving Averages Convergence Divergence) indicator, which is based on moving averages of time series values. In our experiments, we tested and evaluated our proposal using hypothesis testing. We argue that the Hurst exponent can be used as an indicator for technical analysis under the considerations discussed in our paper.
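
As background, the sketch below estimates the Hurst exponent with the classic rescaled-range (R/S) method, which distinguishes white noise (H near 0.5) from trending, brown-noise-like series (H near 1). This is one common estimator, chosen for illustration; it is not necessarily the exact computation behind the MH indicator.

```python
# Rescaled-range (R/S) estimate of the Hurst exponent.
import numpy as np

def hurst_rs(series, window_sizes=(8, 16, 32, 64)):
    series = np.asarray(series, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_values = []
        for start in range(0, len(series) - n + 1, n):
            w = series[start:start + n]
            dev = np.cumsum(w - w.mean())          # mean-adjusted cumulative sum
            R = dev.max() - dev.min()              # range of the cumulative deviations
            S = w.std()                            # standard deviation of the window
            if S > 0:
                rs_values.append(R / S)
        if rs_values:
            log_n.append(np.log(n))
            log_rs.append(np.log(np.mean(rs_values)))
    # The slope of log(R/S) against log(n) estimates H.
    H, _ = np.polyfit(log_n, log_rs, 1)
    return H

rng = np.random.default_rng(0)
print(hurst_rs(rng.normal(size=1024)))             # white noise: H close to 0.5
print(hurst_rs(np.cumsum(rng.normal(size=1024))))  # random walk (brown noise): H near 1
```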

Paper Nr: 28
Title:

The Calculation of Educational Indicators by Uncertain Gates

Authors:

Guillaume Petiot

Abstract: Learning Management Systems allow us to retrieve large amounts of data about learners in order to better understand them and how they learn. Thus, it is possible to suggest differentiated educational approaches which take into account the students' specific needs. Knowledge about the behavior of learners can be extracted by data mining or can be provided by teachers. The available data is often imprecise and incomplete. Possibility theory provides a solution to these problems. The modeling of knowledge can be performed by a possibilistic network, but this requires the definition of all conditional possibility distributions, which constitutes a limitation for complex knowledge modeling. Uncertain Gates allow, like Noisy Gates in probability theory, the automatic calculation of Conditional Possibility Tables. The existing Uncertain MIN and Uncertain MAX connectors are not sufficient for applications which need a compromise between both connectors. Therefore, we have developed new Uncertain Compromise connectors. In this paper, we present an experiment on educational indicator calculation for a decision support system using Uncertain Gates.

Paper Nr: 29
Title:

MedClick Health Recommendation Algorithm - Recommending Healthcare Professionals Handling Patient Preferences and Medical Specialties

Authors:

Rui Miguel Dos Santos Patornilho and André Vasconcelos

Abstract: Today, health plays a determinant role and is a subject of concern for society. Diagnosing a disease or obtaining a medical specialty given a set of symptoms is not a trivial task, and different decisions and approaches can be adopted to solve and handle this problem. Expert systems advise patients about a possible diagnosis, associated diseases, treatments and more concrete information about a disease considering simple symptoms. However, most systems do not have the component of recommending a medical doctor, which is the differentiating factor of this research. The aim of this paper is to develop an algorithm capable of determining the medical specialties associated with a set of symptoms and diseases and, based on the medical specialties obtained, recommending the most suitable specialists. The algorithm is divided into two phases: Health Screening and Health Professional Recommendation. Health Screening determines and computes the probabilities of all medical specialties, given a set of patient symptoms, applying a statistical model based on all the symptom→disease and disease→medical specialty relations. Health Professional Recommendation recommends the best health professionals given a set of patient preferences, applying a weighted mean in which the weight of each health professional feature is given by the patient according to his or her preferences. The algorithm was evaluated through a set of test cases, using a database with information about symptoms, diseases and medical specialties. It was later compared to other systems with the same purpose to assess its quality. The comparison between the algorithm and the WebMD system indicates that the diseases found by our solution match the diseases found by WebMD in 80% of all cases.
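
The Health Professional Recommendation step described above reduces to a weighted-mean score per professional, with the weights supplied by the patient. The following toy sketch illustrates that computation; the feature names, values and weights are hypothetical, not those of the MedClick system.

```python
# Ranking professionals by a weighted mean of feature values, with
# patient-supplied weights (hypothetical data).
def weighted_mean_score(features, weights):
    total_weight = sum(weights.values())
    return sum(features[f] * w for f, w in weights.items()) / total_weight

professionals = {
    "Dr. A": {"rating": 0.9, "proximity": 0.4, "price": 0.7},
    "Dr. B": {"rating": 0.7, "proximity": 0.9, "price": 0.8},
}
patient_weights = {"rating": 3, "proximity": 1, "price": 2}   # patient preferences

ranked = sorted(professionals,
                key=lambda p: weighted_mean_score(professionals[p], patient_weights),
                reverse=True)
print(ranked)   # professionals ordered by weighted-mean score
```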

Paper Nr: 38
Title:

SLA Non-compliance Detection and Prevention in Batch Jobs

Authors:

Alok Patel, Abhinay Puvvala and Veerendra K. Rai

Abstract: This paper reports a study on SLA non-compliance detection and prevention in batch job systems. It sets out the task of determining an optimal and smallest set of levers to minimize SLA non-compliance with minimum impact on business requirements. The methodology to address the problem consists of a four-step process that includes inputs, pre-processing, modelling & solving, and post-processing. The paper uses Integer Linear Programming (ILP) to achieve the global optimum given a set of varied constraints, such as sacrosanct constraints, auxiliary constraints, reach time constraints and SLA non-compliance identifier constraints. The methodology has been tested on two sets of data: synthetic data of small size to corroborate the correctness of the approach, and real batch job system data of a financial institution to test the rigor of the approach.
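
To make the "smallest set of levers" formulation concrete, here is a generic toy ILP in that spirit, written with the PuLP library: choose the fewest levers so that every SLA-critical job is covered by at least one lever. The jobs, levers and coverage data are hypothetical and the constraint families do not correspond to the paper's.

```python
# Toy set-covering ILP: minimise the number of levers while covering all jobs.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

jobs = ["J1", "J2", "J3"]
levers = ["reschedule_J1", "add_capacity", "reorder_batch"]
covers = {                       # which jobs each lever brings back within SLA
    "reschedule_J1": ["J1"],
    "add_capacity":  ["J1", "J2"],
    "reorder_batch": ["J2", "J3"],
}

prob = LpProblem("sla_levers", LpMinimize)
x = {l: LpVariable(l, cat=LpBinary) for l in levers}
prob += lpSum(x.values())                                  # objective: fewest levers
for j in jobs:                                             # every job must be covered
    prob += lpSum(x[l] for l in levers if j in covers[l]) >= 1

prob.solve()
print([l for l in levers if value(x[l]) == 1])             # e.g. ['add_capacity', 'reorder_batch']
```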

Paper Nr: 100
Title:

A Proactive Approach to Support Risk Management in Software Projects using Multi-agent Systems

Authors:

Thayse Alencar, Mariela Cortés, Nécio Veras and Lui Magno

Abstract: Software project management is a complex and demanding task full of threats or negative risks that lead to the delay or the failure of the project. Risks stem from many different internal sources as well as external ones in the company and the project. In addition, these events can originate in any phase of the project life cycle, and thereby increase the complexity of the decisions for the project manager. Aiming to reduce the negative consequences caused by these events, we propose an approach that extends a multi-agent system to provide support for risk management in software projects by using metrics and contingency reserves. The approach is evaluated with a feasibility study demonstrating that agent-oriented approaches are promising solutions that support risk management processes.

Paper Nr: 138
Title:

On the Adoption of Big Data Analytics: Interdependencies of Contextual Factors

Authors:

Anke Schüll and Natalia Maslan

Abstract: Even though the number of papers on the adoption of big data analytics (BDA) has increased, the literature still only scratches the surface in terms of understanding the influential factors of BDA adoption. To cope with the complexity of these factors, this paper focuses on the influence of some of the most important factors regarding BDA and their interrelations. We followed the technology, organization, and environment framework (TOE framework), which is frequently used to explain the process of technology adoption, to examine the context of the decision-making process and combined it with insights from dynamic capability theory. This paper contributes to BDA research by extending the TOE framework towards a dynamic capability view. It assists in the decision-making process regarding the development of BDA capabilities by determining the most influential factors and their side effects, thereby helping to prioritize these factors and to encourage investments accordingly.

Paper Nr: 142
Title:

Interactive Fuzzy Decision Support to Adjust Human Resource Structures

Authors:

Peter Rausch and Michael Stumpf

Abstract: Human resource planning plays a key role for enterprises’ and organizations’ sustainable success. This paper focuses on issues and challenges in the field of human resource planning in hierarchical organizations. Due to current challenges, like digital transformation, progress in artificial intelligence, etc., a fundamental structural transformation of workforce is initiated in many companies and organizations. Especially, huge enterprises in many industries and the service sector as well as organizations in the public sector have to review their mid-term and long-term desired human resource (HR) target structures. Based on an organization’s target structure, a strategy to transfer the actual HR structure to a desired new target structure is needed. This step is a big challenge because of many uncertainties of system parameters and complex structures of the planning approaches with many constraints and conflicting goals. To bridge gaps in this field, an interactive fuzzy approach which supports the development of strategies for actual-target structural adjustments (ATSA) in big organizations will be presented. This approach manages conflicting goals and is based on experience gained in an organization of the public sector, but it can also be transferred to non-governmental industry and service companies.

Paper Nr: 146
Title:

An Evaluation Method for the Performance Measurement of an Opinion Mining System

Authors:

Manuela Angioni, Andrea Devola, Mario Locci, Francesca Mura, Franco Tuveri and Mirella Varchetta

Abstract: This paper proposes an evaluation method for the performance measurement of an Opinion Mining system, parameterized according to the reviewer's point of view. The work aims to highlight and resolve some issues arising from previous activities in evaluating the quality of the results obtained by the analysis of reviews. The evaluation method is based on a model of an Opinion Mining system able to identify and assess the aspects included in a collection of reviews and the weighted importance of such aspects for their authors. A user profiling system works together with the Opinion Mining system, providing the set of parameters to associate with the aspects and allowing the Opinion Mining system to configure itself according to the user's preferences. For the preliminary experiments, a narrower subset of the Yelp dataset, limited to restaurants, was used.
Download

Paper Nr: 185
Title:

A Case-Based System Architecture based on Situation-Awareness for Speech Therapy

Authors:

Maria Helena Franciscatto, João Carlos Damasceno Lima, Augusto Moro, Vinícius Maran, Iara Augustin, Márcia Keske Soares and Cristiano Cortez da Rocha

Abstract: Situation Awareness (SA) involves the correct interpretation of situations, allowing a system to respond to the observed environment and providing support for decision making in many system domains. Speech therapy is an example of a domain where situation awareness can provide benefits, since practitioners must monitor the patient in order to perform therapeutic actions. However, there are few proposals in the area that address reasoning about a situation to improve these tasks. Likewise, the case-based reasoning methodology is rarely adopted, since existing proposals seldom use previous knowledge for problem solving. For this reason, this paper proposes a case-based architecture to assist Speech-Language Pathologists (SLPs) in tasks involving the screening and diagnosis of speech sound disorders. We present the modules that compose the system’s architecture and the results obtained from an evaluation using the Google Cloud Speech API. As the main contributions, we present the architecture of a system that aims to be situation-aware, encompassing perception, comprehension and projection of actions in the environment, and we present and discuss the results, working towards a speech therapy system for decision-making support.
Download

Paper Nr: 205
Title:

Towards an Evolution Strategy Approach in Binary Image Registration for Solving Digital Signature Recognition Tasks

Authors:

Catalina Cocianu and Alexandru Stan

Abstract: This paper focuses on the development of an image registration methodology for digital signature recognition. We consider two perturbation models, namely a rigid transformation and a mixture of shear and rigid deformation. The proposed methodology involves three stages. In the first stage, both the acquired image and the stored one are binarized to reduce the computational effort. Then an evolution strategy (ES) is applied to register the resulting binary images. The quality of each chromosome in a population is evaluated by a mutual information-based fitness function. In order to speed up the computation of fitness values, we propose a computation strategy based on the binary representation of the images and the sparsity of the image matrices. Finally, we evaluate the registration capabilities of the proposed methodology by means of quantitative measures as well as qualitative indicators. The experimental results and some conclusions concerning the capabilities of the various methods derived from the proposed methodology are reported in the final section of the paper.
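The fitness function named in the abstract rests on mutual information between two binary images. As an illustration of that quantity only, and not the authors' sparsity-optimised computation strategy, a minimal sketch assuming the images are equally sized NumPy arrays of zeros and ones:

```python
import numpy as np

def mutual_information_binary(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """Mutual information (in bits) between two equally sized binary images."""
    a = img_a.ravel().astype(bool)
    b = img_b.ravel().astype(bool)
    mi = 0.0
    for va in (False, True):
        pa = np.mean(a == va)
        for vb in (False, True):
            pb = np.mean(b == vb)
            pab = np.mean((a == va) & (b == vb))
            if pab > 0:  # skip empty cells of the joint histogram
                mi += pab * np.log2(pab / (pa * pb))
    return mi

# Toy usage: a shifted copy shares less information than an identical copy.
rng = np.random.default_rng(0)
img = (rng.random((64, 64)) > 0.7).astype(np.uint8)
print(mutual_information_binary(img, img))                 # maximal for identical images
print(mutual_information_binary(img, np.roll(img, 5, 1)))  # lower after a horizontal shift
```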
Download

Paper Nr: 248
Title:

Enhance Classroom Preparation for Flipped Classroom using AI and Analytics

Authors:

Prajakta Diwanji, Knut Hinkelmann and Hansfriedrich Witschel

Abstract: In a flipped classroom setting, it is important for students to come to class prepared. Being prepared in advance helps students grasp the concepts taught during classroom sessions. A recent student survey at the Fachhochschule Nordwestschweiz (FHNW) Business School, Switzerland, revealed that only 27.7% of students often prepared before a class and only 7% always did. The main reasons for not preparing for classes were lack of time and workload. A literature review revealed a growing use of Artificial Intelligence (AI), for example chatbots and teaching assistants, to support both teachers and students in classroom preparation. There is also a rise in the use of data analytics to support tutor decision making in real time. However, many of these tools are based on external motivation factors such as grading and assessment, whereas intrinsic motivation among students is more rewarding in the long term. This paper proposes an application based on AI and data analysis that focuses on intrinsically motivating and preparing students in a flipped classroom approach.
Download

Paper Nr: 255
Title:

BPMN Model and Text Instructions Automatic Synchronization

Authors:

Leonardo Guerreiro Azevedo, Raphael de Almeida Rodrigues and Kate Revoredo

Abstract: The proper representation of business processes is important for their execution and understanding. BPMN is the de facto standard notation for business process modeling. However, domain specialists, who are experts in the business, do not necessarily have the modeling skills to easily read a BPMN model; natural language is easier for them to read. Thus, both model and text are necessary artifacts for broad communication. Manually editing both artifacts may result in inconsistencies due to unilateral modifications. This research proposes a framework for synchronizing BPMN model artifacts and their natural language text representation. It generates textual work instructions from the model, and it updates the original model if the textual instructions are edited. The framework was implemented using standard Java technology and evaluated through experiments. The first experiment concluded that the textual work instructions can be considered equivalent to the process models in terms of knowledge representation. The second experiment concluded that the knowledge represented by the manually updated text can be considered equivalent to the automatically updated process model after synchronization.
Download

Paper Nr: 74
Title:

A Decision-Support System for Identifying the Best Contractual Delivery Methods of Mega Infrastructure Developments

Authors:

Moza T. Al Nahyan, Yaser E. Hawas, Mohammad S. Mohammad and Basil Basheerudeen

Abstract: This article describes Decision Support System (DSS) software for identifying the best contractual delivery methods for megaprojects, based on the elements of risk, investment opportunities and project constraints. A fuzzy-based multi-criterion decision-making technique is used to develop the DSS, to assist the client in selecting the appropriate contractual delivery method. The system accounts for the relative importance of the various stakeholders in the different project stages. It enables the client to identify the choices (regarding project delivery methods and stakeholder entities) that are most likely to provide the best environment for the project to succeed. With such a system, the client can also investigate the specifics of the various project stages and study the effects of enhancements or deficiencies in the stakeholder entities' capabilities. The system was developed and calibrated based on the results of extensive surveys among key stakeholders in the UAE.
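The abstract does not detail the fuzzy multi-criterion technique used; purely as a sketch of the general idea (hypothetical criteria, weights, scores and delivery methods, not the paper's model), a weighted aggregation over triangular fuzzy scores might look like this:

```python
from typing import Dict, Tuple

Fuzzy = Tuple[float, float, float]  # triangular fuzzy number (low, mode, high)

def weighted_fuzzy_score(scores: Dict[str, Fuzzy], weights: Dict[str, float]) -> float:
    """Aggregate per-criterion triangular fuzzy scores into a crisp ranking value."""
    total = sum(weights.values())
    l = sum(weights[c] * s[0] for c, s in scores.items()) / total
    m = sum(weights[c] * s[1] for c, s in scores.items()) / total
    u = sum(weights[c] * s[2] for c, s in scores.items()) / total
    return (l + m + u) / 3.0  # simple centroid defuzzification

# Hypothetical criteria weights and two delivery methods, for illustration only.
weights = {"risk": 0.5, "opportunity": 0.3, "constraints": 0.2}
alternatives = {
    "design-bid-build": {"risk": (0.4, 0.6, 0.8), "opportunity": (0.3, 0.5, 0.7), "constraints": (0.5, 0.7, 0.9)},
    "design-build":     {"risk": (0.5, 0.7, 0.9), "opportunity": (0.6, 0.8, 1.0), "constraints": (0.4, 0.6, 0.8)},
}
ranking = sorted(alternatives, key=lambda a: weighted_fuzzy_score(alternatives[a], weights), reverse=True)
print(ranking)  # delivery methods ordered from most to least preferred
```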
Download

Paper Nr: 139
Title:

Towards an Agile Lifecycle in Operation Research Projects

Authors:

Melina Vidoni, Maria Laura Cunico and Aldo Vecchietti

Abstract: Often, Operation Research (OR) interventions focus more on solving a specific problem than on addressing the project as a whole. Moreover, developers do not acknowledge OR models as systems that are part of an organisation. The lack of a methodology guiding the project complicates the introduction of changes to the model when the requirements are altered. However, these issues have already been acknowledged, addressed and solved in the Software Engineering (SE) discipline. Thus, considering the current contributions from SE to OR projects and the solutions offered by the former, this article analyses more deeply the similarities between the lifecycles of both kinds of projects, aiming to narrow the gap in OR research caused by the lack of project methodologies. A proposal is made regarding the flow of information refinement and the lifecycle phases predominant in OR projects; an initial theoretical adaptation of Feature Driven Development showcases their potential and possibilities. Finally, current limitations and future work are discussed.
Download

Paper Nr: 170
Title:

Use of Genetic Algorithm for Spatial Layout of Indoor Light Sources

Authors:

Pedro Henrique Gouvea Coelho, J. F. M. do Amaral and K. P. Guimarães

Abstract: People spend many hours inside buildings that are naturally and artificially illuminated. Since mankind learned to tame fire and use it for illumination, the natural condition of nighttime darkness has been modified, and the advent of electric lighting has intensified this. The problem of indoor lighting presents several options depending on the specific purpose of the lighting. There is room for heuristic choices, and genetic algorithms were chosen as a computational intelligence technique that allows multi-objective solutions and the inclusion of heuristics, offering versatility in the specific situations that occur in many particular applications. The main objective of this article is therefore to optimize the number of light sources in indoor environments with the aid of genetic algorithms, obtaining a suitable light intensity with the smallest number of light sources. One of the paramount reasons for using such an algorithm is that it returns an acceptable solution, in a finite number of trials, to an optimization problem with infinitely many possibilities. A case study is presented in which the applicability of genetic algorithms to the problem is discussed, and the results indicate the viability of the method.
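The paper's encoding and operators are not given in the abstract. As an illustration only of the trade-off it describes (adequate illuminance with as few sources as possible), the sketch below evaluates candidate layouts in a hypothetical grid room with an inverse-square light model; the GA loop itself is replaced by a crude random search for brevity, so only the fitness shape is illustrated.

```python
import itertools
import random

ROOM_W, ROOM_H = 10, 6          # hypothetical room, 1 m grid cells
TARGET = 0.5                    # required relative illuminance per cell
SOURCE_POWER = 3.0              # arbitrary light power units

def illuminance(lights, x, y):
    """Sum of inverse-square contributions of all light sources at point (x, y)."""
    return sum(SOURCE_POWER / (1.0 + (x - lx) ** 2 + (y - ly) ** 2) for lx, ly in lights)

def fitness(lights):
    """Higher is better: no under-lit cells, and as few lights as possible."""
    deficit = sum(
        max(0.0, TARGET - illuminance(lights, x + 0.5, y + 0.5))
        for x, y in itertools.product(range(ROOM_W), range(ROOM_H))
    )
    return -(deficit * 10.0 + len(lights))   # the weight of 10 on coverage is arbitrary

# Random-search stand-in for the evolutionary loop, for illustration only.
random.seed(1)
best = max(
    ([(random.uniform(0, ROOM_W), random.uniform(0, ROOM_H)) for _ in range(random.randint(2, 8))]
     for _ in range(1000)),
    key=fitness,
)
print(len(best), round(fitness(best), 2))    # number of lights and score of the best layout found
```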
Download

Area 3 - Information Systems Analysis and Specification

Full Papers
Paper Nr: 19
Title:

Influence Factors for Knowledge Management Initiatives - A Systematic Mapping Study

Authors:

Jacilane Rabelo and Tayana Conte

Abstract: Context: Knowledge Management (KM) is becoming critical in software organizations due to the increasing demands of the market. Despite the importance of KM, there is no consensus on which factors can influence KM initiatives in software organizations. Aim: The goal of this paper is to investigate which factors influence KM in software organizations. Method: We performed a systematic mapping study on influencing factors for knowledge management in software organizations. Results: From a set of 1028 publications, 147 were analyzed and 10 were selected in this mapping, which helped us identify the influence factors most cited by the authors. Among the selected publications, the most cited factors were Organizational Culture, Leadership, Information Technology and Social Network of Knowledge. Conclusion: There is a shortage of papers that address influencing factors for software organizations and how to assess these factors within them. Most studies show statistical data on the relationship between KM and the factors, but do not show how these factors can be evaluated in the organization. These aspects need to be addressed in order to improve knowledge management initiatives in software organizations.
Download

Paper Nr: 71
Title:

Distributed and Resource-Aware Load Testing of WS-BPEL Compositions

Authors:

Afef Jmal Maâlej, Mariam Lahami, Moez Krichen and Mohamed Jmaïel

Abstract: Load testing is an important type of testing for Web service compositions, since such applications are accessed concurrently by multiple users. In this context, load testing of these applications is an important task for detecting problems under elevated loads. For this purpose, we propose a distributed and resource-aware test architecture aiming to study the behavior of WS-BPEL compositions under load conditions. The major contribution of this paper consists of (i) looking for the best node to host the execution of each tester instance, (ii) running a load test during which the composition under test is monitored and performance data are recorded, and finally (iii) analyzing the resulting test logs in a distributed manner in order to identify problems under load. We also illustrate our approach by means of a case study in the healthcare domain, considering the context of resource-aware load testing.
Download

Paper Nr: 80
Title:

An Agile Framework for Modeling Smart City Business Ecosystems

Authors:

Anne Faber, Adrian Hernandez-Mendez, Sven-Volker Rehm and Florian Matthes

Abstract: Modeling business ecosystems enables ecosystem stakeholders to take better-informed decisions. In this paper we present an agile framework for modeling a smart city business ecosystem. We follow a design science research approach to conceptualize the agile approach to managing ecosystem models and present the architecture of our framework as a design artefact. During the design process, we evaluated ecosystem models that need to adapt to the emerging structures of the business ecosystem. The platform aims at a collaborative modeling process, which empowers end-users to manage the business ecosystem models and the underlying data. The evaluation of the platform was conducted with industry partners as part of the presented smart city initiative, indicating its usefulness for fulfilling modeling-related tasks.
Download

Paper Nr: 83
Title:

Temporal Evolution of Vehicular Network Simulators: Challenges and Perspectives

Authors:

Mauricio J. Silva, Genilson I. Silva, Fernando A. Teixeira and Ricardo A. Oliveira

Abstract: New proposals for applications and protocols for vehicular networks appear every day. It is crucial to evaluate, test and validate these proposals on a large scale before deploying them in the real world. Simulation is by far the method preferred by the community for conducting such evaluations. In this paper we survey the main simulators for vehicular networks and show how they have evolved over time. In doing so, we provide information that leads to an understanding of how, and how quickly, the scientific community absorbs a new simulator proposal. Additionally, valuable insights are presented to help researchers make better choices when selecting the appropriate simulator to evaluate new proposals.
Download

Paper Nr: 91
Title:

Software Process Improvement through the Combination of Data Provenance, Ontologies and Complex Networks

Authors:

Maria Luiza Furtuozo Falci, Regina Braga, Victor Ströele and José Maria David

Abstract: Software development is a complex and unpredictable activity. Using knowledge from previous software processes is a promising way to improve the quality of future executions. The aim of this work is to propose a software process improvement architecture named OntoComplex, which helps software managers in decision making about software projects. Among the technologies used in OntoComplex, we highlight ontologies and complex networks, associated with a provenance data model (ProvONE). By monitoring and analyzing process execution data using these technologies, we can extract implicit knowledge that can help project managers make decisions. An initial evaluation of OntoComplex was conducted using data from a medium-sized Brazilian software company.
Download

Paper Nr: 108
Title:

Managing Graph Modeling Alternatives for Link Prediction

Authors:

Silas P. Lima Filho, Maria Claudia Cavalcanti and Claudia Marcela Justel

Abstract: The importance of bringing relational data to other models and technologies, for example their representation as graphs, has been widely debated. The graph model allows topological analyses to be performed, such as social network analysis, link prediction or recommendation. There are already initiatives to map a relational database to a graph representation. However, they do not take into account the different ways of generating such graphs from the data stored in relational databases, especially when the goal is to perform topological analyses. This work discusses how graph modeling alternatives built from data stored in relational datasets may lead to useful results. However, this is not an easy task. The main contribution of this paper is towards managing such alternatives, taking into account that the choice of graph model and of the topological analysis to be used depends on the links the user intends to predict. Experiments are reported and show interesting results, including modeling heuristics to guide the user in the choice of graph model.
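The abstract does not fix a particular graph construction. As an illustration only of why the modeling choice matters for link prediction, the sketch below turns a hypothetical relational table of (author, paper) rows into a co-authorship projection and scores candidate links with the classic common-neighbours heuristic; neither the table nor the heuristic is taken from the paper.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical relational rows: (author, paper) — one possible source table.
rows = [("ana", "p1"), ("bob", "p1"), ("carla", "p2"), ("bob", "p2"), ("ana", "p3"), ("dan", "p3")]

# Modeling alternative: project the bipartite data onto an author-author co-authorship graph.
papers = defaultdict(set)
for author, paper in rows:
    papers[paper].add(author)

neighbours = defaultdict(set)
for members in papers.values():
    for a, b in combinations(sorted(members), 2):
        neighbours[a].add(b)
        neighbours[b].add(a)

def common_neighbours(u: str, v: str) -> int:
    """Link-prediction score: number of neighbours shared by u and v."""
    return len(neighbours[u] & neighbours[v])

# Rank non-adjacent author pairs by predicted likelihood of a future link.
candidates = [
    (u, v, common_neighbours(u, v))
    for u, v in combinations(sorted(neighbours), 2)
    if v not in neighbours[u]
]
print(sorted(candidates, key=lambda t: t[2], reverse=True))
```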
Download

Paper Nr: 110
Title:

Automatic Generation of Ontologies from Business Process Models

Authors:

Lukas Riehl Figueiredo and Hilda Carvalho de Oliveira

Abstract: Business process models are used in organizational environments for a better understanding of the interactions between the different sectors and the interdependencies between processes. However, business process models may present legibility problems and navigation difficulties as they become extensive. The representation of implicit knowledge is complex, and the interdependencies are not always easy to understand. The use of ontologies has opened a complementary perspective to provide processes with machine-accessible semantics. Ontologies contribute to the conceptualization and organization of the embedded and unstructured information that is present in business processes and that must be explored. Ontologies are used to structure the implicit knowledge present in business processes, making it machine-understandable. They also facilitate the sharing and reuse of knowledge by various agents, human or artificial. In this context, this work presents a systematic process to generate an ontology from a business process model in BPMN, allowing information about the model to be queried. For this, the PM2ONTO tool was developed, aiming to generate the ontology in OWL automatically and to provide predefined queries elaborated with SPARQL.
Download

Paper Nr: 134
Title:

People Management in Agile Development

Authors:

Pedro Thiago Rocha de Alcântara, Edna Dias Canedo and Ruyther Parente da Costa

Abstract: People Management (PM) is a fundamental part of managing software projects, since the development process depends on the people who perform it. Agile methods are focused on people and their interactions in order to maximize the success of software projects. However, most projects still suffer from risks that lead to failure. Given the importance of PM and its complexity, this work aims to build a PM model for software development approaches. A Systematic Review of Literature (SRL) was carried out in order to gather data about the state of the art in agile development. From the data collected in the SRL, the PM model was proposed. The proposed model was constructed generically to serve as a guide for PM in agile projects, independent of the characteristics of the organization and the time at which it is implemented.
Download

Paper Nr: 158
Title:

Customer Involvement in the Scaled Agile Framework - Results from a Case Study in an Industrial Company

Authors:

Joseph Trienekens, Rob Kusters, Hatta B. Himawan and Jan van Moll

Abstract: The Scaled Agile Framework (SAFe) has emerged over the last years as an approach that supports the improvement of software and systems development. SAFe claims to offer solutions to business challenges such as shortening cycle times, improving product quality, increasing team members’ satisfaction, and involving the customer in product development. However, regarding customer involvement, there is limited research, both on SAFe and on real-life agile software development projects. In previous work we developed an initial conceptual customer involvement model for the SAFe domain at Philips Medical Systems. In this paper this initial model is extended and enriched on the basis of a case study in an industrial company.
Download

Paper Nr: 165
Title:

Security Tests for Smart Toys

Authors:

Luciano Gonçalves de Carvalho and Marcelo Medeiros Eler

Abstract: Smart toys are becoming more and more common in many homes. As smart toys can gather data on the context of the user’s activities (e.g., voice, walking, photos) through cameras, microphones, GPS and various sensors, and store personalized and confidential information (e.g., location, biographical information, activity patterns), security measures are required to assure their reliability, especially because they are mainly used by vulnerable users: children. In fact, several security flaws have been reported in smart toys available on the market. Security incidents include information leakage, toys used as spying devices and outsiders interacting with children via unauthorized connections. Some researchers have investigated smart toy vulnerabilities and risks related to security issues, many have studied how to assure compliance with privacy policies, and one researcher has proposed general security requirements for smart toys. However, no work has proposed general security analyses and tests to assure that security requirements have been met. In this context, this paper discusses security issues, threats and requirements in the context of smart toys and presents general security analyses and tests for smart toys, all identified based on the Microsoft Security Development Lifecycle (SDL) process. We believe this work contributes to the field by providing manufacturers, developers and researchers with a general guideline on how to handle security aspects when designing and developing smart toys.
Download

Paper Nr: 167
Title:

A Systematic Review of Concolic Testing with Application of Test Criteria

Authors:

Lucilia Y. Araki and Leticia M. Peres

Abstract: We present in this paper a systematic review, following the approach of (Biolchini et al., 2005), of methods, techniques and tools related to concolic testing with the application of test criteria. The testing activity is the process of running a program with the intent of discovering defects. The search for test cases that increase the coverage of structural tests is being addressed by approaches that generate test cases using symbolic and concolic execution. Concolic testing is an effective technique for automated software testing that aims to generate test inputs to locate implementation failures in a program. The application of a test criterion is very important to ensure the quality of the test cases used. The number of elements exercised provides a coverage measure that can be used to evaluate the test data set and to decide when the testing activity can be considered complete.
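As a purely illustrative sketch of the concolic idea the review covers (a concrete run records a path condition; the negated condition is solved to reach the uncovered branch), using a toy program with a single linear branch instead of a real constraint solver:

```python
def program_under_test(x: int) -> str:
    """Toy program with a single branch on a linear condition."""
    if 2 * x + 3 > 17:
        return "then-branch"
    return "else-branch"

# A concrete run with x = 0 follows the else-branch and records the
# path condition 2*x + 3 <= 17. Concolic testing negates it
# (2*x + 3 > 17) and solves for a new input; for this linear
# condition the smallest integer solution is x = 8.
assert program_under_test(0) == "else-branch"
new_input = 8
assert program_under_test(new_input) == "then-branch"
print("branch coverage reached with inputs:", [0, new_input])
```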
Download

Paper Nr: 172
Title:

A Semantic Data Value Vocabulary Supporting Data Value Assessment and Measurement Integration

Authors:

Judie Attard and Rob Brennan

Abstract: In this paper we define the Data Value Vocabulary (DaVe), which allows for a comprehensive representation of data value. This vocabulary enables users to extend it with data value dimensions as required in the context at hand. DaVe caters for the lack of consensus on what characterises data value and on how to model it. The vocabulary allows users to monitor and assess data value throughout any value-creating or data exploitation effort, thereby laying the basis for effective management of value and efficient value exploitation. It also allows for the integration of diverse metrics that span many data value dimensions and that most likely pertain to a range of different tools in different formats. DaVe is based on requirements drawn from a number of value assessment use cases from the literature, and is evaluated using Gruber’s ontology design criteria and by instantiating it in a deployment case study.
Download

Paper Nr: 174
Title:

Optimized Feature Selection for Initial Launch in Dynamic Software Product Lines

Authors:

Ismayle de Sousa Santos, Evilasio Costa Junior, Rossana Maria de Castro Andrade, Pedro de Alcântara dos Santos Neto, Leonardo Sampaio Rocha, Claudia Maria Lima Werner and Jerffeson Texeira de Souza

Abstract: A Dynamic Software Product Line (DSPL) allows the generation of products that can adapt dynamically, at runtime, to changes in requirements or in the environment. This runtime adaptation is often achieved by activating and deactivating features, which introduces a cost (e.g., an overhead in resource consumption). To reduce this cost, one solution is partial product configuration at static binding time. Thus, in DSPLs, one challenge is feature selection: defining which features should be bound permanently before the initial launch and which features should be bound at runtime. In this paper, we address this challenge by presenting a graph model formulation of the feature selection problem for the initial launch in DSPLs that considers both static and dynamic binding. This model allows efficient optimization algorithms to be applied to the problem. We also present a proof of concept showing that the model can be used to generate optimized solutions to the feature selection problem for the initial launch in DSPLs.
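The paper formulates the problem as a graph model; the sketch below is not that formulation but a brute-force toy with hypothetical feature names, costs and one dependency, intended only to illustrate the static-versus-dynamic binding trade-off the abstract describes.

```python
from itertools import combinations

# Hypothetical DSPL features: runtime overhead if bound dynamically,
# and flexibility penalty if bound statically before the initial launch.
features = {
    "logging":    {"dyn_cost": 3, "static_penalty": 1},
    "gps":        {"dyn_cost": 5, "static_penalty": 4},
    "offline":    {"dyn_cost": 2, "static_penalty": 6},
    "encryption": {"dyn_cost": 4, "static_penalty": 2},
}
requires = {"offline": {"logging"}}   # a dynamic feature needs its dependency to stay dynamic too

def cost(dynamic: frozenset) -> float:
    """Total cost of a binding: overhead of dynamic features plus lost flexibility of static ones."""
    if any(dep not in dynamic for f in dynamic for dep in requires.get(f, ())):
        return float("inf")           # invalid binding: a needed dependency was bound statically
    return sum(features[f]["dyn_cost"] for f in dynamic) + \
           sum(features[f]["static_penalty"] for f in features if f not in dynamic)

names = list(features)
best = min(
    (frozenset(c) for r in range(len(names) + 1) for c in combinations(names, r)),
    key=cost,
)
print(sorted(best), cost(best))       # features kept dynamic for launch, and the resulting cost
```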
Download

Paper Nr: 257
Title:

Variability Specification and Resolution of Textual Requirements

Authors:

Alberto Rodrigues da Silva and João Costa Fernandes

Abstract: Since software product lines emerged, various techniques have been proposed for the commonality and variability modelling of functionally similar products within a given domain. However, so far the emphasis of variability modelling proposals has mostly been on the solution rather than on the requirements level, mainly because stakeholders often associate variability with the software implementation instead of the problem analysis. Taking into consideration the positive impact that a high-quality system requirements specification has on a software project, this paper proposes and evaluates an innovative approach for modelling and managing variability at the requirements level, based on the Common Variability Language (CVL), the OMG proposal for a domain-independent variability modelling standard. This approach has been implemented as a core feature of the ITBox system, a Web-based collaborative platform for the management of technical documentation.
Download

Short Papers
Paper Nr: 12
Title:

Semantic Analysis and Complex Networks as Conjugated Techniques Supporting Decision Making

Authors:

Pedro Ivo Lancellotta, Victor Ströele, Regina Braga, José Maria N. David and Fernanda Campos

Abstract: The expansion of information technology has created the need to analyse huge amounts of data in order to manage information for quick decision making. This paper presents an architecture for data visualization that uses semantic and structural interpretation as conjugated techniques for data analysis in different domains, through an interface that supports visualization strategies. A case study was carried out with a specialist in a real-world agricultural context, using data from dairy cattle, to answer our research question. The results demonstrate the feasibility of the proposal.
Download

Paper Nr: 34
Title:

Introducing an Information Logistics Approach to Support the Development of Future Energy Grid Management

Authors:

Steffen Nienke, Oliver Zöllner, Violett Zeller and Günther Schuh

Abstract: Due to the drastically increasing amount of data, decision making in companies relies heavily on having the right data available. Also, because of the increasing complexity of structures and processes, quick and precise flows of information become more important. This paper introduces a new approach for modelling information flows, creating a basis for efficient information management. It can be used to structure information requirements and to identify gaps in information processing. To demonstrate its benefits, the proposed Information Logistics Notation (ILN) is applied to the information logistics of today’s and future energy market and grid stability management, both processes of increasing complexity.
Download

Paper Nr: 39
Title:

Knowledge Processes in Virtual Teams - Tacit Knowledge

Authors:

Birgit Großer, Sara Kepplinger, Cathrin Vogel and Ulrike Baumöl

Abstract: The deployment of virtual teamwork superseding traditional work structures provides ample opportunities for organizations, e.g., regarding cost efficiency and employee retention. Many organizations embrace the potential of virtual teamwork, be they modern enterprises such as start-ups or traditionally structured companies integrating more virtual solutions along their evolution. Virtual teams create value by processing knowledge through the creation, transfer, retention and application of knowledge. Knowledge consists of explicit knowledge and hard-to-capture tacit knowledge. As tacit knowledge cannot always be easily converted into explicit knowledge in the form of written documents, the knowledge processes of virtual teams are constituted differently with regard to tacit knowledge. The reliance on information and communication technology for processing tacit knowledge introduces further challenges but also opens up new approaches, e.g., working in three-dimensional virtual environments. This paper presents an exploratory case study on how knowledge processes regarding tacit knowledge manifest themselves in virtual teams and which technological solutions are relevant as support. From the case study, implications for the implementation and technological support of knowledge processes for tacit knowledge are derived.
Download

Paper Nr: 49
Title:

Using Context Elements and Data Provenance to Support Reuse in Scientific Software Ecosystem Platform

Authors:

Lenita M. Ambrósio, José Maria N. David, Regina Braga, Fernanda Campos, Victor Ströele and Marco Antônio Araújo

Abstract: [Background] Managing contextual elements and provenance information plays a key role in scientific experiments. Currently, the scientific experimentation process requires support for collaborative and distributed activities. Detailed logging of the steps taken to produce results, as well as information on the environment context, could allow scientists to reuse these results in future experiments and to reuse the experiment, or parts of it, in another context. [Objectives] The goal of this paper is to present a provenance and context metadata management approach that supports researchers in reusing experiments on a collaborative and distributed platform. [Method] First, the phases of the context and provenance management life cycle were analyzed, considering existing models. Then, a conceptual framework was proposed to support the analysis of contextual elements and provenance data of scientific experiments. An ontology capable of extracting implicit knowledge in this domain was specified. This approach was implemented in a scientific ecosystem platform. [Results] An initial evaluation showed evidence that this architecture is able to help researchers during the reuse and reproduction of scientific experiments. [Conclusions] Context elements and data provenance, associated with inference mechanisms, can be used to support reuse in the scientific experimentation process.
Download

Paper Nr: 87
Title:

Health Evaluation in Software Ecosystems

Authors:

Iuri Carvalho, Fernanda Campos, Regina Braga, José Maria David, Victor Stroele and Marco Antônio Araújo

Abstract: Context: The quality of a Software Ecosystem (SECO) platform and of its available products is an important characteristic to ensure its success. However, this concept goes beyond traditional quality assurance approaches, including concepts such as SECO health. Objectives: The aim of this study is to propose an evaluation process for the application of health metrics. In addition, these metrics were formalized to make their application feasible and to improve the obtained results. Method: A systematic mapping was conducted with the aim of analyzing the SECO quality research area, highlighting the state of the art and identifying its main characteristics. In addition, the main approaches and metrics present in the literature for SECO quality and health evaluation are detailed. This work also presents an observational study used to define relevant health metrics within an evaluation process. Results: The metrics were formalized and evaluated by specialists. A health evaluation process was developed to apply these metrics. This process is supported by an architecture named HEAL ME.
Download

Paper Nr: 119
Title:

Ontology for SEAM Service Models

Authors:

Gorica Tapandjieva and Alain Wegmann

Abstract: The service system is a popular concept in academia and industry. At the same time, it is a challenging concept to represent, due to its recursive nature and the difficulty of relating it to entities in reality. In this paper we present an ontology for modeling service systems with the SEAM systemic method. As part of the ontology, we provide a meta-model, well-formedness rules and a formalization in the Alloy language. The ontology we propose is an updated and minimalistic version of the existing SEAM modeling language ontology that puts an emphasis on behavior. We validated the ontology by modeling around 20 case studies. The running example used throughout this paper is one of these case studies.
Download

Paper Nr: 151
Title:

Is My Office 365 GDPR Compliant? - Security Issues in Authentication and Administration

Authors:

Nestori Syynimaa and Tessa Viitanen

Abstract: The General Data Protection Regulation, commonly referred to as GDPR, will be enforced in all European Union countries in May 2018. GDPR sets requirements for processing EU citizens’ personal data regardless of the physical location of the organisation processing the data. Over 40 percent of European organisations use Office 365. Microsoft claims that the Office 365 service is GDPR compliant and has provided tools to help Office 365 customers ensure their GDPR compliance. In this paper, we present some security issues related to the very foundation of the Office 365 service, namely Azure Active Directory and the administrative tools, and assess their GDPR compliance. Our findings reveal that personal data stored in Office 365 is subject to undetectable security breaches, preventing organisations from being GDPR compliant. We also propose actions to minimise the impact of these security issues.
Download

Paper Nr: 156
Title:

Patterns for Modelling and Composing Flexible Workflows from Cloud Services

Authors:

Imen Ben Fraj, Yousra BenDaly Hlaoui and Leila Jemni BenAyed

Abstract: In this paper, we propose a Model-Driven Approach for the specification and execution of cloud service flexible workflow applications. We define two flexibility patterns, based on BPMN, that deal with changes in resource requirements for workflows. The workflows are built at an abstract level, using a BPMN model for the specification of the cloud service workflow structure based on the flexibility patterns, and a state-chart diagram for the specification of the cloud service workflow behaviour. The execution process is supervised by a control system that is responsible for making decisions on the execution of the workflow based on the behaviour defined by the state-chart diagram.
Download

Paper Nr: 190
Title:

Semantic Interoperability among Industrial Product Data Standards using an Ontology Network

Authors:

Alvaro Luis Fraga, Marcela Vegetti and Horacio Pascual Leone

Abstract: Globalization impacts the competitive capacity of industries, forcing them to integrate their production processes with geographically distributed facilities. Consequently, the information systems supporting such processes should interoperate. Standards have been seen for many years as a way to achieve interoperability. In particular, subcommittee 4 of technical committee 184 of the International Organization for Standardization (ISO) focuses on the definition of industrial product data standards. However, these standards still suffer from semantic inconsistencies when they are put to work together. In this article, we propose an ontology network as a semantic bridge among standards for product representation, as a solution to achieve interoperability among information systems in manufacturing industries.
Download

Paper Nr: 191
Title:

Organizational Patterns between Developers and Testers - Investigating Testers’ Autonomy and Role Identity

Authors:

Michal Doležel and Michael Felderer

Abstract: This paper deals with organizational patterns (configurations, set-ups) between developers/programmers and testers. We first discuss the key differences between these two Information Systems Development (ISD) occupations. Highlighting the origin of inevitable disagreements between them, we reflect on the nature of the software testing field, which is currently undergoing an essential change under the increasing influence of agile ISD approaches and methods. We also address the ongoing professionalization of software testing. More specifically, we propose that the concept of role identity, anchored in (social) identity theory, can be applied to the profession of software testers and their activities studied accordingly. Furthermore, we conceptualize three organizational patterns (i.e., isolated testers, embedded testers, and eradicated testers) based on a selective literature review of research and practice sources in the Information Systems (IS) and Software Engineering (SE) disciplines. After summarizing the key industrial challenges of these patterns, we conclude the paper by calling for more research evidence that would demonstrate the viability of the recently introduced novel organizational models. We also argue that especially the organizational model of “combined software engineering”, where the roles of programmers and testers are reunited into a single role of “software engineer”, deserves closer attention from IS and SE researchers in the future.
Download

Paper Nr: 219
Title:

Building Information Modeling for Quality Management

Authors:

Ying-Mei Cheng

Abstract: Building Information Modeling (BIM) has grown tremendously and is used in every stage of the construction project life cycle. This research focuses on integrating BIM with quality management for innovative development and on improving the performance efficiency of the quality management system in the construction stage. First, this study proposes an application framework for BIM in the AEC (Architecture/Engineering/Construction) field. Essentially, this framework emphasizes the application of different BIM models with different special requirements during the different phases of the project life cycle. Second, based on this framework, a QC (Quality Control) model system prototype is established. The QC model is utilized in the construction stage with the Autodesk Revit API (Application Programming Interface) to code add-ins, which can record onsite quality defects immediately and display the 3D elements of these defects. Moreover, users can print QC reports with this system or use A360 to produce panoramas or stereo panoramas to check the positions of the onsite quality defects using mobile devices. The system efficiently documents construction quality defects while improving communication regarding quality information.
Download

Paper Nr: 222
Title:

Testing Practices of Software in Safety Critical Systems: Industrial Survey

Authors:

Mohamad Kassab

Abstract: Software is increasingly becoming a core, integrated part of safety-critical systems. Unfortunately, little contemporary data exists to document the actual practices used by software professionals for software testing and quality assurance activities for software in safety-critical systems. To remedy this lack of data, we conducted a comprehensive survey of software professionals in an attempt to discover these practices. In this paper we report our findings from this survey on the state of practice of testing software for safety-critical systems with respect to three areas: 1) the integration of testing activities within the software development life cycle; 2) testing methods and techniques; 3) testing metrics and defect management. We also provide a comparison with testing software for non-safety-critical systems.
Download

Paper Nr: 235
Title:

Context-Aware System Analysis: Introduction of a Process Model for Industrial Applications

Authors:

Patrick Rosenberger, Detlef Gerhard and Philipp Rosenberger

Abstract: The ability to record environmental conditions with sensors enables the development of applications that can react to changing situations. These context-aware systems provide major benefits to their users, as they allow the customization of functionality while decreasing the need for user interaction. Despite these advantages, context-awareness still plays a minor role in industrial applications, due to reasons such as the increased effort required during development or the absence of tools to handle the inherent complexity. To facilitate the implementation of context-aware systems, this publication introduces a process model for analysing contextual requirements and defining contextual functionalities. The approach allows the integration of context-awareness into systems and can be used in combination with all common software development methodologies. Further, a method is proposed that reduces the complexity of context-aware systems to a manageable level. To demonstrate how the presented approaches can be applied, the paper concludes by showing the development of a context-aware information system for the workers on the shop floor of an injection moulding company.
Download

Paper Nr: 240
Title:

PBLOntology: A Domain Ontology with Context Elements for Problem-based Learning

Authors:

Adriana Silva Souza, Adolfo Duran and Vaninha Almeida

Abstract: In education, ontologies have proved useful for structuring intelligent tutors, collaborative learning, the creation of learning models, semantic search for the recommendation of learning material, and the personalization and adaptation of educational content based on the student’s context. Problem-Based Learning (PBL) is a pedagogical methodology regarded as an alternative to traditional learning for skills development. However, the use of web-based technologies to support learning with the PBL methodology is still recent. A systematic review was conducted and showed the lack of a formal representation of PBL concepts based on an ontology language. Thus, this paper proposes a reference ontology for PBL, called PBLOntology, which incorporates context elements of the methodology. For the conception of the ontology, research was conducted in a computer engineering course that adopts the PBL methodology. To assess PBLOntology, we defined relevant criteria regarded as fundamental for ontologies: testing activities and evaluation with experts. Although most of the experts stated that the definitions were satisfactory or partially satisfactory, their feedback allowed us to adjust some definitions, improving the ontology.
Download

Paper Nr: 262
Title:

Validation and Extension of the Smart City Ontology

Authors:

Petr Štěpánek and Mouzhi Ge

Abstract: Over the last decade, the concept of the Smart City has been extensively studied alongside the development of modern societies. However, due to the complexity of the Smart City, there is no widely accepted definition of it. More recently, Ramaprasad et al. (2017) proposed a Smart City ontology that connects the relevant concepts through specified relations. This ontology can thus offer various paths by which theory and practice contribute to the development and understanding of a Smart City. However, the ontology still lacks practical validation to verify its applicability. Therefore, in this paper, we select a set of critical Smart City papers and validate the ontology by fitting the papers into it. Based on the validation, we also propose and discuss possible extensions and consolidations of this Smart City ontology.
Download

Paper Nr: 4
Title:

Enhancing the Effectiveness of Asset Management through Development of License Management System on the Basis of SCCM 2012 Program by Microsoft Company

Authors:

Ekaterina Kurbanova, Olga Korableva and Olga Kalimullina

Abstract: In modern organizations, specific software is widely used, and this gives rise to one of the most pressing issues in the activity of any organization: whether the software currently in use complies with the terms of its license agreements. In many companies, the set of licenses bought for the use of software does not correspond to the actual installations on users’ computers. As a result, companies are not protected from possible discrepancies between purchased and installed licenses, in spite of considerable budget investments, and such discrepancies cause financial, legal and other risks. This study reveals the deficiencies of available approaches to the construction of license management systems and provides methodological guidance developed to remove the identified deficiencies. An integrated, innovative system for the management and accounting of licenses was developed that conforms to all of these methodological recommendations. This system makes it possible to build a transparent relationship between the software installed in a company and the licenses bought for its use, allowing the company to strictly observe intellectual property law. The developed system has been in regular use at a large Russian company since June 2016. The article presents the results of this period of operation and confirms the soundness of the developed system as a tool for managing software licenses.
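The abstract does not describe the system's internals; purely as a toy illustration of the core reconciliation it refers to (purchased licenses versus detected installations), with hypothetical product names and counts:

```python
from collections import Counter

# Hypothetical inventory data: licenses purchased per product,
# and one row per detected installation (e.g., as reported by an inventory tool).
purchased = {"OfficeSuite": 120, "CADPro": 10, "AVClient": 200}
installations = ["OfficeSuite"] * 130 + ["CADPro"] * 8 + ["AVClient"] * 150

installed = Counter(installations)
for product in sorted(set(purchased) | set(installed)):
    gap = purchased.get(product, 0) - installed.get(product, 0)
    status = "compliant" if gap >= 0 else "UNDER-LICENSED"
    print(f"{product}: purchased={purchased.get(product, 0)} "
          f"installed={installed.get(product, 0)} ({status}, spare={max(gap, 0)})")
```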
Download

Paper Nr: 7
Title:

A Parser and a Software Visualization Environment to Support the Comprehension of MATLAB/Octave Programs

Authors:

Thiago de Lima Mariano, Glauco de Figueiredo Carneiro, Miguel Pessoa Monteiro, Fernando Brito e Abreu and Ethan Munson

Abstract: Software comprehension and analysis of MATLAB and Octave programs are not trivial tasks. Programmers have to devote considerable effort to obtaining relevant data from source code and related artifacts. Tools that support software comprehension activities usually rely on parsers to obtain data from source code. The problem in the MATLAB/Octave case is the limited number of available parsers and the difficulty of building an extensible solution with them. In this paper, we describe the development of a parser that converts MATLAB and Octave program code into instances of the Knowledge Discovery Metamodel (KDM), which can subsequently undergo static analyses to feed different visual representations. The goal of these representations is to support software comprehension. We describe our experience in using this parser to build a software visualization environment to support the comprehension of MATLAB and Octave programs.
Download

Paper Nr: 10
Title:

A Concept for Comprehensive IT Support for Environmental and Energy Management in SMEs

Authors:

Anna O'Faoláin de Bhróithe, Frank Fuchs-Kittowski, Jörn Freiheit, Detlef Hüttemann, Stefan Voigt and Thomas Dinkel

Abstract: Environmental and energy management (EnvM and EM, respectively) are important aspects of the everyday running of companies. However, there is no single software tool that completely supports companies throughout the entire management process. The aim of the QuiXel project is to develop an integrated data and information platform for evolving and collaborative EnvM and EM in small and medium-sized enterprises (SMEs). The platform will provide comprehensive software support for the complete management process and will simplify tasks such as planning goals and targets, data structuring, data acquisition, data analysis, report generation, and documentation. In addition, the platform will include an integrated manual with instructions and guidelines for best-practice EnvM/EM in accordance with the ISO 14001 and ISO 50001 standards. The requirements for such a platform are presented in this paper, along with an overview of the system concept.
Download

Paper Nr: 18
Title:

Similarities Building a Network between Researchers based on the Curriculum Lattes Platform

Authors:

Sérgio Antonio Andrade Freitas, Edna Dias Canedo, Edgard Costa Oliveira and Dionlan Alves de Jesus

Abstract: Inference machines are one resource used to assist decision making in data processing and interpretation, as they allow knowledge to be attributed to a set of information items. In this sense, this work implements a similarity algorithm that calculates the percentage of adherence found among academic profiles at the University of Brasília (UnB). The domain base used to provide the data for this work is the Lattes platform, which holds data on the scientific production of registered university scholars. The calculation provides a rating of the individuals and of the approximation between their academic production, taking into account a base profile which is compared with one or more destination profiles. To run this procedure, the data held in each Curriculum Lattes is extracted, and an ontology of concepts is created that holds the data on the scientific production, supplying the information needed by the comparison task. The comparison is made term by term over all the bibliographical production of both compared profiles, and each term may have a set of synonyms that are also taken into consideration. At the end, the results are compiled and presented in a spreadsheet that summarizes all the adherence percentages computed. Applying the algorithm determines which people in a set are more or less close, and more or less semantically linked through their academic output, when compared to other individuals, producing a similarity percentage.
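As the abstract describes a term-by-term comparison of bibliographical production with synonym expansion, a minimal sketch of one possible adherence percentage is shown below. The synonym table, titles and matching rule are hypothetical and not the authors' algorithm.

```python
# Hypothetical synonym table used to widen term matches across languages.
SYNONYMS = {"ontologia": {"ontology"}, "redes": {"network", "networks"}, "dados": {"data"}}

def expand(term: str) -> set:
    """A term plus its known synonyms."""
    return {term} | SYNONYMS.get(term, set())

def adherence(base_titles, target_titles) -> float:
    """Percentage of base-profile terms found (directly or via synonyms) in the target profile."""
    base_terms = {t for title in base_titles for t in title.lower().split()}
    target_terms = {t for title in target_titles for t in title.lower().split()}
    if not base_terms:
        return 0.0
    matched = sum(1 for t in base_terms if expand(t) & target_terms)
    return 100.0 * matched / len(base_terms)

base = ["Ontologia para redes sociais", "Mineração de dados"]
target = ["Social network ontology", "Data mining applications"]
print(round(adherence(base, target), 1))   # adherence percentage of the base profile to the target
```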
Download

Paper Nr: 24
Title:

Identifying Anomalies in SBVR-based Business Rules using Directed Graphs and SMT-LIBv2

Authors:

Sayandeep Mitra, Kritika Anand and Pavan Kumar Chittimalli

Abstract: In modern times, business rules have grown exponentially as enterprises become more complex and operate in diverse fields. With this growth, different forms of anomalies creep into the business rules, causing business enterprises to take wrong decisions, which can affect their performance and reputation. Examining the rules manually is time- and resource-consuming due to the large number of rules intermingled with each other, and manual verification is not free of human-induced errors either. Thus, automatic verification of business rules is the need of the hour. We present a tool to detect different anomalies in business rules represented in the SBVR format. The tool uses a combination of directed graphs and SMT solvers to perform the verification task. We show the implementation of our tool along with its evaluation on industry-level benchmarks.
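The abstract names directed graphs and SMT solving as the verification machinery without further detail. As an illustration of one anomaly class only (circular rule dependencies, detectable on the graph side alone), a small cycle check over hypothetical rule dependencies:

```python
from collections import defaultdict

# Hypothetical rule dependencies: rule -> rules whose conclusions it relies on.
depends_on = {
    "R1": ["R2"],
    "R2": ["R3"],
    "R3": ["R1"],          # closes a cycle: R1 -> R2 -> R3 -> R1
    "R4": ["R2"],
}

def find_cycle(graph):
    """Depth-first search returning one dependency cycle, or None if the rule base is acyclic."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = defaultdict(int)

    def dfs(node, path):
        colour[node] = GREY
        for nxt in graph.get(node, ()):
            if colour[nxt] == GREY:                 # back edge: a cycle has been found
                return path[path.index(nxt):] + [nxt]
            if colour[nxt] == WHITE:
                found = dfs(nxt, path + [nxt])
                if found:
                    return found
        colour[node] = BLACK
        return None

    for node in graph:
        if colour[node] == WHITE:
            found = dfs(node, [node])
            if found:
                return found
    return None

print(find_cycle(depends_on))   # e.g. ['R1', 'R2', 'R3', 'R1']
```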
Download

Paper Nr: 25
Title:

Predictive Maintenance in the Context of Service - A State-of-the-Art Analysis of Predictive Models and the Role of Social Media Data in this Context

Authors:

Jens Grambau, Arno Hitzges and Boris Otto

Abstract: The aim of this study is to identify existing Predictive Maintenance methods in the context of service and the role of Social Media data in this context. With the help of a Systematic Literature Review, eleven studies of notable Predictive Maintenance methods are identified and classified according to their focus, data sources, key challenges, and assets. The review reveals that existing methods use different prediction technologies and are mainly focused on industries with highly critical products. Existing methods provide value for B2B and B2C as well as for products and services, and the majority use heterogeneous data that was generated automatically. However, while the consideration of Social Media data offers benefits for prediction methods, by identifying and using personal user data, its current usage is rare and recognizable only in the B2C sector. This research therefore shows a gap in the current literature, as no universal Predictive Maintenance solution is available that enables organizations to enhance their services by using the full potential of Social Media. Future research thus needs to focus on the integration of Social Media data into prediction methods for the B2C sector. In this regard, it is of particular interest how Social Media data has to be gathered and processed and whether existing predictive algorithms can be extended with Social Media data.
Download

Paper Nr: 33
Title:

OdysseyProcessReuse - A Component-based Software Process Line Approach

Authors:

Eldânae Nogueira Teixeira, Aline Vasconcelos and Cláudia Werner

Abstract: It is expected that managing process variations and organizing process domain knowledge in a reusable way can help handle the complexity of software process definition. In this context, the purpose of this paper is to describe a systematic software process reuse methodology that combines process reuse techniques, namely Software Process Lines (SPrL) and Component-Based Process Definition (CBPD), to increase reuse possibilities. The SPrL approach manages the variability inherent to the software process domain, while CBPD focuses on modularizing domain process information into process components. The proposed SPrL modelling metamodel and notation address reusable process elements, explicitly representing the variability concept in both the structure and the behaviour of the process domain. Based on the results of the evaluation studies, it was possible to obtain evidence of the approach's feasibility, with higher expressiveness when using the proposed process variability notation, which allows more of the semantic concepts inherent to SPrL scenarios to be described graphically. Also, the set of heuristics supporting mappings among artefacts at distinct abstraction levels was considered useful for keeping the traceability of variability properties, relationships and restrictions. Further research is being conducted to explore ways of supporting project managers during decision making in new software process definitions.
Download

Paper Nr: 95
Title:

Elihu: A Project to Model-Driven Development with Naked Objects and Domain-Driven Design

Authors:

Samuel Alves Soares and Mariela Cortés

Abstract: Model-driven development is an approach to creating software through well-defined models containing the information needed to generate the application. However, software modeling in this approach requires the definition of application infrastructure artifacts in the model, such as user interface technologies and the data persistence scheme, in order to transform the model into the final application. This makes the models complex and difficult to understand and maintain, since new artifacts need to be added and the focus on the application's business domain is lost. To address this problem, we propose the Elihu project, a solution based on the Naked Objects pattern, Domain-Driven Design and software design patterns, in which the developer models only the business objects and their characteristics related to the application domain. The full application is generated based on these software patterns, and a Naked Objects framework is responsible for the application infrastructure code and for displaying objects to users. The proposed solution benefits the creation of less complex models, supports the evolution and modification of requirements along the development, and enables the generation of full applications without manual intervention in the generated code.
Download

Paper Nr: 98
Title:

Problem-Oriented Conceptual Model and Ontology for Enterprise e-Recruitment

Authors:

Saleh Alamro, Huseyin Dogan, Deniz Cetinkaya and Nan Jiang

Abstract: The Internet-led labour market has become so competitive that it is forcing many organisations from different sectors to embrace e-recruitment. However, realising the value of e-recruitment from a Requirements Engineering (RE) analysis perspective is challenging. This research is motivated by the results of a failed e-recruitment project, used as a case study, and focuses on the difficulty of scoping and representing recruitment problem knowledge in a way that systematically informs the RE process towards an e-recruitment solution specification. In this paper, a Problem-Oriented Conceptual Model (POCM), supported by an Ontology for Recruitment Problem Definition (Onto-RPD), is presented for contextualising the enterprise e-recruitment problem space. Inspired by Soft Systems Methodology (SSM), the POCM and Onto-RPD were produced based on the detailed analysis of three case studies: (1) Secureland Army Enlistment, (2) British Army Regular Enlistment, and (3) the UK Undergraduate Universities and Colleges Admissions Service (UCAS). The POCM and the ontology are demonstrated and evaluated by a focus group against a set of criteria. The evaluation showed that the POCM makes a valuable contribution to representing and understanding the recruitment problem and its complexity.
Download

Paper Nr: 180
Title:

Quality Management in Service Desk - How Does Service Desk Managers Define and Measure Quality

Authors:

Maiju Hjelt and Nestori Syynimaa

Abstract: Many public and private sector organisations depend on IT services provided by external service providers. The quality of the service affects customer satisfaction and, consequently, customer behaviour. The concept of quality has many meanings in the literature. In this paper, we study how service desk managers perceive the concept of quality and how they manage it in an organisation that has adopted ITIL. Our findings indicate that quality is seen only in terms of whether the agreed service levels are achieved. This view excludes the quality of the processes used to deliver IT services. The quality measurements in use reflect this perception of the concept of quality.
Download

Paper Nr: 187
Title:

Generating Persistence Structures for the Integration of Data and Control Aspects in Business Process Monitoring

Authors:

Eladio Domínguez, Beatriz Pérez, Ángel L. Rubio, María A. Zapata, Alberto Allué and Antonio López

Abstract: Today’s organizations have to monitor increasingly complex business processes that handle large amounts of data. In this context, it is essential to design working frameworks that seamlessly integrate both the control flow and data perspectives. Such an integration can be eased by automatically generating the infrastructures for storing data and control aspects. Towards this goal, we propose an automatic process for synthesizing persistence structures for control flow and data storage. In particular, based on an approach centered on the concept of Occurrence, we present a proposal by means of which, after applying several translation patterns to a business process model, we automatically generate the persistence structures that integrate both the data and control aspects of such a model. The feasibility of this proposal is demonstrated by developing a prototype and evaluating its application to different examples taken from the literature as a benchmark.
Download

Paper Nr: 216
Title:

Essence: Reference Architecture for Software Engineering - Representing Essence in Archimate Notation

Authors:

Nestori Syynimaa

Abstract: Essence is a standard for working with methods in software engineering. As such, it can be seen as the reference architecture for software engineering. Essence consists of the Kernel and of a notation called the Language. This representation is not widely known, which likely hinders the adoption of Essence. This paper presents work in progress on representing Essence using ArchiMate, the de facto notation for enterprise architecture. Our purpose is to help organisations adopt Essence by representing it in a language already understood by different stakeholders.
Download

Paper Nr: 244
Title:

DEMO Construction Model Generation Process from Business Model Canvas

Authors:

Novandra Rhezza Pratama and Junichi Iijima

Abstract: Enterprise engineering is a discipline that addresses aspects of the enterprise, including designing and modelling a system. In the system development process, the construction of the Using System is developed into the function of the Object System, followed by the development of the construction of the Object System. Construction can be represented by the DEMO Construction Model and function can be represented by the Business Model Canvas. To manipulate the function of a system, we need to define a specification or construction of that system; therefore we need to be able to generate a construction from a function. This study attempts to create a linkage between the Business Model Canvas and the DEMO Construction Model as a construction design process. A methodology for generating the DEMO Construction Model from the Business Model Canvas is proposed, and a case study on City Logistics is used to illustrate it. We found correspondences between the Business Model Canvas and the DEMO Construction Model, and the proposed methodology proved able to create a DEMO Construction Model from a Business Model Canvas through a step-by-step process.
Download

Paper Nr: 253
Title:

On the Capabilities of Digimaterial Artifacts - Structures, Symbols, Actions

Authors:

Lars Bækgaard

Abstract: The purpose of the paper is to propose and discuss three types of capabilities of digimaterial artifacts like laptop computers, cameras, cars, robots etc. Digimaterial artifacts are material artifacts that combine digital and non-digital elements by bearing one or more digital artifacts. Digital artifacts are linguistic expressions like, say, binary sequences of 0's and 1's. Software and databases are examples of digital artifacts. Paper pieces with digital inscriptions and cars with data and software are examples of digimaterial artifacts. Digimaterial artifacts can bear, and potentially manipulate, digital artifacts. We describe and discuss digimaterial structures and the capabilities that are enabled by these structures. And we describe and discuss the plastic nature of such structures and capabilities. We expect that our work can be used to understand digimaterial capabilities and to analyse and design digimaterial structures that possess a relevant set of capabilities.
Download

Paper Nr: 256
Title:

Specification of Personal Data Protection Requirements - Analysis of Legal Requirements from the GDPR Regulation

Authors:

Mário Fernandes, Alberto Rodrigues Silva and António Gonçalves

Abstract: The European Union establishes in Regulation 2016/679, the GDPR (General Data Protection Regulation), a set of legal dispositions for the protection of natural persons with regard to the processing of personal data and the free movement of such data. When those dispositions are considered in the development of information systems, the latter become eligible for legal approval within that scope. This paper presents the methodology we are following to elaborate a reusable catalogue of personal data protection requirements aligned with the GDPR. Following a separation-of-concerns approach, the catalogue shall serve the purpose of constructing information systems able to communicate with those that process individuals’ personal data, to materialize the regulatory data protection capabilities set out in the GDPR. In that context, the elicitation of system requirements demands the interpretation of a legal document by business analysts, which constitutes a scientifically relevant challenge. This research is contextualized by the RSLingo initiative, a model-driven requirements engineering approach for the rigorous specification of system requirements. In particular, this paper discusses the GDPR’s requirements defined as a catalogue of both business goals and system goals.
Download

Area 4 - Software Agents and Internet Computing

Full Papers
Paper Nr: 1
Title:

A Framework Supporting Literacy in Mathematics and Software Programming - Addressing Some Challenges in STEM Education

Authors:

Georg Peters, Tom Rueckert and Jan Seruga

Abstract: The second half of the last century was characterised by a shift from manufacturing to services, particularly in mature economies. This transformation has accelerated in the past decade, due to rapid progress in information technology. Excellence in the so-called STEM subjects (science, technology, engineering and mathematics) is crucial if countries are to remain competitive. Mathematics as a universally applicable method is of special significance, as is IT, which impacts on virtually all industries and can dramatically change economies. Literacy in mathematics and computers, therefore, is more important than ever for individuals, companies and countries. We propose a framework based on R to support the training of students in these crucial areas. We discuss its features, including platform neutrality, costs and specialization flexibility in our paper.
Download

Paper Nr: 9
Title:

Searching and Ranking Educational Resources based on Terms Clustering

Authors:

Marina A. Hoshiba Pimentel, Israel Barreto Sant'Anna and Marcos Didonet Del Fabro

Abstract: Open Educational Resources (OER) are important digital assets used for teaching and learning. Different repositories exist, but searching for such items is often a difficult task. On the one hand, most solutions implement engines with syntactic search based on term-frequency metrics, or use only the item's metadata. On the other hand, term clustering (TC) has been used in other search and ranking contexts and has shown to be effective. In this paper, we present an approach for searching and ranking Open Educational Resources within a repository of objects, defining a set of tasks and a hybrid metric that integrates ranking metrics obtained through term clustering with the results of existing search engines (SE). We present an extensive implementation and experiments to validate our approach. The results empirically show that our approach is effective at ranking relevant OERs.
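The abstract describes a hybrid metric that merges term-cluster information with search-engine scores. The following is a minimal, illustrative sketch of one way such a combination could look; the linear weighting, the cluster-coverage score and all names are assumptions, not the paper's actual metric.

```python
# Illustrative sketch of a hybrid ranking metric: a linear combination of a
# search-engine relevance score and a term-cluster coverage score.
# The weight `alpha` and all function/variable names are hypothetical.

def cluster_score(resource_terms, query_clusters):
    """Fraction of the query's term clusters that the resource covers."""
    covered = sum(1 for cluster in query_clusters
                  if any(term in resource_terms for term in cluster))
    return covered / len(query_clusters) if query_clusters else 0.0


def hybrid_rank(results, query_clusters, alpha=0.6):
    """Re-rank SE results by alpha * SE score + (1 - alpha) * cluster score."""
    ranked = []
    for resource_id, se_score, terms in results:
        score = alpha * se_score + (1 - alpha) * cluster_score(set(terms), query_clusters)
        ranked.append((resource_id, score))
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)


# Toy usage: two resources, two term clusters derived from the query
results = [("oer-1", 0.9, ["algebra", "matrix"]),
           ("oer-2", 0.7, ["algebra", "vector", "eigenvalue"])]
clusters = [{"algebra", "arithmetic"}, {"vector", "eigenvalue"}]
print(hybrid_rank(results, clusters))
```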
Download

Paper Nr: 22
Title:

X9: An Obfuscation Resilient Approach for Source Code Plagiarism Detection in Virtual Learning Environments

Authors:

Bruno Prado, Kalil Bispo and Raul Andrade

Abstract: In computer programming courses, programming assignments are practically mandatory, especially in a virtual classroom environment. However, source code plagiarism is a major issue in the evaluation of students, since it prevents a fair assessment of their programming skills. This paper proposes an obfuscation-resilient approach based on static and dynamic source code analysis to detect and discourage plagiarized solutions. Rather than focusing on the programming language syntax, which is susceptible to lexical and structural refactoring, a semantic analysis of the instruction and execution flow is performed to compare the behavior of source code. Experiments were based on case studies from real graduation projects and automatic obfuscation methods, showing high accuracy and robustness in plagiarism assessments.
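To make the idea of behaviour-level comparison concrete, here is a small sketch that compares two programs by the similarity of already extracted instruction traces rather than raw text. The toy traces and the use of a plain sequence matcher are assumptions for illustration; the paper's static/dynamic analysis is more elaborate.

```python
# Minimal sketch: compare two programs by the similarity of their abstract
# instruction/execution traces, which renaming and reformatting do not change.

from difflib import SequenceMatcher


def trace_similarity(trace_a, trace_b):
    """Ratio in [0, 1] of matching subsequences between two instruction traces."""
    return SequenceMatcher(None, trace_a, trace_b).ratio()


# Toy traces: a refactored copy keeps the same abstract behaviour.
original = ["LOAD", "LOAD", "ADD", "STORE", "CMP", "JMP"]
obfuscated = ["LOAD", "LOAD", "ADD", "STORE", "CMP", "JMP"]
unrelated = ["CALL", "PUSH", "POP", "RET"]

print(trace_similarity(original, obfuscated))  # 1.0
print(trace_similarity(original, unrelated))   # low
```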
Download

Paper Nr: 72
Title:

Popularity Metrics’ Normalization for Social Media Entities

Authors:

Hiba Sebei, Mohamed Ali Hadj Taieb and Mohamed Ben Aouicha

Abstract: With the spread of online social media websites, a huge amount of online content is continuously produced. However, some content gains considerable attention from users while other content is completely ignored. This motivates the analysis of popularity across different types of social content. Popularity is expressed through measures and features that act as factors expressing and influencing it. Those features vary from one online social media website to another, as they depend on the type of social entity. This paper aims to create a normalized view of popularity metrics, independent of the online social media platform and related to specific social entities, namely the user and the media content (i.e., text, image, and video). We propose a Service Provider Interface (SPI) as a contract between users. The SPI offers a variety of interfaces for implementing services that quantify the popularity of social entities independently of the online social media platform they belong to.
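As an illustration of the SPI idea, the sketch below defines a platform-independent interface that each social media platform implements behind the scenes. Class names, method signatures and the toy normalization are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of a Service Provider Interface (SPI) for popularity
# quantification: each platform supplies its own implementation behind a
# common, platform-independent contract.

from abc import ABC, abstractmethod


class PopularityProvider(ABC):
    """Contract every platform-specific provider must fulfil."""

    @abstractmethod
    def user_popularity(self, user_id: str) -> float:
        """Normalized popularity score in [0, 1] for a user."""

    @abstractmethod
    def content_popularity(self, content_id: str) -> float:
        """Normalized popularity score in [0, 1] for a media item."""


class TwitterProvider(PopularityProvider):
    """Toy implementation mapping follower/retweet counts to [0, 1]."""

    def user_popularity(self, user_id: str) -> float:
        followers = 1200          # would be fetched from the platform API
        return min(followers / 10_000, 1.0)

    def content_popularity(self, content_id: str) -> float:
        retweets = 45             # would be fetched from the platform API
        return min(retweets / 1_000, 1.0)


provider: PopularityProvider = TwitterProvider()
print(provider.user_popularity("alice"))
```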
Download

Paper Nr: 82
Title:

A Multipurpose System for Gamified Experiences

Authors:

Samuel Moreira Timbó, Juliana de Melo Bezerra and Celso Massaki Hirata

Abstract: Gamification is the application of game elements and game design techniques to non-gaming contexts, aiming to provide incentives that help people overcome obstacles and reach a desired engagement and behavior. Nowadays gamification is applied in areas such as Education, Business, Human Resources, Health, and Entertainment. Generally, existing applications are tied to a specific context, making it hard to replicate ideas and adapt them to new scenarios. Here, we present a multipurpose system where users are responsible for creating their own gamified experiences. The system is based on a generalized gamification process and allows customizations through a platform with predefined game elements. We conducted an experiment in which we confirmed the applicability of the proposed system by investigating aspects such as the potential to motivate users, the flexibility to be applied in distinct contexts, and overall usability.
Download

Paper Nr: 164
Title:

An Efficiency Frontier based Model for Cloud Computing Provider Selection and Ranking

Authors:

Lucas Borges de Moraes, Pedro Cirne, Fernando Matos, Rafael Stubs Parpinelli and Adriano Fiorese

Abstract: Cloud computing has become a successful service model that allows hosting and distribution of computational resources all around the world, via the Internet and on demand. This success leveraged and popularized its adoption by all major IT companies, and a large number of new companies were created to compete as providers of cloud computing services. This makes it difficult for customers to choose, among several cloud providers, the most appropriate one for their requirements and computing needs. Therefore, this work proposes a model capable of selecting and ranking cloud providers by analysing the most efficient ones with a popular Multicriteria Decision Analysis (MCDA) method called Data Envelopment Analysis (DEA), which calculates efficiency using Linear Programming (LP) techniques. To accomplish that, the efficiency modeling is based on the analysis of the Performance Indicator (PI) values desired by the customer and those available in the cloud provider database. An example of the method’s usage is given to illustrate the model operation, the selection results and the final provider ranking for five hypothetical customer requests and ten providers.
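Since the abstract names DEA solved via LP, the following is a hedged sketch of the standard input-oriented CCR multiplier model solved with SciPy. The toy performance indicators, the choice of the CCR variant and all names are assumptions; the paper's own DEA formulation may differ.

```python
# Hedged sketch of ranking providers with the CCR multiplier form of DEA,
# solved as a linear program. Input/output matrices are toy data.

import numpy as np
from scipy.optimize import linprog

# rows = providers (DMUs); e.g. input = price, outputs = capacity, availability
inputs = np.array([[10.0], [8.0], [12.0]])
outputs = np.array([[100.0, 0.99],
                    [ 90.0, 0.95],
                    [ 80.0, 0.97]])


def ccr_efficiency(o, inputs, outputs):
    """Input-oriented CCR efficiency of provider `o` (1.0 means efficient)."""
    n, m = inputs.shape
    _, s = outputs.shape
    # variables z = [u_1..u_s, v_1..v_m]; maximize u.y_o  ->  minimize -u.y_o
    c = np.concatenate([-outputs[o], np.zeros(m)])
    # normalisation constraint: v.x_o == 1
    a_eq = [np.concatenate([np.zeros(s), inputs[o]])]
    # for every provider j: u.y_j - v.x_j <= 0
    a_ub = [np.concatenate([outputs[j], -inputs[j]]) for j in range(n)]
    res = linprog(c, A_ub=a_ub, b_ub=np.zeros(n),
                  A_eq=a_eq, b_eq=[1.0], bounds=(0, None))
    return -res.fun


ranking = sorted(range(len(inputs)),
                 key=lambda o: ccr_efficiency(o, inputs, outputs), reverse=True)
print(ranking)  # providers ordered by DEA efficiency
```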
Download

Paper Nr: 224
Title:

Face-based Passive Customer Identification Combined with Multimodal Context-aware Payment Authorization: Evaluation at Point of Sale

Authors:

Adam Wójtowicz and Jacek Chmielewski

Abstract: In smart environments, fast passive transaction authorization is a key requirement for routine, recurring transactions. In our earlier work, the technical feasibility of a multimodal, multi-device system based on a context-aware payment authorization model was demonstrated. Its main features are passive user identification with face recognition followed by multi-criteria selection of transaction authorization methods, which jointly modify the traditional customer service procedure. In the present work, a real-world evaluation of the new approach based on the proposed multimodal payment authorization is described. Empirical tests at an existing point of sale have been performed; the usage data have been collected, statistically analyzed and confronted with the formulated research hypotheses. The research goal is to determine to what extent the approach simplifies the payment process, assuming the security level required for a given context is maintained. The evaluation confirms that the proposed approach can be effective in a real environment.
Download

Short Papers
Paper Nr: 47
Title:

A Methodology for Identifying Influencers and their Products Perception on Twitter

Authors:

Ermelinda Oro, Clara Pizzuti and Massimo Ruffolo

Abstract: The massive amount of information posted by twitterers is attracting growing interest because of the several application fields in which it can be utilized, such as, for instance, e-commerce. Tweets enable users to express opinions about products and to influence other users. Thus, identifying social network key influencers, together with their product perceptions and preferences, is crucial to enable marketers to apply effective viral marketing and recommendation techniques. In this paper, we propose a methodology, based on multilinear algebra, that combines topological and contextual information to identify the most influential twitterers for specific topics or products, along with their perceptions and opinions about them. Experiments on a real use case regarding smartphones show the ability of the proposed methodology to find users that are authoritative in the social network in expressing their views about products and to identify the most relevant products for these users, along with the opinions they express.
Download

Paper Nr: 61
Title:

A Novel Tool to Predict the Impact of Adopting a Serious Game on a Learning Process

Authors:

Ibtissem Daoudi, Raoudha Chebil and Wided Lejouad Chaari

Abstract: In recent years, the rapid development of information and communication technologies has provided the learning field with a variety of new teaching methods to motivate students and improve their skills. The Serious Game (SG) is an example of these new forms of learning, an attractive alternative to classical, often boring, courses. However, the use of SGs in classroom teaching is still limited, since choosing the SG suited to a specific learning environment remains a challenging task that makes teachers unwilling to adopt the concept. Faced with this finding, our aim is to propose a multi-agent-based simulator to predict the effect of adopting an SG in a learning environment, given several game and player characteristics. As a result, the simulator gives the intensities of several emotional aspects characterizing learners' reactions to the SG adoption. Experimentation demonstrates that the results given by the proposed tool are close to real feedback. This work is intended to encourage the use of SGs by giving an expectation of their impact on e-learning processes.
Download

Paper Nr: 76
Title:

SEPL: An IoT Platform for Value-added Services in the Energy Domain - Architectural Concept and Software Prototype

Authors:

Theo Zschörnig, Robert Wehlitz, Ingo Rößner and Bogdan Franczyk

Abstract: The Internet of Things (IoT) is based on ubiquitous smart devices equipped with sensors, actuators and tags which are connected to the Internet. Combining their sensing and actuation capabilities opens up significant possibilities for businesses to provide customers with value-added services in terms of energy management, entertainment, security and convenience. Businesses may use IoT data to develop innovative business models, thus further increasing the value and usefulness of smart devices. Despite these potentials, providers of IoT platforms struggle to tackle some challenges which come with the design and operation of such platforms. In this paper, we propose an architectural concept which aims to bridge this gap and provides an integrated environment for smart device integration, data and analytics as well as IoT-aware processes. We also present the Smart Energy Platform (SEPL) as an instantiation of the proposed concept to evaluate it in terms of its functional feasibility and show that it addresses the issues current IoT platforms face.
Download

Paper Nr: 131
Title:

Transaction Document Management: Case Study of International Passenger Carrier

Authors:

Vladimirs Salajevs, Gatis Vitols, Nikolajs Bumanis and Irina Arhipova

Abstract: As the availability of m-commerce services rapidly increases, more solutions are required to support business processes. One such field is the management of transaction documents that arise during payment transactions, such as transportation tickets, receipts, etc. This position paper addresses the transaction document management issue by applying previously developed improvements of payment procedures, using a micropayment company's payment processing procedures, in a case study of an international passenger carrier. The introduced improvements include the document structure and versioning, the improved interaction setup for the involved parties and the definition of a smartlet content management engine. The improvements were included in a prototype that was successfully developed following the OSGi standard approach for modular development, using the Apache ServiceMix software stack. The proposed transaction document management approach can be implemented in other logistics companies after testing with a larger amount of transactions, and can be considered for application in other business fields.
Download

Paper Nr: 188
Title:

Energy Monitoring IoT Application using Stream Reasoning

Authors:

Varun Shah, Suman Datta, Debraj Pal, Prateep Misra and Debnath Mukherjee

Abstract: We consider the application of stream reasoning to the problem of monitoring the energy consumption of premises comprising several buildings, each with multiple floors. The floors have energy meters in several categories, such as AC, UPS and Lighting. The objective is to compute the real-time aggregate energy consumption and to alert whenever energy consumption thresholds are crossed at the building, floor or meter-type level, thus determining whether there is overloading. We also want a solution that can easily be applied to a large number of floors and buildings. We show how just a few continuous SPARQL queries and performance-enhancing rules can implement the solution. Finally, we compare the performance of queries with and without the HAVING clause, and with and without using entailments from rules.
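To illustrate the kind of aggregation-with-threshold query the abstract refers to, here is a small sketch using rdflib over a static toy graph (a true stream reasoner would evaluate it continuously). The URIs, property names and the 50 kW threshold are made up for the example.

```python
# Minimal sketch: sum meter readings per floor and raise an alert where the
# total crosses a threshold, using a SPARQL GROUP BY / HAVING query in rdflib.

from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/energy#")
g = Graph()

# toy readings: (meter, floor, kW)
for meter, floor, kw in [("m1", "floor1", 40.0), ("m2", "floor1", 35.0),
                         ("m3", "floor2", 20.0)]:
    m = EX[meter]
    g.add((m, RDF.type, EX.Meter))
    g.add((m, EX.onFloor, EX[floor]))
    g.add((m, EX.reading, Literal(kw)))

query = """
PREFIX ex: <http://example.org/energy#>
SELECT ?floor (SUM(?kw) AS ?total)
WHERE {
  ?meter a ex:Meter ;
         ex:onFloor ?floor ;
         ex:reading ?kw .
}
GROUP BY ?floor
HAVING (SUM(?kw) > 50.0)
"""

for floor, total in g.query(query):
    print(f"overload alert: {floor} at {total} kW")
```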
Download

Paper Nr: 194
Title:

MOOCs Recommender System using Ontology and Memory-based Collaborative Filtering

Authors:

Kahina Rabahallah, Latifa Mahdaoui and Faiçal Azouaou

Abstract: With the proliferation of Massive Open Online Courses (MOOCs), online learners are exposed to various challenges. The lack of personalized recommendation of MOOCs can drive learners to choose irrelevant MOOCs, lose their motivation and abandon the learning process. A Recommender System (RS) plays an important role in assisting learners to find appropriate MOOCs, improving learners’ engagement and their satisfaction/completion rates. In this paper, we propose a MOOCs recommender system combining memory-based Collaborative Filtering (CF) techniques and an ontology to recommend personalized MOOCs to online learners. In our approach, the ontology provides a semantic description of the learner and the MOOC, which is incorporated into the recommendation process to improve the personalization of recommendations, whereas CF computes predictions and generates recommendations. Furthermore, our hybrid approach can relieve the cold-start problem by making use of ontological knowledge before the initial data to work on are available in the recommender system.
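The memory-based CF component can be pictured with the classic user-based prediction scheme sketched below; the toy rating matrix, cosine similarity and the omission of the ontology layer are all assumptions for illustration only.

```python
# Hedged sketch of memory-based collaborative filtering: predict a learner's
# rating for a MOOC as a similarity-weighted average of other learners' ratings.

import numpy as np

# rows = learners, columns = MOOCs, 0 = not rated (toy data)
ratings = np.array([[5.0, 3.0, 0.0, 1.0],
                    [4.0, 0.0, 0.0, 1.0],
                    [1.0, 1.0, 0.0, 5.0],
                    [0.0, 1.0, 5.0, 4.0]])


def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0


def predict(user, item, ratings):
    """Similarity-weighted average of other users' ratings for `item`."""
    sims, vals = [], []
    for other in range(ratings.shape[0]):
        if other != user and ratings[other, item] > 0:
            sims.append(cosine(ratings[user], ratings[other]))
            vals.append(ratings[other, item])
    return np.dot(sims, vals) / sum(sims) if sims else 0.0


print(predict(user=0, item=2, ratings=ratings))  # predicted rating of MOOC 2
```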
Download

Paper Nr: 196
Title:

On the Performance of Cloud-based Spreadsheets as a Backend for View-only Web Applications

Authors:

Andrea Schwertner Charão, Felipe Marin, João Carlos D. Lima, Cristiano Cortez da Rocha and Luiz Angelo Steffenel

Abstract: The wide offering of cloud-based services brings alternatives to traditional approaches for developing modern information systems. In this work, we examine cloud spreadsheet services as an alternative data backend for small-scale, view-only web applications. We review six cloud-based spreadsheets offering data access APIs to third-party applications, and then present a set of performance tests over spreadsheets hosted on Google Sheets. Preliminary findings show a performance penalty for transferring JSON-formatted data and a substantial failed-request rate under many simultaneous accesses.
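For readers unfamiliar with spreadsheet-as-backend access, the sketch below shows one way a view-only application can fetch a range as JSON, here via the Google Sheets API v4 values endpoint. The spreadsheet id, range and API key are placeholders, and the paper's experiments may have used a different API version or access method.

```python
# Illustrative read-only fetch of spreadsheet data as JSON (Sheets API v4).
# Placeholders must be replaced; error handling and quotas are omitted.

import requests

SPREADSHEET_ID = "YOUR_SPREADSHEET_ID"   # placeholder
RANGE = "Sheet1!A1:D50"                  # placeholder
API_KEY = "YOUR_API_KEY"                 # placeholder

url = (f"https://sheets.googleapis.com/v4/spreadsheets/"
       f"{SPREADSHEET_ID}/values/{RANGE}")
response = requests.get(url, params={"key": API_KEY}, timeout=10)
response.raise_for_status()

rows = response.json().get("values", [])  # list of rows, each a list of cells
for row in rows:
    print(row)
```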
Download

Paper Nr: 212
Title:

Mobile Gift Recommendation Framework - A COREL Framework Approach

Authors:

Caíque de Paula Pereira, Ruyther Parente da Costa and Edna Dias Canedo

Abstract: This paper proposes a recommendation algorithm for mobile devices based on the COREL framework. The mobile application market and m-commerce sales have grown steadily, along with studies and product recommendation solutions implemented in e-commerce systems. The proposed recommendation algorithm is a customization of the COREL framework, motivated by the complexity of implementation on iOS mobile applications. This work thus customizes a gift recommendation algorithm in the context of mobile devices, using user preferences as the main input for gift recommendation in the Giftr application. The algorithm was tested and improved over three test cycles; the results suggest that it performs well and recommends gifts consistent with the user's preferences.
Download

Paper Nr: 225
Title:

A Systematic Review of Scheduling Algorithms and Resource Management in Context-aware Applications: A Meta-analytic Approach

Authors:

Fernando Emilio Puntel, Andrea Charão, Maria Helena Franciscatto, João Carlos Damasceno Lima and Cristiano Cortez da Rocha

Abstract: Computer resource management and scheduling algorithms have been exploited in order to run applications in an efficient and effective way. Since this area is heavily explored and has a range of application domains, it is possible to apply various techniques from other distributed computing areas, such as context-aware applications. In this paper, we present a systematic literature review that addresses context-aware applications together with resource management and scheduling algorithms. In total, 11 studies published between 2000 and 2017 met the inclusion criteria of this systematic review, presenting techniques and improvements for the area. The analysis of the selected studies revealed a diversity of application domains using a variety of technologies.
Download

Paper Nr: 227
Title:

A Semantic-based Approach for Facilitating Arbovirus Data Usage

Authors:

Aparecida Santiago, André Alencar, Amanda Souza, Erika Araruna, Isabel Fernandes and Damires Souza

Abstract: Today’s continuous growth of healthcare information entails an increasing need to use large amounts of data. In particular, the incidence of arboviruses has been on the rise in some countries, which creates specific needs for studies and for the definition of public strategies. In this light, providing a computational platform for the use and reuse of arbovirus-related data may help matters. The idea is that different applications and users can make use of that data in diverse ways. In this work, we propose a semantic-based approach for facilitating the use and reuse of arbovirus-related data. We present the definitions underlying our approach, examples illustrating how it works, and some promising results we have obtained.
Download

Paper Nr: 13
Title:

Fostering Collaboration on Decision Processes

Authors:

Jorge Augusto Pessatto Mondadori and Juliana de Melo Bezerra

Abstract: Due to intrinsic complexity and uncertainty, decision problems require involvement of stakeholders with distinct backgrounds and points of view. Collaboration among stakeholders is then essential to identify the problem and find solutions. We propose a framework with guidelines to aid decision-makers, together with a facilitator, to structure and solve a problem collaboratively in a virtual environment. The framework points out how collaboration takes place during the whole decision process, including phases to structure the problem, to apply the multi-criteria decision analyses, and to explore the sensitivity analyses in order to reach the final result. We conducted an empirical evaluation, where decision makers reported benefits of the framework to engage stakeholders, to congregate ideas, and to reduce the duration of the decision process.
Download

Paper Nr: 93
Title:

Modular Automation Design for Equipment Management

Authors:

Fabiano Santos, Bruno Prado, Daniel Dantas and Kalil Bispo

Abstract: This paper presents a modular and low-cost residential automation solution, using the Internet of Things (IoT) concept, for controlling and monitoring electronic equipment. The development of this work is divided in two parts: the configuration of electronic devices and the use of the Home Assistant platform. The Home Assistant platform allows the control and monitoring of electronic equipment over the Internet, and uses the MQTT communication protocol for integration with the devices. An auxiliary configuration application has been developed to dynamically configure the pins of the sensors and actuators, and an electronic device was developed that is responsible for reading and controlling a set of sensors and actuators. The results of the proposed project were positive: it supports a larger number of sensors and has the lowest cost, of US$1.78.
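As an illustration of the MQTT integration the abstract mentions, the sketch below publishes a sensor reading to a broker with the paho-mqtt client (1.x-style constructor). The broker address, topic and JSON payload format are assumptions, not details from the paper or from Home Assistant's own configuration.

```python
# Minimal sketch: a device publishes a sensor reading over MQTT so that a
# platform such as Home Assistant can pick it up from the broker.

import json
import paho.mqtt.client as mqtt

BROKER = "192.168.0.10"      # placeholder broker address (e.g. the home server)
TOPIC = "home/livingroom/temperature"

client = mqtt.Client()                      # paho-mqtt 1.x style constructor
client.connect(BROKER, 1883, keepalive=60)  # default MQTT port

payload = json.dumps({"temperature": 23.5, "unit": "C"})
client.publish(TOPIC, payload, qos=1, retain=True)
client.disconnect()
```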
Download

Paper Nr: 140
Title:

An Efficient Approach for Service Function Chain Deployment

Authors:

Dan Liao, Guangyang Zhu, Yayu Li, Gang Sun and Victor Chang

Abstract: With the popularity and development of Cloud Computing, Network Function Virtualization (NFV) and Service Function Chain (SFC) provisioning have attracted more and more attention from researchers. With the increasing number of users and demands for network resources, network resources are becoming extremely valuable. Therefore, it is necessary to design an efficient algorithm to provision SFCs with minimum consumption of bandwidth resources. In this paper, we study the problem of cost-efficient deployment of SFCs to reduce the consumption of bandwidth resources. We propose an efficient algorithm for SFC deployment based on strategies of layering the physical network and evaluating physical network nodes to minimize the bandwidth resource consumption (SFCD-LEMB). It aims at deploying the Virtualized Network Functions (VNFs) of the SFC onto appropriate nodes and mapping the SFC onto a reasonable path by layering the physical network. Simulation results show that the average gains of our algorithm in bandwidth consumption, acceptance ratio and time efficiency are 50%, 15% and 60%, respectively.
Download

Paper Nr: 154
Title:

Novel IoT Applications Enabled by TCNet: Trellis Coded Network

Authors:

Diogo F. Lima Filho and José Roberto Amazonas

Abstract: This work presents new results on routing in Wireless Sensor Networks, an important infrastructure for the Internet of Things architecture, using the new concept of Trellis Coded Network (TCNet). TCNet is based on the concepts of convolutional codes and trellis decoders, which allow routing of data collected by randomly distributed micro sensors in ad hoc network scenarios. This model uses Mealy Machines, i.e. low-complexity Finite State Machine network nodes ("XOR" gates and shift registers), eliminating the use of routing tables and enabling the implementation of important IoT applications such as Sensor Network Virtualization, as well as scenarios where clusters of nodes cover large areas of interest over which the sensors are distributed. The application of TCNet algorithm concepts in cases such as VSNs and clustering is facilitated by the flexibility of TCNet to implement route management, making it a tool to be adopted by Sensor Infrastructure Providers aiming to deploy, for example, QoS-aware end-to-end services.
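The kind of shift-register-and-XOR node logic the abstract describes can be pictured with the toy convolutional encoder below. The generator polynomials (7, 5 in octal) are a standard textbook choice and are not necessarily the ones used in TCNet; the code only illustrates the low-complexity finite state machine idea.

```python
# Toy sketch of a rate-1/2 convolutional encoder built from a shift register
# and XOR gates, i.e. a small finite state machine of the sort TCNet nodes use.

def conv_encode(bits, g1=0b111, g2=0b101, memory=2):
    """Encode a bit sequence with a (2,1,2) convolutional encoder."""
    state = 0  # contents of the shift register (previous `memory` bits)
    out = []
    for bit in bits:
        reg = (bit << memory) | state             # current input + memory bits
        out.append(bin(reg & g1).count("1") % 2)  # XOR of taps for output 1
        out.append(bin(reg & g2).count("1") % 2)  # XOR of taps for output 2
        state = reg >> 1                          # shift the register
    return out


print(conv_encode([1, 0, 1, 1]))  # -> [1, 1, 1, 0, 0, 0, 0, 1]
```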
Download

Paper Nr: 203
Title:

Cloud Computing - Design of a Management Model for Service Migration using ITIL as Knowledge Manager

Authors:

Henry F. López G., Oscar G. Paredes C. and Freddy M. Tapia L.

Abstract: Nowadays, IT departments face numerous problems which considerably affect their performance. Several factors endanger information: technological obsolescence that shortens the useful life of equipment, the amount of data that demands storage (Anaya, Díaz, y Bárcenas, 2017), high equipment costs, skilled labour and the timely update of the IT infrastructure (Vega, 2012). To solve the problems mentioned before, Cloud Services were developed, given the boom in terms of use and ease of access to the technology they promote, as well as flexibility in storage capacity, ample deployment of resources and security in case of recovery in the event of loss of continuity of service. All this is based on an optimization or reduction of costs for companies, thus allowing a great competitive advantage at a technological level (Ávila, 2011). The present study, through an analysis of security standards against several subprocesses of the ITIL V3 reference framework and an analysis of infrastructure costs (on-premises versus IaaS), led to the development of a methodological proposal to migrate services to the Cloud, based on the use of good practices and on optimizing resources according to the needs of each organization.
Download

Area 5 - Human-Computer Interaction

Full Papers
Paper Nr: 27
Title:

A Scientometric Approach for Personalizing Research Paper Retrieval

Authors:

Nedra Ibrahim, Anja Habacha Chaibi and Henda Ben Ghézala

Abstract: Scientific researchers are a special kind of users who know their objective. One of the challenges facing today's researchers is how to find qualitative information that meets their needs. One potential method for assisting scientific researchers is to employ a personalized definition of quality to focus information search results. Scientific quality is measured by means of a set of scientometric indicators. This paper presents a personalized information retrieval approach based on scientometric indicators. The proposed approach includes a scientometric document annotator, a scientometric user model, a scientometric retrieval model and a scientometric ranking method. We discuss the feasibility of this approach by performing different experiments on its different parts. The incorporation of scientometric indicators into the different parts of our approach significantly improved retrieval performance, with an improvement rated at 41.66%. An important implication of this finding is the existence of a correlation between research paper quality and paper relevance, which implies better retrieval performance.
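One simple way to picture scientometric-aware ranking is a weighted combination of a textual relevance score with a quality score built from indicators such as the h-index and citation counts. The indicators, weights and normalization below are purely illustrative assumptions, not the paper's actual ranking method.

```python
# Hypothetical sketch: re-rank retrieved papers by combining textual relevance
# with a normalized scientometric quality score.

def scientometric_score(h_index, citations, max_h=100, max_cit=10_000):
    """Toy quality score in [0, 1] from two scientometric indicators."""
    return 0.5 * min(h_index / max_h, 1.0) + 0.5 * min(citations / max_cit, 1.0)


def rank(papers, beta=0.7):
    """Order papers by beta * relevance + (1 - beta) * scientometric quality."""
    scored = [(p["id"],
               beta * p["relevance"]
               + (1 - beta) * scientometric_score(p["h_index"], p["citations"]))
              for p in papers]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)


papers = [{"id": "p1", "relevance": 0.9, "h_index": 5, "citations": 12},
          {"id": "p2", "relevance": 0.7, "h_index": 60, "citations": 4000}]
print(rank(papers))
```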
Download

Paper Nr: 42
Title:

Contextual Design in Industrial Settings: Experiences and Recommendations

Authors:

Mirjam Augstein, Thomas Neumayr, Sebastian Pimminger, Christine Ebner, Josef Altmann and Werner Kurschl

Abstract: The Contextual Design (CD) methodology offers a framework for planning and implementing a user-centered design process throughout all project phases. It is team-based and was designed especially for interdisciplinary teams. The application of CD is particularly profitable in projects confronted with implicit requirements and hidden factors of influence. In contrast to many other design and evaluation methods such as focus groups or usability tests, CD does not take users out of their everyday setting and more easily reveals important design issues and contextual influences like users’ motivation, values, emotions or real-time interruptions. Despite these advantages, CD is often not used due to high costs in terms of time and effort. This paper provides a report on experiences with CD in two research projects in the industry domain. It is intended to help other researchers to plan and implement a CD process in industrial settings and benefit from our lessons learned.
Download

Paper Nr: 57
Title:

Hybrid Model of Emotions Inference - An Approach based on Fusion of Physical and Cognitive Informations

Authors:

Ernani Gottardo and Andrey Ricardo Pimentel

Abstract: Adapting to users' affective states is a key feature for building a new generation of more user-friendly, engaging and interactive software. In the educational context this feature is especially important, considering the intrinsic relationship between emotions and learning. This paper presents, as its main contribution, a hybrid model for inferring learning-related emotions. The model combines the physical and cognitive elements involved in the generation and control of emotions: facial expressions are used to identify students' physical emotional reactions, while events occurring in the software interface provide information for the cognitive component. Initial results obtained with the model demonstrate the feasibility of this proposal and indicate some promising results. In a first experiment with eight students, an overall emotion inference accuracy of 60% was achieved while students used a game-based educational software. Furthermore, using the model's inferences it was possible to build a pattern of the students' learning-related affective states. This pattern can be used to guide automatic tutorial interventions or the application of specific pedagogical techniques to soften negative learning states like frustration or boredom, trying to keep the student engaged in the activity.
Download

Paper Nr: 78
Title:

NExPlay - Playability Assessment for Non-experts Evaluators

Authors:

Felipe Sonntag Manzoni, Bruna Moraes Ferreira and Tayana Uchoa Conte

Abstract: Playability is related to the overall quality of the gameplay in a video game. It is important to evaluate playability so that games can fulfill the expectations of every player. However, given the high demand on game producers, limited budgets and deadlines, this type of evaluation is not always conducted, affecting the experience of players. One possible way to evaluate playability is through heuristic evaluations. Some researchers have conducted studies to develop heuristic sets that can cover the existing variety of games. However, given their broad comprehensiveness, those heuristic sets are too extensive, which may affect the feasibility of the assessment. Therefore, this work proposes the NExPlay (Non-Expert Playability) heuristic set, with the objective of minimizing the time and cost needed for playability assessment. We also aim to formulate a set that can be used by non-experts in playability. The proposed heuristic set is assessed through a controlled experiment aimed at measuring its efficiency and effectiveness in comparison to another heuristic set. The results indicate that NExPlay identified problems with more objective descriptions, and participants were able to understand the descriptions of the NExPlay heuristics more easily.
Download

Paper Nr: 94
Title:

Towards an Approach for Incorporating Usability Requirements into Context-Aware Environments

Authors:

Dorra Zaibi, Meriem Riahi and Faouzi Moussa

Abstract: With the considerable advancement of technologies and the proliferation of mobile devices, evaluating the usability of software applications has become an emerging research area, and improving the quality of software applications is crucial in context-aware environments. For that reason, increasing attention is being drawn to the development and adoption of appropriate research proposals able to evaluate mobile application usability. This article contributes a methodology for the development of context-aware systems based on usability requirements during the user interface design stage. In particular, the approach focuses on how to infer consistent context-based usability requirements and how to incorporate these requirements into a user interface development process. As a proof of concept, we have applied our methodology to an illustrative case study. Moreover, experiments with end users have been carried out.
Download

Paper Nr: 160
Title:

Identifying a Medical Department based on Unstructured Data - A Big Data Application in Healthcare

Authors:

Veena Bansal, Abhishek Poddar and R. Ghosh-Roy

Abstract: Health is an individual’s most precious asset and healthcare is one of the vehicles for preserving it. The Indian government’s spending on the healthcare system is relatively low (1.2% of GDP). Consequently, secondary and tertiary government healthcare centers in India (which are presumed to have above-average ratings) are always crowded. In tertiary healthcare centers like AIIMS, patients are often unable to articulate their problems correctly to the reception staff so that they can be directed to the correct healthcare department. In this paper, we propose a system based on Big Data and Machine Learning to direct the patient to the most relevant department. We have implemented and tested parts of this system: a patient enters his symptoms and/or a provisional diagnosis, and the system suggests a department based on this input. Our system suggests the correct department 68.05% of the time. It presently makes its suggestions using a gradient boosting algorithm trained on two information repositories: symptom and disease data, and a functional description of each medical department. It is our informed assumption that, once we have incorporated medicine information, diagnostic imaging data and the complete medical history of the patient to train the system, its performance will improve significantly.
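The classification step described here (free-text symptoms in, department out, via gradient boosting) can be pictured with the scikit-learn sketch below. The feature extraction (TF-IDF), the toy training sentences and the department labels are assumptions; the authors' actual features and repositories are not shown.

```python
# Hedged sketch: map free-text symptom descriptions to a medical department
# with TF-IDF features and a gradient boosting classifier.

from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# toy training data, not the repositories used by the authors
train_texts = ["chest pain and shortness of breath",
               "skin rash and itching",
               "severe headache and blurred vision",
               "irregular heartbeat and fatigue"]
train_departments = ["cardiology", "dermatology", "neurology", "cardiology"]

model = make_pipeline(TfidfVectorizer(), GradientBoostingClassifier())
model.fit(train_texts, train_departments)

print(model.predict(["palpitations and chest discomfort"]))
```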
Download

Paper Nr: 184
Title:

Usability Heuristics for Mobile Applications - A Systematic Review

Authors:

Marcos Antonio Durães Dourado and Edna Dias Canedo

Abstract: Usability is one of the factors that most affect software quality. The increasing adoption of mobile devices brings new usability challenges, as well as a need for specific standards for this type of product. This paper presents a systematic review of the literature, complemented by manual and snowballing searches, to obtain usability heuristics and heuristic evaluations for mobile applications. The result of the study is a set of thirteen usability heuristics, specific to smartphones, related to Nielsen's ten heuristics. In addition, five possible ways of evaluating the usability of mobile applications are described. The specification of the heuristics found shows that they can be used both for the evaluation of already developed applications and for the prototyping of new applications, which helps developers achieve their goals regarding product quality. The main contribution of this work is the compilation of desktop usability heuristics into a new, more specific set of heuristics adapted to the mobile paradigm.
Download

Short Papers
Paper Nr: 48
Title:

The Effect of Trust, Perceived Risk and Security on the Adoption of Mobile Banking in Morocco

Authors:

Younes Lafraxo, Fadoua Hadri, Hamza Amhal and Amine Rossafi

Abstract: This short paper presents an acceptability model based on UTAUT (Unified Theory of Acceptance and Use of Technology) and three additional factors, namely Perceived Risk, Security and Trust. The model was tested using 460 responses obtained from the almost 720 mobile banking application users of five banks (CIH, BP, AWB, CM and SGMB) in Marrakech, Morocco. The analysis of the first replies reveals that Performance Expectancy, Effort Expectancy, Social Influence and Security in mobile banking have a significant positive impact on users’ behavioural intention to accept mobile banking services. However, Trust, Facilitating Conditions and Perceived Risk in the mobile application do not positively influence behavioural intention. The resulting model of this study, which is still in progress, explains almost 62% of users’ intention to use mobile banking.
Download

Paper Nr: 101
Title:

A New Approach to Visualise Accessibility Problems of Mobile Apps in Source Code

Authors:

Johannes Feiner, Elmar Krainz and Keith Andrews

Abstract: A wide range of software development is moving towards the domain of mobile applications. Single developers or small teams create apps for smartphones, but too often they lack the capacity or know-how to check for usability problems and do not care for accessibility. We propose a novel workflow to bring usability issues into the development process: a quick accessibility evaluation (QAC) with 15 predefined metrics allows issues to be collected. These issues are then condensed into a formalised representation (UsabML) and tagged with their location in the source code. A dashboard view (RepoVis) showing the source code from a repository allows developers to spot and interactively inspect code and related issues simultaneously.
Download

Paper Nr: 157
Title:

The Use of Electroencephalogram and Electrodermal Signals in Reinforcement Learning of a Brain-Computer Interface

Authors:

Werley de Oliveira Gonçalves, Gizelle Kupac Vianna and Luiz Maltar Castello Branco

Abstract: The objective of this work is to compare the performance of two brain-computer interfaces developed by our research group. Both interfaces collect the electrical signals produced by the human body while a person tries to move a cursor on a digital screen using only thought. The collected signals are classified using artificial neural networks: the first interface uses electroencephalogram signals, collected from the scalp, to classify the mental command, and the second uses the electrodermal signal, collected from any right-hand finger. Besides analysing the performance of the two approaches, this research contributes to reducing the training time achieved by similar systems, reported in the literature as averaging 45 days, to only about 40 minutes. Our motivation is to facilitate accessibility for people with temporary or permanent physical limitations. In addition, we have developed a low-cost signal collection platform, providing a solution that can help a large group of people.
Download

Paper Nr: 77
Title:

How Can Visualization Affect Security?

Authors:

Joana Muchagata and Ana Ferreira

Abstract: Technology such as computers and especially mobile devices has changed the way people see and interact with the world. Many of our everyday tasks are completed only by using technology supported by different platforms (desktop computers, laptops, tablets and smartphones), so the visualization of content is presented differently depending on the device used and the type of information requested. However, even with user-adaptive systems, which can adjust interface content according to an individual’s needs and context, data privacy can be at risk, as these techniques do not aim to protect data or even identify the presence of vulnerabilities. The main goal of this paper is to analyse which techniques are available to adapt visualization to users’ needs and the context of each interaction with different devices, and which of them can be applied to improve the security and privacy of visualized data. Two use cases are presented to compare traditional access with access using visualization techniques that improve security and mitigate privacy vulnerabilities of healthcare data. More research is needed to define and validate security visualization techniques integrated into human-mobile interactions, to better provide for the security and privacy of sensitive data.
Download

Paper Nr: 81
Title:

SocialCount - Detecting Social Interactions on Mobile Devices

Authors:

Isadora Vasconcellos e Souza, João Carlos Damasceno Lima, Benhur de Oliveira Stein and Cristiano Cortez da Rocha

Abstract: With mobile devices increasingly powerful and accessible to the majority of the population, applications have begun to become increasingly intelligent, customizable and adaptable to users’ needs. To do this, context-aware applications are developed. In this work, we create an approach to infer social interactions through the identification of the user’s voice and to recognize their social context. Data from the social context of the user has been useful in many real-life situations, such as identifying and controlling infectious disease epidemics.
Download

Paper Nr: 137
Title:

Designing Interactions with Furniture - Towards Multi-sensorial Interaction Design Processes for Interactive Furniture

Authors:

Pedro Campos, Nils Ehrenberg and Miguel Campos

Abstract: In this paper, we argue for novel user experience design methods, in the context of reimagining ergonomics of interactive furniture. There is a need for improving both creativity and productivity at the workplace, and there is ample room for scientific advancements brought by embedded systems, sensors and actuators which can now be part of future pieces of furniture. Creative industries’ workers are becoming more prominent as countries move towards intellectual-based economies. Consequently, the workplace needs to be reconfigured so that creativity and productivity can be better promoted at these spaces. This position paper presents several directions that can shed light on how we can better design interactive furniture for the workplace. In particular, we argue for a multisensorial approach as a promising way of achieving the above-mentioned goals.
Download

Paper Nr: 183
Title:

Impact of Culture Dimensions Model on Cross-Cultural Website Development

Authors:

Gatis Vitols and Yukako Vitols-Hirata

Abstract: In the cross-cultural website design literature, three strategies are often mentioned: globalization, internationalization and localization, with localization and internationalization the most cited. To develop localised websites for different cultures, two models are widely applied: the culture marker model and the culture dimensions model. The marker model identifies system elements (e.g. calendar, language, date formats) that require modification; since its introduction, authors have widely applied marker identification to cross-cultural system design. The culture dimensions model includes multiple subordinate models, or cultural dimension models, derived from previously published cultural META models. With the culture models and dimensions they include, authors try to analyse and compare various cultures in order to acquire internal characteristics of target cultures. However, the application of the culture dimensions model in the information and communication technology field for system development is still questionable, and more research is needed on applying this model to develop methods for more usable and accessible website design. The aim of this article is to perform a literature review on the impact of the culture dimensions model on cross-cultural website development, towards the further development of application methodologies. It can be concluded that analysis of the culture dimensions (particularly the Hofstede model) facilitates the process of gathering culture preferences and identifying evaluation methods for target users, and can be applied to cross-cultural website design. Culture dimensions affect a website’s graphical information, navigation design, text design, the creation of interaction elements, and the design of input elements.
Download

Paper Nr: 197
Title:

You Are Okay - Towards User Interfaces for Improving Well-being

Authors:

Pedro F. Campos

Abstract: Well-being is a relatively broad concept which can be succinctly described as the state of being happy, healthy or successful. Interesting things happen when bridging user interface design with the psychology of human well-being. This position paper provides a short reflection on the challenges and opportunities in this context and presents concrete examples of how to tackle these challenges and exploit the existing design opportunities.
Download

Paper Nr: 200
Title:

Offline Speech Recognition Development - A Systematic Review of the Literature

Authors:

Lucas Debatin, Aluizio Haendchen Filho and Rudimar L. S. Dazzi

Abstract: This paper presents the state of the art of speech recognition based on a systematic review of the literature. For this, 222 papers from four digital repositories were examined. The research followed a methodology composed of search questions, a search expression, and inclusion and exclusion criteria. After reading the abstract, introduction and conclusion of each paper, nine were selected. Based on the analysis of the selected papers, we observed that the research prioritizes the following topics: (i) solutions to reduce the error rate; (ii) neural networks for language models; and (iii) n-gram statistical models. However, no solution was offered for offline voice recognition on Android mobile devices. The information obtained is very useful for acquiring the knowledge needed to develop offline voice recognition on mobile devices, and the surveyed techniques provide guidelines for applying the best neural networks and mechanisms for reducing error rates.
Download

Paper Nr: 233
Title:

Aspects of User Experience Maturity Evolution of Small and Medium Organizations in Brazil

Authors:

Angela Lima Peres and Alex Sandro Gomes

Abstract: This paper investigates how user experience design practices evolve in small and medium Brazilian organizations and how this evolution relates to the dimensions of User Experience Maturity Models. A qualitative approach was carried out: eight user experience managers or analysts were asked about the process of incorporating User Experience practices and the strategies adopted to deal with the limitations of small and medium software organizations. A semi-structured interview script was developed specifically for this study. Data collection was carried out through interviews with the Skype® tool, and qualitative analysis was performed with the aid of the MAXQDA® software. Through content analysis, the study presents and discusses the strategies adopted by the eight User Experience designers and their relation to the dimensions of User Experience Maturity Models. The difficulties faced by small and medium organizations are discussed, and some alternatives adapted to small budgets and limited human resources are presented.
Download

Area 6 - Enterprise Architecture

Full Papers
Paper Nr: 58
Title:

Management of Data Value Chains, a Value Monitoring Capability Maturity Model

Authors:

Rob Brennan, Judie Attard and Markus Helfert

Abstract: This paper identifies management capabilities for data value chains as a gap in current data value research. It specifies a data value management capability framework and a first data value monitoring capability maturity model (CMM). This framework and CMM will enable organisations to identify and measure the current state of their data value monitoring processes, and show how to take steps to enhance value monitoring in order to exploit the full data value potential in their organisation. This new approach to data value management is needed since, despite the success of Big Data and the appeal of the data-driven enterprise, there is little evidence-based guidance for maximising data value creation. To date, most data value optimisation has focused on technological gains such as data platforms or analytics, without bridging the gap to organisational knowledge or human factors research. The evidence of best practice gathered here from the state of the art shows that there is a hierarchy of data value dimensions for data value monitoring, starting with cost and peaking with utility (understanding value creation). The models are validated by a case study of three organisations that are managing data value and using it to support strategic decision-making.
Download

Paper Nr: 114
Title:

A MapReduce Approach for Mining Multi-Perspective Declarative Process Models

Authors:

Christian Sturm, Stefan Schönig and Stefan Jablonski

Abstract: Automated process discovery aims at generating a process model from an event log. Such models can be represented as a set of declarative constraints where temporal coherencies can also be intertwined with dependencies upon value ranges of data parameters and resource characteristics. Existing mining tools do not support multi-perspective constraint discovery or are not efficient enough. In this paper, we propose an efficient mining framework for discovering multi-perspective declarative models that builds upon the distributed processing method MapReduce. Mining performance and effectiveness have been tested on several real-life event logs.
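The MapReduce-based discovery idea can be illustrated with the small, single-machine sketch below: a map step emits fulfilment observations for candidate "response(A, B)" constraints per trace, and a reduce step aggregates them into per-constraint support. The event log, the single constraint template and the support definition are toy assumptions, not the paper's framework.

```python
# Minimal sketch (plain Python standing in for MapReduce primitives) of mining
# support for declarative "response(A, B)" constraints: whenever A occurs in a
# trace, B must occur later in the same trace.

from collections import defaultdict
from itertools import chain

event_log = [["a", "b", "c"], ["a", "c"], ["b", "a", "b"]]   # one list per trace


def map_trace(trace):
    """Emit ((A, B), fulfilled) pairs for every candidate response constraint."""
    activities = set(trace)
    for a in activities:
        last_a = max(i for i, e in enumerate(trace) if e == a)
        for b in activities - {a}:
            fulfilled = any(e == b for e in trace[last_a + 1:])
            yield (a, b), 1 if fulfilled else 0


def reduce_counts(pairs):
    """Aggregate per-constraint fulfilment ratios across all traces."""
    totals, hits = defaultdict(int), defaultdict(int)
    for key, value in pairs:
        totals[key] += 1
        hits[key] += value
    return {key: hits[key] / totals[key] for key in totals}


support = reduce_counts(chain.from_iterable(map_trace(t) for t in event_log))
print({k: v for k, v in support.items() if v == 1.0})  # constraints holding in every activating trace
```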
Download

Short Papers
Paper Nr: 60
Title:

Enterprise Architecture - To Business or Not to Business? That Is The Question!

Authors:

Nestori Syynimaa

Abstract: The concept of enterprise architecture (EA) is widely known in the Information Systems (IS) field. Traditionally, EA is categorized as an IS issue, focusing mainly on information and communications technology (ICT) aspects. Recently, some researchers have insisted that scholars and practitioners should pay more attention to the business aspects of EA. This scoping study seeks to find out the current status of EA research in the Management Science (MS) field. For this purpose, we reviewed the top MS journals to find out if and how the concepts related to EA are researched by MS scholars. The results reveal that EA concepts are researched by MS scholars and reported in the top MS literature. However, although the concepts are the same, the vocabularies used in the EA and MS fields are different.
Download

Paper Nr: 85
Title:

How the LEGO Group Is Embarking on Architectural Path Constitution to Transform Its Information Infrastructure into a Digital Platform

Authors:

Robert Lorenz Törmer

Abstract: Traditional companies are increasingly turning towards platform strategies to gain speed in the development of digital value propositions and prepare for the challenges arising from digitalization. This paper reports on the digitalization journey of the LEGO Group to elaborate how brick-and-mortar companies can break away from a drifting information infrastructure and trigger its transformation into a digital platform. Conceptualizing information infrastructure evolution as a path-dependent process, the case study explores how mindful deviations by enterprise architects guide installed-base cultivation through collective action and trigger the creation of a new ‘platformization’ path. Additionally, the findings portray Enterprise Architecture management as a process of socio-technical path constitution that is equally shaped by deliberate human interventions and by emergent forces arising from path dependencies.
Download

Paper Nr: 97
Title:

Enterprise Architecture for International Agreements in Social Security Institutions

Authors:

Salvador Otón, Antonio Moratilla, José Amelio Medina, Francisco Delgado and Raúl Ruggia

Abstract: This paper analyzes the problems associated with the implementation of international agreements in social security institutions. International social security agreements aim at protecting the social rights of migrant workers by enabling the portability of social benefits, which involves managing billions of dollars paid worldwide by the signatory countries. This entails significant cross-border data exchange and back-office information processing. The effective and reliable implementation of agreements therefore requires an intensive application of information and communication technology (ICT) to ensure the integrity of the process. In this paper, a series of enterprise architectures is presented that will help designers of social security institutions' systems carry out international agreements.
Download

Paper Nr: 99
Title:

Mapping IT Governance to Software Development Process: From COBIT 5 to GI-Tropos

Authors:

Vu H.A. Nguyen, Manuel Kolp, Yves Wautelet and Samedi Heng

Abstract: Mapping IT Governance principles from frameworks like COBIT 5 to Requirements-Driven Software Processes such as (GI-)Tropos or even RUP-based ones allows IT managers to propose governance and management rules for software development that cope with stakeholders’ requirements. On the one hand, IT Governance in software engineering has to ensure that software organization business processes meet the strategic requirements of the organization. On the other hand, requirements-driven software methods are development processes using high-level social-oriented models to drive the software life cycle, both in terms of project management and of deductive iterative engineering techniques. Typically, such methods are well suited for the inclusion and adaptation of governance principles directly into the software development life cycle. To consolidate both perspectives, this paper proposes a generic framework for mapping IT governance principles onto the GI-Tropos software processes.
Download

Paper Nr: 102
Title:

A New Approach for SBPM based on Competencies Management

Authors:

Wafa Triaa, Lilia Gzara and Hervé Verjus

Abstract: In a continuously changing business environment, traditional BPM faces two principal issues: first, the model-reality divide, i.e. the typical separation between process design and execution; second, the loss of innovation associated with the lack of involvement of internal performers. To overcome these issues and to support continuous adaptation and rapid innovation, BPM has to be agile. Moreover, an agile enterprise is fundamentally an enterprise of knowledge and skills. The human dimension, the key element of an agile enterprise, is still not taken into consideration within BPM. One of the recent solutions to support BPM agility is the integration of Social Software (SS) principles within BPM, leading to the emergence of Social BPM (SBPM). Despite the importance and the innovative ideas of the proposed approaches, they are not able to address all the identified issues of traditional BPM or to support all the phases of its lifecycle. Thus, in our approach, we integrate competency management to answer how stakeholders can find the right performers at the right time for the right type of contribution. The approach is based on three phases: 1) identification of the competencies required to fulfil a specific need; based on a semantic analysis, the system identifies the required competencies and automatically extracts the possible candidates; 2) evaluation of the identified candidates against our defined criteria (related to time, human and cost dimensions, among others) to select the relevant ones; 3) adjustment of the process model based on the identified competencies, once the relevant performers have been selected. In this paper, we present the first phase.
Download

Paper Nr: 116
Title:

Repurposing Zachman Framework Principles for "Enterprise Model"-Driven Engineering

Authors:

Alisa Harkai, Mihai Cinpoeru and Robert Andrei Buchmann

Abstract: The paper proposes an agile modelling tool which implements a domain-specific modelling method. As a motivational starting point for the development of this modelling tool, we employ the Zachman Framework - an ontology which conceptualises an enterprise across a variety of abstractions and facets. We conducted our work with respect to the Zachman Framework in order to cover several of these facets and to suggest the possibility of further employing Agile Modelling Method Engineering to extend this coverage, with the tool providing the ability to create hyperlinks between models expressing different enterprise views. The agile modelling tool developed as a proof-of-concept is further coupled with semantic technology to make models available to semantic queries and machine reasoning in the context of model-driven software engineering.
Download

Paper Nr: 133
Title:

Generating Process Entity Hierarchies from XPDL Process Models

Authors:

Hyun Ahn, Kyoungsook Kim and Kwanghoon Pio Kim

Abstract: Business process intelligence enables us to discover a variety of deep insights about business process execution, and it provides a set of useful methods for related decision-making activities. The hierarchical information this paper focuses on is important for analyzing the hierarchical properties of business processes. In this paper, we present a hierarchy generator that makes it easier to analyze hierarchical properties among business process entities. To this end, we define an abstracted meta-model that represents hierarchical relations among entity types in XPDL process models. According to the relational rules of the meta-model, a process entity hierarchy can be organized, analyzed, and visualized.
Download
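
As an illustration of the kind of hierarchy extraction the abstract above describes, here is a minimal Python sketch using only the standard library. It assumes an XPDL 2.x namespace URI and the standard Package/WorkflowProcess/Activity elements; it is not the paper's meta-model, and the file name in the usage comment is hypothetical.

    # Minimal sketch: extract a package -> process -> activity hierarchy
    # from an XPDL file. Adjust the namespace URI to the XPDL version in use.
    import xml.etree.ElementTree as ET

    NS = {"xpdl": "http://www.wfmc.org/2009/XPDL2.2"}   # assumed XPDL 2.2 namespace

    def entity_hierarchy(xpdl_path):
        root = ET.parse(xpdl_path).getroot()             # the <Package> element
        hierarchy = {"package": root.get("Name"), "processes": []}
        for proc in root.findall(".//xpdl:WorkflowProcess", NS):
            activities = [act.get("Name") or act.get("Id")
                          for act in proc.findall(".//xpdl:Activity", NS)]
            hierarchy["processes"].append(
                {"process": proc.get("Name") or proc.get("Id"),
                 "activities": activities})
        return hierarchy

    # print(entity_hierarchy("order_handling.xpdl"))     # hypothetical file name
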

Paper Nr: 204
Title:

A Classification Taxonomy for Public Services in Iran

Authors:

Fereidoon Shams Aliee, Reza Bagheriasl, Amir Mahjoorian, Maziar Mobasheri, Faezeh Hosieni and Delaram Golpayegani

Abstract: These days, the public sector provides numerous services to citizens. Identifying and managing these services is necessary for establishing a national Business Reference Model (BRM). Classifying services according to their functionality provides a clear view of the current state of public services and facilitates government policy-making. This classification taxonomy can be considered part of the BRM. In this paper, we propose a functional classification taxonomy of Iranian public services, including government-to-government (G2G), government-to-business (G2B), and government-to-citizens (G2C) services. All of the services provided by Iranian public agencies fit into this classification. Up to now, more than two thousand of these services have been classified.
Download

Paper Nr: 210
Title:

Microservices for Redevelopment of Enterprise Information Systems and Business Processes Optimization

Authors:

Robert Stricker, Daniel Müssig and Jörg Lässig

Abstract: Due to cost pressure and static technological development, the lifecycle of large enterprise information systems in operation is coming to an end. At the same time, and as part of possible solutions, the demand for cloud systems in the enterprise context is continuously growing. Although microservices have become an established architectural pattern used by well-known companies, many corporations, especially smaller ones, shy away from using them. In this paper we present the positive and negative effects of converting legacy applications into cloud-based microservice architectures. In addition to technical aspects such as maintainability and scalability, organizational consequences are considered and analyzed. Furthermore, the positive effects on existing business processes, especially ITIL Service Management processes, are addressed, and it is demonstrated how ITIL metrics such as MTRS, MRTT or TRD can be optimized by using microservices. We show the advantages of a microservice architecture in optimizing existing business fields and how new business areas can be opened up more easily than with conventional enterprise architectures. Even if microservices are not a silver bullet, they should be considered and evaluated as an opportunity for a new software lifecycle of a legacy enterprise application or as an architectural pattern for profound redevelopment.
Download

Paper Nr: 2
Title:

Alignment between Organization Projects and Strategic Objectives

Authors:

Inês Garcia, André Vasconcelos and Bruno Fragoso

Abstract: For many organizations, having strategic objectives defined means having their business strategy completed. However, the best-laid strategies can be useless without proper implementation. By aligning an organization’s strategic objectives with its projects, we gain a greater understanding of projects and their contribution to achieving strategic objectives. Enterprise Architecture (EA) provides a path between strategy and execution by addressing stakeholders’ concerns and relating strategic and business concepts. ArchiMate is the standard language for modelling EA and enables enterprise architects to describe, analyse and visualize the relationships among business domains. In order to identify the alignment between an organization’s projects and strategic objectives, we propose a five-step solution: 1) identify the organization’s strategic objectives, 2) identify each outcome’s expected value to the organization, i.e. the importance an achieved target has to the organization, 3) identify the organization’s projects, 4) represent the projects and their expected value, and 5) identify the alignment. By following the proposed method, organizations are able to identify which projects contribute to the achievement of their strategic objectives. The solution proposal was demonstrated in a government-owned company.

Paper Nr: 51
Title:

Strengthen the Architecture Principle Definition and Its Characteristics - A Survey Encompassing 27 Years of Architecture Principle Literature

Authors:

Michiel Borgers and Frank Harmsen

Abstract: Although architecture principles are important in the implementation of information systems requirements, empirical evidence of their effect is lacking. Before actually conducting the empirical research, it is important to have a solid definition and description of the research object, i.e. the architecture principle. In this paper, we strengthen both the definition of the architecture principle and the description of its characteristics. With a model-based analysis we investigated 27 years of literature on architecture principles and eliminated inaccuracies and incompleteness. This definition and description provide a basis for determining the impact of using architecture principles during the implementation of information systems requirements in our next step of research.
Download

Paper Nr: 62
Title:

Extrinsic Dependencies in Business Process Management Systems

Authors:

Radhwan Mahdi, Stefan Jablonski and Stefan Schönig

Abstract: The demand for supporting flexibility in business processes has been increasing due to dynamic business environments and technological progress. This leads to the challenge of designing business processes so as to take context changes into consideration. A context refers to any circumstance of a process and includes factors which impact process execution steps. To overcome this challenge and better fit business processes to customers’ expectations, this paper conceptualizes contextual factors relevant to the business process description. It defines a model that explains how the relevant contextual factors can be identified and computed in a structured way. To verify the applicability of the approach, a prototype was set up for running the experiments. It examines the approach with real information in different real-life scenarios.
Download

Paper Nr: 70
Title:

Understanding Enterprise Architecture with Topic Modeling - Preliminary Research based on Journal Articles

Authors:

Marco Nardello, Charles Møller and John Gøtze

Abstract: The next 3 years will be more important than the last 50 due to the digital transformation across industries. Enterprise Architecture (EA), the discipline that should lead enterprise responses to disruptive forces, is far from ready to drive the next wave of change. The state of the art in the discipline is not clear and the understanding among researchers and practitioners is not aligned. To address these problems, we developed a topic model to help structure the field and enable EA to evolve coherently. In this preliminary study, we present the 360 identified topics in EA literature and their evolution over time. Our study supports and combines the findings from previous research and provides both a deeper analysis and more detailed findings.
Download

Paper Nr: 176
Title:

Business-IT Alignment within the Management of Business Informatics Model

Authors:

Alena Buchalcevova and Jan Pour

Abstract: The paper focuses on a currently much-discussed subject of interest, Business-IT alignment. The objective of this research is to examine a recently developed framework for IT management, the Management of Business Informatics (MBI) model, from the Business-IT alignment point of view and to show that, by using the MBI model, Business-IT alignment can be better supported. As a result of the analysis, several areas addressing Business-IT alignment within the MBI model are presented, e.g. communication and cooperation between company managers and IT managers, tasks aimed at defining the relations between IT and business management, definition of metrics, and development and deployment of analytical applications.
Download

Paper Nr: 193
Title:

R2SMA - A Middleware Architecture to Access Legacy Enterprise Web Services using Lightweight REST APIs

Authors:

Jan Königsberger and Bernhard Mitschang

Abstract: This paper presents the SOAP-to-REST Middleware Architecture, which provides an abstraction layer for conventional web services. This layer semi-automatically creates REST (REpresentational State Transfer) API proxies for existing enterprise web services, allowing companies to provide flexible and lightweight access to existing web services. REST APIs can therefore be offered for existing web services without the need to adapt them, which allows for flexible and fast integration scenarios. The architecture also provides additional enterprise-grade functionality such as caching and security.
Download
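
A minimal sketch of the proxy idea described in the abstract above, assuming the Flask and zeep Python libraries: a lightweight REST endpoint forwards to an existing SOAP web service. The WSDL URL, the GetCustomer operation and its result field are hypothetical, and this is not the R2SMA implementation.

    # Minimal sketch: a REST proxy endpoint in front of a legacy SOAP service.
    from flask import Flask, jsonify
    from zeep import Client                       # SOAP client library

    app = Flask(__name__)
    soap = Client("http://legacy.example.com/CustomerService?wsdl")  # assumed WSDL

    @app.route("/customers/<int:customer_id>", methods=["GET"])
    def get_customer(customer_id):
        # Translate the resource-oriented call into the legacy SOAP operation.
        result = soap.service.GetCustomer(customerId=customer_id)    # assumed operation
        return jsonify({"id": customer_id, "name": result.Name})     # assumed result field

    if __name__ == "__main__":
        app.run(port=8080)
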

Paper Nr: 238
Title:

Formal Modelling Approach of Enterprise Architecture - Hypergraph based Representation of Business Information Systems

Authors:

Dóra Öri, Bálint Molnár and Zoltán Szabó

Abstract: The complexity of strategic alignment is an overwhelming issue for organizations in the digital age. Enterprise Architecture Management (EAM) is a major tool that facilitates alignment efforts, providing several methods for planning and analysis. Several of these methodologies require a formal and systematic approach. The artefacts describing an Enterprise Architecture can be perceived as documents that can be represented as hypergraphs. The graph-based approach lays the groundwork for formal analysis that can help identify discrepancies, gaps, and security, integrity and consistency issues. The paper depicts a high-level model for artefacts representing Enterprise Architecture in a hypergraph formalism. This approach can be a promising solution for EAM-based analysis of information systems and their organizational context.
Download
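
To illustrate the hypergraph view outlined in the abstract above, here is a minimal Python sketch (an illustration, not the paper's formalism): artefacts are hyperedges, i.e. named sets of architecture elements, and a trivial check flags elements referenced by only one artefact as potential gaps. All artefact and element names are made up.

    # Minimal sketch: EA artefacts as hyperedges over architecture elements.
    from collections import Counter

    hyperedges = {
        "business_process_model": {"OrderProcess", "CustomerData", "SalesUnit"},
        "application_landscape":  {"CRMSystem", "CustomerData", "OrderProcess"},
        "data_model":             {"CustomerData", "InvoiceData"},
    }

    def single_covered(edges):
        # Return elements that occur in exactly one hyperedge (possible gaps).
        occurrences = Counter(el for members in edges.values() for el in members)
        return {el for el, n in occurrences.items() if n == 1}

    # Prints SalesUnit, CRMSystem and InvoiceData (set order may vary).
    print(single_covered(hyperedges))
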

Paper Nr: 242
Title:

Application of Microservices for Digital Transformation of Data-Intensive Business Processes

Authors:

Janis Grabis and Janis Kampars

Abstract: Business processes are redesigned as part of the business process management lifecycle, and data-intensive activities such as image processing, prediction and classification are increasingly incorporated into business processes. Data-intensive activities often involve the usage of data analysis models. It is argued that successful development and execution of data-intensive business processes requires synchronization of business process redesign and data analysis model development activities. A business process architecture integrating the core business process with data analysis model setup and updating sub-processes is developed. Business process transformation stages for incorporating data-intensive activities are outlined. Process redesign and execution are supported by a technical architecture based on microservices. An example of business process redesign is discussed.
Download