ICEIS 2015 Abstracts


Area 1 - Databases and Information Systems Integration

Full Papers
Paper Nr: 14
Title:

Analysing Business-IT Alignment in the IT Outsourcing Context - A Case Study Approach

Authors:

Ekaterina Sabelnikova, Claudia-Melania Chituc and Jos Trienekens

Abstract: Information technology plays an increasingly important role in developing business strategies. Consequently, it is vital for organizations to ensure alignment between IT and business strategies and goals. Business-IT alignment (BIA) is usually understood at the organizational level, between business and IT teams, whereas IT outsourcing (ITO) extends organizational boundaries and implies the inclusion of service providers. In the case of ITO, BIA is harder to achieve and manage. In this paper, we propose measuring ITO maturity through a set of factors that influence the success of ITO activities, and we extend a selected BIA maturity framework with two additional dimensions. The combined BIA-ITO model has been applied in a case study, with the purpose of empirically validating the model and gaining insights into the BIA-ITO relationship in practice. The results indicate that IT outsourcing can have a positive impact on business-IT alignment.

Paper Nr: 23
Title:

Applying Ensemble-based Online Learning Techniques on Crime Forecasting

Authors:

Anderson José de Souza, André Pinz Borges, Heitor Murilo Gomes, Jean Paul Barddal and Fabrício Enembreck

Abstract: Traditional prediction algorithms assume that the underlying concept is stationary, i.e., no changes are expected to happen during the deployment of an algorithm that would render it obsolete. However, in many real-world scenarios, changes in the data distribution, namely concept drifts, are expected to occur due to variations in the hidden context, e.g., new government regulations, climatic changes, or adversary adaptation. In this paper, we analyze the problem of predicting the most susceptible types of victims of crimes that occurred in a large Brazilian city. Criminals are expected to change their victim types to counter police methods, and vice versa. Therefore, the challenge is to obtain a model capable of adapting rapidly to the currently preferred victim types, such that police resources can be allocated accordingly. For this type of problem, the most appropriate learning models are provided by data stream mining, since learning algorithms from this domain assume that concept drifts may occur over time and are ready to adapt to them. In this paper, we apply ensemble-based data stream methods, since they provide good accuracy and the ability to adapt to concept drifts. Results show that these ensemble-based algorithms (Leveraging Bagging, SFNClassifier, ADWIN Bagging, and Online Bagging) reach feasible accuracy for this task.
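The Online Bagging member of the ensemble above follows the Oza-Russell scheme, in which bootstrap resampling is approximated on a stream by presenting each incoming instance to each base learner k times, with k drawn from a Poisson(1) distribution. A minimal sketch, not the authors' implementation; the MajorityClass base learner is a hypothetical placeholder for a real stream classifier such as a Hoeffding tree:

```python
import math
import random
from collections import Counter

class MajorityClass:
    """Placeholder base learner: always predicts the most frequent
    label seen so far (a real ensemble would use stream classifiers)."""
    def __init__(self):
        self.counts = Counter()
    def learn_one(self, x, y):
        self.counts[y] += 1
    def predict_one(self, x):
        return self.counts.most_common(1)[0][0] if self.counts else None

class OnlineBagging:
    """Oza-Russell Online Bagging: each incoming instance is shown to
    each base learner k times, k ~ Poisson(1), approximating bootstrap
    resampling on an unbounded stream."""
    def __init__(self, base_learner_factory, n_learners=10, seed=42):
        self.rng = random.Random(seed)
        self.learners = [base_learner_factory() for _ in range(n_learners)]

    def _poisson1(self):
        # Knuth's method for drawing from Poisson(lambda = 1)
        limit, k, p = math.exp(-1.0), 0, 1.0
        while True:
            p *= self.rng.random()
            if p <= limit:
                return k
            k += 1

    def learn_one(self, x, y):
        for learner in self.learners:
            for _ in range(self._poisson1()):
                learner.learn_one(x, y)

    def predict_one(self, x):
        # Majority vote across the ensemble members
        votes = Counter(l.predict_one(x) for l in self.learners)
        return votes.most_common(1)[0][0]
```

A drift-aware variant such as ADWIN Bagging would additionally reset a learner when its change detector fires.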

Paper Nr: 33
Title:

A Centroid-based Approach for Hierarchical Classification

Authors:

Mauri Ferrandin, Fabrício Enembreck, Julio César Nievola, Edson Emílio Scalabrin and Bráulio Coelho Ávila

Abstract: Classification is a common task in Machine Learning and Data Mining. Some classification problems need to take into account a hierarchical taxonomy that establishes an order between the classes involved; these are called hierarchical classification problems. Protein function prediction can be considered a hierarchical classification problem because protein functions may be arranged in a hierarchical taxonomy of classes. This paper presents an algorithm for hierarchical classification using a centroid-based approach, in two versions named HCCS and HCCSic. Centroid-based techniques have been widely used for text classification, and in this work we explore their adoption in a hierarchical classification scenario. The proposed algorithm was evaluated on eight real datasets and compared against two other recent algorithms from the literature. Preliminary results show that the proposed approach is a viable alternative for hierarchical classification, its main advantages being simplicity and low computational complexity combined with good accuracy.
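A centroid-based classifier of the kind the abstract describes represents each class by the mean vector of its training examples and assigns a new example to the class whose centroid is most similar. A minimal flat sketch, assuming cosine similarity; the hierarchical HCCS/HCCSic variants would apply this idea over the class taxonomy, and the class labels below are purely illustrative:

```python
import math
from collections import defaultdict

def centroid(vectors):
    """Mean vector of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

def cosine(a, b):
    """Cosine similarity between two vectors (0.0 for zero vectors)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def fit_centroids(X, y):
    """Group training vectors by class label and compute one centroid per class."""
    by_class = defaultdict(list)
    for xi, yi in zip(X, y):
        by_class[yi].append(xi)
    return {c: centroid(vs) for c, vs in by_class.items()}

def predict(centroids, x):
    """Assign x to the class with the most similar centroid."""
    return max(centroids, key=lambda c: cosine(centroids[c], x))
```

The appeal of the approach is exactly what the abstract claims: training is a single pass over the data, and prediction costs one similarity computation per class.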

Paper Nr: 40
Title:

Techniques for Effective and Efficient Fire Detection from Social Media Images

Authors:

Marcos V. N. Bedo, Gustavo Blanco, Willian D. Oliveira, Mirela T. Cazzolato, Alceu F. Costa, Jose F. Rodrigues Jr., Agma J. M. Traina and Caetano Traina Jr.

Abstract: Crowdsourcing and social media can provide valuable information to support decision making in crisis management, such as in accidents, explosions, and fires. However, much of the data from social media consists of images, which are uploaded at a rate that makes it impossible for human beings to analyze them. Despite the many works on image analysis, there are no fire detection studies on social media. To fill this gap, we propose the use and evaluation of a broad set of content-based image retrieval and classification techniques for fire detection. Our main contributions are: (i) the development of the Fast-Fire Detection method (FFireDt), which combines feature extractors and evaluation functions to support instance-based learning; (ii) the construction of an annotated set of images with ground truth depicting fire occurrences – the Flickr-Fire dataset; and (iii) the evaluation of 36 efficient image descriptors for fire detection. Using real data from Flickr, our results showed that FFireDt was able to achieve a precision for fire detection comparable to that of human annotators. Therefore, our work provides a solid basis for further developments in monitoring images from social media and crowdsourcing.

Paper Nr: 46
Title:

Tracking and Tracing of Global Supply Chain Network - Case Study from a Finnish Company

Authors:

Ahm Shamsuzzoha, Michael Ehrs, Richard Addo-Tengkorang and Petri Helo

Abstract: Tracking and tracing across the supply chain and logistics network is an important element of customer service in the transportation industry. Existing technologies are mostly suitable for single-channel supply chains and are not suitable for multi-channel supply networks. The objective of this research is therefore to outline the technological know-how and possibilities related to tracking and tracing items within a distributed supply chain and logistics network. This research focuses on implementing a novel tracking system applicable to the total supply network, covering both inbound and outbound shipments. The study is validated by examining how the available tracking technologies can be useful for a Finnish case company in managing its geographically dispersed projects and hundreds of suppliers, transport companies, and warehouse operators. Both hybrid and cloud-enabled online tracking systems are proposed in this research. The application of the proposed tracking technologies provides the case company with real-time visibility of its current logistics assets.

Paper Nr: 64
Title:

TIDAQL - A Query Language Enabling on-Line Analytical Processing of Time Interval Data

Authors:

Philipp Meisen, Diane Keng, Tobias Meisen, Marco Recchioni and Sabina Jeschke

Abstract: Nowadays, time interval data is ubiquitous. The requirement to analyze such data using known techniques like on-line analytical processing arises more and more frequently. Nevertheless, the use of established multidimensional models and systems is not sufficient, because of modeling, querying, and processing limitations. Even though recent research and requests from various types of industry indicate that handling and analyzing time interval data is an important task, a query language enabling on-line analytical processing of such data and a suitable implementation have, to the best of our knowledge, been neither introduced nor realized. In this paper, we present a query language, based on requirements stated by business analysts from different domains, that enables the analysis of time interval data in an on-line analytical manner. In addition, we introduce our query processing, established using a bitmap-based implementation. Finally, we present a performance analysis and critically discuss the language, the processing, and the results.

Paper Nr: 73
Title:

Decision Guidance Analytics Language (DGAL) - Toward Reusable Knowledge Base Centric Modeling

Authors:

Alexander Brodsky and Juan Luo

Abstract: Decision guidance systems are a class of decision support systems that are geared toward producing actionable recommendations, typically based on formal analytical models and techniques. This paper proposes the Decision Guidance Analytics Language (DGAL) for easy iterative development of decision guidance systems. DGAL allows the creation of modular, reusable and composable models that are stored in the analytical knowledge base independently of the tasks and tools that use them. Based on these unified models, DGAL supports declarative queries of (1) data manipulation and computation, (2) what-if prediction analysis, (3) deterministic and stochastic decision optimization, and (4) machine learning, all through formal reduction to specialized models and tools, and in the presence of uncertainty.

Paper Nr: 85
Title:

Knowledge Management Framework using Wiki-based Front-end Modules

Authors:

Catarina Marques-Lucena, Carlos Agostinho, Sotiris Koussouris and João Sarraipa

Abstract: Nowadays, organizations are pushed to speed up the rate of industrial transformation toward high-value products and services. The capability to respond agilely to new market demands has become a strategic pillar for innovation, and knowledge management can support organizations in achieving that goal. However, existing knowledge management approaches tend to be overly complex or too academic, with interfaces that are difficult to manage, even more so if cooperative handling is required. In an ideal framework, both tacit and explicit knowledge management should be addressed to achieve knowledge handling with precise and semantically meaningful definitions. Contributing in this direction, this paper proposes a framework capable of gathering the knowledge held by domain experts through a familiar wiki-like interface and transforming it into explicit ontologies. This enables the building of tools with advanced reasoning capabilities that can support enterprises' decision-making processes.

Paper Nr: 87
Title:

Using Petri Nets to Enable the Simulation of Application Integration Solutions Conceptual Models

Authors:

Fabricia Roos-Frantz, Manuel Binelo, Rafael Z. Frantz, Sandro Sawicki and Vitor Basto-Fernandes

Abstract: Enterprise application integration is concerned with the use of methodologies and tools to design and implement solutions that integrate a set of heterogeneous enterprise applications. Amongst the technologies available to design and implement integration solutions is Guaraná. This technology provides a domain-specific language that enables the design of conceptual models. The quality of these models is essential to ensure proper integration. Discovering whether an integration solution can fail, and under which conditions failure is more likely, is a costly, risky, and time-consuming task, since current approaches require software engineers to construct the real solution. Generally, simulation is recommended when problems are impossible or too expensive to solve by actual experimentation. Guaraná conceptual models can be classified as stochastic, dynamic, and discrete, and can thus be simulated by taking advantage of well-established techniques and tools for discrete-event simulation. Therefore, this paper proposes simulating Guaraná solutions using Petri nets, in order to analyse such solutions based only on their conceptual models. It shows that an integration solution conceptual model designed with Guaraná can be translated into a formal model structured as a stochastic Petri net. The equivalence of both models is verified by comparing the operation of the Guaraná runtime system with the behaviour of a Petri net execution process.

Paper Nr: 92
Title:

ROBE - Knitting a Tight Hub for Shortest Path Discovery in Large Social Graphs

Authors:

Lixin Fu and Jing Deng

Abstract: Scalable and efficient algorithms are needed to compute shortest paths between any pair of vertices in large social graphs. In this work, we propose a novel scheme, ROBE, to estimate shortest distances. ROBE is based on a hub serving as the skeleton of the large graph. In order to stretch the hub into every corner of the network, we first choose representative nodes with the highest degrees that are at least two hops away from each other. Then bridge nodes are selected to connect the representative nodes. Extension nodes are also added to the hub to ensure that parts connected in the original graph are not separated in the hub graph. To improve performance, we compress the hub through chain-collapsing, tentacle-retracting, and clique-compression techniques. A query evaluation algorithm based on the compressed hub is given. We compare our approach with other state-of-the-art techniques and evaluate their performance with respect to miss rate, error rate, and construction time through extensive simulations. ROBE is demonstrated to be two orders of magnitude faster and to produce more accurate estimations than two recent algorithms, allowing it to scale very well to large social graphs.

Paper Nr: 94
Title:

Conceptual Framework of Anything Relationship Management

Authors:

Jonathan Philip Knoblauch and Rebecca Bulander

Abstract: An increasing interconnectedness of people, physical objects, and virtual objects through information and communication technology (ICT) has been observable for years. This is reflected in various fields such as business contacts (e.g. LinkedIn and Xing), social media (e.g. Facebook, WhatsApp, and Twitter), and the emerging Internet of Everything (IoE). Companies and organizations in particular today have a variety of relationships with their stakeholders, as well as with other physical things (cars, machines, etc.) and virtual objects (cloud services, documents, etc.). All of these have to be managed with appropriate approaches. Anything Relationship Management (xRM) can be used for this purpose as a further development of Customer Relationship Management (CRM), allowing the management of any kind of object with appropriate mechanisms on an information technology (IT) platform. This document summarizes the results of a research project whose aim was to develop a conceptual framework for xRM. Some basic background on xRM, the difference between xRM and CRM, and some theoretical foundations of management concepts are described for this purpose. Additionally, the main objectives and principles of xRM are explained. On this basis, the development of the conceptual framework for xRM and its different components is explained. Finally, the conceptual framework for xRM is validated through an implemented example.

Paper Nr: 97
Title:

GOTA - Using the Google Similarity Distance for OLAP Textual Aggregation

Authors:

Mustapha Bouakkaz, Sabine Loudcher and Youcef Ouinten

Abstract: With the tremendous growth of unstructured data in Business Intelligence, there is a need to incorporate textual data into data warehouses, to provide appropriate multidimensional analysis (OLAP) and to develop new approaches that take into account the textual content of data. This will provide textual measures to users who wish to analyse documents online. In this paper, we propose a new aggregation function for textual data in an OLAP context. For aggregating keywords, our contribution is to use a data mining technique such as k-means, but with a distance based on the Google similarity distance. Our approach thus considers the semantic similarity of keywords for their aggregation. The performance of our approach is analyzed and compared to another method that uses the k-bisecting clustering algorithm and is based on the Jensen-Shannon divergence between probability distributions. The experimental study shows that our approach achieves better performance in terms of recall, precision, F-measure, complexity, and runtime.
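The Google similarity distance the paper builds on is commonly given as the Normalized Google Distance (NGD), computed from search-hit counts: terms that co-occur on many pages get a small distance, terms that rarely co-occur get a large one. A minimal sketch of the formula; the hit counts and page total in the test are illustrative, not taken from the paper:

```python
import math

def ngd(fx, fy, fxy, n):
    """Normalized Google Distance from page-hit counts.

    fx, fy -- number of pages containing each term alone
    fxy    -- number of pages containing both terms
    n      -- total number of indexed pages (an assumed constant)

    NGD(x, y) = (max(log fx, log fy) - log fxy)
                / (log n - min(log fx, log fy))
    """
    lfx, lfy, lfxy = math.log(fx), math.log(fy), math.log(fxy)
    return (max(lfx, lfy) - lfxy) / (math.log(n) - min(lfx, lfy))
```

A k-means-style keyword aggregation as described in the abstract would then use such pairwise distances instead of a geometric distance when grouping keywords into aggregates.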

Paper Nr: 134
Title:

PM-DB: Partition-based Multi-instance Database System for Multicore Platforms

Authors:

Fang Xi, Takeshi Mishima and Haruo Yokota

Abstract: The continued evolution of modern hardware has brought several new challenges to database management systems (DBMSs). Multicore CPUs are now mainstream, while the future lies in massively parallel computing performed on many-core processors. However, because they were originally developed for single-core processors, DBMSs cannot take full advantage of parallel computing on so many cores. Several components of traditional database engines become new bottlenecks on multicore platforms. In this paper, we analyze the bottlenecks of existing database engines on a modern multicore platform using the mixed workload of the TPC-W benchmark and describe strategies for higher scalability and throughput of existing DBMSs on multicore platforms. First, we show how to overcome the limitations of the database engine by introducing a partition-based multi-instance database system on a single multicore platform, without any modification of existing DBMSs. Second, we analyze the possibility of further improving performance by optimizing the cache performance of concurrent queries. Implemented as middleware, our proposed PM-DB avoids the challenging work of modifying existing database engines. Performance evaluation using the TPC-W benchmark revealed that our proposal can achieve up to 2.5 times higher throughput than the existing engine of PostgreSQL.

Paper Nr: 137
Title:

A Hybrid Memory Data Cube Approach for High Dimension Relations

Authors:

Rodrigo Rocha Silva, Celso Massaki Hirata and Joubert de Castro Lima

Abstract: Approaches based on inverted indexes, such as Frag-Cubing, are considered efficient in terms of runtime and main memory usage for high-dimension cube computation and querying. These approaches do not compute all aggregations a priori. They index information about occurrences of attributes in a manner that makes answering multidimensional queries time-efficient. Like any other main-memory-based cube solution, Frag-Cubing is limited by the available main memory, so if the size of the cube exceeds main memory capacity, external memory is required. The challenge of using external memory is to define criteria for selecting which fragments of the cube should reside in main memory. In this paper, we implement and test an approach named H-Frag, an extension of Frag-Cubing, which selects the fragments of the cube to be stored in main memory according to attribute frequencies and dimension cardinalities. In our experiments, H-Frag outperforms Frag-Cubing in both query response time and main memory usage. A massive cube with 60 dimensions and 10^9 tuples was computed by H-Frag sequentially using 110 GB of RAM and 286 GB of external memory, taking 64 hours. This data cube answers complex queries in less than 40 seconds. Frag-Cubing could not compute such a cube on the same machine.
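The inverted-index idea behind Frag-Cubing-style approaches can be illustrated compactly: each (dimension, value) pair maps to the set of tuple ids in which it occurs, and a multidimensional COUNT query is answered by intersecting the relevant tid sets rather than reading precomputed aggregates. A toy sketch, not the actual Frag-Cubing or H-Frag code; dimension names and values are illustrative:

```python
from collections import defaultdict

def build_inverted_index(tuples):
    """Index each (dimension, value) pair to the set of tuple ids in
    which it occurs - the core structure of an inverted-index cube,
    which avoids materializing every aggregate a priori."""
    index = defaultdict(set)
    for tid, row in enumerate(tuples):
        for dim, value in row.items():
            index[(dim, value)].add(tid)
    return index

def count_query(index, **conditions):
    """COUNT over a multidimensional point/slice query: intersect the
    tid sets of the requested (dimension, value) pairs."""
    sets = [index[(d, v)] for d, v in conditions.items()]
    if not sets:
        return 0
    return len(set.intersection(*sets))
```

H-Frag's contribution, per the abstract, is the policy for deciding which of these per-value tid lists stay in RAM (frequent attributes, low-cardinality dimensions) and which are pushed to external memory.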

Paper Nr: 160
Title:

Access Prediction for Knowledge Workers in Enterprise Data Repositories

Authors:

Chetan Verma, Michael Hart, Sandeep Bhatkar, Aleatha Parker-Wood and Sujit Dey

Abstract: The data which knowledge workers need to conduct their work is stored across an increasing number of repositories and grows annually at a significant rate. It is therefore unreasonable to expect that knowledge workers can efficiently search and identify what they need across a myriad of locations where upwards of hundreds of thousands of items can be created daily. This paper describes a system which can observe user activity and train models to predict which items a user will access, in order to help knowledge workers discover content. We specifically investigate network file systems and determine how well we can predict future access to newly created or modified content. Utilizing file metadata to construct access prediction models, we show how the performance of these models can be improved for shares demonstrating high collaboration among their users. Experiments on eight enterprise shares reveal that models based on file metadata can achieve F scores upwards of 99%. Furthermore, on average, collaboration-aware models can correctly predict nearly half of new file accesses by users while ensuring a precision of 75%, thus validating that the proposed system can help knowledge workers discover new or modified content.

Paper Nr: 173
Title:

ERP in Healthcare

Authors:

Martin Mucheleka and Raija Halonen

Abstract: Attempts to improve healthcare services have increased worldwide, and the role of information technology (IT) in finding solutions to the various issues facing the healthcare sector is growing. The purpose of this study was to find out how enterprise resource planning (ERP) systems have been used in the healthcare sector and how these systems could be used to improve healthcare services. The field of IT now encompasses all industries, including the healthcare sector, which is currently going through fundamental changes. Based on the literature reviewed in this study, the use of ERP systems in the healthcare sector has not been widely reported. However, some findings showed that ERP systems could be used in the healthcare sector to improve the quality of services. Based on these findings, if ERP systems were successfully implemented in healthcare organisations, they would enable significant changes in areas such as finance, human resources and capacity, and revenue and admission resources. ERP systems could also improve both the profitability and the services of healthcare organisations. Because of the lack of research in this area, further studies should investigate the usage of ERP in healthcare organisations.

Paper Nr: 198
Title:

Implementing Multidimensional Data Warehouses into NoSQL

Authors:

Max Chevalier, Mohammed El Malki, Arlind Kopliku, Olivier Teste and Ronan Tournier

Abstract: Not only SQL (NoSQL) databases are becoming increasingly popular and have some interesting strengths, such as scalability and flexibility. In this paper, we investigate the use of NoSQL systems for implementing OLAP (On-Line Analytical Processing) systems. More precisely, we are interested in instantiating OLAP systems (from the conceptual level to the logical level) and in instantiating an aggregation lattice (optimization). We define a set of rules to map star schemas into two NoSQL models: column-oriented and document-oriented. The experimental part is carried out using the reference benchmark TPC. Our experiments show that our rules can effectively instantiate such systems (star schema and lattice). We also analyze the differences between the two NoSQL systems considered. In our experiments, HBase (column-oriented) proves to be faster than MongoDB (document-oriented) in terms of loading time.
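One common way to map a star schema into a document-oriented model is to embed each referenced dimension record inside the fact document, trading storage for join-free reads. A minimal sketch of such a mapping; the schema layout and field names are illustrative, not the paper's actual rule set:

```python
def fact_to_document(fact, dimensions):
    """Map one star-schema fact row into a nested document
    (document-oriented model, MongoDB-style): each dimension foreign
    key in the fact is replaced by the full dimension record,
    embedded under the dimension's name.

    fact       -- {"measures": {...}, "refs": {dim_name: dim_key}}
    dimensions -- {dim_name: {dim_key: dimension_record}}
    """
    doc = {"measures": fact["measures"]}
    for dim_name, dim_key in fact["refs"].items():
        # Denormalize: embed the dimension record instead of its key
        doc[dim_name] = dimensions[dim_name][dim_key]
    return doc
```

A column-oriented mapping would instead flatten the same attributes into column families of one wide row, which is one reason loading costs can differ between the two targets.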

Paper Nr: 229
Title:

What if Multiusers Wish to Reconcile Their Data?

Authors:

Dayse Silveira de Almeida, Carmem Satie Hara and Cristina Dutra de Aguiar Ciferri

Abstract: Reconciliation is the process of providing a consistent view of data imported from different sources. Despite some efforts reported in the literature toward data reconciliation solutions with asynchronous collaboration, the challenge of reconciling data when multiple users work asynchronously over local copies of the same imported data has received less attention. In this paper, we propose AcCORD, an asynchronous collaborative data reconciliation model based on data provenance. AcCORD is innovative because it supports both applications in which all users are required to agree on the data integration in order to provide a single consistent view to all of them, and applications that allow users to disagree on the correct data value but promote collaboration by sharing updates. We also introduce different provenance-based policies for solving conflicts among multiple users’ updates. An experimental study investigates the main characteristics of these policies, showing the efficacy of AcCORD.

Paper Nr: 301
Title:

Collaborative Teaching of ERP Systems in International Context

Authors:

Jānis Grabis, Kurt Sandkuhl and Dirk Stamer

Abstract: ERP systems are characterized by a high degree of complexity, which is challenging to replicate in the classroom environment. However, there is strong industry demand for students who have had ERP training during their studies at universities. This paper reports a joint effort of the University of Rostock and Riga Technical University to enhance introductory ERP training by introducing an internationalization dimension into the standard curriculum. Both universities collaborated to develop an international ERP case study as an extension of the SAP ERP Global Bikes Incorporated case study. The training approach, study materials, and an appropriate technical environment have been developed. The international ERP case study is performed at both universities, where students work collaboratively on running business processes in the SAP ERP system. Student teams at each university are responsible for the business process activities assigned to them, and they are jointly responsible for completing the process. Observations of the case study execution and students' evaluations suggest that the international ERP case study provides good insight into the real-life challenges associated with using ERP systems in an international context.

Paper Nr: 339
Title:

On the Discovery of Explainable and Accurate Behavioral Models for Complex Lowly-structured Business Processes

Authors:

Francesco Folino, Massimo Guarascio and Luigi Pontieri

Abstract: Process discovery (i.e. the automated induction of a behavioral process model from execution logs) is an important tool for business process analysts/managers, who can exploit the extracted knowledge in key process improvement and (re-)design tasks. Unfortunately, when directly applied to the logs of complex and/or lowly-structured processes, such techniques tend to produce low-quality workflow schemas, featuring both poor readability ("spaghetti-like" models) and low fitness (i.e. a low ability to reproduce log traces). Trace clustering methods alleviate this problem by helping detect different execution scenarios, for which simpler and better-fitting workflow schemas can eventually be discovered. However, most of these methods focus only on the sequence of activities performed in each log trace, without fully exploiting the non-structural data (such as case data and environmental variables) available in many real logs, which might well help discover more meaningful (context-related) process variants. In order to overcome these limitations, we propose a two-phase clustering-based process discovery approach, where the clusters are inherently defined through logical decision rules over context data, ensuring a satisfactory trade-off between the readability/explainability of the discovered clusters and the behavioral fitness of the workflow schemas eventually extracted from them. The approach has been implemented in a system prototype, which supports the discovery, evaluation, and reuse of such multi-variant process models. Experimental results on a real-life log confirmed its capability to achieve compelling performance w.r.t. state-of-the-art clustering approaches, in terms of both fitness and explainability.

Short Papers
Paper Nr: 20
Title:

Risk Management in Project of Information Systems Integration During Merger of Companies

Authors:

E. Abakumov, D. Agulova and A. Volgin

Abstract: Today, the reorganization of companies is one of the challenges that require the close attention of administrators. Integration of businesses cannot be accomplished without integration of information systems, and project management is the tool needed to implement such integration efforts. Risk management is one of the components of project management. A risk event occurs randomly, so estimating the probability of changes to the potential project completion timeframe, taking into account the estimated probabilities of various risk events, is an important task. This paper gives an overview of a standard list of risks for the integration of information systems of merged companies. Note that this list of risks has been developed for Russian research-and-production instrument-making enterprises under government ownership and can be viewed as an example of implementing a risk-oriented approach to information management. In addition, the list can be used to assess integration risks and to elaborate ways to eliminate (or minimize) losses related to the occurrence of these risks.

Paper Nr: 67
Title:

On-premise ERP Organizational Post-implementation Practices - Comparison between Large Enterprises and Small and Medium-Sized Enterprises

Authors:

Victoria Hasheela

Abstract: This paper presents a multiple case study aimed at identifying similarities and differences in how companies of different sizes operate after ERP system go-live (the post-implementation phase). The study found several differences and similarities and concluded that the differences are caused by differences in company structure, size, financial constraints, and decision-making processes. Large Enterprises (LEs) often have in-house competence that Small and Medium-Sized Enterprises (SMEs) usually lack, which leads SMEs to depend on external sources and makes their operations slightly different. SMEs also focus on their technical operations, often disregarding strategic planning, and this leads to higher risks.

Paper Nr: 70
Title:

Function-based Case Classification for Improving Business Process Mining

Authors:

Yaguang Sun and Bernhard Bauer

Abstract: In recent years, business process mining has become a broad research area. However, existing process mining techniques encounter challenges when dealing with event logs stemming from highly flexible environments, because such logs contain a large number of different behaviors. As a result, inaccurate or wrong analysis results might be obtained. In this paper, we propose a case classification technique (a case is an instance of the business process) that is able to incorporate domain experts' knowledge for classifying cases, so that each resulting group contains cases with similar behaviors. By applying existing process mining techniques to the cases of each group, more meaningful and accurate analysis results can be obtained.

Paper Nr: 80
Title:

Context-sensitive Indexes in RDBMS for Performance Optimization of SQL Queries in Multi-tenant/Multi-application Environments

Authors:

Arjun K. Sirohi and Vidushi Sharma

Abstract: With the recent shift towards cloud-based applications and Software as a Service (SaaS) environments, relational databases support multi-tenant and multi-application workloads that query the same set of data, stored in common tables, using SQL queries. These SQL queries have very different query constructs and data-access requirements, leading to different optimization needs; yet business users expect sub-second response times when requesting data. Current RDBMS architectures, in which indexes “belong” to a table without any object privileges of their own and must therefore be considered by the optimizer for all SQL statements referencing the table(s), pose multiple challenges for the optimizer as well as for application architects and performance tuning experts, especially as the number of such indexes grows. In this paper, we make the case for “Context-Sensitive Indexes”, whereby applications and tenants could define their own indexes on shared transactional database tables to optimize the execution of their SQL queries, while the optimizer keeps such indexes isolated from other applications, tenants, and users for the purposes of query optimization.
Download

Paper Nr: 98
Title:

Graph-based ETL Processes for Warehousing Statistical Open Data

Authors:

Alain Berro, Imen Megdiche and Olivier Teste

Abstract: Warehousing is a promising means to cross-reference and analyse Statistical Open Data (SOD). However, extracting structures, integrating data, and defining a multidimensional schema from several scattered and heterogeneous tables in the SOD are major problems challenging traditional ETL (Extract-Transform-Load) processes. In this paper, we present a three-step ETL process that relies on RDF graphs to address these problems. In the first step, we automatically extract table structures and values using a table anatomy ontology. This phase converts structurally heterogeneous tables into a unified RDF graph representation. The second step performs a holistic integration of several semantically heterogeneous RDF graphs; the optimal integration is computed through an Integer Linear Program (ILP). In the third step, the system interacts with users to incrementally transform the integrated RDF graph into a multidimensional schema.
Download

Paper Nr: 106
Title:

Genetic Mapping of Diseases through Big Data Techniques

Authors:

Julio Cesar Santos dos Anjos, Bruno Reckziegel Filho, Junior F. Barros, Raffael B. Schemmer, Claudio Geyer and Ursula Matte

Abstract: The development of sophisticated sequencing machines and DNA techniques has enabled advances in the medical field of genetics research. However, due to the large amount of data that sequencers produce, new methods and programs are required to allow efficient and rapid analysis of the data. MapReduce is a data-intensive computing model that handles large volumes of data and is easy to program by means of two basic functions (Map and Reduce). This work introduces GMS, a genetic mapping system that can assist doctors in the clinical diagnosis of patients by analyzing the genetic mutations contained in their DNA. The model offers a good method for analyzing the data generated by sequencers, by providing a scalable system that can handle large amounts of data. The use of several medical databases at the same time makes it possible to determine susceptibilities to diseases through big data analysis mechanisms. The results show scalability and offer a possible diagnosis, providing health professionals with a powerful tool to improve genetic diagnosis.
Download
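As a toy illustration of the MapReduce model the abstract builds on (not the actual GMS pipeline; the read format, reference string, and variant-calling rule are invented for the example), a Map function can emit (position, base) pairs from aligned reads, and a Reduce function can flag positions whose observed bases disagree with a reference genome:

```python
from collections import defaultdict

def map_reads(read):
    # Map: emit (position, base) pairs for each base in an aligned read.
    # 'read' is a (start_position, sequence) tuple -- a simplified
    # stand-in for real aligned sequencer output.
    start, seq = read
    return [(start + i, base) for i, base in enumerate(seq)]

def reduce_position(position, bases, reference):
    # Reduce: compare the observed bases at one genome position against
    # the reference and report a candidate mutation if any base differs.
    variants = [b for b in bases if b != reference[position]]
    return (position, variants) if variants else None

def run_mapreduce(reads, reference):
    # Shuffle phase: group mapped values by key (genome position),
    # then apply the reducer to each group.
    grouped = defaultdict(list)
    for read in reads:
        for pos, base in map_reads(read):
            grouped[pos].append(base)
    results = (reduce_position(p, bs, reference) for p, bs in grouped.items())
    return [r for r in results if r is not None]

reference = "ACGTACGT"
reads = [(0, "ACGT"), (2, "GTAC"), (4, "AGGT")]  # only the last read deviates
variants = run_mapreduce(reads, reference)       # -> [(5, ['G'])]
```

In a real MapReduce deployment the shuffle and the per-key reduction would be distributed across a cluster; the sequential version above only mirrors the programming model.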

Paper Nr: 135
Title:

An IFC4-based Middleware for Data Interoperability in Energy Building Operation

Authors:

José L. Hernández, Susana Martín and César Valmaseda

Abstract: This paper addresses the existing gap in data interoperability among heterogeneous resources for energy service systems in building automation. The middleware is the core of the communication between heterogeneous data samples and the application services. This kind of solution integrates the multiple building data resources to gather information within the context of energy and buildings, and couples the data into a single signal. The middleware also manages the data in a harmonized way by representing the information in a well-established data model, IFC4, which is widely used in the building domain. This harmonized communication allows the exchange of information among the entities of complex platforms through common formats, in order to ease the interpretation of data. Interoperability is thus a key factor for achieving connectivity, which is reached in the present middleware through event-driven communication mechanisms and well-known interfaces.
Download

Paper Nr: 159
Title:

An Empirical Study of Recommendations in OLAP Reporting Tool

Authors:

Natalija Kozmina

Abstract: This paper presents the results of an experimental study performed in laboratory settings in the context of an OLAP reporting tool developed and put into operation at the University. The study explores which of the modes for generating recommendations in the OLAP reporting tool has a deeper impact on users (i.e., produces more accurate recommendations). Each mode of the recommendation component – report structure, user activity, and semantic – employs a separate content-based method that takes advantage of OLAP schema metadata and aggregate functions. The collected data are assessed (i) quantitatively, by means of precision/recall and other metrics from log-table analysis and statistical tools, and (ii) qualitatively, by means of a user survey and free-form feedback.
Download

Paper Nr: 171
Title:

Modelspace - Cooperative Document Information Extraction in Flexible Hierarchies

Authors:

Daniel Schuster, Daniel Esser, Klemens Muthmann and Alexander Schill

Abstract: Business document indexing for the ordered filing of documents is a crucial task for every company. Since this is tedious, error-prone work, automatic or at least semi-automatic approaches have high value. One approach to semi-automated indexing of business documents uses self-learning information extraction methods based on user feedback. While these methods require no management of complex indexing rules, learning from user feedback requires each user to first provide a number of correct extractions before getting appropriate automatic results. To eliminate this cold-start problem we propose a cooperative approach to document information extraction involving dynamic hierarchies of extraction services. We provide strategies for deciding when to contact another information extraction service within the hierarchy, methods to combine results from different sources, as well as aging and split strategies to reduce the size of cooperatively used indexes. An evaluation with a large number of real-world business documents shows the benefits of our approach.
Download

Paper Nr: 190
Title:

Internet of Things Applications in Production Systems

Authors:

A. Boza, B. Cortés, L. Cuenca and F. Alarcón

Abstract: The Internet of Things field has been applied in industry for different purposes. This paper presents a literature review of Internet of Things applications in production systems. A taxonomy with five categories has been employed in this review: Sector, Technology, Production Phase, Practical Application and Benefit. The sectors, technologies and production phases where IoT is being introduced, either practically or theoretically, have been identified, and the benefits of IoT in production systems have been collected and classified. This research presents the advantages of applying the Internet of Things in production systems, which helps not only production system managers with practical implementations, but also researchers in identifying gaps for future research.
Download

Paper Nr: 225
Title:

Towards Data Warehouse Schema Design from Social Networks - Dynamic Discovery of Multidimensional Concepts

Authors:

Rania Yangui, Ahlem Nabli and Faiez Gargouri

Abstract: This research work is conducted as part of the BWEC project (Business for Women in Emerging Countries), which aims to improve the socio-economic situation of handicraft women by providing them with appropriate technological means. In recent years, the Web has been transformed into an exchange platform where users have become the main suppliers of information through social media. User-generated data are usually rich and thus need to be analyzed to enhance decision making. The storage and centralization of these data in a data warehouse (DW) are highly required. Nevertheless, the growing complexity and volume of the data to be analyzed impose new requirements on DWs. In order to address these issues, in this paper we propose a four-stage methodology to define a DW schema from social networks. First, we design the initial DW schema based on existing approaches. Second, we apply a set of transformation rules to prepare the creation of the NoSQL (Not Only SQL) data warehouse. Then, based on user requirements, clustering of social network profiling data is performed, which allows the dynamic discovery of multidimensional concepts. Finally, the NoSQL DW schema is enriched with the discovered multidimensional concepts to ensure DW schema evolution.
Download

Paper Nr: 259
Title:

Discretization Method for the Detection of Local Extrema and Trends in Non-discrete Time Series

Authors:

Konstantinos F. Xylogiannopoulos, Panagiotis Karampelas and Reda Alhajj

Abstract: Mining, analysis and trend detection in time series are very important problems for forecasting purposes. Many researchers have developed different methodologies, applying techniques from different fields of science, in order to perform such analysis. In this paper, we propose a new discretization method that allows the detection of local extrema and trends inside time series. The method uses sliding linear regression over specific time intervals to produce a new time series from the angle of each regression line. The new time series allows the detection of local extrema and trends in the original time series. We have conducted several experiments on financial time series in order to discover trends, as well as pattern and periodicity detection to forecast the future behavior of the Dow Jones Industrial Average 30 Index.
Download
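The sliding-regression discretization described above can be sketched roughly as follows (a generic reconstruction from the abstract, not the authors' code; the window size and the sign-change rule used to mark extrema are assumptions):

```python
import math

def angle_series(y, window):
    # Slide a fixed-size window over the series; fit a least-squares line
    # to each window and record the angle (in degrees) of that line.
    xs = list(range(window))
    x_mean = sum(xs) / window
    den = sum((x - x_mean) ** 2 for x in xs)
    angles = []
    for i in range(len(y) - window + 1):
        seg = y[i:i + window]
        y_mean = sum(seg) / window
        num = sum((x - x_mean) * (v - y_mean) for x, v in zip(xs, seg))
        angles.append(math.degrees(math.atan(num / den)))
    return angles

def local_extrema(angles):
    # A sign change in the angle series marks a local extremum of the
    # original series: + to - is a local maximum, - to + a local minimum.
    marks = []
    for i in range(1, len(angles)):
        if angles[i - 1] > 0 > angles[i]:
            marks.append((i, "max"))
        elif angles[i - 1] < 0 < angles[i]:
            marks.append((i, "min"))
    return marks

angles = angle_series([0, 1, 3, 2, 0, 1, 3], window=3)
extrema = local_extrema(angles)   # a "max" then a "min"
```

The angle series is the discretized signal on which trend, pattern, and periodicity analysis would then operate.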

Paper Nr: 271
Title:

Discovering Business Models for Software Process Management - An Approach for Integrating Time and Resource Perspectives from Legacy Information Systems

Authors:

C. Arevalo, I. Ramos and M. J. Escalona

Abstract: Business Process Management (BPM) is becoming the modern core of business support in all types of organizations, and the software business is no exception. Software companies are often involved in important and complex collaborative projects carried out by many stakeholders. Each actor (customers, suppliers or government instances, among others) works with individual and shared processes. Everyone needs dynamic and evolving approaches for managing their software project lifecycles. Nevertheless, many companies still use systems outside the scope of BPM for planning and controlling projects and for managing enterprise content (Enterprise Content Management, ECM) as well as all kinds of resources (ERP). These systems include scattered artifacts that relate to BPM perspectives: control and data flow, time, resource and case, for example. Our aim is to obtain interoperable BPM models from these classical Legacy Information Systems (LIS). Model-Driven Engineering (MDE) allows going from application code to models at a higher level of abstraction. In particular, there are standards and proposals for reverse engineering LIS. This paper illustrates LIS cases for software project planning and ECM, looking at the time and resource perspectives. To conclude, we propose an MDE-based approach for extracting business models in the context of software process management.
Download

Paper Nr: 289
Title:

Natural Language Processing Techniques for Document Classification in IT Benchmarking - Automated Identification of Domain Specific Terms

Authors:

Matthias Pfaff and Helmut Krcmar

Abstract: In the domain of IT benchmarking, collected data are often stored as natural language text and are therefore intrinsically unstructured. To ease data analysis and evaluation across different types of IT benchmarking approaches, a semantic representation of this information is crucial. Thus, the identification of conceptual (semantic) similarities is the first step in the development of integrative data management in this domain. Since an ontology is a specification of such a conceptualization, an association of terms, relations between terms, and related instances must be developed. Building on previous research, we present an approach for automated term extraction using natural language processing (NLP) techniques. Terms are automatically extracted from existing IT benchmarking documents, leading to a domain-specific dictionary. These extracted terms are representative of each document, describe its purpose and content, and serve as a basis for the ontology development process in the domain of IT benchmarking.
Download

Paper Nr: 296
Title:

The mqr-tree for Very Large Object Sets

Authors:

Wendy Osborn and Marc Moreau

Abstract: This paper presents an evaluation of the mqr-tree for indexing a database containing a very large number of objects. Many spatial access methods have been proposed for handling point and/or region data, with the vast majority able to handle only a limited number of instances of these data types efficiently. However, many established and emerging application areas, such as recommender systems, require the management and indexing of very large object sets, such as a million places of interest that are each represented by a point. Using between one and five million points and objects, a comparison of both index construction and spatial query evaluation is performed against a benchmark spatial indexing strategy. We show that the mqr-tree achieves significantly lower overlap and overcoverage when used to index a very large collection of objects. The mqr-tree also achieves significantly better query processing performance in many cases. Therefore, the mqr-tree is a strong candidate for handling very large object sets in emerging applications.
Download

Paper Nr: 297
Title:

Towards Principled Data Science Assessment - The Personal Data Science Process (PdsP)

Authors:

Ismael Caballero, Laure Berti-Equille and Mario Piattini

Abstract: With the unstoppable advance of Big Data, the role of the data scientist is becoming more important than ever before. In this position paper, we argue that data scientists should acknowledge the importance of data quality management in data science and rely on a principled methodology when performing tasks related to data management. In order to quantify how well a data scientist can perform the core data management activities, we propose the Personal Data Science Process (PdsP), which includes five staged qualifications for data science professionals. The qualifications are based on two dimensions: Personal Data Management Maturity (PDMM) and Personal Data Science Performance (PDSPf). The first is defined according to Dgmr, a data management maturity model, which includes processes related to the areas of data management: data governance, data management, and data quality management. The second, PDSPf, is grounded on the PSP (Personal Software Process) and covers the personal skills and knowledge of data scientists when participating in a data science project. These dimensions will allow developing a measure of how well a data scientist can contribute to the success of the organization in terms of performance and skills appraisal.
Download

Paper Nr: 314
Title:

CORE - A Context-based Approach for Rewriting User Queries

Authors:

Antonio Mendonça, Paulo Maciel, Damires Souza and Ana Carolina Salgado

Abstract: When users access data-oriented applications, they aim to obtain useful information. Sometimes, however, the user needs to reformulate the submitted queries several times and go through many answers until a satisfactory set of answers is achieved. In this scenario, the user may be in different contexts, and these contexts may change frequently. For instance, the place from which the user submits a given query may be taken into account and may thereby change the query itself and its results. In this work, we address the issue of personalizing query answers in data-oriented applications by considering the context acquired at query submission time. To this end, we propose a query rewriting approach which makes use of context-based rules to produce new, related expanded or relaxed queries. In this paper, we present our approach and some experimental results. These results show that considering the acquired user context enhances the precision and recall of the obtained answers.
Download

Paper Nr: 39
Title:

Streaming Networks Sampling using top-K Networks

Authors:

Rui Sarmento, Mário Cordeiro and João Gama

Abstract: The combination of a top-K network representation of the data stream with community detection is a novel approach to streaming network sampling. Keeping an always up-to-date sample of the full network, the advantage of this method over previous ones is that it preserves larger communities and the original network distribution. Empirically, we also show that these techniques, in conjunction with community detection, provide effective ways to sample and analyse large-scale streaming networks with power-law distributions.
Download

Paper Nr: 48
Title:

A Framework for Analysing Dynamic Communities in Large-scale Social Networks

Authors:

Vítor Cerqueira, Márcia Oliveira and João Gama

Abstract: Telecommunications companies must process large-scale social networks that reveal the communication patterns among their customers. These networks are dynamic in nature as new customers appear, old customers leave, and the interaction among customers changes over time. One way to uncover the evolution patterns of such entities is by monitoring the evolution of the communities they belong to. Large-scale networks typically comprise thousands, or hundreds of thousands, of communities and not all of them are worth monitoring, or interesting from the business perspective. Several methods have been proposed for tracking the evolution of groups of entities in dynamic networks but these methods lack strategies to effectively extract knowledge and insight from the analysis. In this paper we tackle this problem by proposing an integrated business-oriented framework to track and interpret the evolution of communities in very large networks. The framework encompasses several steps such as network sampling, community detection, community selection, monitoring of dynamic communities and rule-based interpretation of community evolutionary profiles. The usefulness of the proposed framework is illustrated using a real-world large-scale social network from a major telecommunications company.
Download

Paper Nr: 138
Title:

Dimensionality Reduction for Supervised Learning in Link Prediction Problems

Authors:

Antonio Pecli, Bruno Giovanini, Carla C. Pacheco, Carlos Moreira, Fernando Ferreira, Frederico Tosta, Júlio Tesolin, Marcio Vinicius Dias, Silas Filho, Maria Claudia Cavalcanti and Ronaldo Goldschmidt

Abstract: In recent years, a considerable amount of attention has been devoted to research on complex networks and their properties. Collaborative environments, social networks and recommender systems are popular examples of complex networks that emerged recently and are objects of interest in academia and industry. Many studies model complex networks as graphs and tackle the link prediction problem, one major open question in network evolution. It consists in predicting the likelihood that an association between two unconnected nodes in a graph will appear. One approach to this problem is based on binary classification supervised learning. Although the curse of dimensionality is a historical obstacle in machine learning, little effort has been devoted to dealing with it in the link prediction scenario. This paper therefore evaluates the effects of dimensionality reduction as a preprocessing stage for binary classifier construction in link prediction applications. Two dimensionality reduction strategies are experimented with: Principal Component Analysis (PCA) and Forward Feature Selection (FFS). The results of experiments with three different datasets and four traditional machine learning algorithms show that dimensionality reduction with PCA and FFS can improve model precision in this kind of problem.
Download
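A minimal sketch of the PCA preprocessing stage (generic, not the paper's experimental setup; the toy feature matrix and the SVD-based projection are illustrative assumptions):

```python
import numpy as np

def pca_reduce(X, k):
    # Centre the feature matrix and project it onto the top-k principal
    # components, obtained from the SVD of the centred data.
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

# Toy link-prediction feature matrix: one row per candidate node pair,
# columns are topological scores (e.g. common neighbours, Jaccard, ...).
X = np.array([[2., 0.5, 1.0],
              [4., 0.9, 2.1],
              [1., 0.2, 0.4],
              [3., 0.7, 1.6]])

Z = pca_reduce(X, 2)   # reduced features fed to the binary classifier
```

The reduced matrix `Z` would then replace `X` as the input to whichever binary classifier predicts link appearance.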

Paper Nr: 169
Title:

Sharding by Hash Partitioning - A Database Scalability Pattern to Achieve Evenly Sharded Database Clusters

Authors:

Caio H. Costa, João Vianney B. M. Filho, Paulo Henrique M. Maia and Francisco Carlos M. B. Oliveira

Abstract: At the beginning of the 21st century, web application requirements dramatically increased in scale. Applications like social networks, e-commerce, and media sharing started to generate lots of data traffic, and companies started to track this valuable data. The database systems responsible for storing all this information had to scale in order to handle the huge load. With the emergence of cloud computing, scaling out a database system has become an affordable solution, making data sharding a viable scalability option. But to benefit from data sharding, database designers have to identify the best way to distribute data among the nodes of a sharded cluster. This paper discusses database sharding distribution models, specifically a technique known as hash partitioning. Our objective is to catalog, in the format of a Database Scalability Pattern, the best practice of sharding data among the nodes of a database cluster using the hash partitioning technique to evenly balance the load between the database servers. In this way, we intend to make the mapping between the scenario and its solution publicly available, helping developers to identify when to adopt the pattern instead of other sharding techniques.
Download
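The hash-partitioning idea can be sketched in a few lines (a generic illustration rather than the catalogued pattern itself; SHA-1 and the four-node cluster size are arbitrary choices for the example):

```python
import hashlib
from collections import Counter

def shard_for(key, n_shards):
    # Hash the shard key and take the digest modulo the cluster size.
    # A good hash spreads keys evenly across shards regardless of any
    # pattern in the keys themselves (sequential ids, hot prefixes, ...).
    digest = hashlib.sha1(str(key).encode()).hexdigest()
    return int(digest, 16) % n_shards

# Route 100,000 sequential user ids to a 4-node cluster and check how
# evenly hash partitioning spreads them.
counts = Counter(shard_for(uid, 4) for uid in range(100_000))
```

Range partitioning of the same sequential ids would pile them onto one shard; the hash step is what makes the distribution insensitive to key order.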

Paper Nr: 298
Title:

Mining Big Data - Challenges and Opportunities

Authors:

Zaher Al Aghbari

Abstract: Nowadays, the daily amount of generated data is measured in exabytes. Such huge data is now referred to as Big Data. Big data mining leads to the discovery of the useful information from huge data repositories. However, this huge amount of data hinders existing data mining tools and thus creates new research challenges that open the door for new research opportunities. In this paper, we provide an overview of the research challenges and opportunities of big data mining. We present the technologies and platforms that are required for mining big data. A number of applications that can benefit from mining big data are also discussed. We discuss the status of big data mining, current efforts and future research directions in the UAE.
Download

Paper Nr: 299
Title:

A Hybrid Genetic based Approach for Real-time Reconfigurable Scheduling of OS Tasks in Uniprocessor Embedded Systems

Authors:

Ibrahim Gharbi, Hamza Gharsellaoui and Sadok Bouamama

Abstract: This paper deals with the problem of scheduling uniprocessor real-time tasks with a hybrid genetic-based scheduling algorithm. When a reconfiguration scenario is applied to save the system upon the occurrence of hardware-software faults, or to improve its performance, some real-time properties can be violated at runtime. We propose a hybrid genetic-based scheduling approach that automatically checks the system's feasibility after any reconfiguration scenario is applied to an embedded system. If the system is infeasible, the proposed approach operates directly in a highly dynamic and unpredictable environment and improves rescheduling performance. The approach, based on a genetic algorithm (GA) combined with a tabu search (TS) algorithm, finds an optimized scheduling strategy to reschedule the embedded system after any system disturbance occurs. By a system disturbance we mean any automatic reconfiguration assumed to be applied at run-time: addition or removal of tasks, or modifications of their temporal parameters (WCET and/or deadlines). A benchmark example is given, and the experimental results demonstrate the effectiveness of the proposed genetic-based scheduling approach over others, such as a classical genetic algorithm approach.
Download

Paper Nr: 330
Title:

Graph Database Application using Neo4j - Railroad Planner Simulation

Authors:

Steve Ataky Tsham Mpinda, Luis Gustavo Maschietto, Marilde Terezinha Santos Prado and Marcela Xavier Ribeiro

Abstract: Like relational databases, most graph databases are general-purpose OLTP (online transaction processing) databases and can be used to produce a wide range of solutions. That said, they shine particularly when the solution depends, first, on an understanding of how things are connected. This is more common than one may think. In many cases it is not only how things are connected that matters: often one wants to know something about the different relationships in the domain – their names, qualities, weights and so on. In short, connectivity is the key. Graphs are the best abstraction we have for modelling and querying connectivity; graph databases, in turn, give developers and data specialists the ability to apply this abstraction to their specific problems. In this paper we use this approach to simulate a route planner application capable of querying connected data. Merely having keys and values is not enough; nor is having data only partially connected through semantically poor joins. We need both connectivity and contextual richness to build such solutions. The case study herein simulates a railway network, with railway stations connected to one another, where each connection between two stations may have properties. We answer the questions of how to find the optimal route (path) between stations, and how to determine whether one station is reachable from another and at what depth.
Download
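Outside Neo4j, the two queries the case study answers – optimal route and reachability depth – can be sketched over a plain adjacency map (the network below is invented for illustration; in the paper these would be Cypher queries against the graph database):

```python
import heapq

# Hypothetical railway network: station -> {neighbour: distance}.
NETWORK = {
    "A": {"B": 5, "C": 10},
    "B": {"A": 5, "D": 4},
    "C": {"A": 10, "D": 3},
    "D": {"B": 4, "C": 3, "E": 8},
    "E": {"D": 8},
}

def optimal_route(graph, start, goal):
    # Dijkstra's algorithm: expand the cheapest frontier station first,
    # carrying the path taken so far alongside its accumulated distance.
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in seen:
            continue
        seen.add(node)
        for neigh, w in graph[node].items():
            if neigh not in seen:
                heapq.heappush(queue, (dist + w, neigh, path + [neigh]))
    return None  # goal not reachable from start

def reachable_depth(graph, start, goal):
    # Breadth-first search: return the hop count (depth) at which the
    # goal station is first reached, or None if it is unreachable.
    frontier, depth, visited = {start}, 0, {start}
    while frontier:
        if goal in frontier:
            return depth
        frontier = {n for f in frontier for n in graph[f]} - visited
        visited |= frontier
        depth += 1
    return None
```

In a graph database both computations follow relationships directly from node to node, rather than joining tables; the in-memory version above only illustrates the queries being asked.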

Paper Nr: 340
Title:

Entity Identification Problem in Big and Open Data

Authors:

J. G. Enríquez, Vivian Lee, Masatomo Goto, F. J. Domínguez-Mayo and M. J. Escalona

Abstract: Big and Open Data provide great opportunities for businesses to enhance their competitive advantage if utilized properly. However, during the past few years of research on Big and Open Data processing, we have encountered a big challenge in entity identification reconciliation when trying to establish accurate relationships between entities from different data sources. In this paper, we present our innovative Intelligent Reconciliation Platform and Virtual Graphs solution that addresses this issue. With this solution, we are able to efficiently extract Big and Open Data from heterogeneous sources and integrate them into a common analysable format. Further enhanced with the Virtual Graphs technology, entity identification reconciliation is processed dynamically to produce more accurate results at system runtime. Moreover, we believe that our technology can be applied to a wide diversity of entity identification problems in several domains, e.g., e-Health, cultural heritage, and company identities in the financial world.
Download

Paper Nr: 343
Title:

Supporting Competitive Intelligence with Linked Enterprise Data

Authors:

Vitor Afonso Pinto, Guilherme Sousa Bastos, Fabricio Ziviani and Fernando Silva Parreiras

Abstract: Competitive Intelligence is a process which involves retrieving, analyzing and packaging information to offer a final product that responds to the intelligence needs of a particular decision maker or community of decision makers. Internet-based information sources are becoming increasingly important in this process because most of the content available on the Web is free of charge. In this work the following research question was addressed: What are the concepts and technologies related to linked data that allow the gathering, integration and sharing of information to support competitive intelligence? To answer this question, the literature was first reviewed in order to outline the conceptual framework. Next, some competency questions were defined through a focus group on a study object. Finally, the DB4Trading tool was built as a prototype able to validate the conceptual framework. Results point out that the adoption of Semantic Web technologies makes it possible to obtain the data needed for the analysis of external environments. Results also indicate that companies use Semantic Web technologies to support their operations despite considering these technologies complex. This work adds to the decision-making process, especially in the context of competitive intelligence. It also contributes to reducing the cost of obtaining information beyond organization boundaries by using Semantic Web technologies.
Download

Area 2 - Artificial Intelligence and Decision Support Systems

Full Papers
Paper Nr: 16
Title:

International Standard ISO 9001 an Artificial Intelligence View

Authors:

José Neves, Ana Fernandes, Guida Gomes, Mariana Neves, António Abelha and Henrique Vicente

Abstract: ISO 9001 is recognized as a Quality Management Systems standard, i.e., it is the primary phase of a process of constant enhancement that provides an organisation with the necessary management tools to improve working practices. Indeed, it provides a framework and a set of principles aimed at ensuring a common-sense approach to the management of an organization, in order to consistently satisfy customers and other stakeholders. Therefore, and in order to add value to ISO 9001, this work focuses on the development of a decision support system that allows companies to meet the needs of customers by fulfilling requirements reflecting either the effectiveness or the non-effectiveness of an organization. The procedures for knowledge representation and reasoning are based on an extension of the Logic Programming language, allowing the handling of incomplete, contradictory and even forbidden data, information and/or knowledge. The computational framework is centred on Artificial Neural Networks to evaluate customer satisfaction and the degree of confidence in that evaluation.
Download

Paper Nr: 56
Title:

Deadlock Avoidance in Interorganizational Business Processes using a Possibilistic WorkFlow Net

Authors:

Leiliane Rezende and Stéphane Julia

Abstract: In this paper, an approach based on Siphon structures, possibilistic Petri nets and interorganizational WorkFlow nets is proposed to deal with deadlock situations in interorganizational business processes. A deadlock situation is characterized by an insufficiently marked Siphon. Possibilistic Petri nets with uncertainty on the marking and on the transition firing are used to ensure the existence of at least one transition firing sequence enabling the completion of the process without encountering the deadlock situation. Routing patterns and communication protocols that exist in business processes are modeled by interorganizational WorkFlow nets. Combining both formalisms, a kind of possibilistic WorkFlow net is obtained.
Download

Paper Nr: 65
Title:

An Economic Approach for Generation of Train Driving Plans using Continuous Case-based Planning

Authors:

André P. Borges, Osmar B. Dordal, Richardson Ribeiro, Bráulio C. Ávila and Edson E. Scalabrin

Abstract: We present an approach for reusing and sharing train driving plans P using continuous (i.e., without human intervention) Case-Based Planning (CBP). P is formed by a set of actions which, when applied, move a train along a stretch of railroad. This is a complex task due to variations in (i) the composition of the train, (ii) environmental conditions, and (iii) the stretches travelled. To overcome these difficulties we provide the driver with a support system for this complex task. CBP was chosen because it allows the direct reuse of human drivers' experience as well as experience from other sources. The main steps of the CBP are distributed among specialized agents with different roles: Planner and Executor. Our approach was evaluated with different metrics: (i) accuracy of the case recovery task, (ii) efficiency of case adaptation and application in realistic scenarios, and (iii) fuel consumption. We show that the inclusion of new experiences reduces the effort of both the Planner and the Executor, significantly reduces fuel consumption, and allows the reuse of the obtained experiences in similar scenarios with low effort.
Download

Paper Nr: 78
Title:

Indirect Normative Conflict - Conflict that Depends on the Application Domain

Authors:

Viviane Torres da Silva, Christiano Braga and Jean de Oliveira Zahn

Abstract: Norms are being used as a mechanism to regulate the behavior of autonomous, heterogeneous and independently designed agents. Norms describe what can be performed, what must be performed, and what cannot be performed in multi-agent systems. Due to the number of norms specified to govern a multi-agent system, one important issue that has been considered by several approaches is checking for normative conflicts. Two norms are said to be in conflict when the fulfillment of one norm violates the other and vice-versa. In this paper, we formally define the concept of an indirect normative conflict as a conflict between two norms that do not necessarily have contradictory or contrary deontic modalities and that may govern (different but) related behaviors of (different but) related entities in (different but) related contexts. Finally, we present an ontology-based indirect norm conflict checker that automatically identifies direct and indirect norm conflicts in an ontology describing a set of norms and a set of relationships between the elements identified in the norms (behavior, entity and context).
Download

Paper Nr: 101
Title:

A Variable Neighbourhood Search for Nurse Scheduling with Balanced Preference Satisfaction

Authors:

Ademir Aparecido Constantino, Everton Tozzo, Rodrigo Lankaites Pinheiro, Dario Landa-Silva and Wesley Romão

Abstract: The nurse scheduling problem (NSP) is a combinatorial optimisation problem widely tackled in the literature. Recently, a new variant of this problem was proposed, called the nurse scheduling problem with balanced preference satisfaction (NSPBPS). This paper further investigates this variant of the NSP as we propose a new algorithm to solve the problem and obtain a better balance of overall preference satisfaction. Initially, the algorithm converts the problem to a bottleneck assignment problem and solves it to generate an initial feasible solution for the NSPBPS. Subsequently, the algorithm applies the Variable Neighbourhood Search (VNS) metaheuristic, using two sets of search neighbourhoods, to improve the initial solution. We empirically assess the performance of the algorithm using the NSPLib benchmark instances and compare our results to others found in the literature. The proposed VNS algorithm exhibits good performance by achieving solutions that are fairer (in terms of preference satisfaction) in the majority of the scenarios.
Download
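The basic VNS scheme described above — probe a sequence of neighbourhood structures and restart from the first one after every improvement — can be sketched generically as follows. This is an illustrative skeleton with a toy objective, not the authors' nurse-scheduling algorithm or its actual neighbourhood sets:

```python
import random

def vns(initial, neighborhoods, cost, max_iters=100, seed=0):
    """Minimal Variable Neighbourhood Search skeleton.

    `neighborhoods` is a list of functions, each mapping (solution, rng) to a
    random neighbour; the search returns to the first neighbourhood after
    every improvement, as in the basic VNS scheme."""
    rng = random.Random(seed)
    best = initial
    for _ in range(max_iters):
        k = 0
        while k < len(neighborhoods):
            candidate = neighborhoods[k](best, rng)
            if cost(candidate) < cost(best):
                best = candidate
                k = 0          # improvement: restart from the first neighbourhood
            else:
                k += 1         # no improvement: try the next neighbourhood
    return best

# Toy demo: minimise a quadratic over the integers with two move sizes.
step1 = lambda x, rng: x + rng.choice([-1, 1])
step5 = lambda x, rng: x + rng.choice([-5, 5])
best = vns(40, [step1, step5], cost=lambda x: (x - 3) ** 2)   # → 3
```

Systematically changing the neighbourhood is what lets VNS escape local optima of any single move type.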

Paper Nr: 105
Title:

Fuzzy Resource Allocation Mechanisms in Workflow Nets

Authors:

Joslaine Cristina Jeske de Freitas, Stéphane Julia and Leiliane Pereira de Rezende

Abstract: The purpose of Workflow Management Systems is to execute Workflow processes. Workflow processes represent the sequence of activities that have to be executed within an organization to treat specific cases and to reach a well-defined goal. It is therefore important to manage time and resources in the best possible way. The proposal of this work is to express in a more realistic way the resource allocation mechanisms when human behavior is considered in Workflow activities. To accomplish this, fuzzy sets delimited by possibility distributions are associated with the Petri net models that represent human-type resource allocation mechanisms. Additionally, the duration of activities that appear on the routes (control structure) of the Workflow process is represented by fuzzy time intervals produced through a kind of constraint propagation mechanism. New firing rules based on a joint possibility distribution are then defined.
Download

Paper Nr: 111
Title:

Multi-modal Transportation with Public Transport and Ride-sharing - Multi-modal Transportation using a Path-based Method

Authors:

Sacha Varone and Kamel Aissat

Abstract: This article describes a multi-modal routing problem, which occurs each time a user wants to travel from a point A to a point B using either ride-sharing or public transportation. The main idea is to start from an itinerary using public transportation and then substitute part of this itinerary with ride-sharing. We first define a closeness estimation between the user’s itinerary and available drivers, which allows us to select a subset of potential drivers. We then compute sets of driving quickest paths and design a substitution process. Finally, among all admissible solutions, we select the best one based on the earliest arrival time. We provide numerical results using benchmarks based on geographical maps, public transportation timetables, and simulated requests and driving paths. Our numerical experiments show a running time of a few seconds, suitable for a real-time transportation application.
Download
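The driving quickest paths used in the substitution step are shortest-path computations on a time-weighted road graph; a textbook Dijkstra sketch is shown below for illustration only (the road network and travel times are invented, and the authors' implementation is not described in the abstract):

```python
import heapq

def quickest_path(graph, source, target):
    """Dijkstra's algorithm on a dict-of-dicts graph weighted by travel time.

    Returns (total_time, path), or (inf, []) if the target is unreachable."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        if u == target:
            break                      # target popped: its distance is final
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    if target not in dist:
        return float('inf'), []
    path, node = [target], target
    while node != source:
        node = prev[node]
        path.append(node)
    return dist[target], path[::-1]

# Toy road network: A→B→D (5 min) beats the direct A→D edge (10 min).
roads = {'A': {'B': 2, 'D': 10}, 'B': {'D': 3}, 'D': {}}
time_, route = quickest_path(roads, 'A', 'D')   # → (5, ['A', 'B', 'D'])
```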

Paper Nr: 140
Title:

Supporting the Selection of Prognostic-based Decision Support Methods in Manufacturing

Authors:

Alexandros Bousdekis, Babis Magoutas, Dimitris Apostolou and Gregoris Mentzas

Abstract: In manufacturing enterprises, maintenance is a significant contributor to the company’s total cost. Condition Based Maintenance (CBM) relies on prognostic models and uses them to support maintenance decisions based on the current and predicted health state of equipment. Although decision support for CBM is not an extensively explored area, methods have been developed to deal with specific challenges such as the need to cope with real-time information, to predict the health state of equipment, and to continually update decision recommendations. We propose an approach for supporting analysts in selecting the most suitable combination(s) of methods for prognostic-based maintenance decision support according to the requirements of a given maintenance application. Our approach is based on the ID3 decision tree learning algorithm and is applied in a maintenance scenario in the oil and gas industry.
Download
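ID3, the learning algorithm the approach is built on, repeatedly splits on the attribute with the highest information gain. A minimal sketch of that split criterion follows — generic ID3 machinery with made-up rows and labels, not the authors' maintenance data or tool:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """ID3 split criterion: entropy reduction from splitting on `attr`."""
    total = entropy(labels)
    n = len(labels)
    by_value = {}
    for row, label in zip(rows, labels):
        by_value.setdefault(row[attr], []).append(label)
    remainder = sum(len(part) / n * entropy(part) for part in by_value.values())
    return total - remainder

# Tiny example: 'method' perfectly predicts the label, 'cost' does not.
rows = [{'method': 'A', 'cost': 'low'}, {'method': 'A', 'cost': 'high'},
        {'method': 'B', 'cost': 'low'}, {'method': 'B', 'cost': 'high'}]
labels = ['yes', 'yes', 'no', 'no']
gain_method = information_gain(rows, labels, 'method')   # → 1.0 (perfect split)
gain_cost = information_gain(rows, labels, 'cost')       # → 0.0 (uninformative)
```

ID3 would place `method` at the root here, since it removes all uncertainty about the label.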

Paper Nr: 152
Title:

A Learning Model for Intelligent Agents Applied to Poultry Farming

Authors:

Richardson Ribeiro, Marcelo Teixeira, André L. Wirth, André P. Borges and Fabrício Enembreck

Abstract: This paper proposes a learning model for decision-making problems using intelligent agent technologies combined with instance-based machine learning techniques. Our learning model is applied to a real case to support the daily decisions of a poultry farmer. The agent of the system is used to generate action policies in order to control a set of factors in the daily activities, such as food-to-meat conversion, amount of food to be consumed, time to rest, weight gain, comfort temperature, and water and energy to be consumed. The perception of the agent is ensured by a set of sensors distributed throughout the physical structure of the poultry house. The principal role of the agent is to perform a set of actions in a way that considers aspects such as productivity and profitability without compromising bird welfare. Experimental results have shown that, for the decision-making process in poultry farming, our model is sound and advantageous, and can substantially improve the agent's actions in comparison with equivalent decisions taken by a human specialist.
Download

Paper Nr: 206
Title:

Finding Good Compiler Optimization Sets - A Case-based Reasoning Approach

Authors:

Nilton Luiz Queiroz Junior and Anderson Faustino da Silva

Abstract: Case-Based Reasoning has been used for a long time to solve several kinds of problems. The first Case-Based Reasoning system used to find good compiler optimization sets for an unseen program proposed several strategies to tune the system. However, this work did not indicate the best parametrization, and it evaluated the proposed approach using only kernels. Our paper revisits this work in order to present a detailed analysis of a Case-Based Reasoning system applied in the context of compilers. In addition, we propose new strategies to tune the system. Experiments indicate that Case-Based Reasoning is a good choice for finding compiler optimization sets that outperform a well-engineered compiler optimization level. Our Case-Based Reasoning approach achieves an average performance improvement of 4.84% and 7.59% for cBench and SPEC CPU2006, respectively. Experiments also indicate that Case-Based Reasoning outperforms the approach proposed by Purini and Jain, namely Best10.
Download

Paper Nr: 236
Title:

An Efficient and Topologically Correct Map Generalization Heuristic

Authors:

Mauricio G. Gruppi, Salles V. G. de Magalhães, Marcus V. A. Andrade, W. Randolph Franklin and Wenli Li

Abstract: We present TopoVW, an efficient heuristic for map simplification that deals with a variation of the generalization problem in which the idea is to simplify the polylines of a map without changing the topological relationships between these polylines or between the lines and control points. This process is important for maintaining clarity of cartographic data, avoiding situations such as high density of map features and inappropriate intersections. In practice, high density of features may be represented by cities condensed into a small space on the map, while inappropriate intersections may occur between roads, rivers, and buildings. TopoVW is a strategy based on the Visvalingam-Whyatt algorithm to create simplified geometries with shapes similar to the original map, preserving topological consistency between features in the output. It uses a point ranking strategy in which line points are ranked by their effective area, a metric that measures the impact the removal of a point would have on the geometry of the line. Points with smaller effective area are eliminated from the original line. The method was able to process a map with 4 million line points and 10 million control points in less than 2 minutes on an Intel Core 2 Duo processor.
Download
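The "effective area" ranking at the core of Visvalingam-Whyatt is the area of the triangle a point forms with its two neighbours. A plain (non-topology-aware) sketch of the algorithm follows — the coordinates and threshold are invented, and TopoVW's topological-consistency checks are deliberately omitted:

```python
def effective_area(p, q, r):
    """Area of the triangle formed by point q and its neighbours p and r --
    the Visvalingam-Whyatt metric for the impact of removing q."""
    return abs((q[0] - p[0]) * (r[1] - p[1])
               - (r[0] - p[0]) * (q[1] - p[1])) / 2.0

def simplify(line, min_area):
    """Repeatedly drop the interior point with the smallest effective area
    until every remaining interior point's area is at least `min_area`.
    (Plain Visvalingam-Whyatt; TopoVW adds topological-consistency checks.)"""
    pts = list(line)
    while len(pts) > 2:
        areas = [effective_area(pts[i - 1], pts[i], pts[i + 1])
                 for i in range(1, len(pts) - 1)]
        smallest = min(range(len(areas)), key=areas.__getitem__)
        if areas[smallest] >= min_area:
            break
        del pts[smallest + 1]           # +1: areas[] indexes interior points
    return pts

# The nearly collinear middle point is removed; the sharp corner survives.
simplified = simplify([(0, 0), (1, 0.01), (2, 0), (2, 2)], min_area=0.5)
# → [(0, 0), (2, 0), (2, 2)]
```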

Paper Nr: 237
Title:

Un-restricted Common Due-Date Problem with Controllable Processing Times - Linear Algorithm for a Given Job Sequence

Authors:

Abhishek Awasthi, Jörg Lässig and Oliver Kramer

Abstract: This paper considers the un-restricted case of the Common Due-Date (CDD) problem with controllable processing times. The problem consists of scheduling jobs with controllable processing times on a single machine against a common due-date to minimize the overall earliness/tardiness and the compression penalties of the jobs. The objective of the problem is to find the processing sequence of jobs, the optimal reduction in the processing times of the jobs, and their completion times. In this work, we first present and prove an essential property of the controllable processing time CDD problem for the un-restricted case, along with an exact linear algorithm for optimizing a given job sequence for a single machine with a run-time complexity of O(n), where n is the number of jobs. We then implement our polynomial algorithm in conjunction with a modified Simulated Annealing (SA) algorithm and Threshold Accepting (TA) to obtain the optimal/best processing sequence, comparing the two heuristic approaches as well. The implementation is carried out on appended CDD benchmark instances provided in the OR-library.
Download
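For a fixed job sequence, the earliness/tardiness objective against a common due date can be evaluated in a single O(n) pass over the completion times. The sketch below is a simplified illustration of that evaluation only (start at time zero, no processing-time compression, invented penalty weights) — the paper's linear algorithm additionally optimises the compressions:

```python
def cdd_cost(processing, due_date, alpha, beta):
    """Earliness/tardiness cost of a fixed job sequence against a common
    due date: sum over jobs of alpha_i * earliness_i + beta_i * tardiness_i.
    A plain O(n) evaluation; compression penalties are omitted here."""
    cost, t = 0.0, 0.0
    for p, a, b in zip(processing, alpha, beta):
        t += p                                  # completion time of this job
        cost += a * max(0.0, due_date - t)      # earliness penalty
        cost += b * max(0.0, t - due_date)      # tardiness penalty
    return cost

# Three jobs against common due date 5: completions at 2, 5 and 9,
# so the cost is 1*3 (early) + 0 (on time) + 2*4 (tardy) = 11.
total = cdd_cost([2, 3, 4], due_date=5, alpha=[1, 1, 1], beta=[2, 2, 2])
```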

Paper Nr: 252
Title:

Optimizing Routine Maintenance Team Routes

Authors:

Francesco Longo, Andrea Rocco Lotronto, Marco Scarpa and Antonio Puliafito

Abstract: Simulated annealing is a metaheuristic approach for the solution of optimization problems, inspired by the controlled cooling of a material from a high temperature to a state in which internal defects of the crystals are minimized. In this paper, we apply a simulated annealing approach to the scheduling of geographically distributed routine maintenance interventions. Each intervention has to be assigned to a maintenance team, and the choice among the available teams and the order in which interventions are performed by each team are based on team skills, cost of overtime work, and cost of transportation. We compare our solution algorithm against an exhaustive approach on a real industrial use case and show several numerical results to analyze the effect of the parameters of the simulated annealing on the accuracy of the solution and on the execution time of the algorithm.
Download
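The cooling metaphor translates into a simple acceptance rule: worse candidate solutions are accepted with probability exp(-Δ/T), and T decreases over time. A generic sketch follows, with a toy ordering objective standing in for the real team-routing cost (everything below is illustrative, not the authors' formulation):

```python
import math
import random

def simulated_annealing(initial, neighbour, cost, t0=10.0, cooling=0.95,
                        steps=2000, seed=0):
    """Generic simulated annealing: accept worse moves with probability
    exp(-delta / T) while the temperature T cools geometrically."""
    rng = random.Random(seed)
    current = best = initial
    t = t0
    for _ in range(steps):
        candidate = neighbour(current, rng)
        delta = cost(candidate) - cost(current)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            current = candidate
            if cost(current) < cost(best):
                best = current
        t *= cooling
    return best

# Toy scheduling flavour: order five "interventions" to minimise inversions.
def swap_two(order, rng):
    i, j = rng.sample(range(len(order)), 2)
    order = list(order)
    order[i], order[j] = order[j], order[i]
    return order

inversions = lambda o: sum(1 for i in range(len(o))
                           for j in range(i + 1, len(o)) if o[i] > o[j])
best = simulated_annealing([4, 2, 0, 3, 1], swap_two, inversions)
# → [0, 1, 2, 3, 4]
```

Early on, the high temperature lets the search cross cost barriers; as T shrinks, the rule degenerates into greedy hill-climbing.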

Paper Nr: 262
Title:

Mixed Driven Refinement Design of Multidimensional Models based on Agglomerative Hierarchical Clustering

Authors:

Lucile Sautot, Sandro Bimonte, Ludovic Journaux and Bruno Faivre

Abstract: Data warehouses (DW) and OLAP systems are business intelligence technologies allowing the on-line analysis of huge volumes of data according to users’ needs. The success of DW projects essentially depends on the design phase, where functional requirements meet data sources (mixed design methodology) (Phipps and Davis, 2002). However, when dealing with complex applications, existing design methodologies seem inefficient, since decision-makers define functional requirements that cannot be deduced from data sources (data-driven approach) and/or do not have sufficient application domain knowledge (user-driven approach) (Sautot et al., 2014b). Therefore, in this paper we propose a new mixed refinement design methodology in which the classical data-driven approach is enhanced with data mining to create new dimension hierarchies. A tool implementing our approach is also presented to validate our theoretical proposal.
Download

Short Papers
Paper Nr: 58
Title:

An Algorithm to Compare Computer-security Knowledge from Different Sources

Authors:

Gulnara Yakhyaeva and Olga Yasinkaya

Abstract: In this paper we describe the mathematical apparatus and software implementation of a module of the RiskPanel system, aimed at comparing computer-security knowledge learned from various online sources. To describe this process, we use model-theoretic formalism. The knowledge of a particular computer attack obtained from a single source is formalized as an underdetermined algebraic system, which we call a generalized case. The knowledge base is a set of generalized cases. To implement the knowledge comparison, we construct a generalized fuzzy model, the product of all algebraic systems stored in the database. We consider an algorithm for computing consistent truth values and describe a software implementation of the developed methods. The developed algorithm has polynomial complexity.
Download

Paper Nr: 62
Title:

Genetic Algorithm Combined with Tabu Search in a Holonic Multiagent Model for Flexible Job Shop Scheduling Problem

Authors:

Houssem Eddine Nouri, Olfa Belkahla Driss and Khaled Ghédira

Abstract: The Flexible Job Shop scheduling Problem (FJSP) is an extension of the classical Job Shop scheduling Problem (JSP) presenting an additional difficulty caused by the problem of assigning each operation to one machine out of a set of alternative machines. The FJSP is an NP-hard problem composed of two complementary problems: the assignment and the scheduling problems. In this paper, we propose a combination of a genetic algorithm with tabu search in a holonic multiagent model for the FJSP. Firstly, a scheduler agent applies a genetic algorithm for a global exploration of the search space. Secondly, a local search technique is used by a set of cluster agents to guide the search toward promising regions of the search space and to improve the quality of the final population. To evaluate our approach, numerical tests are made on two sets of well-known benchmark instances from the FJSP literature: Kacem and Brandimarte. The experimental results show that our approach is efficient in comparison to other approaches.
Download

Paper Nr: 114
Title:

CBK-Modes: A Correlation-based Algorithm for Categorical Data Clustering

Authors:

Joel Luis Carbonera and Mara Abel

Abstract: Categorical data sets are often high-dimensional. For handling the high-dimensionality in the clustering process, some works take advantage of the fact that clusters usually occur in a subspace. In soft subspace clustering approaches, different weights are assigned to each attribute in each cluster, measuring their respective contributions to the formation of each cluster. In this paper, we adopt an approach that uses the correlation among categorical attributes for measuring their relevance in clustering tasks. We use this approach for developing CBK-Modes (Correlation-based K-modes), a soft subspace clustering algorithm that extends the basic k-modes by using the correlation-based approach for measuring the relevance of the attributes. We conducted experiments on five real-world datasets, comparing the performance of our algorithm with five state-of-the-art algorithms using three well-known evaluation metrics: accuracy, f-measure and adjusted Rand index. The results show that CBK-Modes outperforms the algorithms considered in the evaluation with respect to these metrics.
Download
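The soft-subspace idea — one weight per attribute per cluster inside a k-modes matching dissimilarity — can be sketched as follows. This is a generic weighted matching distance with invented modes and weights; it does not reproduce CBK-Modes' correlation-based weight computation:

```python
def weighted_mismatch(obj, mode, weights):
    """Soft-subspace k-modes dissimilarity: a weighted count of attribute
    mismatches, where each cluster keeps one weight per attribute."""
    return sum(w for x, m, w in zip(obj, mode, weights) if x != m)

def assign(objects, modes, weights):
    """Assign each categorical object to the mode with minimal weighted
    mismatch, using the destination cluster's own attribute weights."""
    return [min(range(len(modes)),
                key=lambda k: weighted_mismatch(o, modes[k], weights[k]))
            for o in objects]

# Two clusters over 3 categorical attributes; cluster 0 trusts attribute 0 most.
modes = [('a', 'x', 'p'), ('b', 'y', 'q')]
weights = [(0.6, 0.2, 0.2), (0.3, 0.4, 0.3)]
clusters = assign([('a', 'y', 'q'), ('a', 'x', 'q')], modes, weights)
# → [1, 0]: the first object matches cluster 1 on its two heaviest
#   attributes; the second matches cluster 0 everywhere but attribute 2.
```

In a full algorithm, the weights would be re-estimated each iteration (in CBK-Modes, from attribute correlations) and the modes recomputed from the assigned objects.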

Paper Nr: 205
Title:

Decision Tree Transformation for Knowledge Warehousing

Authors:

Rim Ayadi, Yasser Hachaichi, Saleh Alshomrani and Jamel Feki

Abstract: Explicit knowledge extracted from data, formalized tacit knowledge from experts, or even knowledge existing in business sources may come in several heterogeneous formal representations and structures: rules, models, functions, etc. However, a knowledge warehouse should resolve this structural heterogeneity before storing knowledge, which requires specific harmonization tasks. This paper first presents our proposed definition and architecture of a knowledge warehouse, and then presents some languages for knowledge representation, in particular the MOT (Modeling with Object Types) language. In addition, we suggest a metamodel for MOT and a metamodel for the explicit knowledge obtained using the decision tree technique. As we aim to represent knowledge having different modeling formalisms in MOT, as a unified model, we suggest a set of transformation rules that ensure the move from the decision tree source model to the MOT target model. This work is still in progress; it is currently being completed with transformations for additional formalisms.
Download

Paper Nr: 273
Title:

Multi-agent Modelling for a Regulation Support System of Public Transport

Authors:

Nabil Morri, Sameh Hadouaj and Lamjed Ben Said

Abstract: The increasing cost of private transport and the rising pollution of the environment pose serious societal, economic and environmental problems. Public transport has become a major challenge of collective daily life. However, to encourage people to use a public transport system, the service offered has to be of good quality. This paper provides effective solutions to improve the quality of the public transport service provided to users. We present a Regulation Support System of Public Transport (RSSPT), based on a multi-agent approach, that allows supervising and regulating multimodal public transport. Its purpose is to adjust the vehicle schedules when several disturbances occur simultaneously. The adjustment is based on actual traffic conditions and covers the major criteria that have to be optimized in traffic regulation: punctuality, regularity and correspondence.
Download

Paper Nr: 283
Title:

Improving Online Marketing Experiments with Drifting Multi-armed Bandits

Authors:

Giuseppe Burtini, Jason Loeppky and Ramon Lawrence

Abstract: Restless bandits model the exploration vs. exploitation trade-off in a changing (non-stationary) world. Restless bandits have been studied both in the context of continuously-changing (drifting) and change-point (sudden) restlessness. In this work, we study specific classes of drifting restless bandits selected for their relevance to modelling an online website optimization process. The contribution of this work is a simple, feasible weighted least squares technique capable of utilizing contextual arm parameters while treating the parameter space as non-stationary, drifting within reasonable bounds. We produce a reference implementation, then evaluate and compare its performance in several different true world states, finding experimentally that performance is robust to time-drifting factors similar to those seen in many real world cases.
Download
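A simple way to make bandit estimates track a drifting world — much simpler than, but in the same spirit as, the paper's weighted least squares technique — is to discount old rewards exponentially so recent evidence dominates. The sketch below is my own illustration (epsilon-greedy selection, invented conversion rates), not the authors' algorithm:

```python
import random

class DiscountedBandit:
    """Epsilon-greedy arm selection over exponentially discounted reward
    means, so the estimates follow a drifting (restless) environment."""

    def __init__(self, n_arms, gamma=0.95, epsilon=0.1, seed=0):
        self.gamma, self.epsilon = gamma, epsilon
        self.rng = random.Random(seed)
        self.counts = [0.0] * n_arms   # discounted pull counts
        self.sums = [0.0] * n_arms     # discounted reward sums

    def select(self):
        if self.rng.random() < self.epsilon or not any(self.counts):
            return self.rng.randrange(len(self.counts))
        return max(range(len(self.counts)),
                   key=lambda a: self.sums[a] / self.counts[a]
                   if self.counts[a] else 0.0)

    def update(self, arm, reward):
        for a in range(len(self.counts)):   # old evidence fades on every arm
            self.counts[a] *= self.gamma
            self.sums[a] *= self.gamma
        self.counts[arm] += 1.0
        self.sums[arm] += reward

# After the true best arm flips from 0 to 1, the discounted means follow.
bandit = DiscountedBandit(2)
for t in range(400):
    arm = bandit.select()
    best_arm = 0 if t < 200 else 1          # drift: conversion rates swap
    bandit.update(arm, 1.0 if arm == best_arm else 0.0)
```

Without the discount, the pre-drift rewards of arm 0 would dominate its mean indefinitely and the change-over would never be detected.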

Paper Nr: 300
Title:

A Knowledge based Decision Making Tool to Support Cloud Migration Decision Making

Authors:

Abdullah Alhammadi, Clare Stanier and Alan Eardley

Abstract: Cloud computing represents a paradigm shift in the way that IT services are delivered within enterprises. Cloud computing promises to reduce the cost of computing services, provide on-demand computing resources and a pay-per-use model. However, there are numerous challenges for enterprises planning to migrate to a cloud computing environment, as cloud computing impacts multiple aspects of enterprises and the implications of migration to the cloud vary between enterprises. This paper discusses the development of a holistic model to support strategic decision making for cloud computing migration. The proposed model uses a hybrid approach to support decision making, combining the Analytic Hierarchy Process (AHP) with Case Based Reasoning (CBR) to provide a knowledge-based decision support model, and takes into account five factors identified from the secondary research as covering all aspects of cloud migration decision making. The paper discusses the different phases of the model and describes the next stage of the research, which will include the development of a prototype tool and use of the tool to evaluate the model in a real life context.
Download

Paper Nr: 317
Title:

A Cognition-inspired Knowledge Representation Approach for Knowledge-based Interpretation Systems

Authors:

Joel Luis Carbonera and Mara Abel

Abstract: We propose a hybrid approach for knowledge representation that combines classic representations (such as rules and ontologies) with cognitively plausible representations, such as prototypes and exemplars. The resulting framework can be used for developing knowledge-based systems that combine knowledge-driven and data-driven techniques. We also present how this approach can be used for developing an application for interpretation of depositional processes in Petroleum Geology.
Download

Paper Nr: 319
Title:

A Game-theory based Model for Analyzing E-marketplace Competition

Authors:

Zheng Jianya, Li Weigang and Daniel L. Li

Abstract: The current e-marketplace provides many tools and benefits that bring sellers and buyers together and promote trading within cyberspace. Due to certain unique features of e-commerce, competition also takes on characteristics different from those found in traditional commerce. This paper analyses both the competition between sellers and the stable state of the e-marketplace through a proposed model that applies evolutionary game theory. The purpose is to better understand these relations and the current state of the e-marketplace, as well as to provide a tool for sellers to increase their profits. Here, the sellers are divided into four categories based on their scale (Large, Small) and sales strategy (Aggressive, Conservative). By developing the Asymmetrical Competition Game Model in E-Marketplace (ACGME) and solving for its Nash equilibrium, we analyze the composition of different sellers and how this proportion is affected by asymmetry among sellers. Finally, we conduct a simulation experiment to verify the effectiveness of our proposed model.
Download

Paper Nr: 36
Title:

Approaches to Enhancing Efficiency of Production Management on Shop Floor Level

Authors:

E. M. Abakumov and S. B. Kazanbekov

Abstract: The paper presents several approaches to enhancing the efficiency of management of multiproduct single-unit and small-batch discrete production at the shop-floor level, namely optimization during job scheduling, prediction of schedule execution, and support of decision-making during assignment of activity executors. For each approach, a problem statement with an example, a potential solution method, and the benefits the shop floor gains from using the approach are given.
Download

Paper Nr: 100
Title:

Monitoring and Diagnosis of Faults in Tests of Rational Agents based on Condition-action Rules

Authors:

Francisca Raquel de V. Silveira, Gustavo Augusto L. de Campos and Mariela Cortés

Abstract: Among the theoretical references available to guide the design of agents, there are few testing techniques to validate them. This validation depends on the selected test cases, which should generate information that identifies the components of the tested agent that are causing unsatisfactory performance. In this paper, we propose an approach that contributes to the testing of these programs by incorporating the ProMon agent into the testing process of rational agents. This agent monitors the testing and diagnoses the faults present in a tested agent, identifying to the designer the information-processing subsystem of the agent that is causing the faults. The first experiments evaluate the approach by selecting test cases for simple reactive agents with internal states in partially observable environments.
Download

Paper Nr: 102
Title:

Combining Heuristic and Utility Function for Fair Train Crew Rostering

Authors:

Ademir Aparecido Constantino, Candido Ferreira Xavier de Mendonça, Antonio Galvão Novaes and Allainclair Flausino dos Santos

Abstract: In this paper we address the problem of defining a work assignment for train drivers within a monthly planning horizon with an even distribution of satisfaction, based on a real-world problem. We propose a utility function to measure individual satisfaction, and a heuristic approach to construct and assign the rosters. In the first phase we apply stated preference methods to devise the utility function. In the second phase we apply a heuristic algorithm that constructs and assigns the rosters based on this utility function. The heuristic algorithm constructs a cyclic roster in order to find the minimum number of train drivers required for the job. The cyclic roster generated is divided into different truncated rosters, assigned to the drivers in such a way that satisfaction is distributed as evenly as possible among all drivers. Computational tests are carried out using a real data instance from a Brazilian railway company. Our experiments indicate that the proposed method is effective in reducing the discrepancies between the individual rosters.
Download

Paper Nr: 126
Title:

Hybrid-Intelligent Mobile Indoor Location Using Wi-Fi Signals - Location Method Using Data Mining Algorithms and Type-2 Fuzzy Logic Systems

Authors:

Manuel Castañón-Puga, Abby Salazar-Corrales, Carelia Gaxiola-Pacheco, Guillermo Licea, Miguel Flores-Parra and Eduardo Ahumada-Tello

Abstract: Technology with situational awareness needs a lot of information about the environment to execute the correct task at the correct moment. The user’s location is typical of the information needed to achieve this goal. This work proposes a mobile application that enables the indoor location of smartphones using the infrastructure provided by Wireless Local Area Networks. This infrastructure goes beyond GPS (Global Positioning System), whose signal is weak or not available indoors. The application uses an alternative and unconventional method for indoor location based on Wi-Fi RSSI fingerprinting, together with an estimation based on Type-2 fuzzy inference systems provided by the developed framework JT2FIS. Wi-Fi fingerprinting creates a radio map of a given area based on the RSSI data from several access points (APs) and generates a set of RSSI data for a given zone location. Data mining is then used to cluster the obtained data set and to generate the structure of a Type-2 Mamdani or Takagi-Sugeno Fuzzy Inference System; new RSSI values are then fed into the Type-2 Fuzzy Inference System to obtain an estimate of the user's zone location.
Download
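The fingerprinting step — match an observed RSSI vector against a calibration radio map — is often illustrated with a plain nearest-neighbour baseline. The sketch below is exactly that baseline with invented RSSI values, not the paper's pipeline, which instead clusters the fingerprints and feeds them into a Type-2 fuzzy inference system:

```python
from collections import Counter
from math import sqrt

def knn_zone(fingerprints, observed, k=3):
    """Nearest-neighbour baseline for Wi-Fi RSSI fingerprinting: rank the
    calibration fingerprints by Euclidean distance in RSSI space and vote
    among the zones of the k closest ones."""
    ranked = sorted(fingerprints,
                    key=lambda f: sqrt(sum((a - b) ** 2
                                           for a, b in zip(f[0], observed))))
    votes = Counter(zone for _, zone in ranked[:k])
    return votes.most_common(1)[0][0]

# Radio map: RSSI vectors (dBm) from 3 access points, labelled by zone.
radio_map = [((-40, -70, -80), 'lab'), ((-42, -68, -79), 'lab'),
             ((-75, -45, -60), 'hall'), ((-78, -44, -62), 'hall')]
zone = knn_zone(radio_map, (-41, -69, -81))   # → 'lab'
```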

Paper Nr: 336
Title:

Radial Basis Function Neural Network Receiver for Wireless Channels

Authors:

Pedro Henrique Gouvêa Coelho and Fabiana Mendes Cesario

Abstract: Artificial Neural Networks have been widely used in decision devices and typical signal processing applications. This paper proposes an equalizer for wireless channels using radial basis function neural networks. An equalizer is a device used in communication systems to compensate for the non-ideal characteristics of the channel. The main motivation for such an application is the networks' capability to form complex decision regions, which is of paramount importance for estimating the transmitted symbols efficiently. The proposed equalizer is trained by means of an extended Kalman filter, guaranteeing fast training of the radial basis function neural network. Simulation results are presented comparing the proposed equalizer with traditional ones, indicating the efficiency of the scheme.
Download
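The decision regions mentioned above come from the radial basis function network's forward pass: a weighted sum of Gaussian kernels centred in the channel-output space, thresholded to recover the symbol. A minimal sketch (invented centres and weights; the extended-Kalman-filter training is not shown):

```python
from math import exp

def rbf_output(x, centers, widths, weights, bias=0.0):
    """Forward pass of a radial basis function network with Gaussian
    kernels: y = bias + sum_j w_j * exp(-||x - c_j||^2 / (2 * s_j^2))."""
    y = bias
    for c, s, w in zip(centers, widths, weights):
        dist2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        y += w * exp(-dist2 / (2.0 * s * s))
    return y

def decide(y):
    """Hard decision for binary antipodal (+1/-1) symbols."""
    return 1 if y >= 0 else -1

# Two centres, one per symbol class, in a 2-tap channel-output space;
# a received sample near the +1 centre is decided as +1.
centers = [(1.0, 1.0), (-1.0, -1.0)]
symbol = decide(rbf_output((0.9, 1.2), centers,
                           widths=[1.0, 1.0], weights=[1.0, -1.0]))   # → 1
```

Because each kernel responds only near its centre, the network can carve out the curved decision boundaries that a linear equalizer cannot.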

Area 3 - Information Systems Analysis and Specification

Full Papers
Paper Nr: 28
Title:

R2BA - Rationalizing R2RML Mapping by Assertion

Authors:

Rita Berardi, Vania Vidal and Marco A. Casanova

Abstract: The W3C RDB2RDF working group proposed R2RML as a standard mapping language that defines how to publish data stored in relational databases as RDF triples. However, R2RML mappings are sometimes difficult to understand, which may affect users’ understanding of the transformations the original data undergo before being published as RDF triples. To address this problem, this paper extends a semi-automatic method for defining R2RML mappings to include design rationale, thereby helping publishers document the design process and final users consume the published data. The paper also proposes using the captured design rationale to enrich the representation of the original data in RDF, which ontology matching algorithms may use to find potential links to other existing vocabularies, thereby promoting interoperability.
Download

Paper Nr: 32
Title:

Using Technical-Action-Research to Validate a Framework for Authoring Software Engineering Methods

Authors:

Miguel Morales-Trujillo, Hanna Oktaba and Mario Piattini

Abstract: The validation of proposals has become a fundamental part of the creation of knowledge in Software Engineering. Initiatives like SEMAT have highlighted the need to base the correctness, usefulness and applicability of Software Engineering theories and practices on solid evidence. This paper presents the validation process used for KUALI-BEH, a proposal that became part of an OMG standard. The validation strategy applied was the result of integrating Technical-Action-Research and Case Study methods. After three years of work, we can conclude that TAR is a valuable research method, emphasizing that the main advantages of Technical-Action-Research are continuous feedback and the validation of an artifact, in this case KUALI-BEH, in a real context.
Download

Paper Nr: 43
Title:

Fostering Reuse in Choreography Modeling Through Choreography Fragments

Authors:

Andreas Weiß, Vasilios Andrikopoulos, Michael Hahn and Dimka Karastoyanova

Abstract: The concept of reuse in process models is extensively studied in the literature. Sub-processes, process templates, process variants, and process reference models are employed as reusable elements for process modeling. Additionally, the notion of process fragments has been introduced to capture parts of a process model and store them for later reuse. In contrast, concepts for reuse of processes that cross the boundaries of organizations, i.e., choreographies, have not yet been studied in the appropriate level of detail. In this paper, we introduce the concept of choreography fragments as reusable elements for choreography modeling. Choreography fragments can be extracted from choreography models, adapted, stored, and inserted into new models. We provide a formal model for choreography fragments and identify a set of patterns constituting frequently occurring meaningful choreography fragments.
Download

Paper Nr: 104
Title:

A User-centered Approach for Modeling Web Interactions Using Colored Petri Nets

Authors:

Taffarel Brant-Ribeiro, Rafael Araújo, Igor Mendonça, Michel S. Soares and Renan Cattelan

Abstract: Interactions are communication acts that take place between at least two agents and result in information interchange. To represent these activities, formal methods can be used to model interaction flows, and Colored Petri Nets (CPNs) are a handy formal language with a graphical notation for modeling systems. This paper introduces wiCPN (Web Interaction Modeling Using Colored Petri Nets), a language based on CPNs for representing Web interactions with an improved notation. Our proposal is first presented with its refinements over traditional CPNs. Next, we apply the approach to model the interaction of the Classroom eXperience (CX) Web front-end, a real u-learning environment. As CX is an educational system developed to assist instructors and students during academic activities, we verified the developed model’s reachability to ensure it was able to represent users’ different access levels. We also validated our proposal with user experiments, comparing it with UML. Our model represented CX’s interaction correctly, considering user access levels and maintaining an understandable notation. Results indicate advantages of wiCPN over UML for modeling interactive interfaces. By combining the strengths of Petri Nets with a higher-level graphical notation, wiCPN provided a better understanding of the model, representing interaction in a structured and intuitive way.
Download

Paper Nr: 110
Title:

Deriving a Data Model from a Set of Interrelated Business Process Models

Authors:

Estrela F. Cruz, Ricardo J. Machado and Maribel Y. Santos

Abstract: Business process modeling and management approaches are increasingly used and disseminated among organizations as a means of optimizing and streamlining business activities. A business process model identifies the activities, resources and data involved in the creation of a product or service, containing much useful information that can be used to create a data model for the supporting software system. A data model is one of the most important models used in software development. Usually an organization deals with several business processes; as a consequence, a software product does not usually support only one business process, but rather a set of business processes. This paper proposes an approach to generate a data model based on a set of interrelated business processes modeled in the BPMN language. The approach allows aggregating into one data model all the information about persistent data that can be extracted from the set of business process models, serving as a basis for software development.
Download

Paper Nr: 123
Title:

SOAQM: Quality Model for SOA Applications based on ISO 25010

Authors:

Joyce M. S. França and Michel S. Soares

Abstract: Service-Oriented Architecture (SOA) has been widely adopted to develop distributed applications, with the promise of legacy system integration and better agility in building applications by reusing services. Considering the important role of SOA in organizations, quality should be treated as a key issue. By observing the works proposed in the literature, it is possible to notice that there is a need for the development of a specific quality model for SOA based on the latest ISO 25010. One of the proposals of this paper is to analyze which important contributions were aggregated into the new ISO 25010 regarding SOA applications when compared with ISO 9126. This paper provides the definition of a specific quality model for SOA based on the quality attributes defined by ISO 25010. As a result, most quality attributes proposed by ISO 25010 may be applicable to SOA to some degree. However, some of these quality attributes should be adapted when applied to SOA projects.
Download

Paper Nr: 131
Title:

Top-down Feature Mining Framework for Software Product Line

Authors:

Yutian Tang and Hareton Leung

Abstract: Software product line engineering is regarded as a promising approach to generate tailored software products by referencing shared software artefacts. However, converting a software legacy into a product line is extremely difficult, given the complexity and risk of the task and insufficient tool support. To cope with this, in this paper we propose a top-down feature-mining framework to help developers extract code fragments for the features of concern. Our work aims to fulfill the following targets: (1) identify features at a fine granularity, (2) locate code fragments for a concerned feature hierarchically and consistently, and (3) combine program analysis techniques and feature location strategies to improve mining performance. From our preliminary case studies, the top-down framework can effectively locate features, performs as well as the Christians approach, and performs better than the topology feature location approach.
Download

Paper Nr: 143
Title:

Scoping Automation in Software Product Lines

Authors:

Andressa Ianzen, Rafaela Mantovani Fontana, Marco Antonio Paludo, Andreia Malucelli and Sheila Reinehr

Abstract: Software product lines (SPL) are recognized as a way to increase the quality as well as to reduce the cost, delivery time, and risks of software products. Scoping, an essential step in SPLs, requires the time and effort of domain experts; thus, automation initiatives at this stage are invaluable. This paper presents a semi-automatic approach for defining scope in SPLs. A method is proposed for the semi-automatic identification and classification of product features, along with an approach for evaluating the variabilities and commonalities between the established line and a new product. Experiments conducted to evaluate the approach verify the benefits of the semi-automation of scoping, including a reduction of the time and human effort involved.
Download

Paper Nr: 146
Title:

A Model-driven Approach to Transform SysML Internal Block Diagrams to UML Activity Diagrams

Authors:

Marcel da Silva Melo, Joyce M. S. França, Edson Oliveira Jr. and Michel S. Soares

Abstract: The design of current software systems must take care not only of software but also of other elements, such as processes, hardware and flows. For the software design counterpart, for both structural and dynamic views, UML is currently widely applied. As UML lacks proper means to model system elements, the Systems Modeling Language (SysML), a UML profile, was introduced by the OMG. The proposal of this paper is to create a semi-automatic transformation that generates a UML Activity diagram from a SysML Internal Block diagram. The hypothesis is that, by using parts, the main block and its flows, it is possible to create such a transformation while preserving all information. A mapping describing the relationship between the two diagrams and a semi-automatic model-driven transformation using the ATL language are proposed. The approach is applied to a Distiller system for purifying dirty water, a real-world example described by the SysML team.
Download

Paper Nr: 148
Title:

An Empirical Study about the Influence of Project Manager Personality in Software Project Effort

Authors:

Daniel Tadeu Martínez C. Branco, Edson Cesar Cunha de Oliveira, Leandro Galvão, Rafael Prikladnicki and Tayana Conte

Abstract: Project effort is a main concern in software organizations. The project budget is derived from the project effort, which in turn is based on the software engineers’ effort cost. The project manager is responsible for planning and controlling this effort estimation. Some studies report how the project manager can influence project success, especially when considering the project manager's personality. This research aims to evaluate the influence of project manager personality and teamwork behavior on the project’s effort deviation. A case study was performed with 65 real projects collected from a software company dedicated to developing software projects for its local government. Unlike previous studies, our results show no statistically significant influence of project manager personality, assessed by the MBTI test, on the project’s effort deviation. However, our results show that the project manager's teamwork behavior, assessed by Belbin’s BTRSPI, has a statistically significant influence on the project’s effort deviation.
Download

Paper Nr: 151
Title:

Fixture - A Tool for Automatic Inconsistencies Detection in Context-aware SPL

Authors:

Paulo Alexandre da Silva Costa, Fabiana Gomes Marinho, Rossana Maria de Castro Andrade and Thalisson Oliveira

Abstract: Software Product Lines (SPLs) have been used to support the development of context-aware applications, which use context information to perform customized services aiming to satisfy users' needs or environment restrictions. In this scenario, feature models have also been used to guide the product adaptation process and to enable systematic reuse. However, a side effect of using those models is the accidental inclusion of inconsistencies that may lead to several errors in the adapted products. Moreover, context-aware applications are exposed to a flow of contextual changes, which increases the occurrence and effects of such errors. Therefore, mechanisms to check for inconsistencies are necessary before they become errors in the adapted product. Nevertheless, manual checking is highly error-prone. In particular, there are inconsistencies that can be detected only when they arise due to a specific adaptation. For these reasons, it is essential to identify errors in the context-aware feature model before they yield incorrect adapted products. In this work, we present an Eclipse-based tool that supports the software engineer in the design of context-aware feature models and provides a simulation process that allows anticipating inconsistencies related to the adaptations.
Download

Paper Nr: 154
Title:

Using EVOWAVE to Analyze Software Evolution

Authors:

Rodrigo Magnavita, Renato Novais and Manoel Mendonça

Abstract: Software evolution produces large amounts of data that software engineers need to understand for their daily activities. The use of software visualization constitutes a promising approach to help them comprehend multiple aspects of the evolving software. However, portraying all the data is not an easy task, as there are many dimensions to the data (e.g., time, files, properties) to be considered. This paper presents a new software visualization metaphor inspired by concentric waves, which gives information about the software evolution at different levels of detail. This new metaphor is able to portray large amounts of data and may also be used to consider different dimensions of the data. It uses the concepts of the formation of concentric waves to map software evolution data generated during the wave formation life cycle. The metaphor is useful for exploring and identifying certain patterns in the software evolution. To evaluate its applicability, we conducted an exploratory study to show how the visualization can quickly answer different questions asked by software engineers when evolving their software.
Download

Paper Nr: 158
Title:

Semantic Annotation of Images Extracted from the Web using RDF Patterns and a Domain Ontology

Authors:

Rim Teyeb Jaouachi, Mouna Torjmen Khemakhem, Nathalie Hernandez, Ollivier Haemmerle and Maher Ben Jemaa

Abstract: Semantic annotation of web resources is of interest to several research communities. The use of this technique improves the retrieval process because it allows one to pass from the traditional web to the semantic web. In this paper, we propose a new method for semantically annotating web images. The main originality of our approach lies in the use of RDF (Resource Description Framework) patterns to guide the annotation process with contextual factors of web images. Each pattern presents a group of information to instantiate from contextual factors related to the image to be annotated. We compared the generated annotations with annotations made manually. The results we obtained are encouraging.
Download

Paper Nr: 163
Title:

What Are the Main Characteristics of High Performance Teams for Software Development?

Authors:

Alessandra C. S. Dutra, Rafael Prikladnicki and Tayana Conte

Abstract: This paper presents a discussion of current training approaches to software development and their relation to high performance team formation. We performed an ad hoc literature review of training approaches in Software Engineering and a systematic literature review of the characteristics of high performance software development teams. Based on the findings, we reflect on the challenges of training high performance teams for software development projects and on the extent to which current training approaches overcome such challenges.
Download

Paper Nr: 170
Title:

Applying Knowledge Codification in a Post-mortem Process - A Practical Experience

Authors:

Erivan Souza da Silva Filho, Davi Viana and Tayana Conte

Abstract: In information systems, experiences acquired in projects can result in new knowledge for people or the organization. Knowledge Management treats such experiences as a significant resource for the organization. Through Post-mortem Analysis, people can recall experiences and situations that they had during a software development project. To support such analysis, the PABC-pattern structure proposes codifying knowledge, assisting practitioners in registering key elements to facilitate the understanding of that experience. This paper proposes a Post-mortem Analysis process based on the KJ method. We integrated the PABC-pattern approach as a final product in order to record the experiences and gathered information.
Download

Paper Nr: 181
Title:

Engineering and Evaluation of Process Alternatives in Tactical Logistics Planning

Authors:

Michael Glöckner, Stefan Mutke and André Ludwig

Abstract: The objective of tactical planning in logistics is the engineering and evaluation of processes within a given set of possible alternatives. Due to outsourcing and a division of labor, a high number of participants, available services and thus possible process alternatives arises within logistics networks. The additional wide range of service description and annotation methods results in a complex planning process. In order to support planning, a semi-automated approach is presented in this paper that is based on a combined catalog and construction system (for engineering) and a generic simulation approach (for evaluation), which together are able to handle the variety of description and annotation methods. The basic concepts are presented and afterwards connected through a model-driven approach in order to make them compatible with each other. Finally, a method is developed to foster a semi-automated engineering and evaluation of process alternatives.
Download

Paper Nr: 187
Title:

Testing M2T Transformations - A Systematic Literature Review

Authors:

André Abade, Fabiano Ferrari and Daniel Lucrédio

Abstract: Context: Model-Driven Development (MDD) is about to become a reality in the development of enterprise information systems due to its benefits, such as reduction of development and maintenance costs, and support for controlled evolution. Consequently, testing model transformations, considering their high complexity, particularly regarding Model-to-Text (M2T) transformations, plays a key role in increasing confidence in the produced artefacts. Objective: this paper aims to characterize testing approaches and test selection criteria that focus on M2T transformations, in particular white-box approaches. Method: the objective is accomplished through a systematic literature review. We defined research questions regarding the testing of M2T transformations and extracted and analyzed data from a set of primary studies. Results: we identified a variety of incipient white-box testing approaches for this context. They mostly rely on mapping strategies and traceability of artefacts. Most of them focus on the well-formedness and correctness of models and source code, although we noticed a change of focus in the most recent research. Conclusions: current solutions for testing M2T transformations have begun to move beyond the initial focus on well-formedness and correctness of models. Some approaches involve techniques that establish coverage criteria for testing, whereas others try to address testability across many transformation languages.
Download

Paper Nr: 238
Title:

OnTheme/Doc - An Ontology-based Approach for Crosscutting Concern Identification from Software Requirements

Authors:

Paulo Afonso Parreira Júnior and Rosângela Aparecida Dellosso Penteado

Abstract: Context: Aspect-Oriented Requirements Engineering (AORE) is a research field that provides the most appropriate strategies for the identification, modularization and composition of CrossCutting Concerns (CCC). Problem: in recent years, researchers have developed several AORE approaches. However, some experimental studies have found problems with the accuracy of these approaches regarding CCC identification recall. This occurs mainly due to: (i) the users of these approaches lacking knowledge about the crosscutting nature of CCC; and (ii) the lack of resources to support users of these approaches during CCC identification. Goal: this work aims to improve the recall and precision of a well-known AORE approach, called Theme/Doc, with regard to CCC identification. To do this, we propose an extension of this approach, called OnTheme/Doc, in which the CCC identification activity is supported by ontologies. Experimental results: the data obtained from an experimental study performed on OnTheme/Doc showed a significant increase in recall, without negative effects on the precision and execution time of the approach.
Download

Paper Nr: 242
Title:

Unveiling the Architecture and Design of Android Applications - An Exploratory Study

Authors:

Edmilson Campos, Uirá Kulesza, Roberta Coelho, Rodrigo Bonifácio and Lucas Mariano

Abstract: This work presents an exploratory study whose goal was to investigate the architectural characteristics of Android applications. We selected twelve popular open-source applications available on the official Android store for analysis. We then applied reverse engineering techniques to each target application in order to investigate three main aspects: (i) the architecture of each application; (ii) the use of design patterns; and (iii) exception handling policies. Support tools were used to identify dependencies between the architectural components implemented in each target application and to present those dependencies graphically. Based on this analysis, we present a qualitative study of the extracted architectures. One of the outcomes consistently observed during this study was an overview of the main architectural choices adopted by Android developers, resulting in the formulation of a preliminary conceptual architecture for Android applications.
Download

Paper Nr: 254
Title:

JOPA: Accessing Ontologies in an Object-oriented Way

Authors:

Martin Ledvinka and Petr Křemen

Abstract: Accessing OWL ontologies programmatically from complex IT systems brings many problems stemming from ontology evolution, their open-world nature and expressiveness. This paper presents the Java OWL Persistence API (JOPA), a persistence layer that allows using the object-oriented paradigm for accessing semantic web ontologies. Compared to other approaches, it supports validation of ontological assumptions at the object level, advanced caching, a transactional approach, unification and optimization of repository access through the OntoDriver component, as well as accessing multiple repository contexts at the same time. Additionally, we present a complexity analysis of OntoDriver operations that allows optimizing object-oriented access performance for the underlying storage mechanisms. Last but not least, we compare our object-oriented solution to the low-level Sesame API in terms of efficiency.
Download

Paper Nr: 302
Title:

Monitoring the Development of University Scientific Schools in University Knowledge Management

Authors:

Gulnaz Zhomartkyzy and Tatyana Balova

Abstract: This paper proposes a technological approach to university scientific knowledge management that integrates an ontology-based knowledge model and methods for the intellectual processing of university scientific resources. The process-oriented On-To-Knowledge methodology is used as the basis for university scientific knowledge management. Models and methods of university scientific knowledge management have been studied. The developed model of a specialist, which reflects the level of scientific activity productivity and an overall assessment of the employee's scientific activity, is described. A specialist’s competence in knowledge areas is based on the processing of information resources. The approach to the identification of university scientific schools, based on clustering the common interests of the university academic community, is also described.
Download

Paper Nr: 329
Title:

Using the Dependence Level Among Requirements to Priorize the Regression Testing Set and Characterize the Complexity of Requirements Change

Authors:

André Di Thommazo, Kamilla Camargo, Elis Hernandes, Gislaine Gonçalves, Jefferson Pedro, Anderson Belgamo and Sandra Fabbri

Abstract: Background: When software requirements change, other phases of software development are impacted and, frequently, extra effort is needed to adjust previously developed artifacts to the new features or changes. However, if the development team has requirements traceability, this extra effort might not be an issue. An example is the software quality team, which needs to define effective test cycles for each software release. Goal: This paper presents an approach based on the dependence level among requirements to support regression test prioritization and to identify the real impact of requirement changes. Method: The designed approach is based on the automatic definition of a Requirements Traceability Matrix with three different dependence levels. Moreover, the dependence between requirements and test cases is also defined. A case study in a real software development industry environment was performed to assess the approach. Results: Identifying the dependence level among requirements has allowed the quality assurance team to prioritize regression tests and, by means of these tests, defects are identified earlier compared with test execution without prioritization. Moreover, the complexity of requirements changes is also identified with the support of the approach. Conclusion: Results show that the definition of dependence levels among requirements provides two contributions: (i) it allows the definition of test prioritization, which makes the regression test cycle more effective, and (ii) it allows the characterization of the impacts of requirements changes, which is commonly requested by stakeholders.
Download

Short Papers
Paper Nr: 44
Title:

Supporting the Validation of Structured Analysis Specifications in the Engineering of Information Systems by Test Path Exploration

Authors:

Torsten Bandyszak, Mark Rzepka, Thorsten Weyer and Klaus Pohl

Abstract: Requirements validation should be carried out early in the development process to assure that the requirements specification correctly reflects stakeholders’ intentions, and to avoid the propagation of defects to subsequent phases. In addition to reviews, early test case creation is a commonly used requirements validation technique. However, manual test case derivation from specifications without formal semantics is costly and requires experience in testing. This paper focuses on Structured Analysis as a semi-formal technique for specifying information system requirements, which is part of the latest requirements engineering curricula and widely accepted practices in business analysis. However, there is insufficient guidance and tool support for creating test cases without the need for formal extensions in early development stages. Functional decomposition, a core concept of Structured Analysis, and the resulting distribution of control flow information complicate the identification of dependencies between system inputs and outputs. We propose a technique for automatically identifying test paths in Structured Analysis specifications. These test paths constitute the basis for defining test cases, and support requirements validation by guiding and structuring the review process.
Download

Paper Nr: 55
Title:

On using Markov Decision Processes to Model Integration Solutions for Disparate Resources in Software Ecosystems

Authors:

Rafael Z. Frantz, Sandro Sawicki, Fabricia Roos-Frantz, Iryna Yevseyeva and Michael Emmerich

Abstract: The software ecosystem of an enterprise is usually composed of a heterogeneous set of applications, databases, documents, spreadsheets, and so on. Such resources are involved in the enterprise’s daily activities by supporting its business processes. As a consequence of market change and enterprise evolution, new business processes emerge and the current ones have to evolve to tackle the new requirements. It is no surprise that different resources may be required to collaborate in a business process. However, most of these resources were devised without taking into account their integration with the others, i.e., they represent isolated islands of data and functionality. Thus, the goal of an integration solution is to enable the collaboration of different resources without changing them or increasing their coupling. The analysis of integration solutions to predict their behaviour and find possible performance bottlenecks is an important activity that contributes to increasing the quality of the delivered solutions. Software engineers usually follow an approach that requires the construction of the integration solution, the execution of the actual integration solution, and the collection of data from this execution in order to analyse and predict its behaviour. This is a costly, risky, and time-consuming approach. This paper discusses the usage of Markov models for the formal modelling of integration solutions, aiming at enabling the simulation of their conceptual models while still in the design phase. By using well-established simulation techniques and tools at an early development stage, this new approach contributes to reducing cost, risk, and development time, and to improving software quality attributes such as robustness, scalability, and maintainability.
Download

Paper Nr: 86
Title:

A Generic Framework for Modifying and Extending Enterprise Modeling Languages

Authors:

Richard Braun and Werner Esswein

Abstract: Conceptual modeling languages are of great importance within information systems management. During the last decade, a small set of commonly used enterprise modeling languages became established and gained broad acceptance in both academia and practice (e.g., BPMN). Due to their dissemination, these languages often need to be extended or adapted for domain-specific or technical requirements. Since most modeling languages provide rather poor extension mechanisms, it is necessary to modify a language meta model directly. However, there is a lack of integrated methodical support for these modifications. Within this position paper, we therefore propose a generic framework for modifying enterprise modeling languages at the meta model level. The framework is divided into the main parts of a modeling language (abstract syntax, concrete syntax, semantics) and the respective operations (add, remove, specify and redefine).
Download

Paper Nr: 112
Title:

Evidence-based SMarty Support for Variability Identification and Representation in Component Models

Authors:

Marcio H. G. Bera, Edson Oliveira Jr. and Thelma E. Colanzi

Abstract: Variability modeling is an essential activity for the success of software product lines. Although the existing literature presents several variability management approaches, there is no empirical evidence of their effectiveness in representing variability at the component level. SMarty is a UML-based variability management approach that currently supports use case, class, activity, sequence and component models. SMarty 5.1 provides a fully compliant UML profile (SMartyProfile) with stereotypes and tagged values, and a process (SMartyProcess) with a set of guidelines on how to apply such stereotypes towards identifying and representing variabilities. At the component level, SMarty 5.1 provides only one stereotype, variable, which merely indicates that some classes of a given component have variability. Such a stereotype is clearly not enough to represent the extent of variability modeling in components, ports, interfaces and operations. Therefore, this paper presents how the improved version (5.2) of SMarty can identify and represent variability on such component-related elements, as well as an experimental study that provides evidence of SMarty's effectiveness.
Download

Paper Nr: 120
Title:

Analyzing Distributions of Emails and Commits from OSS Contributors through Mining Software Repositories - An Exploratory Study

Authors:

Mário Farias, Renato Novais, Paulo Ortins, Methanias Colaço and Manoel Mendonça

Abstract: Context: Distributed software development is a modern practice in the software industry. This is especially true in the Open Source Software (OSS) community, where developers are normally distributed around the world. In addition, most of them work for free and with little or no coordination. Understanding developers' practices on those projects may guide communities to successfully manage their projects. Goal: We mined two repositories of the Apache Httpd project in order to gather information about its developers’ behavior. Method: We developed an approach to cross-reference data gathered from the mailing list and the source code repository through mining techniques. The approach uses software visualization to analyze the mined data. We conducted an experimental evaluation of the approach to assess behavioral patterns in the OSS development community. Results: Our results show behavior patterns of Apache developers. In addition, we deepen the analysis of the Preferred Representational System of four top developers presented by Colaço et al. (Colaço et al., 2010). Conclusion: The use of data mining and software visualization to analyze data from different sources can reveal important properties of development processes.
Download

Paper Nr: 125
Title:

Assessing the Quality of User-interface Modeling Languages

Authors:

Francisco Morais and Alberto Rodrigues da Silva

Abstract: Model-Driven Development (MDD) is an approach that considers models as first-class elements in the context of software development. Since there are so many modeling languages, there is a need to compare them and choose the best one for each concrete situation. The selection of the most appropriate modeling language may influence the quality of the output, whether it is only a set of models or software. This paper introduces ARENA, a framework for evaluating the quality and effectiveness of modeling languages. We then apply ARENA to a specific subset of user-interface modeling languages (namely UMLi, UsiXML, XIS and XIS-Mobile), taking into account some of their characteristics and the influence they have when models are generated.
Download

Paper Nr: 145
Title:

Mapping Formal Results Back to UML Semi-formal Model

Authors:

Vinícius Pereira, Luciano Baresi and Márcio E. Delamaro

Abstract: UML is a widely used modeling language with a semi-formal notation that supports software developers with a set of modeling rules, without requiring expertise in formal methods. This semi-formalism encourages the use of UML in the Software Engineering domain because the software engineers involved can understand UML diagrams easily. In contrast, formal methods are more precise than UML, and their formal models offer stronger correctness guarantees than UML models. Because of this, over the years researchers have sought ways to assign a formal semantics to UML. Usually they focus on how to formalize UML diagrams, transform them into formal models (such as LISP) and use them in model checkers. However, few works discuss how to present the formal results to an audience that has no knowledge of formal methods. To address this problem, this paper presents a mapping that correlates the formal results with the UML semi-formal environment, allowing the developer to analyze the results without advanced knowledge of formal methods. We hope that this work may contribute to increased adoption of formal methods in the software development industry.
Download

Paper Nr: 167
Title:

Metrics to Support IT Service Maturity Models - A Systematic Mapping Study

Authors:

Bianca Trinkenreich, Gleison Santos and Monalessa Perini Barcellos

Abstract: Background: Maturity models for IT services, such as CMMI-SVC and MR-MPS-SV, require the identification of critical business processes and the definition of relevant metrics to support decision-making, but give no clear direction or strict suggestion about which those processes and metrics should be. Aims: We aim to identify adequate metrics to be used by organizations deploying IT service maturity models, and the relationship between those metrics and the processes of IT service maturity models or standards. The research questions are: (i) Which metrics are being suggested for IT service quality improvement projects? (ii) How do they relate to IT service maturity model processes? Method: We defined and executed a systematic mapping review protocol. A specialist in systematic mapping reviews and IT service maturity models evaluated the protocol and its results. Results: Of 114 relevant studies, 13 addressed the research questions. All of them presented quality metrics, but none presented tools or techniques for metric identification. Conclusions: We identified 133 metrics, 80 of them related to specific process areas of service maturity models. Although this is a broad result, not all aspects of the models were considered in this study.
Download

Paper Nr: 184
Title:

Bridging the Gap between a Set of Interrelated Business Process Models and Software Models

Authors:

Estrela Ferreira Cruz, Ricardo J. Machado and Maribel Yasmina Santos

Abstract: A business process model identifies the activities, resources and data involved in the creation of a product or service, and thus contains much useful information for starting to develop a supporting software system. With regard to software development, one of the most difficult and crucial activities is the identification of system functional requirements. A popular way to capture and describe those requirements is through UML use case models. Usually an organization deals with several business processes; as a consequence, a software product does not usually support only one business process, but rather a set of business processes. This paper presents an approach that aggregates into one use case model all the information that can be extracted from the set of business process models that will be supported by the software under development. The generated use case model serves as a basis for the software development process, helping to reduce the time and effort spent on requirements elicitation. The approach also helps to ensure the alignment between business and software, and enables traceability between business processes and the corresponding elements in software models.
Download

Paper Nr: 185
Title:

On the Modelling of the Influence of Access Control Management to the System Security and Performance

Authors:

Katarzyna Mazur, Bogdan Ksiezopolski and Adam Wierzbicki

Abstract: To facilitate the management of permissions in complex secure systems, the concept of reference models for role-based access control (RBAC) has been proposed. However, among the many existing RBAC analyses and implementations, an evaluation of its impact on overall system performance is still lacking. In this paper, to reduce this deficiency, we introduce an initial approach towards estimating the influence of this most common access control mechanism on system efficiency. Modelling RBAC in the Quality of Protection Modelling Language (QoP-ML), we analyse a real enterprise business scenario and report the obtained results, focusing on time and resource consumption.
Download
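For readers unfamiliar with RBAC, the structure whose management overhead the paper analyses can be sketched minimally as follows. This is an illustrative Python sketch, not the paper's QoP-ML model; all role, user and permission names are invented.

```python
# Minimal RBAC sketch: users are assigned roles, roles grant permissions.
ROLE_PERMS = {
    "clerk": {"invoice:read"},
    "manager": {"invoice:read", "invoice:approve"},
}
USER_ROLES = {"alice": {"manager"}, "bob": {"clerk"}}

def allowed(user, perm):
    """A user is allowed a permission if any of their roles grants it."""
    return any(perm in ROLE_PERMS[r] for r in USER_ROLES.get(user, ()))

print(allowed("alice", "invoice:approve"))  # True
print(allowed("bob", "invoice:approve"))    # False
```

Each access check walks the user's role set, which is exactly the kind of indirection whose runtime cost the paper sets out to quantify.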

Paper Nr: 199
Title:

A DSL for Configuration Management of Integrated Network Management System

Authors:

Rosangela Pieroni and Rosângela Aparecida Dellosso Penteado

Abstract: A network management system that takes all elements into consideration, regardless of the network technology, is one of the requests most emphasized by telecommunication companies. However, developing such a system is not a trivial task. Furthermore, a software development process based on source code makes the task even more complex and requires great effort from developers to update code and maximize the reuse of software artifacts when inserting a new network technology. In this paper, we propose a DSL approach for specifying new network technologies in an integrated network management system developed by a real company. An experiment was conducted, following all the steps proposed by Wohlin (Wohlin et al., 2000), to compare our DSL approach with class specialization with respect to time and the number of errors inserted in the source code. Although the time spent developing the application with the two approaches was not statistically different, the other results, such as automatically generated code free of errors and the participants’ comments on the ease of use of the DSL, encourage the development of new DSLs for other functions of the integrated network management system.
Download

Paper Nr: 201
Title:

Knowledge Management Practices in GSD - A Systematic Literature Review Update

Authors:

Romulo de Aguiar Beninca, Elisa Hatsue Moriya Huzita, Edwin Vladimir Cardoza Galdamez, Gislaine Camila Lapasini Leal, Renato Balancieri and Yoji Massago

Abstract: Software development is an activity that makes intensive use of knowledge. The reduction of face-to-face communication in Global Software Development (GSD) environments makes the use of Knowledge Management in these environments increasingly important, which is carried out through Knowledge Management practices. This study presents an update of a systematic review of Knowledge Management practices in GSD. The main contribution of this study is the identification of additional practices, including the sharing of a "social conscience", which gives people the ability to identify themselves within the work context, improving interaction, the performance of activities and also trust between individuals.
Download

Paper Nr: 204
Title:

A UML-based Approach for Multi-scale Software Architectures

Authors:

Ilhem Khlif, Mohamed Hadj Kacem, Ahmed Hadj Kacem and Khalil Drira

Abstract: Multi-level software architecture design is an important issue in software engineering. Several research studies have addressed the modeling of multi-level architectures based on UML. However, they neither included the refinement between levels nor clarified the relationships between them. In this paper, we propose a multi-scale modeling approach for multi-level software architectures, oriented towards facilitating adaptability management. The proposed design approach is founded on UML notations and uses component diagrams. The diagrams are submitted to vertical and horizontal transformations for refinement; this is done to reach a fine-grain description that contains the necessary details characterizing the architectural style. The intermediate models provide a description at a given abstraction level that allows validation to be conducted significantly while remaining tractable with respect to complexity. The validation scope can involve intrinsic properties ensuring the model's correctness with respect to the UML specification. To achieve this, we propose a set of model refinement rules. The rules manage the refinement and abstraction process (vertical and horizontal) as a model transformation from a coarse-grain description to a fine-grain description. Finally, we experimented with our approach by modeling an Emergency Response and Crisis Management System (ERCMS) as a case study.
Download

Paper Nr: 224
Title:

Improving Software Design Decisions towards Enhanced Return of Investment

Authors:

Pedro Valente, David Aveiro and Nuno Nunes

Abstract: One outstanding issue in modern information systems development is the Return on Investment (ROI) of supporting Business Processes (BPs) through in-house development and/or integration of business modules from component-based development. Software solutions to this problem are usually based on internal software development processes, where an inadequate decision, e.g. choosing the wrong software framework, may lead to losses ranging from minor budget adjustments to financially catastrophic situations. Here we propose to use information from the analysis of BP metrics to enhance decisions related to software design, based on software development effort estimation for the new enhancement and the related ROI, as a path to consistently raise project success. This paper frames a solution based on Software Process Improvement (SPI), Enterprise Engineering (EE) and Software Engineering (SE) to enhance ROI through better design decisions, and provides in-depth considerations regarding our future work.
Download

Paper Nr: 235
Title:

Metrics to Support IT Service Maturity Models - A Case Study

Authors:

Bianca Trinkenreich and Gleison Santos

Abstract: Background: Maturity models for IT services require proper identification of critical business processes and definition of relevant metrics to support decision-making, but there is no clear direction about what those critical business processes and metrics should be. Aims: This is part of research in progress concerning the identification of adequate metrics to be used by organizations deploying IT service maturity models. We previously conducted a systematic mapping study to answer: (i) What metrics are being suggested for IT service quality improvement projects? and (ii) How do they relate to IT service maturity model processes? In this paper, we aim to answer new research questions: (iii) What kind of relationship exists between the processes that appear in derived metrics involving more than one process? (iv) Which of the metrics suggested in the literature are being used by organizations? Method: We conducted a case study in industry. Results: From the relationships found among the mapping study metrics, we analysed those used by an organization with available data, but could not find evidence of a correlation between them, even though they are related. However, as a result of this analysis, we confirmed the need to evaluate IT services through multiple metrics, or to define metrics in such a way that the same metric can present different aspects of IT service management, in order to provide a comprehensive view of the organization's scenario.
Download

Paper Nr: 240
Title:

TCG - A Model-based Testing Tool for Functional and Statistical Testing

Authors:

Laryssa Lima Muniz, Ubiratan S. C. Netto and Paulo Henrique M. Maia

Abstract: Model-based testing (MBT) is an approach that takes the software specification as the basis for creating a formal model from which test cases can be extracted. Depending on the type of model, an MBT tool can support functional and statistical tests. However, few tools support both testing techniques, and those that do offer a limited number of coverage criteria. This paper presents TCG, a tool for the generation and selection of functional and statistical test cases. It provides 8 classic generation techniques and 5 selection heuristics, including a novel one called minimum probability of path.
Download

Paper Nr: 253
Title:

The Impact of Lean Techniques on Factors Influencing Defect Injection in Software Development

Authors:

Rob J. Kusters, Fabian M. Munneke and Jos J. M. Trienekens

Abstract: In this paper we focus on the impact that lean may have in preventing the injection of defects. We research the impact of a number of lean techniques on defect injection factors. Data have been obtained from a single large Dutch governmental organization that has been using lean techniques routinely for more than three years. To investigate the impact of lean on defect injection, we developed a survey focused on the perceptions of the software developers of this organisation. The results suggest that the link between lean techniques and factors influencing defect injection is real, and that it explains to a certain extent the positive impact of the usage of lean techniques on software productivity.
Download

Paper Nr: 255
Title:

Building a Community Cloud Infrastructure for a Logistics Project

Authors:

Maria Teresa Baldassarre, Nicola Boffoli, Danilo Caivano, Gennaro del Campo and Giuseppe Visaggio

Abstract: Cloud computing is increasingly adopted as an infrastructure for providing service-oriented solutions. Such a solution is especially critical when software and hardware resources are remotely distributed. In this paper we illustrate our experience in designing the architecture of a community cloud infrastructure in an industrial project related to integrated logistics (LOGIN) for Made in Italy brand products. The cloud infrastructure has been designed with particular attention to aspects such as virtualization, server consolidation and business continuity.
Download

Paper Nr: 257
Title:

Integrating User Stories and i* Models - A Systematic Approach

Authors:

Marcia Lucena, Celso Agra, Fernanda Alencar, Eduardo Aranha and Aline Jaqueira

Abstract: User stories are a common way to describe requirements in Agile methods. However, the use of user stories is limited, since they offer only a restricted view of the whole system. In contrast, one of the features of the i* framework is that it provides a visual representation of the actors involved in a system and the goals to be met. This allows for a better understanding of the problem, as well as a better overview and evaluation of alternative solutions. In addition, i* models cover the early phases of requirements engineering, while user stories cover the later phases. In this context, this paper presents an approach to map user stories to i* models and vice versa, aiming to provide a bigger picture of the system as a whole. A case study evaluating this work is also presented, suggesting the viability of the approach.
Download

Paper Nr: 291
Title:

Analysis of Data Quality Problem Taxonomies

Authors:

Arturs Zogla, Inga Meirane and Edgars Salna

Abstract: There are many reasons to maintain high-quality data in databases and other structured data sources. High-quality data ensures better discovery, automated data analysis, data mining, migration and re-use. However, due to human errors or faults in the data systems themselves, data can become corrupted. In this paper, existing data quality problem taxonomies for structured textual data are analysed and several improvements are proposed. A new classification of data quality problems and a framework for detecting data errors, both with and without data operator assistance, are proposed.
Download

Paper Nr: 308
Title:

Return on Investment of Software Product Line Traceability in the Short, Mid and Long Term

Authors:

Zineb Mcharfi, Bouchra El Asri, Ikram Dehmouch, Asmaa Baya and Abdelaziz Kriouile

Abstract: Several works discuss tracing in Software Product Lines from technical and architectural points of view, proposing methods to implement traceability in the system. However, before discussing this field of traceability, we first need to prove the profitability of integrating such an approach in the Product Line. Therefore, in this paper we present a quantitative analysis of how traceability can impact the Return on Investment of a Software Product Line, and under which conditions, in terms of number of products and SPL phase, tracing can be profitable. We compare the results of a generic Software Product Line estimation model, COPLIMO, with our model, METra-SPL. Our analysis shows that introducing traceability costs when constructing the Product Line, but can be profitable in the long term, especially in the maintenance phase, starting from as few as 2 products to generate.
Download

Paper Nr: 313
Title:

A Study on the Usage of Smartphone Apps in Fire Scenarios - Comparison between GDACSmobile and SmartRescue Apps

Authors:

Parvaneh Sarshar, Vimala Nunavath and Jaziar Radianti

Abstract: In this paper, we present a thorough overview of two recently developed applications in the field of emergency management. The applications, GDACSmobile and SmartRescue, rely on a mobile app and on smartphone sensors as their main functionality, respectively. Furthermore, we discuss the differences and similarities of the two applications and highlight their strengths and weaknesses. Finally, a critical scenario of a fire emergency at a music festival is designed, and the applicability of the features of each application in supporting the emergency management procedure is discussed. We also discuss how the aforementioned applications can support each other during emergencies and what potential collaboration between them could look like.
Download

Paper Nr: 342
Title:

BlueKey - A Bluetooth Secure Solution for Accessing Personal Computers

Authors:

Aziz Barbar and Anis Ismail

Abstract: A major realm of security breaches for today’s users is unauthorized access, modification or sometimes forgery of critical business or user information. Existing computer locking/unlocking methods serve as an intermediate barrier against unethical deeds. The proposed solution, BlueKey, is a software-based solution installed on Personal Computers (PCs) that safely unlocks the PC by securing the Bluetooth communication channel between the user’s mobile device and his/her PC. BlueKey spares end-users from typing their passwords every time they need to access their PCs. At the same time, the solution includes a mobile application that allows the owner to fully control his/her PC via Bluetooth, and runs a breach detector module with safety measures to protect the PC while it is locked. At the technical level, BlueKey is a platform-free application written in the Java programming language, fulfilling the Write Once Run Anywhere (WORA) concept. The system is built using the Java Development Kit (JDK) and runs on the Java Virtual Machine (JVM) with the Java Runtime Environment (JRE). Alongside it, the mobile application is developed using Java 2 Micro Edition (J2ME), which is compatible with Android, Symbian, and BlackBerry operating systems.
Download

Paper Nr: 17
Title:

System of Localisation of the Network Activity Source in APCS Data Lines

Authors:

D. M. Mikhaylov, S. D. Fesenko, Y. Y. Shumilov, A. V. Zuykov, A. S. Filimontsev and A. M. Tolstaya

Abstract: An automated control system (ACS) is a complex engineering system covering virtually all spheres of industrial production support. The rapid advent of ACS leads to a fast growth of threats aimed at obtaining control over such systems. An ACS intrusion may lead to privacy violations, equipment malfunction, loss of time in business processes, and may even endanger people's lives. This paper proposes a hardware-software complex, ‘Shield’, ensuring comprehensive information security of automated control systems, mainly focusing on its hardware part. The system providing localisation of the network activity source in ACS data lines is described, as well as its operational principle and main specifications. As the paper deals with a hardware-software complex, an efficiency comparison of the ‘Shield’ software part with its nearest analogues is presented. The hardware design of ‘Shield’ is now in its final stage, so testing results on its performance effectiveness are not provided in this paper.
Download

Paper Nr: 76
Title:

Checklist-based Inspection of SMarty Variability Models - Proposal and Empirical Feasibility Study

Authors:

Ricardo T. Geraldi, Edson Oliveira Jr., Tayana Conte and Igor Steinmacher

Abstract: Software inspection is a particular type of software review that can be applied to all life-cycle artifacts and follows a rigorous, well-defined defect detection process. The existing literature defines several inspection techniques for different domains; however, none of them targets the inspection of product-line UML variability models. This paper proposes SMartyCheck, a checklist-based software inspection technique for product-line use case and class variability models according to the SMarty approach. In addition, it presents and discusses the empirical feasibility of SMartyCheck based on feedback from several experts. The study provides evidence of SMartyCheck's feasibility, as well as input to improve it, forming a body of knowledge for planning prospective empirical studies and for the automation of SMartyCheck.
Download

Paper Nr: 93
Title:

A Spatial Data Infrastructure Review - Sorting the Actors and Policies from Enterprise Viewpoint

Authors:

Italo Lopes Oliveira and Jugurta Lisboa-Filho

Abstract: The Commission on Geoinformation Infrastructures and Standards of the International Cartographic Association (ICA) has proposed a model based on five perspectives to describe Spatial Data Infrastructures (SDIs) using the Reference Model for Open Distributed Processing (RM-ODP) framework. This model was later extended by other researchers to describe the hierarchical relationships among SDIs and the interactions related to the policies of an SDI, using RM-ODP elements for these descriptions. However, the elements initially proposed by the ICA and the extended elements differ both in terminology and in semantics. This paper proposes unifying these elements, more precisely the actors and policies of the Enterprise Perspective proposed in the ICA model and its extensions, in order to create a single model to describe SDIs, thus guaranteeing a common language when designing an SDI and facilitating knowledge sharing among designers.
Download

Paper Nr: 217
Title:

Bug Prediction for an ATM Monitoring Software - Use of Logistic Regression Analysis for Bug Prediction

Authors:

Ozkan Sari and Oya Kalipsiz

Abstract: Software testing, which is carried out to eliminate software defects, is one of the key activities for achieving software quality. However, testing every fragment of the software is impossible, and defects still occur even after several detailed test activities. Therefore, there is a need for effective methods to detect bugs in software. It is possible to detect faulty portions of the code earlier by examining the characteristics of the code. Serving this purpose, bug prediction activities help to detect the presence of defects as early as possible in an automated fashion. As part of an ongoing thesis study, we aim to develop an effective model to predict which software entities contain bugs. A public bug database and the source code of an ATM monitoring software system are used to create the model and to evaluate the performance of the study.
Download
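As background on the general technique named in the title, logistic regression scores a code entity's defect-proneness with a logistic function over its metrics. The sketch below is purely illustrative: the metric names and weights are invented, not taken from the paper, and in practice the weights would be fitted on a labelled bug dataset.

```python
import math

# Hypothetical, hand-picked coefficients over common code metrics
# (lines of code, cyclomatic complexity, recent churn).
WEIGHTS = {"loc": 0.004, "complexity": 0.25, "churn": 0.02}
BIAS = -3.0

def bug_probability(metrics):
    """Logistic (sigmoid) score in (0, 1): higher means more defect-prone."""
    z = BIAS + sum(WEIGHTS[k] * metrics[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

module = {"loc": 500, "complexity": 12, "churn": 40}
p = bug_probability(module)
print(f"predicted defect probability: {p:.2f}")
```

Ranking modules by such a score lets testers direct effort to the likeliest defect sites first, which is the use case the abstract describes.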

Paper Nr: 285
Title:

Dynamic Large Scale Product Lines through Modularization Approach

Authors:

Asmaa Baya, Bouchra El Asri, Ikram Dehmouch and Zineb Mcharfi

Abstract: Software product lines (SPLs) now face major scalability problems because of the technical advances of the past decades. However, using traditional software engineering approaches to deal with this increasing scalability is not feasible. Therefore, new techniques must be provided to resolve scalability issues. For this purpose, we propose in this paper a modularization approach along two dimensions: in the first dimension, we use the Island algorithm to obtain structural modules; in the second dimension, we decompose the obtained modules according to feature binding time so as to obtain dynamic sub-modules.
Download

Paper Nr: 292
Title:

Analysis of Knowledge Management and E-Learning Integration Approaches

Authors:

Janis Judrups

Abstract: The development of Knowledge Management (KM) and E-Learning (EL) naturally brings both disciplines closer and encourages integration. An assessment of integration possibilities revealed a number of conceptual, technological, organizational and content barriers that interfere with integration; an organization that deals with them will increase quality, convenience, diversity and effectiveness. The use of KM and EL as equal disciplines is called an integration approach, while using one of them as support for the other is described as an adoption approach. KM and EL integration may be based on their common ground: learning. A SWOT analysis was performed to summarize the integration possibilities.
Download

Paper Nr: 305
Title:

VisMinerTD - An Open Source Tool to Support the Monitoring of the Technical Debt Evolution using Software Visualization

Authors:

Thiago S. Mendes, Daniel A. Almeida, Nicolli S. R. Alves, Rodrigo O. Spínola, Renato Novais and Manoel Mendonça

Abstract: Software development and maintenance activities can be negatively impacted by the presence of technical debt. One of its consequences is a decrease in software quality. In order to produce better software, the evolution of technical debt needs to be monitored. However, this is not a trivial task, since it usually requires the analysis of large amounts of data and different types of debt. The areas of metrics and software visualization can be used to facilitate the monitoring of technical debt. This paper presents an open source tool called VisMinerTD that uses software metrics and visualization to support developers in software comprehension activities, including the identification and monitoring of technical debt. VisMinerTD brings a new perspective to the hard work of identifying and monitoring technical debt evolution in software projects. Moreover, the user can easily plug in new metrics and new visual metaphors to address specific technical debt identification and monitoring activities.
Download

Paper Nr: 318
Title:

Semantically Enriching the Detrending Step of Time Series Analysis

Authors:

Lucélia de Souza, Maria Salete Marcon Gomes Vaz and Marcos Sfair Sunye

Abstract: In time series analysis, trend extraction (detrending) is considered a relevant preprocessing step, in which nonstationary time series are transformed into stationary ones, that is, free of trends. Trends are time series components that need to be removed because they can hide other phenomena, causing distortions in further processing. To help researchers decide how and how often time series should be detrended, the main contribution of this paper is the semantic enrichment of this step through the Detrend Ontology (DO prefix), designed in a modular way through the reuse of ontological resources, which are extended to model the statistical methods applied for detrending in the time domain. The ontology was evaluated by experts and ontologists, and validated by means of a case study involving real-world photometric time series. We describe its extensibility to methods in the time-frequency domain, as well as the association, when applicable, of instances with linked open data from the DBpedia semantic knowledge base. The result is the semantic enrichment of a relevant step of the analysis, contributing to scientific knowledge generation in the several areas that analyze time series.
Download
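As a concrete illustration of what detrending does, the sketch below removes a fitted linear trend from a series, one of the standard time-domain methods of the kind the ontology models. It is a generic least-squares example, not the paper's ontology or its case-study data.

```python
def detrend_linear(y):
    """Subtract the least-squares straight line from y, returning residuals."""
    n = len(y)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(y) / n
    # Ordinary least-squares slope and intercept of y against time index.
    slope = sum((x - x_mean) * (v - y_mean) for x, v in zip(xs, y)) / \
            sum((x - x_mean) ** 2 for x in xs)
    intercept = y_mean - slope * x_mean
    return [v - (intercept + slope * x) for x, v in zip(xs, y)]

# A purely linear series detrends to (numerically) zero residuals.
print(detrend_linear([1.0, 2.0, 3.0, 4.0]))
```

After this step the residual series is free of its linear trend, so phenomena the trend was masking become visible to further analysis.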

Paper Nr: 326
Title:

A Tool for the Analysis of Change Management Processes in Software Development Cycles

Authors:

Mario Pérez, Álvaro Navas, Hugo A. Parada and Juan C. Dueñas

Abstract: Change management process theory specifies the life cycle of a change through an organization. It is a well-known process present in day-to-day operations, with up to hundreds of changes passing through its phases each day. There is a broad range of tools that help keep track of each of those changes. However, the use of these tools, and hence the process itself, is not always translated perfectly into an organization. Therefore, it is necessary to analyse how the process has been implemented and how to correct it. Change management systems often offer some degree of analysis, but it is either too limited or too obtuse. In this paper we present a tool that helps analyse the data gathered by these systems in order to detect bottlenecks and irregularities in a visual way, tailored to the temporal nature of the data.
Download

Paper Nr: 327
Title:

Systematic Mapping - Formalization of UML Semantics using Temporal Logic

Authors:

Vinícius Pereira and Marcio E. Delamaro

Abstract: Despite offering a wide variety of elements for the graphical representation of models, UML does not have a well-defined semantics. Therefore, over the years researchers have sought to assign some kind of formal semantics to UML. Objective: In this context, this paper seeks to gather evidence about the techniques for formalizing UML semantics available in the literature, particularly those using temporal logic. Method: For this purpose, we conducted a systematic mapping study based on searches of major electronic databases. Results: We explored 278 studies, of which we selected 13 for analysis. The overall picture they define is interesting, because it shows that the majority of studies deal with the formalization of only one type of UML diagram. Conclusion: Summing up, we found that the State Diagram is the most formalized diagram in the studies. It is difficult to find formalizations of three or more UML diagrams, perhaps because of the difficulty in ensuring the overlap between UML elements. Furthermore, the results can motivate new research into UML semantics, investigating and defining new tools and processes to assist software engineers.
Download

Paper Nr: 335
Title:

Mapping Textual Scenarios to Analyzable Petri-Net Models

Authors:

Edgar Sarmiento, Eduardo Almentero, Julio C. S. P. Leite and Guina Sotomayor

Abstract: With the growing use of user-oriented perspectives in requirements engineering, transforming requirements models into executable models is considered significant. One of the key elements in this perspective is the notion of scenarios; scenarios are used to describe specific behaviors of the application through a flow of events based on the user's perspective. Since scenarios are often stated in natural language, they have the advantage of being easy to adopt, but the requirements can then hardly be processed for further purposes such as analysis or test generation, partly because interactions among scenarios are rarely represented explicitly. In this work, we propose a transformation method that takes textual descriptions of scenarios as input and generates an equivalent Petri-Net model as output. The resulting Petri-Net model can be further processed and analyzed using Petri-Net tools to verify model properties, identify concurrency problems and optimize the input and output models. The feasibility of the proposed method is demonstrated on two examples using a supporting tool.
Download

Paper Nr: 337
Title:

Tracking Project Progress with Earned Value Management Metrics - A Real Case

Authors:

Maria Teresa Baldassarre, Nicola Boffoli, Danilo Caivano and Giuseppe Visaggio

Abstract: According to the Project Management Institute (PMI), project management consists of planning, organizing, motivating and controlling resources such as time and cost in order to produce products with acceptable quality levels. As such, project managers must monitor and control project execution, i.e. verify the actual progress and performance of a project with respect to the project plan and timely identify where changes must be made on both process and product. Earned Value Management (EVM) is a valuable technique for determining and monitoring the progress of a project, as it indicates performance variances based on measures related to work progress, schedule and cost information. This technique requires that a set of metrics be systematically collected throughout the entire project. As a consequence, for large and long projects, managers may encounter difficulties in interpreting all the information collected and using it for decision-making. To assist managers in this tedious task, in this paper we classify the EVM metrics into five conceptual classes and present an interpretation model that managers can adopt as a checklist for monitoring EVM values and tracking the project's progress. At this point of our research, the decision model has been applied during an industrial project to monitor project progress and guide the project manager's decisions.
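The core EVM indicators the abstract refers to follow standard textbook formulas; as a hedged illustration (the figures and variable names below are invented for the example, not taken from the paper's industrial case):

```python
# Sketch of the standard Earned Value Management (EVM) indicators.

def evm_metrics(bac, pct_planned, pct_complete, actual_cost):
    """Compute core EVM indicators from a Budget At Completion (BAC),
    planned and actual percent complete, and actual cost."""
    pv = bac * pct_planned        # Planned Value: budgeted cost of scheduled work
    ev = bac * pct_complete       # Earned Value: budgeted cost of work performed
    ac = actual_cost              # Actual Cost of work performed
    cv = ev - ac                  # Cost Variance (negative = over budget)
    sv = ev - pv                  # Schedule Variance (negative = behind schedule)
    cpi = ev / ac                 # Cost Performance Index
    spi = ev / pv                 # Schedule Performance Index
    eac = bac / cpi               # Estimate At Completion, assuming current CPI holds
    return {"PV": pv, "EV": ev, "AC": ac, "CV": cv, "SV": sv,
            "CPI": cpi, "SPI": spi, "EAC": eac}

m = evm_metrics(bac=100_000, pct_planned=0.50, pct_complete=0.40, actual_cost=45_000)
print(m["CPI"], m["SPI"])  # both below 1.0: over budget and behind schedule
```

A manager reading such values would classify this project as both over budget (CPI < 1) and behind schedule (SPI < 1), which is the kind of interpretation the paper's checklist model supports.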
Download

Area 4 - Software Agents and Internet Computing

Full Papers
Paper Nr: 30
Title:

Conceptual Interoperability Barriers Framework (CIBF) - A Case Study of Multi-organizational Software Development

Authors:

Llanos Cuenca, Andrés Boza, Angel Ortiz and Jos J. M. Trienekens

Abstract: This paper identifies conceptual barriers to enterprise interoperability and classifies them along interoperability levels of concern. The classification is based on the enterprise interoperability framework by Interop NoE and introduces the concepts of horizontal and vertical interoperability. From the initial classification a new conceptual interoperability barriers framework is proposed. The goal of the framework is to present generic conceptual barriers to interoperability and show where they are interrelated. The proposal has been validated in a case study of multi-organizational software development.
Download

Paper Nr: 174
Title:

Privacy-preserving Hybrid Peer-to-Peer Recommendation System Architecture - Locality-Sensitive Hashing in Structured Overlay Network

Authors:

Alexander Smirnov and Andrew Ponomarev

Abstract: Recommendation systems are widely used to mitigate the information overload of modern life. Most modern recommendation system approaches are centralized. Although centralized recommendations have some significant advantages, they also bear two primary disadvantages: the necessity for users to share their preferences, and a single point of failure. In this paper, an architecture for a collaborative peer-to-peer recommendation system with limited preference disclosure is proposed. Privacy in the proposed design is provided by the fact that exact user preferences are never shared together with the user identity. To achieve that, the proposed architecture employs locality-sensitive hashing of user preferences and an anonymized distributed hash table approach to peer-to-peer design.
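One common family of locality-sensitive hashes for preference vectors is the random-hyperplane (SimHash-style) scheme: similar users receive signatures that differ in few bits, so a peer can publish a signature without revealing its exact ratings. The function names and the choice of hash family below are illustrative assumptions, not the paper's implementation:

```python
# Random-hyperplane LSH over preference vectors.
import random

def lsh_signature(prefs, n_bits=64, seed=42):
    """One random hyperplane per bit; the bit records which side of the
    hyperplane the preference vector falls on. A shared seed lets all
    peers draw the same hyperplanes."""
    rng = random.Random(seed)
    sig = 0
    for _ in range(n_bits):
        plane = [rng.gauss(0, 1) for _ in prefs]
        side = sum(p * w for p, w in zip(prefs, plane)) >= 0
        sig = (sig << 1) | int(side)
    return sig

def hamming(a, b):
    """Number of differing bits between two signatures."""
    return bin(a ^ b).count("1")

alice = [5, 4, 0, 1, 5]   # item ratings
bob   = [4, 5, 0, 0, 4]   # similar taste to alice
carol = [0, 1, 5, 5, 0]   # opposite taste
# the similar pair should differ in far fewer bits than the dissimilar one
print(hamming(lsh_signature(alice), lsh_signature(bob)),
      hamming(lsh_signature(alice), lsh_signature(carol)))
```

Because the bit-difference rate approximates the angle between preference vectors, peers can locate like-minded users in the overlay by signature proximity alone.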
Download

Paper Nr: 183
Title:

An Approach to using a Laser Pointer as a Mouse

Authors:

Jeremiah Aizeboje and Taoxin Peng

Abstract: Modern technologies have evolved to present different ways users can interact with computers. Nowadays, computers and projectors are commonly used in teaching and presentations, in which the mouse and the USB wireless presenter are two of the main presentation devices. However, the USB wireless presenter, usually a laser pointer, cannot simulate the movement of a mouse, only the actions of the right and left arrow keys. This paper proposes a novel approach to allowing users to interact with a computer from a distance without the need for a mouse, using instead a laser pointing device, a projector and a web camera, by developing a novel screen detection method (based on a simple pattern recognition technique), a laser detection method, and an accuracy algorithm to control the accuracy of the movement of the mouse cursor. The test results confirmed that the laser pointer could be used to simulate the movement of the mouse, as well as mouse clicks, with very high accuracy. It could also potentially be used in a gaming environment.
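One simple way to realize the laser detection step the abstract mentions is to scan each camera frame for the brightest red-dominant pixel. The sketch below uses a tiny synthetic frame and a hand-picked threshold as stand-ins; it illustrates the general idea, not the authors' method:

```python
# Minimal laser-dot detection: find the strongest red-dominant pixel.

def detect_laser(frame, threshold=200):
    """frame: 2D grid of (r, g, b) tuples. Return (row, col) of the
    most red-dominant pixel scoring above threshold, or None."""
    best, best_score = None, threshold
    for y, row in enumerate(frame):
        for x, (r, g, b) in enumerate(row):
            score = r - (g + b) // 2     # red dominance, a simple heuristic
            if score > best_score:
                best, best_score = (y, x), score
    return best

frame = [[(30, 30, 30)] * 5 for _ in range(4)]   # dim background
frame[2][3] = (255, 40, 40)                      # the laser dot
print(detect_laser(frame))                       # -> (2, 3)
```

A real system would run this per camera frame (e.g. via OpenCV), map the detected pixel through the screen-detection homography, and move the cursor to the resulting screen coordinate.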
Download

Paper Nr: 211
Title:

Towards a Context-Aware Adaptation Approach for Transactional Services

Authors:

Widad Ettazi, Hatim Hafiddi, Mahmoud Nassar and Sophie Ebersold

Abstract: One goal of ubiquitous computing is to enable users to access services and perform transactions from any location, at any time and from any device. In a context-aware environment, transaction management is a critical task that requires dynamic adaptation to the changing context in order to provide service reliability and data consistency. In this paper, we propose a new approach for managing context-aware transactional services. We first discuss the various research efforts that have addressed this issue, and then propose a new model for the adaptability of context-aware transactional services, called CATSM (Context-Aware Transactional Service Model), along with the adaptation mechanisms to implement it.
Download

Paper Nr: 295
Title:

Smart Cities - An Architectural Approach

Authors:

André Duarte, Carlos Oliveira and Jorge Bernardino

Abstract: Smart cities are usually defined as modern cities with smooth information processes, facilitation mechanisms for creativity and innovativeness, and smart and sustainable solutions promoted through service platforms. With the objective of improving citizens' quality of life and making informed decisions quickly and efficiently, authorities try to monitor all information from city systems. Smart cities provide the integration of all systems in the city via a centralized command centre, which provides a holistic view of the city. As smart cities emerge, old systems already in place are trying to evolve to become smarter, although these systems have many specific needs that must be addressed. To suit the needs of such systems, the focus of this work is to gather viable information, analyse it, and present solutions that address their current shortcomings. In order to identify the most scalable, adaptable and interoperable architecture for the problem, existing architectures are analysed, as well as the algorithms that make them work. To this end, we propose a new architecture for smart cities.
Download

Short Papers
Paper Nr: 50
Title:

Dynamic Modeling of Twitter Users

Authors:

Ahmed Galal and Abeer El-Korany

Abstract: Social networks are popular platforms for users to express themselves, facilitate interactions, and share knowledge. Today, users in social networks have personalized profiles containing dynamic attributes that represent their interest and behavior over time, such as published content and location check-ins. Several models have emerged that analyze those profiles and their dynamic content in order to measure the degree of similarity between users. This similarity value can be further used in friend suggestion and link prediction. The main drawback of the majority of these models is that they rely on a static snapshot of attributes, which does not reflect the change in user interest and behavior over time. In this paper, a novel framework for modeling the dynamics of user behavior and measuring the similarity between user profiles on Twitter is proposed. In this framework, dynamic attributes such as topical interests and the locations associated with tweets are used to represent user interest and behavior, respectively. Experiments on a real dataset from Twitter showed that the proposed framework, which utilizes those attributes, outperformed multiple standard models that utilize a static snapshot of the data.
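A common way to make profiles dynamic, in the spirit of the framework described above, is to weight each observed interest by an exponential time decay before comparing profiles, so that recent behavior dominates. The weighting scheme and half-life below are assumptions for illustration, not the paper's model:

```python
# Time-decayed interest profiles compared with cosine similarity.
import math

def decayed_profile(events, now, half_life=30.0):
    """events: list of (topic, day_observed). Older events count less."""
    profile = {}
    for topic, day in events:
        w = 0.5 ** ((now - day) / half_life)   # exponential decay by age
        profile[topic] = profile.get(topic, 0.0) + w
    return profile

def cosine(p, q):
    """Cosine similarity between two sparse topic-weight dicts."""
    dot = sum(p[t] * q.get(t, 0.0) for t in p)
    na = math.sqrt(sum(v * v for v in p.values()))
    nb = math.sqrt(sum(v * v for v in q.values()))
    return dot / (na * nb) if na and nb else 0.0

u = decayed_profile([("sports", 1), ("tech", 90)], now=100)
v = decayed_profile([("tech", 95)], now=100)
print(cosine(u, v))   # high: both users' recent interest is "tech"
```

With a static snapshot, user u would look half "sports"; the decay makes the old sports activity nearly irrelevant, so the similarity reflects current behavior.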
Download

Paper Nr: 51
Title:

A Recommendation Engine based on Adaptive Automata

Authors:

Paulo Roberto Massa Cereda and João José Neto

Abstract: The amount of information available nowadays is huge and in a raw state; systems have to act proactively in selecting and presenting context-relevant information, but such a feature is time-consuming and exhaustive. This paper presents a recommendation engine based on an adaptive rule-driven device – namely, an adaptive automaton – as a lightweight, scalable alternative to the usual approaches to resource recommendation. The technique employed here is based on frequency analysis instead of relying on usual machine learning techniques.
Download

Paper Nr: 83
Title:

Managing Service Quality of Self-Service Technologies to Enhance e-Satisfaction in Digital Banking Context - The Roles of Technology Readiness and Perceived Value

Authors:

Sakun Boon-itt

Abstract: Perceived service quality, value, and customer satisfaction have long been regarded as among the most important research topics in the services marketing and service operations literature. Although self-service technologies (SSTs) are deliberately designed to improve quality and contain the information necessary to serve customer needs, the service quality of SSTs (SQ-SSTs) has not yet reached expected standards of performance. By integrating the self-service technology adoption and technology acceptance models, this study addresses SQ-SSTs by empirically testing a comprehensive model that predicts e-satisfaction in the context of digital banking in Thailand. The results show that technology readiness (TR) influences SQ-SSTs, which in turn improve e-satisfaction. The study also found that, even though SQ-SSTs can positively influence e-satisfaction, perceived value partially mediates the link between SQ-SSTs and e-satisfaction. The findings contribute to the information systems and services marketing literature by highlighting a key mechanism through which firms can enhance the service quality of self-service technologies and e-satisfaction. Managers may therefore particularly wish to consider technology readiness and customers' perceived value when offering SSTs.
Download

Paper Nr: 103
Title:

Proactive Domain Data Querying based on Context Information in Ambient Assisted Living Environments

Authors:

Vinícius Maran, Alencar Machado, Iara Augustin, Leandro Krug Wives and José Palazzo M. de Oliveira

Abstract: Ubiquitous computing defines a set of technologies to make computing omnipresent in real-life environments. In the area of ambient assisted living, ubiquitous technologies have been used to improve the quality of life and life expectancy of elderly people. Recently, research has shown that the use of context-awareness combined with proactive actions can make systems act more appropriately in assisting the user. In this paper, we present a new persistent and proactive data retrieval model for ambient assisted living systems. This model provides an architecture that is able to integrate information gathered from the user environment and considers the current user context to act in a proactive manner. The model was implemented as a service integrated in a Situation as a Service middleware and was applied in a case study for evaluation and validation.
Download

Paper Nr: 144
Title:

Building Coalitions of Competitors in the Negotiation of Multiparty e-Contracts through Consultations

Authors:

Anderson P. Avila-Santos, Jhonatan Hulse, Daniel S. Kaster and Evandro Baccarin

Abstract: This paper argues that software agents may build two kinds of coalitions in e-negotiation processes. The first is the typical one, in which the parties define roles, rights and guarantees before the negotiation starts. They act as a team: either the whole coalition succeeds in the negotiation or it fails. In the second kind, addressed by this paper, the coalition members are competitors. They collaborate by exchanging information before the negotiation, trying to align their strategies to some degree. Such collaboration only occurs because there is some particularity (e.g., nearness) that can optimise their business processes if most of the coalition members succeed in the negotiation. They aim at maximising their chances of success in the negotiation, but act solo. It is important to note that the main challenge in this scenario lies in the fact that the coalition members are not bound to the coalition: they may act within the negotiation differently from what they had agreed previously. This gives rise to the concept of fairness, which is discussed in this paper. The paper also argues that the materialisation of coalitions within a negotiation protocol fits better in a multiparty negotiation protocol; thus, it extends the SPICA Negotiation Protocol with so-called consultations. The paper presents a case study showing that consultations can be beneficial to suppliers, industry and consumers.
Download

Paper Nr: 165
Title:

Efficient Use of Voice as a Channel for Delivering Public Services

Authors:

Kapil Kant Kamal, Manish Kumar, Bharat Varyani and Kavita Bhatia

Abstract: Delivering information and services to citizens is a key task of government. It is the responsibility of the government to keep citizens informed and to deliver public services to them on a timely basis; this information is required for making critical decisions and forming opinions. For good governance and transparency, it is essential that services and information are delivered in a timely manner. Delivering information and services through conventional methods like paper forms and e-Forms is problematic in countries where a large section of the population is illiterate, so more efficient methods need to be employed for information sharing and data capture. With live human interaction and local language support, Interactive Voice Response Systems (IVRS) can be an effective channel through which data can be captured and information about services can be shared, even with the illiterate population. This paper discusses the issues involved in implementing an IVR system and making voice a channel for delivering services to citizens. It is based on an investigation of the potential of IVRS services, and it also discusses the real-time IVRS requirements for the successful implementation of government projects, and how IVR systems can increase acceptability, reduce citizens' query time, and make public delivery systems more efficient. We propose a nationwide single number for accessing all government services in the user's local language. Further, the paper includes a case study of the Department of Agriculture & Cooperation, Ministry of Agriculture, depicting how an IVR system has helped farmers. Such an IVRS may be replicated by other government departments wherever necessary, at the citizen's convenience.
Download

Paper Nr: 175
Title:

A Comparative Study of Two Egocentric-based User Profiling Algorithms - Experiment in Delicious

Authors:

Marie Françoise Canut, Manel Mezghani, Sirinya On-At, André Péninou and Florence Sèdes

Abstract: With the growing amount of social media content, the user needs more accurate information that reflects his interests. We focus on deriving the user's profile, and especially the user's interests, which are key elements for improving adaptive mechanisms in information systems (e.g. recommendation, customization). In this paper, we study two approaches to deriving a user's profile from egocentric networks: an individual-based approach and a community-based approach. As these approaches have previously been applied to a co-author network and have shown their efficiency, we are interested in comparing them in the context of social annotations, or tags. The motivation for using tagging information is that many studies have shown tags to be relevant for describing users' interests. The evaluation on the Delicious social database shows that the individual-based approach performs well when the semantic weight of the user's interests is given more consideration, and that the community-based approach performs better in the opposite case. We also take into consideration the dynamics of social tagging networks. To study the influence of time on the efficiency of the two profile derivation approaches, we applied a time-awareness method in our comparative study. The evaluation on Delicious demonstrates the importance of taking the dynamics of social tagging networks into account to improve the effectiveness of tag-based user profiling approaches.
Download

Paper Nr: 208
Title:

Can You Find All the Data You Expect in a Linked Dataset?

Authors:

Walter Travassos Sarinho, Bernadette Farias Lóscio and Damires Souza

Abstract: The huge volume of datasets available on the Web has motivated the development of a new class of Web applications, which allow users to perform complex queries on top of a set of predefined linked datasets. However, given the large number of available datasets and the lack of information about their quality, the selection of datasets for a particular application may become a very complex and time-consuming task. In this work, we argue that one possible way of helping the selection of datasets for a given application consists of evaluating the completeness of each dataset with respect to the data considered important by the application users. With this in mind, we propose an approach to assess the completeness of a linked dataset which considers a set of specific data requirements and can save large amounts of query processing. To provide a more detailed evaluation, we propose three distinct types of completeness: schema, literal and instance completeness. We present the definitions underlying our approach and some results obtained with the accomplished evaluation.
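The three completeness notions named in the abstract can be read as coverage ratios against the application's data requirements. The formulas, property names and sample records below are illustrative assumptions, not the paper's definitions:

```python
# Three simple completeness ratios over a dataset's schema, values and instances.

def schema_completeness(dataset_props, required_props):
    """Fraction of the required properties the dataset schema covers."""
    return len(required_props & dataset_props) / len(required_props)

def instance_completeness(instances, required_instances):
    """Fraction of the instances the application expects that are present."""
    return len(required_instances & instances) / len(required_instances)

def literal_completeness(records, required_props):
    """Average fraction of required properties with a non-null value per record."""
    filled = sum(
        sum(1 for p in required_props if rec.get(p) is not None)
        for rec in records
    )
    return filled / (len(records) * len(required_props))

required = {"name", "birthDate", "nationality"}
records = [
    {"name": "Ada", "birthDate": "1815-12-10", "nationality": None},
    {"name": "Alan", "birthDate": None, "nationality": "British"},
]
props_in_dataset = {"name", "birthDate", "nationality", "knownFor"}
print(schema_completeness(props_in_dataset, required))   # 1.0: schema covers all needs
print(literal_completeness(records, required))           # 4/6: some values are missing
```

A dataset can thus score perfectly at the schema level while still being a poor fit at the literal or instance level, which is why the paper distinguishes the three.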
Download

Paper Nr: 277
Title:

Towards a Novel Engine to Underlie the Data Transmission of Social Urban Sensing Applications

Authors:

Carlos Oberdan Rolim, Anubis Graciela de Moraes Rossetto, Valderi R. Q. Leithardt, Guilherme A. Borges, Tatiana F. M. dos Santos, Adriano M. Souza and Claudio Geyer

Abstract: Social urban sensing is a new paradigm which exploits human-carried or vehicle-mounted sensors to ubiquitously collect data for large-scale urban sensing. A challenge in such a scenario is how to transmit sensed data in situations where the networking infrastructure is intermittent or unavailable. In this context, this paper outlines the early stages of our research into a novel engine that uses the Opportunistic Networks paradigm to underlie the data transmission of social urban sensing applications. It applies situation awareness, neural networks and fuzzy logic to the routing and decision-making process. To the best of our knowledge, this is the first paper to use such approaches in the Smart Cities area with a focus on social sensing applications. As well as being original, the preliminary results from our simulations signal the way for further research in this area.
Download

Paper Nr: 290
Title:

Multi-payment Solution for Smartlet Applications

Authors:

G. Vitols, N. Bumanis, J. Smirnova, V. Salajevs, I. Arhipova and I. Smits

Abstract: Organizations from different fields show increasing interest in an effective solution that would allow integrating and combining services and products from different providers into a single mobile or smartcard application for easy and comfortable use by clients. Meeting these demands requires a service that can transform a developer's knowledge into a technological solution in the form of an application. We propose to solve these issues with smartlets, role distribution and an integrated payment pool for business services. The proposed integrated payment pool was used to design the Norvik Bank A-card product in Latvia, where multiple payment applications were integrated into a single smartcard.
Download

Paper Nr: 323
Title:

Towards a High Configurable SaaS - To Deploy and Bind a User-aware Tenancy of the SaaS

Authors:

Houda Kriouile, Zineb Mcharfi and Bouchra El Asri

Abstract: The user-aware tenancy approach integrates the flexibility of Rich-Variant Components with the high configurability of multi-tenant applications. Multi-tenancy is the notion of sharing instances among a large group of customers, called tenants, and is a key enabler for exploiting economies of scale in Software as a Service (SaaS) approaches. However, the ability of a SaaS application to be adapted to individual tenants' needs seems to be a major requirement. Thus, our approach proposes a more flexible and reusable SaaS system for multi-tenant SaaS applications using Rich-Variant Components. The approach consists of a user-aware tenancy for SaaS environments. In this paper, an algorithm is established to derive the necessary instances of the Rich-Variant Components building the application and to access them in a scalable and performant manner. The algorithm is based on fundamental concepts from graph theory.
Download

Paper Nr: 47
Title:

Temporal Constraint in Web Service Composition

Authors:

Bey Fella, Samia Bouyakoub and Abdelkader Belkhir

Abstract: Web service composition has been studied in many works and is at the heart of a great deal of research activity. However, the majority of this work does not take into account the temporal constraints imposed by the service provider and the users in the composition process. Incorporating temporal constraints in Web service composition results in a more complex model and makes the verification of temporal consistency crucial during modeling (at design time) and then during execution (at run time). In this paper, we present the H-Service-Net model for Web service composition with time constraints, and propose a modular approach for modeling composition with time constraints using the Extended Time Unit System (XTUS), Allen's interval algebra and comparison operators in a time Petri net model.
Download

Paper Nr: 207
Title:

How Can Semantics and Context Awareness Enhance the Composition of Context-aware Services?

Authors:

Tarik Fissaa, Hatim Guermah, Hatim Hafiddi and Mahmoud Nassar

Abstract: Context-aware services are applications that use so-called contextual information to provide appropriate services or relevant information to the user, or to other applications, in order to perform a specific task. An important challenge in context-aware service-oriented systems is the creation of a new service on demand to carry out more complex tasks through the composition of existing services. In this work, we propose a semantics-based architecture for the development of context-aware service composition using Artificial Intelligence (AI) planning. The straightforward translation between AI planning through PDDL and Semantic Web services via OWL-S makes it possible to automate the composition process: planning-based service composition launches a goal-oriented composition procedure to generate a composite-service plan corresponding to the user request.
Download

Paper Nr: 274
Title:

Evaluating Potential Improvements of Collaborative Filtering with Opinion Mining

Authors:

Manuela Angioni, Maria Laura Clemente and Franco Tuveri

Abstract: An integration of an Opinion Mining approach with a Collaborative Filtering algorithm has been applied to the Yelp dataset to improve the predictions through the information provided by user-generated textual reviews. The research, still in progress, bases the Opinion Mining approach on the syntactic analysis of textual reviews and on an initial polarity evaluation of the sentences. The predictions produced in this way were blended with the predictions coming from a Biased Matrix Factorization algorithm, obtaining interesting results in terms of Root Mean Squared Error (RMSE), with potential for enhancement. We intend to improve these results in a further phase of activity by including semantic disambiguation in the Opinion Mining approach and by using better criteria for evaluating the reviews, taking into account a set of 12 business aspects. The Opinion Mining approach will be evaluated by comparing its predictions with the values manually assigned by a small group of people to a sample of the same reviews.
Download

Area 5 - Human-Computer Interaction

Full Papers
Paper Nr: 130
Title:

A Wearable Face Recognition System Built into a Smartwatch and the Visually Impaired User

Authors:

Laurindo de Sousa Britto Neto, Vanessa Regina Margareth Lima Maike, Fernando Luiz Koch, Maria Cecília Calani Baranauskas, Anderson de Rezende Rocha and Siome Klein Goldenstein

Abstract: Practitioners usually expect that real-time computer vision systems such as face recognition systems will require hardware components with high processing power. In this paper, we present a concept showing that it is technically possible to develop a simple real-time face recognition system on a wearable device with low processing power – in this case an assistive device for the visually impaired. Our platform of choice here is the first-generation Samsung Galaxy Gear smartwatch. Running solely on the watch, without pairing to a phone or tablet, the system detects a face in the image captured by the camera and then performs face recognition (on a limited dictionary), emitting audio feedback that either identifies the recognized person or indicates that s/he is unknown. For the face recognition step we use a variation of the K-NN algorithm, which accomplished the task with high accuracy rates. This paper presents the proposed system and preliminary results of its evaluation.
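A distance-thresholded K-NN classifier in the spirit of the recognition step described above might look like the sketch below. The feature vectors, threshold and rejection rule are illustrative assumptions; the paper's feature extraction and exact K-NN variant are not shown:

```python
# K-NN over a small gallery, with a rejection threshold for unknown faces.
import math
from collections import Counter

def knn_predict(gallery, x, k=3, max_dist=2.0):
    """gallery: list of (feature_vector, label). Returns a label,
    or 'unknown' if the nearest neighbour is too far away."""
    dists = sorted(
        (math.dist(vec, x), label) for vec, label in gallery
    )[:k]
    if dists[0][0] > max_dist:          # nothing close enough -> reject
        return "unknown"
    votes = Counter(label for _, label in dists)
    return votes.most_common(1)[0][0]   # majority vote among the k nearest

gallery = [
    ([0.1, 0.2], "alice"), ([0.2, 0.1], "alice"),
    ([0.9, 0.8], "bob"),   ([0.8, 0.9], "bob"),
]
print(knn_predict(gallery, [0.15, 0.15]))   # alice
print(knn_predict(gallery, [5.0, 5.0]))     # unknown
```

The rejection branch is what lets an assistive device say "unknown person" rather than forcing every detected face onto the nearest dictionary entry.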
Download

Paper Nr: 156
Title:

Evaluating an Inspection Technique for Use Case Specifications - Quantitative and Qualitative Analysis

Authors:

Natasha M. Costa Valentim, Tayana Conte and José Carlos Maldonado

Abstract: Usability inspections in the early stages of the development process help reveal problems that can be corrected at a lower cost than at advanced stages of development. MIT 1 (Model Inspection Technique for Usability Evaluation) is a usability inspection technique which aims to anticipate usability problems through the evaluation of use cases. This technique was evaluated using a controlled experiment aimed at measuring its efficiency and effectiveness compared to the Heuristic Evaluation (HEV) method. According to the quantitative results, MIT 1 exceeded HEV in terms of effectiveness and obtained a similar performance in terms of efficiency. In other words, MIT 1 allows finding more problems than HEV, although the subjects spent more time finding these problems using MIT 1. Moreover, MIT 1 was considered easy to use and useful by the subjects of the study. We analysed the qualitative data using procedures from the Grounded Theory (GT) method, and the results indicate improvement opportunities.
Download

Paper Nr: 195
Title:

RockQuery - An Ontology-based Data Querying Tool

Authors:

Jose Lozano, Joel Carbonera, Marcelo Pimenta and Mara Abel

Abstract: Nowadays many petroleum companies are adopting different knowledge-based systems in order to improve reservoir quality prediction. In recent years, these systems have been adopting ontologies for representing domain knowledge. However, there are still challenges to overcome before geologists with different backgrounds can retrieve information without the help of an information technology expert. New terminology can be added to the ontology, making the user interaction cumbersome, especially for novice users. In this paper, we propose an approach that combines ontology views with Human-Computer Interaction (HCI) techniques for improving user interaction in computer applications, by reducing the overload of information the user must handle when performing tasks. We propose RockQuery, a new Visual Query System that applies our approach and presents to the user only the knowledge that is relevant for supporting the required query formulation. In addition, the interaction design of RockQuery includes data visualizations that help geologists make sense of the retrieved data. To test our approach, we evaluated the impact of using ontology views on the performance of users formulating queries.
Download

Paper Nr: 196
Title:

A Semiotic-informed Approach to Interface Guidelines for Mobile Applications - A Case Study on Phenology Data Acquisition

Authors:

Flavio Nicastro, Roberto Pereira, Bruna Alberton, Leonor Patrícia C. Morellato, Cecilia Baranauskas and Ricardo da S. Torres

Abstract: Portable devices have been experimented for data acquisition in different domains, e.g., logistics and census data acquisition. Nevertheless, their large-scale adoption depends on the development of effective applications with a careful interaction design. In this paper, we revisit existing interface design strategies and propose a guideline composed of semiotic-informed rules and questions for mobile user interface design. We demonstrate the use of the guideline in the evaluation of mobile application interfaces proposed for phenological data acquisition in the field.
Download

Paper Nr: 221
Title:

Location-sharing Model in Mobile Social Networks with Privacy Guarantee

Authors:

Tiago Antonio and Sergio Donizetti Zorzo

Abstract: Mobile social networks allow users to access, publish, and share information with friends, family, or groups of friends by using mobile devices. Location is one kind of information frequently shared. By using location-sharing on a social network, users allow service providers to register this information and use it to offer products and services based on the geographic area. Many users consider offers a personal gain, but for others, it causes concerns with security and privacy. These concerns can eliminate the use of mobile social networks. This paper presents a model of a mobile social network with a privacy guarantee. The model enables the user to set rules determining when, where, and with whom (friends or a group of friends) location information will be shared. Moreover, the model provides levels of privacy with anonymity techniques which hide the user’s high-accuracy current location before it is shared. To validate the model, a mobile social network prototype, MSNPrivacy (Mobile Social Network with Privacy), was developed for Android. Tests were carried out aiming to measure MSNPrivacy’s performance. The results verify that the rules and privacy levels in place provide an acceptable delay, and the model can be applied in real applications.
Download

Short Papers
Paper Nr: 53
Title:

A Perspective-based Usability Inspection for ERP Systems

Authors:

Joelma Choma, Diego Quintale, Luciana A. M. Zaina and Daniela Beraldo

Abstract: Inspection methods for evaluating the usability of ERP systems require more specific heuristics and criteria better suited to this field. This article proposes a set of heuristics based on the perspectives of presentation and task support, aiming to facilitate usability inspection of ERP systems, especially for novice inspectors. An empirical study was conducted to verify the efficiency and effectiveness of inspections conducted with the proposed heuristics. The results indicate efficiency and effectiveness in detecting problems, mainly in medium-fidelity prototypes of ERP modules.
Download

Paper Nr: 96
Title:

An Australian Ski Resort System

Authors:

Kayleigh Rumbelow, Peter Busch and Deborah Richards

Abstract: The aim of this system was to use and display existing ski access data in a new way to create business as well as social enhancement opportunities for resorts and their guests. Radio Frequency Identification (RFID) enabled passes were used as input mechanisms, captured by scanners on the snow at various locations. Each scan was stored in a relational database, and information extracted from it was shown to the user via a webpage. A comparative analysis of two major resorts, both of which are currently using RFID ticket technology, was used to assess what information was currently provided to guests and how it was delivered. This analysis was used to identify areas for future growth and development of an improved system. The use of these services was often more of an after-ski activity than something done during skiing (Jambon and Meillon, 2009). The improvement described herein allowed the user display to operate on a delay rather than instantaneously. The significance of this improved solution is that it enabled a resort to differentiate itself from competitors. An alternative data display is presented, detailing the technologies employed and additional functionality that could be explored.
Download

Paper Nr: 141
Title:

Automatic Generation of LIBRAS Signs by Graphic Symbols of SignWriting

Authors:

Carlos Eduardo Andrade Iatskiu, Laura Sánchez García and Rafael dos Passos Canteri

Abstract: The Brazilian Sign Language is the natural language used by Deaf people in Brazil to communicate among themselves and with society, and it is part of their culture and tradition. Providing the Deaf community with access to communication, information and knowledge (creation) is just one of the motivations for a written record of Brazilian Sign Language. This paper presents some hypotheses for the low usage of computational tools for recording sign languages and proposes a new way to generate graphic records in Brazilian Sign Language through the SignWriting system, assisting Deaf individuals in the exercise of their full citizenship.
Download

Paper Nr: 157
Title:

Evaluating HCI Design with Interaction Modeling and Mockups - A Case Study

Authors:

Adriana Lopes, Anna Beatriz Marques, Simone Diniz Junqueira Barbosa and Tayana Conte

Abstract: Interactive systems are increasingly present in daily life, but many people still face difficulties in using them. We believe that using models and artifacts to represent the interaction in a systematic way during systems design may prevent such difficulties. In this paper, we investigate the combined use of MoLIC, an interaction modeling language, with user interface mockups. While both artifacts are supposed to promote the understanding of user goals and the designer’s reflection on alternative solutions and decisions regarding the interaction, we have not found evidence of the impact of their usage on quality. Thus, this paper presents an experimental study on the joint usage of MoLIC interaction diagrams and mockups during systems design, aiming both to identify participants’ perceptions of the joint use of the two artifacts and to analyze the quality of the generated artifacts by observing which types of defects occur. The results show that, although some participants found MoLIC diagrams not very easy to build, most participants considered the creation of mockups based on MoLIC diagrams useful. In addition, the number of defects found in the MoLIC diagrams points to the need for techniques to evaluate the artifact before proceeding with the design process.
Download

Paper Nr: 177
Title:

Using a Study to Assess User eXperience Evaluation Methods from the Point of View of Users

Authors:

Luis Rivero and Tayana Conte

Abstract: User eXperience (UX) refers to a holistic perspective and an enrichment of traditional quality models with non-utilitarian concepts, such as fun, joy, pleasure or hedonic value. To evaluate UX, several methods have been proposed, ranging from questionnaires to biometrics for evaluating users’ emotions. However, few of these UX evaluation methods are comfortable or easy to use from the point of view of users. This paper presents a study in which 10 users applied the Expressing Emotions and Experiences (3E) and EmoCards methods. While 3E provides a template for reporting the experience, EmoCards provide a set of cards illustrating emotions as supporting material. We analyzed the features that make these methods easy or difficult for users to employ, the users’ preference, and the number of identified problems. Besides showing an application example of the methods to aid software practitioners in future evaluations, we found that EmoCards allowed users to identify more problems, but 3E was preferred due to its ease of use and the freedom it gives when describing an emotion and its causes.
Download

Paper Nr: 179
Title:

Modeling NFC-triggered User Interactions with Simple Services in a Smart Environment

Authors:

Antonio P. Volpentesta and Nicola Frega

Abstract: NFC is an emerging wireless technology that enables users to interact with smart objects in a smart environment. NFC applications have been developed to provide services such as ticketing, access control, tourism information extension, voucher redemption and contactless payment. The interaction technique is a sort of “tap-and-go”, as currently employed in smartcard usage for travel operations and workspace access/logging. Employing a recently introduced framework for human interaction with mobiquitous services, we present a model of NFC-triggered user interactions with simple context-aware services in a smart environment. The rationale is to provide a conceptual tool both for appropriate communication among NFC ecosystem stakeholders and for the design of NFC app interfaces with generic applicability. Lastly, we discuss the application of the model in a project that required the design of NFC-based interactions with services for car parking management in a city area.
Download

Paper Nr: 180
Title:

Integrating the Usability into the Software Development Process - A Systematic Mapping Study

Authors:

Williamson Silva, Natasha M. Costa Valentim and Tayana Conte

Abstract: With the increasing use of interactive applications, which are ever more present in daily life, there is a need for higher-quality development and for good interaction that facilitates use by end users. It is therefore necessary to include usability, one of the important quality attributes, in the development process in order to obtain good acceptance rates and, consequently, improve the quality of these applications. In this paper we present a Systematic Mapping Study (SM) that helps categorize and summarize technologies that have been used to improve usability. The results of our SM show some technologies that can help improve usability in various applications, and identify gaps that still need to be researched. We found that most technologies have been proposed for the Testing phase (67.28%) and that Web applications are the most evaluated type of application (52.65%). We also identified that few technologies assist designers in improving usability in the early stages of the development process (13.50% for the Analysis phase and 15.95% for the Design phase). The results of this SM allow observing the state of the art regarding technologies that can be integrated into the development process to improve the usability of interactive applications.
Download

Paper Nr: 189
Title:

Understanding Game Modding through Phases of Mod Development

Authors:

Satyam Agarwal and Priya Seetharaman

Abstract: Game modding has been rapidly emerging as a source of competitive advantage in the gaming industry. While gaming companies are increasingly focusing on establishing modder communities, very little is known about the process of modding itself. In this paper, we analyze the activities of mod developers on mod distribution websites and their interactions with mod users. The theoretical lens of meta-structuring of technology use mediation helps us understand the phases of mod development. The phases relate to the activities that gamers and modders perform in order to maximize the game-play experience and the usage of the mods, respectively. We believe that these phases are an integral part of mod development and can be used to establish appropriate support infrastructure to nurture modder communities. The paper concludes with implications for gaming firms and modding communities, along with potential for further research in the area.
Download

Paper Nr: 232
Title:

Video Games in Education of Deaf Children - A Set of Practical Design Guidelines

Authors:

Rafael dos Passos Canteri, Laura Sánchez García, Tânya Amara Felipe de Souza and Carlos Eduardo Andrade Iatskiu

Abstract: Deaf communities are largely unsupported in terms of assistive technology. These communities have many special needs in terms of Education, Communication and Leisure which, most of the time, are not met. A great variety of studies attest to the benefits that educational video games bring to children. However, Deaf communities do not have satisfactory software of this kind either. The present study presents a set of guidelines, based on known educational video game models and on a methodology for the education of Deaf children, intended to support game developers in creating educational video games for Deaf children. Following the guidelines, the construction of a game for Deaf children is presented in order to show the effectiveness of the guidelines within the design process and to assess them.
Download

Paper Nr: 280
Title:

The Adoption and Use of Human Resource Information System (HRIS) in Ghana

Authors:

Peter K. Osei Nyame and Richard Boateng

Abstract: This study examined the adoption of Human Resource Information Systems (HRIS) among Ghanaian firms. A survey was conducted on 129 of the 150 firms randomly sampled from both the public and private sectors of the country, a response rate of 86%. The findings first revealed that HRIS adoption is not common practice in Ghana, since two-thirds of the organizations have never adopted HRIS. Major common denominators for the adoption and use of HRIS include firm size, organization type (i.e., profit-making limited liability companies and profit-making government organizations) and age, as well as the industry to which firms belong. Firms attributed the slow rate of adoption to reasons including low numbers of employees, the high cost of system installation, lack of awareness and the low priority given to such a system. Moreover, the companies’ readiness to adopt such a system was not encouraging. Technical, organizational and environmental factors affecting HRIS adoption were also unearthed.
Download

Area 6 - Enterprise Architecture

Full Papers
Paper Nr: 74
Title:

Modelling the Resistance of Enterprise Architecture Adoption - Linking Strategic Level of Enterprise Architecture to Organisational Changes and Change Resistance

Authors:

Nestori Syynimaa

Abstract: During the last few years Enterprise Architecture (EA) has received increasing attention in industry and academia. By adopting EA, organisations may gain a number of benefits such as better decision making, increased revenues and cost reduction, and alignment of business and IT. However, EA adoption has been found to be difficult. In this paper a model explaining resistance during the EA adoption process (REAP) is introduced and validated. The model reveals relationships between the strategic level of EA, resulting organisational changes, and sources of resistance. By utilising the REAP model, organisations may anticipate and prepare for organisational change resistance during EA adoption.
Download

Paper Nr: 133
Title:

Tool Support for Analyzing the Evolution of Enterprise Architecture Metrics

Authors:

Manoj Bhat, Thomas Reschenhofer and Florian Matthes

Abstract: Managing the evolution of the Enterprise Architecture (EA) is a key challenge for modern enterprises. EA metrics are instrumental in quantitatively measuring an enterprise’s progress towards its goals. Retrospective analysis of EA metrics empowers business users to make informed decisions when planning and selecting efficient alternatives to achieve envisioned EA goals. Even though current EA management tools support the definition and calculation of EA metrics, they do not capture the temporal aspects of EA metrics in their meta-model to enable retrospective analysis. In this paper, we first propose a model-based approach to capture the temporal aspects of EA metrics and then extend a domain-specific language to compute EA metrics at any point in time in the past. This allows visualizing the evolution of EA metrics and, as a consequence, the evolution of the EA.
Download

Paper Nr: 142
Title:

An Operational Model of Variable Business Process

Authors:

Raoul Taffo Tiam, Abdelhak-Djamel Seriai and Raphael Michel

Abstract: Software vendors, pressed to produce faster, better and cheaper, are irreversibly affected by the development of product lines (software factories). The software product line approach offers techniques to increase reuse by explicitly modelling common and variable characteristics. In this approach, variability is modelled and managed throughout all stages of development; models of variable business processes are thus part of the design artefacts of the analysis stage. Several models have been proposed to represent variable business processes. However, these models are far from directly usable in the real industrialization of production in a software factory. Indeed, deficiencies such as the non-representation of variability on all entities of business processes, the failure to take into account all possible types of variability, or the use of proprietary languages prevent those models from being operational. In this paper, we present these barriers to operationalization and propose solutions to overcome each of them. The result is an operational model of variable business processes, actually used and integrated in a software factory approach.
Download

Paper Nr: 150
Title:

Costing as a Service

Authors:

André Machado, Carlos Mendes, Miguel Mira da Silva and João Almeida

Abstract: Cost awareness and cost efficiency have always been major concerns for organizations in all industries, but in the last few years their importance has grown due to the global economic and financial crisis. Given their small size and market exposure, Small and Medium Enterprises (SMEs) need cost awareness and efficiency more than ever. However, efficient and accurate costing methodologies are out of reach for most SMEs. In this research we propose that costing be offered as a service to reduce the cost of cost analysis. Our proposal is a cloud-based costing system that offers costing as a service using the Time-Driven Activity-Based Costing (TDABC) methodology and the concept of Business Process Costing Templates. Combined, they reduce the cost of cost analysis, especially for SMEs. We used the Design Science Research Methodology (DSRM) to conduct our research. The proposal was demonstrated in three Portuguese organizations and evaluated with feedback gathered from interviews and results from the system instantiation in all organizations.
Download

Short Papers
Paper Nr: 27
Title:

A Comparative Study on the Impact of Business Model Design & Lean Startup Approach versus Traditional Business Plan on Mobile Startups Performance

Authors:

Antonio Ghezzi, Andrea Cavallaro, Andrea Rangone and Raffaello Balocco

Abstract: Business Model Design (BMD) and the Lean Startup Approach (LSA) are two widespread practices among entrepreneurs, and many Mobile startups declare that they adopt them. However, neither framework is well rooted in the academic literature, and few studies address whether they actually outperform traditional approaches to new Mobile startup creation. This study’s aim is to assess the contribution to performance of the combined use of BMD and LSA for two startups operating in the highly dynamic Mobile Applications industry; their performance is then compared to that achieved by two Mobile startups adopting the traditional Business Plan (BP) approach. Findings reveal how the combined use of BMD and LSA outperforms the traditional BP in the cases analyzed, thus constituting a promising methodology to support Strategic Entrepreneurship.
Download

Paper Nr: 34
Title:

A Process Approach for Capability Identification and Management

Authors:

Matthias Wißotzki

Abstract: Enterprises reach their goals by implementing strategies. Successful strategy implementation is affected by challenges that an enterprise has to face and overcome. Enterprises require specific capabilities in order to be able to implement strategies in an effective way and achieve desired results. The demand for a systematic capability management approach is thus growing. This paper introduces a general process for identifying, improving, and maintaining capabilities in an enterprise. This process is based on an integrated capability approach that results from a number of investigations performed over the past years. Comprised of four building blocks, the capability management process represents a flexible engineering approach for capability catalog developers and designers.
Download

Paper Nr: 71
Title:

Relaxed Soundness Verification for Interorganizational Workflow Processes

Authors:

Lígia Maria Soares Passos and Stéphane Julia

Abstract: This paper presents a method for the Relaxed Soundness verification of interorganizational workflow processes. The method considers Interorganizational WorkFlow net models and is based on the analysis of Linear Logic proof trees. To verify the Relaxed Soundness criterion, a Linear Logic proof tree is built for each different scenario of an unfolded Interorganizational WorkFlow net. These proof trees are then analysed against two conditions: the first verifies whether the analysed scenario can finish properly, without spare tokens, and the second verifies whether every activity of the global process is covered by at least one possible scenario. The Interorganizational WorkFlow net is considered relaxed sound if the scenarios satisfy these conditions.
Download

Paper Nr: 84
Title:

A Method for Business-IT Alignment of Legacy Systems

Authors:

Jonathan Pepin, Pascal André, Christian Attiogbe and Erwan Breton

Abstract: The separate evolution of the business side of the information system and its IT side leads to inconsistent enterprise architectures. The consequences are unpredictable and costly evolutions of software systems and delayed answers to strategic decision requirements. Numerous contributions have emerged to address the Business-IT alignment problem, but they do not completely fit legacy systems, because they are top-down, focus only on strategic alignment, or require seamless models such as BPM-SOA alignment. We propose a method to tackle the challenge of legacy architecture alignment from a practical point of view. This method includes: (i) meta-models (business process, functional and application), (ii) a top-down and bottom-up process to feed the models and (iii) an implemented tool chain based on model transformations and weaving. Our objective is to establish and maintain a consistency link between the legacy software architecture models and the enterprise business models. This link makes them aligned, so that mismatches can be revealed (as-is) and avoided in the future state of the system (to-be). We experimented with the method on a real case study.
Download

Paper Nr: 118
Title:

A SOA Repository with Advanced Analysis Capabilities - Improving the Maintenance and Flexibility of Service-Oriented Applications

Authors:

Thomas Bauer, Stephan Buchwald, Julian Tiedeken and Manfred Reichert

Abstract: In a service-oriented architecture (SOA), a change or shutdown of a particular service might have a significant impact on its consumers (e.g., IT systems). To effectively cope with such situations, the IT systems affected by a service change should be identified before actually applying the latter. For this purpose, a SOA repository with advanced analysis capabilities is needed. However, due to the numerous complex inter-dependencies between service providers and consumers, it is a challenging task to figure out which IT systems might be directly or indirectly affected by a service change and for which period of time this applies. The paper tackles this challenge and presents the design of an advanced SOA repository enriched with analysis capabilities. In particular, this repository enables automatic analyses to detect already existing problems (as-is analyses) as well as problems that might occur due to future service changes (what-if analyses). Respective analyses will foster the development of robust service-oriented applications.
Download

Paper Nr: 168
Title:

BPMN4V - An Extension of BPMN for Modelling Adaptive Processes using Versions

Authors:

Imen Ben Said, Mohamed Amine Chaâbane, Eric Andonoff and Rafik Bouaziz

Abstract: This paper presents BPMN4V, an extension of BPMN 2.0 to support the modelling of business process adaptation using versions. It introduces the extensions made to the BPMN meta-model to take into account the notion of version, considering both static and dynamic aspects of process versions. It also presents BPMN4V-Modeller, an implementation of these extensions. Using BPMN4V, business process designers can therefore model process adaptation, an important issue to address before the definitive acceptance and use of business process management systems in companies.
Download

Paper Nr: 178
Title:

Petri Net Model Cost Extension based on Process Mining - Cost Data Description and Analysis

Authors:

Dhafer Thabet, Sonia Ayachi Ghannouchi and Henda Hajjami Ben Ghézala

Abstract: Organizations constantly seek to enhance their efficiency and competitiveness by improving their business processes. Business Process Management includes techniques allowing continuous business process improvement. Process mining is a mature technology for extracting knowledge from event logs. Process model extension is a process mining technique covering different perspectives of the business process. Furthermore, the financial cost incurred during business process execution is among the relevant information needed by decision makers to take appropriate improvement decisions in terms of cost reduction. We previously proposed a solution allowing Petri Net model extension with cost information using the process mining extension technique. However, that solution simply provides cost information by associating it with the corresponding elements of the Petri Net model, which is not sufficient for decision-making support. In this paper, we propose several improvements and extensions of that solution in order to enhance the decision-making support it provides. These proposals include cost data structuring, description and analysis, in line with recommendations drawn from talks with experts.
Download

Paper Nr: 182
Title:

e-Business Architecture for Web Service Composition based on e-Contract Lifecycle

Authors:

José Bernardo Neto and Celso Hirata

Abstract: Nowadays, most approaches to the composition of web services focus on feasibility of implementation rather than on satisfying business concerns. Meeting business concerns also demands flexible and agile implementations. We present an approach for service composition based on the lifecycle of the e-contract. E-contracts have clauses and rules that express business concerns about how services are offered and consumed. We propose an architecture that enables the automated implementation of composite services; the automation concerns the configuration of web service engines. The architectural model supports the publication of contracts that describe how services are offered by different providers in order to develop the composition of services.
Download

Paper Nr: 186
Title:

From Bitcoin to Decentralized Autonomous Corporations - Extending the Application Scope of Decentralized Peer-to-Peer Networks and Blockchains

Authors:

Kalliopi Kypriotaki, Efpraxia Zamani and George Giaglis

Abstract: Inspired by new technological advancements and the groundbreaking technology at the foundation of cryptocurrencies, organizational structures are expected to evolve and new corporate structures to emerge, based on full decentralization. We posit that the blockchain, i.e., the technology, system and protocol behind and beyond the most popular digital cryptocurrencies, will introduce decentralization into many manifestations of our everyday life, especially in cases where an independent trusted third party is needed to ensure and verify operations and transactions. This paper builds upon blockchain technology and discusses how it could enable fully decentralized forms of business structures to emerge; decentralized autonomous corporations (DACs) are business entities based entirely on code, running in the cloud, providing certain services and creating value for their customers. Thus, we argue that DACs could prove a means of decentralizing and automating decision making in organizations.
Download

Paper Nr: 261
Title:

Extending WSLA for Service and Contract Composition

Authors:

Antonella Longo, Marco Zappatore and Mario Bochicchio

Abstract: Cloud Services (CSs) are nowadays enjoying ever-growing success in IT scenarios. Dynamic allocation of network, storage and computational resources, the hiding of internal IT components, and the pay-per-use paradigm are becoming increasingly widespread ways to provide and consume services. The complexity of CSs is often due to service chains in which third-party services are aggregated in order to satisfy user requests. This confirms the need to model both contracts and the corresponding Service Level Agreements (SLAs) for services provided to customers. Similarly, time-related variability issues in CSs require run-time performance monitoring and reporting solutions capable of comparing SLAs and feeding requesters with effective resource reservation and allocation policies. A detailed analysis of contract and SLA management has revealed a lack of expressivity in SLA specification and a consequent inadequacy of tools for describing and managing SLAs and contract composition. Therefore, we propose an extension of WSLA, a widely known SLA description language. We aim to model contracts and SLAs with additional details to support contract owners during service composition and its monitoring. The proposed approach has been adopted to develop and validate a tree-graph-based tool that simplifies SLA and contract composition.
Download

Paper Nr: 304
Title:

Investigating Completeness of Coding in Business Process Model and Notation

Authors:

Carlos Habekost dos Santos, Lucinéia Heloisa Thom and Marcelo Fantinato

Abstract: One way to represent a business process graphically is with the Business Process Model and Notation (BPMN). Among other things, the BPMN specification defines a textual rule and a corresponding XML Schema for each notational element. However, there are limitations in the textual rules of notational elements and their XML Schemas. For example, the XML Schema of the end event element does not prevent other elements from being connected after it, which can lead to a modeling issue. This paper introduces an approach to extending the XML Schemas of a set of notational elements. The approach considers the BPMN textual rules and compares them with the current XML Schemas proposed by BPMN. To evaluate the approach, we will develop a prototype to verify whether the extended XML Schema allows better understanding than the current schema, and we will use mathematical formalism to verify the correctness of the new schema. We expect our approach to facilitate the understanding of business processes by users and to minimize possible implementation problems (e.g., deadlocks, lack of synchronization, livelocks). Altogether, the results of this research can be interesting for users who want to develop BPM tools.
Download

Paper Nr: 322
Title:

Capability-based Planning with ArchiMate - Linking Motivation to Implementation

Authors:

Adina Aldea, Maria Eugenia Iacob, Jos Van Hillegersberg, Dick Quartel and Henry Franken

Abstract: This paper proposes a methodology for capability-based planning (CBP) and investigates how it can be modelled with ArchiMate. This can be considered an important step in aligning Business and IT. By having a common language to express organisational plans, enterprise architects can engage business leaders to plan organisational change based on business outcomes, rather than projects, processes and applications. This is possible because CBP is centred on realising strategic goals by focusing on what an organisation can do, rather than how it can do it. In order to determine a methodology for CBP we look at current research and practice, and propose a generic set of steps. Based on this, we analyse the ArchiMate 2.1 Specification for suitability and propose the addition of the Capability and Metric concepts. In the last section we validate our proposed methodology and metamodel with the help of a case study.
Download

Paper Nr: 341
Title:

Adapting Service Development Life-cycle for Cloud

Authors:

George Feuerlicht and Hong Thai Tran

Abstract: As the adoption of cloud computing gathers momentum, many organizations are facing new challenges that relate to the management of cloud computing environments that may involve hundreds of autonomous cloud services provided by a large number of independent service providers. In this paper we argue that the large-scale use of externally provided cloud services in enterprise applications necessitates re-assessment of the SOA paradigm. The main contribution of this paper is the identification of the differences between service provider and service consumer SDLC cycles and the description of the service consumer SDLC phases.
Download

Paper Nr: 18
Title:

Towards a Reference Enterprise Application Architecture for the Customer Relationship Management Domain

Authors:

André Cruz and André Vasconcelos

Abstract: The work presented in this paper focuses on a first step towards a Reference Application Architecture for the CRM domain. A Reference Architecture is a way to approach commonly occurring problems through good architectural design patterns. To reach a Reference Architecture, we analyzed the features of five CRM market solutions to capture industry best practices. The chosen CRM solutions were SugarCRM, Microsoft Dynamics CRM, Sage CRM, Oracle Siebel CRM and Salesforce CRM. From these solutions we extracted fifty-three common features from the systems’ datasheets. These fifty-three features are grouped into ten modules (namely: Sales, Marketing, Service, Reporting, Calendar, Integration, Document, Workflow, Mobile and Security), all of which are part of the CRM system. We arrived at these modules through the groups that already existed in the CRM datasheets. With the proposed Reference Architecture we expect to help architects by providing guidelines and knowledge about the CRM domain, with a focus on CRM market solutions that primarily target small and medium businesses.
Download

Paper Nr: 35
Title:

A Survey on Enterprise Architecture Management in Small and Medium Enterprises

Authors:

Matthias Wißotzki, Felix Timm and Anna Sonnenberger

Abstract: Companies need to control enterprise-wide processes and adopt matching actions. In the past, IT-focused architectures failed to integrate other layers and functions of the enterprise. A connection between purely business-focused and purely IT-focused management has to be established in light of a dynamic environment that forces enterprises to adapt and change internally. This paper defines important terms for the understanding of Enterprise Architecture (EA) and its Management (EAM), its importance, as well as its adaptability for small and medium-sized enterprises (SMEs). An empirical survey underpins this adaptability by researching the implementation of EAM in SMEs in practice. The survey shows that the IT focus asserted in the literature is not realized in practice.
Download

Paper Nr: 128
Title:

Business-IT Alignment in PSS Value Networks - Linking Customer Knowledge Management to Social Customer Relationship Management

Authors:

Samaneh Bagheri, Rob J. Kusters and Jos J. M. Trienekens

Abstract: Offering a PSS based on co-creating value with customers starts with understanding customer needs. Customer understanding is realized through the process of managing customer knowledge across a PSS value network. In this respect, customer knowledge management (CKM) is seen as a core business capability. We extend the notion of the CKM capability to a PSS value network, defining it as the value network CKM (VN-CKM) capability. We also look at the supportive IT capability, which we define as the value network social customer relationship management (VN-SCRM) capability. At the operational level, the VN-CKM and VN-SCRM capabilities are reflected in the execution of business processes and information systems. To achieve BIA, a linkage is required between the VN-CKM capability and the VN-SCRM capability, and between their accompanying business processes and systems. If, in the VN-CKM process, activities such as the creation, storage/retrieval, transfer, and usage of customer knowledge are enabled by VN-SCRM systems across the network, the established BIA will support the functioning of the PSS. In this study we discuss the role of a VN-SCRM capability and identify requirement components of the accompanying systems in relation to a VN-CKM capability and its accompanying processes, in order to foster BIA at the network level.
Download

Paper Nr: 191
Title:

Evaluation of Paradigms Enabling Flexibility - BPMSs Comparative Study

Authors:

Asma Mejri and Sonia Ayachi Ghanouchi

Abstract: In this paper, we conduct a comparative study of several paradigms that provide flexibility: constraint-based, rule-based, case handling, and adaptive process support paradigms. We evaluate existing Business Process Management Systems (BPMSs) using the taxonomy of Regev et al. in order to assign a flexibility score to each of the corresponding paradigms.
Download

Paper Nr: 239
Title:

Digital Curation Costs - A Risk Management Approach Supported by the Business Model Canvas

Authors:

Diogo Proença, Ahmad Nadali, Raquel Bairrão and José Borbinha

Abstract: Data management has been emerging as a specific concern, which, when applied throughout the full lifecycle of the data, has also been named data curation. However, when it comes to the estimation of costs for digital curation, references are rare. To address that problem, we propose a pragmatic method based on the body of knowledge of risk assessment and the established concept of the Business Model Canvas. The details of the method are presented, as well as references to a tool to support it, and a demonstration is provided by its application to a real case (a national Web Archive).
Download

Paper Nr: 279
Title:

Enforcing Data Protection Regulations within e-Government Master Data Management Systems

Authors:

Federico Piedrabuena, Laura González and Raúl Ruggia

Abstract: The growing adoption of information technology by governments has led to the implementation of e-Government systems which are usually supported by middleware-based integration platforms. In particular, the increasing need of information sharing across government agencies has motivated the implementation of shared Master Data Management (MDM) Systems. On the other hand, these systems have to comply with Data Protection regulations which may hinder an extensive reuse of information in a government context. This paper addresses the issues of enforcing Data Protection (DP) regulations in e-Government MDM systems. In particular, it analyzes the requirements that DP issues pose on these systems and it proposes solutions, which leverage middleware-based capabilities and traditional MDM systems, to enforce these regulations considering different MDM architecture styles.
Download

Paper Nr: 303
Title:

An Experience of using SoaML for Modeling a Service-Oriented Architecture for Health Information Systems

Authors:

Fernanda G. Silva, Jislane S. S. de Menezes, Josimar de S. Lima, Joyce M. S. França, Rogério P. C. do Nascimento and Michel S. Soares

Abstract: Service-oriented applications have been modeled with different modeling languages and diagrams, which suggests a lack of standardization. Although UML concepts for modeling SOA can be regarded as a good starting point, UML is not an entirely feasible approach, as it was not proposed with the purpose of modeling services and SOA applications. Besides, the very concept of a service is absent in UML. SoaML is a UML profile for the specification and design of services within a service-oriented architecture. One of the advantages of using SoaML to model interoperability between systems in health care is that it is possible to model both the consumers and the service providers, which would be quite difficult to achieve using UML only. Therefore, the main contribution of this work is to apply a relatively new modeling language to real operational problems related to the integration of health systems.
Download

Paper Nr: 306
Title:

Driving the Adoption of Enterprise Architecture Inside Small Companies - Lessons Learnt from a Long Term Case Study

Authors:

Christophe Ponsard and Annick Majchrowski

Abstract: Including Enterprise Architecture (EA) as part of an organisation’s processes is an important milestone in reaching higher maturity levels because it drives the long-term alignment of the IT and business dimensions. This paper explores some important questions related to the introduction of EA inside very small enterprises or entities (VSEs), referring to the very common situation where the IT department has limited resources, possibly even within a larger organisation. We provide a number of elements of an answer on how EA can successfully be deployed in VSEs, based on a study of approaches already developed by others, complemented by our extensive experience in helping such companies improve their IT practices and adopt EA. The paper is illustrated with a multi-year case study. It also has a special focus on the ISO 29110 standard directed towards VSEs and on possible ways to evolve it to take EA into account in the more advanced maturity profiles under preparation.
Download

Paper Nr: 307
Title:

Social Business Process Management Approaches - A Comparative Study

Authors:

Hadjer Khider and Amel Benna

Abstract: The rapid development of Web 2.0 has led to fundamental changes and has offered huge opportunities in the way business process models are made available to individuals and organizations. Indeed, in order to enhance their traditional Business Process Management (BPM), organizations are increasingly looking to use these Web 2.0 technologies. The ease of use of social software and its distinct features (weak ties, implicit knowledge, knowledge sharing, etc.) have recently led to the emergence of social BPM approaches. In this paper, we discuss the interaction of social software with BPM and provide a comparative study of social business process approaches for each business process life cycle phase; we then propose how, in each phase of the business process life cycle, BPM can capitalize on social software.
Download

Paper Nr: 321
Title:

A Review of Enterprise Modelling Studies

Authors:

Lerina Aversano and Maria Tortorella

Abstract: This paper aims to provide a basis for the improvement of enterprise modelling research through a review of previous work published in the literature. The review identifies 198 enterprise modelling papers in 49 journals and classifies the papers according to: research topic, modelling approach, research approach, study context, and type of validation set. A database of these enterprise modelling papers is provided to ease the identification of relevant research results. The review results are combined with other knowledge to support strategy recommendations for future enterprise modelling research, including: identification of relevant papers within a carefully selected set of journals when completeness is essential; the need to conduct more studies on modelling methods commonly used by the software industry; and increasing awareness of how the properties of case studies impact the results when evaluating modelling methods.
Download

Paper Nr: 324
Title:

Enterprise Architecture Components for Cloud Service Consumers

Authors:

Eapen George and George Feuerlicht

Abstract: Enterprise Architecture (EA) and appropriate governance enable cloud computing adoption by consumer organisations. EA is gaining acceptance as an approach for the strategic alignment of business and IT and as a key enabler for cloud computing. EA practices consist of a range of activities and cover many of the elements necessary for enabling cloud computing. This paper discusses the key architectural components necessary, from the perspective of a consumer organization, for the adoption of cloud computing, and discusses these elements in the context of EA frameworks and governance. The ability to use maturity assessments of these architectural components to determine organizational readiness to achieve cloud benefits is also introduced.
Download