ICEIS 2014 Abstracts


Area 1 - Databases and Information Systems Integration

Full Papers
Paper Nr: 41
Title:

What Are the Factors Affecting ERP System Integration? - Observations from a Large Manufacturing Enterprise

Authors:

Tommi Kähkönen, Andrey Maglyas and Kari Smolander

Abstract: The first wave of Enterprise Resource Planning (ERP) systems integrated the core internal business processes and provided operational benefits for companies. The second wave of ERPs introduced additional challenges due to the need for ERPs to also interact with various other systems beyond organizational boundaries, highlighting integration as a critical activity during ERP system development. This paper takes a Grounded Theory approach to investigate ERP system integration. A model of four groups of factors affecting ERP system integration was created. Challenged by the domain, organizational landscape, ERP development network partners and system characteristics, ERP system integration is a continuous and cooperative effort during ERP development, conducted by the dynamic ERP development network. It struggles through forced-marriage relationships, political games and organizational changes, and aims at an integrated business engine that makes the business more competitive. The model creates a base for further research to investigate how integration issues are solved in ERP development networks.

Paper Nr: 61
Title:

Specifying Complex Correspondences Between Relational Schemas in a Data Integration Environment

Authors:

Valéria Pequeno and Helena Galhardas

Abstract: When dealing with the data integration problem, the designer usually encounters incompatible data models characterized by differences in structure and semantics, even in the context of the same organization. In this work, we propose a declarative and formal approach to specify 1-to-1, 1-to-m, and m-to-n correspondences between relational schema components. Differently from usual correspondences, our Correspondence Assertions (CAs) have semantics and can deal with joins, outer-joins, and data-metadata relationships. Finally, we demonstrate how we can generate mapping expressions in the form of SQL queries from CAs.

Paper Nr: 67
Title:

Business Information System for the Control of Workforce Through Behaviour Monitoring Using Reactive and Terminal-based Mobile Location Technologies

Authors:

Sergio Ríos-Aguilar, Francisco-Javier Lloréns-Montes and Aldo Pedromingo-Suárez

Abstract: This paper analyzes the viability of using employees’ smartphones, following the BYOD paradigm, as a valid tool for companies to conduct presence control (primarily for a remote workforce). A Mobile Information System is also proposed for presence control using exclusively terminal-based reactive location technologies, meeting cost minimization and universal access criteria. Qualitative and quantitative reference values are proposed that are adequate for the location accuracy demanded in different remote workforce control scenarios, taking into consideration the strictest international regulation in force relevant to the location of individuals in emergency systems, promoted by the North American FCC. A prototype of the proposed Information System was developed to evaluate its validity under different real-world conditions, and valuable information was obtained on the accuracy and precision of location data using real devices (iOS and Android) under heterogeneous connectivity conditions and workplace premises.

Paper Nr: 144
Title:

The Manufacturing Knowledge Repository - Consolidating Knowledge to Enable Holistic Process Knowledge Management in Manufacturing

Authors:

Christoph Gröger, Holger Schwarz and Bernhard Mitschang

Abstract: The manufacturing industry is faced with strong competition making the companies’ knowledge resources and their systematic management a critical success factor. Yet, existing concepts for the management of process knowledge in manufacturing are characterized by major shortcomings. Particularly, they are either exclusively based on structured knowledge, e. g., formal rules, or on unstructured knowledge, such as documents, and they focus on isolated aspects of manufacturing processes. To address these issues, we present the Manufacturing Knowledge Repository, a holistic repository that consolidates structured and unstructured process knowledge to facilitate knowledge management and process optimization in manufacturing. First, we define requirements, especially the types of knowledge to be handled, e. g., data mining models and text documents. On this basis, we develop a conceptual repository data model associating knowledge items and process components such as machines and process steps. Furthermore, we discuss implementation issues including storage architecture variants and finally present both an evaluation of the data model and a proof of concept based on a prototypical implementation in a case example.

Paper Nr: 148
Title:

Efficient and Distributed DBScan Algorithm Using MapReduce to Detect Density Areas on Traffic Data

Authors:

Ticiana L. Coelho da Silva, Antônio C. Araújo Neto, Regis Pires Magalhães, Victor A. E. de Farias, José A. F. de Macêdo and Javam C. Machado

Abstract: Mobility data has been fostered by the widespread diffusion of wireless technologies. This data opens new opportunities for discovering the hidden patterns and models that characterise human mobility behaviour. However, due to the huge size of generated mobility data and the complexity of mobility analysis, new scalable algorithms for efficiently processing such data are needed. In this paper we are particularly interested in using traffic data to find congested areas within a city. To this end we developed a new distributed and efficient strategy for the DBSCAN algorithm that uses MapReduce to detect the density areas. We conducted experiments using real traffic data of a Brazilian city (Fortaleza) and compared our approach with centralized and MapReduce-based DBSCAN approaches. Our preliminary results confirm that our approach is scalable and more efficient than its competitors.
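A common way to distribute DBSCAN, as described in abstracts like this one, is to partition points into grid cells (the map step) and cluster each cell locally (the reduce step). The sketch below is a minimal single-process illustration of that idea, not the authors' implementation; the grid partitioning and the naive core-point test are illustrative assumptions.

```python
from collections import defaultdict

def mapper(points, cell_size):
    """Map step: partition 2-D points into grid cells so that each cell
    can be processed independently by a reducer."""
    cells = defaultdict(list)
    for x, y in points:
        key = (int(x // cell_size), int(y // cell_size))
        cells[key].append((x, y))
    return cells

def reducer(cell_points, eps, min_pts):
    """Reduce step: a naive local density test within one cell.
    A point is 'dense' (a DBSCAN core point) if it has at least
    min_pts neighbours within radius eps, itself included."""
    dense = []
    for p in cell_points:
        neighbours = [q for q in cell_points
                      if (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= eps ** 2]
        if len(neighbours) >= min_pts:
            dense.append(p)
    return dense

# Three tightly packed points form a dense area; the outlier does not.
points = [(0.0, 0.0), (0.1, 0.1), (0.2, 0.0), (5.0, 5.0)]
cells = mapper(points, cell_size=1.0)
dense = [p for cell in cells.values()
         for p in reducer(cell, eps=0.5, min_pts=3)]
```

A full implementation would also merge clusters that straddle cell borders, which is where most of the engineering effort in a distributed DBSCAN lies.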

Paper Nr: 149
Title:

TrieMotif - A New and Efficient Method to Mine Frequent K-Motifs from Large Time Series

Authors:

Daniel Y. T. Chino, Renata R. V. Gonçalves, Luciana A. S. Romani, Caetano Traina Jr. and Agma J. M. Traina

Abstract: Finding previously unknown patterns that frequently occur in time series is a core task of time series mining. These patterns are known as time series motifs and are essential to associate events and meaningful occurrences within the time series. In this work we propose a method based on a trie data structure that allows fast and accurate time series motif discovery. From the experiments performed on synthetic and real data we can see that our TrieMotif approach is able to efficiently find motifs even as the time series grow longer, being on average 3 times faster and requiring 10 times less memory than the state-of-the-art approach. As a case study on real data, we also evaluated our method using time series extracted from remote sensing images regarding sugarcane crops. Our proposed method was able to find relevant patterns, such as sugarcane cycles and other land covers inside the same area.
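Trie-based motif discovery typically first discretizes the series into symbols and then counts sliding-window words in a trie, so that frequent words approximate candidate motifs. The sketch below illustrates that general scheme; the breakpoints, window length, and symbolization are illustrative assumptions, not the TrieMotif algorithm itself.

```python
def symbolize(series, breakpoints=(-0.5, 0.5)):
    """Discretize real values into symbols 'a', 'b', 'c' (a SAX-like step;
    the breakpoints here are illustrative)."""
    def sym(v):
        for i, b in enumerate(breakpoints):
            if v < b:
                return chr(ord('a') + i)
        return chr(ord('a') + len(breakpoints))
    return ''.join(sym(v) for v in series)

def count_motifs(series, window):
    """Slide a window over the symbolized series and count each word in a
    trie; the word counts approximate candidate k-motifs."""
    word = symbolize(series)
    trie = {}
    for i in range(len(word) - window + 1):
        node = trie
        for ch in word[i:i + window]:
            node = node.setdefault(ch, {})
        node['#'] = node.get('#', 0) + 1  # '#' marks end-of-word + count

    # Collect (subsequence, count) pairs by walking the trie.
    out = {}
    def walk(node, prefix):
        for k, v in node.items():
            if k == '#':
                out[prefix] = v
            else:
                walk(v, prefix + k)
    walk(trie, '')
    return out

# An alternating series yields two repeating length-2 words.
counts = count_motifs([0.0, 1.0, 0.0, 1.0, 0.0, 1.0], window=2)
```

The trie shares prefixes between windows, which is what gives this family of methods its memory advantage over storing every window separately.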

Paper Nr: 155
Title:

Preserving the Original Query Semantics in Routing Processes

Authors:

Crishane Freire, Nicolle Cysneiros, Damires Souza and Ana Carolina Salgado

Abstract: In distributed data environments, peers (data sources) are connected with each other through a set of semantic correspondences in such a way that directly connected peers are called semantic neighbours. Queries are submitted considering partial information provided by a peer schema and may be answered by other neighbour peers. From the query submission peer, the original query is successively rewritten into queries over the other peers, according to the correspondences between the original peer and the target ones. In this process, some of the original query terms may be lost while others may be added, leading to a semantic loss of the original query. In this work, we argue that it is essential to try to preserve the original query semantics if we wish to hold what the users defined as important at query submission time. With this in mind, we propose an approach to preserve the original query semantics in query routing processes. Furthermore, we present a metric for assessing the relevance of neighbour peers according to an estimated query semantic value obtained at each query reformulation. In this paper, we present the developed approach and some experimental results we have obtained.

Paper Nr: 160
Title:

Quality Indices in Medical Alert Systems

Authors:

Juan-Pablo Suarez-Coloma, Christine Verdier and Claudia Roncancio

Abstract: Numerous alert systems exist in healthcare domains, but most of them produce too many false alerts, leading to bad usage or disinterest. The need for better alert systems motivates the development of context-aware alert systems. The alert system Tempas is a decision-support tool based on personalized alerts. It is adaptable to the business environment, target population, and expert user needs, and can be customized in real time for immediate needs by end users. The adaptability is defined during the alert creation process. The customization is defined during the alert management process. It is based on the targeted population, activation conditions, and the alert behavior. It is supported by two quality indices: the applicability index expresses how much a patient is concerned by the alert, and the confidence index expresses how much the user can trust the alert. Both indices are used during the alert creation process (minimal thresholds for the population) and during the management process (minimal personalized threshold). The paper presents a summarized view of Tempas and focuses on the quality indices.

Paper Nr: 185
Title:

A Proposal to Maintain the Semantic Balance in Cluster-based Data Integration Systems

Authors:

Edemberg Rocha Silva, Bernadette Farias Lóscio and Ana Carolina Salgado

Abstract: With the large volume of data sources on the Web, we need a system that integrates them so that users can query them transparently. For query efficiency, integration systems can group these sources in clusters according to the semantic similarity of their schemas. However, the sources have the autonomy to evolve their schemas, and to join or leave the integration system at any time. This autonomy may cause a problem which we define as semantic unbalance of clusters. The semantic unbalance can compromise the formation of clusters and hence the efficiency of the submitted queries. In this paper, we propose a solution for the semantic balance of clusters in dynamic data integration systems based on self-organization. We also introduce a measure to evaluate how much the clusters are semantically unbalanced.

Paper Nr: 189
Title:

A Framework for the Discovery of Predictive Fix-time Models

Authors:

Francesco Folino, Massimo Guarascio and Luigi Pontieri

Abstract: Fix-time prediction is a key task in bug tracking systems, which has recently been addressed through the definition of inductive learning methods, trained to estimate the time needed to solve a case at the moment when it is reported. Yet the actions performed on a bug along its life can help refine the prediction of its (remaining) fix time, possibly with the help of Process Mining techniques. However, typical bug-tracking systems lack any task-oriented description of the resolution process, and store fine-grained records that merely capture updates to bug attributes. Moreover, no general approach has been proposed to support the definition of derived data, which can considerably improve fix-time predictions. A new methodological framework for the analysis of bug repositories is presented here, along with an associated toolkit, leveraging two kinds of tools: (i) a combination of modular and flexible data-transformation mechanisms, for producing an enhanced process-oriented view of log data, and (ii) a series of ad-hoc induction techniques, for extracting a prediction model out of such a view. Preliminary results on the bug repository of a real project confirm the validity of our proposal and, in particular, of our log transformation methods.

Short Papers
Paper Nr: 6
Title:

Multi-domain Schema Clustering and Hierarchical Mediated Schema Generation

Authors:

Qizhen Huang, Chaoliang Zhong and Jun Zhang

Abstract: In data integration, users can access multiple data sources through a uniform interface. Yet it is still not easy to query data sources where many domains coexist, even if the data sources are clustered into several domains, since users have to write different query clauses for each domain. Previous research has presented various data integration techniques, but nearly all of them require that the schemas of the data sources to be integrated belong to the same domain, or fail to address that some different domains may be sub-domains of a higher-level domain, in which case a more abstract query clause for the upper domain can substitute for several less abstract query clauses for the lower domains. In this paper, we propose a graph-based approach for clustering schemas which ultimately exposes a hierarchical mediated schema forest to users, and a query forwarding mechanism to transform queries down along the schema forest. Experimental results demonstrate that our schema clustering algorithm is effective in clustering the data sources into hierarchical schemas, that queries on the mediated schemas achieve answers with good accuracy, and that the cost of writing query clauses is reduced without losing query accuracy.

Paper Nr: 15
Title:

The Organisational Impact of Implementing Integrated IS in HE Institutions - A Case Study from a UK University

Authors:

Dimitra Skoumpopoulou and Teresa Waring

Abstract: This paper explores the implementation process of integrated Information Systems (IS) in Higher Education (HE) institutions. This is achieved through the analysis of an HE institution’s strategy during the implementation process of the integrated IS and the impact that the new system had on the working practices of the institution. Through the use of interviews, the research indicates that there has been a growth of alternative power bases within the university, new roles and responsibilities for administrative staff, and a different working environment for academics.

Paper Nr: 47
Title:

Architectural Key Dimensions for a Successful Electronic Health Record Implementation

Authors:

Eduardo Pinto and António Carvalho Brito

Abstract: The availability of patient clinical data can be vital to a more effective diagnosis and treatment by a healthcare professional. This information should be accessible regardless of context, place, time or where it was collected. In order to share this type of data, many countries have initiated projects aiming to implement Electronic Health Record (EHR) systems. Throughout the years, some were more successful than others, but all of them were complex and difficult to materialise. The research involves the study of four international projects – in Canada, Denmark, England and France – launched with the goal of fostering clinical data sharing in the respective countries, namely by implementing EHR-like systems. Those case studies served as data to identify the critical issues in this area. To address the challenge of sharing clinical information, the authors believe it is necessary to act in three different dimensions of the problem: (1) the engagement of the stakeholders and the alignment of the system development with the business goals; (2) the building of complex systems of systems with the capability to evolve and easily admit new peers; (3) the interoperability between different systems which use different conventions and standards.

Paper Nr: 49
Title:

Evaluation of Exclusive Data Allocation Between SSD Tier and SSD Cache in Storage Systems

Authors:

Shinichi Hayashi and Norihisa Komoda

Abstract: We propose an exclusive data allocation method and evaluate the storage I/O response time with this method between a solid state drive (SSD) for a tiered volume and an SSD for cache in a storage system that uses both an SSD and hard disk drive (HDD). With the proposed method, the SSD cache function with exclusive data allocation caches only data allocated on the HDD tier. This enables more data to be allocated on the SSD, which reduces storage I/O response time. The simulation results show that the proposed method reduces the storage I/O response time in high I/O locality workload or low I/O locality workload with large SSD capacity. It also reduces the storage I/O response time by up to 23% compared to a combination of SSD/HDD volume tiering and SSD cache methods with no exclusive data allocation.

Paper Nr: 52
Title:

DG-Query: An XQuery-based Decision Guidance Query Language

Authors:

Alexander Brodsky, Shane G. Halder and Juan Luo

Abstract: Decision optimization is broadly used for making business decisions such as those for finding the best production planning in manufacturing. An optimization model may indicate the total cost of a certain supply chain given the various sourcing and transportation options used; the corresponding optimization problem can be to select among all possible sourcing and transportation options to minimize the total cost. Optimization modelling requires considerable mathematical expertise and effort to generate effective models. Additionally, the optimization process is heavily dependent on data. However, optimization languages such as IBM’s ILOG CPLEX OPL and Bell Laboratories’ AMPL do not provide native support for manipulation of XML data. On the other hand, XQuery is a language for querying and manipulating XML data, which has become a ubiquitous standard (W3C) for data exchange between organizations; however, XQuery has no decision optimization functionality. To resolve this gap, this paper proposes DG-Query, an XQuery-based analytics language that seamlessly merges XML data transformation and decision optimization capabilities. This is accomplished by first annotating existing XQuery expressions to precisely express the optimization semantics, and second by translating the annotated queries into an equivalent mathematical programming (MP) formulation that can be solved efficiently using existing optimization solvers. This paper presents DG-Query with an example, provides its formal semantics, and describes implementation through a reduction to an MP formulation.

Paper Nr: 89
Title:

Cloud Computing - An Evaluation of Rules of Thumb for Tuning RDBMSs

Authors:

Tarcizio Alexandre Bini, Marcos Sfair Sunye and Adriano Lange

Abstract: Cloud computing environments are attractive for IT service provision as they allow for greater flexibility and rationalization of IT infrastructure. In an attempt to benefit from these environments, IT professionals are incorporating legacy Relational Database Management Systems (RDBMSs) into them. However, the design of these legacy systems does not account for the changes in resource availability present in cloud environments. This work evaluates the use of rules of thumb in RDBMS configuration. Through an evaluation method that simulates concurrent I/O workloads, we analyzed RDBMS performance under various settings. The results show that well-known configuration rules are inefficient in these environments and that new definitions are necessary to harvest the benefits of cloud computing environments.

Paper Nr: 111
Title:

A Pattern-based Approach for Semantic Retrieval of Information Resources in Enterprises - Application Within STMicroelectronics

Authors:

Sara Bouzid, Corine Cauvet, Claudia Frydman and Jacques Pinaton

Abstract: Information-resource retrieval in enterprises is becoming a major concern nowadays because of the importance of business information in supporting the satisfaction of business objectives. To enhance resource retrieval in enterprises, this paper argues that it is necessary to coherently include the user need in the search process, in particular when this need is business-context dependent. A pattern-based approach is proposed for this purpose. The approach captures the business needs in a company using goal-oriented mechanisms and integrates them in a keyword search using alignment patterns. These patterns are used to both guide the search process and semantically fill the gap between the low-level description of information resources and the high-level needs of business actors. The approach has been applied for resource retrieval in the context of the manufacturing-process control within the STMicroelectronics Company.

Paper Nr: 122
Title:

Performance Tuning of Object-Oriented Applications in Distributed Information Systems

Authors:

Zahra Davar and Janusz R. Getta

Abstract: The majority of global information systems are constructed from a number of heterogeneous distributed database systems that provide a global object-oriented view of the data stored at the remote systems. Such global information systems have two sides: the source side, which consists of heterogeneous distributed databases, and the global side, which provides an integrated view of the database systems from the source side. User applications access data through iterations over the classes of objects included in a global object-oriented view. The iterations over the classes of objects are implemented as iterations over the data items, such as the rows of the relational tables, on the source side of the system, creating serious performance problems. This paper addresses the performance problem of object-oriented applications accessing data on the source side of a global information system through an object-oriented view on the global side. We propose a number of transformation rules which allow for more efficient processing of object-oriented applications on the source side. The rules can eliminate the iterations over classes of objects on the global schema side. We prove the correctness of the rules and show how to systematically apply them to object-oriented applications. The paper proposes a number of templates for programming object-oriented applications that allow for easier and more efficient performance tuning transformations.

Paper Nr: 154
Title:

Approximate String Matching Techniques

Authors:

Taoxin Peng and Calum Mackay

Abstract: Data quality is a key to success for all kinds of businesses that have information applications involved, such as data integration for data warehouses, text and web mining, information retrieval, search engines for web applications, etc. In such applications, matching strings is one of the popular tasks. There are a number of approximate string matching techniques available. However, one problem remains unanswered: for a given dataset, how to select an appropriate technique and the threshold value required by that technique for the purpose of string matching. To address this problem, this paper analyses and evaluates a set of popular token-based string matching techniques on several carefully designed datasets. A thorough experimental comparison confirms the statement that there is no clear overall best technique. However, some techniques do perform significantly better in some cases. Some suggestions are presented, which can be used as guidance for researchers and practitioners to select an appropriate string matching technique and a corresponding threshold value for a given dataset.
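Token-based matching techniques like those the abstract evaluates typically tokenize both strings and compare the resulting token sets. A minimal sketch of one common member of this family, Jaccard similarity over whitespace tokens, is shown below; the threshold value is illustrative, not one recommended by the paper.

```python
def jaccard(s1, s2):
    """Token-based Jaccard similarity:
    |tokens1 ∩ tokens2| / |tokens1 ∪ tokens2|, case-insensitive."""
    t1, t2 = set(s1.lower().split()), set(s2.lower().split())
    if not t1 and not t2:
        return 1.0  # two empty strings are trivially identical
    return len(t1 & t2) / len(t1 | t2)

def match(s1, s2, threshold=0.5):
    """Declare a match when the similarity reaches the chosen threshold;
    picking this threshold per dataset is exactly the open problem the
    paper studies."""
    return jaccard(s1, s2) >= threshold
```

Because the comparison is over token sets, word order does not matter, which is why token-based measures behave very differently from edit-distance measures on reordered names.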

Paper Nr: 161
Title:

The SITSMining Framework - A Data Mining Approach for Satellite Image Time Series

Authors:

Bruno F. Amaral, Daniel Y. T. Chino, Luciana A. S. Romani, Renata R. V. Gonçalves, Agma J. M. Traina and Elaine P. M. Sousa

Abstract: The amount of data generated and stored in many domains has increased in recent years. In remote sensing, this scenario of bursting data is no different. As the volume of satellite images stored in databases grows, the demand for computational algorithms that can handle and analyze this volume of data and extract useful patterns has increased. In this context, computational support for satellite image data analysis becomes essential. In this work, we present the SITSMining framework, which applies a methodology based on data mining techniques to extract patterns and information from time series obtained from satellite images. In Brazil, as agricultural production provides a great part of the national resources, the analysis of satellite images is a valuable way to help crop monitoring over seasons, which is an important task for the economy of the country. Thus, we apply the framework to analyze multitemporal satellite images, aiming to help crop monitoring and forecasting of Brazilian agriculture.

Paper Nr: 178
Title:

Workflow Model for Management of Ontology of Homecare Pervasive Systems

Authors:

Leandro O. Freitas, Ederson Bastiani and Giovani R. Librelotto

Abstract: Pervasive computing applied to healthcare is seen nowadays as an alternative to the overcrowding of hospitals. The implementation of proactive applications improves the services offered, helping to speed up the treatment of patients. We present in this paper a workflow process model to be used for the development of pervasive homecare systems. It represents the management of the ontologies used by the systems. We describe the processes that compose the workflow and present a case study to validate it.

Paper Nr: 180
Title:

External Database Extension Framework

Authors:

Alexander Adam and Wolfgang Benn

Abstract: Database systems nowadays offer a wide range of extensibility interfaces, ranging from simple user-defined functions to extensible indexing frameworks as seen in, e.g., DB2 and Oracle. Using these interfaces is definitely a valuable option when approaching new projects. For legacy systems, already set up and running in production environments, these options are often not available, since most of them impose a change in the applications. In this work we present a database extension framework that enables users to define functionality which does not reside inside the database. We show different ways to integrate it into existing application landscapes without further modifications.

Paper Nr: 183
Title:

Application-Mimes - An Approach for Quantitative Comparison of SQL - and NoSQL-databases

Authors:

Martin Kammerer and Jens Nimis

Abstract: Due to the rise of NoSQL systems in recent years, the world of commercially applicable database systems has become much larger and more heterogeneous than ever before. But the opportunities associated with the upcoming systems have also introduced a new decision problem in the information system design process. In the past, benchmarking helped to identify the proper database product among the de facto standard SQL systems. Nowadays, the functional and non-functional properties of database systems and their implications for application development are so divergent that not all systems that come into consideration for the realisation of a specific application can be covered by the same benchmark. In this paper we present an approach for experimental comparative information system evaluation that allows for a well-grounded selection among diverging database systems. It is based on the concept of so-called application-mimes, i.e. functionally restricted implementations that focus exclusively on the information system’s interaction with data management and try to mimic the target system’s behaviour in this respect as realistically as possible.

Paper Nr: 196
Title:

Generative Modeling of Itemset Sequences Derived from Real Databases

Authors:

Rui Henriques and Cláudia Antunes

Abstract: The increasingly studied problem of discovering temporal and attribute dependencies from multi-sets of events derived from real-world databases can be mapped as a sequential pattern mining task over itemset sequences. Still, the length and local nature of pattern-based models have been limiting their application. Although generative approaches can offer a critical compact and probabilistic view of sequential patterns, existing contributions are only prepared to deal with sequences of single elements. This work targets the task of modeling itemset sequences under a Markov assumption using models centered on sequential patterns. Experimental results hold evidence for the ability to model sequential patterns with acceptable completeness and precision levels, and with superior efficiency for dense or large datasets. We show that the proposed learning setting allows: i) compact representations; ii) the probabilistic decoding of patterns; iii) the inclusion of user-driven constraints through simple parameterizations; and iv) the use of the generative pattern-centered models to support key tasks such as classification. Relevance is demonstrated on retail and administrative databases.
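A first-order Markov model over itemset sequences, the general setting this abstract works in, can be estimated by counting transitions between whole itemsets (treated as states) and normalizing. The sketch below is a minimal illustration of that assumption only; the pattern-centered models of the paper are considerably richer.

```python
from collections import defaultdict

def train_markov(sequences):
    """Estimate first-order transition probabilities between itemsets.
    Each itemset is frozen into a hashable state; None is the start state."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        prev = None
        for itemset in seq:
            state = frozenset(itemset)
            counts[prev][state] += 1
            prev = state

    # Normalize counts into conditional probabilities P(next | context).
    model = {}
    for ctx, nxt in counts.items():
        total = sum(nxt.values())
        model[ctx] = {state: n / total for state, n in nxt.items()}
    return model

# Toy retail-style data: baskets observed in order per customer.
sequences = [
    [{'bread'}, {'milk'}],
    [{'bread'}, {'milk'}],
    [{'bread'}, {'eggs'}],
]
model = train_markov(sequences)
```

Under this assumption, the probability of a whole sequence factorizes into a product of these per-step transition probabilities, which is what makes the representation compact relative to enumerating long patterns.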

Paper Nr: 220
Title:

A Hybrid Strategy for Integrating Sensor Information

Authors:

Koly Guilavogui, Laila Kjiri and Mounia Fredj

Abstract: The combination of sensor networks with databases has led to a large amount of real-time data to be managed, and this trend will still increase in the coming years. With this data explosion, current integration systems have to adapt. One of the main challenges is the integration of information coming from autonomously deployed sensor networks with different geographical scales, as well as the combination of such information with other sources, such as legacy systems. Two main approaches for integrating sensor information are generally used: virtual and warehousing approaches. In the virtual approach, sensor devices are considered as data sources and data are managed locally. In contrast, in the warehousing approach, sensor data are stored in a central database and queries are performed on it. However, these solutions turn out to be difficult to exploit in the current technology landscape. This paper focuses on the issue of integrating multiple heterogeneous sensor information and puts forward a framework for the decision making process.

Paper Nr: 226
Title:

Few-exemplar Information Extraction for Business Documents

Authors:

Daniel Esser, Daniel Schuster, Klemens Muthmann and Alexander Schill

Abstract: The automatic extraction of relevant information from business documents (sender, recipient, date, etc.) is a valuable task in the application domain of document management and archiving. Although current scientific and commercial self-learning solutions for document classification and extraction work pretty well, they still require a high effort of on-site configuration done by domain experts and administrators. Small office/home office (SOHO) users and private individuals often do not benefit from such systems. Low extraction effectiveness, especially in the starting period due to a small number of initially available example documents, and the high effort of annotating new documents drastically lower their willingness to use a self-learning information extraction system. Therefore we present a solution for information extraction that fits the requirements of these users. It adopts the idea of one-shot learning from computer vision for the domain of business document processing and requires only a minimal amount of training to reach competitive extraction effectiveness. Our evaluation on a document set of 12,500 documents consisting of 399 different layouts/templates achieves extraction results of 88% F1 score on 10 commonly used fields like document type, sender, recipient, and date. We already reach an F1 score of 78% with only one document of each template in the training set.
Download

Paper Nr: 227
Title:

ETL Patterns on YAWL - Towards to the Specification of Platform-independent Data Warehousing Populating Processes

Authors:

Bruno Oliveira and Orlando Belo

Abstract: The implementation of data warehouse populating processes (ETL) is considered a complex task, not only because of the amount of data processed but also because of the complexity of the tasks involved. The implementation and maintenance of such processes face various design drawbacks, such as changing business requirements, which consequently lead to adapting existing data structures and reusing existing parts of the ETL system. We consider that a more abstract view of ETL processes and their data structures is needed, as well as a more effective mapping to real execution primitives, providing validation before conducting an ETL solution to its final implementation. With this work we propose the use of standard solutions, which have already proven very useful in software development, for the implementation of standard ETL processes. In this paper we approach ETL modelling from a new perspective, using YAWL, a workflow language, as the means to make ETL models platform-independent.
Download

Paper Nr: 274
Title:

Advances in the Decision Making for Treatments of Chronic Patients Using Fuzzy Logic and Data Mining Techniques

Authors:

M. Domínguez, J. Aroba, J. G. Enríquez, I. Ramos, J. M. Lucena-Soto and M. J. Escalona

Abstract: Virological events in HIV-infected patients can arise with no apparent reason. Therefore, when they appear, immunologists and medical doctors do not know whether they will produce further virological events or entail relevant clinical consequences. This paper presents the results of applying Prefurge to HIV-infected patients’ clinical data, with the aim of obtaining rules and information about this set of clinical trial data that relate these kinds of virological events.
Download

Paper Nr: 284
Title:

Analyzing Tagged Resources for Social Interests Detection

Authors:

Manel Mezghani, André Péninou, Corinne Amel Zayani, Ikram Amous and Florence Sèdes

Abstract: The social user is characterized by his social activity, such as sharing information, making relationships, etc. With the evolution of social content, the user needs more accurate information that reflects his interests. We focus on analyzing users' interests, which are key elements for improving adaptation (recommendation, personalization, etc.). In this article, we aim to overcome issues that influence the quality of adaptation in social networks, such as the accuracy of users' interests. The originality of our approach is the proposal of a new technique for detecting users' interests by analyzing the accuracy of their tagging behaviour, in order to identify the tags that really reflect the resources' content. We focus on semi-structured data (resources), since they provide more comprehensible information. Our approach has been tested and evaluated on the Delicious social database. A comparison between our approach and a classical tag-based approach shows that our approach performs better.
Download

Paper Nr: 38
Title:

Business Process Change in Enterprise Systems Integration - Challenges and Opportunities

Authors:

Vahid Javidroozi, Ardavan Amini, Adrian Cole and Hanifa Shah

Abstract: Currently, many organisations have undertaken systems integration with the aim of improving business performance, which potentially involves radical change in all organisational aspects, including business processes. The aim of this research is to explore and prioritise the challenges of Business Process Change (BPC) in Enterprise Systems Integration (ESI), specifically focusing on two approaches, Business Process Reengineering (BPR) and Business Process Modelling (BPMo), as well as to identify solutions for them. First, a literature review is carried out in order to explore and understand the BPC challenges of systems integration from the BPR and BPMo perspectives. Secondly, a questionnaire is deployed to gather various industrial and academic views and compare these with findings from the literature. Then, BPC challenges are prioritised, and relevant solutions are recommended to address them. The main finding of this research identifies “minimising human issues” as the most important BPC challenge in both BPR and BPMo in ESI, and solutions such as top-down management and people involvement are proposed to address it.
Download

Paper Nr: 53
Title:

Paired Indices for Clustering Evaluation - Correction for Agreement by Chance

Authors:

Maria José Amorim and Margarida G. M. S. Cardoso

Abstract: In the present paper we focus on the performance of clustering algorithms, using indices of paired agreement to measure the accordance between clusters and an a priori known structure. We specifically propose a method to correct all considered indices for agreement by chance; the adjusted indices are meant to provide a realistic measure of clustering performance. The proposed method enables the correction of virtually any index, overcoming previous limitations known in the literature, and provides very precise results. We use simulated datasets under diverse scenarios and discuss the pertinence of our proposal, which is particularly relevant when poorly separated clusters are considered. Finally, we compare the performance of the EM and K-Means algorithms within each of the simulated scenarios and conclude that EM generally yields the best results.
Download
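The correction for chance described above follows the standard adjusted-index scheme, Adj = (I - E[I]) / (max I - E[I]). As a rough illustration only (not the authors' method), the sketch below computes the Rand index and estimates E[I] empirically by permuting one partition's labels; the function names and the permutation-based estimator are assumptions for this example.

```python
import random

def rand_index(a, b):
    """Paired-agreement (Rand) index: the fraction of item pairs on
    which two partitions agree (together in both, or apart in both)."""
    n = len(a)
    agree = total = 0
    for i in range(n):
        for j in range(i + 1, n):
            total += 1
            if (a[i] == a[j]) == (b[i] == b[j]):
                agree += 1
    return agree / total

def adjusted_for_chance(index, a, b, trials=500, seed=0):
    """Correct a paired index for agreement by chance:
    (I - E[I]) / (max I - E[I]), taking max I = 1 and estimating
    E[I] by randomly permuting one partition's labels."""
    rng = random.Random(seed)
    observed = index(a, b)
    perm = list(b)
    expected = 0.0
    for _ in range(trials):
        rng.shuffle(perm)
        expected += index(a, perm)
    expected /= trials
    return (observed - expected) / (1.0 - expected)
```

With this correction, identical partitions score 1 while random relabelings score around 0, which is what makes adjusted indices comparable across scenarios.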

Paper Nr: 59
Title:

Exploring Data Fusion under the Image Retrieval Domain

Authors:

Nádia P. Kozievitch, Carmem Satie Hara, Jaqueline Nande and Ricardo da S. Torres

Abstract: Advanced services in data compression, data storage, and data transmission have been developed and are widely used to address the required capabilities of an assortment of systems across diverse application domains. In order to reuse, integrate, unify, manage, and support heterogeneous resources, a number of works and concepts have emerged with the aim of facilitating the aggregation of content and helping system developers. In particular, images, along with existing Content-Based Image Retrieval services, have the potential to play a key role in information systems, due to the large availability of images and the need to integrate them with existing collections, metadata, and available image manipulation software and applications. In this work, we explore a data fusion approach for solving data value conflicts in the context of the image retrieval domain. In particular, we target the process of solving the value conflicts that result from integrating the data produced by the Content-Based Image Retrieval process with the image metadata provided by a number of sources and applications. Our approach reduces the need for human intervention to keep a clean and integrated view of an image repository when new data sources are added to an image management system.
Download

Paper Nr: 80
Title:

Behavioral Study of Nested Transaction Success Ratio

Authors:

Mourad Kaddes, Laurent Amanton, Bruno Sadeg, Alexandre Berred, Majed Abdouli and Rafik Bouaziz

Abstract: In a real-time database system, different scheduling policies are proposed in order for transactions to meet their individual deadlines. The majority of these studies focus on the analysis of flat transaction behavior. Nevertheless, extended transactions are the most suited to support new applications due to their flexibility. In this paper, we study the scheduling of extended transactions, specifically nested transactions, using the Generalized Earliest Deadline First (GEDF) policy. GEDF is a protocol in which a transaction's priority is assigned according to both its deadline and its importance in the system. The accuracy of the GEDF scheduling policy and the influence of transaction composition and database size on system performance are investigated. This study enabled us to describe the behavior of the nested transaction success ratio.

Paper Nr: 124
Title:

A Data Analysis Framework for High-variety Product Lines in the Industrial Manufacturing Domain

Authors:

Christian Lettner and Michael Zwick

Abstract: Industrial manufacturing companies produce a variety of different products, which, despite their differences in function and application area, share common requirements regarding quality assurance and data analysis. The goal of the approach presented in this paper is to automatically generate Extract-Transform-Load (ETL) packages for semi-generic operational database schemas. This process is guided by a descriptor table, which allows the required attributes and their values to be identified and filtered. Based on this description model, an ETL process is generated which first loads the data into an entity-attribute-value (EAV) model, which is then transformed into a pivoted model for analysis. The resulting analysis model can be used with standard business intelligence tools. The descriptor table used in the implementation can be substituted with any other non-relational description language, as long as it has the same descriptive capabilities.
Download
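The EAV-to-pivot transformation described above can be sketched in a few lines. This is a generic illustration, not the generated ETL packages from the paper; the entity ids and attribute names are invented for the example.

```python
def pivot_eav(rows):
    """Pivot entity-attribute-value triples into one wide record per
    entity, mirroring the staging-to-analysis step described above.
    `rows` is an iterable of (entity_id, attribute, value) triples."""
    records = {}
    for entity, attribute, value in rows:
        records.setdefault(entity, {})[attribute] = value
    return records

# Hypothetical quality-assurance measurements in EAV form.
eav = [
    (1, "product", "valve-A"), (1, "pressure_ok", True),
    (2, "product", "valve-B"), (2, "pressure_ok", False),
]
wide = pivot_eav(eav)  # {1: {'product': 'valve-A', ...}, 2: {...}}
```

A real ETL package would emit these wide records into an analysis table whose columns come from the descriptor table rather than being hard-coded.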

Paper Nr: 177
Title:

Customer Churn Prediction in Mobile Operator Using Combined Model

Authors:

Jelena Mamčenko and Jamil Gasimov

Abstract: Data Mining technologies are developing very rapidly nowadays. One of the biggest fields of application of data mining is the prediction of churn in service provider companies. Customers who switch to another service provider are called churned customers. This study describes the main techniques and processes of Data Mining. Customer churn is defined, and different types and causes of churn are discussed. Social aspects of churn are brought to attention and specifically related to the realities of Azerbaijan.
Download

Paper Nr: 200
Title:

Multi-dimensional Pattern Mining - A Case Study in Healthcare

Authors:

Andreia Silva and Cláudia Antunes

Abstract: Huge amounts of data are continuously being generated in the healthcare system. A correct and careful analysis of these data may bring huge benefits to all the people and processes involved in healthcare management. However, the characteristics of healthcare data do not make this job easy. These data are usually too complex, massive, highly dimensional, and irregularly distributed over time. In the last decade, data mining has begun to address this area, providing the technology and approaches to transform these complex data into useful information for decision support. Multi-relational data mining, in particular, has gained attention, since it aims at the discovery of frequent relations that involve multiple dimensions. In this work we present a case study in the healthcare domain. Using the Hepatitis dataset, we show how the data can be modeled and explored in a multi-dimensional model, and we present and discuss the results of applying a multi-dimensional data mining algorithm to that model.
Download

Paper Nr: 225
Title:

SILAB - A System to Support Experiments in the Electric Power Research Center Labs

Authors:

Henrique Burd, Wagner Duboc, Marcio Antelio, Sérgio Assis Rodrigues, Allan Freitas Girão, Jacson Hwang, Rodrigo Pereira dos Santos and Jano Moreira de Souza

Abstract: Companies, research centers, and universities are increasingly keeping in contact over time. Especially since the advent of the Internet, the “Web Era” has contributed to a dynamic market as well as to a critical relation between management and engineering in industry. Thus, research centers can help companies support and improve their activities and processes through innovation partnerships. In Brazil, the Electric Power Research Center (CEPEL) is exploring information systems applied to its modernization. In this sense, this paper presents SILAB, a system to manage the actions of clients and laboratories during equipment testing and certification processes. SILAB was developed from an experience based on a government-university partnership. The main focus is to support standards, transparency, and productivity in a domain-driven workflow. Some experiences collected from SILAB’s stakeholders are also discussed.
Download

Paper Nr: 260
Title:

Evolution of the Application and Database with Aspects

Authors:

Rui Humberto R. Pereira and J. Baltasar García Perez-Schofield

Abstract: Generally, the evolution process of applications has an impact on their underlying data models, thus becoming a time-consuming problem for programmers and database administrators. In this paper we address this problem with an aspect-oriented approach, based on a meta-model for orthogonal persistent programming systems. Applying reflection techniques, our meta-model aims to be simpler than its competitors. Furthermore, it enables multi-version database schemas. We also discuss two case studies in order to demonstrate the advantages of our approach.
Download

Paper Nr: 261
Title:

On the Formalisation of an Application Integration Language Using Z Notation

Authors:

Mauri J. Klein, Sandro Sawicki, Fabricia Roos-Frantz and Rafael Z. Frantz

Abstract: Companies rely on the applications in their software ecosystem to provide IT support for their business processes. It is common that these applications were not designed with integration in mind, which makes their reuse hard. Enterprise Application Integration (EAI) focuses on the design and implementation of integration solutions. The demand for integration has motivated the rapid growth of tools to support the construction of EAI solutions. Guaraná is a proposal that can be used to design and implement EAI solutions and, unlike other proposals, includes a monitoring system that can be configured using a rule-based language to endow solutions with fault tolerance. Although Guaraná is available, it has not yet been formalised. This is a limitation, since it is not possible to validate the rules written by software engineers in the rule-based language to ensure that all possibilities of failure in a given EAI solution are covered. Besides, it is not possible to generate these rules automatically based on the semantics of the EAI solution. In this paper we provide a formal specification of the language provided by Guaraná to design EAI solutions, using Z notation.
Download

Paper Nr: 272
Title:

Semantic Integration of Semi-Structured Distributed Data in the Domain of IT Benchmarking - Towards a Domain Specific Ontology

Authors:

Matthias Pfaff and Helmut Krcmar

Abstract: In the domain of IT benchmarking, a variety of data and information is collected. The collection of this heterogeneous data is usually done in the course of specific benchmarks (e.g. focusing on IT service management topics). This collected knowledge needs to be formalized prior to any data integration, in order to ensure the interoperability of different and/or distributed data sources. Even though these data are the basis for identifying potential IT cost reductions or IT service improvements, a semantic data integration is missing. Building on previous research in IT benchmarking, we emphasise the importance of further research into data integration methods. Before we describe why the next step of research needs to focus on the semantic integration of the data that typically resides in IT benchmarking, the evolution of IT benchmarking is outlined first. In particular, we motivate why an ontology is required for the domain of IT benchmarking.
Download

Paper Nr: 275
Title:

Reflections on the Concept of Interoperability in Information Systems

Authors:

Delfina Soares and Luis Amaral

Abstract: Information systems interoperability is one of the main concerns and challenges of information systems managers and researchers, most of whom perceive and approach it from a purely or predominantly technological perspective. In this paper, we argue that a sociotechnical perspective on information systems interoperability should be adopted, and we set out seven assertions that, if taken into consideration, may improve the understanding, management, and study of the information systems interoperability phenomenon.
Download

Paper Nr: 305
Title:

New Trends in Knowledge Driven Data Mining

Authors:

Cláudia Antunes and Andreia Silva

Abstract: Existing mining algorithms, from classification to pattern mining, have reached considerable levels of efficiency, and their extension to deal with more demanding data, such as data streams and big data, shows their incontestable quality and adequacy to the problem. Despite their efficiency, their effectiveness in identifying useful information is somewhat impaired, since they do not allow existing domain knowledge to be used to focus the discovery. The use of this knowledge can bring significant benefits to data mining applications, resulting in simpler, more interesting, and more usable models. However, most existing approaches are concerned with being able to mine specific domains, and therefore are not easily reusable, instead of building general algorithms that are able to incorporate domain knowledge independently of the domain. In our opinion, this requires a shift in the focus of research in data mining, and we argue this change should be from domain-driven to knowledge-driven data mining, aiming for a stronger emphasis on the exploration of existing domain knowledge to guide existing algorithms.
Download

Area 2 - Artificial Intelligence and Decision Support Systems

Full Papers
Paper Nr: 45
Title:

RecRoute - A Bus Route Recommendation System Based on Users’ Contextual Information

Authors:

Adriano de Oliveira Tito, Arley Ramalho R. Ristar, Luana M. dos Santos, Luiz Antonio V. Filho, Patrícia Restelli Tedesco and Ana Carolina Salgado

Abstract: Traffic has become an increasingly significant problem in the lives of citizens of large and medium-sized cities. This has contributed to the inefficiency of public transportation, where one of the main issues to be tackled is the absence of relevant, timely information for users. In times when technology solutions for daily tasks are widely available, Public Transportation User Information Systems emerge as a possible solution to this issue, providing information to passengers and supporting their decision-making. This work presents a recommendation system for public transportation routes by bus, called RecRoute, that considers contextual information related to users, climate, time of day, and traffic to recommend bus routes that are better suited to passengers’ particular needs. The results of our experiment show that RecRoute was well received and its recommendations were positively evaluated by the participants.
Download

Paper Nr: 100
Title:

An Improved Parallel Algorithm Using GPU for Siting Observers on Terrain

Authors:

Guilherme C. Pena, Marcus V. A. Andrade, Salles V. G. Magalhães, W. R. Franklin and Chaulio R. Ferreira

Abstract: This paper presents an efficient method to determine a set of observers (that is, where to site them) such that a given percentage of a terrain is visually covered. Our method extends the method proposed in (Franklin, 2002) with a local search heuristic efficiently implemented using dynamic programming and GPU parallel programming. This local search strategy makes it possible to achieve higher coverage using the same number of observers as the original method, and thus to obtain a given coverage using a smaller number of observers. This can be an important improvement, since an observer may represent an expensive facility such as a telecommunication tower. The performance of the proposed method was compared with that of other methods, and the tests showed that it can be more than 1200 times faster than the sequential implementation (with no use of dynamic programming or GPU parallel programming) and also more than 20 times faster than a previous parallel method presented in (Magalhães et al., 2011).
Download
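Siting observers to reach a target coverage is, at its core, a maximum-coverage problem. The sketch below is a plain greedy baseline over precomputed viewsheds, only to illustrate the siting loop; the paper's actual contributions (the dynamic-programming local search and the GPU implementation) are not reproduced here, and the candidate/viewshed data are invented.

```python
def site_observers(viewsheds, n_cells, target=0.95):
    """Greedy siting: repeatedly pick the candidate observer whose
    viewshed covers the most not-yet-covered terrain cells, until the
    requested fraction of cells is visually covered.
    `viewsheds` maps candidate id -> set of visible cell ids."""
    covered, chosen = set(), []
    candidates = dict(viewsheds)
    while len(covered) / n_cells < target and candidates:
        best = max(candidates, key=lambda c: len(candidates[c] - covered))
        if not candidates[best] - covered:
            break  # no remaining candidate adds any coverage
        covered |= candidates.pop(best)
        chosen.append(best)
    return chosen, len(covered) / n_cells

# Hypothetical 5-cell terrain with three candidate observer sites.
viewsheds = {"a": {1, 2, 3}, "b": {3, 4}, "c": {5}}
chosen, coverage = site_observers(viewsheds, n_cells=5, target=1.0)
```

A local search on top of this (as in the paper) would try swapping chosen observers for unchosen ones to cover the same terrain with fewer sites.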

Paper Nr: 102
Title:

AIV: A Heuristic Algorithm based on Iterated Local Search and Variable Neighborhood Descent for Solving the Unrelated Parallel Machine Scheduling Problem with Setup Times

Authors:

Matheus Nohra Haddad, Luciano Perdigão Cota, Marcone Jamilson Freitas Souza and Nelson Maculan

Abstract: This paper deals with the Unrelated Parallel Machine Scheduling Problem with Setup Times (UPMSPST). The objective is to minimize the maximum completion time of the schedule, the so-called makespan. This problem is commonly found in industrial processes like textile manufacturing and belongs to the NP-hard class. We propose an algorithm named AIV, based on Iterated Local Search (ILS) and Variable Neighborhood Descent (VND). The algorithm starts from an initial solution constructed in a greedy way by the Adaptive Shortest Processing Time (ASPT) rule. This initial solution is then refined by ILS, using as local search the Random VND procedure, which explores neighborhoods based on swaps and multiple insertions. In this procedure, here called RVND, there is no fixed sequence of neighborhoods, because they are reordered on each application of the local search. In AIV, each perturbation is characterized by removing a job from one machine and inserting it into another machine. AIV was tested using benchmark instances from the literature. Statistical analysis of the computational experiments showed that AIV outperformed the algorithms from the literature, setting new improved solutions.
Download
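The ILS + RVND scheme described above reduces to a small skeleton: a local search that reshuffles its neighborhood list after every improvement, wrapped in a perturb-and-restart loop. The code below is a generic sketch with placeholder neighborhoods, not the AIV algorithm itself (no ASPT construction, swap, or multiple-insertion moves are modeled).

```python
import random

def rvnd(solution, cost, neighborhoods, rng):
    """Random VND: try neighborhoods in a random order; on any
    improvement, reshuffle the order and start over."""
    order = list(neighborhoods)
    rng.shuffle(order)
    i = 0
    while i < len(order):
        candidate = order[i](solution, rng)
        if cost(candidate) < cost(solution):
            solution = candidate
            rng.shuffle(order)
            i = 0
        else:
            i += 1
    return solution

def ils(initial, cost, neighborhoods, perturb, iterations=100, seed=0):
    """Iterated Local Search: local search, perturb, search again,
    keeping the best solution found."""
    rng = random.Random(seed)
    best = rvnd(initial, cost, neighborhoods, rng)
    for _ in range(iterations):
        candidate = rvnd(perturb(best, rng), cost, neighborhoods, rng)
        if cost(candidate) < cost(best):
            best = candidate
    return best

# Toy usage: minimize the sum of a vector with a decrement neighborhood.
def dec_first(sol, rng):
    s = list(sol)
    for i, v in enumerate(s):
        if v > 0:
            s[i] -= 1
            break
    return s

best = ils([3, 2], sum, [dec_first],
           lambda s, rng: [v + 1 for v in s], iterations=5)
```

In a scheduling setting, `solution` would be a job-to-machine assignment, `cost` the makespan, and the neighborhoods swap and insertion moves between machines.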

Paper Nr: 109
Title:

A Heuristic Procedure with Local Branching for the Fixed Charge Network Design Problem with User-optimal Flow

Authors:

Pedro Henrique González, Luidi Gelabert Simonetti, Carlos Alberto de Jesus Martinhon, Philippe Yves Paul Michelon and Edcarllos Santos

Abstract: Due to the constant development of society, increasing quantities of commodities have to be transported in large urban centers. Therefore, network design problems arise as tools to support decision-making, aiming to meet the need to find efficient ways to transport each commodity from its origin to its destination. This paper reviews a bi-level formulation and a one-level formulation obtained by applying the complementary slackness theorem, Bellman’s optimality conditions, and the Big-M linearization technique. A heuristic procedure is proposed that combines a randomized constructive algorithm with a Relax-and-Fix heuristic to generate an initial solution. A Local Branching technique is then applied to improve the constructed solution, so that high-quality solutions can be found. Our computational results are compared with those found in the literature, showing the efficiency of the proposed method.
Download

Paper Nr: 145
Title:

Extending the Hybridization of Metaheuristics with Data Mining to a Broader Domain

Authors:

Marcos Guerine, Isabel Rosseti and Alexandre Plastino

Abstract: The incorporation of data mining techniques into metaheuristics has been efficiently adopted to solve several optimization problems. Nevertheless, we observe in the literature that this hybridization has been limited to problems in which the solutions are characterized by sets of (unordered) elements. In this work, we develop a hybrid data mining metaheuristic to solve a problem for which solutions are defined by sequences of elements. In this way, we extend the domain of combinatorial optimization problems which can benefit from the combination of data mining and metaheuristics. Computational experiments showed that the proposed approach improves on the pure algorithm both in average solution quality and in execution time.
Download

Paper Nr: 151
Title:

A Data-driven Approach to Predict Hospital Length of Stay - A Portuguese Case Study

Authors:

Nuno Caetano, Raul M. S. Laureano and Paulo Cortez

Abstract: Data Mining (DM) aims at the extraction of useful knowledge from raw data. In recent decades, hospitals have collected large amounts of data through new methods of electronic data storage, thus increasing the potential value of DM in this domain, in what is known as medical data mining. This work focuses on the case study of a Portuguese hospital, based on a recent and large dataset collected from 2000 to 2013. A data-driven predictive model was obtained for the length of stay (LOS), using as inputs indicators commonly available during the hospitalization process. Based on a regression approach, several state-of-the-art DM models were compared. The best result was obtained by a Random Forest (RF), which presents a high coefficient of determination (0.81). Moreover, a sensitivity analysis approach was used to extract human-understandable knowledge from the RF model, revealing the top three influential input attributes: the hospital episode type, the physical service where the patient is hospitalized, and the associated medical specialty. Such predictive and explanatory knowledge is valuable for supporting the decisions of hospital managers.
Download

Paper Nr: 201
Title:

Router Nodes Positioning for Wireless Networks Using Artificial Immune Systems

Authors:

P. H. G. Coelho, J. L. M. do Amaral, J. F. M. do Amaral, L. F. de A. Barreira and A. V. de Barros

Abstract: This paper proposes the positioning of intermediate router nodes using artificial immune systems for use in industrial wireless sensor networks. These nodes are responsible for the transmission of data from sensors to the gateway and must meet certain criteria, especially those that lead to a low failure rate and reduce the number of retransmissions by routers. These criteria can be enabled individually or in groups, combined with weights. Positioning is performed in two stages: the first uses elements of two types of immune networks, SSAIS (Self-Stabilising Artificial Immune System) and AINET (Artificial Immune Network), and the second uses potential fields to position the routers such that critical sensors attract them while obstacles and other routers repel them. Case studies are presented to illustrate the procedure.
Download

Short Papers
Paper Nr: 24
Title:

Possibilistic Interorganizational Workflow Net for the Recovery Problem Concerning Communication Failures

Authors:

Leiliane Pereira de Rezende, Stéphane Julia and Janette Cardoso

Abstract: In this paper, an approach based on interorganizational WorkFlow nets and on possibilistic Petri nets is proposed to deal with communication failures in business processes. The routing patterns and communication protocols existing in business processes are modeled by interorganizational WorkFlow nets. Possibilistic Petri nets, with uncertainty on the marking and on the transition firing, are considered to express more realistically the uncertainty attached to communication failures. Combining both formalisms, a kind of possibilistic interorganizational WorkFlow net is obtained. An example of a communication failure, at the monitoring level of the process that precedes the presentation of a paper at a conference, is presented.
Download

Paper Nr: 60
Title:

Distributed Knowledge Management Architecture and Rule Based Reasoning for Mobile Machine Operator Performance Assessment

Authors:

Petri Kannisto, David Hästbacka, Lauri Palmroth and Seppo Kuikka

Abstract: The performance of mobile machine operators has a great impact on productivity, which can be translated into, for example, wasted time or environmental concerns such as fuel consumption. In this paper, solutions for improving the assessment of mobile machine operators are studied. Usage data is gathered from machines and utilized to provide feedback for operators. The feedback is generated with rules that define in what way different measures indicate performance. The study contributes to developing an architecture to manage both data collection and inference rules. A prototype is created: rule knowledge is managed with decision tables from which machine-readable rules are generated. The rules are then distributed to application instances executed in various locations. The results of the prototype demonstrate several benefits. Rules can be maintained independently of the actual assessment application, and they can also be distributed from a centrally managed source. In addition, no IT expertise is required for rule maintenance, so the rule administrator can be a pure domain expert. The results bring the architecture towards a scalable cloud service that combines the benefits of both centralized knowledge and distributed data management.
Download

Paper Nr: 84
Title:

Machine Learning Techniques for Topic Spotting

Authors:

Nadia Shakir, Imran Sarwar Bajwa and Erum Iftikhar

Abstract: Automatically choosing topics for text documents that describe the document contents is a useful technique for text categorization. For example, queries sent on the web can use this technique to identify the query topic and accordingly forward the query to a small group of people. Similarly, online blogs can be categorized according to the topics they relate to. In this paper, we apply machine learning techniques to the problem of topic spotting. We use supervised learning techniques, which are highly dependent on the training data and the particular training algorithm used. Our approach differs from automatic text clustering, which uses unsupervised learning to cluster the text. Secondly, the topics are known in advance and come from an exhaustive list of words. The machine learning techniques we applied are 1) neural networks, 2) the Naïve Bayes classifier, 3) instance-based learning using k-nearest neighbours, and 4) the decision tree method. We used the Reuters-21578 text categorization dataset for our experiments.
Download
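Of the four techniques listed, instance-based learning is the simplest to sketch: classify a document by the majority topic among its k most similar training documents. The toy below uses cosine similarity over raw bag-of-words counts; it illustrates the general technique only, not the authors' Reuters-21578 setup, and the tiny corpus is invented.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def knn_topic(train, text, k=3):
    """Assign the majority topic among the k training documents most
    similar to `text`. `train` is a list of (text, topic) pairs."""
    query = Counter(text.lower().split())
    scored = sorted(train,
                    key=lambda d: cosine(Counter(d[0].lower().split()), query),
                    reverse=True)
    votes = Counter(topic for _, topic in scored[:k])
    return votes.most_common(1)[0][0]

# Tiny invented corpus in the spirit of Reuters topic labels.
train = [("wheat corn harvest", "grain"), ("crude oil barrel", "oil"),
         ("corn wheat price", "grain"), ("oil price barrel", "oil")]
```

A production system would add TF-IDF weighting and tokenization, but the nearest-neighbour vote is the essence of the instance-based approach.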

Paper Nr: 96
Title:

An Evolutionary Algorithm for Graph Planarisation by Vertex Deletion

Authors:

Rodrigo Lankaites Pinheiro, Ademir Aparecido Constantino, Candido F. X. de Mendonça and Dario Landa-Silva

Abstract: A non-planar graph can only be planarised if it is structurally modified. This work presents a new heuristic algorithm that uses vertex deletion to modify a non-planar graph in order to obtain a planar subgraph. The proposed algorithm aims to delete a minimum number of vertices to achieve its goal. The vertex deletion number of a graph G = (V, E) is the smallest integer k ≥ 0 such that there is an induced planar subgraph of G obtained by the removal of k vertices from G. Considering that the corresponding decision problem is NP-complete and that no approximation algorithm for graph planarisation by vertex deletion exists, this work proposes an evolutionary algorithm that uses a constructive heuristic algorithm to planarise a graph. This constructive heuristic has time complexity O(n + m), where n = |V| and m = |E|, and is based on the PQ-tree data structure and on the vertex deletion operation. The algorithm's performance is verified by means of case studies.
Download
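A crude way to see the vertex-deletion idea is to greedily remove high-degree vertices until a planarity criterion holds. The sketch below uses only Euler's necessary bound |E| <= 3|V| - 6 as that criterion, which is a stand-in assumption: the paper's constructive heuristic relies on a proper PQ-tree-based planarity test, which this bound does not replace (a graph can satisfy the bound and still be non-planar).

```python
def greedy_planarise(vertices, edges):
    """Greedily delete the highest-degree vertex until the remaining
    subgraph satisfies Euler's necessary planarity bound |E| <= 3|V| - 6.
    Returns the deleted vertices and the surviving subgraph."""
    vertices, edges = set(vertices), set(edges)
    deleted = []
    while len(vertices) >= 3 and len(edges) > 3 * len(vertices) - 6:
        degree = {v: 0 for v in vertices}
        for u, w in edges:
            degree[u] += 1
            degree[w] += 1
        victim = max(vertices, key=lambda v: degree[v])
        vertices.remove(victim)
        edges = {(u, w) for u, w in edges if victim not in (u, w)}
        deleted.append(victim)
    return deleted, vertices, edges

# K5 is the smallest non-planar graph: one deletion leaves planar K4.
k5_edges = [(i, j) for i in range(5) for j in range(i + 1, 5)]
deleted, kept, remaining = greedy_planarise(range(5), k5_edges)
```

On K5 the bound fails (10 > 9), one vertex is removed, and the surviving K4 both satisfies the bound and is genuinely planar.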

Paper Nr: 107
Title:

Evaluating Artificial Neural Networks and Traditional Approaches for Risk Analysis in Software Project Management - A Case Study with PERIL Dataset

Authors:

Carlos Timoteo, Meuser Valença and Sérgio Fernandes

Abstract: Many software projects end in failure. Risk analysis is an essential process to support project success. There is a growing need for systematic methods to supplement expert judgment in order to increase the accuracy of predictions of risk likelihood and impact. In this paper, we evaluated support vector machines (SVM), multilayer perceptrons (MLP), a linear regression model, and Monte Carlo simulation to perform risk analysis based on PERIL data. We conducted a statistical experiment to determine which method is more accurate for risk impact estimation. Our experimental results showed that the artificial neural network methods proposed in this study outperformed both linear regression and Monte Carlo simulation.
Download

Paper Nr: 132
Title:

Multi-objective Optimization of Investment Strategies - Based on Evolutionary Computation Techniques, in Volatile Environments

Authors:

Jose Matias Pinto, Rui Ferreira Neves and Nuno Horta

Abstract: In this document, a multi-objective evolutionary system is proposed to optimize an investment strategy, based on the use of Moving Averages, for stock markets, able to yield high returns at minimal risk. Fair and established metrics are used to evaluate both the return and the risk of the optimized strategies. The Pareto Fronts obtained with the training data during the experiments outperform both the B&H strategy and the classical approaches that consider solely the absolute return. Additionally, the Pareto Fronts obtained show the inherent trade-off between risk and return. The experimental results are evaluated using data from the principal world markets, namely the main stock indexes of the most developed economies, such as NASDAQ, S&P500, FTSE100, DAX30 and NIKKEI225. However, the experimental results suggest that the positive connection between the gains on training and testing data, usually assumed in single-objective proposals, does not necessarily hold in all cases.
Download

Paper Nr: 238
Title:

Knowledge-based System for Urinalysis

Authors:

Fabrício Henrique Rodrigues, José Antônio Tesser Poloni, Cecília Dias Flores and Liane Nanci Rotta

Abstract: Urinalysis is a very important test in laboratory medicine, providing valuable information about metabolism, the kidneys, and the urinary tract. For several reasons, including a lack of professional qualification, it does not receive the proper attention, which prevents it from reaching its full potential. In this context, a knowledge-based system for decision support in urinalysis could help to change this situation, being useful for professional training, decision support during the process, or even the automation of the test. This paper proposes the development of such a system, employing ontologies, Bayesian networks, and templates of cognitive tasks to handle domain knowledge. Urinalysis is then briefly discussed and the system architecture is presented, as well as the current state of the work and future steps.
Download

Paper Nr: 303
Title:

Service Level Agreement Constraints into Processes for Document Classification

Authors:

Marco Bianchi, Mauro Draoli, Francesca Fallucchi and Alessandro Ligi

Abstract: This position paper supports research activities aimed at defining feasible processes for suppliers providing document classification services with a guaranteed classification quality. Since, in the presence of a Service Level Agreement (SLA) between suppliers and customers, missing an SLA target may lead to contract termination and/or financial penalties, we assume a supplier is ready to adopt any economically viable classification strategy that helps to meet SLA targets. The purpose of this paper is to stimulate research activities in a scenario where neither a manual nor a fully automatic classification process is feasible. To reach this goal we introduce a real case study and point out some issues we believe should be addressed to define realistic solutions.

Paper Nr: 21
Title:

Artificial Intelligence - Applications on Bioinformatics and Textile Industry

Authors:

H. İbrahim Çelik, M. T. Daş, L. C. Dülger and M. Topalbekiroğlu

Abstract: AI techniques have been successfully used in many fields of engineering. A brief description of possible applications of AI in engineering is given, together with future prospects. This study reviews two different experimental systems, from bioinformatics and textile engineering. The experimental systems are described. Different databases are used, and their implementation results are presented using AI methods such as ANN and PSO-NN. Implementation accuracies for these cases are given in tables.
Download

Paper Nr: 87
Title:

Fuzzy DEMATEL Model for Evaluation Criteria of Business Intelligence

Authors:

Saeed Rouhani, Amir Ashrafi and Samira Afshari

Abstract: In response to an ever increasing competitive environment, today’s organizations intend to utilize business intelligence (BI) in order to improve their decision support. In other words, BI capabilities would be essential for evaluating enterprise systems. Hence, the key factors for evaluating the intelligence level of enterprise systems have been determined in past studies. Further, in this research, the causal relationships between the criteria of each factor have been obtained to construct an impact-relation map. To this aim, this study presents a new hybrid approach combining fuzzy set theory and the decision making trial and evaluation laboratory (DEMATEL) method. This study considered six main factors for the evaluation of BI for enterprise systems: analytical and intelligent decision-support; providing related experiment and integration with environmental information; optimization and recommended model; reasoning; enhanced decision-making tools; and stakeholders’ satisfaction; and determined the root or cause criteria in each factor. In general, the outcomes of this study can be used as a basis for a roadmap of differentiation of BI capabilities in the form of evaluation criteria. It can also provide an effective and useful model by separating criteria into a cause group and an effect group in an uncertain environment.
Download

Paper Nr: 170
Title:

Evolutionary Algorithms Applied to Agribusiness Scheduling Problem

Authors:

Andre Noel, José Magon Jr. and Ademir Aparecido Constantino

Abstract: This paper addresses a scheduling problem in an agribusiness context, specifically chicken catching. To solve this problem, memetic algorithms combined with local search in a two-phase algorithm are proposed and investigated. Four versions of memetic algorithms were implemented and compared. Also, for local search, k-swap and SRP are proposed and evaluated. Finally, we analyze the results, comparing performances. The obtained results show a good improvement in solutions, especially when compared to the manual scheduling currently performed by the company that provided the data for this study.
Download

Paper Nr: 202
Title:

Norm-based Behavior Modification in Reflex Agents - An Implementation in JAMDER 2.0

Authors:

Francisco I. S. Cruz, Robert M. Rocha Jr, Emmanuel S. S. Freire and Mariela I. Cortés

Abstract: Agent-oriented development is becoming more frequent in industry and academia, and more works are contributing to the growth of this area. Many frameworks support the development of Normative Multi-agent Systems. However, few works deal with the impact of norms on the individual behavior of the agent, and, like many others, the JAMDER 2.0 framework shares this limitation. This paper discusses the modification of the behavior of a simple reactive agent based on the impact caused by norms on the JAMDER 2.0 platform. This work contributes to the extension of this framework, re-establishing the dynamism present in its first version and adding support for changing the behavior of a simple reactive agent. In addition, new features have been included in the framework, among them an agent that is able to monitor the actions of a set of agents, evaluating them according to the norms and applying appropriate sanctions to these agents, if available. To illustrate the extension, the vacuum cleaner world was implemented using the extended JAMDER 2.0.
Download

Paper Nr: 203
Title:

A Problem-solving Agent to Test Rational Agents - A Case Study with Reactive Agents

Authors:

Francisca Raquel de V. Silveira, Gustavo Augusto L. de Campos and Mariela I. Cortés

Abstract: Software agents are a promising technology for the development of complex systems, although few testing techniques have been proposed to validate these systems. In this paper, we propose an agent-based approach to select test cases and test the performance of rational agents. Interactions between the agent and the environment are performed in order to evaluate the agent's performance for each test case. As a result, we obtain a set of test cases in which the agent was not well evaluated. Based on this result, the approach identifies the goals that are not met by the agent and reports them to the designer.
Download

Paper Nr: 242
Title:

The Use of Genetic Algorithms in Mobile Applications

Authors:

Plechawska-Wojcik Malgorzata

Abstract: The goal of the paper is to present the application of a genetic algorithm in practice. The main result is a mechanism based on a genetic algorithm applied in an application dedicated to tourists. The goal of the mechanism is to propose the most effective route between points of interest - tourist facilities. These objects are also chosen automatically based on the user’s interests as well as on his and his friends’ opinions expressed via the social networking service Facebook. The genetic algorithm was implemented to obtain an efficient way of solving the problem of matching the appropriate route with respect to time and location requirements. The results are obtained in a short time by the genetic algorithm running on the web server. The paper also presents the results of application and mechanism testing, including performance testing.
Download

Paper Nr: 262
Title:

A Multiagent-based Framework for Solving Computationally Intensive Problems on Heterogeneous Architectures - Bioinformatics Algorithms as a Case Study

Authors:

H. M. Faheem and B. König-Ries

Abstract: The exponential increase in the amount of data available in several domains and the need to process such data makes problems computationally intensive. Consequently, it is infeasible to carry out sequential analysis, hence the need for parallel processing. Over the last few years, the widespread deployment of multicore architectures, accelerators, grids, clusters, and other powerful architectures such as FPGAs and ASICs has encouraged researchers to write parallel algorithms using available parallel computing paradigms to solve such problems. The major challenge now is to take advantage of these architectures irrespective of their heterogeneity, because designing an execution model that can unify all computing resources is still very difficult. Moreover, scheduling tasks to run efficiently on heterogeneous architectures still needs a lot of research. Existing solutions tend to focus on individual architectures or deal with heterogeneity among CPUs and GPUs only, but in reality heterogeneous systems are common. Up to now, very cumbersome manual adaptation is required to take advantage of these heterogeneous architectures. The aim of this paper is to provide a proposal for a functional-level design of a multiagent-based framework that deals with the heterogeneity of the hardware architectures and parallel computing paradigms deployed to solve those problems. Bioinformatics is selected as a case study.
Download

Paper Nr: 293
Title:

Self-organizing Contents

Authors:

Agostino Forestiero

Abstract: To deliver static content, Content Delivery Networks (CDNs) are an effective solution, but one that shows its limits in large, dynamic systems due to its centralized approach. Decentralized algorithms and protocols can be usefully employed to tackle this weakness. A biologically inspired algorithm to organize the contents of a Content Delivery Network is proposed in this paper. Mobile, bio-inspired agents move and logically reorganize the metadata that describe the contents in order to improve discovery operations. Experimental results confirm the efficacy of the self-organizing, decentralized algorithm.
Download

Paper Nr: 298
Title:

Situation Modeling and Visual Analytics for Decision Support in Sports

Authors:

Anders Dahlbom and Maria Riveiro

Abstract: High performance is a goal in most sporting activities, for elite athletes as well as for recreational practitioners, and the process of measuring, evaluating and improving performance is one fundamental reason why people engage in sports. This is a complex process which may involve analyzing large amounts of heterogeneous data in order to apply actions that change important properties for an improved outcome. Computer-based decision support systems for data analysis aimed at performance improvement are scarce. In this position paper we briefly review the literature, and we propose the use of information fusion, situation modeling and visual analytics as suitable tools for supporting decision makers, ranging from recreational to elite, in the process of performance evaluation.
Download

Area 3 - Information Systems Analysis and Specification

Full Papers
Paper Nr: 29
Title:

A Systematic Review on Performance Evaluation of Aspect-Oriented Programming Techniques used to Implement Crosscutting Concerns

Authors:

Rodrigo F. G. da Silva, Marcelo A. Maia and Michel S. Soares

Abstract: Aspect-Oriented Programming (AOP) was proposed with the main objective of addressing an important software quality principle: modularization. The basic idea of the paradigm is to capture crosscutting concerns in a programming abstraction called an aspect. Since the introduction of aspects as a complement to object-oriented programming, many evaluations and empirical studies of the new paradigm have been carried out, including the application of a variety of software metrics in order to provide evidence of the benefits or problems of the new paradigm. There is no consensus about the impact on performance of using AOP techniques to deal with crosscutting concerns. The use of AOP to implement crosscutting concerns and its impact on performance is the motivation for this study. This paper explores the evaluation of performance further by proposing a systematic literature review with the purpose of finding out how performance is affected by the introduction of aspects. The result of this systematic review is that there have been few studies in the scientific literature concerning AOP and performance, and most of these studies are too specific, and sometimes even inconclusive. This article presents these miscellaneous results and how they were extracted from the literature.
Download

Paper Nr: 66
Title:

Assisted Tasks to Generate Pre-prototypes for Web Information Systems

Authors:

Fábio P. Basso, Raquel M. Pillat, Rafael Z. Frantz and Fabricia Roos-Frantz

Abstract: Pre-prototypes are models represented at different abstraction levels that can be validated in preliminary software process phases. So far, these pre-prototypes have been designed by experienced modellers, requiring weeks of work to specify all the details needed before generating source code and, finally, getting feedback from clients in acceptance tests. This paper presents a new methodology to develop web information systems through pre-prototypes. This methodology aims at helping designers with little modelling experience by allowing them to quickly produce detailed pre-prototypes, which are used as input for model transformations that generate working application pieces. As a means of validation, we report on a case study conducted in industry and discuss the shortcomings and benefits of our methodology.
Download

Paper Nr: 77
Title:

Using Artificial Intelligence Techniques to Enhance Traceability Links

Authors:

André Di Thommazo, Rafael Rovina, Thiago Ribeiro, Guilherme Olivatto, Elis Hernandes, Vera Werneck and Sandra Fabbri

Abstract: One of the most commonly used ways to represent requirements traceability is the requirements traceability matrix (RTM). The difficulty of creating it manually motivates investigation into alternatives for generating it automatically. This article presents two approaches to automatically creating the RTM using artificial intelligence techniques: RTM-Fuzzy, based on fuzzy logic, and RTM-N, based on neural networks. They combine two other approaches, one based on functional requirements entry data (RTM-E) and the other based on natural language processing (RTM-NLP). The RTMs were evaluated through an experimental study and the approaches were improved using a genetic algorithm and a decision tree. On average, the approaches that used fuzzy logic and neural networks to combine RTM-E and RTM-NLP had better results than RTM-E and RTM-NLP singly. The results show that artificial intelligence techniques can enhance effectiveness in determining requirements traceability links.
Download

Paper Nr: 110
Title:

Code Inspection Supported by Stepwise Abstraction and Visualization - An Experimental Study

Authors:

Anderson Belgamo, Elis Montoro Hernandes, Augusto Zamboni, Rafael Rovina and Sandra Fabbri

Abstract: Background: In order to inspect source code effectively and efficiently, in a previous work the use of visualization for supporting the reading technique Stepwise Abstraction was proposed and implemented in the CRISTA tool. Visualization aids code comprehension, which is an essential task for a successful inspection. Goal: The objective of this paper is to evaluate the effectiveness and efficiency of using Stepwise Abstraction supported by visualization for defect detection, in comparison to an ad-hoc approach. Method: A controlled experiment was conducted with two groups of undergraduate students. One group inspected the Java source code of the Paint software using the approach implemented in CRISTA and the other group inspected the code using an ad-hoc approach. Results: The general performance of the subjects who used Stepwise Abstraction supported by visualization was better than that of the subjects who used the ad-hoc approach. Moreover, the subjects’ experience in inspection and Java did not influence the identification of defects. Conclusion: the results reveal that the use of Stepwise Abstraction and visualization promotes better performance in detecting defects than the ad-hoc approach. In future work, other approaches are being investigated, as well as the support of the approaches for different types of defects.
Download

Paper Nr: 125
Title:

Automatic Removal of Buffer Overflow Vulnerabilities in C/C++ Programs

Authors:

Sun Ding, Hee Beng Kuan Tan and Hongyu Zhang

Abstract: Buffer overflow vulnerability is one of the most commonly found significant security vulnerabilities. This vulnerability may occur if a program does not sufficiently prevent input from exceeding the intended size or accessing unintended memory locations. Researchers have put effort into different directions to address this vulnerability, including creating run-time defence mechanisms, proposing effective detection methods, and automatically modifying the original program to remove the vulnerabilities. These techniques share many commonalities but also have differences. In this paper, we characterize buffer overflow vulnerability in the form of four patterns and propose ABOR, a framework that integrates, extends and generalizes existing techniques to remove buffer overflow vulnerabilities more effectively and accurately. ABOR only patches identified code segments; thus it is an optimized solution that can eliminate buffer overflows while keeping runtime overhead to a minimum. We have implemented the proposed approach and evaluated it through experiments on a set of benchmarks and three industrial C/C++ applications. The experimental results demonstrate ABOR's effectiveness in practice.
Download

Paper Nr: 168
Title:

Evaluating the Effort for Modularizing Multiple-Domain Frameworks Towards Framework Product Lines with Aspect-oriented Programming and Model-driven Development

Authors:

Victor Hugo Santiago C. Pinto, Rafael S. Durelli, André L. Oliveira and Valter V. de Camargo

Abstract: Multiple-Domain Frameworks (MDFs) are frameworks that unintentionally involve variabilities from several domains and present two main problems: i) useless variabilities in the final releases and ii) architectural inflexibility. One alternative for solving these problems is to convert them into Framework Product Lines (FPLs). An FPL is a product line whose members are frameworks rather than complete applications. The most important characteristic of FPLs is the possibility of creating members (frameworks) holding just the desired variabilities. However, the process of converting an MDF into an FPL is very time-consuming, and the choice of the most suitable technique may significantly improve productivity. The main focus of this paper is an experiment that evaluates two techniques that are usually considered for dealing with features: model-driven development and aspect-oriented programming. Our experiment compared the effort of converting an MDF called GRENJ into an FPL called GRENJ-FPL. The results showed significant differences regarding the time spent and the occurrence of errors with the two techniques.
Download

Paper Nr: 186
Title:

Improving Business Processes Through Mobile Apps - An Analysis Framework to Identify Value-added App Usage Scenarios

Authors:

Eva Hoos, Christoph Gröger, Stefan Kramer and Bernhard Mitschang

Abstract: Mobile apps offer new possibilities to improve business processes. However, the introduction of mobile apps is typically carried out from a technology point of view. Hence, process improvement from a business point of view is not guaranteed. There is a methodological gap regarding the holistic analysis of business processes with respect to mobile technology. For this purpose, we present an analysis framework comprising a systematic methodology to identify value-added usage scenarios of mobile technology in business processes, with a special focus on mobile apps. The framework is based on multi-criteria analysis and portfolio analysis techniques, and it is evaluated in a case-oriented investigation in the automotive industry.
Download

Paper Nr: 204
Title:

Aspect-Oriented Requirements Engineering - A Systematic Mapping

Authors:

Paulo Afonso Parreira Junior and Rosângela Aparecida Dellosso Penteado

Abstract: Background: Aspect-Oriented Requirements Engineering (AORE) is a research field that provides appropriate strategies for the identification, modularization and composition of crosscutting concerns. Several AORE approaches have been developed recently, with different features, strengths and limitations. Goals: the aim of this paper is threefold: i) cataloguing existing AORE approaches based on the activities they encompass; ii) describing what types of techniques have been used for concern identification and classification, a bottleneck activity; and iii) identifying the most used means of publication of AORE-based studies and how these studies have progressed over the years. Results: we selected and analyzed 60 papers and, among them, identified 38 distinct AORE approaches. Some interesting results were: i) few approaches address Conflict Identification and Resolution, an activity responsible for discovering and treating the mutual influence between the different concerns existing in a software system; ii) most of the 60 studies present new AORE approaches or extensions of previous approaches; therefore, there is a lack of evaluation studies on already existing approaches; iii) few studies have been published in journals, which may be a consequence of item (ii).
Download

Paper Nr: 236
Title:

Quality of Requirements Specifications - A Framework for Automatic Validation of Requirements

Authors:

Alberto Rodrigues da Silva

Abstract: Requirements specifications describe multiple technical concerns of a system and are used throughout the project life-cycle to help share a common understanding among the stakeholders. Although a lot of attention has been given to managing the requirements lifecycle, which has resulted in numerous tools and techniques becoming available, little work has been done to address the quality of requirements specifications. Most of this work still depends on human-intensive tasks performed by domain experts that are time-consuming and error prone, with negative consequences for the success of the project. This paper proposes an automatic validation approach that, with proper tool support, can help to mitigate some of these limitations and therefore increase the quality of requirements specifications, in particular regarding consistency, completeness, and unambiguousness.
Download

Short Papers
Paper Nr: 3
Title:

Empirical Validation of Product-line Architecture Extensibility Metrics

Authors:

Edson Oliveira Jr. and Itana M. S. Gimenes

Abstract: The software product line (SPL) approach has been applied as a successful software reuse technique for specific domains. The SPL architecture (PLA) is one of the most important SPL core assets, as it is the abstraction of the products that can be generated and represents the similarities and variabilities of an SPL. The analysis and evaluation of its quality attributes can serve as a basis for analyzing the managerial and economic value of an SPL. This analysis can be quantitatively supported by metrics. Thus, we proposed metrics for the PLA extensibility quality attribute. This paper is concerned with the empirical validation of these metrics. As a result of the experimental work, we provide evidence, by presenting a correlation analysis, that the proposed metrics serve as relevant indicators of PLA extensibility.
Download

Paper Nr: 14
Title:

Study on Combining Model-driven Engineering and Scrum to Produce Web Information Systems

Authors:

Fábio P. Basso, Raquel M. Pillat, Fabricia Roos-Frantz and Rafael Z. Frantz

Abstract: Model-driven engineering and agile methods are two important approaches to producing web information systems. However, whereas model-driven engineering is based on widely detailed models, agile methods such as Scrum propose not to spend too much time on modelling. The model-driven engineering literature suggests the use of pre-prototype models that can be evaluated by clients before generating source code, and agile methods also propose to get client feedback soon after requirements are specified as user stories. Although agile methods and pre-prototypes both aim to quickly validate requirements, their combined use must be carefully studied. The quick design of pre-prototypes must be considered in order to achieve the benefits provided by both approaches. In this paper we propose a new pre-prototype-based methodology, which combines practices from model-driven engineering and Scrum-based agile methods to achieve quick feedback from clients. We also report on a real-world case study concerning the development of a web information system.
Download

Paper Nr: 34
Title:

Implementing Novel IT Products in Small Size Organizations - Technology-driven Requirements Engineering

Authors:

Jolita Ralyté and Laurent Biggel

Abstract: The growing popularity of new mobile information technology products leads enterprises to buy them without an appropriate evaluation of their usability. This situation is relatively new, and conventional usage-driven requirements engineering approaches are not well suited to it. The objective of our work is to propose a technology-driven requirements engineering approach in which requirements are discovered not only from business needs but also from product features and properties. In particular we take into consideration small organizations with a small number of business roles and activities.
Download

Paper Nr: 43
Title:

Knowledge-based Design Cost Estimation Through Extending Industry Foundation Classes

Authors:

Shen Xu, Kecheng Liu and Weizi Li

Abstract: In order to overcome the divergence of estimates produced from the same data, the proposed costing process adopts an integrated information system design, designing the process knowledge and the costing system together. By employing and extending a widely used international standard, the Industry Foundation Classes, the system can provide an integrated process which can harvest information and knowledge from current quantity surveying practice regarding costing methods and data. Knowledge of quantification is encoded from the literature, a motivating case and standards. It can reduce the time consumption of current manual practice. Further development will represent the pricing process in a different type of knowledge representation. The hybrid types of knowledge representation can produce a reliable estimate for a construction project. In practical terms, the knowledge management of quantity surveying can improve construction estimation systems. The theoretical significance of this study lies in the fact that its content and conclusions make it possible to develop an automatic estimation system based on a hybrid knowledge representation approach.
Download

Paper Nr: 57
Title:

Granular Cognitive Map Reconstruction - Adjusting Granularity Parameters

Authors:

Wladyslaw Homenda, Agnieszka Jastrzebska and Witold Pedrycz

Abstract: The objective of this paper is to present a methodology for Granular Cognitive Map reconstruction. Granular Cognitive Maps model complex, imprecise systems. With a proper adjustment of granularity parameters, a Granular Cognitive Map can represent a given system with a good balance between generality and specificity of the description. The proposed approach takes advantage of the granular information representation model. The objective of the optimization is to readjust granularity parameters in order to increase the coverage of targets by map responses. In this way we take full advantage of the granular information representation model and produce a better, more accurate map which maintains exactly the same balance between generality and specificity. The proposed methodology reconstructs a Granular Cognitive Map without losing its specificity. The approach is applied in a series of experiments that allow evaluating the quality of the reconstructed maps.
Download

Paper Nr: 65
Title:

Model-driven Structural Design of Software-intensive Systems Using SysML Blocks and UML Classes

Authors:

Marcel da Silva Melo and Michel S. Soares

Abstract: One particular characteristic of software-intensive systems is that software is a fundamental component alongside other components. For the software design counterpart, for both structural and dynamic views, UML is one of the most used modeling languages. However, UML is weak at modeling elements of a software-intensive system that are not software. This is the main reason why the Systems Modeling Language (SysML), a UML profile, was introduced by the OMG. One objective of this article is to combine the SysML Block diagram and the UML Class diagram to design the structural view of a software-intensive system architecture. A meta-model describing the relationship between the two diagrams and an automatic model-driven transformation using the ATL language are proposed. The evaluation was performed by applying the meta-model in practice to develop software-intensive systems in the field of road traffic management, as shown in the case study.
Download

Paper Nr: 79
Title:

Using Visualization and Text Mining to Improve Qualitative Analysis

Authors:

Elis Montoro Hernandes, Emanuel Teodoro, Andre Di Thommazo and Sandra Fabbri

Abstract: Context: Qualitative analysis is a scientific way to deeply understand qualitative data and to aid in its analysis. However, qualitative analysis is a laborious, time-consuming and subjective process. Aim: The authors propose the use of visualization and text mining to improve the qualitative analysis process. The objective of this paper is to explain how the use of visualization can support Coding in multiple documents simultaneously, which may allow code standardization, thus making the process more efficient. Method: The Insight tool is being developed to make the proposal feasible, and a feasibility study was performed to verify whether the proposal benefits the process and improves its results. Results: The study shows that the subjects who applied the proposal produced more standardized codes and were more efficient than those who applied the process manually. Conclusions: The results derived from the use of visualization and text mining, even in a feasibility study, encourage proceeding with the project, which aims to combine both techniques to obtain more benefits in conducting qualitative analysis.
Download

Paper Nr: 121
Title:

Verification and Validation Activities for Embedded Systems - A Feasibility Study on a Reading Technique for SysML Models

Authors:

Erik Aceiro Antonio, Rafael Rovina and Sandra C. P. F. Fabbri

Abstract: Embedded Systems play an important role in today's interconnected world. However, there is a gap in relation to Verification and Validation (V&V) activities for Embedded Systems, particularly when they are designed with SysML models. Hence, the objective of this paper is to present a feasibility study on a Reading Technique for detecting defects in SysML models. This technique is part of a family of reading techniques for inspecting Requirement Diagrams and State Machine Diagrams, which are SysML models designed along the SYSMOD development process. The definition of these techniques required the establishment of a defect taxonomy, which was based on three sources: i) the UL-98 and DO-178C certification standards for embedded systems; ii) Failure Mode and Effects Analysis (FMEA); and iii) the syntactic and semantic elements available in the formalism of the SysML language. A feasibility study was carried out to evaluate the effectiveness and efficiency of one of the techniques. Of a total of 26 subjects, 50% found an average of 72% of the defects and spent an average of 48 minutes.
Download

Paper Nr: 133
Title:

Towards the Effectiveness of the SMarty Approach for Variability Management at Sequence Diagram Level

Authors:

Anderson Marcolino, Edson Oliveira Jr and Itana Gimenes

Abstract: SMarty is a variability management approach for UML-based software product lines. It allows the identification, delimitation and representation of variabilities in several UML models by means of a UML profile, the SMartyProfile, and a systematic process with guidelines that direct users in applying such a profile. SMarty, in its first versions, did not support sequence models. In recent studies, SMarty was extended to support this type of UML model. Existing UML-based variability management approaches in the literature, including SMarty, do not provide empirical evidence of their effectiveness, which is an essential requirement for technology transfer to industry. Therefore, this paper presents empirical evidence on the effectiveness of SMarty's recent extension to UML sequence-level models.
Download

Paper Nr: 142
Title:

Scoping Customer Relationship Management Strategy in HEI - Understanding Steps towards Alignment of Customer and Management Needs

Authors:

B. Khashab, S. R. Gulliver, A. Alsoud and M. Kyritsis

Abstract: Higher Education Institutions (HEI) are complex organisations, offering a wide range of services, which involve a multiplicity of customers, stakeholders and service providers, both in terms of type and number. Satisfying a diverse set of customer groups is complex, and requires the development of strategic Customer Relationship Management (CRM). This paper contributes to the HEI area by proposing an approach that scopes CRM strategy, allowing a better understanding of CRM implementation in Higher Education Institutions and maximising the alignment of customer and management desires, expectations and needs.
Download

Paper Nr: 152
Title:

Extraction of Classes Through the Application of Formal Concept Analysis

Authors:

Decius Pereira, Luis Zárate and Mark Song

Abstract: Designing the class hierarchy is one of the most important activities in object-oriented software development. Designing classes and their hierarchy is a difficult task, especially when an extensive and complex model is sought. Some problems are difficult to understand even when modeled using a methodology. The precise construction of a class hierarchy requires a deep understanding of the problem and a correct identification of attributes and methods, their similarities, dependencies and specializations. An inaccurate or incomplete class hierarchy leads to defects in the software, making it difficult to maintain or correct. Formal Concept Analysis provides a theory that enables restructuring a hierarchy of classes to accomplish the maximum factoring of classes while preserving specialization relationships. This paper presents an approach to the application of Formal Concept Analysis theory in class factoring to simplify the design stages of new classes. A framework was developed to support experiments.
Download

Paper Nr: 175
Title:

Managing Distributed Software Development with Performance Measures

Authors:

Guilherme Sperandio dos Santos, Renato Balancieri, Gislaine Camila L. Leal, Elisa Hatsue M. Huzita and Edwin Cardoza

Abstract: Distributed Software Development (DSD) has been increasingly adopted because it provides advantages over traditional software development. However, this approach presents some challenges, such as communication difficulties, cultural differences among those involved and low proximity among developers. This paper presents a set of performance measures for management through five perspectives: financial, customer, internal processes, learning and growth, and geographical dispersion, based on the Balanced Scorecard (BSC). The fifth perspective, geographical dispersion, has been proposed as an extension of the BSC system for DSD projects. The performance perspectives aim to measure and to support the decision-making process of stakeholders through metrics related to the attributes of quality, productivity, cost, time and geographic dispersion, which are fundamental in software project management. Thus, the performance measures are a mechanism to evaluate the return on financial investment, the satisfaction of customers and employees, the performance of the processes running in the DSD, the continuous improvement of the organization and the success of the geographical dispersion.
Download

Paper Nr: 191
Title:

Towards Formal Foundations for BORM ORD Validation and Simulation

Authors:

Martin Podloucký and Robert Pergl

Abstract: Business Object Relation Modelling (BORM) is a method for systems analysis and design that utilises an object-oriented paradigm in combination with business process modelling. BORM’s Object Relation Diagram (ORD) is successfully used in practice for object behaviour analysis (OBA). We, however, identified several flaws in the diagram’s behaviour semantics. These occur mostly due to an inconsistent and incomplete formal specification of the ORD behaviour. In this paper, we address this gap by introducing so-called input and output conditions, which we consider to be the most important first step towards a sound formal specification of the ORD.
Download

Paper Nr: 197
Title:

SaaS Usage Information for Requirements Maintenance

Authors:

Ana Garcia and Ana C. R. Paiva

Abstract: Incorrect requirements elicitation, together with requirements changes and evolution during the project lifetime, are the main causes pointed out for the failure of software projects. Requirements in the context of Software as a Service (SaaS) are in constant change and evolution, which makes the attention given to Requirements Engineering (RE) even more critical. The dynamic context evolution due to new stakeholder needs brings additional challenges to RE, such as the need to review the prioritization of requirements and to manage their changes relative to their baseline. It is important to apply methodologies and techniques for requirements change management to allow a flexible development of SaaS and to ensure their timely adaptation to change. However, the existing techniques and solutions can take so long to implement that they become ineffective. In this work, a new methodology to manage functional requirements is proposed. This new methodology is based on collecting and analysing information about the usage of the service to extract the pages visited, execution traces and most used functionalities. The analysis performed will allow reviewing the existing requirements, proposing recommendations based on quality concerns and improving service usability, with the ultimate goal of increasing the software lifetime.
Download
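As a minimal illustration of the usage-collection idea described above (not the paper's actual tool), ranking functionalities by how often they appear in a service log can be sketched as follows; the log lines and page names are invented:

```python
# Hypothetical sketch: count how often each functionality (here, a page
# path) appears in a service usage log, so rarely used requirements can
# be flagged for review. The log entries are invented for illustration.
from collections import Counter

log = [
    "user1 /reports/export",
    "user2 /invoices/new",
    "user1 /invoices/new",
    "user3 /invoices/new",
    "user2 /reports/export",
    "user3 /settings/theme",
]

usage = Counter(line.split()[1] for line in log)
for page, hits in usage.most_common():
    print(page, hits)   # most used functionality first
```

Pages at the bottom of such a ranking would be candidates for requirement review or deprecation under the methodology's quality concerns.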

Paper Nr: 198
Title:

A Set of Practices for Distributed Pair Programming

Authors:

Bernardo José da Silva Estácio and Rafael Prikladnicki

Abstract: Geographically distributed teams have adopted agile practices as a work strategy. One of these practices is Distributed Pair Programming (DPP), which consists of two developers working remotely on the same design, algorithm, or code. In this paper, we describe a set of practices for DPP. In our research we seek to understand how distributed teams can use and adopt DPP in a more effective way. Based on a systematic literature review and a field study, we suggest twelve practices that can help both professionals and software organizations in the practice of DPP.
Download

Paper Nr: 208
Title:

An Automated Approach of Test Case Generation for Concurrent Systems from Requirements Descriptions

Authors:

Edgar Sarmiento, Julio C. S. P. Leite, Noemi Rodriguez and Arndt von Staa

Abstract: Concurrent applications are frequently developed; however, there are no systematic approaches for testing them from requirements descriptions. Methods for sequential applications are inadequate to validate the reliability of concurrent applications, and they are expensive and time consuming. Thus, it is desirable that test cases be generated automatically from requirements descriptions. This paper proposes an automated approach to generate test cases for concurrent applications from requirements descriptions. The Scenario language is the representation used for these descriptions. A scenario describes specific situations of the application through a sequence of episodes; episodes execute tasks, and some tasks can be executed concurrently. These descriptions reference relevant words or phrases (shared resources), the lexicon of an application. In this process, a directed graph is derived for each scenario, and this graph is represented as a UML activity diagram. Because of the multiple interactions among concurrent tasks, test scenario explosion becomes a major problem. This explosion is controlled by adopting the interaction sequences and exclusive paths strategies. The feasibility of the proposed approach is demonstrated through two case studies.
Download

Paper Nr: 223
Title:

Flexible Peak Shaving in Data Center by Suppression of Application Resource Usage

Authors:

Masaki Samejima, Ha Tuan Minh and Norihisa Komoda

Abstract: We address the peak shaving of electricity consumption in the data center. The conventional peak shaving method is “power capping”, which limits the electricity consumption of all the applications in the server. In order to shave the peak of only the unimportant applications, we propose flexible peak shaving by suppression of application resource usage. By monitoring the resource usage of all the applications, the proposed method decides how much the electricity consumption should be decreased, using multiple regression analysis on a linear model between the electricity consumption and the CPU usage. As a preliminary investigation, we constructed the linear model using the observed values of the power consumption and CPU usage on actual servers.
Download
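The regression step described in the abstract can be sketched as a simple least-squares fit of a linear power-versus-CPU model; the readings, coefficients and power budget below are invented for illustration, not the paper's measurements:

```python
# Hypothetical sketch: fit power = a * cpu + b by ordinary least squares,
# then invert the model to find the CPU cap that meets a power budget.
# The sample readings are invented for illustration.
cpu = [10.0, 25.0, 40.0, 60.0, 80.0]         # CPU usage (%)
power = [120.0, 150.0, 180.0, 220.0, 260.0]  # measured power (W)

n = len(cpu)
mean_c, mean_p = sum(cpu) / n, sum(power) / n
a = sum((c - mean_c) * (p - mean_p) for c, p in zip(cpu, power)) / \
    sum((c - mean_c) ** 2 for c in cpu)
b = mean_p - a * mean_c

def cpu_cap_for_budget(budget_w):
    """CPU usage level at which predicted power equals the budget."""
    return (budget_w - b) / a

print(f"power = {a:.2f} * cpu + {b:.2f}")          # power = 2.00 * cpu + 100.00
print(f"cap = {cpu_cap_for_budget(200.0):.1f} %")  # cap = 50.0 %
```

Inverting the fitted model as above gives the resource-usage limit for a target power level, which is the basic mechanism a suppression-based peak shaver needs.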

Paper Nr: 235
Title:

Formalizing Artifact-Centric Business Processes - Towards a Conformance Testing Approach

Authors:

Hemza Merouani, Farid Mokhati and Hassina Seridi-Bouchelaghem

Abstract: Recently, Artifact-Centric Business Processes have emerged as an approach in which processes are centred on data as a “first-class citizen”. A key challenge faced by such processes is to develop effective mechanisms that support formal specification, validation and verification of their static and dynamic behaviours, i.e., the data of interest and how they evolve. We present, in this paper, a novel approach that allows, on the one hand, formalizing Artifact-Centric Business Process Models described in UML as an executable formal specification in Maude and its strategy language and, on the other hand, testing whether the implementation of such models conforms to its specification using all possible scenarios, described as Maude strategies. One of the main reasons for using the Maude strategy language is its execution environment, which facilitates the use of a wide range of formal methods.
Download

Paper Nr: 255
Title:

Development of Open Source Software, a Qualitative View in a Knowledge Management Approach

Authors:

Luã Marcelo Muriana, Cristiano Maciel and Ana Cristina Bicharra Garcia

Abstract: Open Source Software (OSS) is software that users are free to modify and share at no cost, whatever their intentions. A major feature of this kind of software is its development in public, where collective intelligence (CI) is applied and knowledge is shared. Communication is a fundamental activity in these development settings. To support the communication process, knowledge management (KM) stimulates communication and information sharing among people. In this way, good communication among users who are stimulated and coordinated contributes to the final quality of the open source project. This work surveys how KM stimulates quality assurance in open source development settings. It focuses on users, on the communication among them, and on the documentation they can help to write.
Download

Paper Nr: 278
Title:

Cooperation Strategies for Multi-user Transmission in Manhattan Environment

Authors:

Jaewon Chang and Wonjin Sung

Abstract: One of the major drawbacks of wireless communication systems in a Manhattan environment, with tall buildings lining both sides of the streets, is the performance degradation caused by penetration loss and the effects of inter-sector interference. To overcome such degradation, cooperation among sectors is under active investigation as an efficient means to provide enhanced coverage as well as increased spectral efficiency. In this paper, we describe various types of cooperating sector locations for cooperative multi-user transmission, and determine cooperation strategies for the cooperative operation among sectors by evaluating and comparing the types of microcell in the Manhattan environment. The results show that the most suitable number of cooperating sectors is determined by signal-to-interference-plus-noise ratio (SINR), outage probability, and throughput comparison. Performance variations for different sector densities under cooperation are also presented to suggest an efficient inter-sector cooperative transmission strategy in the Manhattan environment.
Download

Paper Nr: 280
Title:

Social Business – A New Dimension for Education and Training in Organizations - The EToW Model

Authors:

Maria João Ferreira, Fernando Moreira and Isabel Seruca

Abstract: Organizations have undergone a large (r)evolution at the social, economic and technological levels. A change of paradigm in the information systems and technologies (IST) used in the day-to-day life of every citizen does not, by itself, sustain such a transformation; a change of culture and behaviour is therefore necessary. The use of IST in an appropriate way, integrated with the organization's processes, will depend on an individual and collective effort. The younger generation, accustomed to sharing personal information on social networks, often through mobile devices, enters the job market looking for similar tools. These social tools allow the production, sharing and management of information and knowledge within the organization, between peers and other stakeholders, eliminating the barriers to communication and sharing. Taking advantage of these technologies for the education and training of an organization’s employees within the context of Social Business, in particular concerning nomadic workers, requires a comprehension exercise in how to demonstrate their usefulness with regard to the creation, access and sharing of contents in a safe way. To this end, this paper proposes a model for the 2nd layer – Education and Training of Organizational Workers – of the mobile Create, Share, Document and Training (m_CSDT) framework.
Download

Paper Nr: 287
Title:

Recommendations for Impact Analysis of Model Transformations - From the Requirements Model to the Platform-independent Model

Authors:

Dmitri Valeri Panfilenko, Andreas Emrich, Christian Meyer and Peter Loos

Abstract: Model transformations are the core component of MDA. They make it possible to transform models between different levels of abstraction, which allows the implicit built-in knowledge to be passed on from domain experts to IT professionals. What is not considered by the OMG are the consequences that changes at each level cause at the other MDA levels, which could be estimated through impact analysis techniques. For example, if the course of a procurement process in a company is to be changed, this would be performed by the proper experts at the technical level. However, it is difficult at this time to estimate the resulting changes at the adjacent levels. This shortcoming needs to be addressed, and proper recommendation support for the impact analysis of model transformations has to be elaborated.
Download

Paper Nr: 291
Title:

Slrtool: A Tool to Support Collaborative Systematic Literature Reviews

Authors:

Balbir S. Barn, Franco Raimondi, Lalith Athappian and Tony Clark

Abstract: Systematic Literature Reviews (SLRs) are used in a number of fields to produce unbiased accounts of specific research topics. SLRs and meta-analysis techniques are increasingly being used in other fields as well, from the Social Sciences to Software Engineering. This paper presents SLRTool, an open source, web-based, multi-user tool that supports the SLR process for a range of research areas. The tool is available at http://www.slrtool.org and is developed using a model-driven approach to enable its adaptation to different disciplines. The functionality of the tool is presented in the context of support for the various phases of the SLR process. The use of the tool is illustrated by means of a simulated SLR aiming to map out existing research in the domain of Enterprise Architecture (EA). Commentary on the use of the tool and potential additional benefits is provided, for example, on the role of the tool in non-medical meta-studies. SLRTool supports all phases of the SLR process and lends itself to creating and supporting research communities geared towards SLR-oriented activities. In particular, the tool could be suitable for the novice researcher community.
Download

Paper Nr: 294
Title:

A Research Agenda for Mobile Systems Evaluation

Authors:

Tamara Högler

Abstract: The present work shows the necessity of an economic evaluation model that is based on the singularities of mobile systems and that takes into account the interdependencies of their individual components. Motivation for this approach comes not only from the continuing discussion on the economic efficiency of mobile systems, but also from the fact that appropriate methodologies for comprehensive evaluation still do not exist. The starting point for a research agenda is the definition of the term mobile system, followed by an explanation of the individual components and singularities of such systems. The findings of the present work motivate the development of a generic model for economic evaluation. By defining the research agenda we provide guidance for constructing such a model.
Download

Paper Nr: 300
Title:

Toward a QoS Based Run-time Reconfiguration in Service-oriented Dynamic Software Product Lines

Authors:

Jackson Raniel Florencio da Silva, Aloisio Soares de Melo Filho and Vinicius Cardoso Garcia

Abstract: Ford invented the product line, which made mass production possible by reducing delivery time and production costs. Roughly speaking, the software industry also exhibits both manufacturing and mass production, generating products denoted as individual software and standard software (Pohl et al., 2005): a clear influence of Fordism on the development paradigm of Software Product Lines (SPL). However, this development paradigm was not designed to support changes in user requirements at run-time. Faced with this problem, academia has developed and proposed the Dynamic Software Product Line (DSPL) (Hallsteinsen et al., 2008) paradigm. Considering this scenario, we aim to contribute to the DSPL field by presenting a new way of deciding which DSPL features should be connected at run-time to a product, based on an analysis of quality attributes in service levels specified by the user. In order to validate the proposed approach, we tested it on a context-aware DSPL. At the end of the exploratory validation we could observe the effectiveness of the proposed approach in the DSPL to which it was applied. However, it is necessary to perform further studies in order to achieve statistical evidence of this effectiveness.
Download

Paper Nr: 312
Title:

Towards Real-time Static and Dynamic Profiling of Organisational Complexity

Authors:

Kon Shing Kenneth Chung

Abstract: In this position paper, I argue that although the definition and quantifiable metric for organisational complexity may still be controversial, it is possible to capture structural aspects of complexity in both static and dynamic forms. Based on Kannampallil’s theoretical framework for computing complexity, it is proposed here that complexity, in an aggregate sense, can be evaluated in terms of (i) the number of components (NoC) within a socio-technical organisation and (ii) the degree of interrelatedness (DoI) between these components. Given these variables, it is then possible to characterise complexity in terms of simple, complicated, relatively complex and complex profiles. These profiles serve as useful toolkits for indicating the complexity level a team, a department or the entire organisation is at, so that useful interventions or insights can be made. Adapting the ideas of Pentland, I also argue that with technological advances in Information Systems, organisations are now able to capture relational or social network data with relative ease, and to construct useful network and complexity maps of individuals, teams and organisations in real time.
Download
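The aggregate NoC/DoI idea lends itself to a tiny sketch. The mapping of the two variables onto the four profiles below is an assumption on my part (the abstract gives no thresholds), and the cut-off values are invented:

```python
# Hypothetical sketch: classify a socio-technical network into one of
# the four complexity profiles from its number of components (NoC) and
# degree of interrelatedness (DoI, here edge density). The thresholds
# and the NoC/DoI-to-profile mapping are assumptions for illustration.
def profile(n_components, edges, noc_threshold=10, doi_threshold=0.3):
    max_edges = n_components * (n_components - 1) / 2
    doi = len(edges) / max_edges if max_edges else 0.0
    many, dense = n_components > noc_threshold, doi > doi_threshold
    if not many and not dense:
        return "simple"
    if many and not dense:
        return "complicated"
    if not many and dense:
        return "relatively complex"
    return "complex"

# A 5-person team with 8 collaboration ties: few components, dense ties.
ties = [(0, 1), (0, 2), (0, 4), (1, 2), (1, 3), (2, 3), (2, 4), (3, 4)]
print(profile(5, ties))   # relatively complex
```

With social network data captured from information systems, the same classification could be recomputed continuously, which is what the real-time profiling argument envisages.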

Paper Nr: 4
Title:

Vulnerability and Remediation for a High-assurance Web-based Enterprise

Authors:

William R. Simpson and Coimbatore Chandersekaran

Abstract: A process for fielding vulnerability-free software in the enterprise is discussed. This process involves testing for known vulnerabilities, generic penetration testing and threat-specific testing, coupled with a strong flaw remediation process. The testing may be done by the software developer or by certified testing laboratories. The goal is to mitigate all known vulnerabilities and exploits, and to be responsive in mitigating new vulnerabilities and/or exploits as they are discovered. The analyses are reviewed when new or additional threats are identified and prioritized, with mitigation through the flaw remediation process, changes to the operational environment or the addition of further controls or products. This process is derived from the Common Criteria for Information Technology Security Evaluation and the Common Evaluation Methodology, which cover both discovery and remediation. The process has been modified for the USAF enterprise.
Download

Paper Nr: 5
Title:

Structuring Software Measurement - Metrication in the Context of Feedback Loops

Authors:

Jos J. M. Trienekens and Rob J. Kusters

Abstract: This paper presents the results of a case study in a software engineering department of a large industrial company. This software engineering department struggles with monitoring and controlling the performance of software projects. The current measurement processes do not provide adequate and sufficient information to either project or organisational management. Based on an analysis of the current measurement processes, four guidelines for measurement process improvement have been proposed. Following these guidelines, a three-level feedback loop has been developed and implemented. This multi-level feedback loop distinguishes measurement, analysis and improvement on, respectively, the project, multi-project and organisational levels. In the context of this feedback loop, new ‘process-oriented’ metrics have been identified in collaboration with project and organisational management. Preliminary results show that these ‘process-oriented’ metrics, i.e. regarding different types of effort deviations, provide useful insights into the performance of software projects for managers at the different levels of the implemented feedback loops.
Download

Paper Nr: 18
Title:

A Straightforward Introduction to Formal Methods Using Coloured Petri Nets

Authors:

Franciny Medeiros Barreto, Joslaine Cristina Jeske de Freitas, Michel S. Soares and Stéphane Julia

Abstract: Coloured Petri Nets (CPN) arose from the need to model very large and complex systems, such as those found in real industrial applications. The idea behind CPN is to unite the ability of Petri nets to represent synchronization and competition for resources with the expressive power of programming languages, data types and diverse abstraction levels. Through this union, systems whose study was previously impractical have become amenable to study. The objective of this paper is to present a formal modeling of the Health Watcher System applying the concepts of CPN using CPN Tools. Using a graphical language such as CPN often proves to be a helpful didactic method for introducing formal methods. This paper presents a brief introduction to Coloured Petri Nets, and illustrates how construction, simulation, and verification are supported through the use of CPN Tools.
Download

Paper Nr: 54
Title:

Cross-Sensor Iris Matching using Patch-based Hybrid Dictionary Learning

Authors:

Bo-Ren Zheng, Dai-Yan Ji and Yung-Hui Li

Abstract: Recently, more and more new iris acquisition devices have appeared on the market. In practical situations, it is highly possible that the iris images for training and testing are acquired by different iris image sensors. In that case, the recognition rate decreases considerably and becomes much worse than when both sets of images are acquired by the same image sensor. This issue is called “cross-sensor iris matching”. In this paper, we propose a novel iris image hallucination method using a patch-based hybrid dictionary learning scheme which is able to hallucinate iris images across different sensors. Thus, given a test iris image acquired by a new image sensor, a corresponding iris image is hallucinated which looks as if it were captured by the old image sensor used in the training stage. By matching training images with hallucinated images, the recognition rate can be enhanced. The experimental results show that the proposed method is better than the baseline, which proves the effectiveness of the proposed image hallucination method.
Download

Paper Nr: 58
Title:

Cloud-based Enterprise Resources Planning System (ERP) - A Review of the Literature

Authors:

Yuqiuge Hao and Petri Helo

Abstract: Cloud computing has recently attracted a lot of attention. The growing number of articles on the cloud is an indication of its importance. Cloud ERP is a specific service delivered by the cloud model. It provides companies with the benefits of all business management functionalities with minimum IT investment and low cost. Although cloud ERP is being promoted as a new strategy to improve companies’ management and operations, no systematic research on cloud ERP has been published until now. The main objectives of this research are to review up-to-date publications on cloud ERP, to classify the publications based on a suitable classification of themes and to develop a conceptual framework for organizing the related knowledge. In this paper, 40 peer-reviewed journal and conference publications are analysed and classified into different themes. A conceptual framework is designed with four domains: Technology Innovation, Business Model, Development Method and Usage & Assimilation. This framework specifies the research gap between cloud ERP and business alignment. Finally, some research agendas are developed.
Download

Paper Nr: 104
Title:

DC2DP: A Dublin Core Application Profile to Design Patterns

Authors:

Angélica Aparecida de Almeida Ribeiro, Jugurta Lisboa-Filho, Lucas Francisco da Matta Vegi and Alcione de Paiva Oliveira

Abstract: Design patterns describe reusable solutions to existing problems in object-oriented software development. Design patterns are mostly documented in written form in books and scientific papers, which hinders their processing by computer, their diffusion, and their broader reuse. They can also be found on the internet, though documented with little detail, which makes them hard to understand and consequently to reuse. This paper presents an application profile of the Dublin Core metadata standard specific to design patterns, called DC2DP. The goal is to allow design patterns to be documented so as to provide the user with a more detailed and standardized description, besides enabling automatic processing through web services. The paper also extends an Analysis Patterns Reuse Infrastructure (APRI) by adding a design pattern repository to it, thus allowing these patterns to be cataloged and searched, which makes their discovery, study, and reuse easier.
Download

Paper Nr: 114
Title:

Domain Ontology for Time Series Provenance

Authors:

Lucélia de Souza, Maria Salete Marcon Gomes Vaz and Marcos Sfair Sunye

Abstract: Time series data are generated all the time, in unprecedented volumes, consisting of a sequence of points spread out over time, usually at regular time intervals. Time series analysis is different from ordinary data analysis, given its intrinsic nature, where observations are dependent and the order of observations is important for the analysis. Knowledge about the data to be analyzed is relevant in an analysis process, but this knowledge is not always explicit and easy to interpret in many information resources. Time series can be semantically enriched, where provenance information using ontologies allows representing and inferring knowledge. The main contribution of this paper is to present a domain ontology, developed by modular design, for time series provenance, which adds semantic knowledge and contributes to the choice of appropriate statistical methods for an important step of time series analysis: trend extraction (detrending). Trend is a time series component that needs to be extracted because it can hide other phenomena, and most statistical methods are developed for stationary time series. This work intends to contribute to semantically improving decision making about the trend extraction step, facilitating the preprocessing phase of time series analysis.
Download
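As a minimal example of the detrending step the ontology is meant to support (not taken from the paper), a linear trend can be removed from a series by least-squares fitting; the sample series is invented:

```python
# Illustrative sketch: remove a linear trend by least-squares fitting,
# a common detrending choice when an analysis method assumes a
# stationary series. The sample series is invented.
def detrend_linear(y):
    n = len(y)
    t = range(n)
    mean_t, mean_y = sum(t) / n, sum(y) / n
    slope = sum((ti - mean_t) * (yi - mean_y) for ti, yi in zip(t, y)) / \
            sum((ti - mean_t) ** 2 for ti in t)
    intercept = mean_y - slope * mean_t
    # Residuals after subtracting the fitted trend line
    return [yi - (slope * ti + intercept) for ti, yi in zip(t, y)]

series = [3.0, 5.1, 6.9, 9.2, 11.0]   # roughly y = 2t + 3 plus noise
residuals = detrend_linear(series)
print([round(r, 2) for r in residuals])
```

Choosing between this and other detrending methods (differencing, polynomial fits, filters) is exactly the kind of decision the provenance ontology aims to inform.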

Paper Nr: 117
Title:

University’s Scientific Resources Processing in Knowledge Management Systems

Authors:

Zhomartkyzy Gulnaz, Milosz Marek and Balova Tatiana

Abstract: This article deals with some issues of modern approaches to text processing in knowledge management systems. A method of document profile formation based on a scientific knowledge ontology model, which provides semantic processing and retrieval of information, is proposed. The article describes the main stages of text processing of the university’s information resources to form a semantic document profile: the extraction of terminological collocations, the automatic classification of texts on scientific topics, and the formation of a document’s semantic profile.
Download

Paper Nr: 131
Title:

Detection of Software Anomalies Using Object-oriented Metrics

Authors:

Renato Correa Juliano, Bruno A. N. Travençolo and Michel S. Soares

Abstract: The development of quality software has always been the aim of many studies in past years, in which the focus was on seeking better software production with high effectiveness and quality. In order to evaluate software quality, software metrics were proposed, providing an effective tool to analyze important features such as maintainability, reusability and testability. The Chidamber and Kemerer metrics (CK metrics) are frequently applied to analyze Object-Oriented Programming (OOP) features related to structure, inheritance and message calls. The main purpose of this article is to gather results from studies that used the CK metrics for source code evaluation and, based on the CK metrics, perform a review of software metrics and the values obtained. Results on the mean and standard deviation obtained in all the studied papers are presented, both for Java and C++ projects. Software anomalies are then identified by comparing the results of the software metrics described in those studies. This article contributes by suggesting values for software metrics that, according to the literature, can indicate high probabilities of failure. Another contribution is to analyze which CK metrics are successfully used (or not) in activities such as predicting error proneness, analyzing the impact of refactoring on metrics, and examining the ease of white-box reuse based on metrics. We discovered that, in most of the studied articles, CBO, RFC and WMC are often useful, while hierarchical metrics such as DIT and NOC are not useful in the implementation of such activities. The results of this paper can be used to guide software development, helping to manage development and prevent future problems.
Download

Paper Nr: 135
Title:

Networks of Pain in ERP Development

Authors:

Aki Alanne, Tommi Kähkönen and Erkka Niemi

Abstract: Enterprise resource planning (ERP) systems have been providing business benefits through integrated business functions for two decades, but system implementation is still painful for organizations. Even though ERP projects are collaborative efforts conducted by many separate organizations, academic research has not fully investigated ERPs from this perspective. In order to find out the challenges of ERP development networks (EDN), a multiple case study was carried out. We identified three main categories of pain: evolving network, inter-organizational issues, and conflicting objectives. The dynamic nature of the EDN causes challenges when new organizations and individuals enter and leave the project. Relationships between organizations form the base for collaboration, yet conflicting objectives may hinder the development. The main implication of this study is that the network should be managed as a whole in order to avoid the identified pitfalls. Still more research is needed to understand how the EDN efficiently interacts to solve different problems in ERP development.
Download

Paper Nr: 159
Title:

Enhance OpenStack Access Control via Policy Enforcement Based on XACML

Authors:

Hao Wei, Joaquin Salvachua Rodriguez and Antonio Tapiador

Abstract: Cloud computing is driving the future of Internet computation, evolving concepts from software to infrastructure. OpenStack is one of the most promising open-source cloud computing platforms. Its active developer community and worldwide partners make OpenStack a booming cloud ecosystem. OpenStack supports JSON-file-based access control for user authorization. In this paper, we introduce a more powerful access control method for OpenStack: an XACML access control mechanism. XACML is an approved OASIS standard for an access control language, capable of handling all major access control models. It has numerous advantages for today's cloud computing environments, including fine-grained authorization policies and implementation independence. This paper puts forward an XACML access control solution for OpenStack, with a Policy Enforcement Point (PEP) embedded in the OpenStack cloud service and an XACML engine server with a policy storage database. Our implementation allows OpenStack users to choose XACML as the access control method for OpenStack and facilitates policy management.
Download

Paper Nr: 166
Title:

Reuse of Service Concepts Based on Service Patterns

Authors:

Wannessa Rocha Fonseca and Pedro Luiz Pizzigatti Corrêa

Abstract: In the process of service-oriented software development, one of the main tasks is to design services, since errors at this stage can propagate throughout the project. This paper proposes a service specification model in the public sector based on service patterns. The service pattern is an abstract service that represents a generic and reusable description. In this context, a lifecycle of service patterns is proposed, as well as the steps for specifying the service patterns. A case study shows an example of a service pattern in the e-government scenario.
Download

Paper Nr: 171
Title:

A Framework for Concurrent Design of Metamodels and Diagrams - Towards an Agile Method for the Synthesis of Domain Specific Graphical Modeling Languages

Authors:

François Pfister, Marianne Huchard and Clémentine Nebut

Abstract: DSMLs (Domain Specific Modeling Languages) are an alternative to general purpose modeling languages (e.g., UML or SysML) for describing models with concepts and relations specific to a domain. DSML design is often based on Ecore metamodels, which follow the class-relation paradigm, and also requires defining a concrete syntax, which can be either graphical or textual. In this paper, we focus on graphical concrete syntax, and we introduce an approach and a tool (Diagraph) to assist the design of a graphical DSML. The main principles are: non-intrusive annotations of the metamodel to identify nodes, edges, nesting structures and other graphical information; and immediate validation of metamodels through immediate generation of an EMF-GMF instance editor supporting multi-diagramming. We report on a comparison between Diagraph and Obeo Designer (a commercial proprietary tool), conducted as part of a Model Driven Engineering course.
Download

Paper Nr: 217
Title:

Quality Assessment Technique for Enterprise Information-management System Software

Authors:

E. M. Abakumov and D. M. Agulova

Abstract: The paper presents an overview of existing methods and standards used for the quality assessment of computer software. Quality models, quality requirements and recommendations for the evaluation of software product quality are defined in standards, but there is no unified definition of an algorithm that fully describes the process of software quality assessment and contains specific methods for measuring, ranking and estimating quality characteristics. The paper therefore describes a technique that yields a quantitative assessment of software quality, determines whether the software under consideration meets the required quality level, and, when a selection among equivalent software tools is needed, allows them to be compared with each other.
Download

Paper Nr: 234
Title:

Data Leakage Prevention - A Position to State-of-the-Art Capabilities and Remaining Risk

Authors:

Barbara Hauer

Abstract: Organizations all around the world have faced a continuous increase in information exposure over the past decades. In order to overcome this threat, out-of-the-box data leakage prevention (DLP) solutions are applied, which monitor and control data access and usage on storage systems, on client endpoints, and in networks. In recent years, products from market leaders such as McAfee, Symantec, Verdasys, and Websense have evolved into enterprise content-aware DLP solutions. However, this paper argues that current out-of-the-box solutions are not able to reliably protect information assets. It is only possible to reduce the probability of various incidents if organizational and technical requirements are met before implementing a DLP solution. To be efficient, DLP should be a concept of information security within the information leakage prevention (ILP) pyramid, which is presented in this paper. Furthermore, data must not be equated with information, as the two require different protection strategies. Especially in cases of privilege misuse, such as exploiting an unlocked system or shoulder surfing, the remaining risk must not be underestimated.
Download

Paper Nr: 240
Title:

Database Design of a Geo-environmental Information System

Authors:

George Roumelis, Thanasis Loukopoulos and Michael Vassilakopoulos

Abstract: Protecting the environment from the impact of productive investments has become a major task for enterprises and constitutes a critical competitiveness factor. The region of Central Greece presents many serious and particular environmental problems. An Environmental Geographic Information System is under development that will maintain necessary and available information, including existing environmental legislation, specific data rules, regulations, restrictions and actions of the primary sector, existing activities of the secondary and tertiary sectors and their influences. The system will provide information about the environmental status at each location with respect to water resources, soil and atmosphere, the existence of significant pollution sources, existing surveys, studies and measurements for high risk areas, the land use and legal status of locations, and the infrastructure networks. In this paper, we present a database design that supports the above mentioned objectives and information provision. More specifically, we present examples of user queries that the system should be able to answer for the extraction of useful information, the basic categorization of data that will be maintained by the system, and a data model that is able to support such data maintenance, and we examine how existing indexing structures can be utilized for efficient processing of such queries.
Download

Paper Nr: 249
Title:

Change and Version Management in Variability Models for Modular Ontologies

Authors:

Melanie Langermeier, Thomas Driessen, Heiner Oberkampf, Peter Rosina and Bernhard Bauer

Abstract: Modular ontology management tries to overcome the disadvantages of large ontologies regarding reuse and performance. One possibility for formalizing the various combinations is variability models, which originate from the software product line domain. As in that domain, knowledge models can then be individualized for a specific application through the selection and exclusion of modules. However, neither the ontology repository nor the requirements of the domain are stable over time. A process is needed that enables knowledge engineers and domain experts to adapt the principles of version and change management to the domain of modular ontology management. In this paper, we define the existing change scenarios and provide support for keeping the repository, the variability model and also the configurations consistent using Semantic Web technologies. The approach is presented with a use case from the enterprise architecture domain as a running example.
Download

Paper Nr: 269
Title:

Defining a Model for Effective e-Government Services and an Inter-organizational Cooperation in Public Sector

Authors:

Nunzio Casalino, Maurizio Cavallari, Marco De Marco, Mauro Gatti and Giuseppe Taranto

Abstract: Accomplishing interoperability among public information systems is a complex task, not only because of the variety of technological specifications and the nature of the organisations in which the systems are implemented, but also because a detailed evaluation and analysis of the multiple aspects involved is lacking. The aim of this paper is to identify and summarize the main aspects of the field of interoperability (strategic frameworks, laws, regulations, specific requirements, organizational and technical issues) by locating and assessing works that focus on the identification and analysis of the barriers, the organizational issues, and the success and risk factors in information systems (IS) for the public sector. Most of the interesting studies, based on a literature review of organisational studies, focus on other related themes such as bond interconnection, information sharing, and process integration in public administration, but not on the specific subject of interoperability between European public administrations' IS.
Download

Paper Nr: 276
Title:

Impact of Dynamicity and Causality on Cost Drivers in Effort Estimation

Authors:

Suman Roychoudhury, Sagar Sunkle and Vinay Kulkarni

Abstract: Software cost estimation is an important step that determines the effective manpower, schedule, pricing, profit and success of executing any medium to large sized project. Depending on the underlying development methodology (e.g., code-centric, model-driven, product-line, etc.) and past experience, every enterprise follows some cost estimation strategy that may be derived and customized from a standard cost model (e.g., COCOMO II). However, most software cost estimation techniques applied at the start of a project do not consider the dynamicity and causality among cost drivers, which can alter the accuracy of the estimation. In this paper, we investigate those cost drivers that are time-dependent and inter-dependent, and use system dynamics to simulate their effect on effort estimation.
Download

Paper Nr: 289
Title:

Vectorization of Content-based Image Retrieval Process Using Neural Network

Authors:

Hanen Karamti, Mohamed Tmar and Faiez Gargouri

Abstract: The rapid development of digitization and data storage techniques has resulted in a large increase in the volume of images. In order to cope with this increasing amount of information, it is necessary to develop tools that accelerate and facilitate access to information and ensure the relevance of the information available to users. These tools must minimize the problems related to the image indexing used to represent query content information. The present paper addresses this issue. Indeed, we put forward a new retrieval model based on a neural network, which transforms any image retrieval process into a vector space model. The results obtained by this model are illustrated through some experiments.
Download

Paper Nr: 292
Title:

Processes Construction and π-calculus-based Execution and Tracing

Authors:

Leonid Shumsky, Vladimir Roslovtsev and Viacheslav Wolfengagen

Abstract: Many state-of-the-art business-process modelling and management techniques rely on methods that lack a sound theoretical foundation, although such a foundation is increasingly acknowledged to be advantageous in practical information system design and implementation. With it, the software (and, in fact, the very processes the software is supposed to automate) tends to become 'properly designed', thus ensuring higher degrees of software (and process) extensibility and adaptability, as well as better verification and execution control. In this paper we discuss a constructive approach to process design, and we present a process execution semantics based on π-calculus, together with a process analysis and debugging technique based on formalized execution logs.
Download

Area 4 - Software Agents and Internet Computing

Full Papers
Paper Nr: 40
Title:

A Risk Analysis Method for Selecting Service Providers in P2P Service Overlay Networks

Authors:

Rafael Giordano Vieira, Omir Correia Alves Junior and Adriano Fiorese

Abstract: In an increasingly competitive marketplace, the development of collaborative networked environments has become a key factor for companies to successfully leverage their business activities. Nevertheless, when these companies get involved in more volatile strategic networks, they must deal with additional risks that need to be identified, measured, and mitigated through a well-defined process. In this sense, this paper specifies a method for risk analysis over a set of service providers (SPs) in a P2P Service Overlay Network (SON). In this applied, qualitative and essentially exploratory work, the proposed method assesses the level of risk present in a set of previously selected SPs using key performance indicators (KPIs), and measures the viability of forming a Virtual Organization (VO) with those selected SPs. A computational prototype was also specified and used to execute a set of tests to assess the proposed risk analysis method.
Download

Paper Nr: 50
Title:

A Metadata Focused Crawler for Linked Data

Authors:

Raphael do Vale Amaral Gomes, Marco A. Casanova, Giseli Rabello Lopes and Luiz André P. Paes Leme

Abstract: The Linked Data best practices recommend that tripleset publishers use well-known ontologies in the triplification process and link their triplesets with other triplesets. However, despite the fact that extensive lists of open ontologies and triplesets are available, most publishers typically do not adopt those ontologies and link their triplesets only with popular ones, such as DBpedia and Geonames. This paper presents a metadata crawler for Linked Data that assists publishers in the triplification and linkage processes. The crawler provides publishers with a list of the most suitable ontologies and vocabulary terms for triplification, as well as a list of triplesets that the new tripleset can most likely be linked with. The crawler focuses on specific metadata properties, including subclass of, and returns only metadata, hence the classification "metadata focused crawler".
Download

Paper Nr: 99
Title:

A Reactive and Proactive Approach for Ambient Intelligence

Authors:

Alencar Machado, Daniel Lichtnow, Ana Marilza Pernas, Leandro Krug Wives and José Palazzo Moreira de Oliveira

Abstract: Ambient Intelligence provides technology support and assistance to help people in their daily wellbeing. Equipped with ubiquitous technologies, Ambient Intelligence uses sensors to monitor the environment and to collect data continuously, providing systems with updated information. Ideally, these computer-supported environments must detect relevant events to forecast future situations and act proactively to mitigate or eliminate undesired situations while regarding users' specific needs. To build a system with reactive and proactive characteristics in Ambient Intelligence, it is important to allow it to be extensible and predictive and to incorporate decision-making capabilities. In this sense, the objective of this work is to propose an approach for providing reactive and proactive behavior in Ambient Intelligence systems. More specifically, we want to provide Situation as a Service in Ambient Assisted Living. In the present work, we illustrate practical aspects of the system's architecture by describing a home-care scenario in which the system is able to understand the behavior of the user as time goes by, and to detect relevant (dangerous) situations in order to act reactively and proactively and help users manage their health condition.
Download

Paper Nr: 118
Title:

Business Process Modeling and Instantiation in Home Care Environments

Authors:

Júlia K. Kambara da Silva, Guilherme Medeiros Machado, Lucinéia Heloisa Thom and Leandro Krug Wives

Abstract: Many studies are currently being conducted within the field of Home Care, where houses equipped with devices and sensors can help users in their daily lives, including those with chronic diseases and disabilities. One important challenge in this area is selecting the device and functionalities that best meet users' needs based on their context, location and disabilities. In this sense, this paper presents a novel approach for selecting the most appropriate device for the current user context. In our approach, devices and their functionalities are described and represented by Web services, and business processes are used as guidelines that specify the procedures that should be taken in the treatment of a home care patient. Therefore, the issue of which device and which of its functionalities should be selected is treated as a problem of discovering and selecting Web services based on their syntactic and semantic aspects as well as the user context.
Download

Paper Nr: 251
Title:

Finding Reliable People in Online Communities of Questions and Answers - Analysis of Metrics and Scope Reduction

Authors:

Thiago Baesso Procaci, Sean Wolfgand Matsui Siqueira and Leila Cristina Vasconcelos de Andrade

Abstract: Online communities of questions and answers have become important places for users to get information and share knowledge. We investigated metrics and strategies that allow the identification of users who are willing to help and provide good answers in a community, whom we call reliable people. In order to improve the performance of finding these users, we also proposed some strategies for scope reduction. We then applied these metrics and strategies to three online communities of questions and answers available on the Web, which also provide user reputation grades, making it possible to verify the results of finding reliable people.
Download

Short Papers
Paper Nr: 36
Title:

Using Collaborative Filtering to Overcome the Curse of Dimensionality when Clustering Users in a Group Recommender System

Authors:

Ludovico Boratto and Salvatore Carta

Abstract: A characteristic of most datasets is that the number of data points is much lower than the number of dimensions (e.g., the number of movies rated by a user is much lower than the number of movies in a dataset). Dealing with high-dimensional and sparse data leads to problems in the classification process, known as the curse of dimensionality. Previous research presented approaches that produce group recommendations by clustering users in contexts where groups are not available. It is widely known in the literature that clustering is one of the classification forms affected by the curse of dimensionality. In this paper we propose an approach that removes sparsity from a dataset before clustering users for group recommendation. This is done by using a Collaborative Filtering approach that predicts the missing data points. In this way, it is possible to overcome the curse of dimensionality and produce better clusterings. Experimental results show that, by removing sparsity, the accuracy of the group recommendations strongly increases with respect to a system that works on sparse data.
Download

Paper Nr: 46
Title:

Measuring the Success of Social CRM - First Approach and Future Research

Authors:

Torben Küpper

Abstract: Web 2.0 and Social Media provide new opportunities for collaboration and value co-creation. Social Customer Relationship Management (CRM) addresses these opportunities and deals with the integration of Web 2.0 and Social Media within CRM. Social CRM has the potential to enable, e.g., customer-to-customer support, which reduces companies' service costs. In order to measure the success (e.g., cost savings) of Social CRM activities (e.g., customer-to-customer support), a Social CRM measurement model is indispensable and a prerequisite for future research. At present, scholars are conducting research on Social CRM measures and attempting to develop a Social CRM measurement model. This paper presents a systematic and rigorous literature review of this research topic. The major result reveals a lack of extant literature on the topic. The findings disclose the need for a Social CRM measurement model built on an evaluation-based foundation.
Download

Paper Nr: 64
Title:

Combining the Spray Technique with Routes to Improve the Routing Process in VANETS

Authors:

Maurício José da Silva, Fernando Augusto Teixeira, Saul Delabrida and Ricardo A. Rabelo Oliveira

Abstract: Vehicular networks represent a special type of wireless network that has gained the attention of researchers over the past few years. Routing protocols for this type of network must face several challenges, such as high mobility, high speeds and frequent network disconnections. This paper proposes a vehicular routing algorithm called RouteSpray that in addition to using vehicular routes to help make routing decisions, uses controlled spraying to forward multiple copies of messages, thus ensuring better delivery rates without overloading the network. The results of experiments performed in this study indicate that the RouteSpray algorithm delivered 13.12% more messages than other algorithms reported in the literature. In addition, the RouteSpray algorithm kept the buffer occupation 73.11% lower.
Download

Paper Nr: 82
Title:

Visualization Functionality of Virtual Factories - An Enhancement to Collaborative Business Process Management

Authors:

Ahm Shamsuzzoha, Filipe Ferreira, Sven Abels, Americo Azevedo and Petri Helo

Abstract: This paper focuses on process visualization applicable to managing a Virtual Factory (VF) business environment. It covers the implementation of the dashboard user interface to be used by the VF partners. The dashboard features state-of-the-art business intelligence and provides data visualization, user interfaces and menus to support VF partners in executing collaborative processes. With advanced visualizations that produce quality graphics, it offers a variety of information visualizations that bring the process data to life with clarity. This data visualization provides the critical operational metrics (e.g., KPIs) required to manage virtual factories. Various technical aspects of this dashboard user interface portal are elaborated within the scope of this research, such as installation instructions, technical requirements for users and developers, execution and usage aspects, limitations and future work. The dashboard user interface portal presents different widgets, according to the VF requirements, that are needed to support the visualization and monitoring of various business processes within a VF. The research work highlighted in this paper was conceptualized, developed and validated within the scope of the European Commission NMP priority of the Seventh RTD Framework Programme for the ADVENTURE (ADaptive Virtual ENterprise ManufacTURing Environment) project.
Download

Paper Nr: 277
Title:

A Multi-agent System to Monitor SLA for Cloud Computing

Authors:

Benjamin Gâteau

Abstract: More than a technological solution, Cloud Computing is also an economic advantage and already plays an important role in the information technology area. Thereby, and in order to ensure a QoS commitment between a provider and a customer, Service Level Agreements (SLAs) describe a set of non-functional requirements of the service the customer is buying. In this paper, we describe how Multi-Agent Systems (MAS) can be used to manage SLAs, and we present the monitoring tool we developed with the SPADE framework.
Download

Paper Nr: 310
Title:

Service Consumer Framework - Managing Service Evolution from a Consumer Perspective

Authors:

George Feuerlicht and Hong Thai Tran

Abstract: As the complexity of service-oriented applications grows, it is becoming essential to develop methods to manage service evolution and to ensure that the impact of changes on existing applications is minimized. Service evolution has been the subject of recent research interest, but most of the research on this topic deals with service evolution from the service provider perspective. There is an equal need to consider this problem from the perspective of service consumers and to develop effective methods that protect service consumer applications from changes in externally provided services. In this paper, we describe an initial proposal for a Service Consumer Framework that attempts to address this problem by providing resilience to changes in external services as these services evolve or become temporarily unavailable. The framework incorporates a service router and service adaptors and determines the runtime behavior of the system based on design-time decisions recorded in the service repository.
Download

Paper Nr: 17
Title:

CaPLIM: The Next Generation of Product Lifecycle Information Management?

Authors:

Sylvain Kubler and Kary Främling

Abstract: Product Lifecycle Information Management (PLIM) aims to enable all participants and decision-makers to have a clear, shared understanding of the product lifecycle, and to get feedback on product use conditions. Each product, whether physical or virtual, is designed to provide a range of services aimed at supporting the daily activities of each product stakeholder (e.g., designers, manufacturers, distributors, users, repairers, or recyclers). Such services are usually considered only once, with parameters fine-tuned once and for all. A future generation of services could attempt to self-adapt to the product context by discovering and exchanging helpful information with other devices and systems in its direct or indirect surroundings. The so-called Internet of Things (IoT) is a tremendous opportunity to support the development of such a new generation of services by taking advantage of powerful concepts such as context-awareness. Embedding context-awareness into the product is a possible way to learn about the product's context and to make appropriate decisions. However, this is not enough today, because the large number of objects, systems, networks, and users comprising the IoT requires, more than ever before, standardized ways and interfaces to exchange all kinds of information between all kinds of devices. In an IoT context, this paper opens up new research directions for providing a new generation of PLIM services by investigating context-awareness. The combination of these two visions is referred to as CaPLIM (Context-awareness & PLIM), whose originality lies in the fact that it takes maximum advantage of IoT standards, particularly of the recent Quantum Lifecycle Management (QLM) standard proposal.
Download

Paper Nr: 32
Title:

QoS-aware Service Composition Based on Sequences of Services

Authors:

Sylvain D'Hondt and Shingo Takada

Abstract: Service composition is an important part of developing service-oriented systems. There are two basic approaches to service composition. In the first, the developer identifies and searches for individual services that can be composed. In the second, the developer identifies the global input(s) and output(s) of the entire composition and searches for the composition with the best match. We propose a "middle of the road" approach, where we identify and search for "sequences of services", each of which is a series of consecutively executed services that appears within an existing composition stored in a database. Our approach utilizes a database containing service-oriented systems. The developer specifies a query containing functional and non-functional requirements in XML format. The query is then used to search the database for a sequence of services that matches the requirements. We show the results of an experiment indicating that our approach enabled subjects to find more executable compositions than a tool that searches for services individually.
Download

Paper Nr: 35
Title:

Smart Collaborative Processes Monitoring in Real-time Business Environment - Applications of Internet of Things and Cloud-data Repository

Authors:

Ahm Shamsuzzoha, Sven Abels, Simon Kuspert and Petri Helo

Abstract: In today’s business world there is growing interest in business collaboration among companies, especially small and medium enterprises (SMEs). The objective of forming and operating such collaborative networks is to achieve market benefit through sharing resources, expertise and knowledge among the networked partners. It is therefore necessary to track and trace each business process within such business networks in a real-time environment in order to enhance their success level and reduce possible risks or uncertainties. Keeping this objective in mind, this research highlights the basic principles of business process monitoring through smart technologies such as the Internet of Things (IoT) and a cloud-based data repository. Smart process monitoring through the combination of Internet of Things technology and a cloud-based data repository system is rarely discussed in the field of collaborative business. Within the scope of this research, generic scenarios of both the IoT and cloud-based data storage are described with the objective of implementing them in a collaborative business process monitoring domain. An implementation example is highlighted in this paper, where IoT and cloud-based data storage are showcased in business process monitoring and management. The overall research outcomes and future research directions are also articulated in the conclusion of this paper.
Download

Paper Nr: 74
Title:

A Semantic Web Model for Ad Hoc Context-aware Virtual Communities - Application to the Smart Place Scenario

Authors:

Pierre Maret, Frédérique Laforest and Dimitri Lanquetin

Abstract: In this paper, we propose a model for an open framework that allows mobile users to create and participate in context-aware virtual communities. The model we propose and implement is a generic data model fully compliant with the semantic web data model RDF. This model is suited to letting mobile end-users use, create and customize virtual communities. We combine the fundamentals of a decentralized semantic web social network with context-aware virtual communities and services. Smart city scenarios are typical targets of this approach. It can be implemented in places like metro stations, museums, squares, cinemas, etc. to provide ad hoc context-aware information services to mobile users.
Download

Paper Nr: 105
Title:

Capturing Context Information in a Context Aware Virtual Environment

Authors:

Helio H. L. C. Monte-Alto and Elisa H. M. Huzita

Abstract: Designing context-aware applications is a great challenge given the complexity of such systems, especially concerning the mechanisms that provide sensing, or capturing, of context information. There are many works in the literature on providing context awareness in physical environments, such as pervasive systems. However, when dealing with virtual environments, such as distributed environments that support collaboration among geographically distributed teams, the design becomes considerably more complex. There is a need to model a software system that uses concepts present in physical environments in order to reduce the disadvantages of distribution. This work explores this challenge, focusing on conceiving a solution for context awareness in distributed virtual environments. We also present a model for designing a platform to support the development of multi-agent based context-aware virtual environments.
Download

Paper Nr: 127
Title:

e-Commerce Game Model - Balancing Platform Service Charges with Vendor Profitability

Authors:

Zheng Jianya, Daniel L. Li, Li Weigang, Zi-Ke Zhang and Hongbo Xu

Abstract: One of the biggest challenges in e-commerce is to utilize data mining methods to improve profitability for both platform hosts and e-commerce vendors. Taking Alibaba as an example, the most efficient method of operation is to collect hosting service fees from the vendors that use the platform. The platform sets a service fee and each vendor decides whether or not to accept it. In this sense, it is necessary to create an analytical tool to improve and maximize the profitability of this partnership. This work proposes a dynamic non-cooperative E-Commerce Game Model (E-CGM). In E-CGM, the platform hosting company and the e-commerce vendors have their payoff functions calculated using backward induction, and their activities are simulated in a game where the goal is to achieve the biggest payoff. Taking various market conditions into consideration, E-CGM obtains the Nash equilibrium and calculates the service fee that would yield the most profitable result. Comparing against data mining results obtained from a set of real data provided by Alibaba, E-CGM simulated the expected transaction volume for a selected service fee. The results demonstrate that the proposed game-theoretic model is suitable for e-commerce studies and can help improve profitability for the partners of an online business model.
Download

Paper Nr: 174
Title:

Description of Accessible Learning Resources by Using Metadata

Authors:

Salvador Otón, Concha Batanero, Eva Garcia, Antonio Garcia and Roberto Barchino

Abstract: This paper presents the IMS Access for All v3.0 specification, whose main objective is to simplify the definition of accessibility metadata for learning objects, and of the preferences and needs of the users of these objects, thereby achieving an inclusive learning process. The AfAPad tool has been created to help accessible-content creators complete the specification's set of accessibility metadata and create the XML files that represent it. The tool also helps users create XML files with their preferences and needs metadata. It has been developed by the authors at the University of Alcala (Spain) within the ESVIAL project. This paper presents the practical steps to be followed by a content creator to produce an accessible training activity, explaining the specifications and standards that can be used and the necessary tools.
Download

Paper Nr: 184
Title:

RESTful User Model API for the Exchange of User’s Preferences among Adaptive Systems

Authors:

Martin Balík and Ivan Jelínek

Abstract: Adaptive Hypermedia Systems observe users' behavior and provide personalized hypermedia. Users interact with many systems on the Web, and each user-adaptive system builds its own model of the user's preferences and characteristics. There is a need to share this personal information, and current research is exploring ways to share user models efficiently. In this paper, we present our solution for personal data exchange among multiple hypermedia applications. First, we designed a communication interface based on the REST architectural style, and then we defined data structures appropriate for the data exchange. Our user model is ontology-based; therefore, the data from multiple providers can be aligned to achieve interoperability.
Download

Paper Nr: 215
Title:

Investigating the Effect of Social Media on Trust Building in Customer-supplier Relationships

Authors:

Fabio Calefato, Filippo Lanubile and Nicole Novielli

Abstract: Trust is a concept that has been widely studied in e-commerce, since it represents a key issue in building successful customer-supplier relationships. In this sense, social software represents a powerful channel for establishing direct communication with customers. As a consequence, companies are now investing in social media to build their social digital brand and strengthen relationships with their customers. In this paper we investigate the role of social media in the process of trust building, with particular attention to the case of small companies. Our findings show that social media contribute to building affective trust more than traditional websites do, by fostering the affective commitment of customers.
Download

Paper Nr: 224
Title:

A New Approach Based on Learning Services to Generate Appropriate Learning Paths

Authors:

Chaker Ben Mahmoud, Fathia Bettahar, Marie-Hélène Abel and Faïez Gargouri

Abstract: This article presents a new approach to providing learners with learning paths adapted to their profiles. These paths are generated through the automatic composition of learning services. The approach is made up of three modules: a search module, a matching module and a composition module. It is based on a new learning service model (SWAP) that extends the semantic web service model (OWL-S) to describe the semantics of learning modules and facilitate the discovery of learning paths adapted to each learner.
Download

Paper Nr: 296
Title:

e-swim: Enterprise Semantic Web Implementation Model - Towards a Systematic Approach to Implement the Semantic Web in Enterprises

Authors:

Reinaldo Ferreira and Isabel Seruca

Abstract: The adoption of Semantic Web technologies constitutes a promising approach to data structuring and integration, both for public and private usage. While these technologies have been around for some time, their adoption is behind overall expectations, particularly in the case of Enterprises. This paper discusses the challenges faced in implementing Semantic Web technologies in Enterprises and proposes an Implementation Model that measures and facilitates that implementation. The advantages of using the proposed model are two-fold: it serves as a guide for driving the implementation of the Semantic Web, and it helps to evaluate the impact of introducing the technology.
Download

Paper Nr: 307
Title:

Towards Ecosystems based on Open Data as a Service

Authors:

Kiev Gama and Bernadette Farias Lóscio

Abstract: Despite several efforts in contests throughout the world that encourage local communities to develop applications based on government Open Data, the solutions resulting from such initiatives do not last: they lack maintenance and rapidly fall into disuse. This is due mainly to the lack of investment, or even of a model for monetizing the use of such applications. It is therefore necessary to develop a model that fosters the value chain for Open Data, aiming at an economically self-sustaining ecosystem. Such an ecosystem should promote new businesses through the creation of systems and applications focused on citizens. This article discusses the creation of software ecosystems for services and applications underpinned by a platform based on Open Data as a Service.
Download

Area 5 - Human-Computer Interaction

Full Papers
Paper Nr: 116
Title:

Gesture Vocabulary for Natural Interaction with Virtual Museums - Case Study: A Process Created and Tested Within a Bilingual Deaf Children School

Authors:

Lucineide Rodrigues da Silva, Laura Sánchez Garcia and Luciano Silva

Abstract: The research described in this paper aimed at creating a gesture interface for a 3D virtual museum developed by an Image Processing research group. Faced with the challenge of using sound methodologies to create a genuinely natural interface, the group joined a Human-Computer Interaction group that has worked for seven years on the social inclusion and development of Deaf Communities. In this context, the research investigated the state of the art of Natural Interaction and gesture vocabulary creation in the related literature, and placed the case study at a bilingual school (Brazilian Sign Language and written Portuguese) for deaf children. The paper reports the results of some especially relevant works from the literature and describes the process of developing the vocabulary, together with its validation. As the main contributions of this research, we can mention two points. First, we add a preliminary stage to a well-known author's process whose starting point is the set of expected functions: the observation of potential users interacting with the physical scenario that motivates the innovative virtual uses, in order to investigate the actions and gestures used in the physical environment. Second, we exemplify a more active way of bringing potential users into the stage of defining the right gesture vocabulary, which helps more than merely interacting with users to get their opinion (asking them to match a feature to the gesture they would like to use for it, or demonstrating a gesture and seeing what feature users would expect it to trigger). Finally, the paper establishes the limitations of the results and proposes future research.
Download

Paper Nr: 119
Title:

Playing Cards and Drawing with Patterns - Situated and Participatory Practices for Designing iDTV Applications

Authors:

Samuel B. Buchdid, Roberto Pereira and M. Cecília C. Baranauskas

Abstract: Design has become a challenging activity, in part due to the increasing complexity of the context in which designed solutions will be inserted. Designing iDTV applications is especially demanding because of the scarce theoretical and practical references, problems inherent to the technology, and its social and pervasive aspects. In this paper, we investigate design for iDTV by proposing three participatory practices for supporting the situated design and evaluation of iDTV applications. A case study reports the use of the practices in the real context of a Brazilian broadcasting company, aiming at developing an overlaid iDTV application for one of its TV shows. The practices were articulated in a situated design process that favored the participation of important stakeholders, supporting different design activities: from problem clarification and the organization of requirements to the creation and evaluation of an interactive prototype. The results suggest the practices' usefulness for supporting design activities, indicate the benefits of situated and participatory design for iDTV applications, and may inspire researchers and designers in other contexts.
Download

Paper Nr: 150
Title:

Video Stream Transmodality

Authors:

Pierre-Olivier Rocher, Christophe Gravier, Julien Subercaze and Marius Preda

Abstract: In this paper we introduce the concept of video stream transmodality. Transmodality is the partitioning of an image into regions that are expected to achieve better entropy under different coding schemes, depending on their structural density, at constant bandwidth. Our contribution is a transmoder, i.e., an algorithm able to perform transmodality on a video stream. The transmoder includes various optimizations adapted to video coding. We evaluate our proposal with different kinds of video (in terms of content), and we show that we are able to save up to 8% of bandwidth at the same PSNR in comparison with state-of-the-art video encoding baselines.
Download

Paper Nr: 156
Title:

Assisting Speech Therapy for Autism Spectrum Disorders with an Augmented Reality Application

Authors:

Camilla Almeida da Silva, António Ramires Fernandes and Ana Paula Grohmann

Abstract: Graphics-based Augmentative and Alternative Communication systems are widely used to promote communication by people with Autism Spectrum Disorders. However, studies indicate that some of these people are unable to understand the symbols used. This study discusses the integration of Augmented Reality into communication interventions, by relating elements of Augmentative and Alternative Communication to Applied Behaviour Analysis strategies. An Augmented Reality based interactive system to support interventions is discussed, and a report of its usage in interventions with children with Autism Spectrum Disorders is presented.
Download

Paper Nr: 182
Title:

Adding Semantic Relations among Design Patterns

Authors:

Marcos Alexandre Rose Silva and Junia Coutinho Anacleto

Abstract: Design patterns have been used to support design decisions by solving recurring design problems with the successful solutions stated in the patterns. One of the main characteristics of design patterns is that their content is easy to understand, because they are written in a common, non-specialized language and bring examples that support the comprehension of the solutions. On the other hand, understanding the correlation among design patterns, usually organized through nodes and edges as in a graph, is not a simple task. In this context, this paper presents a semantic approach, based on how humans organize their knowledge, to connect design patterns and define those relationships according to our intellectual structure and function. A feasibility study, described here, shows evidence that semantic relations allow organizing patterns so as to support the comprehension of pattern connections, and that the names of these relations are able to express their meaning.
Download

Paper Nr: 195
Title:

Automatic Interpretation of Biodiversity Spreadsheets Based on Recognition of Construction Patterns

Authors:

Ivelize Rocha Bernardo, André Santanchè and Maria Cecília Calani Baranauskas

Abstract: Spreadsheets are widely adopted as "popular databases", where authors shape their solutions interactively. Although spreadsheets have characteristics that facilitate their adaptation by the author, they are not designed to integrate data across independent spreadsheets. In biology, we observed a significant amount of biodiversity data in spreadsheets treated as isolated entities, with different tabular organizations but high potential for data articulation. In order to promote interoperability among these spreadsheets, we propose in this paper a technique based on recognizing construction patterns in spreadsheets belonging to the biodiversity domain. It can be exploited to identify the spreadsheet at a higher level of abstraction – e.g., it is possible to identify the nature of a spreadsheet as a catalog or a collection of specimens – improving the interoperability process. The paper details evidence of construction patterns in spreadsheets and proposes a semantic representation for them.
Download

Short Papers
Paper Nr: 134
Title:

Applications of the REST Framework to Test Technology Activation in Different ICT Domains

Authors:

Antonio Ghezzi, Andrea Cavallaro, Andrea Rangone and Raffaello Balocco

Abstract: As innovations based on technology multiply, research on technology diffusion evolves both downstream – i.e. covering adoption and use – and upstream – i.e. focusing on the antecedents of diffusion. In the latter domain, the study from Ghezzi et al. (2013) proposed to revisit traditional technology diffusion theory to include the concept of “technology activation”, which investigates the external determinants influencing the introduction of technology-based innovations. Such determinants are included in the Regulation, Environment, Strategy, Technology (REST) framework. This study aims at proposing an application of the REST framework to the Mobile Video Calls and the MiniDisc industries. This application is meant to further validate the framework and test the validity of the concept of technology activation in different ICT domains.
Download

Paper Nr: 169
Title:

New Approaches for Geographic Location Propagation in Digital Photograph Collections

Authors:

Davi Oliveira Serrano de Andrade, Hugo Feitosa de Figueirêdo, Cláudio de Souza Baptista and Anselmo Cardoso de Paiva

Abstract: The integration of GPS in smartphones, tablets and digital cameras has become increasingly common, resulting in a large number of multimedia files. As GPS receivers may not work well indoors, this can produce incorrect locations, very distant from the place where the picture was actually taken, or no location at all. To deal with these inconsistencies, this work proposes two novel location propagation techniques. These techniques were validated through a comparative analysis against two other techniques, using the metrics of precision, recall and accuracy in photograph location propagation. The results show that the correct choice of location propagation technique depends on the importance of each metric and on the system user profile. Besides the choice of the correct technique, we also show that the order of the photographs that will receive the propagated location must be random.
Download

Paper Nr: 173
Title:

A Study on the Last 11 Years of ICEIS Conference - As Revealed by Its Words

Authors:

Julián Esteban Gutiérrez Posada and Maria Cecília Calani Baranauskas

Abstract: The analysis of scientific knowledge documented as journal articles, conference papers or book chapters is important for the research community to build an understanding of their field of interest. The International Conference on Enterprise Information Systems (ICEIS) is currently in its 16th edition, and it has built the state of the art in the field through scientific contributions coming from different focuses, authors and respective institutions. This work investigates the content of ICEIS by analysing data from two sources: the Springer Books series of selected papers from the 2003-2011 conferences, and the last three editions of the Conference Proceedings. For a visual glimpse of the themes present in the contributions, and as a starter for further analysis, we used the expressive power of tag clouds on the paper titles. The results enabled us to build a roadmap of the field, which may inform researchers and practitioners who are starting work in related areas, and even experts who want to build on it.
Download

Paper Nr: 231
Title:

Psychological Effect of Robot Interruption in Game

Authors:

Mitsuharu Matsumoto and Hiroyuki Yasuda

Abstract: In this paper, we report the psychological effect of robot interruption on humans. Although many robots are developed to help people in daily life, such robots sometimes make users live a reactive life. In contrast, some researchers have developed robots that depend on their users. These robots require users' assistance to complete their tasks, and, like children, their dependence forces users to be active. Children not only require our help with their tasks but also interrupt us; in spite of these interruptions, people come to like children and want to interact with them. To achieve long-term human-robot interaction, we expect that adequate interruption of users may have merits over helping users at all times. To investigate this hypothesis, we developed two types of robot and designed a simple game with them. Throughout the experiments, users showed stronger motivation to interact with the robot that interrupted them than with the robot that did not.
Download

Paper Nr: 16
Title:

An Approach to Circumstantial Knowledge Management for Human-Like Interaction

Authors:

Alejandro Baldominos, Javier Calle and Dolores Cuadra

Abstract: This paper proposes the design of a general-purpose, domain-independent knowledge model that formalizes and manages the circumstantial knowledge involved in the human interaction process, i.e., a Situation Model. It is designed to be embodied in a human-like interaction system, thus enriching the quality of the interaction by providing the interaction system with context-aware features. The proposal differs from similar work in that it is supported by spatio-temporal database technology. Additionally, since the proposed model needs to be fed with real knowledge obtained from each specific interaction domain, this paper also proposes an editing tool for acquiring and managing that circumstantial knowledge. The tool also supports simulation over the model to check the correctness and completeness of the acquired knowledge. Finally, some example scenarios are provided to illustrate how the Situation Model works, and to gain perspective on its future possibilities of application in different systems where context-aware services can make a difference.
Download

Paper Nr: 72
Title:

The Response Systems in the Student’s Learning/Teaching Process - A Case Study in a Portuguese School

Authors:

Paula Azevedo and Maria João Ferreira

Abstract: Over the past few years there has been a large investment in information and communication technologies applied to the teaching/learning process. In this context, response systems appear as an innovative tool associated with different methods and strategies. Response systems are technological products designed to support communication and interactivity, generating enormous potential when applied to the teaching/learning process. Student motivation increases when this technology is used, leading to greater participation and consequently to a better and faster acquisition of concepts. Collaborative and cooperative attitudes between student/student, student/teacher and student/class increase when response systems are used in the classroom. The use of response systems and their implications for the teaching/learning process are some of the challenges that teachers, as the driving agents in the implementation of this technology at school, are facing nowadays. This article examines the use of response systems in the student's learning/teaching process, exploring their use in a Portuguese school.
Download

Paper Nr: 192
Title:

Mintzatek, Text-to-Speech Conversion Tool Adapted to Users with Motor Impairments

Authors:

J. Eduardo Pérez, Myriam Arrue and Julio Abascal

Abstract: Text-to-speech (TTS) conversion software tools are capable of generating synthetic voice from written text. These tools are essential for some groups of impaired users who have speech difficulties. In some cases, this limitation is caused by some kind of motor impairment. However, current TTS tools are not fully accessible, as they contain barriers for users with limited mobility in the upper extremities. This paper presents the most significant accessibility barriers detected for this specific user group. In addition, an accessible TTS tool, Mintzatek, has been implemented following a User-Centered Design (UCD) process. The user interface of the tool is adapted to users with limited mobility in the upper extremities. The whole development process has been guided by two motor-impaired users with extensive experience in the use of assistive technologies.
Download

Paper Nr: 193
Title:

A MDA-based Approach for Enabling Accessibility Adaptation of User Interface for Disabled People

Authors:

Lamia Zouhaier, Yousra Hlaoui Bendaly and Leila Jemni Ben Ayed

Abstract: In order to eliminate accessibility barriers that may exist in user interfaces at runtime, we propose, in this paper, to integrate accessibility into a User Interface adaptation infrastructure. Hence, we propose a model-driven approach that automatically generates accessibility-adapted User Interfaces. To reach this goal, based on MDA principles, we develop different meta-model transformations that produce an adapted User Interface model from received accessibility context information and a given non-adapted User Interface.
Download

Paper Nr: 230
Title:

Administration of Government Subsidies Using Contactless Bank Cards

Authors:

Aleksejs Zacepins, Nikolajs Bumanis and Irina Arhipova

Abstract: Subsidization of major and minor government branches is a common strategy aimed at optimizing government funds and increasing residents' welfare and overall infrastructure efficiency, including the public transportation system. Different countries approach subsidization using specific models of calculation and payment; however, most of them use the same subsidy administration approaches: cash transfers or social services. The aim of this paper is to describe proposed improvements to the transport subsidy administration approach through the implementation of e-cards for payments. We propose to improve the subsidy payment procedure by paying the subsidy directly to the subsidy receiver. This allows managing only real transactions, and only the subsidy receiver has an interest in subsidy utilization. The proposed approach to subsidy administration and payments can be realized using the existing banking infrastructure and novel products such as electronic cards.
Download

Paper Nr: 268
Title:

e-Learning Material Presentation and Visualization Types and Schemes

Authors:

Nauris Paulins, Signe Balina and Irina Arhipova

Abstract: Multimedia and content visualisation provide the ability to transform electronic materials into a more dynamic format. This can have a positive effect on learning, but it can also overload the limited information-processing capacity of the human brain. Cognitive load in technology-enhanced learning is closely related to the learning styles of learners. This study examines the learning styles of students and how these relate to students' working memory and cognitive traits. To investigate the learning styles of learners, the Felder-Soloman questionnaire was chosen. It allows analysing students' learning styles with respect to the Felder-Silverman learning style model, which is the most appropriate for web-based learning. The interaction between cognitive traits and learning styles is also analysed. The results of this analysis confirm the importance of multimodal learning in technology-enhanced learning. Some relationships between learners with higher working memory capacity and learners with lower working memory capacity were also demonstrated. The results will help to improve the student model for better adaptivity of learning materials.
Download

Paper Nr: 282
Title:

Expert vs Novice Evaluators - Comparison of Heuristic Evaluation Assessment

Authors:

Magdalena Borys and Maciej Laskowski

Abstract: In this paper the authors compare the results of a website heuristic evaluation performed by a small group of experts with those of a large group of novice evaluators. Normally, heuristic evaluation is performed by a few experts and requires their knowledge and experience to apply the heuristics effectively. However, research involving usability experts is usually very costly. Therefore, the authors propose an experiment that contrasts the results of an evaluation performed by novice evaluators who are familiar with the assessed website against the results obtained from expert evaluators, in order to verify whether they are comparable. The usability of the website was evaluated using the authors' heuristics with an extended list of control questions.
Download

Paper Nr: 285
Title:

Meta Model of e-Learning Materials Development

Authors:

Signe Balina, Irina Arhipova, Inga Meirane and Edgars Salna

Abstract: A multitude of software tools is available for the creation of learning resources. However, the majority of these tools, provided by different software producers, do not have a unified mechanism for searching and reusing existing learning resources or their elements. To solve this problem, structures of descriptive data can be used. The aim of this paper is to describe a meta-model of e-learning objects and e-learning formats that can be used to create e-learning materials compatible with various e-learning standards. The metadata models used in well-known learning resource repositories, and the metadata standards for their structure that provide cross-system compatibility, have been evaluated. The key metadata standards for learning objects were identified and their comparative analysis was performed. The e-learning material logical model was created and the essential requirements for an e-learning object data repository were defined. The technologies and the electronic learning object classification systems they provide were investigated for the future development of e-learning materials. A scheme of the e-LM development process was obtained, which provides for the transformation of different modules.
Download

Paper Nr: 290
Title:

Do Desperate Students Trade Their Privacy for a Hope? - An Evidence of the Privacy Settings Influence on the User Performance

Authors:

Tomáš Obšívač, Hana Bydžovská and Michal Brandejs

Abstract: Maintaining people's privacy should be a top priority, not only in the context of Information Systems (IS) design. Sometimes, however, a certain level of privacy can be traded for a gain in another IS quality or aspect. We present a real-world example of an IS with a user-maintained level of privacy, and evidence of its usage correlated with users' performance. The privacy settings of recent students and applicants in an educational IS were examined. According to our findings, some students voluntarily disclose their presence in the courses they have enrolled in and on the examination dates they have registered for. Surprisingly, the study results of the disclosed students are worse than the results of the undisclosed ones. Consistent with our thesis, disclosed applicants have better entrance exam results.
Download

Paper Nr: 297
Title:

Handling Human Factors in Cloud-based Collaborative Enterprise Information Systems

Authors:

Sergio L. Antonaya, Crescencio Bravo Santos and Jesús Gallardo Casero

Abstract: Many business sectors are currently facing emerging globalized scenarios that require effective coordination of heterogeneous teams involved in complex collaborative processes. For most of those processes, organizations lack software tools that provide adequate support for collaboration needs from a human-centred perspective. In this context, the recently born field of Collaborative Networks has introduced some lines of work that must be explored in depth in order to improve support for collaborative processes in Enterprise Information Systems. Taking this paradigm as a reference, in this article we review the main areas related to collaborative work, enumerate some of the most common collaborative software tools being adopted in organizations worldwide, and finally present a framework for the modeling and development of Cloud Computing based Organizational Collaborative Systems as a solid basis for handling human factors in global organizations.
Download

Paper Nr: 306
Title:

A Study on the Use of Personas as a Usability Evaluation Method

Authors:

Thaíssa Ribeiro and Patrícia de Souza

Abstract: The user modelling technique known as personas has obtained excellent results over recent years. The literature on this theme shows different ways of using the concept of personas at the moment of software conception; however, the good results obtained with this technique suggest a more diverse use of the method. In this context, this study performed a usability evaluation of the privacy settings of Facebook, the most popular social network worldwide, aiming to verify the use of the personas concept as a usability assessment technique.
Download

Area 6 - Enterprise Architecture

Full Papers
Paper Nr: 78
Title:

Evolving a Core Banking Enterprise Architecture - Leveraging Business Events Exploitation

Authors:

Beatriz San Miguel, Jose M. del Alamo and Juan C. Yelmo

Abstract: Business information has become a critical asset for companies and it has even more value when obtained and exploited in real time. This paper analyses how to integrate this information into an existing banking Enterprise Architecture, following an event-driven approach, and entails the study of three main issues: the definition of business events, the specification of a reference architecture, which identifies the specific integration points, and the description of a governance approach to manage the new elements. All the proposed solutions have been validated with a proof-of-concept test bed in an open source environment. It is based on a case study of the banking sector that allows an operational validation to be carried out, as well as ensuring compliance with non-functional requirements. We have focused these requirements on performance.
Download

Paper Nr: 93
Title:

ETA Framework - Enterprise Transformation Assessment

Authors:

Ricardo Dionísio and José Tribolet

Abstract: In this paper we present the η Framework which aims at enabling a holistic vision of Enterprise Transformation (ET) related to the adoption of Technological Artefacts. This framework is based on a Benefit-Driven approach to ET led by Stakeholders. Therefore, we focus on three interrelated components: (1) Stakeholders and corresponding classification according to their level of influence and attitude towards an artefact; (2) ET which encompasses five dimensions, namely Governance Changes, Business Model Changes, Business Process Changes, Structure Changes, and Resource Changes; and (3) Benefits classified according to their different degree of explicitness and hence importance to each stakeholder. In order to assess ET in a feasible way, we advocate mapping every single change with its corresponding benefit. Subsequently, these pairs of changes and benefits are assigned to a group of “Change Owners”, who are responsible for ensuring that ET is measured and successfully achieved. Finally, we summarize the four phases of ET Lifecycle (Envision, Engage, Transform, and Optimise phase) as well as the corresponding steps required to properly apply the η Framework.
Download

Paper Nr: 103
Title:

PRIMROSe - A Tool for Enterprise Architecture Analysis and Diagnosis

Authors:

David Naranjo, Mario Sánchez and Jorge Villalobos

Abstract: Enterprise Models are the central asset that supports Enterprise Architecture, as they embody enterprise and IT knowledge and decisions. Static analysis over this kind of model is made by inspecting certain properties and patterns, with the goal of gaining understanding and supporting decision making through evidence. However, this is not a straightforward process, as the model in its raw form is rarely suitable for analysis due to its complexity and size. As a consequence, current approaches focus on partial views and queries over this model, leading to partial assessments of the architecture. In this paper, we propose a different approach to EA analysis, which consists of the incremental assessment of the architecture based on the interaction of the user with visualizations of the whole model. We implemented our approach in a visual analysis tool, PRIMROSe, where analysts can rapidly prototype custom functions that operate on topological properties of the model, combine partial insights for sounder assessments, associate these findings with visual attributes, and interact with the model under several visualization techniques.
Download

Paper Nr: 120
Title:

Deriving Service Level Agreements from Business Level Agreements - An Approach Towards Strategic Alignment in Organizations

Authors:

Vitor Almeida Barros, Marcelo Fantinato, Guilherme M. B. Salles and João Porto de Albuquerque

Abstract: Business Process Management (BPM) can help organizations in their attempts to align strategies between business and information technology areas. It is necessary to address not only functional properties during the BPM life-cycle, but also process quality and operating constraints, which are usually grouped together as Non-Functional Properties (NFP). However, the most prestigious languages for business process modelling are unable to represent these NFPs, and this creates a gap between the degree of success in identifying functional properties and NFPs, as well as between the process modelling and its implementation. We have attempted to fill this gap by proposing the StrAli-BPM (Strategic Alignment with BPM) approach, which is divided into two parts – BLA@BPMN and BLA2SLA: the former seeks to extend the BPMN language by embodying NFPs, in the form of BLAs (Business Level Agreements); and the latter semi-automatically derives a set of SLAs (Service Level Agreements), linked with web services, from a pre-defined BLA. This paper outlines the BLA2SLA part of the StrAli-BPM approach. In addition, it includes a prototype tool developed to validate BLA2SLA and the results of an experiment undertaken to evaluate it.
Download

Paper Nr: 138
Title:

An Assessment Framework for Business Model Ontologies to Ensure the Viability of Business Models

Authors:

A. D'Souza, N. R. T. P. van Beest, G. B. Huitema, J. C. Wortmann and H. Velthuijsen

Abstract: Organisations operate in an increasingly dynamic environment. Consequently, business models span several organisations, dealing with multiple stakeholders and their competing interests. As a result, the enterprise information systems supporting this new market setting are highly distributed, and their components are owned and managed by different stakeholders. For successful businesses to exist, it is crucial that their enterprise architectures are derived from and aligned with viable business models. Business model ontologies (BMOs) are effective tools for designing and evaluating business models. However, the viability perspective has been largely neglected. In this paper, current BMOs are assessed on their capabilities to support the design and evaluation of viable business models. To this end, a list of criteria is derived from the literature to evaluate BMOs from a viability perspective. These criteria are subsequently applied to six well-established BMOs to identify the BMO best suited for the design and evaluation of viable business models. The analysis reveals that, although none of the BMOs satisfy all the criteria, e3-value is the most appropriate BMO for designing and evaluating business models from a viability perspective. Furthermore, the identified deficits indicate clear areas for enhancing the assessed BMOs from a viability perspective.
Download

Paper Nr: 164
Title:

IT and Data Governance - Towards an Integrated Approach

Authors:

Ivonne Kroeschel and Sang-Kyu Thomas Choi

Abstract: Whereas IT Governance has continuously been a major topic on top of management agendas for years, the issue of Data Governance (DG) has only been marginally investigated in scientific research so far. Existing research on DG also often takes a rather one-sided perspective by focusing on data quality management issues only or neglecting business practice needs. However, we argue that DG as a highly relevant issue for both research and practice has to be integrated into a broader context of governance functions within the organization. We propose a consolidated role-model linking IT and Data Governance roles and tasks which is meant to be a first step towards the extension of Corporate and IT Governance functions by an additional data-centric perspective and thus explicitly accounting for the role of data in generating business value. The model is derived from scientific and practical literature sources as well as case studies to integrate both scientific and practical insights on DG.

Paper Nr: 206
Title:

Supporting Process Model Development with Enterprise-Specific Ontologies

Authors:

Nadejda Alkhaldi, Sven Casteleyn and Frederik Gailly

Abstract: Within an enterprise, different models – even of the same type – are typically created by different modellers. These models use different terminology, are based on different semantics, and are thus hard to integrate. A possible solution is to use an enterprise-specific ontology as a reference during model creation. This allows basing all the models created within one enterprise upon a shared semantic repository, mitigating the need for model integration and promoting interoperability. The challenge here is that the enterprise-specific ontology can be very extensive, making it hard for the modeller to select the appropriate ontology concepts to associate with model elements. In this paper we focus on process modelling, and develop a method that uses four different matching mechanisms to suggest the most relevant enterprise-specific ontology concepts to the modeller while the model is being created. The first two utilize string and semantic matching techniques (i.e., synonyms) to compare the BPMN construct’s label with enterprise-specific ontology concepts. The other two exploit the formally defined grounding of the enterprise ontology in a core ontology to make suggestions, based on the BPMN construct type and its relative position in the model. We show how our method leads to semantically annotated process models, and demonstrate it using an ontology in the financial domain.
Download

Paper Nr: 239
Title:

Understanding Enterprise Architecture through Bodies of Knowledge - A Conceptual Model

Authors:

Camila Leles de Rezende Rohlfs, Gerd Gröener and Fernando Silva Parreiras

Abstract: There is extensive interest in modeling enterprises from a holistic perspective, showing not only the IT infrastructure of an organization, but also how this IT infrastructure supports business processes and how it contributes to the realization of products and services. This interest has led to a large number of papers reporting on Enterprise Architecture. In this paper, we propose a new conceptual model to describe enterprises from a holistic perspective. The proposed model is based on relationships between the ArchiMate language and bodies of knowledge (BOKs). The conceptual model allows one to understand how the bodies of knowledge relate to the enterprise architecture. For this, we propose criteria that relate the bodies of knowledge to the perspectives defined in the ArchiMate language. Based on the proposed model, this work shows how bodies of knowledge can be used cooperatively inside an enterprise architecture to improve the quality of internal processes and generate value for the interests that support strategic planning. Studies indicate the benefits of ArchiMate in enterprise modeling, and existing studies have already shown the benefits of using BOKs to represent and share available knowledge in and across enterprises. In this work, we go one step further and show how the conceptual alignment of BOKs and ArchiMate advances the understanding of enterprise architectures.
Download

Short Papers
Paper Nr: 20
Title:

Behavior-based Decomposition of BPMN 2.0 Control Flow

Authors:

Jan Kubovy, Dagmar Auer and Josef Küng

Abstract: The Business Process Model and Notation (BPMN) is a well-established industry standard in the area of Business Process Management (BPM). However, even with the current version 2.0 of BPMN, problems and contradictions in the underlying semantics of the meta-model can be identified. This paper shows an alternative approach for modeling the BPMN meta-model, using behavior-based decomposition. The focus in this paper is on control flow. We use Abstract State Machines (ASM) to describe the decomposition of the merging and splitting behavior of the different BPMN flow node types, such as parallel, exclusive, inclusive and complex, as defined in the BPMN 2.0 standard, resulting in behavior patterns. Furthermore, an example of the composition of different gateway types using these behavior patterns is given.
Download

Paper Nr: 33
Title:

Testing Conformance of EJB 3 Enterprise Application Servers

Authors:

Sander de Putter, Serguei Roubtsov and Alexander Serebrenik

Abstract: Enterprise JavaBeans (EJB) is a component technology used for enterprise application development. EJB is currently implemented by such application servers as GlassFish, OpenEJB, JBoss, WebLogic and Apache Geronimo. Throughout its entire history, EJB has claimed adherence to the “write once, run anywhere” philosophy of Java, suggesting that an application developed for and deployed on one application server should be easily portable to a different application server. Therefore, one could expect different application servers to adhere to the EJB specification. Adherence to this and related Java EE specifications is the subject of the “Java EE 6 Full Profile” compatibility testing carried out by Oracle. However, anecdotal evidence of discrepancies between the specification and certified implementations, such as GlassFish, has been reported in the literature. In this paper we present an approach that allows one to go beyond the level of anecdotal knowledge and test requirements for EJB application servers with a focus on portability. We apply the methodology developed to test how well two popular “Java EE 6 Full Profile”-compatible EJB application servers, GlassFish and JBoss, conform to the requirements in the EJB specification. The results are alarming: both application servers failed a number of tests, violating the specification. Moreover, in GlassFish, conformance to a requirement varies depending on whether a local or a remote application is used. Lack of conformance to the EJB specification compromises the portability of EJB applications, deviates from the portability philosophy of Java, leads to unexpected behaviour, and hinders the learning process of novice EJB developers.
Download

Paper Nr: 68
Title:

A Practical Framework for Business Process Management Suites Selection Using Fuzzy TOPSIS Approach

Authors:

Ahad Zare Ravasan, Saeed Rouhani and Homa Hamidi

Abstract: Nowadays, there is growing interest in Business Process Management Suite (BPMS) implementation in organizations. In order to implement a BPMS in an organization successfully, it is essential to select a suitable BPMS. The evaluation and selection of BPMS packages is a complicated and time-consuming decision-making process. This paper presents an approach for dealing with such a problem, introducing functional, non-functional and fuzzy evaluation methods for BPMS selection. The presented BPM-lifecycle-based approach breaks down BPMS selection criteria into two broad categories, namely functional requirements (process strategy development, process discovery, process modeling, process design, process deployment, process operation and analysis) and non-functional requirements (quality, technical, vendor, implementation), comprising 48 selection criteria in total. A Fuzzy Technique for Order Preference by Similarity to Ideal Solution (FTOPSIS) is customized for BPMS selection based on the identified criteria. The proposed approach is applied to a local Iranian company in the oil industry in order to select and acquire a BPMS, and the provided numerical example illustrates the applicability of the approach for BPMS selection. The approach can help practitioners assess BPMSs more properly and make better software acquisition decisions.
Download
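The paper above customizes a fuzzy variant of TOPSIS (FTOPSIS) over 48 criteria; as a rough illustration of the family of methods it builds on, the sketch below shows only the crisp TOPSIS core (vector normalization, weighted ideal and anti-ideal points, closeness coefficient). All scores, weights, and criterion counts here are hypothetical and do not come from the paper.

```python
import math

def topsis(matrix, weights, benefit):
    # matrix: rows = alternatives (candidate BPMSs), columns = criterion scores
    # weights: criterion weights; benefit[j] is True if higher is better for criterion j
    ncols = len(weights)
    # Vector-normalize each column, then apply the criterion weights
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncols)]
    v = [[row[j] / norms[j] * weights[j] for j in range(ncols)] for row in matrix]
    cols = list(zip(*v))
    # Ideal point takes the best value per criterion, anti-ideal the worst
    ideal = [max(c) if benefit[j] else min(c) for j, c in enumerate(cols)]
    anti = [min(c) if benefit[j] else max(c) for j, c in enumerate(cols)]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((row[j] - ideal[j]) ** 2 for j in range(ncols)))
        d_neg = math.sqrt(sum((row[j] - anti[j]) ** 2 for j in range(ncols)))
        # Closeness coefficient in [0, 1]; higher means closer to the ideal
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Hypothetical scores for three candidate suites on three benefit criteria
scores = topsis([[7, 9, 6], [8, 7, 8], [6, 8, 9]],
                weights=[0.5, 0.3, 0.2], benefit=[True, True, True])
best = scores.index(max(scores))
```

The fuzzy variant used in the paper replaces the crisp scores with triangular fuzzy numbers elicited from decision makers, but the ranking logic is the same.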

Paper Nr: 91
Title:

Architecture Principles Compliance Analysis

Authors:

João Alves, André Vasconcelos and Pedro Sousa

Abstract: Architecture principles play a key role in enterprise architecture evolution. However, the architecture does not always address the principles' intentions, which can result in unplanned deviations. The related work reveals the absence of an architecture analysis based on architecture principles. Accordingly, this research proposes an architecture analysis to evaluate the architecture's compliance with architecture principles. The proposed analysis, based on ArchiMate, consists in formalizing each principle so that its expected impact is identified. This analysis makes it possible to identify the elements of an enterprise architecture description that comply with a principle. The analysis has been applied in one of the largest Portuguese insurance companies to analyse the compliance of some specific architectures. The feasibility of the analysis establishes this research as a contribution to the architecture principles field.
Download

Paper Nr: 97
Title:

Modeling Value Creation with Enterprise Architecture

Authors:

P. M. Singh, H. Jonkers, M. E. Iacob and M. J. van Sinderen

Abstract: Firms may not succeed in business if strategies are not properly implemented in practice. Every firm needs to know, represent and master its value creation logic, not only to stay in business but also to keep growing. This paper focuses on an important topic in the field of strategic management and economics: value creation. We develop a value creation framework and then use the ArchiMate enterprise architecture modeling standard to model value creation, using a four-step method. The output of this method is a new model, the value creation model, which represents value creation by a firm. We demonstrate the use of the method with an example case. Potential uses of the value creation model, including traceability, sensitivity analysis and a networked enterprise architecture, are discussed in detail.
Download

Paper Nr: 129
Title:

Business Rules for Business Governance

Authors:

Naveen Prakash, Deepak Kumar Sharma and Dheerendra Singh

Abstract: To reduce the gap between the business-oriented view of business rules held by business people and the technical orientation of technical people, we introduce a Business layer on top of the CIM layer of MDA. This facilitates an investigation into the features of business-oriented business rules. We underpin our work with a four-dimensional framework of business rules consisting of the domain, system, representation, and application dimensions. Since our focus is on features of business rules, our interest is in the domain dimension. This dimension provides a number of attributes of business rules, but we concentrate on the governance/guidance attribute to develop the features needed for capturing this attribute in business rules. We express governance concepts at three levels. The Governance model is the topmost level and consists of governance objects, governance criteria, and the governance relationship between them. We obtain BIGm by instantiating the Governance model based on concepts of the Business Motivation Model. Finally, BIGm is instantiated to yield BOGm. We illustrate our business rules with examples from the library management domain.
Download

Paper Nr: 139
Title:

Extreme Enterprise Architecture Planning (XEAP) - Extrapolating Agile Characteristics to the Development of Enterprise Architectures

Authors:

Hugo Ramos and André Vasconcelos

Abstract: When developing enterprise architectures, just as with software products, companies have to deal with constantly growing client demand for faster results while facing, at the same time, great uncertainty about the requirements surrounding the project. This paper investigates the similarities between the difficulties faced in the enterprise architecture (EA) and software development industries, and proposes an extension to an existing EA development methodology in order to address those difficulties using characteristics of agile software development methodologies. This extension introduces agile characteristics such as several iterations, solution partitioning and constant client feedback in order to deliver faster results and respond better to changing requirements, compared with the standard methodologies. To do so, the first iteration is based on a reference model, and the following ones follow the Enterprise Architecture Planning (EAP) methodology steps and are adaptable to the business itself. After presenting our proposal, we demonstrate the methodology by applying it to a real-world problem of a local organization called Cascais Ambiente, responsible for maintaining environmental health in the city of Cascais.
Download

Paper Nr: 143
Title:

Extending BPMN 2.0 Meta-models for Process Version Modelling

Authors:

Imen Ben said, Mohamed Amine Chaâbane, Eric Andonoff and Rafik Bouaziz

Abstract: This paper introduces BPMN4V (BPMN for Versions), an extension of BPMN for modelling the variability (flexibility) of processes before their use in an organizational context or before their publication over the cloud as services. More precisely, this paper motivates the importance of modelling process variability using versions and introduces the versioning pattern used to reach this objective. It also presents BPMN4V, describing the proposed extensions to the BPMN 2.0 meta-model, considering versions of both intra- and inter-organizational processes. An example illustrating the instantiation of the proposed meta-model is given for each kind of process.
Download

Paper Nr: 179
Title:

Declarative Versus Imperative Business Process Languages - A Controlled Experiment

Authors:

Natália C. Silva, César A. L. de Oliveira, Fabiane A. L. A. Albino and Ricardo M. F. Lima

Abstract: It has been argued that traditional workflows lack the flexibility to cope with the complex and changing environments found in several business domains. The declarative approach emerged with the aim of enabling more flexible business process management systems. Processes are designed in terms of activities and rules that constrain their execution. As such, declarative models are less rigid and prescriptive than workflows, since this approach focuses on modeling what must be done but not how. Despite these arguments, there is no quantitative evidence that the benefits provided by current declarative approaches outperform the features of traditional workflows. In this work, we present the results of a controlled experiment conducted to empirically compare the workflow and declarative approaches to business process modeling. Our findings suggest that there is no significant difference between adopting one approach or the other.
Download

Paper Nr: 181
Title:

Ontologies and Information Visualization for Strategic Alliances Monitoring and Benchmarking

Authors:

Barbara Livieri, Mario A. Bochicchio and Antonella Longo

Abstract: Cooperation among firms is universally seen as a catalyst of competitive advantage. However, 50% of alliances fail. This is often due to the lack of tools and methods to quantitatively track the effects of Strategic Alliances (SAs) on firms, to the inherent complexity of a comprehensive analysis of SAs, and to the difficulty of linking strategic alliance goals with Key Performance Indicators (KPIs). Nonetheless, performance management and performance measurement have a key role in assessing the achievement of alliances’ goals and the impact of SAs on firms. In this context, the aim of this paper is to discuss how advanced information processing techniques (e.g. ontologies, taxonomies and information visualization) can be used for SA monitoring and benchmarking. In particular, we propose an ontology for KPIs, rendered through data visualization tools, and a taxonomy for SAs. This allowed us to develop an interpretative framework able to support both SA and firm managers in understanding how to monitor their alliance and which KPIs to use. Finally, we discuss the pertinence and coherency of the approach with reference to the literature.
Download

Paper Nr: 228
Title:

Formalization of Validation Extension Metamodel for Enterprise Architecture Frameworks

Authors:

Samia Oussena, Joe Essien and Peter Komisarczuk

Abstract: Formalization of Enterprise Architecture (EA) concepts as a whole continues to constitute a major obstacle to understanding the principles that guide its adaptations. The ubiquitous use of terms such as models, meta-models, meta-meta-models and frameworks in the description of EA taxonomies, and of the relationships between the various artefacts, has been neither exclusive nor cohesive. Consequently, variant interpretations of schemas, conflicting methodologies and disparate implementations have ensued, and incongruent simulation of the alignment between dynamic business architectures, heterogeneous application systems and validation techniques has been prevalent. The divergent and widespread paradigm of EA domiciliation in practice makes it even more challenging to adopt generic formalized constructs in which models can be interpreted and verified (Martin et al., 2004). The unavailability of a unified EA modelling language able to describe a wide range of Information Technology domains compounds these challenges, leading to a proliferation of EA perspectives. This paper presents a formalization of concepts towards addressing the validation concerns of EA through the use of ontologies and queries based on constraints specified in the model’s motivation taxonomy. The paper is based on experimental research and grounded on EA taxonomies created using the ArchiMate modelling language and an open source web ontology. It delves into the use of semantic triples, Resource Description Framework Schema and relational graphs to map EA taxonomy artefacts into classes and slots using an end-to-end conventional formalization approach applicable within heterogeneous EA domains. The paper also expounds a proposal that postulates implementation of the approach, enables formalized traceability of EA validation, and contributes to effective validation of EA through refined taxonomy semantics, mappings and alignment of motivation.
Download

Paper Nr: 229
Title:

A Knowledge Management Framework for Knowledge-Intensive SMEs

Authors:

Thang Le Dinh, Thai Ho Van and Éliane Moreau

Abstract: Nowadays, knowledge-intensive enterprises, which offer knowledge-based products and services to the market, play a vital role in the knowledge-based economy. Effective knowledge management has become a key success factor for those enterprises in particular and the whole economy in general. Knowledge management is important for both large and small-to-medium knowledge-intensive enterprises; however, there is still little focus on this topic for knowledge-intensive small and medium-sized enterprises (SMEs). In this study, the authors propose an integrated framework as a foundation for designing an appropriate knowledge management solution for knowledge-intensive SMEs. The paper begins with the theoretical background and the research design, and then continues with the characteristics of the framework. Accordingly, the principal components of the framework corresponding to design science research, such as the constructs, model, method and instantiations, are illustrated. The paper ends with conclusions and future work.
Download

Paper Nr: 253
Title:

Understanding the Role of Business – IT Alignment in Organisational Agility

Authors:

Charles Crick and Eng Chew

Abstract: Extant research shows business-IT alignment to be both an enabler and an inhibitor of overall organisational agility, and has pointed to the need for finer-grained perspectives to fully elucidate the relationship. This paper posits the view that, firstly, current approaches to reasoning about where the rigidities that prevent organisational agility are present lack both granularity and a sound ontology; and secondly, that in order to obtain the necessary granular view, the socio-technical dimension of the business-IT relationship must be examined. An initial conceptual model behind ongoing research into this topical problem area is presented.
Download

Paper Nr: 257
Title:

Models to Aid Decision Making in Enterprises

Authors:

Suman Roychoudhury, Asha Rajbhoj, Vinay Kulkarni and Deepali Kholkar

Abstract: Enterprises are complex heterogeneous entities consisting of multiple stakeholders, each performing a particular role to meet the desired overall objective. With the increased dynamics that enterprises are witnessing, it is becoming progressively difficult to maintain the synchrony within an enterprise needed for it to function effectively. Current practice is to rely on human expertise, which is expensive in terms of time, cost and effort, and also lacks certainty. The use of machine-manipulable models that can aid in pro-active decision making could be an alternative. In this paper, we describe such a prescriptive decision-making facility that makes use of different modeling techniques, and illustrate it with an industrial case study.
Download

Paper Nr: 271
Title:

Towards a General Framework for Business Tests

Authors:

Marijke Swennen, Benoît Depaire, Koen Vanhoof and Mieke Jans

Abstract: Testing and controlling business processes, activities, data and results is becoming increasingly important for companies. Based on the literature, business tests can be divided into three domains, i.e. performance, risk and compliance, and separate domain-specific frameworks have been developed for each. These different domains and frameworks hint at some aspects that need to be taken into account when managing business tests in a company. In this paper we identify the most important concepts concerning business tests and their management, and we provide a first conceptual business test model. We do this based on an archival research study in which we analyse business tests performed by an international consultancy company.
Download

Paper Nr: 279
Title:

A Protocol for Command and Control Systems Integration

Authors:

Patrick Lara and Ricardo Choren

Abstract: Integration of Command and Control (C2) systems is a real need in any Joint Operation. Solutions are typically based on data exchange, usually using the Joint Consultation, Command and Control Information Exchange Data Model (JC3IEDM), a data model established by NATO. The use of this model with service-oriented architecture (SOA) has been consolidated as one of the key factors in achieving command and control systems integration. However, these solutions require great processing capacity and broadband networks, which are usually hard to find in military tactical environments. This paper presents the available approaches to systems integration, comparing their technologies and pointing out their advantages and disadvantages, and finally proposes requirements for a generic protocol to allow message handling between command and control systems using the JC3IEDM.
Download

Paper Nr: 299
Title:

Business-IT Alignment and Service Oriented Architecture - A Proposal of a Service-Oriented Strategic Alignment Model

Authors:

Llanos Cuenca, Andrés Boza, Angel Ortiz and Jos J. M. Trienekens

Abstract: Since its inception, SOA has been postulated as the solution to the problems of alignment between business and IT. However, these problems still remain, especially at external level where the business strategy should be aligned with the IT strategy. Based on the Henderson and Venkatraman’s strategic alignment model and the literature review of strategic aspects in SOA this paper proposes a Service-Oriented Strategic Alignment Model (SOSAM) in order to achieve business services and information technology external strategic alignment. The business strategy includes the definition of business service scope, the distinctive business service competencies and the business service governance; and the IT strategy includes the definition of technology service scope, the service systemic competencies and the service governance.
Download

Paper Nr: 39
Title:

Investigation of IT Sourcing, Relationship Management and Contractual Governance Approaches - State of the Art Literature Review

Authors:

Matthias Wißotzki, Felix Timm, Jörn Wiebring and Hasan Koç

Abstract: The field of IT sourcing and its related management disciplines, such as supplier and contract management, is gaining increasing attention from researchers around the world, as reflected in the development of research activity. Influence factors as well as vital competences related to IT sourcing success are investigated. This analysis intends to give a transparent and comprehensive overview of recent research topics such as relationship management, reveals limitations, and analyses new research phenomena such as multisourcing.
Download

Paper Nr: 62
Title:

Towards Multi-level Organizational Control Framework to Manage the Business Transaction Workarounds

Authors:

Sérgio Guerreiro

Abstract: Organizations strive to find solutions that perform their business processes more efficiently and effectively. Steering the organizational operation using a priori prescribed models derives from classical control engineering theories. These approaches are valid for the business information systems domain but require contextual adaptation to deal with concerns such as change management. In the context of business transactions, the models prescribe the design freedom restrictions for producing a new service or product, and share a common understanding between stakeholders who have diverse interpretations of it. However, for many and diverse reasons, organizational actors perform workarounds at operation time that can differ substantially from the previously prescribed business transaction models. This paper reviews the related work on organizational control and synthesizes it in a conceptual framework. The goal is to establish a set of concepts, and their relationships, to identify workarounds occurring at operation time and then feed reviewed models back to organizational management, where the control solution encompasses three competence levels: enterprise governance, business rules and access control.
Download

Paper Nr: 75
Title:

Using Activity Diagrams and DEMO to Capture Relevant Measures in an Organizational Control - A Case Study on Remote Assistance Service

Authors:

António Gonçalves, Pedro Sousa and Anacleto Correia

Abstract: This paper proposes a way to find dysfunctions in the operation of an organization. For that, it uses relevant work done in DEMO (Dynamic Essential Modelling of Organizations) and, as a novelty, introduces some Activity Theory concepts such as the contradiction concept. The DEMO method for constructing an organization’s control is named GOD (Generation, Operationalization & Discontinuation). The GOD method aims at the diagnosis of the organization’s dysfunctions, i.e., deviations from the expected operation, and also prepares the organization for an adequate response to such dysfunctions, so that it can continue to work. Dysfunctions are found by declaring control rules (i.e., norms) over certain organization measures and monitoring feasible values for those rules. For example, a measure could be the “income per month” of the organization, a norm could be “min income per month” and a viability value could be “higher than 5000 Euros”. Notwithstanding the existence of GOD, it is not clear how to choose the proper measures, norms and control values, and how to relate them to the operation of the organization. To solve this challenge, we use Activity Theory concepts such as contradictions to propose a method for choosing and monitoring useful measures, norms and viability values. We apply the proposed solution to a real case study of a service (www.True-Kare.com) that allows someone to provide remote assistance to another person by using a mobile phone.
Download

Paper Nr: 81
Title:

Collaborative Evaluation to Build Closed Repositories on Business Process Models

Authors:

Hugo Ordoñez, Juan Carlos Corrales, Carlos Cobos, Leandro Krug Wives and Lucineia Thom

Abstract: Nowadays, many companies define, model and use business processes (BP) for several tasks. BP management has become an important research area, and researchers have focused their attention on the development of mechanisms for searching BP models in repositories. Despite the positive results of the current mechanisms, there is no defined collaborative methodology for creating a closed evaluation repository for these search mechanisms. This kind of repository contains closed predefined lists of BPs representing queries and the ideal answers to these queries with the most relevant BPs, based on a set of evaluation metrics. This paper describes a methodology for creating such repositories. To apply the proposed methodology, we built a Web tool that allows a set of evaluators to make relevance judgments in a collaborative way for each of the items returned for the predefined queries. The evaluation metrics used can measure the degree of consensus in the results, thereby confirming the feasibility of the methodology for creating an open-access, scalable and expandable closed BP repository with new BP models that can be reused in future research.
Download

Paper Nr: 83
Title:

Evaluation Concept of the Enterprise Architecture Management Capability Navigator

Authors:

Matthias Wißotzki and Hasan Koç

Abstract: Organizational knowledge is a crucial aspect of the strategic planning of an enterprise. Enterprise architecture management (EAM) deals with all perspectives of the enterprise architecture with regard to planning, transformation and monitoring. Maturity models are established instruments for assessing these processes in organizations. Applying the maturity model development process (MMDP), we are in the course of constructing a new maturity model. In this work, we first concretize the building blocks of the MMDP and present the first initiations of the Enterprise Architecture Capability Navigator (EACN). Afterwards, we discuss the need for an evaluation concept and present the results of the first EACN evaluation iteration.
Download

Paper Nr: 92
Title:

Towards Business Process Model Extension with Cost Perspective Based on Process Mining - Petri Net Model Case

Authors:

Dhafer Thabet, Sonia Ayachi Ghannouchi and Henda Hajjami Ben Ghézala

Abstract: Business process improvement has always been a major concern of organizations seeking to enhance efficiency, flexibility and competitiveness. Business process management is a contemporary approach which includes different techniques to support business process improvement along the phases of its lifecycle. Process mining is a maturing technology based on event log analysis, giving insight into what is really happening at the operational level. Process model extension is one of the three process mining types, and it provides different perspectives on the business process. Moreover, the cost perspective is among the relevant information needed by organization managers to make suitable process improvement decisions. In this paper, we propose an approach for extending a Petri net model with cost information using the process mining extension technique. In addition, we present the main details of its implementation and the test case results.
Download

Paper Nr: 112
Title:

Operational Alignment Framework for Improving Business Performance of an Organisation

Authors:

Jakkapun Kwanroengjai, Kecheng Liu, Chekfoung Tan and Lily Sun

Abstract: Business strategies are vital for an organisation in today’s dynamic business environment. However, most organisations still face issues in effectively executing their business strategies. The misalignment of operational factors, such as people, business operations, and IT systems, is one major problem that hinders the best performance of an organisation and degrades the value of business strategies. Therefore, this paper aims to produce an operational alignment framework to ensure that the business and IT components are operationally aligned. It contains a set of operational alignment components and their assessment methods. An operational alignment map is produced to identify the root cause of the alignment issues in an organisation. A case study in a Thai University Healthcare Centre is used to validate the operational alignment framework.
Download

Paper Nr: 137
Title:

Assurance in Collaborative ICT-enabled Service Chains

Authors:

Y. W. van Wijk, N. R. T. P. van Beest, K. F. C. de Bakker and J. C. Wortmann

Abstract: Assurance is an essential condition for trust in a collaborative ICT-enabled service business. Insufficient assurance can cause organizational vulnerability, inefficiency and major loss of business revenues. Complex and extensive composite ICT-enabled services in particular are confronted with a major increase in business and discontinuity risks. To mitigate these risks, this paper presents a conceptual solution for assurance and governance in ICT-enabled service chains, by designing an assurance framework based on business strategy, the management of risk-control obligations, and the control and audit of the service chain. After presenting the design of the new assurance framework, the business implications are explained. In this context, the feasibility and relevance of the framework are validated at two large public companies in the Netherlands. The paper shows that a new assurance and governance approach for ICT-enabled service chains is required in practice and theory, where assurance can be obtained via the conceptual assurance framework for ICT-enabled service chains.
Download

Paper Nr: 187
Title:

An Integrated Data Management for Enterprise Systems

Authors:

Martin Boissier, Jens Krueger, Johannes Wust and Hasso Plattner

Abstract: Over the past decades, higher performance demands on enterprise systems have led to increased architectural complexity. Demands such as real-time analytics or graph computation add further complexity to the technology stack by adding redundancy and distributing business data over multiple components. We argue that enterprises need to simplify data management and reduce complexity as well as data redundancy. We propose a structured approach using the shearing layer concept with a unified data management to improve adaptability as well as maintainability.
Download

Paper Nr: 211
Title:

Methodology for Developing and Application Outsourcing in the Cloud Using SOA

Authors:

Ana Gonzalo Nuño and Concepción M. Gascueña

Abstract: New technologies, such as the new Information and Communication Technologies (ICT), break new paths and redefine the way we understand business; Cloud Computing is one of them. On-demand resource gathering and the per-usage payment scheme are now commonplace and allow companies to save on their ICT investments. Despite the importance of this issue, we still lack methodologies that help companies develop applications oriented towards exploitation in the Cloud. In this study we aim to fill this gap and propose a methodology for the development of ICT applications, which are directed towards a business model, and their subsequent outsourcing in the Cloud. In the former, the development of SOA applications, we take as a baseline scenario a business model from which to obtain a business process model; to this end, we use software engineering tools. In the latter, the outsourcing, we propose a guide to facilitate uploading business models into the Cloud; to this end, we describe a SOA governance model, which controls the SOA. Additionally, we propose a Cloud governance that integrates Service Level Agreements (SLAs), SOA governance, and Cloud architecture. Finally, we apply our methodology to an example illustrating our proposal. We believe that our proposal can be used as a guide/pattern for the development of business applications.
Download

Paper Nr: 232
Title:

Cyber-physical Information Systems for Enterprise Engineering - Cyber-physical Applications Timing

Authors:

Miroslav Sveda and Patrik Halfar

Abstract: This paper discusses the role of time in industrial cyber-physical applications of information systems, using SCADA as a representative example. It restates the basics of the notion of time, which is important not only in general but particularly in the enterprise domain, and focuses on industrial applications. The manuscript also introduces SCADA concepts and relates them to cyber-physical system architectures. The case study demonstrates some time-related techniques in the frame of a simple but real-world application.
Download

Paper Nr: 243
Title:

Trade off Between Risk Management, Value Creation and Strategic Alignment in Project Portfolio Management

Authors:

Khadija Benaija and Laila Kjiri

Abstract: Project portfolio management allows a company to select, prioritize, integrate, manage and control its projects in a multi-project context. In this paper, we consider a very important part of project portfolio management, namely the selection of projects. Indeed, the greatest challenge for managers today is to be sure that the projects initiated achieve the strategic and financial objectives of the company. Our paper proposes a framework for project selection based on three main criteria: value creation, risk management and alignment with the business strategy. After a brief review of the literature regarding the basic concepts used, we propose bivariate analyses: risk-value, risk-alignment and value-alignment. Our contribution in this paper is to design a framework that realizes the trade-off between the three criteria.
Download

Paper Nr: 244
Title:

CRISTAL-iSE - Provenance Applied in Industry

Authors:

Jetendr Shamdasani, Andrew Branson, Richard McClatchey, Coralie Blanc, Florent Martin, Pierre Bornand, Sandra Massonnat, Olivier Gattaz and Patrick Emin

Abstract: This paper presents the CRISTAL-iSE project as a framework for the management of provenance information in industry. The project itself is a research collaboration between academia and industry. A key factor in the project is the use of a system known as CRISTAL, a mature system based on proven description-driven principles. A crucial element of the description-driven approach is the fact that objects (Items) are described at runtime, enabling managed systems to be both dynamic and flexible. Another is the notion that all Items in CRISTAL are stored and versioned, thereby providing a provenance collection system. In this paper a concrete application, called Agilium, is briefly described, and a future application, CIMAG-RA, is presented which will harness the power of both CRISTAL and Agilium.
Download

Paper Nr: 270
Title:

e-Strategy - An Enterprise Engineer Approach to Strategic Management

Authors:

Rodrigo Pereira and André Vasconcelos

Abstract: Organizations need to ensure that their strategies are aligned with their overall business and information systems. Hence the need for a solution that helps organizations assess whether their product or service strategies are supported by the respective business processes and, by consequence, whether the IT supports those processes. We propose a specific EA definition that uses Marketing principles to describe the product strategy and assess the alignment between it and the rest of the EA. We demonstrate our proposal through a set of models built according to our viewpoints. For evaluation we shall use: (1) a case study, (2) papers, (3) interviews and (4) the Moody and Shanks framework.

Paper Nr: 301
Title:

Environmental Disclosure - From the Accounting to the Report Perspective

Authors:

Francisco Carreira, Ana Damião, Rute Abreu and Fátima David

Abstract: This paper focuses on the environmental disclosure (ED) promoted by firms, due to the strong demand for information and the identification of the relevant data that pursue the new legal requirements. The methodology is divided: on the one hand, a theoretical framework based on the disclosure of environmental information (EI) and the true and fair view from the accounting perspective; indeed, the paper provides an understanding of the research of Patten (2002), Clarkson et al. (2008) and Monteiro (2007). On the other hand, an empirical analysis, at the longitudinal and exploratory level, measures the degree of disclosure of environmental information from the report perspective. The authors present an Environmental Disclosure Index (EDI) and discuss the increase of environmental reporting (ER) over time and the disclosure level of items published in the annual reports of firms listed on the Lisbon Euronext Stock Market during the period 2007-2009.
Download

Paper Nr: 302
Title:

Simplified Business Information - A Technical Position in Accounting and Taxation

Authors:

Fátima David, Rute Abreu and Francisco Carreira

Abstract: This paper is focused on Simplified Business Information (SBI) – “Informação Empresarial Simplificada” (IES), hereinafter IES, in the accounting and taxation context. IES is a new way for firms to deliver business information online to public services, using a totally dematerialized procedure. The theoretical framework of this paper is based on accounting and taxation information that evaluates the impact of the online submission of the IES files, since firms fulfil, at once, four different obligations: 1) deposit of annual accounts in the Commercial Registry of the Ministry of Justice; 2) delivery of the annual fiscal declaration to the Ministry of Finances and Public Administration; 3) delivery of annual information to the National Statistics Institute for statistical purposes; and 4) delivery of information to the Portuguese Central Bank. Thus, the paper attempts to provide an understanding of the adoption of the dematerialized procedure, because the increase of a firm’s activity and changes in its accounting and taxation environment require new attitudes towards disclosing information, as a key factor that negatively and positively influences the accounting and taxation regime, particularly in Portugal.
Download