ICEIS 2009 Abstracts


Area 1 - Databases and Information Systems Integration

Full Papers
Paper Nr: 40
Title:

MIDAS: A middleware for information systems with QoS concerns

Authors:

Luis Fernando Orleans and Geraldo Zimbrão

Abstract: One of the most difficult tasks in the design of information systems is controlling the behaviour of the back-end storage engine, usually a relational database. As the load on the database increases, issued transactions take longer to execute, mainly because of the high number of locks required to provide isolation and concurrency. In this paper we present MIDAS, a middleware designed to manage the behaviour of database servers, focusing primarily on guaranteeing transaction execution within a specified amount of time (deadline). MIDAS was developed for Java applications that connect to storage engines through JDBC. It provides a transparent QoS layer and can be adopted with very few code modifications. All transactions issued by the application are captured and forced to pass through an Admission Control (AC) mechanism. To meet such QoS constraints, we propose a novel AC strategy, called 2-Phase Admission Control (2PAC), which minimizes the number of transactions that exceed the established maximum time by accepting only those transactions that are not expected to miss their deadlines. We also implemented an enhancement over 2PAC, called diffserv, which gives priority to small transactions and can be adopted when they occur infrequently.
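The deadline-aware admission idea can be illustrated with a minimal sketch. All names here are hypothetical and the load model is deliberately naive; the paper's actual 2PAC strategy is more elaborate than this.

```python
class AdmissionController:
    """Deadline-based admission control sketch (not MIDAS's 2PAC).

    A transaction is admitted only if its estimated execution time,
    scaled by the current concurrency level, is not expected to
    exceed the deadline.
    """

    def __init__(self, deadline_ms):
        self.deadline_ms = deadline_ms
        self.active = 0  # number of currently admitted transactions

    def admit(self, estimated_ms):
        # Naive assumption: response time grows linearly with the
        # number of concurrent transactions holding locks.
        expected_ms = estimated_ms * (self.active + 1)
        if expected_ms > self.deadline_ms:
            return False  # reject: expected to miss its deadline
        self.active += 1
        return True

    def complete(self):
        # Called when an admitted transaction finishes.
        self.active -= 1
```

Under this toy model, a 100 ms deadline admits at most two concurrent 40 ms transactions; a third is rejected rather than allowed to miss its deadline.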

Paper Nr: 108
Title:

INSTANCE-BASED OWL SCHEMA MATCHING

Authors:

Luiz André P. Paes Leme, Marco A. Casanova, Karin Koogan Breitman and Antonio Furtado

Abstract: Schema matching is a fundamental issue in many database applications, such as query mediation and data warehousing. It becomes a difficult challenge when different vocabularies are used to refer to the same real-world concepts. In this context, a convenient approach, sometimes called extensional, instance-based or semantic, is to detect how the same real world objects are represented in different databases and to use the information thus obtained to match the schemas. This paper describes an instance-based schema matching technique for OWL schemas. The technique is based on similarity functions and is backed up by experimental results with real data downloaded from data sources found on the Web.
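The extensional idea (matching schema elements by comparing the instances they hold) can be sketched generically. This is a textbook illustration with hypothetical data, not the paper's OWL-specific similarity functions.

```python
def jaccard(a, b):
    """Set similarity of the instance values observed for two properties."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def match_schemas(schema_a, schema_b, threshold=0.5):
    """Instance-based matching sketch: pair each property of one
    schema with the most instance-similar property of the other,
    keeping only pairs above a similarity threshold."""
    matches = {}
    for pa, inst_a in schema_a.items():
        best = max(schema_b, key=lambda pb: jaccard(inst_a, schema_b[pb]))
        if jaccard(inst_a, schema_b[best]) >= threshold:
            matches[pa] = best
    return matches
```

With overlapping instance sets, `author` in one schema is matched to `creator` in the other even though the vocabularies differ, which is exactly the case the extensional approach targets.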

Paper Nr: 123
Title:

The Integrative Role of IT in Product and Process Innovation: Growth and Productivity outcomes for manufacturing SMEs

Authors:

Louis Raymond, Anne-Marie Croteau and Francois Bergeron

Abstract: The assimilation of IT for business process integration plays an integrative role by providing an organization with the ability to exploit innovation opportunities with the purpose of increasing its growth and productivity. Based on survey data obtained from 309 Canadian manufacturing SMEs, this study aims at a deeper understanding of the assimilation of IT for business process integration with regard to product and process innovation. The first objective is to identify the effect of the assimilation of IT for business process integration on growth and productivity. The second objective is to verify whether the assimilation of IT for business process integration varies amongst low-, medium- and high-tech SMEs. Results indicate that the assimilation of IT for business process integration depends upon the type of innovation. It also varies with the technological intensity of the firms. The assimilation of IT for business process integration has two effects: it increases the growth of manufacturing SMEs by enabling product innovation, but it decreases their productivity by impeding process innovation.

Paper Nr: 141
Title:

Vectorizing Instance-Based Integration Processes

Authors:

Matthias Boehm, Wolfgang Lehner, Dirk Habich, Uwe Wloka and Steffen Preissler

Abstract: The inefficiency of integration processes—as an abstraction of workflow-based integration tasks—often stems from low resource utilization and significant waiting times for external systems. Due to the increasing use of integration processes within IT infrastructures, throughput optimization has a strong influence on the overall performance of such an infrastructure. In the area of computational engineering, low resource utilization is addressed with vectorization techniques. In this paper, we introduce the concept of vectorization in the context of integration processes in order to achieve a higher degree of parallelism. Here, transactional behavior and serialized execution must be ensured. Our evaluation shows that message throughput can be significantly increased.

Paper Nr: 142
Title:

Invisible Deployment of Integration Processes

Authors:

Matthias Boehm, Dirk Habich, Wolfgang Lehner and Uwe Wloka

Abstract: Due to the changing scope of data management towards the management of heterogeneous and distributed systems and applications, integration processes gain in importance. This is particularly true for those processes used as abstractions of workflow-based integration tasks; these are widely applied in practice. In such scenarios, a typical IT infrastructure comprises multiple integration systems with overlapping functionalities. The major problems in this area are high development effort, low portability and inefficiency. Therefore, in this paper, we introduce the vision of invisible deployment that addresses the virtualization of multiple, heterogeneous, physical integration systems into a single logical integration system. This vision comprises several challenging issues in the fields of deployment aspects as well as runtime aspects. Here, we describe those challenges, discuss possible solutions and present a detailed system architecture for that approach. As a result, the development effort can be reduced and the portability as well as the performance can be improved significantly.

Paper Nr: 148
Title:

Customizing Enterprise Software as a Service Applications: Back-end Extension in a multi-tenancy Environment

Authors:

Jürgen Müller, Jens Krueger, Sebastian Enderlein, Marco Helmich and Alexander Zeier

Abstract: Since the emergence of Salesforce.com, more and more business applications have been moving towards Software as a Service. In order to target Small and Medium-sized Enterprises, platform providers need to lower their operational costs and establish an ecosystem of partners who customize their generic solution to push their products into spot markets. This paper categorizes customization options, identifies cornerstones of a customizable, multi-tenancy-aware infrastructure, proposes a framework that encapsulates multi-tenancy, and introduces a technique for partner back-end customizations with regard to a given real-world scenario.

Paper Nr: 193
Title:

Pattern-based Refactoring of Legacy Software Systems

Authors:

Sascha Hunold, Björn Krellner, Thomas Rauber, Thomas Reichel and Gudula Rünger

Abstract: Rearchitecting large software systems becomes more and more complex after years of development and a growing code base. Nonetheless, software in production must constantly be adapted to cope with new requirements. Thus, refactoring legacy code requires tool support to help developers perform this demanding task. Since the code base of legacy software systems is far beyond the size that developers can handle manually, we present an approach to perform refactoring tasks automatically. In this pattern-based transformation, the abstract syntax tree of a legacy software system is scanned for a particular software pattern. If the pattern is found, it is automatically substituted by a target pattern. In particular, we focus on software refactorings that move methods or groups of methods and dependent member variables. The main objective of this refactoring is to reduce the number of dependencies within a software architecture, which leads to a less coupled architecture. We demonstrate the effectiveness of our approach in a case study.

Paper Nr: 196
Title:

Natural and Multi-layered Approach to Detect Changes in Tree-based Textual Documents

Authors:

Angelo Di Iorio, Michele Schirinzi, Carlo Marchetti and Fabio Vitali

Abstract: Several efficient and very powerful algorithms exist for detecting changes in tree-based textual documents, such as those encoded in XML. One important aspect is still underestimated in their design and implementation: the quality of the output, in terms of readability, clearness and accuracy for human users. Such a requirement is particularly relevant when diff-ing literary documents, such as books, reports, reviews, acts, and so on. This paper introduces the concept of 'naturalness' in diff-ing tree-based textual documents, and discusses a new extensible set of changes which can and should be detected. A naturalness-based algorithm is presented, as well as its application to diff-ing XML-encoded legislative documents. The algorithm, called JNDiff, proved to detect significantly better matchings (since new operations are recognized) and to be very efficient.

Paper Nr: 198
Title:

CrimsonHex: A Service Oriented Repository of Specialised Learning Objects

Authors:

José P. Leal and Ricardo Queirós

Abstract: The cornerstone of the interoperability of eLearning systems is the standard definition of learning objects. Nevertheless, for some domains this standard is insufficient to fully describe all the assets, especially when they are used as input for other eLearning services. On the other hand, a standard definition of learning objects is not enough to ensure interoperability among eLearning systems; they must also use a standard API to exchange learning objects. This paper presents the design and implementation of a service-oriented repository of learning objects called crimsonHex. This repository is fully compliant with the existing interoperability standards and supports new definitions of learning objects for specialized domains. We illustrate this feature with the definition of programming problems as learning objects and their validation by the repository. The repository is also prepared to store usage data on learning objects to tailor the presentation order and adapt it to learner profiles.

Paper Nr: 213
Title:

A SCALABLE PARAMETRIC-RBAC ARCHITECTURE FOR THE PROPAGATION OF A MULTI-MODALITY, MULTI-RESOURCE INFORMATICS SYSTEM

Authors:

Remo Mueller, Guo-Qiang Zhang and Van Anh Tran

Abstract: We present a scalable architecture called X-MIMI for the propagation of MIMI (Multi-modality, Multi-resource, Informatics Infrastructure System) to the biomedical research community. MIMI is a web-based system for managing the latest instruments and resources used by clinical and translational investigators. To deploy MIMI broadly, X-MIMI utilizes a parametric Role-Based Access Control model to decentralize the management of user-role assignment, facilitating the deployment and system administration in a flexible manner that minimizes operational overhead. We use Formal Concept Analysis to specify the semantics of roles according to their permissions, resulting in a lattice hierarchy that dictates the cascades of RBAC authority. Additional components of the architecture are based on the Model-View-Controller pattern, implemented in Ruby-on-Rails. The X-MIMI architecture provides a uniform setup interface for centers and facilities, as well as a set of seamlessly integrated scientific and administrative functionalities in a Web 2.0 environment.

Paper Nr: 228
Title:

MINABLE DATAWAREHOUSE

Authors:

Jai Kang, James Kang and David Morgan

Abstract: Data warehouses have been widely used in various settings, such as large corporations and public institutions. These systems contain large and rich datasets to which data mining techniques are often applied to discover interesting patterns. However, before data mining techniques can be applied to data warehouses, arduous and convoluted preprocessing must be completed. Thus, we propose a minable data warehouse that integrates the preprocessing stage of a data mining technique within the cleansing and transformation process of a data warehouse. This framework allows data mining techniques to be computed without any additional preprocessing steps. We present our proposed framework using a synthetically generated dataset and a classical data mining technique called Apriori to discover association rules within instant messaging datasets.
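For reference, the classical Apriori technique named above can be sketched in a few lines. This is the generic textbook version on toy data, not the paper's warehouse-integrated framework.

```python
def apriori(transactions, min_support):
    """Textbook Apriori: return every itemset that appears in at
    least `min_support` transactions (illustrative, unoptimized)."""
    transactions = [frozenset(t) for t in transactions]
    items = {i for t in transactions for i in t}

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t)

    frequent = {}
    k, candidates = 1, [frozenset([i]) for i in items]
    while candidates:
        survivors = [s for s in candidates if support(s) >= min_support]
        frequent.update({s: support(s) for s in survivors})
        # Candidate generation: join surviving k-itemsets into
        # (k+1)-itemsets; supersets of infrequent sets are pruned
        # implicitly because only survivors are joined.
        k += 1
        candidates = {a | b for a in survivors for b in survivors
                      if len(a | b) == k}
    return frequent
```

The returned dictionary maps each frequent itemset to its support count; association rules are then derived from these itemsets in a separate step.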

Paper Nr: 236
Title:

A Step Forward in Semi-Automatic Metamodel Matching: Algorithms and Tool

Authors:

José de Sousa, Denivaldo Lopes, Zair Abdelouahab, Daniela B. Claro and José de Sousa Jr

Abstract: In recent years the complexity of producing software systems has increased due to the continuous evolution of requirements, the creation of new technologies and integration with legacy systems. When complexity increases, the phases of software development, maintenance and evolution become more difficult to deal with, i.e. they become more error-prone. Recently, Model Driven Architecture (MDA) has made the management of this complexity possible thanks to models and the transformation of Platform-Independent Models (PIMs) into Platform-Specific Models (PSMs). However, the manual creation of transformation definitions is a programming activity that is error-prone precisely because it is a manual task. In the MDA context, the solution is to provide semi-automatic creation of a mapping specification that can be used to generate transformation definitions in a specific transformation language. In this paper, we present an algorithm to match metamodels, along with enhancements to the MT4MDE and SAMT4MDE tools in order to implement this matching algorithm.

Paper Nr: 238
Title:

A Study of Indexing Strategies for Hybrid Data Spaces

Authors:

Sakti Pramanik, Qiang Zhu and Gang Qian

Abstract: Different indexing techniques have been proposed to index either the continuous data space (CDS) or the non-ordered discrete data space (NDDS). However, modern database applications sometimes require indexing the hybrid data space (HDS), which involves both continuous and non-ordered discrete subspaces. In this paper, the structure and heuristics of the ND-tree, which is a recently-proposed indexing technique for NDDSs, are first extended to the HDS. A novel power value adjustment strategy is then used to make the continuous and discrete dimensions comparable and controllable in the HDS. An estimation model is developed to predict the box query performance of the hybrid indexing. Our experimental results show that the original ND-tree's heuristics are effective in supporting efficient box queries in the hybrid data space, and could be further improved with our proposed strategies to address the unique characteristics of the HDS.

Paper Nr: 299
Title:

Relaxing XML Preference Queries for Cooperative Retrieval

Authors:

SungRan Cho and Wolf-Tilo Balke

Abstract: Today XML is an essential technology for knowledge management within enterprises and for the dissemination of data over the Web. Therefore, the efficient evaluation of XML queries has been thoroughly researched. But given the ever-growing amount of information available in different sources, querying also becomes more complex. In contrast to simple exact-match retrieval, approximate matches are far more appropriate over collections of complex XML documents. Only recently has approximate XML query processing been proposed, in which structure and value are subject to necessary relaxations. All the possible query relaxations determined by the user's preferences are generated in such a way that predicates are progressively relaxed until a suitable set of best possible results is retrieved. In this paper we present a novel framework for developing preference relaxations to the query, permitting additional flexibility in order to fulfill a user's wishes. Additionally, we design IPX, an interface for XML preference query processing that enables users to express and formulate complex preferences, and provides a first solution for the aspects of XML preference query processing that allow preference querying and returning ranked answers.

Paper Nr: 461
Title:

DEXIN--An Extensible Framework for Distributed XQuery over Heterogeneous Data Sources

Authors:

Muhammad I. Ali, Schahram Dustdar, Reinhard Pichler and Hong-Linh Truong

Abstract: In the Web environment, rich, diverse sources of heterogeneous and distributed data are ubiquitous. In fact, even the information characterizing a single entity - like, for example, the information related to a Web service - is normally scattered over various data sources using various languages such as XML, RDF, and OWL. Hence, there is a strong need for Web applications to handle queries over heterogeneous, autonomous, and distributed data sources. However, existing techniques do not provide sufficient support for this task. In this paper we present DeXIN, an extensible framework for providing integrated access over heterogeneous, autonomous, and distributed Web data sources, which can be utilized for data integration in modern Web applications and Service Oriented Architectures. DeXIN extends the XQuery language by supporting the specification and execution of SPARQL queries inside XQuery, thus facilitating the querying of data modeled in XML, RDF, and OWL. DeXIN facilitates data integration in distributed Web and service-oriented environments by avoiding the transfer of large amounts of data to a central server for centralized data integration, and spares the transformation of huge amounts of data into a common format for integrated access. We also present typical application scenarios and report on experiments with DeXIN. These experiments demonstrate the ease of use and the good performance of our framework.

Paper Nr: 496
Title:

Dimensional Templates in Data Warehouses - Automating the Multidimensional Design of Data Warehouse Prototypes

Authors:

Rui Oliveira, Fátima Rodrigues, Paulo Martins and João P. Moura

Abstract: Prototypes are valuable tools in Data Warehouse (DW) projects. DW prototypes can help end-users get an accurate preview of a future DW system, along with its advantages and constraints. However, DW prototypes have considerably smaller development time windows than complete DW projects. This puts additional pressure on achieving the high quality standards expected of prototypes, especially during the highly time-consuming multidimensional design, where the margin for harmful, ill-considered decisions is thin. Some existing methods for automating DW multidimensional design can be used to accelerate this stage, yet they are more suitable for full DW projects than for prototypes, due to the effort, cost and expertise they require. This paper proposes the semi-automation of DW multidimensional designs using templates. We believe this approach better fits the development speed and cost constraints of DW prototyping, since templates are pre-built, highly adaptable and highly reusable solutions.

Paper Nr: 560
Title:

MULTIVIEWS COMPONENTS FOR USER-AWARE WEB SERVICES

Authors:

Bouchra El Asri, Adil Kenzi, Mahmoud Nassar, Abdelaziz Kriouile and Abdelaziz Barrahmoune

Abstract: Component-based software (CBS) aims to meet the need for reusability and productivity; Web service technologies allow interoperability. This work addresses the development of CBS using Web service technologies. Undeniably, a web service may interact with several types of web service clients. The central problem is, therefore, how to handle the multidimensional aspect of web service clients' needs and requirements. To tackle this problem, we propose the concept of multiview component as a first-class modeling entity that allows the capture of the various needs of web service clients by separating their concerns. In this paper, we propose a model-driven approach for the development of user-aware web services on the basis of the multiview component concept. We describe how a multiview-component-based PIM is transformed into two PSMs for the purpose of automatically generating both the user-aware web service description and its implementation. We specify transformations as a collection of transformation rules implemented in ATL as a model transformation language.

Paper Nr: 575
Title:

KNOWLEDGE BASED QUERY PROCESSING IN LARGE SCALE VIRTUAL ORGANIZATIONS

Authors:

Alexandra Pomares, Claudia Roncancio, José Abasolo and María Villamil

Abstract: This work concerns query processing to support data sharing in large scale Virtual Organizations (VOs). A characterization of VOs' data sharing contexts reflects the coexistence of factors like source overlapping, uncertain data location, and fuzzy copies in dynamic large scale environments that hinder query processing. Existing results on distributed query evaluation are useful for VOs, but there is no appropriate solution combining the high semantic level and the dynamic large scale environments required by VOs. This paper proposes a characterization of VO data sources, called Data Profile, and a query processing strategy (called QPro2e) for large scale VOs with complex data profiles. QPro2e uses an evolving distributed knowledge base describing data sources' roles w.r.t. shared domain concepts. It allows the identification of logical data source clusters, which improves query evaluation in the presence of a very large number of data sources.

Paper Nr: 597
Title:

Applying Recommendation Technology in OLAP Systems

Authors:

Houssem Jerbi, Olivier Teste, Gilles Zurfluh and Franck Ravat

Abstract: OLAP systems, offering a multidimensional and large information space, cannot rely solely on standard navigation; they need to apply recommendations to make the analysis process easy and to help users quickly find relevant data for decision-making. In this paper, we propose a recommendation methodology that aims at assisting the user during decision-support analysis. The system helps the user query multidimensional data and exposes the most interesting patterns, i.e. it provides the user with anticipatory as well as alternative decision-support data. We provide a preference-based approach to apply this methodology.

Paper Nr: 606
Title:

CLASSIFICATION AND PREDICTION OF SOFTWARE COST THROUGH FUZZY DECISION TREES

Authors:

Efi Papatheocharous and Andreas Andreou

Abstract: This work addresses the issue of software effort prediction via fuzzy decision trees generated using historical project data samples. Moreover, the effect that various numerical and nominal project characteristics used as predictors have on software development effort is investigated utilizing the classification rules extracted. The approach attempts to achieve successful classification of past project data into homogeneous clusters so as to provide accurate and reliable cost estimates within each cluster. Two algorithms, namely CHAID and CART, are applied to approximately 1000 empirical software cost data records. The data first passed through analysis and pre-processing activities and were then used for generating fuzzy decision tree instances. An evaluation is then performed based on the prediction accuracy of the classification rules produced. Even though the experimentation follows a heuristic approach, the trees built were found to fit the data quite successfully, while the predicted effort values approximate the actual effort well. Therefore, the proposed model may be used for future cost predictions and better allocation and control of project resources.

Paper Nr: 615
Title:

s-OLAP: a System for Supporting Approximate OLAP Query Evaluation on Very Large Data Warehouses via Probabilistic Synopses

Authors:

Alfredo Cuzzocrea

Abstract: In this paper, we propose s-OLAP, a multi-user middleware system for supporting approximate range aggregate queries on data cubes. The application scenario of s-OLAP is a networked and heterogeneous very large data warehousing environment where applying traditional algorithms for processing OLAP queries is too expensive and inconvenient because of the size of the multidimensional data and the computational cost needed to access and process them. s-OLAP relies on intelligent data representation and processing techniques, among which are: (i) the exploitation of the Karhunen-Loeve Transform (KLT) for obtaining dimensionality reduction of data cubes, and (ii) the definition of a probabilistic framework that provides a rigorous theoretical basis for ensuring probabilistic guarantees on the degree of approximation of the retrieved answers, which is a critical point in the context of approximate query answering techniques.

Short Papers
Paper Nr: 112
Title:

EXPERIENCES OF ERP USE IN SMALL ENTERPRISES

Authors:

Päivi Iskanius, Matti Möttönen and Raija Halonen

Abstract: This paper investigates the role of Enterprise Resource Planning (ERP) systems in the context of small and medium-sized enterprises (SMEs). The paper reports research findings from a case study conducted in 14 SMEs operating in steel manufacturing and woodworking. By dividing the enterprises into three groups (medium-sized, small, and micro enterprises), this study provides a richer understanding of enterprise-size-related issues in the motivations, risks and challenges of ERP adoption.

Paper Nr: 124
Title:

BUSINESS INTELLIGENCE BASED ON A WI-FI REAL TIME POSITIONING ENGINE - A Practical Application in a Major Retail Company

Authors:

Vasco Vinhas, Pedro Abreu and Pedro Mendes

Abstract: Collecting relevant data to perform business intelligence on a real-time basis has always been a crucial objective for managers responsible for economic activities in large spaces. Following this emergent need, the authors propose a platform to gather and analyse data on the location of people and assets by automatic means. The developed system is retail-business oriented and has a fairly distributed architecture. It couples the core elements of a real-time Wi-Fi-based location system with a set of functional views that make explicit the information available for each tracked entity, the path taken through the space, and demographic concentration patterns. Tests were conducted in a real production environment as the outcome of a partnership with a major player in the retail sector, and the results were completely satisfactory, with the managers confirming the relevance of the provided knowledge.

Paper Nr: 126
Title:

DIRECTED ACYCLIC GRAPHS AND DISJOINT CHAINS

Authors:

Yangjun Chen

Abstract: The problem of decomposing a DAG (directed acyclic graph) into a set of disjoint chains has many applications in data engineering. One of them is the compression of transitive closures to support reachability queries, i.e. whether a given node v in a directed graph G is reachable from another node u through a path in G. Recently, an interesting algorithm was proposed by Chen et al. [Y. Chen and Y. Chen, An Efficient Algorithm for Answering Graph Reachability Queries, Proceedings of ICDE, 2008, pp. 893-902], which claims to be able to decompose G into a minimal set of disjoint chains in O(n² + bn) time, where n is the number of nodes of G, and b is G's width, defined to be the size of a largest node subset U of G such that for every pair of nodes u, v ∈ U, there does not exist a path from u to v or from v to u. However, in some cases, it fails to do so. In this paper, we analyze this algorithm and show the problem. More importantly, a new algorithm is discussed, which can always find a minimal set of disjoint chains in the same time complexity as Chen's.
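Why a chain decomposition compresses the transitive closure can be seen from a small sketch: if each node records, per chain, the smallest position it can reach, a reachability query becomes a single lookup. The decomposition is assumed to be given here (computing a minimal one is the subject of the paper), and the data layout is illustrative.

```python
def reverse_topological(nodes, edges):
    """Post-order DFS: every node appears after all its descendants."""
    seen, order = set(), []

    def dfs(v):
        if v in seen:
            return
        seen.add(v)
        for w in edges.get(v, ()):
            dfs(w)
        order.append(v)

    for v in nodes:
        dfs(v)
    return order

def build_chain_index(edges, chains):
    """For each node, record the smallest position it can reach in
    every chain. With b chains this stores O(b) entries per node
    instead of the full transitive closure."""
    pos = {v: (c, i) for c, chain in enumerate(chains)
           for i, v in enumerate(chain)}
    nodes = [v for chain in chains for v in chain]
    index = {}
    for v in reverse_topological(nodes, edges):  # descendants first
        entry = {pos[v][0]: pos[v][1]}          # v reaches itself
        for w in edges.get(v, ()):
            for c, i in index[w].items():
                entry[c] = min(entry.get(c, i), i)
        index[v] = entry
    return index, pos

def reaches(index, pos, u, v):
    """Answer a reachability query with one chain lookup."""
    c, i = pos[v]
    return index[u].get(c, float('inf')) <= i
```

The fewer the chains (i.e. the closer the decomposition is to the DAG's width b), the smaller the per-node index, which is why a minimal decomposition matters.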

Paper Nr: 152
Title:

AN OBJECT MODEL FOR THE MANAGEMENT OF DIGITAL IMAGES

Authors:

Souheil Khaddaj and Andreas Hoppe

Abstract: With digital image volumes rising dramatically, there is an important and urgent need for novel techniques and mechanisms that provide efficient storage and retrieval of the voluminous data generated daily. It is already widely accepted that the use of data abstraction in object-oriented modelling enables real-world objects to be well represented in information systems. In this work we are particularly interested in the use of object-oriented techniques for the management of digital images. Object orientation is well suited for such systems, which require the ability to handle multiple content types. This paper investigates a conceptual model, based on object versioning techniques, which represents the semantics needed to determine the continuity and pattern of changes of images over time.

Paper Nr: 155
Title:

A MapReduce framework for change propagation in geographic databases

Authors:

Mario Vacca, Ferdinando Di Martino and Giuseppe Polese

Abstract: Updating a schema is a very important activity which occurs naturally during the life cycle of database systems, due to different causes. A challenging problem arising when a schema evolves is the change propagation problem, i.e. updating the database ground instances to make them consistent with the evolved schema. Spatial datasets, stored representations of geographical areas, are VLDBs, so the change propagation process, which moves an enormous mass of data among geographically distributed nodes, is very expensive and calls for efficient processing. Moreover, the problem of designing languages and tools for spatial dataset change propagation is relevant, given the shortage of tools for schema evolution and, in particular, the limitations of those for spatial datasets. In this paper, we take into account both efficiency and these limitations, and we propose an instance update language, based on Google's efficient and popular MapReduce programming paradigm, which allows a wide category of schema changes to be performed in parallel. A system embodying the language has been implemented.
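The idea of expressing an instance update as a map over partitioned ground instances can be sketched in plain Python. This is a toy stand-in for a real MapReduce runtime, and both the record layout and the chosen schema change (an attribute rename) are hypothetical.

```python
def map_rename(record, old, new):
    """Map step: rewrite one ground instance so that it conforms
    to the evolved schema (here: an attribute rename)."""
    return {(new if k == old else k): v for k, v in record.items()}

def propagate(partitions, old, new):
    """Each partition is mapped independently (the part a MapReduce
    runtime would distribute across nodes); the reduce step simply
    collects the rewritten instances."""
    mapped = ([map_rename(r, old, new) for r in part]
              for part in partitions)
    result = []
    for part in mapped:  # reduce: concatenate partition outputs
        result.extend(part)
    return result
```

Because the map step touches each instance exactly once and independently, the update parallelizes across however many nodes hold the spatial dataset's partitions.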

Paper Nr: 161
Title:

Establishing Trust Networks based on Data Quality Criteria for Selecting Data Suppliers

Authors:

Ricardo Pérez-Castillo, Ismael Caballero, Eugenio Verbo, Ignacio G. Rodríguez De Guzmán, Macario Polo and Mario Piattini

Abstract: Nowadays, organizations may have Web portals hosting several websites where a wide variety of information is integrated. These portals are typically composed of a set of Web applications and services that interchange data among themselves. In this setting, there is no way to find out how the quality of the interchanged data will evolve over time. A framework is proposed for establishing trust networks based on the Data Quality (DQ) levels of the interchanged data. We consider two kinds of DQ: inherent DQ and pragmatic DQ. The decision about selecting the most suitable data supplier is based on the estimation of the best expected pragmatic DQ levels. In addition, an example is presented to illustrate the operation of the framework.

Paper Nr: 168
Title:

Algorithms for Efficient Top-k Spatial Preference Query Execution in a Heterogeneous Distributed Environment

Authors:

Marcin Gorawski and Kamil Dowlaszewicz

Abstract: Top-k spatial preference queries allow searching for objects on the basis of the character of their neighbourhoods. They find the k objects whose neighbouring objects satisfy the query conditions to the greatest extent. Executing these queries is complex and lengthy, as it requires numerous accesses to index structures and data. Existing algorithms therefore employ various optimization techniques. These algorithms assume, however, that all data sets required to execute the query are aggregated in one location. In reality, data is often distributed across remote nodes, for example data accumulated by different organizations. This motivated the development of an algorithm capable of efficiently executing such queries in a heterogeneous distributed environment. The paper describes the specifics of operating in such an environment, presents the developed algorithm, describes the mechanisms it employs and discusses the results of the conducted experiments.
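A brute-force version of this query class fits in a few lines and clarifies what the index-based and distributed algorithms optimize. The names and the "best feature quality within a radius" scoring rule are illustrative assumptions, not the paper's definitions.

```python
import heapq
from math import hypot

def topk_preference(objects, features, radius, k):
    """Top-k spatial preference query, brute force: score each data
    object by the best feature quality found within `radius` of it,
    then return the k highest-scoring objects."""
    def score(x, y):
        qualities = [q for fx, fy, q in features
                     if hypot(fx - x, fy - y) <= radius]
        return max(qualities, default=0.0)

    scored = [(score(x, y), name) for name, x, y in objects]
    return heapq.nlargest(k, scored)
```

The brute force costs one pass over all features per object; the optimized algorithms avoid exactly this by pruning with spatial index structures, which is what becomes hard once the feature sets live on remote nodes.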
Download
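As a rough illustration of the query class described above (a hypothetical in-memory sketch, not the distributed algorithm the paper develops; all names and coordinates are invented), a top-k spatial preference query ranks objects by the qualities of feature objects found within a radius of each object:

```python
import math

def topk_spatial_preference(objects, features, k, radius):
    # Score each object by the best-quality feature object within `radius`
    # (the "range score" variant of spatial preference queries).
    def score(obj):
        best = 0.0
        for fx, fy, quality in features:
            if math.hypot(obj[0] - fx, obj[1] - fy) <= radius:
                best = max(best, quality)
        return best
    # Return the k objects whose neighbourhoods score highest.
    return sorted(objects, key=score, reverse=True)[:k]

objects = [(0, 0), (5, 5), (10, 10)]
features = [(1, 1, 0.9), (6, 5, 0.4)]  # (x, y, quality)
print(topk_spatial_preference(objects, features, k=2, radius=2.0))
# → [(0, 0), (5, 5)]
```

The real difficulty, which the paper addresses, is avoiding this exhaustive scan when objects and features live in index structures on remote, heterogeneous nodes.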

Paper Nr: 185
Title:

AN INFORMATION SYSTEM FOR THE MANAGEMENT OF CHANGES DURING THE DESIGN OF BUILDING PROJECTS

Authors:

Essam Zaneldin

Abstract: Design is an important stage in a project's life cycle, with the greatest impact on overall performance and cost. For several reasons, changes introduced by design participants are inevitable. Despite the importance of coordinating these changes among the different participants during the design stage, current practice exhibits severe information transfer problems. Since corrections to finalized designs, or even to designs at late stages in the process, are extremely costly, it is less costly to spend the effort on managing changes and producing highly coordinated and easily constructible designs. To support this objective, this paper presents an information system with a built-in database for representing design information, including design rationale and the history of changes, to support the management of changes during the design of building projects. The components of the system are discussed and possible future extensions to the present study are presented. This research is expected to help engineering and design-build firms effectively manage design changes and produce better-coordinated and constructible designs with less cost and time.
Download

Paper Nr: 189
Title:

EFFICIENT SYSTEM INTEGRATION USING SEMANTIC REQUIREMENTS AND CAPABILITY MODELS: An approach for integrating heterogeneous Business Services

Authors:

Thomas Moser, Richard Mordinyi, Alexander Mikula and Stefan Biffl

Abstract: Business system designers want to integrate heterogeneous legacy systems to provide flexible business services cheaper and faster. Unfortunately, modern integration technologies represent important integration knowledge only implicitly, making solutions harder to understand, verify, and maintain. In this paper we propose a data-driven approach, “Semantically-Enabled Externalization of Knowledge” (SEEK), that explicitly models the semantics of integration requirements and capabilities, and data transformations between heterogeneous legacy systems. The goal of SEEK is to make the systems integration process more efficient by providing tool support for quality assurance (QA) steps and the generation of system configurations. Based on use cases from industry partners, we compare the SEEK approach with UML-based modeling. In the evaluation context, SEEK was found to be more effective at making expert knowledge on system requirements and capabilities available for more efficient tool support and reuse.
Download

Paper Nr: 197
Title:

SEMANTIC FRAMEWORK FOR INFORMATION INTEGRATION Using Service-Oriented Analysis and Design

Authors:

Prima Gustiene, Irina Peltomaa and Heli Helaakoski

Abstract: Today’s dynamic markets demand from companies new ways of thinking, the adoption of new technologies and more flexible production. These business drivers can be met effectively and efficiently only if people and enterprise resources, such as information systems, collaborate. The gap between organizational business aspects and information technology makes it difficult for companies to reach their goals. Information systems have an increasingly important role in the realization of business process demands, which calls for close interaction and understanding between organizational and technical components. This is critical for enterprise interoperability, where semantic integration of information and technology is the prerequisite for successful collaboration. The paper presents a new semantic framework for better quality of semantic interoperability.
Download

Paper Nr: 207
Title:

Injecting Semantics into Event-Driven Architectures

Authors:

Juergen Dunkel, Alberto Fernández, Ruben Ortiz and Sascha Ossowski

Abstract: Event-driven architectures (EDA) have been proposed as a new architectural paradigm for event-based systems to process complex event streams. However, EDA have not yet reached the maturity of well-established software architectures because methodologies, models and standards are still missing. Despite the fact that EDA-based systems are essentially built on events, there is a lack of a general event modeling approach. In this paper we put forward a semantic approach to event modeling that is expressive enough to cover a broad variety of domains. Our approach is based on semantically rich event models using ontologies that allow the representation of structural properties of event types and constraints between them. Then, we argue in favour of a declarative approach to complex event processing that draws upon well-established rule languages such as JESS and integrates the structural event model. We illustrate the adequacy of our approach using a prototype for an event-based road traffic management system.
Download

Paper Nr: 230
Title:

C3: A METAMODEL FOR ARCHITECTURE DESCRIPTION LANGUAGE BASED ON FIRST-ORDER CONNECTOR TYPES

Authors:

Abdelkrim Amirat

Abstract: To provide hierarchical descriptions from different software architectural viewpoints, we need more than one abstraction hierarchy and connection mechanisms to support the interactions among components. These mechanisms will also support the refinement and traceability of architectural elements through the different levels of each hierarchy. Current methods and tools provide poor support for the challenge posed by developing systems using hierarchical descriptions. This paper describes an architecture-centric approach allowing the user to describe the logical architecture view, from which a physical architecture view is generated automatically for all application instances of the logical architecture.
Download

Paper Nr: 243
Title:

QUERY MELTING: A NEW PARADIGM FOR GIS MULTIPLE QUERY OPTIMIZATION

Authors:

Haifa E. Elariss, Darrel Greenhill and Souheil Khaddaj

Abstract: Recently, non-expert mobile-user applications have been developed to query Geographic Information Systems (GIS), particularly Location Based Services, where users ask questions related to their position whether they are moving (dynamic) or not (static). A new Iconic Visual Query Language (IVQL) has been developed to handle proximity analysis queries that find k nearest neighbours and objects within a buffer area. Each operator in an IVQL query corresponds to an execution plan to be evaluated by the GIS server. Since commonalities exist between the execution plans, the same operations are executed many times, leading to slow results. Hence, the need arises for a multi-user dynamic complex query optimizer that handles commonalities and processes queries faster, especially at the large scale of mobile users. We present a new query processor, a generic optimization framework for GIS and a middleware, which employ the new Query Melting paradigm (QM), based on the sharing paradigm and the push-down optimization strategy. QM is implemented through a new Melting-Ruler strategy that works at the low level, melts repetitions in plans to share spatial areas, temporal intervals, objects, intermediate results, maps, user locations and functions, then re-orders them to obtain time-cost-effective results; it is illustrated using a sample tourist GIS system.
Download

Paper Nr: 261
Title:

Modeling Web Documents as Objects for Automatic Web Content Extraction

Authors:

Estella Annoni and C. I. Ezeife

Abstract: Traditionally, mining web page contents involves modeling those contents to discover the underlying knowledge. Data extraction proposals represent web data in a formal structure, such as database structures specific to application domains. Those models fail to capture the full diversity of web data structures, which can be composed of different types of content and can also be unstructured. In fact, with these proposals, it is not possible to focus on a given type of content, to work on data of different structures, or to mine data of different application domains, as required to efficiently mine a given content type or web documents from different domains. Moreover, since web pages are designed to be understood by users, this paper considers the modeling of web document presentations, expressed through HTML tag attributes, as useful for efficient web content mining. Hence, this paper provides a general framework composed of an object-oriented web data model based on HTML tags and algorithms for extracting web content and web presentation objects from any given web document. From the HTML code of a web document, web objects are extracted for mining, regardless of the domain.
Download

Paper Nr: 286
Title:

TOWARD A QUALITY MODEL FOR CBSE - Conceptual Model Proposal

Authors:

María R. Ramírez, Luis E. Mendoza, Maryoly Ortega, Maria Angelica Perez de Ovalles, Kenyer Domínguez and Anna Grimán

Abstract: In this paper, which is part of a research in progress, we analyze the conceptual elements behind Component-Based Software Engineering (CBSE) and propose a model that will support its quality evaluation. The conceptual model proposed integrates the product perspective, a view that includes components and Component-Based Software (CBS), as well as the process perspective, a view that represents the component and CBS development life cycle. The model proposal was developed under a systemic approach that will allow for assessing and improving products and processes immersed in CBSE. Future actions include proposing metrics to operationalize the model and validate them through a case study. The model application will allow studying the behavior of each perspective and the relationships among them.
Download

Paper Nr: 298
Title:

OPTIMIZATION OF SPARQL BY USING CORESPARQL

Authors:

Jinghua Groppe, Sven Groppe and Jan Kolbaum

Abstract: SPARQL is becoming an important query language for RDF data. Query optimization to speed up query processing has been an important research topic for all query languages. In order to optimize SPARQL queries, we suggest a core fragment of the SPARQL language, which we call the coreSPARQL language. coreSPARQL has the same expressive power as SPARQL, but eliminates redundant language constructs of SPARQL. SPARQL engines and optimization approaches will benefit from using coreSPARQL, because fewer cases need to be considered when processing coreSPARQL queries and the coreSPARQL syntax is machine-friendly. In this paper, we present an approach to automatically transform SPARQL into coreSPARQL, and develop a set of rewriting rules to optimize coreSPARQL queries. Our experimental results show that our optimization of SPARQL speeds up RDF querying.
Download

Paper Nr: 311
Title:

FedDW: A Tool for Querying Federations of Data Warehouses

Authors:

Stefan Berger and Michael Schrefl

Abstract: Recently, Federated Data Warehouses – collections of autonomous and heterogeneous Data Marts – have become increasingly attractive as they enable the exchange of business information across organization boundaries. The advantage of federated architectures is that users may access the global, mediated schema with OLAP applications, while the Data Marts need not be changed and retain full autonomy. Although the underlying concepts are mature, tool support for Federated DWs has been poor so far. This paper presents the prototype of the “FedDW” Query Tool that supports distributed query processing in federations of ROLAP Data Marts. It acts as middleware component that reformulates user queries according to semantic correspondences between the autonomous Data Marts. We explain FedDW’s architecture, demonstrate a use-case and explain our implementation. We regard our proof-of-concept prototype as a first step towards the development of industrial strength query tools for DW federations.
Download

Paper Nr: 318
Title:

A User-centric and Semantic-driven Query Rewriting Over Proteomics XML Sources

Authors:

Hassan Badir, Kunale Kudagba and Omar El BEQQALI

Abstract: Querying and sharing Web proteomics data is not an easy task. Given that several data sources can be used to answer the same sub-goals in the global query, many candidate rewritings are possible. The user query is formulated using concepts and properties related to proteomics research (a domain ontology). Semantic mappings describe the contents of the underlying sources. In this paper, we propose a characterization of the query rewriting problem that represents the semantic mappings as an associated hypergraph. Hence, the generation of candidate rewritings can be formulated as the discovery of the minimal transversals of a hypergraph. We exploit and adapt algorithms available in hypergraph theory to find all candidate rewritings for a query answering problem. In future work, relevant criteria could help determine optimal and qualitative rewritings according to user needs and source performance.
Download
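The reduction sketched in the abstract above, from candidate rewritings to minimal hypergraph transversals, can be illustrated with a brute-force enumeration (a hypothetical toy, not the adapted algorithms the authors use; the source names are invented):

```python
from itertools import combinations

def minimal_transversals(hypergraph):
    # Enumerate minimal transversals (hitting sets) of a hypergraph given as
    # a list of edges (sets of vertices). Brute force, for illustration only.
    vertices = sorted(set().union(*hypergraph))

    def hits(s):
        # A transversal intersects every edge.
        return all(s & e for e in hypergraph)

    transversals = []
    for r in range(1, len(vertices) + 1):
        for combo in combinations(vertices, r):
            s = set(combo)
            # Keep s only if it hits all edges and no smaller transversal
            # found so far is contained in it (minimality).
            if hits(s) and not any(t < s for t in transversals):
                transversals.append(s)
    return transversals

# Each edge = the set of sources able to answer one sub-goal of the query;
# a minimal transversal picks a minimal set of sources covering all sub-goals.
edges = [{"S1", "S2"}, {"S2", "S3"}]
print(minimal_transversals(edges))
# → [{'S2'}, {'S1', 'S3'}]
```

Practical algorithms from hypergraph theory avoid this exponential enumeration; the toy only shows how each minimal transversal corresponds to one candidate rewriting.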

Paper Nr: 323
Title:

A PSO-BASED RESOURCE SCHEDULING ALGORITHM FOR PARALLEL QUERY PROCESSING ON GRIDS

Authors:

J. P. C., Gilberto Martinez-Luna and Nareli Cruz-Cortes

Abstract: The accelerated development of Grid computing has positioned Grids as promising next-generation computing platforms. Grid computing involves resource management, task scheduling, security problems, information management and so on. In the context of database query processing, existing parallelisation techniques cannot operate well in Grid environments because of the way they select machines and allocate queries. This is due to the geographic distribution of resources that are owned by different organizations. The owners of these resources have different usage or access policies and cost models, and varying loads and availability. This makes efficient scheduling algorithm design and implementation a big challenge. In this paper, a heuristic approach based on the particle swarm optimization algorithm is adopted to solve the parallel query scheduling problem in a Grid environment.
Download
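As a minimal sketch of the optimization core mentioned above (a generic continuous particle swarm optimizer minimizing a toy fitness function, not the authors' scheduling formulation; all parameter values are illustrative):

```python
import random

def pso(fitness, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    # Minimal particle swarm optimizer (minimization), for illustration.
    random.seed(0)  # deterministic demo run
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                       # personal bests
    pbest_f = [fitness(p) for p in pos]
    g = pbest[min(range(n_particles), key=lambda i: pbest_f[i])][:]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Inertia + attraction to personal and global bests.
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (g[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest_f[i], pbest[i] = f, pos[i][:]
                if f < fitness(g):
                    g = pos[i][:]
    return g

# Toy fitness: squared distance from an ideal point (stand-in for, say,
# a "balanced load" target in a scheduling cost model).
best = pso(lambda x: sum(v * v for v in x), dim=2)
print(best)  # close to [0, 0]
```

In a scheduling setting, a particle would instead encode a candidate assignment of query fragments to machines and the fitness would be an estimated execution cost; that encoding is the paper's contribution, not shown here.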

Paper Nr: 359
Title:

APPLYING INFORMATION RETRIEVAL FOR MARKET BASKET RECOMMENDER SYSTEMS

Authors:

Tapio Pitkäranta

Abstract: Coded data sets form the basis for many well-known applications, from the healthcare prospective payment system to recommender systems in online shopping. Previous studies on coded data sets have introduced methods for the analysis of rather small data sets. This study proposes applying information retrieval methods to enable high-performance analysis of data masses that scale beyond traditional approaches. An essential component in today’s data warehouses, into which coded data sets are collected, is a database management system (DBMS). This study presents experimental results on how information retrieval indexes scale and outperform common database schemas with a leading commercial DBMS engine in the analysis of coded data sets. The results show that flexible analysis of hundreds of millions of coded data sets is possible with regular desktop hardware.
Download
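The core idea, indexing coded records the way information retrieval engines index documents, can be sketched as follows (a hypothetical toy, not the study's DBMS-scale implementation; the codes shown are invented examples):

```python
from collections import defaultdict

def build_inverted_index(records):
    # Map each code to the set of record ids containing it,
    # exactly as an IR engine maps terms to posting lists.
    index = defaultdict(set)
    for rid, codes in enumerate(records):
        for code in codes:
            index[code].add(rid)
    return index

def query(index, codes):
    # Records containing all the given codes = intersection of postings.
    postings = [index.get(c, set()) for c in codes]
    return set.intersection(*postings) if postings else set()

# Each record is the set of codes attached to one case.
records = [{"E11", "I10"}, {"I10"}, {"E11"}]
idx = build_inverted_index(records)
print(sorted(query(idx, ["E11", "I10"])))  # ids of records with both codes
# → [0]
```

The point of the posting-list representation is that such intersections stay fast as the number of records grows, which is what the study measures against conventional relational schemas.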

Paper Nr: 372
Title:

SYMBOLIC EXECUTION FOR DYNAMIC, EVOLUTIONARY TEST DATA GENERATION

Authors:

Anastasis Sofokleous, Andreas Andreou and Antonis Kouras

Abstract: This paper combines the advantages of symbolic execution with search-based testing to automatically produce test data for Java programs. A framework is proposed comprising two systems which collaborate to generate test data. The first system is a program analyser capable of performing dynamic and static program analysis. The program analyser creates the control flow graph of the source code under test and uses a symbolic transformation to simplify the graph and generate paths as independent control flow graphs. The second system is a test data generator that aims to create a set of test cases covering each path. The implementation details of the framework, as well as the relevant experiments carried out on a number of Java programs, are presented. The experimental results demonstrate the efficiency and efficacy of the framework and show that it can outperform related approaches.
Download

Paper Nr: 375
Title:

A BIT-SELECTOR TECHNIQUE FOR PERFORMANCE OPTIMIZATION OF DECISION-SUPPORT QUERIES

Authors:

Ricardo Santos and Jorge Bernardino

Abstract: As data warehouses grow into the multi-terabyte range, adequate performance for decision support queries remains challenging for database query processors. A wide range of techniques has been explored in research to overcome this problem. Bit-based techniques such as bitmap indexes and bitmap join indexes have been used and are generally accepted as standard practice for optimizing data warehouses. These techniques are very promising due to their relatively low overhead and fast bitwise operations. In this paper, we propose a new technique which performs optimized row selection for decision support queries by introducing a bit-based attribute into the fact table. This attribute’s value for each row is set according to the row's relevance for processing each decision support query, using bitwise operations. Simply inserting a new column in the fact table’s structure and using bitwise operations to perform row selection makes this a simple and practical technique, easy to implement in any database management system. The experimental results, using the TPC-H benchmark, demonstrate that it is an efficient optimization method which significantly improves query performance.
Download
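A hypothetical miniature of the bit-selector idea (not the authors' in-DBMS implementation; table contents are invented): each row carries an integer attribute whose i-th bit marks relevance to query i, so row selection reduces to a single bitwise AND per row:

```python
# Bit masks identifying two decision-support queries.
Q0, Q1 = 1 << 0, 1 << 1

# Toy fact table: each row's `selector` has bit i set when the row is
# relevant to query i (set in advance, as the abstract describes).
rows = [
    {"sales": 100, "selector": Q0 | Q1},  # relevant to both queries
    {"sales": 200, "selector": Q0},       # relevant to query 0 only
    {"sales": 300, "selector": 0},        # relevant to neither
]

def select_rows(rows, query_mask):
    # Row selection is just a bitwise AND against the selector column,
    # avoiding the predicate evaluation a full scan would need.
    return [r for r in rows if r["selector"] & query_mask]

print([r["sales"] for r in select_rows(rows, Q1)])
# → [100]
```

In SQL terms this corresponds to something like `WHERE selector & :mask <> 0` on the extended fact table, with the selector column maintained ahead of query time.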

Paper Nr: 396
Title:

A DOMAIN SPECIFIC LANGUAGE FOR THE I* FRAMEWORK

Authors:

João Araújo, Vasco Amaral, Carlos Nunes and Carla Silva

Abstract: The i* framework proposes a goal-oriented analysis method for requirements engineering. It is a systematic approach to discovering and structuring requirements at the organizational level, where non-functional requirements and their relations are specified. A Domain Specific Language (DSL) has the purpose of specifying and modelling concepts in some domain, and has several advantages over general purpose languages, such as allowing a solution to be expressed in the desired language and at the desired abstraction level. To create such a DSL, it is normally necessary to start by specifying its syntax by means of a metamodel, given as input to the language workbenches that generate the corresponding editors for it. With a proper editor for the language, we can specify models with the proposed notation. This paper presents a DSL for the i* framework whose purpose is to handle the complexity and scalability of its concrete models by introducing some innovations in the i* framework metamodel, such as mechanisms that help to manage model scalability.
Download

Paper Nr: 460
Title:

EXTENDING THE UML-GEOFRAME DATA MODEL FOR CONCEPTUAL MODELING OF NETWORK APPLICATIONS

Authors:

Sergio Stempliuc, Jugurta Lisboa-Filho, Karla Borges and Marcus Andrade

Abstract: This paper presents an extension of the UML-GeoFrame data model that includes a set of new constructors to allow the definition of conceptual schemas for spatial database applications whose elements' relationships form a network. We also discuss how the GeoFrame conceptual framework is changed by the inclusion of new metaclasses and the corresponding stereotypes related to network elements. The extension proposed in this paper is evaluated using a class diagram for a water distribution company.
Download

Paper Nr: 478
Title:

INTEGRATION METHOD AMONG BSC, CMMI AND SIX SIGMA USING GQM TO SUPPORT MEASUREMENT DEFINITION (MIBCIS)

Authors:

Leonardo Romeu, Jorge Audy and Andressa Covatti

Abstract: The software quality area has produced various studies and surveys on different fronts, concerning both products and processes. There are many initiatives in the area of software process improvement, which can often conflict within an organization. Among the models and methodologies existing in the market, the CMMI Model and the Six Sigma Methodology stand out for how well they complement each other. While CMMI focuses on the organization and on process management, and Six Sigma focuses on the client and on financial results, both highlight the importance of the data produced for decision making. This study presents a method for the integrated implementation of the CMMI Model and the Six Sigma Methodology in process improvement programs, supported by measurement and assessment techniques such as the Balanced Scorecard (BSC) and the Goal-Question-Metric (GQM).
Download

Paper Nr: 484
Title:

gisEIEL: An Open Source GIS for Territorial Management

Authors:

Pedro A. González, Miguel Lorenzo, Miguel R. Luaces, José Ignacio Lamas Fonte and David Trillo

Abstract: The provincial government of A Coruña, in Spain, has been working in recent years on the construction of a geographic information system for the management of its territory. The results of this work are three software products: WebEIEL, gisEIEL and the ideAC node. WebEIEL is the web application that publishes the information on the Internet. gisEIEL is the desktop application used by the provincial government and the municipalities to create, query, visualize, analyze and update the information in the system. Finally, the ideAC node is a spatial data infrastructure that uses international standards to publish the information as part of the Spanish spatial data infrastructure. In this paper, we describe the functionality and architecture of the system, present the problems that we had to face during its development, and explain the solutions that we applied.
Download

Paper Nr: 492
Title:

ASSESSING WORKFLOW MANAGEMENT SYSTEMS: A QUANTITATIVE ANALYSIS OF A WORKFLOW EVALUATION MODEL

Authors:

Stephan Poelmans and Hajo A. Reijers

Abstract: Despite the enormous interest in workflow management systems and their widespread adoption by industry, few research studies are available that empirically assess the effectiveness and acceptance of this technology. Our work aims precisely at providing such insights, and this paper presents some of our preliminary quantitative findings. Using a theory-based workflow success model, we have studied the impact of operational workflow technologies on end-users in terms of perceived usefulness, end-user satisfaction and perceived organisational benefits. A survey instrument was used to gather a sample of 246 end-users from two different organizations. Our findings show that the considered workflow applications are generally accepted and positively evaluated. Using partial least squares analysis, the success model was well supported, making it a useful instrument for evaluating future workflow projects.
Download

Paper Nr: 504
Title:

EFFICIENT COMMUNITY MANAGEMENT IN AN INDUSTRIAL DESIGN ENGINEERING WIKI

Authors:

Regine Vroom, Adrie Kooijman and Raymond Jelierse

Abstract: Industrial design engineers draw on a wide variety of research fields when making decisions that will eventually have significant impact on their designs. Obviously, designers cannot master every field, so they are often looking for a simple set of rules of thumb on a particular subject. For this reason a wiki has been set up: www.wikid.eu. Whilst Wikipedia already offers a lot of this information, there is a distinct difference between WikID and Wikipedia: Wikipedia aims to be an encyclopaedia, and therefore tries to be as complete as possible, whereas WikID aims to be a design tool. It offers information in a compact manner tailored to its user group, namely industrial designers. The main subjects of this paper are the research on how to create an efficient structure for the community of WikID and the creation of a tool for managing the community. With the new functionality for managing group memberships and viewing information on users, it will be easier to maintain the community. This will also help in creating a better community which is more inviting to participate in, provided that the assumptions made in this area hold true.
Download

Paper Nr: 509
Title:

Is the application of aspect-oriented software development beneficial? First experimental results

Authors:

Sebastian Kleinschmager and Stefan Hanenberg

Abstract: Aspect-oriented software development is an approach which addresses the construction of software artifacts which traditional software engineering constructs fail to modularize: the so-called crosscutting concerns. However, although aspect-orientation claims to permit a better modularization of crosscutting concerns, it is still not clear whether the application of aspect-oriented constructs has a measurable, positive impact on the construction of software artifacts. This paper addresses this issue by an empirical study which compares the specification of crosscutting concerns using traditional composition techniques and aspect-oriented composition techniques using the object-oriented programming language Java and the aspect-oriented programming language AspectJ.
Download

Paper Nr: 528
Title:

INTRODUCING REAL-TIME BUSINESS CASE DATABASE - An approach to improve system maintenance of complex application landscapes

Authors:

Oliver Daute

Abstract: While the maintenance of single systems is under control nowadays, new challenges arise from the use of linked-up software applications to implement business scenarios. Numerous business processes exchange data across complex application landscapes, using various applications and computing data. The technology underneath has to provide a stable environment while maintaining diverse software, database and operating system components. The challenge is to keep the application environment under control at any given time. The goal is to avoid incidents in business processes and to sustain the application landscape through smaller and larger changes. For the maintenance of complex environments, information about process run-states is indispensable, for example when parts of a system environment must be restored. This paper introduces the Real-Time Business Case Database (RT-BCDB) to control business processes and improve maintenance activities in complex application landscapes. It is a concept for gaining more transparency and visibility of business process activities. RT-BCDB continuously stores information about business cases and their run-states. Service frameworks such as IT Service Management (ITIL) can benefit from RT-BCDB as well.
Download

Paper Nr: 535
Title:

AN EXTENSION OF ONTOLOGY BASED DATABASES TO HANDLE PREFERENCES

Authors:

Dilek Tapucu, Yamine Ait-ameur, Stéphane Jean and Murat Osman UNALIR

Abstract: Ontologies have been defined to make the semantics of data explicit. With the emergence of the Semantic Web, the amount of ontological data (or instances) available has increased. To manage such data, Ontology Based DataBases (OBDBs), which store ontologies and their instance data in the same repository, have been proposed. These databases are associated with exploitation languages supporting description, querying, etc. of both ontologies and data. However, queries usually return a large amount of data that must be sorted to find the relevant items. Moreover, few current approaches consider user preferences when querying. Yet this problem is fundamental for many applications, especially in the e-commerce domain. In this paper, we first propose an extension of an existing OBDB, called OntoDB, through an extension of its ontology model to support the semantic description of preferences. Secondly, an extension of an ontology-based query language, called OntoQL, defined on OntoDB for querying ontological data with preferences, is presented. Finally, an implementation of the proposed extensions is described.
Download

Paper Nr: 544
Title:

A USER-DRIVEN AND A SEMANTIC-BASED ONTOLOGY MAPPING EVOLUTION APPROACH

Authors:

Hélio Martins and Nuno Silva

Abstract: Systems or software agents do not always agree on the information being shared, justifying the use of distinct ontologies for the same domain. To achieve interoperability, declarative mappings are used as a basis for exchanging information between systems. However, in dynamic environments like the Web and the Semantic Web, ontologies constantly evolve, potentially leading to invalid ontology mappings. This paper presents two approaches for managing ontology mapping evolution: a user-centric approach, in which the user defines the mapping evolution strategies to be applied automatically by the system, and a semantic-based approach, in which the ontology’s evolution logs are exploited to capture the semantics of changes, which are then adapted to and applied in the ontology mapping evolution process.
Download

Paper Nr: 562
Title:

A SERVICE-BASED APPROACH FOR DATA INTEGRATION BASED ON BUSINESS PROCESS MODELS

Authors:

Hesley Py, Lucia Castro, Fernanda Baião and Asterio Tanaka

Abstract: Business-IT alignment is gaining importance in enterprises and is already considered essential for efficiently achieving enterprise goals. This has led organizations to follow Enterprise Architecture approaches, with Information Architecture as one of their pillars. Information architecture aims at providing an integrated and holistic view of business information, which requires applying a data integration approach. However, despite several works on data integration research, the problem is far from solved. Highly heterogeneous computing environments present new challenges such as distinct DBMSs, distinct data models, distinct schemas and distinct semantics, all in the same scenario. On the other hand, new issues in the enterprise environment, such as the emergence of BPM and SOA approaches, contribute to a new solution for the problem. This paper presents a service-based approach for data integration in which the services are derived from the organization’s business process models. The proposed approach comprises a framework of different types of services (data services, concept services), a method for identifying data integration services from process models, and a metaschema needed for the automation and customization of the proposed approach in a specific organization. We focus on handling heterogeneities with regard to different DBMSs and differences among data models, schemas and semantics.
Download

Paper Nr: 567
Title:

AUTOMATIC DERIVATION OF SPRING-OSGI BASED WEB ENTERPRISE APPLICATIONS

Authors:

Elder Cirilo, Uirá Kulesza and Carlos J. Pereira de Lucena

Abstract: Component-based technologies (CBTs) are nowadays widely adopted in the development of different kinds of applications. They provide functionalities to facilitate the management of application components and their different configurations. Spring and OSGi are two relevant examples of CBTs in the mainstream scenario. In this paper, we explore the use of Spring/OSGi technologies in the context of automatic product derivation. We illustrate, through a typical web-based enterprise application: (i) how different models of a feature-based product derivation tool can be automatically generated based on the configuration files of Spring and OSGi, and Java annotations; and (ii) how the different abstractions provided by these CBTs can be related to a feature model with the aim of automatically deriving a Spring/OSGi based application or product line.
Download

Paper Nr: 585
Title:

Grounding and Making Sense of Agile Software Development

Authors:

Mark Woodman and Aboubakr Moteleb

Abstract: The paper explores strategic frameworks for sense-making, knowledge management and Grounded Theory methodologies to offer a rationalization of some aspects of agile software development. In a variety of projects where knowledge management forms part of the solution, we have begun to see activities and principles that closely correspond to many aspects of the wide family of agile development methods. We offer reflection on why, as a community, we are attracted to agile methods and consider why they work.
Download

Paper Nr: 633
Title:

KEYMANTIC: A KEYWORD-BASED SEARCH ENGINE USING STRUCTURAL KNOWLEDGE

Authors:

Francesco Guerra, Mirko Orsini, Claudio Sartori, Sonia Bergamaschi and Antonio Sala

Abstract: Traditional techniques for query formulation require knowledge of the database contents, i.e. which data are stored in the data source and how they are represented. In this paper, we discuss the development of a keyword-based search engine for structured data sources. The idea is to couple the ease of use and flexibility of keyword-based search with metadata extracted from data schemata and extensional knowledge, which together constitute a semantic network of knowledge. By translating keywords into SQL statements, we develop a search engine that is effective, semantic-based, and applicable even when instances are not continuously available, such as in integrated data sources or in data sources extracted from the deep web.
Download

Paper Nr: 641
Title:

ESPACE: Web-scale Integration One Step At A Time

Authors:

Kajal Claypool, Jeremy Mineweaser, Dan Van Hook, Elke Rundensteiner and Michael Scarito

Abstract: This paper presents ESpace, a community collaboration infrastructure that provides a social, collaborative network allowing its users to harness collective knowledge to promote communities of interest and expertise within the enterprise. ESpace is a prototype for a pay-as-you-go integration framework that supports loosely to tightly integrated resources within the same infrastructure, where loose integration means pulling resources on the web together based on the tag meta-information associated with them. This is but the first step towards enabling web-scale pay-as-you-go integration by providing fine-grained analysis and integrating substructures within resources, achieving tighter integration for select resources at the user’s behest.
Download

Paper Nr: 52
Title:

GENERIC APPROACH TO AUTOMATIC INDEX UPDATING IN OODBMS

Authors:

Tomasz Kowalski, Radosław Adamus, Kamil Kuliberda and Jacek Wiślicki

Abstract: In this paper, we describe a robust approach to the problem of automatic index updating, i.e. maintaining cohesion between data and indices. Introducing object-oriented notions (classes, inheritance, polymorphism, class methods, etc.) in databases allows defining more complex selection predicates; nevertheless, in order to facilitate the selection process through indices, index updating requires substantial revising. Inadequate index maintenance can lead to serious errors in query processing, as has been shown using the example of the Oracle 11g ORDBMS. The authors' work is based on the Stack-Based Architecture (SBA) and has been implemented and tested in the ODRA (Object Database for Rapid Applications development) OODBMS prototype.
Download

Paper Nr: 127
Title:

Semi-Supervised Information Extraction from Variable-Length Web-Page Lists

Authors:

Daniel Nikovski, Alan Esenther and Akihiro Baba

Abstract: We propose two methods for constructing automated programs for extraction of information from a class of web pages that are very common and of high practical significance --- variable-length lists of records with identical structure. Whereas most existing methods would require multiple example instances of the target web page in order to be able to construct extraction rules, our algorithms require only a single example instance. The first method analyzes the document object model (DOM) tree of the web page to identify repeatable structure that includes all of the specified data fields of interest. The second method provides an interactive way of discovering the list node of the DOM tree by visualizing the correspondence between portions of XPath expressions and visual elements in the web page. Both methods construct extraction rules in the form of XPath expressions, facilitating ease of deployment and integration with other information systems.
Download
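
The first method the abstract describes, identifying repeatable DOM structure and applying one extraction rule to every record of a variable-length list, can be illustrated with a toy sketch (the page, element names and rule below are invented for illustration; this is not the authors' tool):

```python
import xml.etree.ElementTree as ET

PAGE = """
<html><body>
  <ul id="results">
    <li><span>Ada</span><span>1815</span></li>
    <li><span>Alan</span><span>1912</span></li>
    <li><span>Grace</span><span>1906</span></li>
  </ul>
</body></html>
"""

def extract_records(page, list_path=".//ul", field_path="span"):
    """Apply one XPath-style rule to every repeated record under the list node."""
    root = ET.fromstring(page)
    list_node = root.find(list_path)      # the discovered list node
    records = []
    for item in list_node:                # one <li> per record, identical structure
        records.append([f.text for f in item.findall(field_path)])
    return records

print(extract_records(PAGE))
```

Because every record shares the same substructure, a single example instance of the page suffices to fix `list_path` and `field_path`, which mirrors the paper's single-example claim.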

Paper Nr: 157
Title:

TOWARDS A COMMON PUBLIC SERVICE INFRASTRUCTURE FOR SWISS UNIVERSITIES

Authors:

Florian Schnabel, Uwe Heck and Eva Bucherer

Abstract: Due to the Bologna Declaration and the associated procedures of performance management and output funding, universities are undergoing organisational changes both within and across institutions. The need for an appropriate organisational structure and for efficient and effective processes makes support through corresponding IT essential. The IT environment of Swiss universities is currently dominated by a high level of decentralisation and a high degree of proprietary solutions. Economies of scale through joint development or shared services remain untapped. The increasingly essential integration of applications to support either university-internal or cross-organizational processes is also hindered. In this paper we propose a comprehensive service-oriented architecture for Swiss universities to overcome the current situation and to cope with organizational and technical challenges. We further present an application scenario revealing how Swiss universities will benefit from the proposed architecture.
Download

Paper Nr: 200
Title:

AN ARCHITECTURE FOR THE RAPID DEVELOPMENT OF XML-BASED WEB APPLICATIONS

Authors:

José P. Leal and Jorge B. Gonçalves

Abstract: Our research goal is the generation of working web applications from high-level specifications. Based on our experience in using XML transformations for that purpose, we applied this approach to the rapid development of database management applications. The result is an architecture that defines a web application as a set of XML transformations and generates these transformations using second-order transformations from a database schema. We used the Model-View-Controller architectural pattern to assign different roles to transformations, and defined a pipeline of transformations to process an HTTP request. The definition of these transformations is based on a correspondence between data-oriented XML Schema definitions and the Entity-Relationship model. Using this correspondence we were able to produce transformations that implement database operations, forms interface generators and application controllers, as well as the second-order transformations that produce all of them. This paper also includes a description of a RAD system following this architecture that allowed us to perform a critical evaluation of this proposal.
Download

Paper Nr: 211
Title:

ASSESSING DATABASES IN .NET: COMPARING APPROACHES

Authors:

Daniela da Cruz and Pedro R. Henriques

Abstract: Language-Integrated Query (LINQ) recently appeared as the new query language of the .NET framework, the new kid in town. This query language, an extension to C# and Visual Basic, allows query expressions to benefit from features previously available only to imperative code: rich metadata, IntelliSense, compile-time syntax checking, and static typing. In this paper, we compare the methods provided by .NET to query databases (LINQ, SQL and Objects). The comparison is done both in terms of performance and in terms of the approach used. To guide the comparison, a running example is used.
Download
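
The contrast the paper examines between querying through SQL strings and querying through host-language constructs can be mimicked in Python, with `sqlite3` on one side and a plain comprehension on the other (a hedged analogy for illustration, not the paper's .NET code):

```python
import sqlite3

people = [("Ada", 1815), ("Alan", 1912), ("Grace", 1906)]

# SQL route: the query is a string, validated only at run time by the database
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE person(name TEXT, born INTEGER)")
db.executemany("INSERT INTO person VALUES (?, ?)", people)
via_sql = [row[0] for row in
           db.execute("SELECT name FROM person WHERE born > 1900 ORDER BY name")]

# objects route: the query is ordinary host-language code, checked by the tooling
via_objects = sorted(name for name, born in people if born > 1900)

assert via_sql == via_objects == ["Alan", "Grace"]
```

LINQ's selling point, as the abstract notes, is bringing the second route's static checking to queries that ultimately run against a database.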

Paper Nr: 246
Title:

Automatic Detection of Duplicated Attributes in Ontology

Authors:

Irina Astrova and Arne Koschel

Abstract: Semantic heterogeneity is the ambiguous interpretation of terms describing the meaning of data in heterogeneous data sources such as databases. This is a well-known problem in data integration. A recent solution to this problem is to use ontologies, an approach called ontology-based data integration. However, ontologies can contain duplicated attributes, which can lead to improper integration results. This paper proposes a novel approach that analyzes a workload of queries over an ontology to automatically calculate (semantic) distances between attributes, which are then used for duplicate detection.
Download

Paper Nr: 268
Title:

Efficiently Locating Web Services Using A Sequence-based Schema Matching Approach

Authors:

Alsayed Algergawy, Gunter Saake and Eike Schallehn

Abstract: Locating desired Web services has become a challenging research problem due to the vast number of available Web services within an organization and on the Web. This necessitates the development of flexible, effective, and efficient Web service discovery frameworks. To this purpose, both the semantic description and the structure information of Web services should be exploited in an efficient manner. This paper presents a flexible and efficient service discovery approach based on the use of the Prüfer encoding method to construct a one-to-one correspondence between Web services and sequence representations. In this paper, we describe and experimentally evaluate our Web service discovery approach.
Download
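
The Prüfer encoding the approach relies on maps a labeled tree on n nodes to a unique sequence of length n-2 by repeatedly pruning the smallest-labeled leaf and recording its neighbour. A minimal sketch of the classic construction (not the authors' implementation):

```python
from collections import defaultdict

def prufer_sequence(edges, n):
    """Encode a labeled tree on nodes 0..n-1 as its Prufer sequence."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    degree = {v: len(adj[v]) for v in range(n)}
    seq = []
    for _ in range(n - 2):
        # prune the leaf with the smallest label, record its neighbour
        leaf = min(v for v in range(n) if degree[v] == 1)
        neighbour = next(iter(adj[leaf]))
        seq.append(neighbour)
        adj[neighbour].discard(leaf)
        adj[leaf].clear()
        degree[leaf] = 0
        degree[neighbour] -= 1
    return seq

# the path 0-1-2-3 encodes to [1, 2]; a star around 0 encodes to [0, 0]
print(prufer_sequence([(0, 1), (1, 2), (2, 3)], 4))
```

Because the mapping is one-to-one, sequence comparison can stand in for tree comparison, which is what makes the sequence representation attractive for matching service structures.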

Paper Nr: 328
Title:

A FLEXIBLE EVENT-CONDITION-ACTION (ECA) RULE PROCESSING MECHANISM BASED ON A DYNAMICALLY RECONFIGURABLE STRUCTURE

Authors:

Xiang Li, Ying Qiao and Hongan Wang

Abstract: Adding and deleting Event-Condition-Action (ECA) rules, i.e. modifying the processing structures in an active database, is expected to happen on-the-fly and to cause minimum impact on the existing processing of ECA rules. In this paper, we present a flexible ECA rule processing mechanism for active databases. It uses a dynamically reconfigurable structure, called the unit-mail graph (UMG), and a middleware, called the Unit Modification and Management Layer (UMML), to localize the impact of adding and deleting ECA rules so as to support on-the-fly rule modification. The ECA rule processing mechanism can continue to work while the user adds or deletes rules. This enables the active database to react to external events arriving at the system during rule modification. We also use a smart home environment to evaluate our work.
Download

Paper Nr: 477
Title:

AN MDA APPROACH FOR OBJECT-RELATIONAL MAPPING

Authors:

Catalin Strimbei and Marin Fotache

Abstract: This paper reviews several emergent approaches that attempt to capitalize on the SQL data “engineering” standard in current object-relational mapping methodologies. As a particular contribution, we discuss a slightly different OR mapping approach based on ORDBMS extension mechanisms that allow publishing new data structures as Abstract Data Types (ADTs).
Download

Paper Nr: 483
Title:

INNOVATIVE PROCESS EXECUTION IN SERVICE-ORIENTED ENVIRONMENTS

Authors:

Dirk Habich, Wolfgang Lehner, Steffen Preissler and Hannes Voigt

Abstract: Today's information systems are often built on the foundation of service-oriented environments. Although the fundamental purpose of an information system is the processing of data and information, the service-oriented architecture (SOA) does not treat data as a first-class citizen. Current SOA technologies support neither the explicit modeling of data flows in common business process modeling languages (such as BPMN) nor the usage of specialized data transformation and propagation technologies (for instance, ETL tools) on the process execution layer (BPEL). In this paper, we introduce our data-aware approach from both the execution perspective and the modeling perspective of business processes.
Download

Paper Nr: 630
Title:

TISM, a Tool for Information Systems Management

Authors:

António Trigo, João Barroso and João Varajão

Abstract: The complexity of Information Technology and Information Systems within organizations keeps growing rapidly. As a result, the work of the Chief Information Officer is becoming increasingly difficult, since he has to manage multiple technologies and perform several activities of different natures. In this position paper, we propose the development of a new tool for Chief Information Officers that will systematize and aggregate information on the enterprise Information Systems Function.
Download

Area 2 - Artificial Intelligence and Decision Support Systems

Full Papers
Paper Nr: 85
Title:

A Self-Learning System for Object Categorization

Authors:

Danil Prokhorov

Abstract: We propose a learning system for object categorization which utilizes information from multiple sensors. The system learns not only prior to its deployment in a supervised mode but also in a self-learning mode. A competition-based neural network learning algorithm is used to distinguish between representations of different categories. We illustrate the system's application with an example of image categorization. A radar guides the selection of candidate images provided by the camera for subsequent analysis by our learning method. Radar information is coupled with navigational information for improved localization of objects during self-learning.
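
The competition-based learning the abstract mentions can be sketched as classic winner-take-all prototype learning, where only the prototype closest to each sample is updated (an illustrative sketch, not the authors' algorithm; the data and parameters are invented):

```python
def competitive_learning(data, k, epochs=50, lr=0.1):
    """Winner-take-all learning: only the nearest prototype moves toward each sample."""
    prototypes = [list(data[i]) for i in range(k)]   # seed with the first k samples
    for _ in range(epochs):
        for x in data:
            # competition: the prototype with the smallest squared distance wins
            w = min(range(k),
                    key=lambda i: sum((p - a) ** 2 for p, a in zip(prototypes[i], x)))
            # only the winner is updated, pulled toward the sample
            prototypes[w] = [p + lr * (a - p) for p, a in zip(prototypes[w], x)]
    return prototypes

# two well-separated 2-D blobs: each prototype settles near one blob's centre
data = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9), (4.9, 5.2)]
print(competitive_learning(data, k=2))
```

In a self-learning mode, such a rule lets new sensor readings refine the category representations without labels.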

Paper Nr: 88
Title:

A SELF-TUNING OF MEMBERSHIP FUNCTIONS FOR MEDICAL DIAGNOSIS

Authors:

Nuanwan Soonthornphisaj

Abstract: In this paper, a self-tuning method for fuzzy logic membership functions is proposed for medical diagnosis. Our algorithm uses a decision tree as a tool to generate three kinds of membership functions: triangular, bell-shaped and Gaussian. The system automatically selects the form of membership function that provides the best classification result. The advantage of our system is that it does not need an expert to create a membership function for each feature; instead, the system creates various membership functions using a learning algorithm that learns from the training set. In some domains, users can provide prior knowledge that can be used to enhance the performance of the classifier. In the medical domain, however, we found that some diseases are difficult to diagnose. This would not be a problem if the disease had been completely explored in the medical literature. In order to rule out a disease in a patient, a domain expert would have to provide membership functions for the many attributes obtained from laboratory tests. Since the disease has not been completely explored, the membership functions provided by the expert might be biased and lead to poor classification performance. The performance of our proposed algorithm has been investigated on 2 medical data sets. The experimental results show that our approach can effectively enhance classification performance compared to neural networks and traditional fuzzy logic.
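
The three membership-function shapes the abstract names have standard closed forms; a sketch of the candidate functions such a system would choose among (the parameterizations below are illustrative, not the paper's tuned values):

```python
import math

def triangular(x, a, b, c):
    """Triangle rising from a to a peak at b, falling back to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def bell(x, a, b, c):
    """Generalized bell curve centred at c with width a and slope b."""
    return 1.0 / (1.0 + abs((x - c) / a) ** (2 * b))

def gaussian(x, mean, sigma):
    """Gaussian curve centred at mean with spread sigma."""
    return math.exp(-((x - mean) ** 2) / (2 * sigma ** 2))

# membership degree of the same reading x = 7 under each candidate shape;
# the self-tuning step would keep the shape that classifies best
print(triangular(7, 0, 5, 10), bell(7, 2, 4, 5), gaussian(7, 5, 2))
```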

Paper Nr: 128
Title:

INSOLVENCY PREDICTION OF IRISH COMPANIES USING BACKPROPAGATION AND FUZZY ARTMAP NEURAL NETWORKS

Authors:

Anatoli Nachev, Borislav Stoyanov and Seamus Hill

Abstract: This study explores experimentally the potential of BPNNs and Fuzzy ARTMAP neural networks to predict insolvency of Irish firms. We used financial information on Irish companies over a period of six years, preprocessed appropriately for use with neural networks. Prediction results show that, with certain network parameters, the Fuzzy ARTMAP model outperforms the BPNN. It also outperforms self-organising feature maps, as reported by other studies that use the same dataset. The accuracy of the predictions was validated by ROC analysis, AUC metrics, and leave-one-out cross-validation.

Paper Nr: 135
Title:

FREQUENT SUBGRAPH-BASED APPROACH FOR CLASSIFYING VIETNAMESE TEXT DOCUMENTS

Authors:

Tu H. Nguyen and Kiem Hoang

Abstract: In this paper we present a simple approach to Vietnamese text classification without word segmentation, based on frequent subgraph mining techniques. A graph-based model, instead of the traditional vector-based model, is used for document representation. The classification model employs structural patterns (subgraphs) and the Dice measure of similarity to identify the class of a document. The method is evaluated on a Vietnamese data set for classification accuracy. Results show that it can outperform the k-NN algorithm (based on vector and hybrid document representations) in terms of accuracy and classification time.

Paper Nr: 173
Title:

RANDOM PROJECTION ENSEMBLE CLASSIFIERS

Authors:

Alon Schclar and Lior Rokach

Abstract: We introduce a novel ensemble model based on random projections. The contribution of using random projections is two-fold. First, the randomness provides the diversity which is required for the construction of an ensemble model. Second, random projections embed the original set into a space of lower dimension while preserving the dataset's geometrical structure up to a given distortion. This reduces the computational complexity of model construction as well as the complexity of classification. Furthermore, dimensionality reduction removes noisy features from the data and represents the information inherent in the raw data with a small number of features. The noise removal increases the accuracy of the classifier. The proposed scheme was tested using WEKA-based procedures applied to 16 benchmark datasets from the UCI repository.
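
The random-projection step can be sketched as multiplying the data by a random Gaussian matrix whose scaling preserves squared distances in expectation; varying the seed yields the diversity the ensemble needs, with one base classifier trained per projected view (an illustrative sketch, not the paper's WEKA setup):

```python
import random

def random_projection(data, target_dim, seed=0):
    """Project rows of `data` (lists of floats) down to `target_dim` dimensions
    using one shared random Gaussian matrix."""
    d = len(data[0])
    rng = random.Random(seed)
    # N(0, 1/target_dim) entries keep squared distances unchanged in expectation
    R = [[rng.gauss(0.0, 1.0) / target_dim ** 0.5 for _ in range(target_dim)]
         for _ in range(d)]
    return [[sum(x[i] * R[i][j] for i in range(d)) for j in range(target_dim)]
            for x in data]

# each seed gives a different low-dimensional view: the source of ensemble diversity
views = [random_projection([[1.0] * 10, [0.0] * 10], 4, seed=s) for s in range(3)]
```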

Paper Nr: 188
Title:

KNOWLEDGE REUSE IN DATA MINING PROJECTS AND ITS PRACTICAL APPLICATION

Authors:

Rodrigo Cunha, Paulo Adeodato and Silvio Romero De Lemos Meira

Abstract: The objective of this paper is to provide an integrated environment for knowledge reuse in KDD, preventing the recurrence of known errors and reinforcing project successes, based on previous experience. It combines methodologies from project management, data warehousing, data mining and knowledge representation. Unlike purely algorithmic papers, this one focuses on performance metrics used for managerial purposes, such as the time taken for solution development and the number of files not automatically managed, among others, while preserving equivalent performance on the technical solution quality metrics. The environment has been validated with metadata collected from previous KDD projects developed and deployed for real-world applications by the development team members. The case studies carried out in actual contracted projects have shown that this environment assesses the risk of failure for new projects, controls and documents the whole KDD project development process, and helps in understanding the conditions that lead KDD projects to success or failure.

Paper Nr: 231
Title:

Enhancing Text Clustering Performance Using Semantic Similarity

Authors:

Walaa Gad and Mohamed Kamel

Abstract: Text document clustering can be challenging due to the complex linguistic properties of text documents. Most clustering techniques are based on the traditional bag of words to represent documents. In such a representation, ambiguity, synonymy and semantic similarities may not be captured using traditional text mining techniques based on word and/or phrase frequencies in the text. In this paper, we propose a semantic similarity based model to capture the semantics of the text. The proposed model, in conjunction with a lexical ontology, solves the synonym and hypernym problems. It utilizes WordNet as an ontology and uses the adapted Lesk algorithm to examine and extract the relationships between terms. The proposed model reflects these relationships through semantic weights added to the term frequency weight to represent the semantic similarity between terms. Experiments using the proposed semantic similarity based model in text clustering were conducted. The obtained results show promising performance improvements compared to the traditional vector space model as well as other existing methods that include semantic similarity measures in text clustering.

Paper Nr: 233
Title:

Stereo Matching Using Synchronous Hopfield Neural Network

Authors:

Te-Hsiu Sun

Abstract: Deriving depth information has been an important issue in computer vision. In this area, stereo vision is an important technique for 3D information acquisition. This paper presents a scanline-based stereo matching technique using synchronous Hopfield neural networks (SHNN). Feature points are extracted and selected using the Sobel operator and a user-defined threshold for a pair of scanned images. Then, the scanline-based stereo matching problem is formulated as an optimization task in which an energy function, including dissimilarity, continuity, disparity and uniqueness mapping properties, is minimized. Finally, incorrect matches are eliminated by applying a false-target removal rule. The proposed method is verified with an experiment using several commonly used stereo images. The experimental results show that the proposed method effectively solves the stereo matching problem and is applicable to various areas.
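
The feature-extraction step, Sobel gradients thresholded by a user-defined value, can be sketched as follows (a toy illustration on a synthetic step edge; the image and threshold are invented, and this is not the paper's implementation):

```python
def sobel_features(img, threshold):
    """Return (row, col) points where the Sobel gradient magnitude exceeds threshold."""
    KX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
    KY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel
    h, w = len(img), len(img[0])
    points = []
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            gx = sum(KX[i][j] * img[r - 1 + i][c - 1 + j]
                     for i in range(3) for j in range(3))
            gy = sum(KY[i][j] * img[r - 1 + i][c - 1 + j]
                     for i in range(3) for j in range(3))
            if (gx * gx + gy * gy) ** 0.5 > threshold:
                points.append((r, c))
    return points

# a vertical step edge: strong horizontal gradient along the boundary columns
img = [[0, 0, 9, 9, 9]] * 5
print(sobel_features(img, threshold=10))
```

Only the thresholded points on each scanline then enter the SHNN matching stage, which keeps the optimization problem small.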

Paper Nr: 245
Title:

Monotonic Monitoring of Discrete-Event Systems with Uncertain Temporal Observations

Authors:

Marina Zanella and Gianfranco Lamperti

Abstract: In discrete-event system monitoring, the observation is fragmented over time and a set of candidate diagnoses is output at the reception of each fragment (so as to allow for possible control and recovery actions). When the observation is uncertain (typically, a DAG with partial temporal ordering) a problem arises about the significance of the monitoring output: two sets of diagnoses, relevant to two consecutive observation fragments, may be unrelated to one another, and, even worse, they may be unrelated to the actual diagnosis. To cope with this problem, the notion of monotonic monitoring is introduced, which is supported by specific constraints on the fragmentation of the uncertain temporal observation, leading to the notion of stratification. The paper shows that only under stratified observations can significant monitoring results be guaranteed.

Paper Nr: 250
Title:

A SERVICE COMPOSITION FRAMEWORK FOR DECISION MAKING UNDER UNCERTAINTY

Authors:

Malak Al-Nory, Alexander Brodsky and Hadon Nash

Abstract: Proposed and developed is a service composition framework for decision-making under uncertainty, which is applicable to stochastic optimization of supply chains. Also developed is a library of modeling components which include Scenario, Random Environment, and Stochastic Service. Service models are classes in the Java programming language extended with decision variables, assertions, and business objective constructs. The constructor of a stochastic service formulates a recourse stochastic program and finds the optimal instantiation of real values into the service initial and corrective decision variables leading to the optimal business objective. The optimization is not done by repeated simulation runs, but rather by automatic compilation of the simulation model in Java into a mathematical programming model in AMPL and solving it using an external solver.

Paper Nr: 295
Title:

A MULTI-CRITERIA RESOURCE SELECTION METHOD FOR SOFTWARE PROJECTS USING FUZZY LOGIC

Authors:

Daniel A. Callegari and Ricardo M. Bastos

Abstract: When planning a software project, we must assign resources to tasks. Resource selection is a fundamental step to resource allocation since we first need to find the most suitable candidates for each task before deciding who will actually perform them. In order to rank available resources, we have to evaluate their skills and define the corresponding selection criteria for the tasks. While being the choice of many approaches, representing skill levels by means of ordinal scales and defining selection criteria using binary operations imply some limitations. Pure mathematical approaches are difficult to model and suffer from a partial loss in meaning in terms of knowledge representation. Fuzzy Logic, as an extension to classical sets and logic, uses linguistic variables and a continuous range of truth values for decision and set membership. It allows handling inherent uncertainties in this process, while hiding the complexity from the final user. In this paper we show how Fuzzy Logic can be applied to the resource selection problem. A prototype was built to demonstrate and evaluate the results.

Paper Nr: 382
Title:

AN OPTIMIZED HYBRID KOHONEN NEURAL NETWORK FOR AMBIGUITY DETECTION IN CLUSTER ANALYSIS USING SIMULATED ANNEALING

Authors:

Ehsan Mohebi

Abstract: One of the popular tools in the exploratory phase of data mining and pattern recognition is the Kohonen Self-Organizing Map (SOM). The SOM maps the input space onto a 2-dimensional grid and forms clusters. Recent experiments have shown that, to capture the ambiguity involved in cluster analysis, it is not necessary to have crisp boundaries in some clustering operations. In this paper, to overcome the ambiguity involved in cluster analysis, a combination of Rough Set Theory and Simulated Annealing applied to the output grid of the SOM is proposed. Experiments show that the proposed two-stage algorithm, which first uses the SOM to produce the prototypes and then applies rough sets and SA to assign the overlapped data to the true clusters they belong to, outperforms the crisp clustering algorithms (i.e. I-SOM) and reduces the errors.

Paper Nr: 441
Title:

INTERACTIVE QUALITY ANALYSIS IN THE AUTOMOTIVE INDUSTRY: Concept and Design of an Interactive, Web-based Data Mining Application

Authors:

Steffen Fritzsche, Markus Mueller and Carsten Lanquillon

Abstract: In this paper we present an interactive, web-based data mining application that supports quality analysis in the automotive industry. Our tool is designed to help automotive engineers in their task of identifying the root cause of quality issues. Knowing what exactly caused a problem and identifying vehicles that are most likely to be affected by the issue, helps in planning and implementing effective service actions. We show how data mining can be applied in the given application domain, point out the key role of interactivity and propose an appropriate software architecture.

Paper Nr: 466
Title:

NARFO ALGORITHM: MINING NON-REDUNDANT AND GENERALIZED ASSOCIATION RULES BASED ON FUZZY ONTOLOGIES

Authors:

Rafael G. Miani, Marilde Terezinha Prado Santos, Cristiane A. Yaguinuma and Mauro Biajiz

Abstract: Traditional approaches for mining generalized association rules are based only on database contents and focus on exact matches among items. However, in many applications, the use of some background knowledge, such as ontologies, can enrich the discovery process and generate semantically richer rules. Accordingly, this paper proposes the NARFO algorithm, a new algorithm for mining non-redundant and generalized association rules based on fuzzy ontologies. A fuzzy ontology is used as background knowledge to aid the discovery process and the generation of rules. One contribution of this work is the generalization of non-frequent itemsets, which helps to extract important and meaningful knowledge. The NARFO algorithm also contributes at the post-processing stage with its generalization and redundancy treatment. Our experiments showed that the number of rules was reduced considerably, without redundancy, obtaining a 49.45% reduction for a very low minimum support value (0.05) in comparison with the XSSDM algorithm.
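
The generalization idea, counting an itemset as supported when a transaction contains either the item itself or one of its descendants in the ontology, can be sketched with a toy crisp taxonomy (fuzzy membership degrees are omitted for simplicity; the items and taxonomy below are invented):

```python
# hypothetical taxonomy: leaf item -> parent concept
parent = {"lager": "beer", "stout": "beer", "brie": "cheese", "gouda": "cheese"}

transactions = [
    {"lager", "brie"}, {"stout", "gouda"}, {"lager", "gouda"}, {"stout"},
]

def generalized_support(itemset, transactions):
    """Fraction of transactions containing each item or any descendant of it."""
    def covers(t, item):
        return item in t or any(parent.get(x) == item for x in t)
    return sum(all(covers(t, i) for i in itemset)
               for t in transactions) / len(transactions)

# {lager, brie} alone is infrequent, but its generalization {beer, cheese} is not
print(generalized_support({"beer", "cheese"}, transactions))
```

Rules over the generalized concepts can then be kept while redundant specializations they subsume are pruned, which is the post-processing step the abstract describes.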

Paper Nr: 516
Title:

AUTOMATED CONSTRUCTION OF PROCESS GOAL TREES FROM EPC-MODELS TO FACILITATE EXTRACTION OF PROCESS PATTERNS

Authors:

Andreas Bögl, Michael Schrefl, Gustav Pomberger and Norbert Weber

Abstract: A system that enables reuse of process solutions should be able to retrieve “common” or “best practice” pattern solutions (common modelling practices) from existing process descriptions for a certain business goal. Manual extraction of common modelling practices is labour-intensive, tedious and cumbersome. This paper presents an approach for the automated extraction of process goals from Event-driven Process Chains (EPC) and their annotation to EPC functions and events. In order to facilitate goal reasoning for the identification of common modelling practices, an algorithm (G-Tree-Construction) is proposed that constructs a hierarchical goal tree.

Short Papers
Paper Nr: 58
Title:

AUTOMATIC INFORMATION PROCESSING AND UNDERSTANDING IN COGNITIVE BUSINESS SYSTEMS

Authors:

Lidia Ogiela, Ryszard Tadeusiewicz and Marek Ogiela

Abstract: This paper brings the concept of automatic understanding systems (AUS), a new generation in the area of information systems, to the attention of the computer science community as a new possibility for systems analysis and design. The novelty of this idea lies in extending the method of automatic understanding, previously used in medical image analysis, classification and interpretation, to the more general and needed area of systems analysis. The AUS approach is, in essence, different from other approaches such as those based on neural networks, pattern analysis, image interpretation or machine learning. AUS enables the determination of the meaning of analysed data, both numeric and descriptive. The cognitive methods on which the AUS concept and construction are based have roots in the psychological and neurophysiological processes of understanding and describing analysed data as they take place in the human brain.
Download

Paper Nr: 60
Title:

Detecting domestic violence

Authors:

Paul Elzinga, Guido Dedene, Stijn Viaene and Jonas Poelmans

Abstract: Over 90% of the case data from police inquiries is stored as unstructured text in police databases. We use the combination of Formal Concept Analysis and Emergent Self Organizing Maps for exploring a dataset of unstructured police reports out of the Amsterdam-Amstelland police region in the Netherlands. In this paper, we specifically aim at making the reader familiar with how we used these two tools for browsing the dataset and how we discovered useful patterns for labelling cases as domestic or as non-domestic violence.
Download

Paper Nr: 68
Title:

USING QUALITY COSTS IN A MULTI-AGENT SYSTEM FOR AN AIRLINE OPERATIONS CONTROL

Authors:

Antonio M. Castro and Eugénio Oliveira

Abstract: The Airline Operations Control Centre (AOCC) tries to solve unexpected problems that might occur during airline operation. Problems related to aircraft, crew members and passengers are common, and the actions towards their solution are usually known as operations recovery. Usually, the AOCC tries to minimize operational costs while satisfying all the required rules. In this paper we present the implementation of a distributed Multi-Agent System (MAS) representing the existing roles in an AOCC. This MAS has several specialized software agents implementing different algorithms, competing to find the best solution for each problem; the solution includes not only operational costs but also quality costs, so that passenger satisfaction can be considered in the final decision. We present a real case study in which a crew recovery problem is solved. We show that it is possible to find valid solutions with better passenger satisfaction and, under certain conditions, without significantly increasing operational costs.
Download

Paper Nr: 120
Title:

Frequency Assignment Optimization using the Swarm Intelligence Multi-agent Based Algorithm (SIMBA)

Authors:

Grant B. O'Reilly

Abstract: The swarm intelligence multi-agent based algorithm (SIMBA) is presented in this paper. SIMBA utilizes swarm intelligence and a multi-agent system (MAS) to optimize the frequency assignment problem (FAP). SIMBA optimises by considering both local and global (i.e. collective) solutions in the optimization process. Stigmergy single cell optimization (SSCO) is also used by the individual agents in SIMBA. SSCO enables the agents to recognize interference patterns in the frequency assignment structure being optimized and to augment it with frequency selections that minimize the interference. The changing configuration of the frequency assignment structure acts as a source of information that aids the agents in making further decisions. Due to the increasing demand for cellular communication services and the limited available frequency spectrum, optimal frequency assignment is necessary. SIMBA was used to optimize the fixed-spectrum frequency assignment problem (FS-FAP) in cellular radio networks. The results produced by SIMBA were benchmarked against the COST 259 Siemens scenarios. The frequency assignment solutions produced by SIMBA were also implemented in a commercial cellular radio network, and those results are presented.
Download

Paper Nr: 121
Title:

A new heuristic function in Ant-Miner approach

Authors:

Urszula Boryczka and Jan Kozak

Abstract: In this paper, a novel rule discovery system that utilizes Ant Colony Optimization (ACO) is presented. ACO is a metaheuristic inspired by the behavior of real ants, which search for optimal solutions by considering both local heuristics and previous knowledge, observed through pheromone changes. In our approach we want to ensure the good performance of Ant-Miner by applying new versions of the heuristic function in the main rule. We want to emphasize the role of the heuristic function by analyzing the influence of different propositions of these functions on the performance of Ant-Miner. The comparative study is conducted using five data sets from the UCI Machine Learning repository.
Download

Paper Nr: 129
Title:

FORMULATING ASPECTS OF PAYPAL IN THE LOGIC FRAMEWORK OF GBMF

Authors:

Min Li and Christopher Hogger

Abstract: Logic-based modelling methods can benefit business organizations in constructing models that offer flexible knowledge representation supported by correct and effective inference. How best to apply logic-based formalization to informal/semi-formal business modelling remains a continuing research issue. In this paper, we formulate aspects of the general business specification of PayPal in logic programming, applying the logic-based GBMF, a declarative, context-independent, implementable and highly expressive framework for modelling high-level aspects of business. In particular, we introduce the primary PayPal business concepts and relations; specify simple but essential PayPal business processes associated with a knowledge base; and set core business rules and controls to simulate the PayPal case in a fully automatic manner. This modelling method offers the advantages of general-purpose expressiveness and well-understood execution regimes, avoiding the need for a special-purpose engine supporting a specialized modelling language.
Download

Paper Nr: 133
Title:

AN AGENT-BASED SYSTEM FOR HEALTHCARE PROCESS MANAGEMENT

Authors:

Bian Wu, Maggie M. Wang and Hongmin Yun

Abstract: An effective approach to healthcare process management is the key to the delivery of high-quality services in healthcare. An agent-based and process-oriented system is presented in this study to facilitate dynamic and interactive processes in the healthcare environment. The system is developed in three layers: the agent layer for healthcare process management, the database layer for maintenance of medical records and knowledge, and the interface layer for human-computer interaction. The treatment of primary open-angle glaucoma is used as an example to demonstrate the effectiveness of the approach.
Download

Paper Nr: 136
Title:

AOI BASED NEUROFUZZY SYSTEM TO EVALUATE SOLDER JOINT QUALITY

Authors:

Girolamo Fornarelli, Gioacchino Brunetti, Domenico Maiullari, Giuseppe Acciani and Antonio Giaquinto

Abstract: Surface Mount Technology is extensively used in the production of Printed Circuit Boards due to the high density of electronic device integration. In such a production process, several defects can occur in the final electronic components, compromising their correct operation. In this paper, a neurofuzzy solution to process information derived from an automatic optical system is proposed. The designed solution provides a Quality Index for a solder joint, reproducing the modus operandi of an expert and making it automatic. Moreover, the considered solution presents some attractive advantages: a complex acquisition system is not needed, reducing equipment costs and shifting the assessment of a solder joint onto the fuzzy parts. Finally, the typically low computational cost of fuzzy systems can satisfy the strict time constraints of in-line detection in some industrial production processes.
Download

Paper Nr: 145
Title:

AN ORDER CLUSTERING SYSTEM USING ART2 NEURAL NETWORK AND PARTICLE SWARM OPTIMIZATION METHOD

Authors:

R. J. Kuo, T. W. Huang, M. J. Wang and Tung-Lai Hu

Abstract: Surface mount technology (SMT) production system setup is quite time-consuming for industrial personal computers (PCs) because of the high level of customization. Therefore, this study proposes a novel two-stage clustering algorithm for grouping orders together before scheduling in order to reduce SMT setup time. The first stage uses the adaptive resonance theory 2 (ART2) neural network to find the number of clusters and then feeds the results to the second stage, which uses the particle swarm K-means optimization (PSKO) algorithm. An internationally well-known industrial PC manufacturer provided the related evaluation information. The results show that the proposed clustering method outperforms three other clustering algorithms. Through order clustering, scheduling products belonging to the same cluster together can reduce production time and machine idle time.
Download

Paper Nr: 162
Title:

Using UML Class Diagram as a Knowledge Engineering Tool

Authors:

Thomas Raimbault, Stéphane Loiseau and David Genest

Abstract: The UML class diagram is the de facto standard, including in Knowledge Engineering, for modeling the structural knowledge of systems. Attaching importance to visual representation, and building on a previous work in which we gave a logically defined extension of the UML class diagram to represent queries and constraints in the UML visual environment, we present here how to use the model of conceptual graphs to answer queries and to check constraints in concrete terms.
Download

Paper Nr: 167
Title:

K-ANNOTATIONS, An Approach for Conceptual Knowledge Implementation using Metadata Annotations

Authors:

Eduardo S. Estima de Castro, Roberto Tom Price and Mara Abel

Abstract: A number of Knowledge Engineering methodologies have been proposed during the last decades. These methodologies use different languages for knowledge modelling. As most of these languages are based on logic, knowledge models defined using these languages cannot be easily converted to the Object-Oriented (OO) paradigm. This poses a relevant problem for the development phase of knowledge system (KS) projects: several complex knowledge systems are developed using OO languages. So, even if the conceptual model can be modelled using the logical paradigm, it is important to provide a standard knowledge representation within the OO paradigm. This paper introduces k-annotations, an approach for conceptual knowledge implementation using metadata annotations and the aspect-oriented paradigm. The proposed approach allows the development of the conceptual model using the OO paradigm and establishes a standard path to implement this model. The main goal of the approach is to provide ways to reuse both the knowledge design and the related programming code of the model based on a single model representation.
Download

Paper Nr: 170
Title:

ANT PAGERANK ALGORITHM

Authors:

Mahmoud Z. Abdo, Manal Ahmed Ismail and Mohamed Ebraheem Eladawy

Abstract: The amount of global information in the World Wide Web is growing at an incredible rate. Millions of results are returned by search engines, so the ranking of pages in search engines is very important. One of the basic ranking algorithms is the PageRank algorithm. This paper proposes an enhancement of the PageRank algorithm that speeds up the computational process. The enhancement depends on using the Ant algorithm. On average, this technique yields about 7.5 out of ten pages relevant to the query topic, and the total time is reduced by 19.9%.
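For reference, the classical PageRank algorithm that this paper enhances can be sketched as a power iteration over a link graph. This is a minimal textbook formulation only; the ant-based speedup described in the abstract is not reproduced here, and the three-page graph is an invented toy example:

```python
# Classical PageRank via power iteration on a small link graph.
def pagerank(links, damping=0.85, tol=1e-10, max_iter=100):
    """links: dict mapping each node to the list of nodes it points to."""
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(max_iter):
        new = {u: (1.0 - damping) / n for u in nodes}
        for u in nodes:
            out = links[u]
            if out:
                share = damping * rank[u] / len(out)
                for v in out:
                    new[v] += share
            else:  # dangling node: spread its rank evenly over all pages
                for v in nodes:
                    new[v] += damping * rank[u] / n
        if sum(abs(new[u] - rank[u]) for u in nodes) < tol:
            rank = new
            break
        rank = new
    return rank

# Toy graph: A links to B and C, B links to C, C links back to A.
graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(graph)
```

Here C ends up ranked highest, since it is pointed to by both A and B; iterating until the ranks stop changing is exactly the computational cost the paper aims to reduce.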
Download

Paper Nr: 176
Title:

STUDY ON IMAGE CLASSIFICATION BASED ON SVM AND THE FUSION OF MULTIPLE FEATURES

Authors:

Dequan Zheng, Tiejun Zhao, Sheng Li and Yufeng Li

Abstract: In this paper, an adaptive feature-weight-adjusted image categorization algorithm is proposed, based on SVMs and the fusion of multiple features. First, a classifier for each feature is constructed separately and the weight coefficient of each feature is learned automatically from the training data set; finally, a composite classifier is constructed by combining the separate classifiers with their corresponding weight coefficients. The experimental results show that our scheme improves the performance of image categorization and has adaptive ability compared with the general approach. Moreover, the scheme has a certain robustness because it avoids the impact of the differing dimensionality of each feature.
Download

Paper Nr: 183
Title:

A FUZZY-GUIDED GENETIC ALGORITHM FOR QUALITY ENHANCEMENT IN THE SUPPLY CHAIN

Authors:

Cassandra Tang and H.C.W Lau

Abstract: In response to globalization and fierce competition, manufacturers are gradually realizing the challenge of demanding customers who strongly seek high-quality, low-cost products, which implicitly calls for quality improvement of the products in a cost-effective way. Traditional methods focused on specified process optimization for quality enhancement instead of emphasizing organizational collaboration to ensure qualitative performance. This paper introduces an artificial intelligence (AI) approach to attain quality enhancement by automating the selection of process parameters within the supply chain. The originality of this research lies in providing an optimal configuration of process parameters along the supply chain and delivering qualified outputs to raise customer satisfaction.
Download

Paper Nr: 186
Title:

OPTIMUM DCT COMPRESSION OF MEDICAL IMAGES USING NEURAL NETWORKS

Authors:

Adnan Khashman and Kamil Dimililer

Abstract: Medical imaging requires the storage of large quantities of digitized data. Efficient storage and transmission of medical images in telemedicine is of utmost importance; however, due to constrained bandwidth and storage capacity, a medical image must be compressed before transmission or storage. An ideal image compression system must yield high-quality compressed images at a high compression ratio; this can be achieved using DCT-based image compression. However, the contents of the image affect the choice of an optimum compression ratio. In this paper, a neural network is trained to relate x-ray image contents to their optimum compression ratio. Once trained, the optimum DCT compression ratio of an x-ray image can be chosen upon presenting the image to the network. Experimental results suggest that our proposed system can be efficiently used to compress x-rays while maintaining high image quality.
Download

Paper Nr: 210
Title:

A Mining Framework to detect non-technical losses in Power Utilities

Authors:

Felix Biscarri, Ignacio Monedero, Carlos León de Mora, Juan Ignacio Guerrero, Jesús Biscarri and Rocío Millán

Abstract: This paper deals with the characterization of customers in power companies in order to detect consumption Non-Technical Losses (NTL). A new framework is presented to find relevant knowledge about the particular characteristics of electric power customers. The authors use two innovative statistical estimators to weigh the variability and trend of customer consumption. The final classification model is presented as a rule set, based on discovering association rules in the data. The work is illustrated by a case study on a real database.
Download

Paper Nr: 216
Title:

Intelligent Surveillance for Trajectory Analysis

Authors:

Javier Alonso Albusac Jiménez, José Jesus Castro-Schez, Lorenzo M. López-López, David Vallejo and Luis Jiménez Linares

Abstract: Recently, there has been growing interest in the development and deployment of intelligent surveillance systems capable of detecting and analyzing simple and complex events that take place in scenes monitored by cameras. Within this context, the use of expert knowledge may offer a realistic solution when designing a surveillance system. In this paper, we briefly describe the architecture of an intelligent surveillance system based on normality components and expert knowledge. These components specify how a certain object should ideally behave according to one concept. A specific normality component that analyzes the trajectories followed by objects is studied in depth in order to analyze behaviors in an outdoor environment. The analysis of trajectories in the surveillance context is an interesting issue because any moving object always has a goal in an environment, and it usually moves towards one destination to achieve it.
Download

Paper Nr: 234
Title:

USING GRA FOR 2D INVARIANT OBJECT RECOGNITION

Authors:

Te-Hsiu Sun, C.H. Tang, J.C. Liu and Fang-Chih Tien

Abstract: Invariant features are vital to the domain of pattern recognition. This research develops a vision-based invariant recognizer for 2D objects. We present a recognition method that adopts the KRA invariant feature extractor and uses grey relational analysis. The feature extraction derives translation-, rotation-, and scaling-free features from the sequential boundary, described by its K-curvature. Our work represents the object profile with the K-curvature to obtain the position-invariant property; the autocorrelation transformation then ensures the orientation-invariant property. Experiments also reveal that the proposed method, with either the GRA or MD method, offers distinctiveness and effectiveness for part recognition.
Download

Paper Nr: 244
Title:

AN INVESTIGATION INTO DYNAMIC CUSTOMER REQUIREMENT USING COMPUTATIONAL INTELLIGENCE

Authors:

Yih T. Chong and Chun-Hsien Chen

Abstract: The twenty-first century is marked by the fast evolution of customer tastes and needs. Research has shown that customer requirements can vary in the temporal space between product conceptualisation and market introduction. In markets characterized by fast-changing consumer needs, the products generated might often not fit consumer needs as the companies originally expected. This paper advocates the proactive management and analysis of dynamic customer requirements in a bid to lower the risk inherent in developing products for fast-shifting markets. A customer requirements analysis and forecast (CRAF) system that can address this issue is introduced in this paper. Computational intelligence methodologies, viz. artificial immune systems and artificial neural networks, are found to be potential techniques for handling and analysing dynamic customer requirements. The investigation aims to support product development functions in the pursuit of generating products for near-future markets.
Download

Paper Nr: 247
Title:

The Role of Data Mining Techniques in Emergency Management

Authors:

Ning Chen and An Chen

Abstract: Emergency management is becoming more and more attractive in both theory and practice due to frequently occurring incidents around the world. The objective of emergency management is to make optimal decisions to decrease or eliminate the harm caused by incidents. Nowadays, the overwhelming amount of information leads to a great need for effective data analysis for the purpose of well-informed decisions. The potential of data mining has been demonstrated through the success of decision-making modules in present-day emergency management systems. In this paper, we review advanced data mining techniques applied in emergency management and indicate some promising future research directions.
Download

Paper Nr: 266
Title:

A decision support system for multi-plant assembly sequence planning using a PSO approach

Authors:

Yuan-Jye Tseng, Feng-Yi Huang and Jian-Yu Chen

Abstract: In a multi-plant collaborative manufacturing system in a global logistics chain, a product can be manufactured and assembled at different plants located at various locations. In this research, a decision support system for multi-plant assembly sequence planning is presented. The multi-plant assembly sequence planning model integrates two tasks, assembly sequence planning and plant assignment. In assembly sequence planning, the components and assembly operations are sequenced according to the operational constraints and precedence constraints to achieve assembly cost objectives. In plant assignment, the components and assembly operations are assigned to the suitable plants under the constraints of plant capabilities to achieve multi-plant cost objectives. A particle swarm optimization (PSO) solution approach is presented by encoding a particle using a position matrix defined by the numbers of components and plants. The PSO algorithm simultaneously performs assembly sequence planning and plant assignment with an objective of minimizing the total of assembly operational costs and multi-plant costs. The main contribution lies in the new multi-plant assembly sequence planning model and the new PSO solution method. The test results show that the presented method is feasible and efficient for solving the multi-plant assembly sequence planning problem. In this paper, an example product is tested and illustrated.
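The particle swarm optimization at the core of the abstract above can be illustrated with a generic continuous PSO loop. This is a minimal plain-PSO sketch minimizing a test function; the paper's encoding of particles as component-by-plant position matrices, and its assembly-cost objective, are not reproduced here:

```python
# Plain particle swarm optimization over a continuous search space.
import random

def pso(f, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0, seed=0):
    """Minimize f over [lo, hi]^dim with a basic inertia-weight PSO."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]               # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # velocity update: inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Example: minimize the 3-dimensional sphere function, optimum at the origin.
best, val = pso(lambda x: sum(v * v for v in x), dim=3)
```

In the paper's setting, the position would instead be a matrix over components and plants, and `f` would total the assembly operational costs and multi-plant costs.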
Download

Paper Nr: 294
Title:

TERM WEIGHTING: NOVEL FUZZY LOGIC BASED METHOD VS. CLASSICAL TF-IDF METHOD FOR WEB INFORMATION EXTRACTION

Authors:

Jorge Ropero, Ariel Gomez, Alejandro Carrasco Muñoz and Carlos León de Mora

Abstract: Solving the term weighting problem is one of the most important tasks in Information Retrieval and Information Extraction. Typically, the TF-IDF method has been widely used for determining the weight of a term. In this paper, we propose a novel alternative method based on fuzzy logic. The main advantage of the proposed method is that it obtains better results, especially in terms of extracting not only the most suitable information but also related information. This method will be used in the design of a Web Intelligent Agent that will soon be deployed on the University of Seville web page.
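The classical TF-IDF baseline that the proposed fuzzy method is compared against can be sketched as follows. This is the standard textbook formulation (term frequency times log inverse document frequency); the example documents are invented for illustration:

```python
# Classical TF-IDF term weighting over a small tokenized corpus.
import math
from collections import Counter

def tfidf(docs):
    """docs: list of token lists. Returns one dict per document: term -> weight."""
    n = len(docs)
    df = Counter()                      # document frequency of each term
    for doc in docs:
        df.update(set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        # tf-idf(t) = (count / doc length) * log(N / df(t))
        weights.append({t: (c / total) * math.log(n / df[t])
                        for t, c in tf.items()})
    return weights

docs = [["fuzzy", "logic", "web"], ["web", "agent"], ["fuzzy", "sets"]]
w = tfidf(docs)
```

In the second document, "agent" (appearing in one document) outweighs "web" (appearing in two), which is exactly the discriminative behaviour the fuzzy alternative seeks to improve on.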
Download

Paper Nr: 297
Title:

DECISION SUPPORT SYSTEM FOR CLASSIFICATION OF NATURAL RISK IN MARITIME CONSTRUCTION

Authors:

Marco García, Andrés Alonso Quintanilla, Amelia Bilbao Terol and Alfredo Alguero

Abstract: The objective of this paper is the prevention of workplace hazards in maritime works – ports, drilling and others – that may arise from the natural surroundings: tides, wind, visibility, rain and so on. On the basis of both historical and predicted data in certain variables, a system has been designed that uses data mining techniques to provide prior decision-making support as to whether to execute given work on a particular day. The system also yields a numerical evaluation of the risk of performing the activity according to the additional circumstances affecting it: the number of workers and the machinery involved, the estimated monetary cost of an accident and so on. The computer tool used as a framework is powerful and versatile, allowing the user to define the activities engaged in work, each with the variables and types deemed suitable. Each variable can be fed data directly by the user or automatically from text files, tables, chromatic maps on web sites, data loggers, etc. The tool can also define alternative models for risk prognosis based on functional formulas of considerable complexity.
Download

Paper Nr: 300
Title:

Building tailored ontologies from very large knowledge resources

Authors:

Victoria Nebot and Rafael Berlanga Llavori

Abstract: Nowadays, very large domain knowledge resources are being developed in domains like biomedicine. Users and applications can benefit enormously from these repositories in very different tasks, such as visualization, vocabulary homogenization and classification. However, due to their large size and lack of formal semantics, they cannot be properly managed and exploited. Instead, it is necessary to derive small and useful logic-based ontologies from these large knowledge resources so that they become manageable and the user can benefit from the encoded semantics. In this work we present a novel framework for efficiently indexing and generating ontologies according to user requirements. Moreover, the generated ontologies are encoded using OWL logic-based axioms so that they are provided with reasoning capabilities. The framework relies on an interval labeling scheme that efficiently manages the transitive relationships present in the domain knowledge resources. We have evaluated the proposed framework over the Unified Medical Language System (UMLS). The results show very good performance and scalability, demonstrating the applicability of the proposed framework in real scenarios.
Download

Paper Nr: 315
Title:

A PROJECTION-BASED HYBRID SEQUENTIAL PATTERNS MINING ALGORITHM

Authors:

Chichang Jou

Abstract: Sequential pattern mining finds frequently occurring patterns of item sequences from the serial order of items in a transaction database. The sets of frequent hybrid sequential patterns obtained by previous research either are incomplete or do not scale with growing database sizes. We design and implement a Projection-based Hybrid Sequential PAttern Mining algorithm, PHSPAM, to remedy these problems. PHSPAM first builds a supplemented frequent one-sequence itemset to collect items that may appear in frequent hybrid sequential patterns. The mining procedure is then performed recursively in the pattern-growth manner, calculating the support of patterns through projected position arrays, projected support arrays, and projected databases. We compare the results and performance of PHSPAM with those of other hybrid sequential pattern mining algorithms, GFP2 and CHSPAM.
Download

Paper Nr: 316
Title:

The Signing of a Professional Athlete: Reducing Uncertainty with a Weighted Mean Hemimetric for Phi−Fuzzy Subsets

Authors:

Julio Rojas-Mora and Jaime Gil-Lafuente

Abstract: In this paper we present a tool to help reduce the uncertainty in the decision-making process associated with the selection and hiring of a professional athlete. A weighted mean hemimetric for Phi-fuzzy subsets with trapezoidal fuzzy numbers (TrFNs) as their elements allows candidates to be compared to the “ideal” player that the technical body of a team believes should be hired.
Download

Paper Nr: 342
Title:

Graph Structure Learning for Task Ordering

Authors:

Yiming Yang, Abhimanyu Lad, Henry Shu, Bryan Kisiel, Chad Cumby, Rayid Ghani and Katharina Probst

Abstract: In many practical applications, multiple interrelated tasks must be accomplished sequentially through user interaction with retrieval, classification and recommendation systems. The ordering of the tasks may have a significant impact on the overall utility (or performance) of the systems; hence optimal ordering of tasks is desirable. However, manual specification of optimal ordering is often difficult when task dependencies are complex, and exhaustive search for the optimal order is computationally intractable when the number of tasks is large. We propose a novel approach to this problem by using a directed graph to represent partial-order preferences among task pairs, and using link analysis (HITS and PageRank) over the graph as a heuristic to order tasks based on how important they are in reinforcing and propagating the ordering preference. These strategies allow us to find near-optimal solutions with efficient computation, scalable to large applications. We conducted a comparative evaluation of the proposed approach on a form-filling application involving a large collection of business proposals from the Accenture Consulting & Technology Company, using SVM classifiers to recommend keywords, collaborators, customers, technical categories and other related fillers for multiple fields in each proposal. With the proposed approach we obtained near-optimal task orders that improved the utility of the recommendation system by 27% in macro-averaged F1, and 13% in micro-averaged F1, compared to the results obtained using arbitrarily chosen orders, and that were competitive against the best order suggested by domain experts.
Download

Paper Nr: 361
Title:

Key Characteristics in Selecting Software Tools for Knowledge Management Tools

Authors:

Hanlie Smuts, Alta Van der Merwe and Marianne Loock

Abstract: The shift to knowledge as the primary source of value results in the new economy being led by those who manage knowledge effectively. Today’s organisations are creating and leveraging knowledge, data and information at an unprecedented pace – a phenomenon that makes the use of technology not an option, but a necessity. Software tools in knowledge management are a collection of technologies and are not necessarily acquired as a single software solution. Furthermore, these knowledge management software tools have the advantage of using the organisation’s existing information technology infrastructure. Organisations and business decision makers spend a great deal of resources and make significant investments in the latest technology, systems and infrastructure to support knowledge management. It is imperative that these investments are validated properly, made wisely and that the most appropriate technologies and software tools are selected or combined to facilitate knowledge management. In this paper, we propose a set of characteristics that should support decision makers in the selection of software tools for knowledge management. These characteristics were derived from both in-depth interviews and existing theory in publications.
Download

Paper Nr: 387
Title:

Towards a Semantic System for Managing Clinical Processes

Authors:

Massimo Ruffolo

Abstract: Managing risks is a high-priority theme for healthcare professionals and providers. A promising approach for reducing risks and enhancing patient safety is the definition of process-oriented healthcare information systems. In this area, a number of approaches to medical knowledge and clinical process representation and management are available, but no currently available system provides an integrated approach to both declarative and procedural medical knowledge. Furthermore, little attention is paid to systems that make it possible to manage and prevent risks and errors. This work describes a system aimed at supporting a semantic, process-centered vision of healthcare practices. The system is founded on an ontology-based clinical knowledge representation framework that allows representing and managing, in a unified way, both medical knowledge and clinical processes. The system provides functionalities for: (i) creating ontologies of clinical processes that can be queried and explored in a semantic fashion; (ii) expressing error and risk rules (by ad hoc reasoning tasks) that can be used for monitored process execution; (iii) executing clinical processes and acquiring clinical process instances by means of either workflow enactment or dynamic workflow composition; (iv) monitoring clinical processes during execution by running reasoning tasks; (v) analyzing acquired clinical process schemas and instances by semantic querying. The proposed system makes available decision support capabilities able to enhance risk control and patient safety. System features are described using an example of a clinical process regarding the care of breast neoplasm.
Download

Paper Nr: 393
Title:

Mining Patterns in the Presence of Domain Knowledge

Authors:

Cláudia Antunes

Abstract: One of the main difficulties of pattern mining is dealing with items of different natures in the same itemset, which can occur in any domain except basket analysis. Indeed, if we consider the analysis of any transactional database composed of several entities and relationships, it is easy to understand that the equality function may be different for each element, which makes the identification of frequent patterns difficult. This situation is just one example of the need for using domain knowledge to manage the discovery process, but several others, no less important, can be enumerated, such as the need to consider patterns at higher levels of abstraction or the ability to deal with structured data. In this paper, we show how the Onto4AR framework can be explored to overcome these situations in a natural way, illustrating its use in the analysis of two distinct case studies. In the first one, exploring a cinematographic dataset, we capture patterns that characterize kinds of movies according to the actors present in their casts and their roles. In the second one, identifying molecular fragments, we find structured patterns, including chains, rings and stars.
Download

Paper Nr: 469
Title:

USER-DRIVEN ASSOCIATION RULE MINING USING A LOCAL ALGORITHM

Authors:

Marinica Claudia, Andrei Olaru and Fabrice Guillet

Abstract: One of the main issues in the process of Knowledge Discovery in Databases is the mining of association rules. Although a great variety of pattern mining algorithms have been designed for this purpose, their main problem lies in the large number of extracted rules, which need to be filtered in a post-processing step to yield fewer but more interesting results. In this paper we suggest a new algorithm that allows the user to explore the rule space locally and incrementally. The user's interests and preferences are represented by means of a newly proposed formalism, the Rule Schemas. The method has been successfully tested on the database provided by Nantes Habitat.
Download

Paper Nr: 491
Title:

Monitoring Cooperative Business Contracts in an Institutional Environment

Authors:

Henrique Lopes Cardoso and Eugénio Oliveira

Abstract: The automation of B2B processes is currently a hot research topic. In particular, multi-agent systems have been used to address this arena, where agents can represent enterprises in an interaction environment, automating tasks such as contract negotiation and enactment. Contract monitoring tools are becoming more important as the level of automation of business relationships increases. When business is seen as a joint activity aimed at pursuing a common goal, the successful execution of the contract benefits all involved parties, and thus each of them should try to facilitate the compliance of their partners. Taking these concerns into account and inspecting international legislation on trade procedures, in this paper we present an approach to modelling contractual obligations: obligations are directed from bearers to counterparties and have flexible deadlines. We formalize the semantics of such obligations using temporal logic, and we provide rules that allow for monitoring them. The proposed implementation is based on a rule-based forward-chaining production system.
Download

Paper Nr: 494
Title:

A SIMULATION-BASED METHODOLOGY TO ASSIST DECISION-MAKERS IN REAL VEHICLE ROUTING PROBLEMS

Authors:

Angel A. Juan, Javier Faulin, Daniel Riera, David M. i and Josep Jorba

Abstract: The aim of this work is to present a simulation-based algorithm that not only provides a competitive solution for instances of the Capacitated Vehicle Routing Problem (CVRP), but is also able to efficiently generate a full database of alternative good solutions with different characteristics. These characteristics are related to solution properties such as route attractiveness, load balancing, non-tangible costs, fuzzy preferences, etc. This double-goal approach can be especially interesting for the decision-maker, who can use the algorithm to construct a database of solutions and then query it to obtain the feasible solutions that best fit his/her utility function without incurring a severe increase in costs. In order to provide high-quality solutions, our algorithm combines a classical CVRP heuristic, the Clarke and Wright savings method, with Monte Carlo simulation using state-of-the-art random number generators. The resulting algorithm is tested against some well-known benchmarks, and the results obtained so far are promising enough to encourage future developments and improvements of the algorithm and its applications in real-life scenarios.
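The Clarke and Wright savings heuristic named in the abstract can be sketched in its deterministic textbook form. The paper's Monte Carlo randomization of the savings choice is not reproduced here, and the distance matrix and demands below are an invented toy instance:

```python
# Clarke and Wright savings heuristic for the CVRP (depot = index 0).
def clarke_wright(dist, demand, capacity):
    """dist: (n+1)x(n+1) symmetric matrix; demand[i] for customers 1..n."""
    n = len(demand) - 1
    routes = {i: [i] for i in range(1, n + 1)}   # start: one route per customer
    load = {i: demand[i] for i in range(1, n + 1)}
    where = {i: i for i in range(1, n + 1)}      # customer -> route key
    # Savings s(i,j) = d(0,i) + d(0,j) - d(i,j), sorted best-first.
    savings = sorted(((dist[0][i] + dist[0][j] - dist[i][j], i, j)
                      for i in range(1, n + 1) for j in range(i + 1, n + 1)),
                     reverse=True)
    for s, i, j in savings:
        if s <= 0:
            break
        ri, rj = where[i], where[j]
        if ri == rj or load[ri] + load[rj] > capacity:
            continue
        a, b = routes[ri], routes[rj]
        # Merge only when i and j sit at route endpoints.
        if a[-1] == i and b[0] == j:
            merged = a + b
        elif b[-1] == j and a[0] == i:
            merged = b + a
        elif a[0] == i and b[0] == j:
            merged = a[::-1] + b
        elif a[-1] == i and b[-1] == j:
            merged = a + b[::-1]
        else:
            continue
        routes[ri] = merged
        load[ri] += load[rj]
        for c in b:
            where[c] = ri
        del routes[rj], load[rj]
    return list(routes.values())

# Toy instance: 3 customers; customers 1 and 2 are close, so they merge.
dist = [[0, 4, 4, 6],
        [4, 0, 2, 5],
        [4, 2, 0, 5],
        [6, 5, 5, 0]]
routes = clarke_wright(dist, demand=[0, 3, 3, 5], capacity=8)
```

The paper's variant replaces the greedy best-first pick with a biased random choice among the top savings, simulated many times to populate the solution database.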
Download

Paper Nr: 511
Title:

A LOGIC PROGRAMMING FRAMEWORK FOR LEARNING BY IMITATION

Authors:

Nicola Di Mauro, Teresa M. Basile, Grazia Bombini, Stefano Ferilli and Floriana Esposito

Abstract: Humans use imitation as a mechanism for acquiring knowledge, i.e. they use instructions and/or demonstrations provided by other humans. In this paper we propose a logic programming framework for learning from imitation in order to make an agent able to learn from relational demonstrations. In particular, demonstrations are received in an incremental way and used as training examples while the agent interacts in a stochastic environment. This logical framework makes it possible to represent domain-specific knowledge as well as to compactly and declaratively represent complex relational processes. The framework has been implemented and validated with experiments in simulated agent domains.
Download

Paper Nr: 523
Title:

I.M.P.A.K.T.: an innovative, semantic-based skill management system exploiting standard SQL

Authors:

Eufemia Tinelli, Antonio Cascone, Michele Ruta, Tommaso Di Noia, Eugenio Di Sciascio and Francesco M. Donini

Abstract: The paper presents I.M.P.A.K.T. (Information Management and Processing with the Aid of Knowledge-based Technologies), a semantic-enabled platform for skills and talent management. While fully exploiting recent advances in semantic technologies, the proposed system relies only on standard SQL queries. Distinguishing features include: the possibility to express both strict requirements and preferences in the requested profile, a logic-based ranking of retrieved candidates, and the explanation of ranking results.
Download

Paper Nr: 524
Title:

TOWARDS A UNIFIED STRATEGY FOR THE PRE-PROCESSING STEP IN DATA MINING

Authors:

Camelia Lemnaru and Rodica Potolea

Abstract: Data-related issues represent the main obstacle in obtaining a high quality data mining process. Existing strategies for preprocessing the available data usually focus on a single aspect, such as incompleteness, or dimensionality, or filtering out “harmful” attributes, etc. In this paper we propose a unified methodology for data preprocessing, which considers several aspects at the same time. The novelty of the approach consists in enhancing the data imputation step with information from the feature selection step, and performing both operations jointly, as two phases in the same activity. The methodology performs data imputation only on the attributes which are optimal for the class (from the feature selection point of view). Imputation is performed using machine learning methods. When imputing values for a given attribute, the optimal subset (of features) for that attribute is considered. The methodology is not restricted to the use of a particular technique, but can be applied using any existing data imputation and feature selection methods.
Download

Paper Nr: 541
Title:

Semantic Argumentation in Dynamic Environments

Authors:

Jörn Sprado and Björn Gottfried

Abstract: Decision Support Systems play a crucial role when controversial points of views are to be considered in order to make decisions. In this paper we outline a framework for argumentation and decision support. This framework defines arguments which refer to conceptual descriptions of the given state of affairs. Based on their meaning and based on preferences that adopt specific viewpoints, it is possible to determine consistent positions depending on these viewpoints. We investigate our approach by examining soccer games, since many observed spatiotemporal behaviours in soccer can be interpreted differently. Hence, the soccer domain is particularly suitable for investigating spatiotemporal decision support systems.
Download

Paper Nr: 555
Title:

Hybrid Optimization Technique for Artificial Neural Networks Design

Authors:

Cleber Zanchettin and Teresa B. Ludermir

Abstract: In this paper a global and local optimization method is presented. The method is based on the integration of the Simulated Annealing, Tabu Search, Genetic Algorithm and Backpropagation heuristics. The performance of the method is investigated in the optimization of Multi-layer Perceptron artificial neural network architectures and weights. The heuristics perform the search in a constructive way, based on the pruning of irrelevant connections among the network nodes. Experiments demonstrate that the method can also be used for relevant feature selection. Experiments are performed with four classification datasets and one prediction dataset.
Download

Paper Nr: 576
Title:

Estimating Greenhouse Gas Emissions Using Computational Intelligence

Authors:

Pedro G. Coelho, Joaquim Pinto Rodrigues, Luiz Biondi Neto and João B. Soares de Mello

Abstract: This paper proposes a Neuro-Fuzzy Intelligent System – ANFIS (Adaptive Network based Fuzzy Inference System) for the annual forecast of greenhouse gas emissions (GHE) into the atmosphere. The purpose of this work is to apply a Neuro-Fuzzy System for annual GHE forecasting based on existing emissions data covering the last 37 years in Brazil. Such emissions concern tCO2 (tons of carbon dioxide) resulting from fossil fuel consumption for energy purposes, as well as those related to changes in land use, obtained from deforestation indexes. Economic and population growth indexes have also been considered. The system modeling took into account the definition of the input parameters for the forecast of the GHE measured in terms of tons of CO2. Three input variables have been used to estimate the total tCO2 emissions one year ahead. The ANFIS Neuro-Fuzzy Intelligent System is a hybrid system that adds learning capability to a Fuzzy inference system in order to model non-linear and complex processes in a vague information environment. The results indicate that the Neuro-Fuzzy System produces consistent estimates, validated by actual test data.
Download

Paper Nr: 629
Title:

INTEGRATING AGENTS WITH CONNECTIONIST SYSTEMS TO EXTRACT NEGOTIATION ROUTINES

Authors:

Marisa Masvoula, Panagiotis Kanellis and Drakoulis Martakos

Abstract: Routinization is a technique of knowledge exploitation based on the repetition of acts. When applied to negotiations, it results in the substitution of parts or even whole processes, relieving negotiators of significant deliberation and decision-making effort. Although it has an important impact on negotiators, the risk of establishing ineffective routines is evident. In our paper we discuss weaknesses and limitations and propose a generic framework to address them. We consider routines as evolving processes and take two orientations. The first concerns a communicative dimension that allows for external evaluation of the applied routines, and the second concerns reinforcement of the system core with an evolving structure that adjusts to routine changes and flexibly incorporates new knowledge.
Download

Paper Nr: 22
Title:

A NEW CASE-BASED APPROXIMATE REASONING BASED ON SPMF IN LINGUISTIC APPROXIMATION

Authors:

Dae-Young Choi and I. K. RA

Abstract: A new case-based approximate reasoning (CBAR) approach based on standardized parametric membership functions (SPMF) in linguistic approximation is proposed. Linguistic case indexing and retrieval based on SPMF are suggested. The approach provides an efficient mechanism for linguistic approximation within linear time complexity, and can thus be used to improve the speed of linguistic approximation, processing relatively fast compared to previous linguistic approximation methods. From an engineering viewpoint, this may be a valuable advantage.
Download

Paper Nr: 37
Title:

SOCIAL ROBOTS, MORAL EMOTIONS

Authors:

Ana R. Delgado

Abstract: The affective revolution in Psychology has produced enough knowledge to implement abilities of emotional recognition and expression in robots. However, the emotional prototypes are still very basic, almost caricaturized ones. If the goal is constructing robots that respond flexibly, in order to fulfill market demands from different countries while respecting the moral values implicit in the social behavior of their inhabitants, then these robots will have to be programmed according to detailed descriptions of the emotional experiences that are considered relevant in the interaction context in which the robot is going to be put to work (e.g., assisting people with cognitive or motor disabilities). The advantages of this approach are illustrated with an empirical study on contempt, the seventh basic emotion in Ekman’s theory, and one of the “rediscovered” moral emotions in Haidt’s New Synthesis. A phenomenological analysis of the experience of contempt in 48 Spanish subjects shows the structure and some variations – prejudiced, self-serving, and altruistic – of this emotion. Quantitative information was later obtained with the help of blind coders. Some spontaneous facial expressions that sometimes accompany self-reports are also shown. Finally, some future directions in the Robotics-Psychology intersection are presented (e.g., gender differences in social behavior).
Download

Paper Nr: 46
Title:

METHODS AND TOOLS FOR MODELLING REASONING IN DIAGNOSTIC SYSTEMS

Authors:

Alexander Eremeev and Vadim Vagin

Abstract: Methods of case-based reasoning for solving problems of real-time diagnostics and forecasting in intelligent decision support systems (IDSS) are considered. Special attention is drawn to a case library structure for real-time IDSS and an application of this reasoning type for the diagnostics of complex object states. The problem of finding the best current measurement points in model-based device diagnostics using Assumption-based Truth Maintenance Systems (ATMS) is examined. New heuristic approaches for choosing current measurement points on the basis of supporting and inconsistent environments are presented. This work was supported by the Russian Foundation for Basic Research (projects No 08-01-00437 and No 08-07-00212).
Download

Paper Nr: 76
Title:

STRATEGIES FOR ROUTE PLANNING ON CATASTROPHE ENVIRONMENTS

Authors:

Pedro Abreu and Pedro Mendes

Abstract: The concept of multi-agent systems (MAS) appeared when computer science researchers had the need to solve problems involving the simulation of real environments with several participants (agents). Solving these problems requires a coordination process between agents and, in some cases, negotiation. Such is the case of a catastrophe scenario that requires intervention to minimize the consequences, for instance a fire. In this particular case the agents (firemen) must coordinate well in order to reach their fire-fighting positions as fast as possible. The main goal of this project is to create an optimal strategy to calculate the best path to the fire-fighting position. Tests were conducted on an existing simulator platform, Pyrosim. Three factors play an important role: wind (intensity and direction), ground topology and vegetation variety. In the end the results were quite satisfactory, mainly with regard to the agents' main objective. The A* algorithm proved to be feasible for this particular problem, and the coordination process between agents was implemented successfully. In the future this project may have its agents ported to the BDI model.
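The route computation the abstract attributes to A* can be illustrated with a plain grid search. This is a generic textbook A* with a Manhattan heuristic; the Pyrosim integration and the wind, topology and vegetation cost factors are beyond this sketch.

```python
import heapq

def a_star(grid, start, goal):
    """A* shortest path on a 2D grid; grid[r][c] == 1 marks an obstacle.
    Returns the list of cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_heap = [(h(start), 0, start)]  # (f = g + h, g, cell)
    g = {start: 0}
    parent = {}
    while open_heap:
        _, cost, node = heapq.heappop(open_heap)
        if node == goal:  # reconstruct the path by walking parents back
            path = [node]
            while node in parent:
                node = parent[node]
                path.append(node)
            return path[::-1]
        if cost > g.get(node, float("inf")):
            continue  # stale heap entry, a cheaper route was found later
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                new_cost = cost + 1
                if new_cost < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = new_cost
                    parent[(nr, nc)] = node
                    heapq.heappush(open_heap,
                                   (new_cost + h((nr, nc)), new_cost, (nr, nc)))
    return None
```

In a fire scenario the unit step cost would be replaced by a terrain- and wind-dependent cost function, which A* accommodates as long as the heuristic stays admissible.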
Download

Paper Nr: 78
Title:

AGENT-BASED MODELING AND SIMULATION OF RESOURCE ALLOCATION IN ENGINEERING CHANGE MANAGEMENT

Authors:

Young Moon and Bochao Wang

Abstract: An engineering change (EC) refers to a modification of products and components, including purchased parts or even supplies, after the product design is finished and released to the market. While any company involved in product development has to deal with engineering changes, the area of engineering change management hasn't received much attention from the research community. This is partly because of its complexity and the lack of appropriate research tools. In this paper, we present preliminary research results on modeling the engineering change management (ECM) process using an agent-based modeling and simulation technique. The aim of the research reported in this paper is to study optimal strategies of resource allocation for a company when it is dealing with two kinds of ECs: "necessary ECs" and "initialized ECs." We discuss results from these simulation models to illustrate some insights into ECM, and present several research directions derived from these results.
Download

Paper Nr: 82
Title:

Evaluating Generalized Association Rules Combining Objective and Subjective Measures and Visualization

Authors:

Magaly L. Fujimoto, Veronica Oliveira de Carvalho and Solange Rezende

Abstract: From the user's point of view, many problems arise during the post-processing of association rules, since a large number of patterns can be obtained, which complicates the comprehension and identification of interesting knowledge. Therefore, this paper proposes an approach to improve knowledge comprehensibility and to facilitate the identification of interesting generalized association rules during evaluation. This aid is realized by combining objective and subjective measures with information visualization techniques, implemented in a system called RulEE-GARVis.
Download

Paper Nr: 100
Title:

A TOOL FOR MEASURING INDIVIDUAL INFORMATION COMPETENCY ON AN ENTERPRISE INFORMATION SYSTEM

Authors:

Chui Y. Yoon, In S. Lee and Byung C. Shin

Abstract: This study presents a tool that can efficiently measure individual information competency in executing given tasks on an enterprise information system. The measurement items are extracted from the major components of a general competency. Through factor analysis and reliability analysis, a 14-item tool is proposed to comprehensively measure individual information competency. The tool's application and utilization are confirmed by applying it to measuring the information competency of individuals in an enterprise.
Download

Paper Nr: 114
Title:

DESIGN A REVERSE LOGISTICS INFORMATION SYSTEM WITH RFID

Authors:

Ka M. Lee and Wilsern Tan

Abstract: Recently, reverse logistics management has become an integral part of the business cycle. This is mainly due to the need to be environmentally friendly and the urgent need to reuse scarce resources. Traditionally, reverse logistics activities have been a cost center for most businesses, without generating extra revenue. However, due to the recent increase in commodity and energy prices, reverse logistics management could eventually become a cost-savings method. In this research, we propose using Radio Frequency Identification (RFID) technology to better optimize and streamline reverse logistics operations. Using RFID, we try to eliminate some of the unknowns in the reverse logistics flow that make reverse logistics models complicated. Furthermore, a genetic algorithm is used to optimize the placement of the initial collection center so as to cover the largest possible population, in order to reduce logistics costs and provide convenience to end users. This study is based largely on a literature review of past works, and experiments are conducted on RFID hardware to test its suitability. The significance of this paper is to adopt ubiquitous RFID technology and Genetic Algorithms for reverse logistics so as to obtain an economic reverse logistics network.
Download

Paper Nr: 117
Title:

SIHC: A Stable Incremental Hierarchical Clustering Algorithm

Authors:

Ibai Gurrutxaga, Olatz Arbelaitz, José Ignacio Martín, Javier Muguerza, Jesús María Pérez and Iñigo Perona

Abstract: SAHN is a widely used agglomerative hierarchical clustering method. Nevertheless, it is not an incremental algorithm and therefore it is not suitable for many real application areas where not all data is available at the beginning of the process. Some authors proposed incremental variants of SAHN. Their goal was to obtain the same results in incremental environments. This approach is not practical, since it frequently must rebuild the hierarchy, or a big part of it, and often leads to completely different structures. The human user of such an application cannot assimilate such drastic changes and loses confidence in the algorithm. We propose a novel algorithm, called SIHC, that updates SAHN hierarchies with minor changes to the previous structures. This property makes it suitable for real environments. Results on 11 synthetic and 6 real datasets show that SIHC builds high-quality clustering hierarchies. This quality level is similar to and sometimes better than SAHN's. Moreover, the computational complexity of SIHC is lower than SAHN's.
Download

Paper Nr: 118
Title:

MODELING AND SIMULATION FOR DECISION SUPPORT IN SOFTWARE PROJECT WORKFORCE MANAGEMENT

Authors:

Bernardo G. Ambrósio, José Luis Braga, Moisés Resende-Filho and Jugurta Lisboa-Filho

Abstract: This paper presents and discusses the construction of a system dynamics model, focusing on key managerial decision variables related to workforce management during requirements extraction in software development projects. Our model establishes the relationships among those variables, making it possible to analyze and to better understand their mutual influences. Simulations conducted with the model made it possible to verify and foresee the consequences of risk factors (e.g. people turnover and high requirements volatility) on the quality and cost of work. Three scenarios (optimistic, baseline and pessimistic) are set using data from previous studies and data collected in a software development company.
Download

Paper Nr: 166
Title:

STATISTICAL DECISIONS IN PRESENCE OF IMPRECISELY REPORTED ATTRIBUTE DATA

Authors:

Olgierd Hryniewicz

Abstract: The paper presents a new methodology for making statistical decisions when data is reported in an imprecise way. Such situations happen very frequently when quality features are evaluated by humans. We have demonstrated that traditional models based either on the multinomial distribution or on predefined linguistic variables may be insufficient for making correct decisions. Our model, which uses the concept of the possibility distribution, allows separating stochastic randomness from fuzzy imprecision, and provides a decision-maker with more information about the phenomenon of interest.
Download

Paper Nr: 171
Title:

An Agent-based Architecture for Cancer Staging

Authors:

José Machado, António Abelha, Miguel Miranda, José Neves and Manuel F. Santos

Abstract: Cancer staging is the process by which physicians evaluate the spread of cancer. This is important since, in a good cancer staging system, the stage of disease helps to determine prognosis and assists in selecting therapies. A combination of physical examination, blood tests, and medical imaging is used to determine the clinical stage; if tissue is obtained via biopsy or surgery, examination of the tissue under a microscope can provide pathologic staging. On the other hand, good patient education may help to reduce health service costs and improve the quality of life of people with chronic or terminal conditions. In this paper we describe a theoretically based framework for the provision of computer-based information on cancer patients, and the computational techniques used to implement it. Our goal is to develop an interactive agent-based computational system which may provide physicians with the right information, on time, adapted to the situation and process-based aspects of the patients' illness and treatment.
Download

Paper Nr: 172
Title:

N-grams-based File Signatures for Malware Detection

Authors:

Igor Santos, Jaime Devesa, Pablo García Bringas and Yoseba K. Penya

Abstract: Malware is any malicious code that has the potential to harm any computer or network. The amount of malware is increasing faster every year and poses a serious security threat. Thus, malware detection is a critical topic in computer security. Currently, signature-based detection is the most extended method for detecting malware. Although this method is still used in the most popular commercial antivirus software, it can only achieve detection once the virus has already caused damage and been registered. Therefore, it fails to detect new malware. Applying a methodology proven successful in similar problem domains, we propose the use of n-grams (every substring of a larger string, of a fixed length n) as file signatures in order to detect unknown malware whilst keeping a low false positive ratio. We show that n-gram signatures provide an effective way to detect unknown malware.
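As an illustration of the core idea, a byte-level n-gram signature can be built by counting every substring of fixed length n. The function names and the frequency-based matching rule below are illustrative choices, not the authors' detector.

```python
from collections import Counter

def ngram_signature(data: bytes, n: int = 3, top_k: int = 8):
    """Use the top_k most frequent byte n-grams of a file as a crude signature."""
    counts = Counter(data[i:i + n] for i in range(len(data) - n + 1))
    return [gram for gram, _ in counts.most_common(top_k)]

def matches(signature, data: bytes, threshold: float = 0.5):
    """Flag a file whose content contains enough of the signature's n-grams."""
    hits = sum(1 for gram in signature if gram in data)
    return hits / len(signature) >= threshold
```

A real detector would compare n-gram profiles statistically over a labelled corpus rather than by substring presence, but the signature extraction step is the same.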
Download

Paper Nr: 281
Title:

Intelligent Systems for Retail Banking Optimization

Authors:

Darius Dilijonas, Virgilijus Sakalauskas, Dalia Kriksciuniene and Rimvydas Simutis

Abstract: The article analyzes the problems of optimization and management of ATM (Automated Teller Machine) network systems, related to minimizing operating expenses such as cash replenishment, cost of funds, logistics and back-office processes. The suggested solution is based on merging two different artificial intelligence methodologies – neural networks and multi-agent technologies. The practical implementation of this approach made it possible to achieve better effectiveness of the researched ATM network. During the first stage, the system performs analysis based on artificial neural networks (ANN). The second stage aims to produce alternatives for ATM cash management decisions. The performed simulation and experimental tests of the method in distributed ATM networks reveal the good forecasting capacity of ANN.
Download

Paper Nr: 343
Title:

BUSINESS ANALYSIS IN THE OLAP CONTEXT

Authors:

Emiel Caron

Abstract: Today's multi-dimensional business or OnLine Analytical Processing (OLAP) databases have little support for sensitivity analysis. Sensitivity analysis is the analysis of how the variation in the output of a mathematical model can be apportioned, qualitatively or quantitatively, to different sources of variation in the input of the model. This functionality would give the OLAP analyst the possibility to play with "What if...?" questions in an OLAP cube. For example, with questions of the form: "What happens to an aggregated value in the dimension hierarchy if I change the value of this data cell by so much?" These types of questions are, for example, important for managers who want to analyse the effect of changes in sales, cost, etc., on a product's profitability in an OLAP sales cube. In this paper, we describe an extension to the OLAP framework for business analysis in the form of sensitivity analysis.
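The cell-level "What if" question can be sketched on a toy cube. This is an illustrative Python sketch under assumed data layout and names; a brute-force re-aggregation stands in for whatever incremental propagation the paper proposes.

```python
def aggregate(cube, keyfn):
    """Roll the base cells up into aggregate buckets chosen by keyfn."""
    totals = {}
    for coord, value in cube.items():
        key = keyfn(coord)
        totals[key] = totals.get(key, 0) + value
    return totals

def what_if(cube, cell, delta, keyfn):
    """Sensitivity probe: perturb one base cell by delta and report which
    aggregates change, and by how much."""
    before = aggregate(cube, keyfn)
    perturbed = dict(cube)
    perturbed[cell] = perturbed.get(cell, 0) + delta
    after = aggregate(perturbed, keyfn)
    return {k: after[k] - before.get(k, 0)
            for k in after if after[k] != before.get(k, 0)}
```

For additive measures such as sales, the perturbation simply propagates as delta to every ancestor aggregate of the changed cell, which is exactly what the probe reports.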
Download

Paper Nr: 347
Title:

W-NEG: A WORKFLOW NEGOTIATION SYSTEM

Authors:

Melise Paula, Danilo Lima, Luis Camargo, Sérgio A. Rodrigues and Jano Moreira De Souza

Abstract: It has been claimed that there are different methods for resolving conflict; however, the main one is to resolve conflicts through negotiation. This paper addresses one of the Negotiation Support Systems developed, namely NK-Sys, and a workflow approach titled W-Neg. Negotiators often attempt to resolve their conflict through the use of intrinsic activities and their own skills. In W-Neg, we suggest a set of workflow models to tackle issues that may be conflicting at the negotiation table. Like any decision-making process, negotiations follow some well-known steps. Therefore, managing the activities arising from these steps can be considered an alternative for improving negotiators' preparation. In this proposal, workflow technology is aligned with this alternative, since the main goal of workflow systems is to provide better business process management.
Download

Paper Nr: 365
Title:

Forecasting Total Sales of High-tech Products - Daily Diffusion Models and a Genetic Algorithm

Authors:

Masaru Tezuka and Satoshi Munakata

Abstract: In recent years, the release interval of high-tech consumer products such as mobile phones and portable media players has been getting shorter. New models of mobile phones are released three times a year in Japan. Manufacturers have to avoid dead stock because the value of the previous model drops sharply after the launch of the new model. In this paper, we propose a method to forecast the total sales of such products. The method utilizes diffusion models for forecasting. Only a short-term sales record is available, since sales are forecasted one month after release. In order to make effective use of the available data, we use a day as the time unit of forecasting. To apply the diffusion models to daily demand forecasting, we derive the difference-equation representation of the models and propose discrete-time diffusion models. Day-of-week-dependent parameters are introduced to the models. The proposed method is tested on data provided by a high-tech consumer products manufacturer. The results show that the proposed method has excellent forecasting ability.
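A common starting point for such difference-equation representations is the discrete-time Bass diffusion model, sketched below. This is the standard textbook model, not the paper's day-of-week-dependent variant.

```python
def bass_forecast(m, p, q, periods):
    """Discrete-time Bass diffusion: per-period sales
    n(t) = (p + q * N(t-1) / m) * (m - N(t-1)),
    where N is cumulative adoption, m the market potential,
    p the innovation coefficient and q the imitation coefficient."""
    cumulative, sales = 0.0, []
    for _ in range(periods):
        n = (p + q * cumulative / m) * (m - cumulative)
        sales.append(n)
        cumulative += n
    return sales
```

With day-of-week effects, p and q would be modulated per weekday and fitted to the first month of daily sales, after which the remaining curve gives the total-sales forecast.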
Download

Paper Nr: 376
Title:

Implementation of Intention-Driven Search Processes by SPARQL Queries

Authors:

Catherine Faron, Olivier Corby and Isabelle Mirbel

Abstract: Capitalisation of search processes is becoming a real challenge in many domains. By search process, we mean a sequence of queries that enables a community member to find comprehensive and accurate information by composing results from different information sources. In this paper we propose an intentional model based on semantic Web technologies and models, aiming both at the capitalisation, reuse and sharing of queries within a community and at the organization of queries into formalized search processes. It is intended to support knowledge transfer on information searches between expert and novice members inside a community. Intention-driven search processes are represented by RDF datasets and operationalized by rules represented by SPARQL queries, applied in backward chaining using the CORESE semantic engine.
Download

Paper Nr: 405
Title:

PERFORMING THE RETRIEVE STEP IN A CASE-BASED REASONING SYSTEM FOR DECISION MAKING IN INTRUSION SCENARIOS

Authors:

Jesus Conesa and Angela Ribeiro

Abstract: The present paper describes the implementation of a case-based reasoning system involved in a crisis management project for infrastructural building security. The goal is to achieve an expert system capable of making decisions in real time to quickly neutralize one or more intruders that threaten strategic installations. This article presents the development of the usual CBR stages, such as case representation, the retrieval phase and the validation process, mainly focusing on the retrieval phase, approaching it through two strategies: similarity functions and decision tree structures. The case design, including the discretization values adopted, is also discussed. Finally, results on retrieval-phase performance are shown and analyzed according to well-known cross-validation methods, such as k-fold or leave-one-out.
Download

Paper Nr: 406
Title:

MasDISPO_xt: Process optimisation by use of integrated, agentbased Manufacturing Execution Systems inside the Supply Chain of Steel Production

Authors:

Sven Jacobi, David Raber and Christian Hahn

Abstract: The production of steel normally constitutes the inception of many supply chains in different areas of industry. Steel manufacturing companies are strongly affected by bullwhip effects and other unpredictable influences along their production chains. Improving their operational efficiency is required to keep a competitive position on the market. Hence, flexible planning and scheduling systems are needed to support these processes, which are based on considerable amounts of data that can hardly be processed manually anymore. MasDISPO_xt is an agent-based generic online planning and scheduling system for the observation, at MES level, of the complete supply chain of Saarstahl AG, a globally respected steel manufacturer. This paper concentrates on the horizontal and vertical integration of the influences of rough planning on detailed planning and vice versa. Based on model-driven engineering, business processes are modeled at CIM level, and a service-oriented architecture is presented for the interoperability of all components, with legacy systems and others wrapped behind services. Finally, an agent-based detailed planning and scheduling approach ensuring interoperability in the horizontal and vertical directions is presented.
Download

Paper Nr: 444
Title:

PATTERN RECOGNITION FOR DOWNHOLE DYNAMOMETER CARD IN OIL ROD PUMP SYSTEM USING ARTIFICIAL NEURAL NETWORKS

Authors:

José M. Felippe de Souza, Manuel de A. Barreto Filho, Marco A. D. Bezerra and Leizer Schnitman

Abstract: This paper presents the development of an Artificial Neural Network system for Dynamometer Card pattern recognition in oil well rod pump systems. It covers the establishment of pattern classes and a set of standards for training and validation, the study of descriptors which allow the design and implementation of a feature extractor, training, analysis and, finally, validation and performance tests with a real database.
Download

Paper Nr: 453
Title:

KNOWLEDGE REPRESENTATION AND REASONING FOR AIR-NAILER COLOR CONFIGURATION BASED ON HSV SPACE

Authors:

Jing Fan

Abstract: Computer-aided color design is one of the hotspots in the industrial design area. Based on current color configuration research, this paper focuses on knowledge representation and reasoning for color design in an air-nailer color configuration system. Firstly, color representation, including color type and color value, is discussed. Secondly, color reasoning in HSV space among the main color, first assistant color and second assistant color is investigated. Finally, the application is presented, which has been tested to be efficient and accurate in air-nailer color design.
Download

Paper Nr: 459
Title:

An Application of the Student Relationship Management Concept

Authors:

Maria Piedade

Abstract: It is largely accepted that a way to promote students' success is by implementing processes that allow closely monitoring the students, evaluating their success and getting closer to their day-by-day activities. However, the implementation of these processes does not take place in many Higher Education Institutions due to the lack of appropriate institutional practices and of an adequate technological infrastructure able to support these practices. In order to overcome these conceptual and technological limitations, this paper presents the Student Relationship Management System (SRM System). The SRM System supports the SRM concept and the SRM practice, also presented here, and is implemented using the technological infrastructure that supports Business Intelligence (BI) systems. The SRM system was used in an application case (in a real context) to obtain knowledge about the students and their academic behaviour. Such information is fundamental to support the decision-making associated with the teaching-learning process. All the obtained results are also presented and analysed in this paper.
Download

Paper Nr: 510
Title:

INTEGRATING CASE-BASED REASONING WITH EVIDENCE-BASED PRACTICE FOR DECISION SUPPORT

Authors:

Expedito Lopes and Ulrich Schiel

Abstract: Evidence-Based Practice (EBP), an emergent paradigm, uses the premise that decision making can be based on scientific proofs available in reliable databases, usually found on sites over the Internet. However, the procedures of EBP do not provide mechanisms for retaining strategic information and knowledge from individual solutions, which could facilitate the learning of different end-users in the future. On the other hand, Case-Based Reasoning (CBR) uses the history of similar cases to support decision making. However, the retrieval of cases may not be sufficient to support the solution of problems. Since both research evidence and similar cases are important for decision-making, this paper proposes the integration of the two paradigms for problem-solving support, regarding complex problems. An example from the justice area is presented.
Download

Paper Nr: 520
Title:

MODELLING COLLABORATIVE FORECASTING IN DECENTRALIZED SUPPLY CHAIN NETWORKS WITH A MULTIAGENT SYSTEM

Authors:

Jorge Esteban Hernández, Josefa Mula and Raul Poler

Abstract: Information technology has become a strong modelling approach to support the complexities involved in a process. One example of this technology is the multiagent system which, from a decentralized supply chain configuration perspective, supports the information sharing processes that any of its nodes can carry out to support its own processes in a collaborative manner, for example, the forecasting process. Therefore, this paper presents a novel collaborative forecasting model in supply chain networks based on a multiagent system modelling approach. The hypothesis presented herein is that by collaborating in the information exchange process, fewer errors are made in the forecasting process.
Download

Paper Nr: 552
Title:

CRONUS: A TASK MANAGEMENT SYSTEM TO SUPPORT SOFTWARE DEVELOPMENT

Authors:

Yura Ferreira, Sérgio A. Rodrigues, Divany Lima, Márcio Duran, José Blaschek and Jano Moreira De Souza

Abstract: Currently, information technology professionals have become increasingly interested in factors that may have an impact on project management effectiveness and the success of projects. This article introduces a task management tool which complements traditional tools to support the planning, control and execution of software development projects.
Download

Paper Nr: 604
Title:

EVALUATING RISKS IN SOFTWARE NEGOTIATIONS THROUGH FUZZY COGNITIVE MAPS

Authors:

Sérgio A. Rodrigues, Efi Papatheocharous, Andreas Andreou and Jano Moreira De Souza

Abstract: Risks are inevitably and permanently present in software negotiations, and they can directly influence the success or failure of negotiations. Risks should be avoided when they represent a threat and encouraged when they denote an opportunity. This work examines the influence of some negotiation elements in the area of risk and cost estimation, both factors that directly influence software development negotiation. In this work, risk quantification is proposed to translate risk impact into measurable values that may be taken into consideration during negotiations. The proposed model involves an assessment tool based on basic negotiation elements – namely relationship, interests, cost and time – quantifying their mutual influences, and makes use of Fuzzy Cognitive Maps (FCMs) for developing the associations around basic risk elements on one hand and attaining an innovative risk quantification model for improved software negotiations on the other. Indicative scenarios are presented to demonstrate the efficacy of the proposed approach.
Download

Area 3 - Information Systems Analysis and Specification

Full Papers
Paper Nr: 49
Title:

A Service Integration Platform for the Labor Market

Authors:

Mariagrazia Fugini

Abstract: Employment Services are an important topic in the agenda of local governments and of the EU due to their social implications, such as sustainability, workforce mobility, workers’ re-qualification paths, and training for fresh graduates and students. The SEEMP system presented in this paper addresses the issue in different ways: starting bilateral communication with similar offices across borders, building a federation of the local employment services, and merging isolated trials. The SEEMP approach relies on a distributed semantic service-oriented infrastructure able to federate local projects, creating new geographically aggregated Employment Services by leveraging existing local ones.

Paper Nr: 217
Title:

Developing Business Process Monitoring Probes to Enhance Organization Control

Authors:

Fabio Mulazzani, Barbara Russo and Giancarlo Succi

Abstract: This work presents business process monitoring agents, called Probes, that we developed. Probes make it possible to control process performance, aligning it with the company’s strategic goals. Probes offer real-time monitoring of the achievement of strategic goals, while also increasing the understanding of the company's activities. In this paper Probes are applied to a practical case of a bus company. Probes were developed and deployed into the company's ERP system and brought about a significant change in the company's strategy, with a corresponding enhancement of the performance of a critical business process.

Paper Nr: 277
Title:

Text Generation for Requirements Validation

Authors:

Petr Kroha and Manuela Rink

Abstract: In this paper, we describe a text generation method used in our novel approach to requirements validation in software engineering, which paraphrases a requirements model expressed in UML in natural language. The basic idea is that after an analyst has specified a UML model based on a requirements description, a text describing this model may be generated automatically. Thus, users and domain experts are enabled to validate the UML model, which would generally not be possible otherwise, as most of them do not understand (semi-)formal languages such as UML. A corresponding text generator has been implemented and examples are presented.
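The paraphrasing idea can be illustrated with a minimal template-based generator that renders UML associations as English sentences. This Python sketch is only a hedged illustration of the principle; the templates and the association tuple shape are assumptions, not the authors' generator:

```python
def paraphrase(associations):
    """Render UML associations (source, verb, multiplicity, target)
    as English sentences a domain expert can validate."""
    qty = {"1": "exactly one", "0..1": "at most one", "*": "any number of"}
    return [f"Each {src} {verb} {qty[mult]} {tgt}(s)."
            for src, verb, mult, tgt in associations]

sentences = paraphrase([("Customer", "places", "*", "Order")])
# e.g. "Each Customer places any number of Order(s)."
```

A domain expert who disagrees with such a sentence has, in effect, found a defect in the UML model without ever reading UML.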

Paper Nr: 280
Title:

Automatic Compositional Verification of Business Processes

Authors:

Luis E. Mendoza and Manuel I. Capel-Tuñón

Abstract: Nowadays the Business Process Modelling Notation (BPMN) has become a standard to provide a notation readily understandable by all business process (BP) stakeholders when it comes to carrying out the Business Process Modelling (BPM) activity. On the other hand, the use of Software Engineering methods has proved useful to improve BPM techniques. In this paper, we present a new Formal Compositional Verification Approach (FCVA), based on the Model-Checking verification technique for software, integrated with a formal software design method called MEDISTAM-RT (Spanish acronym for 'Systematic Design Method Based on Model Transformations for Real-Time systems'). Both are used to facilitate the development of the Task Model (TM) associated with a BP design. MEDISTAM-RT uses UML-RT as its graphical modelling notation and the CSP+T formal specification language for temporal annotations. The application of FCVA is aimed at guaranteeing the correctness of the TM with respect to the initial property specification derived from the BP rules. One instance of a BPM enterprise project related to the Customer Relationship Management (CRM) business is discussed in order to show a practical use of our proposal.

Paper Nr: 288
Title:

ACTOR RELATIONSHIP ANALYSIS FOR I* FRAMEWORK

Authors:

Shuichiro Yamamoto, Komon Ibe, June Verner, Karl Cox and Steven Bleistein

Abstract: The i* framework is a goal-oriented approach that addresses organizational IT requirements engineering concerns, and is considered an effective technique for analyzing dependencies between actors. However, the effectiveness and limitations of i* are unclear. When we modelled an industrial case with a large number of actors using i*, we discovered difficulties in (1) validating the completeness of the model, and (2) managing change. To solve these problems, we propose an actor relationship matrix analysis method (ARM) as a precursor to i* modelling, which we found aided in addressing the above two issues. This paper defines our method and demonstrates it with a case study. ARM enables requirements engineers to better ensure completeness of requirements in a repeatable and systematic manner that does not currently exist in the i* framework.

Paper Nr: 320
Title:

TOWARDS SELF-HEALING EXECUTION OF BUSINESS PROCESSES BASED ON RULES

Authors:

Mohamed Boukhebouze, Youssef Amghar, Aïcha-Nabila Benharkat and Zakaria Maamar

Abstract: In this paper we discuss the need to offer a self-healing execution of a business process within the BP-FAMA framework (Business Process Framework for Agility of Modelling and Analysis) presented in (Boukhebouze et al. 2008). This is done by identifying errors in the process specification and reacting to possible performance failures in order to drive the process execution towards a stable situation. To achieve our objective, we propose to model the high-level process using a new declarative language based on business rules, called BbBPDL (Rules based Business Process Description Language). In this language, a business rule has an Event-Condition-Action-Post condition-Post event-Compensation (ECA2PC) format. This allows translating a process into a cause/effect graph that is analyzed to ensure the reliability of the business processes.
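The ECA2PC format can be sketched as a rule whose action is checked against a post-condition and undone by a compensation when the check fails. The Python below is a hedged illustration of that shape only – the field names and the tiny firing function are assumptions, not BbBPDL syntax:

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class ECA2PCRule:
    event: str                             # triggering event
    condition: Callable[[Dict], bool]      # guard over process state
    action: Callable[[Dict], None]         # normal effect
    postcondition: Callable[[Dict], bool]  # must hold after the action
    postevent: str                         # event emitted on success
    compensation: Callable[[Dict], None]   # undo when the check fails

def fire(rule: ECA2PCRule, event: str, state: Dict) -> str:
    """Fire a rule: run the action, check the post-condition,
    then either emit the post-event or compensate."""
    if event != rule.event or not rule.condition(state):
        return "skipped"
    rule.action(state)
    if rule.postcondition(state):
        return rule.postevent
    rule.compensation(state)
    return "compensated"
```

Chaining post-events of one rule to the events of others is what yields the cause/effect graph the abstract mentions.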

Paper Nr: 348
Title:

TOWARDS A FLEXIBLE INTER-ENTERPRISE COLLABORATION, A Supply Chain Perspective

Authors:

Boris Shishkov, Marten van Sinderen and Alexander Verbraeck

Abstract: Since neither uniformity nor pluriformity provides the answer to easing inter-enterprise collaborations, we address (inspired by relevant strengths of service-oriented architectures) the problem of supporting such collaborations from an infrastructure perspective. We propose architectural guidelines for interactively establishing a suitable inter-enterprise collaboration scheme before the exchange of actual content takes place. The proposed guidelines stem from an analysis of some currently popular approaches to achieving inter-enterprise collaboration with ICT means. Taking into account the strong relevance of these issues to the supply chain domain, we place our work in the supply chain perspective. We also illustrate our architectural guidelines with an example from this domain. It is expected that the research contribution reported in this paper will be useful as an additional result concerning (ICT-driven) inter-enterprise collaboration.

Paper Nr: 356
Title:

A MODEL-BASED TOOL FOR CONCEPTUAL MODELING AND DOMAIN ONTOLOGY ENGINEERING IN ONTOUML

Authors:

Alessander Botti Benevides and Giancarlo Guizzardi

Abstract: This paper presents a Model-Based graphical editor for supporting the creation of conceptual models and domain ontologies in a philosophically and cognitively well-founded modeling language named OntoUML. The Editor is designed in a way that, on one hand, it shields the user from the complexity of the ontological principles underlying this language. On the other hand, it reinforces these principles in the produced models by providing a mechanism for automatic formal constraint verification.

Paper Nr: 410
Title:

CONCEPTS-BASED TRACEABILITY Using Experiments to Evaluate Traceability Techniques

Authors:

Rodrigo P. Noll and Marcelo Blois Ribeiro

Abstract: Knowledge engineering brings direct benefits to software development through the cognitive mapping between user expectations and the software solution, checking system consistency and requirements conformance. One of the potential benefits of knowledge representation is the definition of a standard domain terminology to enforce artifact traceability. This paper proposes a concepts-based approach to drive traceability through the integration of knowledge engineering activities into the Unified Process. It also presents an experiment and its replication, evaluating precision and effort for concepts-based traceability and conventional requirements-based traceability techniques.

Paper Nr: 418
Title:

A Service-oriented Framework for Component-based Software Development: An i* Driven Approach

Authors:

Yves Wautelet, Youssef Achbany, Sodany Kiv and Manuel Kolp

Abstract: Optimization is a fundamental concept in our modern mature economy. Software development also follows this trend and, as a consequence, new techniques have appeared over the years. Among them we find service-oriented computing and component-based development. The first provides the structure and flexibility required in large industrial software developments; the second allows the reuse of generically developed code. This paper sits at the border of these paradigms and constitutes an attempt to integrate components into service-oriented modelling. Indeed, when developing large multi-actor application packages, solutions to specific problems should be custom developed, while others can be found in third-party offers. FaMOS-C, the framework proposed in this paper, allows modelling such problems and directly integrates a selection process among different components based on their performance in functional and non-functional aspects. The framework is first described and then evaluated on a case study in supply chain management.

Paper Nr: 419
Title:

A PROCESS FOR DEVELOPING ADAPTABLE AND OPEN SERVICE SYSTEMS: Application in Supply Chain Management

Authors:

Yves Wautelet, Youssef Achbany, Jean-Charles Lange and Manuel Kolp

Abstract: Service-oriented computing is becoming increasingly popular. It allows designing flexible and adaptable software systems that can be easily adopted on demand by software customers. Those benefits are of primary importance in the context of supply chain management; that is why this paper proposes to apply ProDAOSS, a process for developing adaptable and open service systems, to an industrial case study in outbound logistics. ProDAOSS is conceived as a plug-in for I-Tropos – a broader development methodology – so that it covers the whole software development life cycle. At the analysis level, flexible business processes were generically modelled with different complementary views. First of all, an aggregate services view of the whole applicative package is offered; then services are split up using an agent ontology – through the i* framework – to represent them as an organization of agents. A dynamic view completes the documentation by offering the service realization paths. At the design stage, the service center architecture proposes a reference architectural pattern for realizing services in an adaptable and open manner.

Paper Nr: 445
Title:

Business Process-awareness in the Maintenance Activities

Authors:

Maria Tortorella and Lerina Aversano

Abstract: In this paper we focus on the usefulness of business process knowledge for clarifying change requirements concerning the supporting software systems. To this aim, the correctness and completeness of change requirement impact analysis have been evaluated with and without business process knowledge. The results of this preliminary empirical study are encouraging and indicate that business information effectively provides significant help to software maintainers.

Paper Nr: 502
Title:

BORM POINTS – INTRODUCTION AND RESULTS OF PRACTICAL TESTING

Authors:

Zdenek Struska and Robert Pergl

Abstract: This paper introduces the BORM-points method. The method is used for complexity estimation for information systems development. In the first part of the paper there is a detailed description of BORM-points and its specifics. In the second part there is a presentation of results of BORM-points application for real projects.

Paper Nr: 507
Title:

A TECHNOLOGY CLASSIFICATION MODEL FOR MOBILE CONTENT & SERVICE DELIVERY PLATFORMS

Authors:

Antonio Ghezzi, Filippo Renga and Raffaello Balocco

Abstract: The growing complexity of mobile “rich media” digital contents and services requires the integration of next-generation middleware platforms within Mobile Network Operators' and Service Providers' infrastructural architecture, to support the overall process of content creation, management and delivery. The purpose of this research is to design a technology classification model for Content & Services Delivery Platforms – CSDPs –, the core of the Mobile Middleware Technology Providers' – MMTPs' – value proposition. A three-step theoretical framework is provided, which identifies a set of significant classification variables to support the platform positioning analysis. Afterwards, adopting the multiple case study research methodology, the model is applied to map the current CSDP offer presented by a sample of 24 companies classified as MMTPs, so as to test the framework's validity and gain valuable insight into the actual “state of the art” of such solutions. The main findings show that existing platforms possess major strengths – e.g. a wide manageable content portfolio, integration between mobile and web channels, and frequent recourse to SOA and Web Service approaches – while some drawbacks – poor support for context-aware and location-based services, verticality and low interoperability of some proprietary products, criticality of content adaptation, etc. – still limit the solutions' effectiveness.

Paper Nr: 545
Title:

Patterns for Modeling and Composing Workflows from Grid Services

Authors:

Yousra H. Bendaly

Abstract: We propose a set of composition patterns based on UML activity diagrams that support the different forms of matching and integrating Grid service operations in a workflow. The workflows are built on an abstract level using the UML activity diagram language, following an MDA composition approach. In addition, we propose a Domain Specific Language (DSL) which extends the UML activity diagram notation, allowing a systematic composition of workflows and containing appropriate data to describe a Grid service. These data are useful for the execution of the resulting workflow.

Paper Nr: 557
Title:

A CASE STUDY OF KNOWLEDGE MANAGEMENT USAGE IN AGILE SOFTWARE PROJECTS.

Authors:

Anderson Ricardo Yanzer Cabral, Marcelo Blois Ribeiro, Mauricio Cristal, Marcos Tadeu Silva, Cristiano Franco and Ana P. Lemke

Abstract: Agile Methodologies promote a group of principles which differ from Traditional Methods. One concrete difference is the manner in which knowledge is managed during a software development process. Most proposals for knowledge management have been conceived for Traditional Methods but have failed in agile projects because they focus on explicit knowledge management. This paper presents a case study with detailed contributions drawn from lessons learned on issues related to knowledge management in a distributed project that makes use of Agile Methodologies.

Paper Nr: 578
Title:

A HIERARCHICAL PRODUCT-PROPERTY MODEL TO SUPPORT PRODUCT CLASSIFICATION AND MANAGE STRUCTURAL AND PLANNING DATA

Authors:

Gabriela Henning, Horacio Leone and Diego Giménez

Abstract: Mass customization is one of the main challenges that managers face since it results in a proliferation of product data within the various organizational areas of an enterprise and across different enterprises. Effective solutions to this problem have resorted to generic bills of materials and to the grouping of product variants into product families, thus improving data management and sharing. However, issues like product family identification and formation, as well as data aggregation have not been dealt with by this type of approach. This contribution addresses these challenges and proposes a hierarchical data model based on the concepts of variant, variant set and family. It allows managing huge amounts of structural and non-structural information in a systematic way, with minimum replication. Besides, it proposes an unambiguous criterion, based on the properties of variants, for identifying families and variant sets. Finally, the approach can explicitly handle aggregated data which is intrinsic to generic concepts like families and variant sets. A case study is analyzed to illustrate the representation capabilities of this approach.

Paper Nr: 598
Title:

Collaborative, Participative and Interactive Enterprise Modeling

Authors:

Joseph Barjis

Abstract: Enterprise modeling is a daunting task to carry out from a single perspective. Adding to this complexity are the conflicting descriptions given by different actors when business processes are documented. Often enterprise modeling takes rounds of iteration and clarification before the models are verified and validated. In order to expedite the modeling process and improve the validity of the models, in this paper we propose an approach called collaborative, participative, and interactive modeling (CPI Modeling). The main objective of the CPI approach is to furnish an extended participation of actors that have valuable insight into the enterprise operations and business processes. Achieving this goal with any modeling method and language can be quite challenging; for CPI Modeling to succeed, the modeling method should adhere to certain qualities. Next to the CPI Modeling approach, this paper discusses an enterprise modeling method that is simple, yet powerful enough to capture intricate enterprise processes and simulate them.

Short Papers
Paper Nr: 9
Title:

A PETRI NET MODEL OF PROCESS PLATFORM-BASED PRODUCTION CONFIGURATION

Authors:

Lianfeng L. Zhang and Brian Rodrigues

Abstract: In the literature process platform-based production configuration (PPbPC) has been proposed to obtain efficiency in product family production. In this paper, we present a holistic view of PPbPC, attempting to facilitate understanding and implementation. This is accomplished through dynamic modelling and graphical representation based on Petri nets (PNs) techniques. To cope with the modelling difficulties, we develop a new formalism of hierarchical colored timed PNs (HCTPNs) by integrating the basic principles of hierarchical PNs, timed PNs and colored PNs. In the formalism, three types of nets together with a system of HCTPNs are defined to address the fundamental issues in PPbPC, including variety handling, process variation accommodation, configuration granularity handling and constraint satisfaction. A family of vibration motors for mobile phones is used to demonstrate PPbPC using the proposed formalism.
Download

Paper Nr: 77
Title:

A SIMULATION MODEL FOR MANAGING ENGINEERING CHANGES ALONG WITH NEW PRODUCT DEVELOPMENT

Authors:

Young Moon and Weilin Li

Abstract: This paper presents a process model for managing Engineering Changes (ECs) while other New Product Development (NPD) activities are being carried out in a company. The discrete-event simulation model incorporates Engineering Change Management (ECM) into an NPD environment by allowing ECs to share resources with regular NPD activities. Six model variables - (i) overlapping, (ii) NPD departmental interaction, (iii) ECM effort, (iv) resource constraints, (v) arrival rate, and (vi) priority - are explored to identify how they affect lead time and productivity of both NPD and ECM. Decision-making suggestions for minimum EC impact are then drawn from an overall enterprise system level perspective based on the simulation results.
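The resource-sharing effect the model studies can be mimicked with a toy single-resource discrete-event loop in Python. This is a hedged sketch of the general mechanism only (one shared engineer, optional EC priority), not the authors' simulation model:

```python
import heapq

def simulate(tasks, ec_priority=True):
    """tasks: list of (arrival, duration, kind), kind 'NPD' or 'EC'.
    Returns {task index: completion time} for one shared resource,
    optionally serving waiting ECs before waiting NPD tasks."""
    clock = 0.0
    done = {}
    queue = []                                   # heap of (priority, arrival, index)
    events = sorted((t[0], i) for i, t in enumerate(tasks))
    j = 0
    while len(done) < len(tasks):
        # admit every task that has arrived by the current clock
        while j < len(events) and events[j][0] <= clock:
            _, i = events[j]
            arr, dur, kind = tasks[i]
            prio = 0 if (ec_priority and kind == "EC") else 1
            heapq.heappush(queue, (prio, arr, i))
            j += 1
        if not queue:                            # resource idle: jump ahead
            clock = events[j][0]
            continue
        _, _, i = heapq.heappop(queue)
        clock += tasks[i][1]                     # serve the task to completion
        done[i] = clock
    return done
```

Running the sketch with and without `ec_priority` already shows the lead-time trade-off the paper explores at a much finer level of detail.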
Download

Paper Nr: 90
Title:

SECURITY ANALYSIS OF THE GERMAN ELECTRONIC HEALTH CARD’S PERIPHERAL PARTS

Authors:

Ali Sunyaev, Helmut Krcmar, Alexander Kaletsch and Christian Mauro

Abstract: This paper describes a technical security analysis based on experiments done in a laboratory and verified in a physician’s practice. The health care telematics infrastructure in Germany stipulates that every patient automatically receive an electronic health smart card and every physician a corresponding health professional card. We analyzed these cards and the peripheral parts of the telematics infrastructure according to the ISO 27001 security standard. The introduced attack scenarios show that there are several security issues in the peripheral parts of the German health care telematics. Based on the discovered vulnerabilities, we provide corresponding security measures to overcome these open issues and derive conceivable consequences for the nation-wide introduction of the electronic health card in Germany.
Download

Paper Nr: 116
Title:

ON ANALYZING WEB SERVICES INTERACTIONS

Authors:

Zakaria Maamar, Yetongnon Kokou, Khouloud Boukadi and Djamal Benslimane

Abstract: This paper looks into the types of preferences that could be associated with the interaction sessions established between component Web services engaged in a composition scenario. Examples of preferences include partnership, which has a composition flavor, and privacy, which has a data flavor. Not all providers are willing to let their Web services interact and share data with peers without knowing, for example, the reputation of these peers, the rationale for submitting data to these peers, and the actions these peers would take over these data. Our contributions revolve around a Specification for Privacy and Partnership Preferences, and include a list of potential partnership- and privacy-oriented preferences relevant to Web services engaged in compositions, a list of corrective actions to be taken in case these preferences turn out to be unsatisfied at run-time, and graphical mechanisms that show these preferences when modeling composition scenarios.

Paper Nr: 164
Title:

AN APPROACH TO MODEL-DRIVEN DEVELOPMENT PROCESS SPECIFICATION

Authors:

Rita Suzana Pitangueira Maciel, Bruno Carreiro Da Silva, Ana Patrícia Magalhães and Nelson Rosa

Abstract: The adoption of MDA in software development is increasing, and MDA is widely recognized as an important approach for building software systems. Meanwhile, the use of MDA requires the definition of a software process that guides developers in the elaboration and generation of models. While the first model-driven software processes have started to appear, an approach for describing them in such a way that they may be better communicated, understood, reused and evolved systematically by the development team is lacking. In this context, this paper presents an approach for the specification of MDA processes based on specializations of some SPEM 2 concepts. In order to support and evaluate our approach, a tool was developed and applied in a particular MDA process for the development of specific middleware services.
Download

Paper Nr: 165
Title:

ONTOLOGY MAPPING BASED ON ASSOCIATION RULE MINING

Authors:

Christos Tatsiopoulos and Basilis Boutsinas

Abstract: Ontology mapping is one of the most important processes in ontology engineering. It is imposed by the decentralized nature of both the WWW and the Semantic Web, where heterogeneous and incompatible ontologies can be developed by different communities. Ontology mapping can be used to establish efficient information sharing by determining correspondences among such ontologies. The ontology mapping techniques presented in the literature are based on syntactic and/or semantic heuristics. In almost all of them, user intervention is required. In this paper, we present a new ontology mapping technique which, given two input ontologies, is able to map concepts in one ontology onto those in the other, without any user intervention. It is based on association rule mining applied to the concept hierarchies of the input ontologies. We also present experimental results that demonstrate the accuracy of the proposed technique.
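The core intuition – instances shared by two concept extensions support a mapping rule between them – can be sketched as a simple confidence computation over concept extensions. The function below is an illustrative assumption, not the authors' mining algorithm:

```python
from itertools import product

def map_concepts(ext1, ext2, min_conf=0.8):
    """ext1, ext2: concept name -> set of shared instance identifiers.
    Propose a mapping a -> b when the rule's confidence (fraction of
    a's instances also classified under b) reaches min_conf."""
    mappings = []
    for a, b in product(ext1, ext2):
        if ext1[a]:
            conf = len(ext1[a] & ext2[b]) / len(ext1[a])  # confidence of a => b
            if conf >= min_conf:
                mappings.append((a, b, conf))
    return mappings

ext1 = {"Author": {1, 2, 3}}
ext2 = {"Writer": {1, 2, 3, 4}, "Book": {5}}
result = map_concepts(ext1, ext2)
```

The hard part the paper addresses – recognizing that instances from two independent data sources denote the same real-world object – is assumed solved here (the shared identifiers).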
Download

Paper Nr: 187
Title:

EVALUATION OF CASE TOOLS METHODS AND PROCESSES: An Analysis of Eight Open Source CASE Tools

Authors:

Stefan Biffl, Christoph Ferstl, Christian Höllwieser and Thomas Moser

Abstract: There are many approaches to Computer-Aided Software Engineering (CASE), often supported by expensive tools from market-leading companies. However, to minimize cost, system architects and software designers look for less expensive, if not open-source, CASE tools. As there is often no common understanding of functionality and application area, a general inspection of the open-source CASE tool market is needed. The idea of this paper is to define a “status quo” of the functionality and the procedure models of open-source CASE tools by evaluating these tools using a criteria catalogue for the areas: technology, modelling, code generation, procedure model, and administration. Based on this criteria catalogue, 8 open-source CASE tools were evaluated with 5 predefined scenarios. The major result is that there was no comprehensive open-source CASE tool that assists and fits well with a broad set of developer tasks, especially since a small set of the evaluated tools lack a solid implementation in several of the criteria evaluated. Some of the evaluated tools show either just basic support of all evaluation criteria or high capabilities in a specific area, particularly in code generation.
Download

Paper Nr: 190
Title:

SECURITY AND DEPENDABILITY IN AMBIENT INTELLIGENCE SCENARIOS: THE COMMUNICATION PROTOTYPE

Authors:

Alvaro Armenteros

Abstract: Ambient Intelligence (AmI) refers to an environment that is sensitive, responsive, interconnected, contextualized, transparent, intelligent, and acting on behalf of humans. Security, privacy, and trust challenges are amplified by the AmI computing model and need to be handled. In this paper, the potential of SERENITY in Ambient Intelligence (AmI) ecosystems is described. The main objective of SERENITY is to provide a framework for the automated treatment of security and dependability issues in AmI scenarios. In addition, a proof of concept is provided: we describe the implementation of a prototype based on the application of the SERENITY model (including processes, artefacts and tools) to an industrial AmI scenario. A complete description of this prototype, along with all S&D artefacts used, is provided in the following sections.
Download

Paper Nr: 195
Title:

A Method for Rewriting Legacy Systems Using Business Process Management Technology

Authors:

Gleison Nascimento, Cirano Iochpe, Lucinéia Heloisa Thom and Manfred Reichert

Abstract: Legacy systems are systems which execute useful tasks for the organization. Unfortunately, keeping a legacy system running is a complex and costly task. Thus, in recent years several approaches have been suggested to rewrite legacy systems using contemporary technologies. In this paper we present a method for rewriting legacy systems based on Business Process Management (BPM). The use of BPM for migrating legacy systems facilitates the monitoring and continuous improvement of the information systems existing in the organization.
Download

Paper Nr: 202
Title:

A COMPREHENSIVE APPROACH FOR SOLVING POLICY HETEROGENEITY

Authors:

Rodolfo Ferrini and Elisa Bertino

Abstract: With the increasing popularity of collaborative applications, policy-based access control models have become the usual approach for access control enforcement. In recent years several tools have been proposed to support the maintenance of such policy-based systems. However, none of those tools is able to deal with heterogeneous policies, that is, policies that belong to different domains and thus adopt different terminologies. In this paper, we propose a stack of functions that allows us to create a unified vocabulary for a multidomain policy set. This unified vocabulary can then be exploited by analysis tools, improving the accuracy of the results and thus applicability in real-case scenarios. In our model, we represent the vocabulary of a policy by adopting ontologies. With an ontology it is possible to describe a certain domain of interest, providing richer information than a plain list of terms. On top of this additional semantic data it is possible to define complex functions, such as ontology matching, merging and extraction, that can be combined in the creation of the unified terminology for the policies under consideration. Along with the definition of the proposed model, detailed algorithms are also provided. We also present experimental results which demonstrate the efficiency and practical value of our approach.
Download

Paper Nr: 204
Title:

CORRELATION OF CONTEXT INFORMATION FOR MOBILE SERVICES

Authors:

Stephan Haslinger and Miguel Jiménez

Abstract: Location Based Services are a key driver in today's telecom market, even if their power is far from exhausted in today's telecom systems. To build intuitive Location Based Services for mobile handsets, one success factor is to cover a broad range of the mobile handsets available on the market and to make the services context-aware. Within the EUREKA project MyMobileWeb, we implement a framework to obtain contextual information from handsets using their various capabilities. Contextual information is any information we can obtain from the handset that can be used for any kind of service; the most obvious is location information. Within our framework we built an architecture that can obtain location information from various sources and is not bound to any particular handset capability. Furthermore, the architecture can be used to obtain various other context information, such as the battery level. This information is then used to offer special services to the customer. For this, the context information has to be correlated, which is done by a correlation engine for contextual information. This paper presents a framework that can handle and correlate contextual information in a very flexible way.

Paper Nr: 223
Title:

Business Process Re-engineering in Supply Chains Examining the case of the expanding Halal industry

Authors:

Mohammed Belkhatir

Abstract: Due to several issues arising in the rapidly-expanding Halal industry, among them the production of non-genuine or contaminated products and meats, there is a need to develop effective solutions for ensuring authenticity and quality. This paper proposes the specification of a formalized supply chain framework for the production and monitoring of Halal food and products. The latter enforces high-level quality of automated monitoring as well as shorter production cycles through enhanced coordination between the actors and organizations involved. Our proposal is guided by business process support to ensure quality and efficiency of product development and delivery. It moreover meets the requirements of industrial standards by adopting the Capability Maturity Model Integration’s highest process maturity level through establishing quantitative process-improvement objectives, proposing the integrated support of engineering processes, and enforcing synchronization and coordination, thorough monitoring and exception handling. We then delve into some of the important technologies from the implementation point of view and align them with the formalized Halal framework. An Information Technology support instantiation is proposed, leading to a use case scenario with technology identification.

Paper Nr: 237
Title:

DISCOVERY AND ANALYSIS OF ACTIVITY PATTERN CO-OCCURRENCES IN BUSINESS PROCESS MODELS

Authors:

Jean M. Lau, Manfred Reichert, Lucinéia Heloisa Thom and Cirano Iochpe

Abstract: Research on workflow activity patterns recently emerged in order to increase the reuse of recurring business functions (e.g., notification, approval, and decision). One important aspect is to identify pattern co-occurrences and to utilize the respective information for creating modeling recommendations regarding the activity patterns best suited to be combined with an already used one. Activity patterns as well as their co-occurrences can be identified through the analysis of process models rather than event logs. Related to this problem, this paper proposes a method for discovering and analyzing activity pattern co-occurrences in business process models. Our results are used for developing a BPM tool which fosters the modeling of business processes based on the reuse of activity patterns. Our tool includes an inference engine which considers the pattern co-occurrences to give design-time recommendations for pattern usage.

Paper Nr: 253
Title:

MODELLING, ANALYSIS AND IMPROVEMENT OF MOBILE BUSINESS PROCESSES WITH THE MPL METHOD

Authors:

André Köhler

Abstract: This paper introduces the Mobile Process Landscaping (MPL) method for modelling, analysing and improving mobile business processes. Current approaches for process modelling and analysis do not explicitly allow the consideration of typical mobility issues, e.g. location-dependent activities, mobile networks as resources, and specifics of mobile information systems. Thus, our method focuses on the modelling and analysis of these characteristics, and is furthermore based on the process landscaping approach, supporting the easy creation of hierarchical models of distributed processes. The method comes with a specialized modelling notation and guidelines for the creation of process landscapes, context models, and business object models. Furthermore, it provides a catalogue of formally defined evaluation objectives targeting typical mobility issues. Each evaluation objective can be tested automatically on the created process landscape. In addition, the method includes a best-practices catalogue with patterns for process and application improvements in typical mobility situations. A validation of the method is presented, showing results from the method’s use in a real-world project.

Paper Nr: 262
Title:

RFID IN THE SUPPLY CHAIN: HOW TO OBTAIN A POSITIVE ROI - The Case of Gerry Weber

Authors:

Christoph Goebel, Ralph Tröger, Christoph Tribowski, Oliver Günther and Roland Nickerl

Abstract: Although the use of Radio Frequency Identification (RFID) in supply chains still lags behind expectations, its appeal to practitioners and researchers alike is unbowed. Apart from technical challenges such as low read rates and efficient backend integration, a major reason for its slow adoption is the high transponder price. We deliver a case study that investigates the financial, technical and organizational challenges faced by an apparel company that is currently introducing item-level RFID to monitor its supply chain. The company has developed an implementation strategy based on cross-company, closed-loop, multi-functional use of RFID transponders. This strategy leads to a positive ROI in their case and could serve as an example for other companies considering the introduction of item-level RFID.

Paper Nr: 274
Title:

UNCERTAINTIES MANAGEMENT FRAMEWORK – FOUNDATIONAL PRINCIPLES

Authors:

Deniss Kumlander

Abstract: Uncertainties management is a crucial part of modern software engineering practice, yet it is mostly ignored by management and modern software development practices, or dealt with reactively. As a result, unhandled uncertainties introduce many threats and cause late delivery of projects or budget overruns, which in most cases means the failure of the software engineering process. In this paper, the foundational principles of an uncertainties management framework are defined.

Paper Nr: 285
Title:

A RULE-BASED APPROACH AND FRAMEWORK FOR MANAGING BEST PRACTICES

Authors:

Essam Mansour and Hagen Höpfner

Abstract: perform specified activities. In this paper we present our SIM approach, which incorporates best practices as skeletal plans from which several entity-specific (ES) plans are generated. The skeletal and ES plans represent the complex information incorporating the best practices into organization activities. The paper also presents the SIM framework for managing complex information through three phases: specifying the skeletal plans, instantiating ES plans, and maintaining these ES plans during their lifespan. The paper outlines an implementation, a case study and the evaluation of the SIM approach and framework.

Paper Nr: 289
Title:

ENTERPRISE SYSTEM DEVELOPMENT WITH INVARIANT PRESERVING - A Mathematical Approach by the Homotopy Lifting and Extension Properties

Authors:

Kenji Ohmori

Abstract: In this paper, a theoretical method for developing enterprise systems represented by the pi-calculus is introduced. The method is based on the modern mathematics of homotopy theory. The homotopy lifting and extension properties are applied to developing systems in bottom-up and top-down ways with the incrementally modular abstraction hierarchy, where system development is carried out by climbing down the abstraction hierarchy while linearly adding invariants. This avoids the combinatorial explosion that causes an enormous waste of time and cost in testing. The system requirements and use cases derive a state transition diagram by applying the HEP. Then, the state transition diagram and behavior description yield pi-calculus processes by applying the HLP. These development processes do not need testing since they are designed by preserving invariants.

Paper Nr: 296
Title:

AUTOMATIC GENERATION OF TEST CASES IN SOFTWARE PRODUCT LINES

Authors:

Pedro Reales Mateo, Beatriz Perez and Macario Polo

Abstract: This paper describes a method to automatically generate test cases with oracles in software product lines, where the management of variability and traceability are two indispensable requirements. These characteristics may be quite useful for the processing and automatic addition of the oracle to test cases, which is one of the main problems found not only in the context of software product lines, but also in the general testing literature. The paper describes a simple but effective way to deal with this problem, based on annotations to precode artifacts, metamodelling and transformation algorithms.

Paper Nr: 308
Title:

REASONING ABOUT CUSTOMER NEEDS IN MULTI-SUPPLIER ICT SERVICE BUNDLES USING DECISION MODELS

Authors:

Sybren de Kinderen, Jaap Gordijn and Hans Akkermans

Abstract: We propose a method, e3-service, to reason about satisfying customer needs in the context of a wide choice of multi-supplier ICT service bundles. Our method represents customer needs, their ensuing consequences, and the services that realize those consequences in a service catalogue. This catalogue is then used by a reasoner, which elicits customer needs, computes their consequences, and automatically matches these consequences with services offered by suppliers. The e3-service method has been implemented and tested in software to demonstrate its feasibility.

Paper Nr: 325
Title:

AN EVENT STRUCTURE BASED COORDINATION MODEL FOR COLLABORATIVE SESSIONS

Authors:

Kamel Barkaoui, Chafia Bouanaka and José M. Espinosa

Abstract: Distributed collaborative applications are characterized by supporting groups’ collaborative activities. This kind of application involves physically distributed user groups, who cooperate through interactions and are gathered in work sessions. The effective result of collaboration in a session is the production of simultaneous and concurrent actions. Interactions are fundamental actions of a collaborative session and must be coordinated (synchronized) to avoid inconsistencies. In the present work we propose an event-structure-based model for coordination in a collaborative session, making interactions between participants and applications possible in a consistent way. The proposed model describes interdependencies, in the form of coordination rules, between the different actions of the collaborative session actors.

Paper Nr: 331
Title:

MINING AND MODELING DECISION WORKFLOWS FROM DSS USER ACTIVITY LOGS

Authors:

Petrusel Razvan

Abstract: This paper introduces the concept of decision workflows, regarded as the sequence of actions of the decision maker in the decision-making process. We show how, based on a decision support system we previously created, we log the behaviour of the decision maker. The log is then imported into the ProM framework and mined using process mining algorithms. The mined model shows us the control-flow perspective (the order of the decision maker’s actions), the organisational perspective (the actual relationships among decision makers in group decisions), and the case perspective (what kind of support is required by each type of decision). The aim of our research is to automate the creation of decision-making patterns. Once obtained, the workflows can be merged into a financial enterprise model, which, properly validated, can become a financial reference model.

Paper Nr: 334
Title:

INFORMATION ARCHITECTURE OF FRACTAL INFORMATION SYSTEMS

Authors:

Janis Grabis, Marite Kirikova and Jānis Vanags

Abstract: The fractal approach has emerged as a promising method for the development of loosely coupled, distributed enterprise information systems. This paper investigates the application of information architecture in the development of fractal information systems. The information architecture allows modelling rich information flows among fractal entities. Principles for designing the information architecture of fractal information systems, as well as rules for analyzing the information architecture, are developed. These rules are used to obtain problem-domain representations specifically suited to the needs of individual fractal entities. The usage of the information architecture is demonstrated in the implementation of a fractal information system for a university’s study programme development problem.

Paper Nr: 351
Title:

A PROCESS FOR MULTI-AGENT DOMAIN AND APPLICATION ENGINEERING: THE DOMAIN ANALYSIS AND APPLICATION REQUIREMENTS ENGINEERING PHASES

Authors:

Rosario Girardi and Adriana Leite

Abstract: Domain Engineering is a process for developing a reusable application family in a particular problem domain; Application Engineering is the process for constructing a specific application by reusing software artifacts from the application family previously produced in the Domain Engineering process. MADAE-Pro is an ontology-driven process for multi-agent domain and application engineering which promotes the construction and reuse of agent-oriented application families. This article gives an overview of MADAE-Pro, emphasizing the description of its domain analysis and application requirements engineering phases and showing how software artifacts produced in the former are reused in the latter.

Paper Nr: 358
Title:

A REVISED MODELLING QUALITY FRAMEWORK

Authors:

Pieter Joubert, Stefanie Louw, Carina De Villiers and Jan Kroeze

Abstract: Systems modelling quality plays a critical role in the quality of the final system. Better quality systems are one aspect of addressing system failures, which are still common today. This research paper studies quality frameworks for systems modelling techniques and presents a revised framework. Several authors built their frameworks on the Lindland et al. (1994) conceptual model quality framework. Those frameworks are rather abstract and static: they do not clearly illustrate the flow of information through the systems modelling process. The proposed framework makes it much easier to identify which quality aspects have to be in place at which points within the modelling process for it to be successful in its purpose. In addition, it creates awareness of issues such as the kind of skills and background knowledge that the people involved in this process need to have.

Paper Nr: 398
Title:

LAYERED PROCESS MODELS: ANALYSIS AND IMPLEMENTATION (USING MDA PRINCIPLES)

Authors:

Samia Oussena and Balbir Barn

Abstract: One of the key challenges of Service-oriented architecture (SOA) is to build applications, services and processes that truly meet business requirements. Model-Driven Architecture (MDA) promotes the creation of models and code through model transformation. We argue in this paper that the same principle can be used to drive the development of SOA applications, using a Business Process Modelling (BPM) approach, supported by Business Process Modelling Notation (BPMN). We present an approach that allows the SOA application to be aligned with the business requirements, by offering guidelines for a systematic transformation of a business process model from requirements analysis into a working implementation.

Paper Nr: 402
Title:

EFFICIENT DATA STRUCTURES FOR LOCAL INCONSISTENCY DETECTION IN FIREWALL ACL UPDATES

Authors:

Sergio Pozo Hidalgo, Fernando de la Rosa T. and Rafael M. Gasca

Abstract: Filtering is a very important issue in next generation networks. These networks consist of a relatively high number of resource-constrained devices and have special features, such as the management of frequent topology changes. At each topology change, the access control policy of all nodes of the network must be automatically modified. In order to manage these access control requirements, firewalls have been proposed by several researchers. However, many of the problems of traditional firewalls are aggravated by these networks’ particularities, as is the case of ACL consistency. A firewall ACL with inconsistencies generally implies design errors, and indicates that the firewall is accepting traffic that should be denied or vice versa. This can result in severe problems such as unwanted access to services, denial of service, overflows, etc. Detecting inconsistencies is of extreme importance in the context of highly sensitive applications (e.g. health care). We propose a local inconsistency detection algorithm and data structures to prevent automatic rule updates that can cause inconsistencies. The proposal has very low computational complexity, as both theoretical and experimental results show, and thus can be used in real-time environments.

Paper Nr: 409
Title:

DERIVING SIMPLIFIED BUSINESS OBJECT OPERATION NETS FROM PROCESS MODELS

Authors:

Wang Zhaoxia, Wang Jianmin, Wen Lijie and Liu Yingbo

Abstract: A process model defines the order of business object operations. It is necessary to validate that the business object operation sequences are consistent with the business object's reference life cycle. In this paper we propose an approach for deriving simplified business object operation nets from process models which are modelled with workflows based on coloured Petri nets, i.e. WFCP-nets. Our approach consists of three steps. First, the tasks which access a certain business object are modelled with task operation nets. Second, the WFCP-net is rewritten with these task operation nets. Third, the business object operation net is reduced to the simplified one.

Paper Nr: 451
Title:

A VISION FOR AGILE MODEL-DRIVEN ENTERPRISE INFORMATION SYSTEMS

Authors:

Nick van Beest, Nick Szirbik and Hans Wortmann

Abstract: Current model-driven techniques claim to be able to generate Enterprise Information Systems (EISs) based on enterprise models. However, these techniques still lack – after the initial deployment – the long-desired flexibility which allows a change in the model to be immediately and easily reflected in the EIS. Interdependencies between models are insufficiently managed, requiring a large amount of human intervention to achieve and maintain consistency between models and the EIS. In this position paper a vision is presented which describes how model-driven change of EISs should be structured in a coherent framework that allows for monitoring of interdependencies during model-driven change. By proposing fully automated consistency and pattern checks, the presented agile model-driven framework will reduce the amount of human intervention required during change. As a result, the cost and time span of model-driven EIS change can be reduced, thereby improving organizational agility.

Paper Nr: 454
Title:

SPECIFYING AND COMPILING HIGH LEVEL FINANCIAL FRAUD POLICIES INTO STREAMSQL

Authors:

Michael Edge, Pedro R. Falcone Sampaio, Oliver Philpott and Mohammad Choudhary

Abstract: Fraud detection within financial platforms remains a challenging area in which criminals continue to thrive, breaching security mechanisms with increasingly innovative and sophisticated system attacks. Following the migration from reactive to proactive screening of transactional data to reduce an organisation's fraud detection latency, fraud analysts now find themselves responsible for the maintenance of extensive fraud policy sets and their implementation as complex data stream processing procedures. This paper presents a Financial Fraud Modelling Language and policy mapping tool for high-level expression and implementation of proactive fraud policies using stream processors. A key aspect of the approach is the reduction of the complexity and implementation latency associated with proactive fraud policy management through abstraction of policy functionality using a conceptual-level modelling language and an innovative policy mapping tool. This paper focuses upon the rule-based language model for high-level expression of financial fraud policies and the associated compiler tool for specifying and mapping policies into StreamSQL.

Paper Nr: 480
Title:

Robust approach towards context dependent information sharing in distributed environments

Authors:

Jenny Lundberg and Rune Gustavsson

Abstract: In this paper we propose a robust approach towards context-dependent information modelling supporting trustworthy information exchange. Shortcomings and challenges of present approaches to syntax-based information modelling in dynamic contexts are identified. Basic principles are introduced and used to provide a robust approach towards meeting some of those challenges. The main aim of the approach is to reduce the brittleness of context-dependent information and to enable intelligible information handling in distributed environments. The application domain is Emergency Service Centres, where the focus is on the distributed handling of emergency calls in life-critical situations. The main contribution of the paper is a principled approach to the use of abbreviations in dynamic emergency situations. Points of interaction for coordination are introduced as a tool supporting mappings of abbreviations between different contexts.

Paper Nr: 489
Title:

USING BPMN AND TRACING FOR RAPID BUSINESS PROCESS PROTOTYPING ENVIRONMENTS

Authors:

Mario G. C. A. Cimino, Beatrice Lazzerini, Alessandro Ciaramella and Francesco Marcelloni

Abstract: Business Process (BP) analysis aims to investigate properties of BPs by performing simulation, diagnosis and verification with the goal of supporting BP Management (BPM). In this paper, we propose a framework for BPM that relies on the BP Modeling Notation (BPMN). More specifically, we first introduce a method to deal with the BPM life cycle. Then, we discuss a platform to support this life cycle. The platform comprises three basic modules: a visual BPMN-based designer, a process tracing service, and a BP Manager for, respectively, the design, configuration and execution phases of the BPM life cycle. The proposed framework is particularly useful for performing business simulations such as what-if analysis, and for providing efficient integration support within the supply chain. In this study, we also show a practical application of this framework through a real-world experience at a leather firm, offering an environment for process communication as well as for time and cost analysis.

Paper Nr: 501
Title:

INFORMATION SYSTEMS SECURITY BASED ON BUSINESS PROCESS MODELING

Authors:

Joseph Barjis

Abstract: In this paper, we propose a conceptual model and develop a method for secure business process modeling towards information systems (IS) security. The emphasis of the proposed method is on the social characteristics of systems, which is furnished through the association of each social actor with their authorities, responsibilities and obligations. In turn, such an approach leads to secure information systems. The resulting modeling approach is a multi-method for developing secure business process models (secure BPM), where the DEMO transaction concept is used for business process modeling, and the Norm Analysis Method (organizational semiotics) for incorporating security safeguards into the model.

Paper Nr: 513
Title:

A SERVICE ORIENTED ENGINEERING APPROACH TO ENHANCE THE DEVELOPMENT OF AUTOMATION AND CONTROL SYSTEMS

Authors:

David Hästbacka and Seppo Kuikka

Abstract: In order for the manufacturing industry and closely related engineering disciplines to be competitive and productive, business structures and practices have to adapt to global changes and tougher competition on all levels of operation. An engineering approach based on engineering services provides a foundation for commercial off-the-shelf services to be combined and utilized between engineering enterprises in the development of automation and control systems. A service-based operations model would enable the utilization of expert services as part of the development process to improve system quality, increase productivity and provide better work process management, as well as allow easier integration with later life-cycle operations. This paper presents the opportunities that this conceptual approach offers and outlines some of the related research challenges that need further investigation.

Paper Nr: 515
Title:

APPLYING AN EVENT-BASED APPROACH FOR DETECTING REQUIREMENTS INTERACTION

Authors:

Edgar Sarmiento, Marcos Borges and Maria L. Campos

Abstract: In the software development cycle, it is in the requirements analysis phase that most of the problems that can compromise the delivery time and the development and maintenance costs must be identified and resolved. In general, the requirements obtained in this phase have different relationships with each other. Some of these relationships, commonly called negative interactions, make the progress of some activities of the development process difficult or impossible. The detection of interactions between requirements is an important activity that may prevent some of these problems and avoid their propagation throughout the remaining activities of the software development process. Most of the existing research in this area focuses only on the requirements phase, mainly on the identification of conflict and/or inconsistency interactions. This paper presents a semi-formal event-based approach to model and identify the interactions between requirements, investigating the interactions that influence the other phases of the software development process.

Paper Nr: 518
Title:

AN EVOLUTIONARY APPROACH FOR QUALITY MODELS INTEGRATION

Authors:

Rodrigo Espindola and Jorge Audy

Abstract: The existing quality models (such as ISO/IEC 15504, CMMI, MPS.BR, ITIL and COBIT) establish different processes and controls that must be adopted to achieve high software process reliability. While it is possible to notice similarities and overlapping areas among them, a systematic approach to integrating quality models is not widely explored in the literature. In this work we propose an evolutionary approach to integrating quality models. The approach defines a method that can be executed in a systematic way and has a meta-model and a mapping table as outcomes. The method is composed of two stages: meta-model development and meta-model stabilization. As this is ongoing research, this work presents the application and the results of the execution of the first stage. As a result, a meta-model representing the structure of four different quality models was developed and its applicability was verified.

Paper Nr: 527
Title:

A socio-semantic approach to the conceptualization of domains, processes and tasks in large projects

Authors:

Carla Pereira, Antonio L. Soares and Cristovão Sousa

Abstract: A case study involving a new method to support the collaborative construction of semantic artefacts in an inter-organizational context is described. The method aims at being applied, in particular, in the early phases of ontology development. We share the view that the development of semantic artefacts in collaborative networks of organizations should be based on a continuous construction of meaning, rather than pursuing the delivery of highly formalized accounts of domains. For that, our research is directed to the application of cognitive semantics results, specifically by developing and extending the Conceptual Blending Theory to cope with the socio-cognitive aspects of inter-organizational ontology development. An evaluation experiment for this method is accomplished in the scope of a large European project in the area of industrial engineering. The method evaluation and its results are described. We conclude by describing avenues of ongoing and future research.

Paper Nr: 532
Title:

Enterprise Ontology Management: An Approach Based on Information Architecture

Authors:

Leonardo Azevedo, Claudia Cappelli, Jairo Souza, Flavia Santoro, Fernanda Baião, Sean Siqueira and Mauro Lopes

Abstract: Ontologies have gained popularity, but their promise of being a key to the solution of real-world problems and to mitigating interoperability problems at a large scale has not yet been fulfilled. Ontology management is at the kernel of this evolution, and there is a lack of adequate strategies and mechanisms for handling it in a way that contributes to a better alignment between business and IT. This work proposes an approach to enterprise ontology management as part of an Information Architecture initiative. This approach provides a more complete foundation for the ontology lifecycle while guiding the enterprise in this management, by defining a set of processes, roles and competencies required for ontology management.

Paper Nr: 554
Title:

FINDING REUSABLE BUSINESS PROCESS MODELS BASED ON STRUCTURAL MATCHING

Authors:

Han G. Woo

Abstract: Successfully integrating business processes with information systems has been a critical issue in many organizations. Such integrations should take place throughout the various stages of systems development to manage correct, traceable business process requirements. To support business process management (BPM) activities, many modeling formalisms and tools have been proposed. Yet the reuse of business process knowledge has been understudied, although reuse practice is common, often relying on human recollection and reference models. This research proposes tool support that assists the reuse of business process models such as BPMN, EPC, and UML Activity Diagrams. In the suggested approach, the semantics of these formalisms are preserved in the conceptual graph format along with their instantiations and interrelationships. A structural data mining tool is then used to find reusable process models based on similarities in sequences of events and processes. This study can be applied to many reuse-related situations, namely the retrieval of reusable process models given a problem, the discovery of sequence patterns among process models, and the suggestion of instances of (anti-)patterns for learning purposes.

Paper Nr: 573
Title:

METHOD MANUAL BASED PROCESS GENERATION AND VALIDATION

Authors:

Peter Killisperger, Georg Peters, Markus Stumptner and Thomas Stückl

Abstract: In order to use software processes for a spectrum of projects, they are described in a generic way. Due to the uniqueness of software development, processes have to be adapted to project-specific needs to be effectively applicable in projects. Siemens AG has started research projects aiming to improve this instantiation of processes. A system supporting project managers in the instantiation of software processes at Siemens AG is being developed. It aims not only to execute instantiation decisions made by humans but also to automatically restore the correctness of the resulting process.

Paper Nr: 580
Title:

REVERSE ENGINEERING A DOMAIN ONTOLOGY TO UNCOVER FUNDAMENTAL ONTOLOGICAL DISTINCTIONS

Authors:

Mauro Lopes, Giancarlo Guizzardi, Fernanda Baião and Ricardo Falbo

Abstract: Ontologies are commonly used in computer science either as a reference model to support semantic interoperability in several scenarios, or as a computer-tractable artifact that should be efficiently represented to be processed. This duality poses a tradeoff between expressivity and computational tractability that should be taken care of in the different phases of ontology engineering. In this scenario, the choice of the ontology representation language is crucial, since different languages have different expressivity and ontological commitments, reflected in the specific set of available constructs. The inadequate use of a representation language, disregarding the goal of each ontology engineering phase, can lead to serious problems in database design and integration, in domain and systems requirements analysis within software development processes, in knowledge representation and automated reasoning, and so on. This article presents an illustration of these issues using a real industrial case study in the domain of Oil and Gas Exploration and Production. We make explicit the differences between two representations of this domain, and highlight a number of concepts and ideas (tacit domain knowledge) that were implicit in the original model, represented using an ontology-codification language, and that became explicit by applying the methodological directives underlying an ontologically well-founded modeling language.

Paper Nr: 587
Title:

A BPMN Based Secure Workflow Model

Authors:

Li Peng

Abstract: Secure workflow has become an important topic in both academia and industry. A secure workflow model can be used to analyze workflow systems according to specific security policies. This model is needed to allow controlled access to data objects, secure execution of tasks, and efficient management and administration of security. In this paper, I propose a BPMN-based secure workflow model to manage specific processes such as authorizations in executing tasks and accessing documents. The secure workflow model is constructed using BPMN elements. The model is hierarchical and describes a secure workflow system at the workflow, task and data layers. This model ensures the security properties of workflows: integrity, authorization and availability. Moreover, the model is easily readable and understandable.

Paper Nr: 589
Title:

Semiotics: An Asset for Understanding Information Systems Communication

Authors:

Pedro A. Rocha and Ângela L. Nobre

Abstract: Problem solving rests on the use of knowledge and/or imagination, and in a dialogue, even in a monologue, established communication often suffers misunderstandings, prideful assumptions and crosstalk. The processing and communication of information in an organisation are produced by creating, passing and utilising signs, whatever they may be, with or without a perception of their Semiotics. If we conceive of it in such a way, and because we are three-dimensional beings, the act of solving is endemic and unconscious to us. We do it using a cognitive, mental and visual means that resides in a hyper-environment based on signs, even before the creation of its doctrine. Therefore, Semiotics exists in and within us. With that definition in mind, why do we not use it and establish it on a daily basis in the classroom, at the workplace and in social affairs?

Paper Nr: 601
Title:

Enhancing high precision by combining Okapi BM25 with structural similarity in an information retrieval system

Authors:

Yaël Champclaux and Taoufiq Dkaki

Abstract: The main objective of information retrieval systems (IRS) is to select relevant documents, related to a user’s information need, from a collection of documents. The user’s need is expressed as a query. An IRS compares documents against queries to select documents that may be useful to the user. The comparison is usually performed on document and query representations rather than on the primary documents and queries. Documents (resp. queries) are analyzed by the IRS in order to extract keywords representing their content. This is the indexing phase. The aim of indexing is to choose concepts or terms that represent the document. To achieve this, each document (resp. query) is analysed, common language terms called “stop words” are omitted, and the remaining terms are stemmed. To refine document (resp. query) representations, the representing terms can be weighted, i.e. a value is associated with each term in order to quantify the term's importance in a document. After the indexing phase, query representations can be compared to document representations in order to find the documents most similar to the query. Similarity is a core component that shapes the whole IR model: the representation space and the comparison method in this space define the IR model. Since the fifties, we have seen different IR models, such as the Boolean model, the vector-space model and the probabilistic model, each of them based on different theories and using different measures to uncover the documents most similar to queries. In this paper, we present a graph-based model which belongs to the vector-space family. A vector-space model considers each document as a vector in the term space. Each coordinate of a vector is a value representing the importance of an indexing term in a document or in a query. The vector space is defined by the set of terms that the system collects during the indexing phase. Many similarity measures, such as Cosine, Jaccard and Dice, are used to determine how well a document corresponds to a query.
Such measures determine local similarities between a document and a query on the basis of the terms they have in common. Our goal is to exploit another type of similarity, called structural similarity. Structural similarities identify resemblances between elements on the basis of the relationships they have. The structural relationship that we use originates from the fact that documents contain words and that words are contained in documents. The idea is to compare documents through the similarities between the words they contain, while similarities between words are themselves dependent on similarities between the documents they are contained in. In a previous paper, we showed that the use of structural similarities alone was not sufficient to improve the performance of an IRS. In this paper, we present a different method that combines the use of both structural and surface similarities with the aim of enhancing high precision. Surface similarity is computed as an Okapi measure. Selected documents are then stored in a graph and sorted using a SimRank-based score. We call this two-stage method OkaSim. We have performed different experiments with different term weightings on the Cranfield corpus and show that structural similarities can improve an Okapi ranking. We show that these similarities can improve the average precision of an Okapi ranking by more than 50% and the precision at the top 10 retrieved documents by about 50%. Tests and experiments also address the influence of term weighting on system performance.
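The first, surface-similarity stage of the two-stage method described above can be illustrated with a minimal Okapi BM25 scorer. This is a generic sketch, not the authors' OkaSim implementation; the toy documents and the parameter defaults k1=1.5, b=0.75 are illustrative assumptions.

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each document against the query with Okapi BM25.

    `query` and each element of `docs` are lists of (stemmed) tokens;
    returns one score per document.
    """
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    # document frequency of each term
    df = Counter()
    for d in docs:
        for t in set(d):
            df[t] += 1
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for q in query:
            if q not in tf:
                continue
            idf = math.log((N - df[q] + 0.5) / (df[q] + 0.5) + 1)
            s += idf * tf[q] * (k1 + 1) / (tf[q] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

docs = [["information", "retrieval", "systems"],
        ["cooking", "recipes"],
        ["retrieval", "of", "documents"]]
print(bm25_scores(["retrieval"], docs))
```

In the full method, the documents ranked highest by this surface score would then be re-ranked with a SimRank-style structural score over the document-term graph.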

Paper Nr: 603
Title:

Towards Integrating Perspectives and Abstraction Levels in Business Process Modeling

Authors:

Ivan Markovic and Florian Hasibether

Abstract: In process-driven organizations, process models are the basis on which their supporting process-aware information systems are built. Process modeling is still a highly complex, time consuming and error-prone task. In this paper, we propose an approach for integrating perspectives and abstraction levels in business process modeling. First, we propose six process perspectives to adequately organize information about a business process. Second, we present the abstraction levels in process modeling and discuss metamodel projections on each of the levels. Third, we provide a comparison of our approach to other efforts in the field. With our approach, we make a step towards reducing the complexity of process modeling.

Paper Nr: 621
Title:

KEEPING THE RATIONALE OF IS REQUIREMENTS USING ORGANIZATIONAL BUSINESS MODELS

Authors:

Juliana Jansen Ferreira, Victor M. Chaves, Renata Araujo and Fernanda Baião

Abstract: This paper proposes an approach for identifying, documenting and keeping the rationale of information systems requirements starting from the organizational business model. The approach comprises a method and the implementation of a supporting tool. The paper also discusses the results of preliminary case studies with this approach.

Paper Nr: 624
Title:

AN EFFECTIVE PROCESS MODELLING TECHNIQUE

Authors:

Nadja Damij

Abstract: This paper discusses the problem of process modelling and introduces a simple technique, called the activity table, to find a better solution to this problem. The activity table is a technique used in the field of process modelling and improvement. Business process modelling starts by identifying the business processes and continues by defining the work processes and activities of each chosen business process. The technique is independent of the analyst and his/her experience. It requires that each identified activity be connected to its resource and its successor activity, and in this manner it contributes a great deal to developing a process model that is a true reflection of the actual business process. The problem of conducting a surgery is used as an example to test the technique.

Paper Nr: 16
Title:

A KIND OF INFORMATION CONTENT APPLIED FOR THE HANDICAPPED AND DEMENTIA SITUATION CONSIDERING PHILOSOPHICAL ELEMENTS

Authors:

Masahiro Aruga

Abstract: A communication system has been developed for blind-deaf persons and others, with which they are able to communicate easily. Its information contents need to be discussed when the situation of dementia is to be analyzed, and these contents need to be estimated so as to correspond to the changing signals of the system when dementia sets in. In this paper, firstly, the outline of an example of such a communication system is described; secondly, the philosophical elements and their structures are discussed and the information contents are considered from the philosophical side. Based on Peirce's semiotics, the interpretants defined by Peirce are introduced into the analytical method as the philosophical elements used to analyze the information structure and communication system of handicapped and dementia situations. New information contents, different from Shannon's ordinary information content, are then introduced into the method of analyzing the information process of handicapped and dementia situations. Thirdly, considering the new information contents, the concept of a compartment system is introduced into the consideration of relations among interpretants; as a result, an example of the new information content with regard to the elements of the structure of the handicapped and dementia situation is proposed on the basis of a discussion of the compartment system's characteristics.
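For reference, the "Shannon's ordinary information content" that the abstract contrasts its proposal with is simply the negative log-probability of an event, shown here as a one-line sketch (the probabilities are example values):

```python
import math

# Shannon's self-information of an event with probability p, in bits.
def self_information(p):
    return -math.log2(p)

print(self_information(0.5), self_information(0.25))  # 1.0 bits, 2.0 bits
```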

Paper Nr: 70
Title:

Evaluation of UML in Practice - Experiences in a Traffic Management Systems Company

Authors:

Michel dos Santos Soares and Jos Vrancken

Abstract: This article is about research performed by the authors into improving the Software Engineering process at a company that develops software-intensive systems. The company develops road traffic management systems using the object-oriented paradigm, with UML as the visual modeling language. Our hypothesis is that UML has some difficulties/drawbacks in certain system development phases and activities. Many of these problems were reported in the literature, normally after applying UML to one project and/or after studying the language's formal specifications and comparing them with those of other languages. Unfortunately, few publications are based on surveys of and interviews with practitioners, i.e., the developers and project managers who are using UML in real projects and frequently face these problems. As a matter of fact, some relevant questions were not fully addressed in past research, mainly related to UML problems in practice. The purpose of this text is to report the main findings and the improvements proposed on the basis of other methods/languages, or even of UML diagrams that are not often used. The research methodology involved surveys, interviews and action research, with a system developed in order to implement the recommendations and evaluate the proposed improvements. The recommendations were considered feasible, as they do not propose radically changing the current situation, which would involve higher costs and risks.

Paper Nr: 147
Title:

AN EVALUATION OF SECURITY SAFEGUARD SELECTION METHODS

Authors:

Thomas Neubauer

Abstract: IT security incidents pose a major threat to the efficient execution of corporate strategies and business processes. Although companies generally spend a lot of money on security, they are often not aware of what they spend on security and, even more importantly, of whether these investments in security are effective. This paper provides decision makers with an overview of decision support techniques and describes the pros and cons of these methodologies.

Paper Nr: 159
Title:

Privacy for RFID-Enabled Distributed Applications - Design Notes

Authors:

Mikaël Ates, Jacques FAYOLLE, Christophe Gravier, Jeremy Lardon and Rahul Garg

Abstract: We depict here an RFID system based on a distributed application. The concern of this paper is RFID systems coupled with distributed applications; through a simple use case we describe the main privacy concerns. We do not treat known RFID attacks; rather, we focus on the best way to protect the identity mapping, i.e. the association between a tag identifier, which can be obtained or deduced from the tags or from communications involving the tags, and the real identity of its carrier. We rely on a common use case of a distributed application and on a modelling approach.

Paper Nr: 178
Title:

BRAIN PHYSIOLOGICAL CHARACTERISTIC ANALYSIS FOR SOFTWARE ANALYSIS SUPPORT ENVIRONMENTS

Authors:

Mikio Ohki and Haruki Murase

Abstract: In the field of Industrial Engineering, a number of studies on the production process have been conducted to achieve higher quality and productivity through the ages. On the other hand, as for software development, no study has been conducted on the environment optimized for brain work from the viewpoints of personality, motivation, and procedures to improve quality and productivity, since brain work is not visible. However, recently, devices that can measure the activation state of brain in a practical work environment are available. This paper analyzes software analysis tasks from the viewpoint of brain physiology based on the measurement results attained from the experiments using such a device and discusses the fundamental issues and challenges to implement an ideal software analysis support environment.

Paper Nr: 192
Title:

Modular Behaviour Modelling of Service Providing Business Processes

Authors:

E. E. Roubtsova, Lex Wedemeijer, Ashley Mcneile and Karel Lemmen

Abstract: We examine possibilities for modularizing the executable models of Service Providing Business Processes in a way that allows reuse of common patterns across different applications. We argue that this requires that we can create independent models for different required aspects of the process and then compose these partial behavioral models to realize a complete solution. We identify two areas of modeling that should be separable from the main, application-specific, process model: the underlying subject matter with which the process is concerned, and standard re-usable process-level behavior that is common across many processes. We compare the capabilities of two modeling techniques (Coloured Petri Nets and Protocol Modeling) to support this compositional approach, using an example of a Service Providing Business Process concerned with Formal Accreditation of Prior Learning.

Paper Nr: 208
Title:

THE PATTERNS FOR INFORMATION SYSTEM SECURITY

Authors:

Diego Abbo and Lily Sun

Abstract: The territory of IS is continuously improving its capacities: new architectures grow at a brisk pace and, qualitatively, the functional processes are deepening the degree of interaction inherent in the services provided. In the logical and/or physical territory of application, security management faces the inherent problems in the domains of prevention, emergency and forensic investigation. If the visionary plans are good, the security breaches will fall within the “residual risk profiles” of a congruous preventive risk analysis, and any further business development will match the costs of security safeguards against the detrimental economic consequences of security breaches. In that perspective, IS security should have a larger field of application than the traditional security vision, in the sense that the responsibility of a security domain should not only consider the immediate self-interest of the owner of the asset. IS security should consider the horizontal and hierarchical integration and interoperability with all the correlated security systems, or all the systems needing security, with an intrinsic capacity of evaluating any possible future model. The most efficient security should prove to be the one that can identify all the possible variables that constitute the basis for the patterns.

Paper Nr: 224
Title:

ALIGNING GOAL-ORIENTED REQUIREMENTS ENGINEERING AND MODEL-DRIVEN DEVELOPMENT

Authors:

Fernanda Alencar, Oscar Pastor, Jaelson Castro, Giovanni Giachetti and Beatriz Marín

Abstract: In order to fully capture the various system facets, a model should have not only software specifications but it should integrate multiple complementary views. Model-Driven Development (MDD) and Goal-Oriented Requirements Engineering (GORE) are two modern approaches that deal with models and that can complement each other. We want to demonstrate that a sound software production process can start with a GORE-based requirements model and can finish with advanced MDD-based techniques to generate a software product. Therefore, we intend to show that GORE and MDD can be successfully put together.

Paper Nr: 241
Title:

VAODA: A VIEWPOINT AND ASPECT-ORIENTED DOMAIN ANALYSIS APPROACH

Authors:

João Araújo and António Rodrigues

Abstract: Domain analysis (DA) consists of analyzing the properties, concepts and solutions for a given domain of application; based on that information, decisions are made for future applications within that domain. In DA, feature modelling is used to describe common and variable requirements in software systems. Nevertheless, feature models show a limited view of the domain. Requirements approaches can be integrated to specify domain requirements; among them, viewpoint-oriented approaches stand out for their simplicity and efficiency in organizing requirements. However, none of them deals with the modularization of crosscutting concerns. Aspect-Oriented Domain Analysis (AODA) is a growing area of interest, as it addresses the problem of specifying crosscutting properties at the domain analysis level. The goal is to obtain better reuse at this abstraction level through the advantages of aspect orientation. This work proposes an AODA approach that integrates feature modeling and viewpoints.

Paper Nr: 292
Title:

TOWARDS A UNIFIED DOMAIN FOR FUZZY TEMPORAL DATABASES

Authors:

Carmen Garrido, Nicolás Marín and Olga Pons

Abstract: The primary aim of Temporal Databases (TDB) is to offer a common framework to those DB applications that need to store or handle temporal data of different natures or sources, since they make it possible to unify the concept of time from the point of view of its meaning, its representation and its manipulation. At first sight, the incorporation of time into a DB may seem a direct and even simple task; on the contrary, it is quite a complex aim, because time may be provided by different sources, with different granularities and meanings. The situation gets more complex when the time specification is made not in precise but in fuzzy terms, where, together with the inherent problems of the time domain, we have to consider the imprecision factor. To deal with this problem, the first task to perform is to unify as much as possible the representation of time, in order to be able to define the range and the semantics of the operators necessary to handle data of this type.
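One common way to represent an imprecise time specification of the kind the abstract discusses is a trapezoidal membership function over the time domain. This is a generic fuzzy-set illustration, not the authors' model; the year values are made up.

```python
def trapezoid(a, b, c, d):
    """Membership function for a fuzzy time interval:
    rises on [a, b], is fully 1 on [b, c], falls on [c, d]."""
    def mu(t):
        if t <= a or t >= d:
            return 0.0
        if b <= t <= c:
            return 1.0
        if t < b:
            return (t - a) / (b - a)
        return (d - t) / (d - c)
    return mu

# "approximately 1990-1995", with a fuzzy spread of two years on each side
valid_time = trapezoid(1988, 1990, 1995, 1997)
print(valid_time(1992), valid_time(1989), valid_time(2000))
```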

Paper Nr: 313
Title:

PROCESS INSTITUTIONALIZATION USING SOFTWARE PROCESS LINES

Authors:

Tomás Martínez-Ruiz, Félix García Rubio and Mario Piattini

Abstract: Software Process Institutionalization is an important step which must be carried out by organizations if they are to improve their processes, and it must take place in a coherent manner in accordance with the organization’s policies. However, process institutionalization implies adapting processes from a set of the organization’s standard processes, and these standard processes must be continually maintained and updated through the standardization of best practices, since adaptation in itself cannot create capable processes. In this paper we propose using the philosophy of software process lines to design a cycle and to specify a set of techniques and practices with which to institutionalize software processes. The cycle, techniques and practices include both process tailoring and process standardization, offering organizations an infrastructure with which to generate processes that are better fitted to their needs. The use of our cycle will enable capable processes to be tailored from software process lines, and the analysis of these processes will permit the improvement of the organization’s set of standard processes and of the software process line.

Paper Nr: 397
Title:

A SYSTEMATIC LITERATURE REVIEW OF REQUIREMENTS ENGINEERING IN DISTRIBUTED SOFTWARE DEVELOPMENT

Authors:

Thaís Ebling, Jorge Audy and Rafael Prikladnicki

Abstract: Distributed Software Development (DSD) is a recent approach that presents itself as a trend within organizations. With the evolution of this phenomenon, the result is an increase in the existing literature. At the same time, on analyzing the main characteristics of distributed environments (physical and temporal distance, cultural and language differences), we can notice that they particularly affect Requirements Engineering (RE). For this reason, in this paper we report on a systematic review of the DSD literature, in which we looked for challenges and possible solutions related to RE in DSD environments. We also discuss gaps in this research area, which can be used to guide future research.

Paper Nr: 443
Title:

APPLICABILITY OF ISO/IEC 9126 FOR THE SELECTION OF FLOSS TOOLS

Authors:

María A. Perez, Kenyer Dominguez, Edumilis M. Ortega and Luis E. Mendoza

Abstract: The trend towards the use of Free/Libre Open Source Software (FLOSS) tools is impacting not only how we work and how productivity can be improved when it comes to developing software, but is also promoting new work schemes and business models. The purpose of this paper is to present the applicability of ISO/IEC 9126 to the selection of FLOSS tools associated with three relevant software development disciplines, namely Analysis and Design, Business Models and Software Testing. Its importance lies in providing software development companies, specifically small and medium-sized enterprises, with a feasible and effective model for deciding on and selecting the best tool. The categories considered for the evaluation of these three types of tools are Functionality, Maintainability and Usability. From the results obtained in this research-in-progress, we have been able to determine that these three categories are the most relevant and suitable for evaluating FLOSS tools, thus pushing to the background the aspects associated with Portability, Efficiency and Reliability. Our long-term purpose is to refine quality models for other types of FLOSS tools.
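A selection model over quality characteristics like these is often operationalized as a weighted sum per candidate tool. The following is a hypothetical illustration, not the paper's model: the weights, ratings and tool names are invented, and only the three characteristics the abstract retains are scored.

```python
# Made-up weights over the three ISO/IEC 9126 characteristics kept
# by the study: Functionality, Maintainability, Usability.
WEIGHTS = {"functionality": 0.5, "maintainability": 0.3, "usability": 0.2}

def weighted_score(ratings):
    """Combine per-characteristic ratings (0-10) into one score."""
    return sum(WEIGHTS[c] * r for c, r in ratings.items())

tools = {
    "tool-A": {"functionality": 8, "maintainability": 6, "usability": 9},
    "tool-B": {"functionality": 9, "maintainability": 4, "usability": 5},
}
best = max(tools, key=lambda t: weighted_score(tools[t]))
print(best, weighted_score(tools[best]))
```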

Paper Nr: 448
Title:

A workflow language for the experimental sciences

Authors:

Thérèse Libourel, Isabelle Mougenot and Yuan Lin

Abstract: Scientists in the environmental domains (biology, geographical information, etc.) need to capitalize, distribute and validate their experimentations of varying complexities. The concept of the scientific workflow is increasingly being considered to fulfill this requirement. After a short discussion of existing work, this article presents the first phase of the establishment of a workflow environment corresponding to the static part, i.e., a meta-model and a language dedicated to the design of process-chain models. We illustrate our proposal with a simple example from the spatial domain and conclude with perspectives that open up with the establishment of a workflow environment.

Paper Nr: 457
Title:

HIPPOCRATIC ONTOLOGY BASED: A Model for Protecting Personal Information Privacy

Authors:

Esraa Omran and Albert Bokma

Abstract: In the age of identity theft and increased misuse of personal information held in databases, a crucial topic is the incorporation of privacy protection into database systems. Several initiatives have been created to address privacy protection in various forms, from legislation such as PIPEDA to policies such as P3P. Unfortunately, none of these enforce the protection of data. Recent solutions for enforcing data privacy and protection have emerged, such as the Hippocratic database, but this technique has proved complex for purpose-based decision making over a wide range of cases. To overcome this deficiency, we propose to build personal information ontologies that integrate with Hippocratic databases. This method introduces a new way of reducing the complexity and of clearly identifying terms of privacy in the database architecture.
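In the spirit of Hippocratic databases, disclosure decisions are tied to the purposes for which data was collected. The following is a minimal, hypothetical sketch of such a purpose-based check, not the authors' architecture; the table/column names and purposes are invented.

```python
# Each (table, column) pair carries the set of purposes for which
# its disclosure was consented to.
PRIVACY_POLICY = {
    ("customers", "email"): {"order-confirmation", "newsletter"},
    ("customers", "address"): {"shipping"},
}

def allowed(table, column, purpose):
    """Return True iff the column may be disclosed for this purpose."""
    return purpose in PRIVACY_POLICY.get((table, column), set())

print(allowed("customers", "email", "newsletter"))   # expect True
print(allowed("customers", "address", "marketing"))  # expect False
```

The ontology proposed in the paper would sit above such a policy table, organizing the purpose and data-category terms so that checks like this need not enumerate every column individually.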

Paper Nr: 472
Title:

Linking IT and Business Processes for Alignment - A Meta Model Based Approach -

Authors:

Matthias Goeken and Wolfgang Johannsen

Abstract: Methods to optimize the alignment between an enterprise's business strategy and its IT strategy have been on the agenda of IS research since the beginning of the last decade. Recognizing the growing impact of IT on both the revenue side and the cost side of the P&L, alignment has become one of the most pressing issues of strategic IT management. One promising approach to gaining the best results for alignment is to synchronize similar and related processes in both the business and the IT domain. In this contribution we present an approach for identifying components of processes in both domains on the basis of existing meta models. We consider this a first step in developing a method-based, coherent view of both domains, which will finally allow us to create a systematic and comprehensive alignment method.

Paper Nr: 475
Title:

Modeling with BPMN and chorda: a top-down, data-driven methodology and tool

Authors:

Matteo Magnani, Danilo Montesi and Andrea Catalano

Abstract: In this poster paper we present a methodology, named chorda, for the modeling of business processes with BPMN. Our methodology focuses on the peculiar features of this notation: its ability to illustrate different levels of abstraction, its support for both orchestration and choreography, and the representation of data flows. In particular, this last feature has been extended to allow a better mapping of real processes, where data often plays a fundamental role. To evaluate and tune the methodology, we have developed a tool supporting it.

Paper Nr: 481
Title:

Proactive Insider-Threat Detection against Confidentiality in Sensitive Pervasive Applications

Authors:

Joon Park, Jaeho Yim and Jason Hallahan

Abstract: The primary objective of this research is to mitigate insider threats against sensitive information stored in an organization’s computer system, using dynamic forensic mechanisms to detect insiders’ malicious activities. Among the various types of insider threats, which may break confidentiality, integrity, or availability, this research is focused on violations of confidentiality through privilege misuse or escalation in sensitive applications. We identify insider-threat scenarios and then describe how to detect each threat scenario by analyzing primitive user activities such as Copy, Rename, Print, Paste, and so on. Finally, we implement our detection mechanisms by extending the capabilities of existing software packages. Since our approach can detect an insider’s malicious behavior before the malicious action is finished, we can prevent the possible damage proactively. In this particular paper the primary sources for our implementation are the Windows file system activities, the Windows Registry, the Windows Clipboard system, and printer event logs and reports. However, we believe our approaches to countering insider threats can also be applied to other computing environments.
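Scenario detection over a stream of primitive activities, as described above, can be sketched as a simple stateful pattern match. This is an illustrative reconstruction, not the paper's implementation; the event format and the copy-then-print rule are assumptions.

```python
# Flag users who Copy a sensitive file and later Print it -- one
# hypothetical confidentiality-violation scenario built from the
# primitive activities (Copy, Rename, Print, Paste, ...).
def detect_exfiltration(events, sensitive_files):
    """`events` is an ordered stream of (user, action, path) tuples."""
    copied = set()    # (user, file) pairs already copied
    alerts = []
    for user, action, path in events:
        if path not in sensitive_files:
            continue
        if action == "Copy":
            copied.add((user, path))
        elif action == "Print" and (user, path) in copied:
            alerts.append((user, path))
    return alerts

events = [("alice", "Copy", "salaries.xls"),
          ("bob", "Print", "salaries.xls"),
          ("alice", "Print", "salaries.xls")]
print(detect_exfiltration(events, {"salaries.xls"}))
```

Because the alert fires on the Print event itself, a monitor built this way can intervene before the action completes, which is the proactive aspect the abstract emphasizes.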

Paper Nr: 508
Title:

AN INTEGRATION-ORIENTED MODEL FOR APPLICATION LIFECYCLE MANAGEMENT

Authors:

Guenter Pirklbauer, Rudolf Ramler and Rene Zeilinger

Abstract: In the last years a new trend emerged in the software engineering tool market: Application Lifecycle Management (ALM). ALM aims at integrating processes and tools to coordinate development activities in software engineering. However, a common understanding or widely accepted definition of the term ALM has not yet evolved. Thus, companies introducing ALM are usually confronted with a wide range of solutions following different, vendor-specific interpretations. The aim of this paper is to clarify the concept of ALM and to provide guidance on how to develop an ALM strategy for software development organizations. The paper identifies key problem areas typically addressed by ALM and derives a model to relate the solution concepts of ALM to engineering and management activities. The work has been applied in the context of an improvement project conducted at an industrial company. This case shows how the model can be used to systematically develop a tailored, vendor-independent ALM solution.

Paper Nr: 533
Title:

Challenges and perspectives in the deployment of distributed components-based software.

Authors:

Mariam Dibo and Noureddine Belkhatir

Abstract: Software deployment encompasses all post-development activities that make an application operational. It covers different activities such as packaging, installation, configuration, application start-up and updates. These deployment activities on large infrastructures are more and more complex, leading to different works generally developed in an ad hoc way and consequently specific to a middleware such as J2EE, .NET or CCM. Each middleware designs its own deployment mechanisms and tools. The objective of this work is to propose a generic deployment approach, independent of the target environments, and to propose the abstractions necessary to describe the software to be deployed, the deployment infrastructures and the deployment process, with the identification and organization of the activities to be carried out and the support for their execution. Our approach is model-driven and our contribution is a generic deployment framework.

Paper Nr: 551
Title:

DATABASE MARKETING PROCESS SUPPORTED BY ONTOLOGIES: SYSTEM ARCHITECTURE PROPOSAL

Authors:

Filipe M. Pinto, Alzira Marques and Manuel F. Santos

Abstract: This work proposes an ontology-based system architecture that works as a developer guide for the database marketing practitioner. Marketing departments deal daily with a great volume of data that is normally task- or marketing-activity-dependent, which sometimes requires a specific knowledge background and framework. This article aims to introduce an unexplored research topic in Database Marketing: the ontological approach to the Database Marketing process. Here we propose a generic framework supported by ontologies and by techniques for knowledge extraction from databases. This paper therefore has two purposes: to integrate the ontological approach into Database Marketing, and to create a domain ontology with a knowledge base that will enhance the entire process at both levels: marketing and knowledge extraction techniques. Our work is based on the Action Research methodology. At the end of this research we present some experiments in order to illustrate how the knowledge base works and how it can be useful to the user.
Download

Paper Nr: 569
Title:

ON TECHNOLOGY INNOVATION: A COMMUNITY SUCCESSION MODEL FOR SOFTWARE ENTERPRISE

Authors:

Qianhui Liang and Weihui Dai

Abstract: In this paper, we take an economic approach to technological innovation to study the issue of evolution in software enterprise. Based on the Lotka–Volterra equations and an equilibrium formula, we build a model for the dynamics of software technological innovations. The model is applied to derive the typical succession patterns of communities and a method for optimal co-existence and interaction among the communities. We validate our model by presenting a case study on the development process of software enterprises.
Download

Paper Nr: 610
Title:

MODELLING LOCATION-AWARE BEHAVIOUR IN WEB-GIS USING ASPECTS

Authors:

Ana Isabel Oliveira, Matias Urbieta, João Araújo, Armanda Rodrigues, Ana Moreira, Silvia Gordillo and Gustavo Rossi

Abstract: Web-GIS applications evolve fast as new requirements emerge constantly. Some of these requirements, particularly those related to spatial behaviours, might crosscut previous core application requirements. Conventional modelling techniques, which ignore the effects of crosscutting concerns (such as tangled and scattered behaviours), negatively affect modularity and thus compromise application maintenance. In this paper we present an aspect-oriented approach to model crosscutting concerns in Web-GIS applications, particularly those related to spatial features. The process introduced in this paper starts with the identification and specification of crosscutting concerns, followed by the composition of these concerns using the MATA language.
Download

Paper Nr: 622
Title:

Instructional design for Java enterprise component technology

Authors:

E. E. Roubtsova, Bert Hoogveld and Marco Marcellis

Abstract: We present a method for developing instructions for teaching component-based development of enterprise applications. The method considers the development of an enterprise application as a complex task that has to be taught as a whole. The requirements on user access and on the back-end systems serve as a natural means for choosing the learning tasks. We fixed the back-end system and separated the task classes based on the requirements on user access: a web browser and an application client. Within each of these task classes, the requirements are decomposed into "create", "retrieve", "update" and "remove" groups of functionality. Each of these functionalities can be seen as a simple enterprise application, can be of a different level of complexity, and may be implemented with local and remote clients and different types of components. The Java enterprise component technology is used for the implementation of the learning tasks.
Download

Paper Nr: 635
Title:

Innovative health care channels - Towards Declarative Decision Support Systems focusing on patient security

Authors:

Jenny Lundberg, Kerstin Ådahl and Rune Gustavsson

Abstract: The main contribution of this paper is a structured approach supporting validated quality of information sharing in Health Care settings. Protocols, at different system levels, are used as a method to design and implement intelligible information-sharing structures. Our approach can be seen as a context-dependent information modelling framework that could be implemented using, e.g., Web 2.0 techniques in a professional context. The main challenge is how to convey and analyze, in a trustworthy way, the huge amounts of information available in Health Care contexts. Our innovative information health channel concept provides an approach to analyze and structure information, as well as contextual support towards increasing patient security.
Download

Area 4 - Software Agents and Internet Computing

Full Papers
Paper Nr: 93
Title:

E-LEARNING IN LOGISTICS COST ACCOUNTING - Automatical Generation and Marking of Exercises

Authors:

Markus Siepermann and Christoph Siepermann

Abstract: The paper presents the concept and realisation of an e-learning tool that provides predefined or automatically generated exercises in logistics cost accounting. Students may practise where and whenever they like via the Internet. Their solutions are marked automatically by the tool, taking consecutive faults into account, without any intervention by lecturers.

Paper Nr: 156
Title:

TOWARDS SUCCESSFUL VIRTUAL COMMUNITIES

Authors:

Pierre Maret, Julien Subercaze, Christo El Morr, Matti Koivisto, Masayuki Ihara, Adrien Joly and Panayotis Antoniadis

Abstract: With the multiplication of communication media, increasingly multi-partner global organizations, remote working tendencies, dynamic teams, and pervasive or ubiquitous computing, Virtual Communities (VCs) are playing an increasing role in social organizations and will probably change profoundly the way people interact in the future. In this paper, we present our position on the key characteristics that are imperative for a successful VC, as well as future directions in terms of research, development and implementation. We identify three main aspects (business, technical and social) and analyze, for each of them, the different components and their relationships.

Paper Nr: 271
Title:

A Multiagent-System for Automated Resource Allocation in the IT Infrastructure of a Medium-sized Internet Service Provider

Authors:

Michael Schwind and Marc Goederich

Abstract: In this article we present an agent-based system designed for the automated allocation of web hosting services to the IT resources of a medium-sized Internet service provider (ISP). The system is capable of finding a cost-minimizing allocation of web hosting services on the distributed IT infrastructure of the ISP. For this purpose, an agent that can independently determine a price for each package of web hosting services is assigned to each resource. The allocation mechanism employs a system of price and cost functions to form an economic model that guarantees a continuous capacity load for the company's IT resources. According to the demand for web hosting services, resource agents can invest in the acquisition of IT infrastructure. These investments have to be amortized by the resource agents using the returns yielded by the web services sold to the ISP's customers. Using real-world demand profiles for web service packages taken from the operational systems of a medium-sized ISP, we were able to prove the stability of the resource allocation system.
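The core idea of load-dependent pricing steering demand across resources can be sketched as follows. This is a minimal illustration, not the paper's system: the linear price function, package demands and server capacities are invented assumptions.

```python
def allocate(packages, resources, base_price=1.0):
    """Greedy sketch: each resource agent quotes a load-dependent price
    and every web-hosting package goes to the cheapest quote, which
    spreads the load across the infrastructure.
    (The linear price function is an illustrative assumption.)"""
    load = {r: 0.0 for r in resources}
    assignment = {}
    for pkg, demand in packages.items():
        # price rises with current utilisation, steering demand to idle resources
        quotes = {r: base_price * (1.0 + load[r] / resources[r]) for r in resources}
        winner = min(quotes, key=quotes.get)
        assignment[pkg] = winner
        load[winner] += demand
    return assignment

# Hypothetical packages (name -> demand) and resources (name -> capacity).
packages = {"p1": 2.0, "p2": 2.0, "p3": 2.0}
resources = {"server_a": 4.0, "server_b": 4.0}
print(allocate(packages, resources))
```

Because a busy resource quotes a higher price, consecutive packages are pushed to idle servers, which is one simple way to keep capacity load continuous.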

Paper Nr: 306
Title:

AgEx: A Financial Market Simulation Tool for Software Agents

Authors:

Paulo Castro and Jaime Sichman

Abstract: Many researchers in the software agent field use the financial domain as a test bed to develop the adaptation, cooperation and learning skills of software agents. However, there are no open-source financial market simulation tools available that provide a suitable environment for agents, with real information about assets and an order execution service. To address this demand, this paper proposes an open-source financial market simulation tool called AgEx. The tool allows traders launched from distinct computers to act in the same market. Communication among agents is performed through FIPA ACL and uses a market ontology created specifically for trader agents. We implemented several traders using AgEx and performed many simulations using data from real markets. The results allowed us to comparatively assess the traders' performance against each other in terms of risk and return. We verified that the effort to implement and test trader agents was significantly diminished by the use of AgEx. Furthermore, the results indicated new directions in trader strategy design.

Paper Nr: 322
Title:

A Domain Analysis Approach for Multi-agent Systems Product Lines

Authors:

Ingrid Nunes, Uirá Kulesza, Camila Nunes, Carlos J. Pereira de Lucena and Elder Cirilo

Abstract: In this paper, we propose an approach for documenting and modeling Multi-agent System Product Lines (MAS-PLs) in the domain analysis stage. MAS-PLs are the integration of two promising techniques, software product lines and agent-oriented software engineering, aiming at incorporating their respective benefits and helping the industrial exploitation of agent technology. Our approach explores the scenario of adding agency features to existing web applications and is based on PASSI, an agent-oriented methodology, to which we added some extensions to address agency variability. A case study, OLIS (OnLine Intelligent Services), illustrates our approach.

Paper Nr: 324
Title:

A Reputation-based Game for Tasks Allocation

Authors:

Hamdi Yahyaoui

Abstract: We present in this paper a distributed game-theoretical model for task allocation. During the game, each agent submits a cost for achieving a specific task. Each agent that is offering a specific task computes the so-called reputation-based cost, which is the product of the submitted cost and the inverse of the reputation value of the bidding agent. The game winner is the agent with the minimal reputation-based cost. We show how the use of reputation allows a better allocation of tasks with respect to a conventional allocation where reputation is not considered as a criterion for allocating tasks.
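The winner-determination rule described in the abstract can be sketched in a few lines. The agent names, bids and reputation values below are hypothetical, purely to illustrate that a cheaper bid can lose to a more reputable agent:

```python
def reputation_based_cost(submitted_cost, reputation):
    """Reputation-based cost: the submitted cost times the inverse of the
    bidder's reputation (reputation assumed to lie in (0, 1])."""
    return submitted_cost * (1.0 / reputation)

def select_winner(bids):
    """bids: dict agent -> (submitted_cost, reputation).
    The winner is the agent with the minimal reputation-based cost."""
    return min(bids, key=lambda a: reputation_based_cost(*bids[a]))

# Hypothetical example: agent_a bids less but has a worse reputation.
bids = {"agent_a": (10.0, 0.5),   # effective cost 10.0 / 0.5 = 20.0
        "agent_b": (12.0, 0.9)}   # effective cost 12.0 / 0.9 ≈ 13.3
print(select_winner(bids))        # prints agent_b
```

The effect is exactly the trade-off the paper exploits: low reputation inflates an agent's effective cost, so unreliable low bids no longer win automatically.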

Paper Nr: 386
Title:

REMOTE CONTROLLING AND MONITORING OF SAFETY DEVICES USING WEB-INTERFACE EMBEDDED SYSTEMS

Authors:

Alejandro Carrasco Muñoz, María Dolores Hernández, Maria Del Carmen Romero Ternero, Francisco Sivianes and Jose Ignacio Escudero Fombuena

Abstract: To date, access control systems have been hardware-based platforms, where the software and hardware parts were uncoupled into different systems. The Department of Electronic Technology of the University of Seville, together with ISIS Engineering, has developed an innovative embedded system that provides all the functions needed for controlling and monitoring remote access control systems through a built-in web interface. The design provides a monolithic structure, independence from outer systems, ease of management and maintenance, conformance with the highest security standards, and straightforward adaptability to applications other than the original one. We accomplished this by using an extremely reduced Linux kernel and developing the web and purpose-specific logic with software technologies that make optimal use of resources.

Paper Nr: 407
Title:

RECOGNIZING CUSTOMERS’ MOOD IN 3D SHOPPING MALLS BASED ON THE TRAJECTORIES OF THEIR AVATARS

Authors:

Anton Bogdanovych

Abstract: This paper proposes a method to assess the cognitive state of a human embodied as an avatar inside a 3-dimensional virtual shop. To do so, we analyze the trajectories of the avatar's movements and classify them against a set of predefined prototypes. To perform the classification, we use a trajectory comparison algorithm based on a combination of the Levenshtein distance and the Euclidean distance. The proposed method is applied in a distributed manner to the problem of making autonomous assistants in virtual stores recognize the intentions of customers.
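The Levenshtein part of the matching idea can be illustrated as follows. The direction-symbol encoding and the prototype labels are our own assumptions, not taken from the paper; they merely show how a quantized trajectory can be matched against prototypes by edit distance:

```python
def levenshtein(a, b):
    """Classic edit distance via dynamic programming (two-row variant)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def classify(trajectory, prototypes):
    """Assign a direction-encoded trajectory to the nearest prototype."""
    return min(prototypes, key=lambda label: levenshtein(trajectory, prototypes[label]))

# Hypothetical prototypes encoded with N/E/S/W movement symbols.
prototypes = {"browsing": "NNEESSWW", "direct_to_item": "NNNNEE"}
print(classify("NNNEE", prototypes))  # prints direct_to_item
```

In the paper's full algorithm the Euclidean distance would additionally account for the spatial positions of the trajectory points; the sketch above covers only the symbolic comparison.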

Paper Nr: 465
Title:

Assembling and Managing Virtual Organizations out of Multi-party Contracts

Authors:

Evandro Bacarin, Edmundo Madeira and Claudia B. Medeiros

Abstract: Assembling virtual organizations is a complex process, which can be modeled and managed by means of a multi-party contract. Such a contract must encompass seeking consensus among parties in some issues, while simultaneously allowing for competition in others. Present solutions in contract negotiation are not satisfactory because they do not accommodate such a variety of needs and negotiation protocols. This paper shows our solution to this problem, discussing how our SPICA negotiation protocol can be used to build up virtual organizations. It assesses the effectiveness of our approach and discusses the protocol's implementation.

Paper Nr: 497
Title:

A Video-Based Biometric Authentication for E-Learning Web Applications

Authors:

Bruno E. Penteado and Aparecido Nilceu Marana

Abstract: In recent years there has been exponential growth in the offering of Web-enabled distance courses and in the number of enrolments in corporate and higher education using this modality. However, the lack of efficient mechanisms to assure user authentication in this sort of environment, at system login as well as throughout the session, has been pointed out as a serious deficiency. Some studies have been conducted on possible biometric applications for web authentication; however, password-based authentication still prevails. With the popularization of biometric-enabled devices and the resulting fall in prices for the collection of biometric traits, biometrics is being reconsidered as a secure form of remote authentication for web applications. In this work, the accuracy of face recognition on images captured online by a webcam over the Internet is investigated, simulating the natural interaction of a person in the context of a distance course. Partial results show that this technique can be successfully applied to confirm the presence of users throughout their attendance of a distance course. An efficient client/server architecture is also proposed.

Paper Nr: 521
Title:

Modeling JADE agents from GAIA Methodology under the perspective of Semantic Web

Authors:

Ig I. Bittencourt, Pedro Santos, Evandro Costa, João P. Pontes, Douglas Veras, Diego Dermeval and Henrique Pacca

Abstract: Building multi-agent software systems is pointed out as a highly complex task because various aspects must be considered, such as roles, interaction protocols, agents, organization rules, services, and so on. Moreover, researchers have raised different issues in building such applications, such as high development costs, scalability, content sharing and others. Therefore, several agent-oriented software engineering (AOSE) methodologies and multi-agent system (MAS) frameworks have been proposed to facilitate the hard task of modeling and building highly complex systems. However, in attempting to model complex systems, those methodologies end up being hard to use and hard-pressed to ensure consistency between the parts (roles, protocols, services, resources, agents). On the other hand, ontologies have been considered useful for representing the knowledge of software engineering techniques and methodologies, providing an unambiguous terminology that can be shared and reused and that ensures consistency between the concepts involved. This paper proposes i) ontologies for specifying agents through the use of the GAIA methodology and the JADE framework, and ii) SWRL rules to map instances from the GAIA ontology to the JADE ontology. Finally, a case study and a discussion are presented to demonstrate the use of the ontologies and rules.

Paper Nr: 583
Title:

A BUSINESS SERVICE SELECTION MODEL FOR AUTOMATED WEB SERVICE DISCOVERY REQUIREMENTS

Authors:

Tosca Lahiri and Mark Woodman

Abstract: Automated web service (WS) discovery, i.e. discovery without human intervention, is a goal of service-oriented computing. So far, it remains an elusive goal. The weaknesses of UDDI and other partial solutions have been extensively discussed, but little has been articulated concerning the totality of requirements for automated web service discovery. Our work has led to the conclusion that automated web service discovery will not be achieved through technical thinking alone. We argue that the business motivation for web services must be given prominence, and so we have looked to processes in business for the identification, assessment and selection of business services in order to assess comprehensively the requirements for web service discovery and selection. The paper uses a generic business service selection model as a guide to analyzing a comprehensive set of requirements for facilities to support automated web service discovery. The paper presents an overview of recent work on aspects of WS discovery, proposes a business service selection model, considers a range of technical issues against the business model, articulates a full set of requirements, and concludes with comments on a system to support them.

Short Papers
Paper Nr: 53
Title:

A process for implementing online and physical business based on a strategy integration aspect

Authors:

Ing-Long Wu, Chu-Ying Fu and Chin-Wei Lee

Abstract: The growth of e-business has experienced tremendous change in recent years. The initial prosperity of e-business has now been replaced by a recent trend of bankruptcies and acquisitions. Business managers are beginning to consider new business models in terms of the integration of virtual and physical operations to support more and different services to customers. The key to success in e-business lies in how the integration between online and offline business is carried out. However, past research has only discussed the relevant issues of clicks-and-bricks strategy and channel management in general terms; a complete and solid process to effectively guide the implementation of the integration has been lacking. Therefore, this study proposes a three-step process based on a strategic perspective: (1) strategy integration, (2) channel coordination, and (3) synergy realization. The results indicate that the right strategy integration significantly
Download

Paper Nr: 61
Title:

Using Grids to support Information Filtering Systems: A case study of running Collaborative Filtering recommendations on gLite

Authors:

Leandro Ciuffo and Elisa Ingrà

Abstract: Today’s business is becoming increasingly computation-intensive. Grid computing is a powerful paradigm for running ever-larger workloads and services. Commercial users have been attracted by this technology, which can potentially be exploited by industries and SMEs to offer new services with reduced costs and higher performance. This work presents a “gridified” implementation of a Recommender System based on the classic Collaborative Filtering algorithm. It also introduces the core services of the gLite middleware and discusses the potential benefits of using Grids to support the development of such systems.
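Independently of the gLite "gridification", the classic user-based Collaborative Filtering algorithm the abstract refers to can be sketched as follows. The toy ratings and the choice of cosine similarity over co-rated items are illustrative assumptions:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity over the items two users have both rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    nu = sqrt(sum(u[i] ** 2 for i in common))
    nv = sqrt(sum(v[i] ** 2 for i in common))
    return dot / (nu * nv)

def predict(ratings, user, item):
    """Predict a rating as the similarity-weighted average of the ratings
    given to the item by other users."""
    num = den = 0.0
    for other, r in ratings.items():
        if other != user and item in r:
            s = cosine(ratings[user], r)
            num += s * r[item]
            den += abs(s)
    return num / den if den else None

# Invented toy data: user -> {item: rating}.
ratings = {"alice": {"m1": 5, "m2": 3},
           "bob":   {"m1": 4, "m2": 2, "m3": 4},
           "carol": {"m1": 1, "m3": 2}}
print(predict(ratings, "alice", "m3"))
```

On a grid, predictions like this for many users would be batched into independent jobs, which is what makes the algorithm a natural fit for the job-submission services gLite provides.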
Download

Paper Nr: 103
Title:

A P2P implementation for the High Availability of Web Services

Authors:

Zakaria Maamar

Abstract: This paper introduces an approach that aims at sustaining the high availability of Web services using a similarity-based active replication strategy. Web service availability is defined as the proportion of time that a Web service remains functioning at a satisfactory level. Current high-availability approaches promote the use of techniques based on "exact" replicas of the original Web service that needs to be backed up when failures occur. Three replication strategies, known as active, passive, and hybrid, are studied in this paper; they help identify how many replicas are needed, how replicas interact with the original Web service, etc. Our approach takes replication one step further by focusing on Web services that offer the same functionality as the original Web service (i.e., the one to back up). This functional similarity is built upon the development of communities that gather similarly-functional Web services. Similarity-based replication offers solutions to some limitations of the existing replication strategies, but at the same time raises other issues that need to be tackled in the particular context of communities. To prove the suitability of the active replication strategy for Web service high availability, a P2P testbed on top of the JXTA platform was developed.
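The availability definition above, and the idea of falling back on a similarly-functional community member, can be sketched as follows. The threshold, the uptime figures and the community structure are illustrative assumptions, not from the paper:

```python
def availability(up_time, total_time):
    """Availability as the proportion of time the Web service
    remains functioning at a satisfactory level."""
    return up_time / total_time if total_time else 0.0

def pick_backup(original, community, threshold=0.99):
    """If the original service's availability drops below the threshold,
    fall back to the most available similarly-functional community member.
    (original and community values are (up_time, total_time) pairs;
    the 0.99 threshold is an illustrative assumption.)"""
    if availability(*original) >= threshold:
        return None  # the original service is healthy enough
    return max(community, key=lambda name: availability(*community[name]))

# Hypothetical figures: hours up out of hours observed.
original = (700.0, 720.0)                      # ~97.2% available
community = {"ws_b": (715.0, 720.0),           # ~99.3%
             "ws_c": (710.0, 720.0)}           # ~98.6%
print(pick_backup(original, community))        # prints ws_b
```

The point of the similarity-based strategy is that `ws_b` need not be an exact replica: it only has to offer the same functionality as the service it backs up.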
Download

Paper Nr: 119
Title:

UBIQUITOUS SOFTWARE DEVELOPMENT DRIVEN BY AGENTS’ INTENTIONALITY

Authors:

Milene Serrano, Carlos J. Pereira de Lucena and Maurício Serrano

Abstract: Ubiquitous computing is a novel computational paradigm in which users’ mobility, device heterogeneity and the need for service omnipresence are intrinsic and intense. In this context, ubiquitous software development poses particular challenges that are not yet dealt with by the traditional approaches found in the Software Engineering community. In order to improve ubiquitous software development, this paper describes a detailed technological set based on multi-agent systems (MAS), goal-orientation, the BDI (Belief-Desire-Intention) model and various frameworks and conceptual models.
Download

Paper Nr: 125
Title:

A SCHEME OF STRATEGIES FOR REAL-TIME WEB COLLABORATION BASED ON AJAX/COMET TECHNIQUES FOR LIVE RIA

Authors:

Walter Balzano, Maria Rosaria Del Sorbo and Luca Di Liberto

Abstract: Recent advances in web applications, though considerable, do not yet allow them to replace desktop applications: data are often redundant, user interactions with web interfaces notably differ from those with desktop interfaces, and data propagation is not completely instantaneous. This work presents some strategies for real-time client/server and server-mediated client/client communication to manage the multiuser collaboration problem in innovative ways: the user can exploit two recently diffused, standards-based techniques, AJAX [1] and Comet [2], without installing plug-ins. Web 2.0 [3] was this project’s leading philosophy, under which a mash-up of two features was realized: a chat module and a WYSIWYG (What You See Is What You Get) text editor. The results show the possibility of cooperation among an unspecified number of users interacting simultaneously through simple graphical interfaces, with a considerable acceleration of the work. Many applications exploit client/server interactions, immediately sending updated information to connected users, as in online auctions [4], multiuser cooperation [5] and e-learning [6] information systems. Further details about the code and simple demos are available at http://people.na.infn.it/~wbalzano/AJAX.
Download

Paper Nr: 131
Title:

On The Helpfulness of Product Reviews - An Analysis of Customer-to-Customer Trust on eShop-Platforms

Authors:

Georg Peters and Vasily Andrianov

Abstract: In the last decade, the market share of online stores in the retail sector has risen constantly and has partly replaced traditional face-to-face shops in cities and shopping malls. One reason is that the cost structure of online shops is lower than that of classic shops, since the latter have to finance physical stores and sales personnel. On the one hand, this often leads to a strategic cost advantage and results in lower selling prices. On the other hand, online stores normally do not provide personal consulting services as traditional face-to-face shops do. However, online shops have established different forms of product consulting to compensate for the missing personal advice of the sales persons in a physical shop; examples are product-related hotlines and online chatrooms. An even cheaper possibility is to establish a recommendation system where previous buyers are invited to write reviews of a product. Some eShops even provide a kind of cascading system: a product review written by a customer can be classified as helpful or not by other customers. In our research we focus on this second cascade. The objective of our paper is to analyze whether there are structures or rules that make product reviews written by customers helpful for other customers.
Download

Paper Nr: 151
Title:

OFLOSSC, an ontology for supporting open source development communities

Authors:

Isabelle Mirbel

Abstract: Open source development is a particular case of distributed software development with a volatile project structure and no clearly-defined organization, where activity coordination is mostly based on the use of Web technologies. The dynamic and free nature of this kind of project raises new challenges regarding knowledge sharing. In this context, we propose a semantic Web approach to enhance coordination and knowledge sharing inside this kind of community. The purpose of this paper is to present OFLOSSC, the ontology we propose as the backbone of our approach. It is dedicated to the annotation of community members and resources to support knowledge management services. While building OFLOSSC, our aim was twofold. On the one hand, we wanted to reuse the ontologies on open source provided in the literature. On the other hand, we adopted a community-of-practice point of view to acquire the pertinent concepts for annotating resources of the open source development community. This standpoint emphasizes the sharing dimensions in knowledge management services.
Download

Paper Nr: 177
Title:

INFLUENCING FACTORS FOR THE ADOPTION OF M-COMMERCE APPLICATIONS: A MULTIPLE CASE STUDY

Authors:

Saira Zeeshan, Helana Scheepers and Yen Cheung

Abstract: In the last few years, mobile commerce (m-commerce) has evolved, providing its users with sets of applications that allow greater communication and flexibility. As these m-commerce applications become popular, organizations are adopting them to provide these services to their customers. This paper explores the influencing factors involved in the adoption of m-commerce applications by organizations. The research question addressed in this paper is: what factors affect an organization’s decision to adopt m-commerce? In order to answer it, a research model adapted from the framework presented by Wang and Cheung (2004) is proposed. The research model examines the influencing factors at three levels: organizational, environmental and managerial. A multiple case study approach is employed as the research method to validate the model. Findings from this research enhance research in m-commerce as well as assist businesses to better plan their adoption of m-commerce applications.
Download

Paper Nr: 191
Title:

BLOG CLASSIFICATION USING K-MEANS

Authors:

Lee K. Jun, Lee Myung Jin and Kim W. Ju

Abstract: With the exponential growth of blogs, a lot of important data has appeared on them. However, since the main topics mentioned in blog pages are quite different from those of general web pages, there are problems that cannot be solved by general search engines. Therefore, many researchers have studied search methods specifically for blogs, to help users find useful information on them. We present a blog classification method using K-means. First, we analyze blogs and blog search engines to identify the problems of, and solutions for, current blog search. Second, applying the K-means algorithm to blog titles, we discuss a way to make titles usable for K-means. Finally, with a prototype system implementing our algorithm, we evaluate the algorithm’s effectiveness and present conclusions and future work. We expect this algorithm could add power to current search engines.
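The K-means step on titles can be sketched as below. This is a toy illustration under our own assumptions (whitespace tokenization, bag-of-words term-frequency vectors, k=2), not the paper's preprocessing:

```python
import random

def vectorize(titles):
    """Bag-of-words term-frequency vectors over the title vocabulary."""
    vocab = sorted({w for t in titles for w in t.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    vecs = []
    for t in titles:
        v = [0.0] * len(vocab)
        for w in t.lower().split():
            v[index[w]] += 1.0
        vecs.append(v)
    return vecs

def kmeans(vecs, k, iters=20, seed=0):
    """Plain Lloyd's algorithm: assign each vector to its nearest centroid,
    then recompute centroids as cluster means."""
    rnd = random.Random(seed)
    centroids = [list(v) for v in rnd.sample(vecs, k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vecs:
            d = [sum((a - b) ** 2 for a, b in zip(v, c)) for c in centroids]
            clusters[d.index(min(d))].append(v)
        for i, cl in enumerate(clusters):
            if cl:  # keep the old centroid if a cluster empties
                centroids[i] = [sum(col) / len(cl) for col in zip(*cl)]
    # final assignment: cluster index per input vector
    return [min(range(k), key=lambda i: sum((a - b) ** 2
                for a, b in zip(v, centroids[i]))) for v in vecs]

# Hypothetical titles: two clear topical groups.
titles = ["python blog tips", "python blog tricks",
          "cooking pasta recipe", "cooking soup recipe"]
labels = kmeans(vectorize(titles), k=2)
print(labels)  # the two "python" titles and the two "cooking" titles group together
```

A production system would of course use a richer representation (stop-word removal, tf-idf, stemming), but the clustering loop itself stays the same.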
Download

Paper Nr: 203
Title:

FLESHING OUT CLUES ON GROUP PROGRAMMING LEARNING

Authors:

Thais Castro, Hugo Fuks, Alberto Castro and Leonardo Santos

Abstract: This work examines the findings of a case study carried out in the first semester of 2008, which used a programming-learning progression scheme, from individual to group programming. This approach implies the generation of conversation logs among students as they take part in group programming. Supporting strategies are the evidence fleshed out from those logs. These strategies will guide the teacher’s inferences in subsequent group programming practical sessions.
Download

Paper Nr: 205
Title:

MOBILE DEVICE LOCATION INFORMATION ACQUISITION FRAMEWORK FOR DEVELOPMENT OF LOCATION INFORMATION WEB APPLICATIONS

Authors:

Andrej Dolmac and Stephan Haslinger

Abstract: Services based on mobile device location information are among the key drivers in the telecommunication market today. Current development of location-based service solutions focuses on particular services; there is nearly no effort to build a framework that, instead of targeting one particular location-based service, provides developers with a set of tools enabling easier and faster development of new services based on mobile device location information. Within the EUREKA project MyMobileWeb, a framework was implemented for the acquisition of location information from mobile devices. The framework architecture enables obtaining location information from various mobile devices and is not bound to any special device type or capability. Furthermore, the architecture can be used to obtain not only location information, but also any other information from the mobile device, such as the battery level.
Download

Paper Nr: 305
Title:

GAIA4E: A TOOL SUPPORTING THE DESIGN OF MAS USING GAIA

Authors:

Luca Cernuzzi and Franco Zambonelli

Abstract: Different efforts have been devoted to improving the original version of the Gaia methodology. The most relevant is the official extension of Gaia, which exploits organizational abstractions to provide clear guidelines for the analysis and design of complex and open multiagent systems. However, nowadays a successful design methodology should include other strategic factors, such as the support of a specific CASE tool, to simplify the work of the designer. Such a tool supporting the Gaia design process may facilitate the adoption of the methodology in the industrial arena. The present study introduces Gaia4E, a plug-in for the Eclipse environment that covers all the phases of Gaia, allowing agent engineers to produce and document the corresponding models.
Download

Paper Nr: 326
Title:

IDENTIFYING HOMOGENOUS CUSTOMER SEGMENTS FOR LOW RISK EMAIL MARKETING EXPERIMENTS

Authors:

George Sammour, Benoît Depaire, Koen Vanhoof and Geert Wets

Abstract: Research in email marketing is divided into two broad areas: spam and improving response rates. In this paper we propose a methodology that allows companies to experiment with their email campaigns to increase the campaigns’ response rate. This methodology is particularly suited for companies that are reluctant to experiment with their customer data, fearing a drop in response rate due to unsuccessful changes to the email campaign. The goals of this research were achieved in two steps. Firstly, homogenous groups of customers are identified, largely eliminating any hindering heterogeneity. Secondly, customers who are not clicking and/or have a low click rate within their homogenous groups are identified.
Download

Paper Nr: 383
Title:

Email-based Interoperability Service Utilities for Cooperative Small and Medium Enterprises

Authors:

Hong-Linh Truong, Kalaboukas Konstantinos, Thomas Burkhart, Martin Carpenter, Michal Laclavik, Christian Melchiorre, Martin Seleng, Ana Pinuela, Panagiotis Gkouvas, Dario Luiz Lopez, Enrico Morten, Cesar Marin, Dirk Werth and Christoph Dorn

Abstract: As most SMEs use email for conducting business, email-based interoperability solutions for SMEs can have a profound effect on their business. This paper presents a utility-like system to support specialized SMEs in improving their business via email by providing system, semantic and process interoperability solutions for individual SMEs and networks of cooperative SMEs. We describe the concept of the Email-based Interoperability Service Utility (EISU) and a software framework that provides almost-zero-cost interoperability solutions.
Download

Paper Nr: 403
Title:

ONTOLOGY-BASED EMAIL CATEGORIZATION AND TASK INFERENCE USING A LEXICON-ENHANCED ONTOLOGY

Authors:

Roger Tagg and Prashant Gandhi

Abstract: Today’s knowledge workers are increasingly faced with the problem of information overload as they use current IT systems for performing daily tasks and activities. This paper focuses on one source of overload, namely electronic mail. Email has evolved from being a basic communication tool to a resource used – and misused – for a wide variety of purposes. One possible approach is to wean the user away from the traditional, often cluttered, email inbox, toward an environment where sorted and prioritized lists of tasks are presented. This entails categorizing email messages around personal work topics, whilst also identifying implied tasks in messages that users need to act upon. A prototype email agent, based on the use of a personal ontology and a lexicon, has been developed to test these concepts in practice. During the work, an opportunistic user survey was undertaken to try to better understand the current task management practices of knowledge workers and to aid in the identification of potential future improvements to our prototype.

Paper Nr: 455
Title:

AN ADAPTIVE MIDDLEWARE FOR MOBILE INFORMATION SYSTEMS

Authors:

Malte Huelder and Volker Gruhn

Abstract: The advances in mobile telecommunication networks as well as in mobile device technology have stimulated the development of a wide range of mobile applications. While it is sensible to install at least some components of applications on mobile devices to gain independence of rather unreliable mobile network connections, it is difficult to decide about the suitable application components and the amount of data to be provided. Because the environment of a mobile device can change and mobile business processes evolve over time, the mobile system should adapt to these changes dynamically to ensure productivity. In this paper, we present a mobile middleware that targets typical problems of mobile applications and dynamically adapts to context changes at runtime by utilizing reconfiguration triggers.

Paper Nr: 473
Title:

Monitoring Service Compositions in MoDe4SLA: Design of Validation

Authors:

Lianne Bodenstaff, Andreas Wombacher, Michael C. Jaeger, Manfred Reichert and Roel Wieringa

Abstract: In previous research we introduced the MoDe4SLA approach for monitoring service compositions. MoDe4SLA identifies complex dependencies between Service Level Agreements (SLAs) in a service composition. By explicating these dependencies, causes of SLA violations of a service might be explained by malfunctioning of the services it depends on. MoDe4SLA assists managers in identifying such causes. In this paper we discuss how to evaluate our approach concerning usefulness for the user as well as effectiveness for the business. Usefulness is evaluated by experts who are asked to manage simulated runs of service compositions using MoDe4SLA. Their opinion on the approach is an indicator for its usefulness. Effectiveness is evaluated by comparing runtime results of SLA management using MoDe4SLA with runtime results of unsupported management. Criteria for effectiveness are cost reduction and increase in customer satisfaction.

Paper Nr: 500
Title:

Personalized Medical Workflow through Semantic Business Process Management

Authors:

Jiangbo Dang, Amir Hedayati, Ken Hampel and Candemir Toklu

Abstract: Business Process Management (BPM) systems are becoming the runtime governance of emerging Service Oriented Architecture (SOA) applications. They provide tools and methodologies to design and compose Web services that can be executed as business processes and monitored through BPM consoles. Ontology, as a formal declarative knowledge representation model, provides semantics upon which machine-understandable knowledge can be obtained, and as a result it makes machine intelligence possible. By combining ontology and BPM, Semantic Business Process Management (SBPM) provides a novel approach to aligning business processes from both the business and IT perspectives. Current healthcare systems can adopt SBPM to become adaptive and intelligent, and thus serve patients better. Our ontology makes our vision of personalized healthcare possible by capturing all the knowledge necessary for a complex personalized healthcare scenario, including patient care, insurance policies, drug prescriptions, and compliance. This paper presents a hospital workflow management system that allows users, from physicians to administrative assistants, to create context-aware medical workflows and execute them on the fly using an ontological knowledge base.

Paper Nr: 522
Title:

IMPLEMENTATION ISSUES OF THE INFONORMA MULTI-AGENT RECOMMENDER SYSTEM

Authors:

Lucas Drumond, Rosario Girardi, D’Jefferson Maranhão and Geraldo Sarmento

Abstract: Recommender systems can help professionals of the legal area to deal with the growth and dynamism of legal information sources. Infonorma is a multi-agent recommender system that recommends legal normative instruments to users according to their particular interests using both content-based and collaborative information filtering techniques. It has been modeled under the guidelines of the MAAEM methodology. This paper discusses the main implementation issues of the Infonorma system.

Paper Nr: 538
Title:

Semantic Indexing of Web Pages via Probabilistic Method

Authors:

Paolo Napoletano, Fabio Clarizia, Massimo De Santo and Francesco Colace

Abstract: In this paper we address the problem of modeling large collections of data, namely web pages, by jointly exploiting traditional information retrieval techniques and probabilistic ones in order to find semantic descriptions for the collections. This novel technique is embedded in a real Web search engine in order to provide semantic functionalities, such as the prediction of words related to a single-term query. Experiments on different small domains (web repositories) are presented and discussed.

Paper Nr: 612
Title:

A CASE STUDY OF AUTOMATED INVENTORY MANAGEMENT

Authors:

Abrar Haider

Abstract: Maintaining a knowledge base of the location and condition of IT assets in a large organisation is a problem. Knowledge of the exact number of these assets is important for a number of reasons, which include controlling or eliminating the procurement of multiple assets for the same job or task; cost savings on maintenance contracts in accordance with the exact number of assets to be maintained; reduction in man-hours spent locating these assets; and checking theft. This paper presents a case study of a large Australian utility that is grappling with these problems. In addition to these issues, the company is also looking for improved security of fixed, removable and mobile IT assets used by staff, and integration of IT asset movement information with the staff access card and associated systems currently in use. This paper therefore presents a set of options available to the company to track the movement of their assets, and to use the same technical architecture to integrate asset information with information about the staff moving the asset.

Paper Nr: 69
Title:

KNOWLEDGE MANAGEMENT AND ECO-DESIGN SCOPES

Authors:

Rinaldo Michelini and Roberto Razzoli

Abstract: The eco-protection acts imply reorganising the manufacturing business towards product-service supply chains. The innovation can be tackled at two levels: the presetting of the knowledge management surroundings, to deal with extended producer responsibility; and the incorporation of the entrepreneurial facility/function assembly, to accomplish the product-service delivery. The paper surveys the knowledge management frame, specifying the standard PLM aids with account of the PLM-SE and PLM-RL requirements, and giving special attention to the alternative net-concern options, from virtual to extended enterprise infrastructures. For explanatory purposes, the study discusses example extended enterprise deployments, and related knowledge management frames, for SE applications ruled by SME contexts; and examines example virtual enterprise settings, with related information networking requirements, for RL applications, according to the EU rules enacted for the ELV recovery (reuse, recycle) domain. The developments relate to impending changes needed in the current manufacturing business, today perhaps too much neglected by most industrial companies due to incumbent economic vicissitudes. Competition, however, is a ceaseless spur, and the axiom "innovate or perish" suggests considering the eco-protection acts not as charges but as an opportunity to reorganise the manufacturing business, with the suited incorporation of intangible added value.

Paper Nr: 113
Title:

EVALUATING THE ROLE OF INDIVIDUAL PERCEPTION IN IT OUTSOURCING DIFFUSION: AN AGENT BASED MODEL

Authors:

Marco Remondino, Marco Pironti and Roberto Schiesari

Abstract: The decision to adopt innovations has been investigated using both international patterns and behavioral theories. In this work, an agent-based (AB) model is created to study the spreading of innovation in enterprises (namely, the adoption of Information Technology outsourcing). The paradigm of AB simulation makes it possible to capture human factors, along with technical ones. This makes it possible to study the influence of perception, and the resulting bias. This work is focused on small and medium enterprises (SME), in which a restricted managing pool (sometimes just one person) decides whether to adopt a new technology or not, and bases the decision mainly on perception.

Paper Nr: 354
Title:

WEB-BASED COLLABORATIVE ENGINEERING FOR INJECTION MOLD DEVELOPMENT

Authors:

Dongyoon Lee, Honzong Choi, Kihyeong Song, Seokwoo Lee and Kwangyeol Ryu

Abstract: Injection molding is one of the most important manufacturing processes enabling present-day mass production. Recent evolutions of the technologies related to injection molding have made mold development more difficult. As a result, a higher level of engineering technology should be considered at the development stage. This paper presents web-based engineering collaboration among mold makers, experts, and product makers. Pre-examination and post-verification in the moldmaking process were investigated carefully and implemented in the web environment. Hundreds of engineering collaborations were conducted via the developed systems. Survey results show that these collaborations help small and medium-sized moldmaking enterprises reduce cost and delivery time, while increasing the quality of molds.

Paper Nr: 417
Title:

REAL-TIME RFID-ENABLED HEALTHCARE-ASSOCIATED MONITORING SYSTEM

Authors:

Belal Chowdhury, Xiaozhe C. Wang and Nasreen Sultana

Abstract: In the healthcare context, Radio Frequency Identification (RFID) technology has been employed to reduce health care costs and to facilitate the automatic streamlining of healthcare-associated infectious disease outbreak detection. RFID is playing an important role in monitoring processes in health facilities such as hospitals, nursing homes, special accommodation facilities and rehabilitation hospitals. In this paper, we present the design of a healthcare system using a real-time RFID-enabled application, called the "Healthcare-associated Infectious Outbreak Detection and Monitoring System (HIODMS)".

Paper Nr: 431
Title:

DYNAMIC-AGENTS TO SUPPORT ADAPTABILITY IN P2P WORKFLOW MANAGEMENT SYSTEMS

Authors:

Martin Stanton, Aibrahim Aldeeb and Keeley Crockett

Abstract: Peer-to-Peer (P2P) technology is being recognized as a new approach to decentralized workflow management systems that overcomes the limitations of current centralized client/server workflow management systems. However, the lack of support for adaptability and exception handling at the instance level remains a weakness of P2P workflow management systems. Dynamic agents can be used within a P2P workflow management system architecture to facilitate adaptability and exception handling. This paper presents a novel dynamic-agent P2P workflow management system which integrates three major technologies: software agents, P2P networking and workflow systems. The adoption of dynamic agents within a P2P network can help overcome the adaptability problem, reduce the need for human involvement in exception handling and improve the effectiveness of the P2P workflow management system.

Paper Nr: 487
Title:

PROFILING COMPUTER ENERGY CONSUMPTION IN ORGANIZATIONS

Authors:

Rui P. Lopes, Luís Pires, Tiago Pedrosa and Vasile Marian

Abstract: Modern organizations depend on computers to work. Text processing, CAD, CAM, simulation, statistical analysis and so on are fundamental for maintaining a high degree of productivity and competitiveness. Computers in an organization consume a considerable percentage of the overall energy and, although a typical computer provides power-saving technologies, such as suspending or hibernating components, this feature can be disabled. Moreover, the user can opt to never turn off the workstation. Well-defined power-saving policies, with appropriate automatic mechanisms to apply them, can provide significant power savings with a consequent reduction of the power expense. With several computers in an organization, it is necessary to build a profile of the energy consumption. We propose installing a software probe in each computer to instrument the power consumption, either directly, by using a power meter, or indirectly, by measuring the processor performance counters. This distributed architecture, with software probes in every computer and a centralized server for persistence and decision making, tries to save energy by defining and applying organization-level power-saving policies.

Paper Nr: 550
Title:

FLyA: A Simulator for Teaching Automata and Formal Languages

Authors:

J. R. Marcial-Romero, Pedro A. Contreras, Hector A. Montes Venegas and J. A. Hernandez

Abstract: Finite automata theory is taught in almost every computing program. Its importance comes from the broad range of applications in many areas. As with any mathematically based subject, automata theory is full of abstractions which constructively explain theoretical procedures. In computing engineering programs, teaching is mainly focused on procedures to solve a variety of engineering problems. However, following these procedures using a conventional approach can be a tedious task for the student. In this paper, a software tool to support the teaching of automata theory is presented. The use of an educational methodology to design the tool contributed remarkably to the acceptance of the software amongst students and teachers, as compared with existing tools for the same purpose.

Paper Nr: 634
Title:

Information Spaces As A Basis for Personalising The Semantic Web

Authors:

Ian Oliver

Abstract: The future of the Semantic Web lies not in the ubiquity, addressability and global sharing of information but rather in localised information spaces and their interactions. These information spaces will be made at a much more personal level and will not necessarily adhere to globally agreed semantics and structures, relying more upon ad hoc and evolving shared agreements.

Area 5 - Human-Computer Interaction

Full Papers
Paper Nr: 84
Title:

AN AGILE PROCESS MODEL FOR INCLUSIVE SOFTWARE DEVELOPMENT

Authors:

Rodrigo Bonacin, Cecilia Baranauskas and Marcos Antonio Rodrigues

Abstract: The Internet represents a new dimension for software development. It can be understood as an opportunity to develop systems that promote social inclusion and citizenship. These systems impose a singular way of developing software, where accessibility and usability are key requirements. This paper proposes a process model for agile software development which takes these requirements into account. The method brings together multidisciplinary practices from Participatory Design and Organizational Semiotics with concepts from agile models. The paper presents the instantiation of the process model during the development of a social network system which aims to promote social and digital inclusion. The results and the adjustments of the proposed development process model are also discussed.

Paper Nr: 106
Title:

Creation and Maintenance of Query Expansion Rules

Authors:

Aaron Kaplan, Stefania Castellani, Jutta Willamowski, Antonietta Grasso and Frédéric Roulland

Abstract: In an information retrieval system, a thesaurus can be used for query expansion, i.e. adding words to queries in order to improve recall. We propose a semi-automatic and interactive approach for the creation and maintenance of domain-specific thesauri for query expansion. Domain-specific thesauri are especially required in highly technical domains where the use of general thesauri for query expansion introduces more noise than useful results. Our semi-automatic approach to thesaurus creation constitutes a good compromise between fully manual approaches, which produce high-quality thesauri but at a prohibitively high cost, and fully automatic approaches, which are cheap but produce thesauri of limited quality. This article describes our approach and the architecture of the system implementing it, named Cannelle. It exploits user query logs and natural language processing to identify valuable synonymy candidates, and allows editors to interactively explore and validate these candidates in the context of a domain-specific searchable knowledge base. We evaluated the system in the domain of online troubleshooting, where the proposed method clearly yielded an improvement in the quality of the search results obtained.

Paper Nr: 122
Title:

STORIES AND SCENARIOS WORKING WITH CULTURE-ART AND DESIGN IN A CROSS-CULTURAL CONTEXT

Authors:

Elizabeth S. Furtado, Albert Schilling and Liadina Camargo

Abstract: This paper discusses the use of user experience prototyping and theatrical techniques in two experiments to attain the following objectives of interaction design: to explore new ideas and to communicate cross-cultural users' needs and their expectations for iDTV (interactive Digital TeleVision) services. These two objectives are particularly important when the systems involved are unknown to people. In the first experiment, we showed the implication of real stories for the construction of efficient interaction scenarios in a process of interaction design creation. In the second experiment, we showed the implication of stories told through theatre in order to achieve an objective communication of the purposes of iDTV services in a process of art and culture. The results are described by discussing the strengths and weaknesses of this approach.

Paper Nr: 220
Title:

End-User Development for individualized Information Management - Analysis of Problem Domains and Solution Approaches

Authors:

Michael Spahn and Volker Wulf

Abstract: Delivering the right information at the right time to the right persons is one of the most important requirements of today's business world. Nevertheless, enterprise systems do not always provide the information in a way suitable for the individual information needs and working practices of business users. Due to the complexity of enterprise systems, business users are not able to adapt these systems to their needs by themselves. The adoption of End-User Development (EUD) approaches, supporting end-users to create individual software artefacts for information access and retrieval, could enable better utilization of existing information and better support of the long tail of end-users' needs. In this paper, we assess possibilities for improving information management through EUD, by analyzing relevant problem domains and solution approaches considering fundamental aspects of technology acceptance theories. The analysis is based on a questionnaire survey, conducted in three midsized German companies. We investigate the domains of information access and the flexible post-processing of enterprise data. Therefore we assess the importance of the respective domain for the work of end-users, perceived pain points, the willingness to engage in related EUD activities and the perceived usefulness of concrete EUD approaches we developed to address the respective domains.

Paper Nr: 254
Title:

EVALUATING THE ACCESSIBILITY OF WEBSITES TO DEFINE INDICATORS IN SERVICE LEVEL AGREEMENTS

Authors:

Sinesio Lima, Fernanda Lima and Kathia Oliveira

Abstract: Despite the evolution of the Internet in the past years, people with disabilities still encounter obstacles to accessibility that impede adequate understanding of website content. Considering that Web accessibility is an added value to the website, it is important to have in place monitoring mechanisms and website accessibility controls. Service level agreements (SLA) can be used for this purpose, as they establish, by means of a service catalog, measurable indicators that certify the fulfillment of preset goals. This paper proposes a way to evaluate the accessibility of websites through a practical approach utilizing software measures, with the proposal of collecting data to define indicators for a service catalog of an SLA of website accessibility. Initial application of the approach was realized on Brazilian federal government websites with the participation of ten users with visual disabilities. The study shows the viability of defining indicators.

Paper Nr: 270
Title:

Promoting Collaboration Through a Culturally Contextualized Narrative Game

Authors:

Marcos Alexandre Silva and Junia Anacleto

Abstract: This paper describes research on the development of a web narrative game to be used at school by teachers. The game draws on students' culture, expressed in common sense knowledge, for storytelling, allowing the teacher to create stories according to the students' cultural reality and consequently enabling students to identify with the stories and become interested in collaborating with the teacher and other students to develop them as co-authors. The game can thus help students learn to express themselves, letting their imagination flow and allowing them to adequately understand and elaborate situations experienced in school, family and community with no impact on their real life. Through stories, students can also learn how to work collaboratively, to help, and to be helped by their friends and teacher. This context is also applicable in companies, considering teamwork and the role each member has to play in collaborative work.

Paper Nr: 275
Title:

Applying the Discourse Theory to the moderator’s interferences in Web debates

Authors:

Cristiano Maciel, Ana Cristina Bicharra Garcia, Vinicius Pereira and Licinio Roque

Abstract: This paper presents a methodology for supporting the moderation phase in DCC (Democratic Citizenship Community), a virtual community for supporting e-democratic processes in e-life systems and applications. Based on the Government-Citizen Interactive Model, the DCC encompasses an innovative debate structure, as well as moderator participation based on Discourse Theory, especially concerning argumentative mistakes. Regarding the moderator's role, efforts have been made to improve the formalization of arguments and opinions while maintaining the usability of the platform. This research focuses on the moderator's participation via a case study, and the experiment is analyzed in a Web debate.

Paper Nr: 333
Title:

EXPERTKANSEIWEB – A TOOL TO DESIGN KANSEI WEBSITE

Authors:

Anitawati M. Lokman, Nor M. Noor and Mitsuo Nagamachi

Abstract: In this paper we describe our research work on the development of a design tool for developing Kansei websites. The design tool to facilitate Kansei web design is named ExpertKanseiWeb and was developed based on results obtained from applying the Kansei Engineering method to extract website visitors' Kansei responses. From the Partial Least Squares (PLS) analysis performed, a guideline composed of the website design elements and the implied Kansei was established. This guideline becomes the basis for the system structure of the design tool. The ExpertKanseiWeb system consists of a Client Interface (CI), a system controller and a Kansei Web Database System (KWDS). Clients can benefit from the tool as it offers easy interpretation of the guideline and presents examples of the design of Kansei websites.

Paper Nr: 388
Title:

Evaluation of Information Systems Supporting Asset Lifecycle Management

Authors:

Abrar Haider

Abstract: Performance evaluation is a subjective activity that cannot be detached from the human understanding, social context, and cultural environment within which it takes place. Apart from these, information systems evaluation faces certain conceptual and operational challenges that further complicate the process of performance evaluation. This paper deals with the issue of performance evaluation of information systems utilised for the engineering asset lifecycle. The paper highlights that these information systems not only have to enable asset management strategy, but are also required to inform that strategy for better lifecycle management of the critical asset equipment utilised in production or service environments. Evaluation of these systems thus calls for ascertaining both hard and soft benefits to the organisation and their contribution to organizational development. This, however, requires that the evaluation exercise identifies alternatives and choices, and in doing so becomes a strategic advisory mechanism that supports information systems planning, development, and management processes. This paper proposes a comprehensive methodology for evaluation of information systems utilised in managing engineering assets. The methodology is learning-centric and provides feedback that facilitates actionable organizational learning, thus allowing the organisation to engage in continuous improvement based on generative learning.

Paper Nr: 408
Title:

FAST UNSUPERVISED CLASSIFICATION FOR HANDWRITTEN STROKE ANALYSIS

Authors:

Won-Du Chang

Abstract: This paper considers the unsupervised classification of handwritten character strokes with regard to speed, since handwritten strokes, with their high and variable dimensions, prove challenging for classification problems. Our approach employs a robust feature detection method for brief classification. The dimension is reduced by selecting feature points among all the points within strokes, and thus the need to compare stroke signals between two different dimensions is eliminated. Although some misclassification problems remain, we safely classify strokes according to handwriting styles through a refinement procedure. This paper illustrates that the equalization problem, the severe difference in small parts between two strokes, can be ignored by summing all of the differences via our method.

Paper Nr: 495
Title:

INTERFACES FOR ALL – A TAILORING-BASED APPROACH

Authors:

Vania Neris and Cecilia Baranauskas

Abstract: Following the precepts of Universal Design, we must develop systems that allow access to software applications without discrimination and making sense for the largest possible audience. One way to develop Interfaces for All is to offer to users the possibility of tailoring the interface according to their preferences, needs and situations of use. Tailorable solutions already present in some interactive systems do not consider the diversity of users, as they do not include illiterates and non-expert users, for example. The development of systems to be used for all requires a socio-technical vision for the problem. In this paper we present and discuss the first results of a work based on the reference of Organizational Semiotics and Participatory Design to elicit users’ and system’s requirements, and design a software solution with the direct participation of those involved, under the design for all principles.

Paper Nr: 498
Title:

INTEGRATING GOOGLE EARTH WITHIN OLAP TOOLS FOR MULTIDIMENSIONAL EXPLORATION AND ANALYSIS OF SPATIAL DATA

Authors:

Sergio Di Martino, Sandro Bimonte, Filomena Ferrucci and Michela Bertolotto

Abstract: Spatial OnLine Analytical Processing solutions are a type of Business Information Tool meant to support a Decision Maker in extracting hidden knowledge from data warehouses containing spatial data. To date, very few SOLAP tools are available, each presenting some drawbacks that reduce their flexibility. To overcome these limitations, we have developed a web-based SOLAP tool, obtained by suitably integrating into an ad-hoc architecture the Geobrowser Google Earth with a freely available OLAP engine, namely Mondrian. As a consequence, a Decision Maker can perform exploration and analysis of spatial data through both the Geobrowser and a Pivot Table in a seamless fashion. In this paper, we illustrate the main features of the system we have developed, together with the underlying architecture, using a simulated case study.

Paper Nr: 564
Title:

An Automated Meeting Assistant - A Tangible Mixed Reality interface for the AMIDA Automatic Content Linking Device

Authors:

Jochen Ehnes

Abstract: We describe our approach to supporting ongoing meetings with an automated meeting assistant. The system, based on the AMIDA Content Linking Device, aims at providing documents used in previous meetings that are relevant to the ongoing meeting, based on automatic speech recognition. Once the content linking device finds documents linked to a discussion of a similar subject in a previous meeting, it assumes they may be relevant to the current discussion as well. We believe that the way these documents are offered to the meeting participants is as important as the way they are found. We developed a mixed reality, projection-based user interface that lets the documents appear on the table tops in front of the meeting participants. Participants can hand documents over to others or bring them onto the shared projection screen easily if they consider them relevant, yet irrelevant documents do not draw too much attention away from the discussion. In this paper we describe the concept and implementation of this user interface and provide some preliminary results.

Paper Nr: 579
Title:

Investigation of Error in 2D Vibrotactile Position Cues with respect to Visual and Haptic Display Properties: A Radial Expansion Model for Improved Cuing

Authors:

Nicholas Lipari, Christoph Borst and Vijay Baiyya

Abstract: We present a human factors experiment aimed at investigating certain systematic errors in locating position cues on a rectangular array of vibrating motors. Such a task is representative of haptic signals providing supplementary information in a collaborative or guided exploration of some dataset. In this context, both the visual size and presence of correct answer reinforcement may be subject to change. Consequently, we considered the effects of these variables on position identification. We also investigated five types of stimulus points based on the stimulus' position relative to adjacent motors. As visual size increases, it initially demonstrates the dominant effect on error magnitude, then correct answer feedback plays a role in larger sizes. Radial error, roughly the radial difference in the stimulus and response position, modeled the systematic error. We applied a quadratic fit and estimated a calibration procedure within a 2-fold cross validation.

Paper Nr: 611
Title:

DEVELOPING A MODEL TO MEASURE USER SATISFACTION AND SUCCESS OF VIRTUAL MEETING TOOLS IN AN ORGANIZATION

Authors:

A.K.M.Najmul Islam

Abstract: Information systems evaluation is an important issue for managers in an organization, but it is very difficult to carry out. Much work has been done in this particular area, and many methods have been developed over the years to evaluate information systems. The easiest and most widely used evaluation method is to measure user satisfaction with a system, but there is no single model that can be used to evaluate all kinds of information systems. In this paper, we propose a model to measure user satisfaction with virtual meeting tools used in an organization. We verify the model by conducting two surveys and applying different statistical analyses to the collected survey data. The proposed model measures user satisfaction and success based on six factors, namely content, accuracy, ease of use, timeliness, system reliability and system speed.

Short Papers
Paper Nr: 10
Title:

END-USER DEVELOPMENT IN A GRAPHICAL USER INTERFACE SETTING

Authors:

Martin Auer, Stefan Biffl and Johannes Poelz

Abstract: In many areas, software applications must be highly configurable—using a pre-defined set of options or preferences is not flexible enough. One way to improve an application’s flexibility is to allow users to change parts of the source code—and thus the application’s behavior—on-the-fly; modern languages like Java greatly facilitate this by providing reflection features. Such an approach, however, is often limited to user-defined mathematical formulas, e.g., in software like cash flow engines, reporting tools etc. This paper applies the concept to a more generic area: the graphical representation of diagrams in a UML tool. Users can create new types of graphical elements by directly programming how the elements are drawn, all within the UML tool, and at run time. The approach is flexible, and the user-defined extensions are consistent with the tool’s core source code.

Paper Nr: 107
Title:

EVALUATION OF ANTHROPOMORPHIC USER INTERFACE FEEDBACK IN AN EMAIL CLIENT CONTEXT AND AFFORDANCES

Authors:

Pietro Murano, Amir Malik and Patrik O'Brian Holt

Abstract: This paper describes an experiment and its results concerning research that has been going on for a number of years in the area of anthropomorphic user interface feedback. The main aims of the research have been to examine the effectiveness of, and user satisfaction with, anthropomorphic feedback. The results are of use to all user interface designers. Currently, work in the area of anthropomorphic feedback offers no global conclusions concerning its effectiveness and user satisfaction capabilities. This research investigates ways of reaching such global conclusions about this type of feedback. The experiment concerned the context of downloading, installing and configuring an email client, which is part of the domain of software for systems usage. Anthropomorphic feedback was compared against equivalent non-anthropomorphic feedback. The results indicated that the anthropomorphic feedback was more effective and preferred by users. A further aim was to examine the types of feedback in relation to affordances, and the results obtained can be explained in terms of the Theory of Affordances.

Paper Nr: 109
Title:

STUDY FOR ESTABLISHING DESIGN GUIDELINES FOR MANUALS USING AUGMENTED REALITY TECHNOLOGY

Authors:

Miwa Nakanishi

Abstract: Augmented reality (AR), a technology that enables users to see an overlay of digital information on the real view, is expected to be applied more and more to human factors innovation. It has been suggested that a manual using AR (AR manual) improves accuracy and efficiency in actual work situations. To make AR manuals practical, hardware such as see-through displays and retinal scanning displays has been actively developed. However, the software, i.e., the information provided by the AR manual, has not been sufficiently examined. In a recent study, the authors built a mathematical model that describes the “effective complexity” of an AR manual according to the complexity of the real view. In this study, the basic model is verified by applying it to an AR manual for a realistic task. Furthermore, the applicability of the basic model is examined by assuming two different situations in which either accuracy or efficiency has high priority. The objective of this study is to establish rough but practical guidelines for designing AR manuals.

Paper Nr: 163
Title:

APPLYING COLORS BASED ON CULTURE KNOWLEDGE TO MOTIVATE COLLABORATION ON THE WEB

Authors:

Ana L. Dias, Junia Anacleto, Rosangela Ap. D. Penteado and Luciana M. Silveira

Abstract: Collaborative and participatory work via the Web tends to increase due to professional teams' need to accomplish tasks separated by distance and time, which demands more effort and stronger commitment from each person. In this context, cultural differences must be considered, as they interfere with the performance of each individual and either promote or hinder the communication intended for the group. This paper presents a multidisciplinary analysis of colors and stimuli in computing environments using Common Sense knowledge, considering the cultural associations people make between colors and actions, emotions and objects, and showing how these associations can motivate users to access and participate in collaborative tasks through stimuli using colors whose symbolism is culturally built.

Paper Nr: 184
Title:

Enabling Context-adaptive Collaboration for Knowledge-intense Processes

Authors:

Stephan Lukosch, Joerg M. Haake and Dirk Veiel

Abstract: Knowledge workers solve complex problems. Their work resists routinisation because of the unique results and constellations of actors involved. For distributed collaboration, knowledge workers need many different tools, which leads to knowledge being dispersed over different locations, artifacts, and systems. Context-based adaptation can be used to support teams with shared workspace environments that best meet their needs. We propose an ontology representing context in a shared workspace environment, and a conceptual architecture for context sensing, reasoning, and adaptation. We report on first experiences demonstrating the applicability of our approach and give an outlook on directions for future work.

Paper Nr: 301
Title:

DYNAMIC MULTIMEDIA ENVIRONMENT BASED ON REALTIME USER EMOTION ASSESSMENT - Biometric User Data towards Affective Immersive Environments

Authors:

Vasco Vinhas, Daniel C. Silva, Eugénio Oliveira and Luís Paulo Reis

Abstract: Both the academic and industry sectors have increased their attention and investment in the fields of Affective Computing and immersive digital environments, the latter establishing itself as a reliable domain with increasingly cheap hardware solutions. With this in mind, the authors envisioned an immersive, dynamic digital environment tied to automatic real-time user emotion assessment through biometric readings. The environment consisted of an aeronautical simulation whose internal variables, such as flight plan, weather conditions and maneuver smoothness, were dynamically altered by the assessed emotional state of the user, based on biometric readings including galvanic skin response, respiration rate and amplitude, and phalange temperature. The results were consistent with the emotional states reported by the users, with a success rate of 78%.

Paper Nr: 302
Title:

USERS' NEEDS FOR COLLABORATIVE MANAGEMENT IN EMERGENCY INFORMATION SYSTEMS

Authors:

Teresa Onorati, Alessio Malizia, Ignacio Aedo and Paloma Díaz

Abstract: The management of an emergency is cooperative work that involves people from different areas and with different roles. Collaborative tools are potentially useful for solving emergency situations, but their utility depends on emergency workers’ needs. In this paper, we describe an empirical study based on surveys and interviews conducted with users to study how to improve the collaborative functionalities of an existing system used for cooperating and sharing resources among different Spanish Emergency Management governmental agencies. The goal of the study was to understand how emergency workers cooperate in real emergencies and the kinds of tools they actually use, as well as to identify potential strategies and technologies to improve the level of computer-supported collaboration.

Paper Nr: 321
Title:

A Study on the Use of Gestures for Large Displays

Authors:

António Neto and Carlos Duarte

Abstract: Large displays are becoming available to larger and larger audiences. In this paper we discuss the interaction challenges faced when attempting to transfer the classic WIMP design paradigm from the desktop to large wall-sized displays. We explore the field of gestural interaction on large screen displays, conducting a study where users are asked to create gestures for common actions in various applications suited for large displays. Results show how direct manipulation through gestural interaction appeals to users for some types of actions, while demonstrating that for other types gestures should not be the preferred interaction modality.

Paper Nr: 329
Title:

EVALUATION OF COLLABORATIVE SOFTWARE FOR WEB COMMUNITIES: A CASE STUDY

Authors:

Juliano Duarte, Luis C. E. de Bona, Marcos Sunye, Laura S. García, Fabiano Silva, Marcos A. Castilho, Alexandre I. Direne and Dayane Fátima Machado

Abstract: Collaborative software, and more specifically social software, must provide its users with not only a good application interface, but also - and more importantly - with easy and direct contact with other users. Within the field of collaborative software, we chose Orkut® as our object of evaluation, particularly in terms of the following communication tools: communities, messages and scrapbook. The research consisted, initially, of the evaluation of the abovementioned tools and, secondly, of the assessment of our method itself and its ability to appraise the inherent features of this kind of software. In the present paper we will introduce and describe the method upon which we based our assessment. In addition to that, we will justify the choice of this particular method and discuss the results obtained.

Paper Nr: 352
Title:

Simulation of forest evolution: effects of environmental factors on tree growth

Authors:

Ying Tang and Jing Fan

Abstract: Owing to the complexity and variety of plant communities, it is a challenging task to simulate their structure and dynamics. In this paper we simulate and visualize the evolution of forests with a tree growth model influenced by environmental factors. The environmental factors we consider include illumination, terrain and resource competition among trees. We develop our tree growth model based on the forest gap model by effectively incorporating the above environmental factors. The system is implemented with Visual C++ 6.0 and OpenGL. We compare the growth of trees (their heights and DBHs) of different ages or located in different regions. We also show changes in tree distribution within a given landscape over a long period of time (more than two hundred years). The illuminating experimental results show that our simulation technique is effective.

Paper Nr: 370
Title:

INFORMATION SYSTEM CUSTOMIZATION - Toward Participatory Design and Development of the Interaction Process

Authors:

Daniela Fogli and Loredana Parasiliti Provenza

Abstract: This paper proposes the adoption of human-computer interaction methods to address some of the problems related to the customization of information systems, and particularly of enterprise resource planning systems. The paper specifically describes a multi-faceted approach to the participatory design and development of information systems to build the dialogue between the information system and its users. It encompasses i) a specification framework for representing and translating the different perspectives of the members of the design team, including the end users’ perspective, ii) a methodology for collaborative design of the interaction process, and iii) a set of guidelines to carry out the development activities.

Paper Nr: 380
Title:

DEFINING A WORKFLOW PROCESS FOR TEXTUAL AND GEOGRAPHIC INDEXING OF DOCUMENTS

Authors:

Diego Seco, Miguel R. Luaces, Ana Cerdeira-Pena and Nieves R. Brisaboa

Abstract: Many public organizations are working on the construction of spatial data infrastructures (SDI) that will enable them to share their geographic information. However, these SDIs, and Geographic Information Systems (GIS) in general, manage not only geographic data but also many textual documents that must be stored and retrieved (such as urban planning permissions and administrative files). Textual index structures must be integrated with GIS in order to provide efficient access to these documents. Furthermore, many of these documents include geographic references within their texts. Therefore, queries with geographic scopes should be correctly answered by the index structure, and the special characteristics of these geographic references, due to their spatial nature, should be taken into account. We present in this paper a workflow process that allows the gradual and collaborative creation of a document repository. These documents can be efficiently retrieved using queries regarding both their texts and the geographic references included within them. Moreover, the index structure and the supported query types are briefly described.

Paper Nr: 385
Title:

Measuring Coordination Gaps of Open Source Groups through Social Networks

Authors:

Liaquat Hossain

Abstract: In this paper, we argue that coordination gaps, such as communication issues and task dependencies, have a significant impact on work group performance. To address these issues, contemporary science suggests optimising the links between the social aspects of society and the technical aspects of machines. A framework is proposed to describe social network structure and coordination performance variables with regard to distributed coordination during bug fixing in the Open Source domain. Based on the model and the literature reviewed, we propose two propositions: (i) the level of interconnectedness has a negative relation with coordination performance; and (ii) centrality social network measures have a positive relation with coordination performance variables. We provide an empirical analysis using a large sample of 415 open source projects hosted on SourceForge.net. The results suggest that there is a relationship between interconnectedness and coordination performance, and centrality measures were found to have positive relationships with the performance variables of coordination measures.

Paper Nr: 413
Title:

Intelligent Authoring Tools for enhancing Mass Customization of e-Services: The smarTag Framework

Authors:

Marios Belk, Panagiotis Germanakos, Nikos Tsianos, George Samaras, Constantinos Mourlas and Zacharias Lekkas

Abstract: Mass customization should be more than just configuring a specific component (hardware or software); it should be seen as the co-design of an entire system, including services, experiences and human satisfaction at the individual as well as the community level. The main objective of this paper is to introduce a framework for the automatic reconstruction of Web content based on human factors. Human factors and users’ characteristics play the most important role during the entire design and implementation of the framework, which has the inherent ability to interact with its environment and the user and to transparently adapt its behaviour using intelligent techniques, reaching high levels of usability, user satisfaction, effectiveness and quality of service presentation. The initial evaluation results show that the proposed framework does not degrade efficiency (in terms of speed and accuracy) during the Web content adaptation process.

Paper Nr: 446
Title:

Electronic Records Management Systems: The Human Factor

Authors:

Johanna Gunnlaugsdottir

Abstract: The purpose of this paper is to present the findings of research conducted in Iceland during the period 2001-2005 and in 2008 on how employees view their use of Electronic Records Management Systems (ERMS). A qualitative methodology was used. Four organizations were studied in detail and four others provided a comparison. Open-ended interviews and participant observations were the basic elements of the study. The research uncovered the basic issues in the user-friendliness of ERMS, the substitutes that employees turned to if they did not welcome ERMS, and how they felt about their work being shared and observed by others. Employees seemed to regard ERMS as groupware for constructive group work and not as an obtrusive part of a surveillance society. The research indicated training as the most important factor in making employees confident in their use of ERMS. It also identifies the most important implementation factors and the issues that must be dealt with to make employees more content, confident and proficient users of ERMS.

Paper Nr: 485
Title:

THE IMPACT OF INTERFACE ASPECTS ON INTERACTIVE MAP COMMUNICATION: AN EVALUATION METHODOLOGY

Authors:

Lucia P. Maziero, Laura S. García and Cláudia Robbi

Abstract: In this paper, we present an analytical and methodological procedure to evaluate the interfaces of Interactive Maps. The main aims of such an evaluation are to (i) identify the essential aspects of these interfaces, (ii) investigate their influence on communication with users and, based on this, (iii) set directives to guide the design of interfaces of future Interactive Maps. The evaluation process leads to a detailed analysis of both the interface and the interaction itself. To do so, the process consists of the analysis of the essential elements of the interfaces, the evaluation of these aspects in relation to users and, finally, the study of the results obtained. The results mainly provide significant information on those aspects of the interfaces that concern the resources necessary for both the interaction itself and the functionalities that Interactive Maps provide.

Paper Nr: 490
Title:

Scenario-based Design - An Essential Instrument for an Innovative Target Application: Case Report

Authors:

Laura S. Garcia, Luis C. E. de Bona, Fabiano Silva, Marcos Sunye, Marcos A. Castilho and Alexandre I. Direne

Abstract: Scenario-based design is a widely accepted method within the literature on Human-Computer Interaction and, in certain cases, also within the literature on Software Engineering. However, the lack of integration between these two areas, in addition to the lack of attention paid to the actual (and still quite infrequent) use of scenario-based design, stresses the need for increased emphasis on the relevance of scenario-based design applied to projects of truly innovative technological artefacts. In the present paper, we present a case report in which the abovementioned method, when applied to problem analysis, led to the design of a distinctive user-interface environment compared to the human process prior to the introduction of the computational application.

Paper Nr: 506
Title:

Web Form Page in Mobile Devices: Optimization of Layout with a Simple Genetic Algorithm

Authors:

Luigi Troiano, Cosimo Birtolo, Roberto Armenise and Gennaro Cirillo

Abstract: Filling out a form on mobile devices is generally harder than on other terminals, due to the reduced keyboard and display size, entailing higher fatigue and limiting the user experience. A solution to this problem can be based on reducing the input effort required of the user through auto-completion, and on re-organizing the fields so as to present first those with higher prediction power. In this paper we assume that the user input can be predicted, and we optimize the field layout with the aim of reducing the average number of input actions.

Paper Nr: 559
Title:

INTEGRATING VR IN AN ENGINEERING COLLABORATIVE PROBLEM SOLVING ENVIRONMENT

Authors:

Ismael Santos, Alberto Raposo and Marcello Gattass

Abstract: We present an environment for executing engineering simulations and visualizing results in a Virtual Environment. The work is motivated by the necessity of finding effective solutions for collaboration of team workers during the execution of complex Petroleum Engineering projects. By means of a Scientific Workflow Management System users are able to orchestrate the execution of different simulations as workflow tasks that can be arranged in many ways according to project requirements. Within a workflow, as its last step, the most interesting cases can be selected for visualization in a distributed collaborative session.

Paper Nr: 595
Title:

BACK CHANNEL IN INTERACTIVE DIGITAL TELEVISION SYSTEMS: STRATEGIES FOR PROTOTYPING APPLICATIONS USING AN INTERACTIVE SERVICE PROVIDER

Authors:

Joao D. Santos Junior, Iran Calixto Abrao, Gabriel Massote Prado, João Carlos Morselli, Paulo Muniz de Ávila, Mateus D. Santos and Rinaldi Nascimento

Abstract: This work, developed at the Interactive Digital Television Lab of PUC Minas (Brazil), presents strategies for implementing a prototype platform for building an Interactive Service Provider (PSI), which can store, analyze and generate reports based on information derived from the interaction of viewers with Interactive Digital Television applications, characterized by the use of a back channel over the Internet Protocol (IP). In the Brazilian scenario (Brazilian Digital Television System), the development of a PSI platform builds on the experience accumulated with the JiTV (Java Interactive Television) platform, which covers production at the broadcaster, transmission over the communication network and reception on the viewer's access terminal, extended with the use of a back channel for interactive actions.

Paper Nr: 628
Title:

HOW CAN A QUANTUM IMPROVEMENT IN PERSONAL AND GROUP INFORMATION MANAGEMENT BE REALIZED?

Authors:

Roger Tagg and Tamara Beames

Abstract: A number of authors have pointed out that the IT used by individuals and groups to support general information work has not advanced very far in the last decade. At the same time, the level of information and cognitive overload on individuals has continued to rise. Research has been done in several areas (e.g. text mining and categorization, email threads, etc.), but a significant improvement in everyday tools has yet to be seen. This paper addresses the current problems, describes a range of issues that need to be addressed, and discusses how it might be possible in the future to advance to a new level of IT support.

Paper Nr: 639
Title:

Portuguese Web Accessibility

Authors:

José Martins and Ramiro Gonçalves

Abstract: The use of the Web is quickly spreading to the majority of society. In many countries the use of the Web in government services, education and training, commerce, news, citizenship, health and entertainment is significantly increasing. The Internet is extremely important for publishing information and for interaction between members of society. Because of this, it is essential that the Web be accessible to all, including those with any kind of disability. An accessible Web may help citizens with disabilities interact with society in a more active way. With this in mind, an evaluation of the accessibility levels of Portuguese websites is essential to assess the availability of these websites to all disabled citizens.

Paper Nr: 644
Title:

AN INNOVATIVE MODEL OF TRANS-NATIONAL LEARNING ENVIRONMENT FOR EUROPEAN SENIOR CIVIL SERVANTS: ORGANIZATIONAL ASPECTS AND GOVERNANCE

Authors:

Nunzio Casalino

Abstract: The purpose of the study is to investigate the benefits of introducing e-learning and a specific online environment into the training process of European civil servants. It describes the final results and the organisational impact of a first pilot training course combining 24 hours of e-learning and 27 hours (one week) of in-class courses. For each module, the e-learning preparation provided general training content to build the participants' background necessary for the in-class sessions. The project implemented a pilot to demonstrate the effectiveness of the overall system (applications, content and organizational aspects) and to promote the use of e-learning in the EU Public Administration field. After one year, the project concluded its pilot phase and the results were analyzed. With a view to stimulating co-operation and the exchange of best practices in Europe, its purpose is to build and test an innovative model of trans-national networking, thanks to the active involvement of European schools and institutes of Public Administration.

Paper Nr: 74
Title:

AFFECTIVE ALGORITHM TO POLARIZE CUSTOMER OPINIONS

Authors:

Domenico Consoli, Claudia Diamantini and Domenico Potena

Abstract: Humans interact with other people and exchange reviews and ideas via the Web. With the explosion of Web 2.0 platforms such as blogs, discussion forums, peer-to-peer networks, and various other types of social media, consumers share their opinions of any product/service on the Web. Opinions give information about how a product/service, and reality in general, is perceived by other people. Emotional needs are associated with the psychological aspects of product ownership. When writing a review of a product/service, the customer transmits in the message the emotions he/she feels before and after purchasing the product. For the enterprise, understanding customers' emotional needs is vital for predicting and influencing their purchasing behaviour. In this paper, we polarize customer opinions with an original algorithm based on emotional indexes that are used to decipher, in an affective key, facial expressions and the emotional lexicon.

Paper Nr: 87
Title:

User Acceptance of Self-Service Technologies: An Integration of the Technology Acceptance Model and the Theory of Planned Behavior

Authors:

Chiao-Chen Chang, Wei-Lun Chang and Yang-Chieh Chin

Abstract: This study examines what may affect consumers’ intention to use a self-service technology (SST). The objective of this study is to advance our understanding of the intention to use SSTs by comparing and integrating the theory of planned behaviour (TPB) and the technology acceptance model (TAM) as they relate to this issue. Data were collected from 280 adult consumers, and a structural equation modelling approach was employed to test the hypotheses. Although attitude, subjective norm and perceived usefulness have direct positive relationships with the behavioural intention to use an SST, perceived behavioural control plays the most important role in explaining the intention to use SSTs. We conclude with managerial implications and directions for future research.

Paper Nr: 199
Title:

KEEPING TRACK OF HOW USERS USE CLIENT DEVICES: An Asynchronous Client-Side Event Logger Model

Authors:

Vagner D. Figueredo Santana and Cecilia Baranauskas

Abstract: Web Usage Mining usually considers server logs as a data source for collecting patterns of usage data. This solution presents limitations when the goal is to represent how users interact with specific user interface elements, since this approach may not capture detailed information about users’ actions. This paper presents a model for logging client-side events and an implementation of it as a website evaluation tool. By using the model presented here, miner systems can capture detailed Web usage data, making possible a fine-grained examination of Web page usage. In addition, the model can help Human-Computer Interaction practitioners to log client-side events of mobile devices, set-top boxes and Web pages, among other artefacts.

Paper Nr: 218
Title:

Investigations into Enhanced Alert Management for Collision Avoidance in ship-borne Integrated Navigation Systems

Authors:

Michael Baldauf, Knud Benedict, Florian Motz and Sabine Höckel

Abstract: Highly sophisticated integrated navigation systems are installed on ships' navigational bridges to support the operators of modern container ships. These integrated systems should assist captains, navigation officers and pilots in avoiding dangerous situations when sailing from the port of departure to the port of destination. Numerous human-machine interfaces require interaction to control the voyage in every situation and under all possible circumstances. However, according to shipping statistics, collisions and groundings are major risks. This paper deals with investigations into the alert management on board modern ships and a potential approach to enhance alert handling by reducing the high number of alarms. Results gained during several field studies on board ships are presented. Based on these results, a draft concept for reducing the high frequency of collision warnings, to be implemented in on-board navigation systems, is discussed, and first preliminary results are introduced.

Paper Nr: 349
Title:

A receiver unit of photodetectors for a laser pointer as a wireless controller

Authors:

Jaemyoung Lee

Abstract: We propose a wireless receiver unit of photodetectors for a commercially available laser pointer. A controller in the receiver unit drives a multimedia player in accordance with the scanning direction of the laser pointer over photodetectors. A control algorithm is proposed for control of the multimedia player. We believe that the proposed receiver unit and the control algorithm for a laser pointer can be applied to other systems.

Paper Nr: 366
Title:

CROSSMODAL PERCEPTION OF MISMATCHED EMOTIONAL EXPRESSIONS BY EMBODIED AGENTS

Authors:

Ji-He Suk, Yu Suk Cho and Kwang Hee Han

Abstract: Today, embodied agents generate a large amount of interest because of their vital role in human-human and human-computer interactions in virtual worlds. A number of researchers have found that we can recognize and distinguish between emotions expressed by an embodied agent, and many studies have found that we respond to simulated emotions in a similar way as to human emotion. This study investigates the interpretation of mismatched emotions expressed by an embodied agent (e.g. a happy face with a sad voice). The study employed a 4 (visual: happy, sad, warm, cold) × 4 (audio: happy, sad, warm, cold) within-subjects repeated-measures design. The results suggest that people perceive emotions based not on just one channel but on both channels. Additionally, facial expression (happy face vs. sad face) makes a difference in the influence of the two channels: the audio channel has more influence on the interpretation of emotions when the facial expression is happy. People were able to feel emotions other than those expressed by the face or voice in the mismatched emotional expressions, so it may be possible to express varied and delicate emotions with an embodied agent using only a few kinds of emotions.

Paper Nr: 447
Title:

COMMON SENSE KNOWLEDGE BASE EXPANDED BY AN ONLINE EDUCATIONAL ENVIRONMENT

Authors:

Junia Anacleto, Alexandre Mello Ferreira, Eliane Nascimento Pereira, Izaura M. Carelli, Marcos Alexandre Silva and Ana L. Dias

Abstract: The use of computer games in education has been growing, as they are a potential tool to facilitate the teaching-learning process. In the “What is it?” environment presented in this article, the teacher can be a co-author of a card-based guessing game in which common sense knowledge support allows the teacher to be aware of students’ culture and needs. The environment also proposes a way to collect common sense statements: engines in the editor’s module and the player’s module store all user interaction and combine this information to make new relations in the Brazilian Open Mind Common Sense project (OMCS-Br) knowledge base. A case study was conducted with teachers and students from two different public schools, whose results point out the potential of this new way to collect common sense statements naturally through a web game.

Paper Nr: 456
Title:

AN ANALYSIS OF THE DIFFUSION OF INFORMATION TECHNOLOGY IN EDUCATION

Authors:

Laura Asandului, Ciprian Ceobanu and Alina Ionescu

Abstract: The accelerated development of information and communication technologies has led educational institutions and companies to implement alternatives to traditional teaching methods. The new literacy determines e-learning competencies. The paper presents an analysis of expenditure on information technologies, the use of computers and the Internet, computer and Internet skills, and e-learning in the EU countries. The results show that there are disparities among EU member states regarding the extent and prospects of e-learning development.

Paper Nr: 626
Title:

What Drives People to Play Wii Game? The Trend of Human-Computer Interaction on Video Game Design

Authors:

Chih-Hung Wu, Chin-Chia Hsu, Chin-Chia Hsieh and Cheng-Chieh Huang

Abstract: In Taiwan, even though official sales of the Wii have not yet begun, parallel imports have already taken the market by storm. This study therefore investigates what drives people to play Wii games, which are designed on the basis of a new trend in human-computer interaction (HCI). This study adopts an integrated model that combines the intrinsic motives of leisure with the technology acceptance model (TAM). We believe the intrinsic motivation of leisure influences people's attitudes toward using the Wii, as well as their intentions to use it. This study uses the Wii as an empirical test to explore how people come to accept the new trend of HCI, leisure technology, through the technology acceptance model. This study draws on many sources of data, including written questionnaires and outdoor surveys, in order to obtain the perceptions and usage attitudes of Wii users; Structural Equation Modeling (SEM) is used to analyze the data. Finally, we propose a new leisure technology acceptance model to verify our idea of the new trend of HCI.

Paper Nr: 637
Title:

Text Driven Lips Animation Synchronizing with Text-to-Speech for Automatic Reading System with Emotion

Authors:

Futoshi Sugimoto

Abstract: We developed a lips animation system that is simple and suited to the characteristics of Japanese. It adopts the phoneme context that is usually used in speech synthesis, and needs only a small database containing tracing data of lip movement for each phoneme context. In an experiment in which subjects read a word from the lips animation, we obtained results as accurate as from real lips.