ICEIS 2003 Abstracts

Abstracts of Accepted Papers


Co-organized by:

École Supérieure d'Électronique de l'Ouest

and
Departamento de Sistemas e Informática da EST-Setúbal/IPS, Escola Superior de Tecnologia de Setúbal, Instituto Politécnico de Setúbal

ICEIS 2003 Sites
www.est.ips.pt/iceis/

www.iceis.org

DBLP bibliography

Area 1 - DATABASES AND INFORMATION SYSTEMS INTEGRATION
Area 2 - ARTIFICIAL INTELLIGENCE AND DECISION SUPPORT SYSTEMS
Area 3 - INFORMATION SYSTEMS ANALYSIS AND SPECIFICATION
Area 4 - SOFTWARE AGENTS AND INTERNET COMPUTING

Area 1 - DATABASES AND INFORMATION SYSTEMS INTEGRATION

Title:

O2PDGS: AN APPROACH FOR UNDERSTANDING OBJECT ORIENTED PROGRAMS

Author(s):

Hamed Al-Fawareh

Abstract: In this paper, we provide a description of dependence graphs for representing meaningful dependencies between components of object-oriented programs. A formal description of the dependence relations of interest is given, followed by a representative illustration of object-oriented program dependence graphs (O2PDGs). The paper also discusses an approach for understanding object-oriented programs through the use of O2PDGs.
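
To make the idea concrete, a dependence graph over program components can be held as labelled edges and traversed to find everything a change may affect. The Python sketch below is purely illustrative: the node names, edge labels and traversal are our assumptions, not the paper's formalism.

    # Minimal sketch of an object-oriented program dependence graph.
    # Node names and edge labels are invented for illustration.
    from collections import defaultdict

    class DependenceGraph:
        def __init__(self):
            self.edges = defaultdict(set)   # node -> {(label, target), ...}

        def add(self, src, label, dst):
            self.edges[src].add((label, dst))

        def reachable(self, start):
            """Components a maintainer must inspect when 'start' changes."""
            seen, stack = set(), [start]
            while stack:
                node = stack.pop()
                if node not in seen:
                    seen.add(node)
                    stack.extend(dst for _, dst in self.edges[node])
            return seen

    g = DependenceGraph()
    g.add("Account.withdraw", "calls", "Account.check_balance")
    g.add("Account.withdraw", "data", "Account.balance")
    g.add("SavingsAccount.withdraw", "inherits", "Account.withdraw")
    print(sorted(g.reachable("SavingsAccount.withdraw")))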

Title:

ERP SYSTEMS IMPLEMENTATION DETERMINANTS AND SUCCESS MEASURES IN CHINA: A CASE STUDY APPROACH

Author(s):

Christy Cheung, Zhe Zhang, Matthew Lee, Liang Zhang

Abstract: With growing global competition and the integration of the world economy, manufacturing firms have to reduce inventory levels and operating costs and improve customer service to gain advantage over their competitors. Manufacturing companies are forced to adopt new methods to achieve these objectives, and the enterprise resource planning (ERP) system is one of the most widely accepted choices. AMR predicts the total ERP market will reach $66.6 billion by 2003, growing an estimated 32% annually over the next five years. Significant benefits such as improved customer service, better production scheduling, and reduced manufacturing costs can accrue from the successful implementation of ERP (Ang et al., 1995). However, the rate of successful implementation is extremely low, especially in China, and many firms did not achieve their intended goals. It is therefore necessary for ERP practitioners and researchers to investigate why the implementation success rate of ERP systems in China is so low. Prior studies mainly focus on critical success factors or on a single ERP implementation success measure, without theoretical support. This study combines Ives, Hamilton, and Davis's (1980) MIS research model with DeLone and McLean's (1992) IS success model to develop an ERP implementation success model, identifying both generic and unique factors that affect ERP systems implementation success in China and using multiple success measures to assess whether an ERP implementation is a success or a failure. A multiple-case-study research method allows more detailed information about ERP implementations to be collected; it also alleviates the problems of construct validity and reliability that frequently occur in a single case study. The results of this research can help ERP-related researchers, practitioners, and companies gain a better understanding of ERP systems implementation issues; given enough attention to these issues, the chance of ERP implementation success can be increased.

Title:

DATA WAREHOUSING: A REPOSITORY MODEL FOR METADATA STORAGE AND RETRIEVAL BASED ON THE HUMAN INFORMATION PROCESSING

Author(s):

Enrique Luna-Ramírez, Félix García-Merayo, Covadonga Fernández-Baizán

Abstract: The information on the creation, management and use of a data warehouse is stored in what is called the metadata repository, making this repository the single most important component of the data warehouse. Accordingly, the metadata repository plays a fundamental role in the construction and maintenance of the data warehouse, as well as in accessing the data it stores. In this paper, we propose a repository model conceived to store and retrieve the metadata of a corporate data warehouse. To achieve this objective, the model, composed of an approach for modelling the repository structure and a metamodel for retrieving metadata, is based on the human information processing paradigm. The model thus considers a series of distinctive functionalities that can be built into a repository system to ensure that it works efficiently. These functionalities refer to the use of two memories for storing the repository metadata and a set of structures and processes for retrieving the information passing from one memory to another. One of the memories in particular is used to store the most recurrent metadata in a corporate environment, which can be rapidly retrieved with the help of the above-mentioned structures and processes. These structures and processes also serve to contextualise the information of a data warehouse according to the projects or business areas to which it refers.

Title:

HOSPITAL CASE RECORDS INFORMATION SYSTEM: CASE STUDY OF A KNOWLEDGE-BASED PRODUCT

Author(s):

A. Neelameghan, M. Vasudevan

Abstract: Briefly discusses knowledge management and the use of knowledge-based products in enterprises. Enumerates the information resources of a hospital and describes the design and development of a patients' case records system, specifically for a hospital specializing in surgical cases of tumors of the central nervous system. Each case record has data on over 150 attributes of the patient, a facility for hypertext-linking relevant images (CT scan, X-ray, NMR, etc.), and access to electronic documents from other websites. The collaborative roles of the hospital doctors and a consultant information specialist in the development of the system are indicated. The output of a case record with links to related CT scan pictures and a web document is presented as an example. Concludes by mentioning the various uses of the system.

Title:

MODELS FOR IMPLEMENTATION OF ONLINE REAL TIME IT-ENABLED SERVICE FOR ENTRY TO PROFESSIONAL EDUCATION

Author(s):

Natesan T.R., V. Rhymend Uthariaraj, George Washington D.

Abstract: Any agency selecting candidates for admission to professional education has to administer a common entrance examination, evaluate the responses and offer seats according to the candidates' merit. This task has two parts, viz. the conduct of the examination and the admission process. In this paper a process-oriented data model for the conduct of the examination and the admission process has been developed and implemented, based on statistical and mathematical models. The schedule for online real-time registration for the examination at the various centres is based on a statistical model, and the centres for the conduct of counselling are selected using a mathematical programming model. The system has been implemented as an online real-time distributed database over a secured Virtual Private Network (VPN).

Title:

STORAGE OF COMPLEX BUSINESS RULES IN OBJECT DATABASES

Author(s):

Dalen Kambur, Mark Roantree

Abstract: True integration of large systems requires sharing the information stored in databases beyond sharing pure data: the business rules associated with this data must be shared as well. This research focuses on providing a mechanism for defining, storing and sharing business rules across different information systems, an area in which existing technologies are weak. In this paper, we present the pre-integration stage, in which individual business rules are stored in the database for subsequent exchange between applications and information systems.

Title:

A GRAPHICAL LANGUAGE FOR DEFINING VIEWS IN OBJECT ORIENTED DATABASES

Author(s):

Elias Choueiri, Marguerite Sayah

Abstract: Within the framework of an Object-Oriented Database Graphical Query Environment for casual end users, this paper proposes a View Definition Mechanism conceived for users who are experts in their application domain but not necessarily computer specialists. The mechanism concentrates on the strength of the graphical view definition language and on the user-friendliness of the interface. The view definition language offers operations for adapting to the work context and restructuring operations on both attributes and classes that take into consideration the nesting and inheritance structure of the database classes. The user-friendliness of the interface rests on the graphical visualization of the portion of the database schema that represents the domain of interest for a user group, and on the use of the graphical language for view definition. To eliminate crossings between the different links of the visualized composition hierarchy, a method for graphical visualization is introduced.

Title:

A TRANSPARENT CLIENT-SIDE CACHING APPROACH FOR APPLICATION SERVER SYSTEMS

Author(s):

Daniel Pfeifer, Zhenyu Wu

Abstract: In recent years, application server technology has become very popular for building complex but mission-critical systems. However, the resulting solutions tend to suffer from serious performance and scalability bottlenecks because of their distributed nature and their various software layers. This paper addresses the problem by presenting a new approach to transparently caching the results of a service interface's read-only methods on the client side. Cache consistency is provided by a descriptive cache invalidation model which may be specified by an application programmer. As the cache layer is transparent to the server as well as to the client code, it can be integrated with relatively low effort even into systems that have already been implemented. Early experimental results show that the approach is effective in improving a server's response times and its transactional throughput. Roughly speaking, the overhead for cache maintenance is small when compared to the cost of method invocations on the server side. The cache's performance improvements are dominated by the fraction of read method invocations and the cache hit rate. Moreover, the cache can be smoothly integrated with traditional caching strategies acting on other system layers (e.g. caching of dynamic Web pages on a Web server). The presented approach as well as the related prototype are not restricted to application server scenarios but may be applied to any kind of interface-based software layer.
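
The gist of such a client-side layer can be sketched in a few lines of Python. This is an analogy for the described mechanism rather than the authors' implementation (the paper targets application-server settings such as J2EE); the service, its methods and the invalidation table below are all invented.

    # Sketch of a transparent client-side cache for a service's read-only
    # methods, with a declarative write -> read invalidation table.
    import functools

    class CachingProxy:
        def __init__(self, service, read_methods, invalidates):
            self._service = service
            self._read = set(read_methods)
            self._invalidates = invalidates   # write method -> reads to evict
            self._cache = {}

        def __getattr__(self, name):
            target = getattr(self._service, name)

            @functools.wraps(target)
            def call(*args):
                key = (name, args)
                if name in self._read:
                    if key not in self._cache:        # miss: one server round trip
                        self._cache[key] = target(*args)
                    return self._cache[key]
                result = target(*args)                # write: forward, then evict
                for victim in self._invalidates.get(name, ()):
                    self._cache = {k: v for k, v in self._cache.items()
                                   if k[0] != victim}
                return result
            return call

    class StubService:                                # stands in for the server
        def __init__(self): self.rows = {1: "alice"}
        def get_user(self, uid): return self.rows[uid]
        def rename(self, uid, name): self.rows[uid] = name

    svc = CachingProxy(StubService(), {"get_user"}, {"rename": ["get_user"]})
    print(svc.get_user(1), svc.get_user(1))   # second call served from cache
    svc.rename(1, "bob")                      # invalidation rule evicts get_user
    print(svc.get_user(1))                    # fresh read -> bob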

Title:

EFFICIENT STORAGE FOR XML DATABASES

Author(s):

Weiyi Ho, Dave Elliman, Li Bai

Abstract: The widespread activity involving the Internet and the Web causes huge amounts of electronic data to be generated every day. This includes, in particular, semi-structured textual data such as electronic documents, computer programs, log files, transaction records, literature citations, and emails. Storing and manipulating the data thus produced has proven difficult. As conventional DBMSs are not suitable for handling semi-structured data, there is a strong demand for systems that are capable of handling large volumes of complex data in an efficient and reliable way. The Extensible Markup Language (XML) provides such a solution. In this paper, we present the concept of a ‘vertical view model’ and its uses both as a mapping mechanism for converting complex XML data to relational database tables and as a standalone data model for storing complex XML data.
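
One plausible reading of such a vertical mapping, given here as a sketch rather than the paper's actual model, stores every element, attribute and text value of an XML document as a row of one flat table (node id, parent id, path, value):

    # A plausible "vertical" shredding of XML into rows of one flat table:
    # (node_id, parent_id, path, value). The column layout is an assumption.
    import xml.etree.ElementTree as ET

    def shred(xml_text):
        rows, ids = [], iter(range(1, 1_000_000))

        def walk(elem, parent_id, path):
            node_id = next(ids)
            rows.append((node_id, parent_id, path, (elem.text or "").strip()))
            for name, value in elem.attrib.items():
                rows.append((next(ids), node_id, f"{path}/@{name}", value))
            for child in elem:
                walk(child, node_id, f"{path}/{child.tag}")

        root = ET.fromstring(xml_text)
        walk(root, None, f"/{root.tag}")
        return rows

    for row in shred("<order id='7'><item sku='a1'>2</item></order>"):
        print(row)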

Title:

DATA MANAGEMENT: THE CHALLENGE OF THE FUTURE

Author(s):

Alan Hodgett

Abstract: There has been an explosion in the generation of data in organizations. Much of this data is both unstructured and decentralized, which raises a number of issues for data management. This paper reports on an investigation undertaken in Australia to study the way organizations are dealing with the growth and proliferation of data and planning for the future. The results show a high level of awareness of the issues but indicate a prevalent optimism that technology will continue to provide solutions to the present and future problems facing organizations. It appears that much magnetically recorded data will inevitably be lost over the next few decades unless positive action is taken now to preserve it.

Title:

TOWARDS A TIMED-PETRI NET BASED APPROACH FOR THE SYNCHRONIZATION OF A MULTIMEDIA SCENARIO

Author(s):

Abdelghani GHOMARI

Abstract: This article proposes a new approach to the synchronization of a multimedia scenario based on a new class of p-temporal Petri nets called p-RdPT+. One essential phase in the synchronization of a multimedia scenario is the characterization of its logical and temporal structure. This structure is expressed through a set of composition rules and synchronization constraints that depend on user interactions. An inconsistent situation is detected when some of the constraints specified by the author cannot be met during the presentation. Hence, our approach permits verification of the specification by temporal simulation of the automatically generated Petri net or by analysing the reachability graph derived from the generated p-RdPT+ model.

Title:

PLANNING FOR ENTERPRISE COMPUTING SERVICES: ISSUES AND NECESSITIES ANALYZED

Author(s):

Jason Tseng, Emarson Victoria

Abstract: While planning, simulation and modeling tools exist for fields like network management and capacity/workload planning, little is known about automated planning tools for computing services. Considering the complexities and difficulties in deploying and managing computing infrastructure and services, we need to examine their planning processes in order to augment existing enterprise management and planning solutions. In this paper, we present the motivation for, and advantages of, a planning tool that automates the planning of computing services. This requires us to consider the issues and problems in deploying and managing computing services and their infrastructure, and it allows us to understand why and how such a planning tool can be used to alleviate, if not eliminate, some of these problems. The planning tool works by abstracting the properties of actual computing components using an information model/framework and formulating rules to analyze and automate the planning activity using only the abstracted component representations. This paves the way for plans that closely reflect the actual computing environment, allowing users to leverage the flexibility and virtualization of the planning environment.

Title:

EXTENDING GROUPWARE FOR OLAP

Author(s):

Sunitha Kambhampati, Daniel Ford, Vikas Krishna, Stefan Edlund

Abstract: While applications built on top of groupware systems are capable of managing mundane tasks such as scheduling and email, they are not optimised for certain kinds of applications, for instance generating aggregated summaries of scheduled activities. Groupware systems are primarily designed with online transaction processing in mind, and are highly focused on maximizing throughput when clients concurrently access and manipulate information on a shared store. In this paper, we give an overview and discuss some of the implementation details of a system that transforms groupware Calendaring & Scheduling (C&S) data into a relational OLAP database optimised for these kinds of analytical applications. We also describe the structure of the XML documents that carry incremental update information between the source groupware system and the relational database, and show how the generic structure of the documents enables us to extend the infrastructure to other groupware systems as well.

Title:

REPCOM: A CUSTOMISABLE REPORT GENERATOR COMPONENT SYSTEM USING XML-DRIVEN, COMPONENT-BASED DEVELOPMENT APPROACH

Author(s):

Sai Peck Lee, Chee Hoong Leong

Abstract: It is undeniable that report generation is one of the most important tasks in many companies, regardless of company size. A good report generation mechanism can increase a company's productivity in terms of effort and time. This is most obvious in startup companies, which normally use in-house report generators. Application development can be complex, and software developers may expend substantial effort maintaining application code. In addition, most report generators use their own format to store the report model, and an application is hardly considered an enterprise-level product today unless XML is used somewhere within it. This paper introduces an XML-driven, component-based development approach to report generation with the purpose of promoting portability, flexibility and genericity. In this approach, the report layout is specified using user-defined XML elements together with queries that retrieve data from different databases. A report is output as an HTML document, which can be viewed using an Internet browser. The paper presents the approach using an example and discusses the usage of the XML-driven report schema and how the proposed reusable report engine of a customisable report generator component system works to output an HTML report. The customisable report generator component system is implemented to support heterogeneous database models.
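
For flavour, a toy engine in this spirit might walk a user-defined XML layout and emit an HTML table. The element names below (<report>, <column>) are invented for illustration and are not REPCOM's actual schema:

    # Toy XML-driven report layout rendered to HTML. The layout vocabulary
    # here is invented, not REPCOM's report schema.
    import xml.etree.ElementTree as ET

    LAYOUT = """<report title="Sales by Region">
      <column field="region"/><column field="total"/>
    </report>"""

    def render(layout_xml, rows):
        spec = ET.fromstring(layout_xml)
        fields = [c.get("field") for c in spec.findall("column")]
        html = [f"<h1>{spec.get('title')}</h1>", "<table>",
                "<tr>" + "".join(f"<th>{f}</th>" for f in fields) + "</tr>"]
        for row in rows:   # in a real engine, rows come from configured queries
            html.append("<tr>" + "".join(f"<td>{row[f]}</td>" for f in fields)
                        + "</tr>")
        html.append("</table>")
        return "\n".join(html)

    print(render(LAYOUT, [{"region": "North", "total": 1200}]))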

Title:

E-LEARNING INFORMATION MANAGEMENT ISSUES IN XML-BASED MEDIATION

Author(s):

Boris Rousseau, Eric Leray, Micheal O'Foghlu

Abstract: Advances in XML-based mediation have made a significant impact on the area of E-Learning. Search engines have been provided with new ways to improve resource discovery and new tools to customise the resulting content. In the early days of XML, this work was undertaken within the context of the European funded project GESTALT (Getting Educational System Talk Across Leading Edge Technologies). Building on this experience, new improvements came from the European funded project GUARDIANS (Gateway for User Access to Remote Distributed Information And Network Services). However, due to the lack of support for native XML databases and XML query languages, search facilities were limited. This paper builds upon the achievements of both projects and proposes a solution for XML querying in XQuery.

Title:

THE KINDS OF IT SERVICES MOST APPROPRIATE FOR A PARTICULAR SOURCING STRATEGY

Author(s):

Patrick Wall, Larry Stapleton

Abstract: IT processes and services often differ with regard to which sourcing strategy suits them best. The significance of IT within a given organization, and the ability of that organization to provide an efficient and innovative information system on its own, often determine what sourcing strategy it chooses. A better strategy, however, is to identify the IT processes that can be maintained internally and to outsource those that the firm believes would be better maintained by an external vendor. This paper identifies the most commonly insourced, outsourced and selectively sourced IT activities and then asks why this is the case.

Title:

ERP IMPLEMENTATION, CROSS-FUNCTIONALITY AND CRITICAL CHANGE FACTORS

Author(s):

Rolande Marciniak, Redouane El Amrani, Frantz Rowe, Marc Bidan, Bénédicte Geffroy-Maronnat

Abstract: ERP (Enterprise Resource Planning) systems are characterised by particular features such as functional coverage, interdependent relationships, a single database, and standard management and processing rules, all of which are capable of bringing about various degrees of change within the company and, potentially, of encouraging a more cross-functional overview of it. However, few quantitative studies have been conducted to measure these effects. This is the background to this paper, which studied 100 French companies to arrive at an assessment of ERP adoption. It then goes on to test the relationships between the factors influencing the ERP lifecycle (preparation: organizational vision and process re-engineering; engineering: specific developments; implementation strategy: functional coverage and speed), the perception of a more cross-functional overview of the company and, more globally, the scope of the change this technology brings about within the company. All these factors play significant roles, with functional coverage appearing to be a particularly important consideration that should be addressed in future research.

Title:

LAB INFORMATION MANAGEMENT SYSTEM FOR QUALITY CONTROL IN WINERIES

Author(s):

Manuel Urbano Cuadrado, Maria Dolores Luque de Castro, Pedro Perez Juan

Abstract: The great number of analyses that must be carried out during wine production, together with the storage, treatment and careful study and discussion of the data these analyses provide, is of paramount importance for making the correct decisions to improve the quality of both the winery and the wine it produces. We describe a system devoted to the overall management of the information generated in the wine production process. The system, based on object-oriented technology, supports quality control of wine production in wineries and enables the integration of semi-automated and automated analytical processes.

Title:

INFORMATION SYSTEMS IN MEDICAL IMAGERY: CASE OF THE HOSPITAL OF BAB EL OUED

Author(s):

Abdelkrim MEZIANE

Abstract: Digital medical images, acquired by the various existing modalities and processed by powerful computers, have become a very powerful means of diagnosis and of cost savings. In Algeria, the patient is responsible for the images which are delivered to him. These images are, most of the time, lost, left unidentified (name, date, …), or simply damaged for many reasons. Doctors and radiologists are sometimes, if not most of the time, obliged to ask the same patient to undergo the same radiography several times. The Algerian stock of medical imaging equipment is not well known or exhaustively assessed. The Algerian government devotes an important part of its budget to medical care, and a part of this budget goes to complementary medical tests, such as very expensive images paid for by the taxpayer. Solutions do exist to reduce these costs by investing a small amount of money at the outset.

Title:

SHIFTING FROM LEGACY SYSTEMS TO A DATA MART AND COMPUTER ASSISTED INFORMATION RESOURCES NAVIGATION FRAMEWORK

Author(s):

Nikitas Karanikolas, Christos Skourlas

Abstract: Computer Assisted Information Resources Navigation (CAIRN) was specified, in the past, as a framework that allows end-users to import and store full-text and multimedia documents and then retrieve information using natural language or field-based queries. Our CAIRN system is a general tool that has focused on medical information covering the needs of physicians. Today, concepts related to Data Mining and Data Marts have to be incorporated into such a framework. In this paper a CAIRN-DAMM (Computer Assisted Medical Information Resources Navigation & Diagnosis Aid Based On Data Marts & Data Mining) environment is proposed and discussed. This integrated environment offers document management, multimedia document retrieval, a diagnosis-aid subsystem and a Data Mart subsystem that permits the integration of legacy systems' data. The diagnosis is based on the International Classification of Diseases and Diagnoses, 9th revision (ICD-9). The document collection stored in the CAIRN-DAMM system consists of data imported from the Hospital Information System (HIS), laboratory tests extracted from the Laboratory Information System (LIS), patient discharge letters, ultrasound, CT and MRI images, statistical information, bibliography, etc. There are also methods that permit us to propose, evaluate and organize uncontrolled terms in a systematic way and to propose relationships between these terms and ICD-9 codes. Finally, our experience in using the tool to create a Data Mart at the ARETEION University Hospital is presented. Experimental results and a number of interesting observations are also discussed.

Title:

ON OPERATIONS TO CONFORM OBJECT-ORIENTED SCHEMAS

Author(s):

Alberto Abelló, Elena Rodríguez, Marta Oliva, José Samos, Fèlix Saltor, Eladio Garví

Abstract: To build a Cooperative Information System from several pre-existing heterogeneous systems, the schemas of these systems must be integrated. The operations used for this purpose include conforming operations, which change the form of a schema. In this paper, a set of primitive conforming operations for Object-Oriented schemas is presented. These operations are organized in matrices according to the Object-Oriented dimensions (Generalization/Specialization, Aggregation/Decomposition) on which they operate.

Title:

A MULTI-LEVEL ARCHITECTURE FOR DISTRIBUTED OBJECT BASES

Author(s):

Markus Kirchberg

Abstract: The work described in this article arises from two needs. First, there is still a need to provide more sophisticated database systems than just relational ones. Secondly, there is a growing need for distributed databases. These needs are addressed by fragmenting schemata of a generic object data model and providing an architecture for its implementation. Key features of the architecture are the use of abstract communicating agents to realize database transactions and queries, the use of an extended remote procedure call to enable remote agents to communicate with one another, and the use of multi-level transactions. Linguistic reflection is used to map database schemata to the level of the agents. Transparency for the users is achieved by using dialogue objects, which are extended views on the database.

Title:

INVESTIGATING THE EFFECTS OF IT ON ORGANISATIONAL DESIGN VARIABLES: TOWARDS A THEORETICAL FRAMEWORK

Author(s):

Rahim Ghasemiyeh, Feng Li

Abstract: Over the past decades many papers have been published on the effects of Information Technology (IT) on organisations. However, despite the fact that IT has become a fundamental variable in organisational design, very few studies have explored this vital issue in a systematic and convincing fashion. The small amount of information and the few theories available on the effects of IT on organisational design are surprising. A further major deficiency of previous studies is the lack of empirical evidence, which has led researchers to describe IT in general ways and has resulted in differing and often contradictory findings. Many researchers have become concerned about this shortfall of comprehensive study on organisational design and IT, which has been apparent for decades; one objective of this research is to fill that gap. Aiming to develop a theoretical framework for evaluating the effects of IT on organisational design, this study investigates three questions: What are the effects of IT on organisational design variables? How does IT influence organisational design variables? Which effects result from which IT technologies? These questions constitute the most important features of this study and distinguish it from the previous literature.

Title:

SERVICES PROVIDERS’ PATTERNS FOR CLIENT/SERVER APPLICATIONS

Author(s):

Samar TAWBI, Bilal CHEBARO

Abstract: In this paper, we define two patterns that fall under the category of the architectural patterns described in (Shaw, 1996), to provide solutions for client-server applications. The first pattern defines the structure of a client-server application by casting the server's functionality as standardized services, and the second defines the structure of a service in this type of application. The solutions follow the pattern-definition template used in (Gamma, 1995).

Title:

A DISTRIBUTED JOB EXECUTION ENVIRONMENT USING ASYNCHRONOUS MESSAGING AND WEB TECHNOLOGIES

Author(s):

Rod Fatoohi, Nihar Gokhale

Abstract: This is a project for developing an asynchronous approach to the distributed job execution of legacy code. A job execution environment is a set of tools used to run jobs generated to execute a legacy code, handling different input and output values for each run. Current job execution and problem solving environments are mostly based on synchronous messaging and customized APIs that need to be ported to different platforms. Here we introduce an Internet-based job execution environment using off-the-shelf J2EE (Java 2 Enterprise Edition) components. The environment allows the execution of computational algorithms utilizing standard Internet technologies such as Java, XML, and asynchronous communication protocols. Our environment is based on a four-tier client/server architecture and uses Java messaging for inter-process communication and XML for job specification. It has been tested successfully using several legacy simulation codes on pools of Windows 2000 and Solaris systems.

Title:

DRUID: COUPLING USER WRITTEN DOCUMENTS AND DATABASES

Author(s):

André Flory, Frédérique Laforest, Youakim BADR

Abstract: Most database applications capture their data using graphical forms, whose text fields have limited sizes and predefined types. Although the data in such fields are associated with constraints, they must be modelled to conform to a rigid schema. Unfortunately, heavy constraints on data are not convenient in human activities, which are largely document-centric: documents are a natural medium of human production and consumption. Nowadays, increasing interest is placed on managing data with irregular structures, exchanging documents over the net, and manipulating their contents as efficiently as structured data. In this paper, we introduce DRUID, a comprehensive document capturing and wrapping system. It ensures flexible and well-adapted information capture based on a Document User Interface and, at the same time, information retrieval based on databases. DRUID relies on a wrapper that transforms document contents into relevant data, and it provides an expressive specification language that lets end-users write domain-related extraction patterns. We have validated our information system with a prototype of the different modules; this first realization is promising for a wide range of applications that use documents as a means to store, exchange and query information.

Title:

TOWARD A FRAMEWORK FOR MANAGING INTERNET-ORIENTED DATABASE RESOURCES

Author(s):

Guozhou Zheng, Chang Huang, Zhaohui Wu

Abstract: The term “Grid” is used to describe architectures that manage distributed resources across the Internet. This paper introduces the Database Grid, an Internet-oriented resource management architecture for database resources. We identify the basic requirements on databases in two major application domains: e-science and e-business. Next, we illustrate how a layered service architecture can fulfil the emerging data sharing and data management requirements of Grid computing applications. Finally, we introduce a series of protocols to define the proposed services.

Title:

A FRAMEWORK FOR GENERATING AND MAINTAINING GLOBAL SCHEMAS IN HETEROGENEOUS MULTIDATABASE SYSTEMS

Author(s):

Rehab Duwairi

Abstract: The problem of creating a global schema over a set of heterogeneous databases is becoming more and more important due to the availability of multiple databases within organizations. The global schema should provide a unified representation of the (possibly heterogeneous) local schemas by analyzing them to exploit their semantic contents, resolving semantic and schematic discrepancies among them, and producing a set of mapping functions that translate queries posed on the global schema into queries posed on the local schemas. In this paper, we provide a general framework that supports the integration of local schemas into a global one. The framework takes into consideration the fact that local schemas are autonomous and may evolve over time, which can make the definition of the global schema obsolete. We define a set of integration operators that integrate local schemas, based on the semantic relevance of their classes, into a set of virtual classes that constitute the global schema. We also define a set of modifications that can be applied to local schemas as a consequence of their local autonomy. For every local modification, we define a propagation rule that automatically disseminates the effects of that modification to the global schema without having to regenerate it from scratch via integration.
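
The propagation idea can be pictured as a rule table keyed by the kind of local modification. In the Python sketch below, whose names and schema representation are our assumptions, an "add attribute" change is applied to the affected virtual classes without re-running integration:

    # Sketch of per-modification propagation rules for global-schema upkeep.
    # The schema representation and rule names are invented for illustration.
    propagation_rules = {}   # kind of local change -> propagation rule

    def rule(change_kind):
        def register(fn):
            propagation_rules[change_kind] = fn
            return fn
        return register

    @rule("add_attribute")
    def _add_attribute(global_schema, local_class, attribute):
        # Surface the new local attribute in every virtual class built on it.
        for vclass in global_schema.values():
            if local_class in vclass["sources"]:
                vclass["attributes"].add(attribute)

    def propagate(global_schema, change_kind, **details):
        propagation_rules[change_kind](global_schema, **details)

    gs = {"Customer_v": {"sources": {"db1.Client"}, "attributes": {"name"}}}
    propagate(gs, "add_attribute", local_class="db1.Client", attribute="email")
    print(gs)   # the global schema now exposes 'email' without re-integration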

Title:

A SCALABLE DISTRIBUTED SEARCH ENGINE FOR INTRANET INFORMATION RETRIEVAL

Author(s):

Minoru Uehara, Minoru Udagawa, Yoshifumi Sakai, Hideki Mori, Nobuyoshi Sato

Abstract: Intranet information retrieval is very important for corporations. They are trying to discover useful knowledge from intranet web pages by means of data mining, knowledge discovery and so on, and a search engine is useful in this process. However, conventional search engines, which are based on a centralized architecture, are not suited to intranet information retrieval because intranet information is frequently updated, and centralized search engines take a long time to collect web pages with crawlers, robots and so on. We have therefore developed a distributed search engine, called Cooperative Search Engine (CSE), to retrieve fresh information. In CSE, a local search engine located at each Web server makes an index of the local pages, and a meta search server integrates these local search engines to realize a global search engine. With this architecture, however, communication delay occurs at retrieval time, so we have developed several speed-up techniques to realize fast retrieval. As a result, we have succeeded in increasing the scalability of CSE. In this paper, we describe these speed-up techniques and evaluate them.

Title:

A WEB APPLICATION FOR ENGLISH-CHINESE CROSS LANGUAGE PATENT RETRIEVAL

Author(s):

Wen-Yuan Hsiao, Jiangping Chen, Elizabeth Liddy

Abstract: This paper describes an English-Chinese cross-language patent retrieval system built on commercial database management software. The system makes use of various software products and lexical resources to help native English speakers search for Chinese patent information. The paper reports the overall system design and the cross-language information retrieval (CLIR) experiments conducted for performance evaluation. The experimental results and follow-up analysis demonstrate that commercial database systems can be used as an IR system with reasonable performance. Better performance could be achieved if the translation resources were customized to the system's document collection, or if more sophisticated translation disambiguation strategies were applied.

Title:

TRIGGER-BASED COMPENSATION IN WEB SERVICE ENVIRONMENTS

Author(s):

Randi Karlsen, Thomas Strandenaes

Abstract: In this paper we describe a technique for implementing compensating transactions, based on the active database concept of triggers. This technique enables specification and enforcement of compensation logic in a manner that facilitates consistent and semi-automatic compensation. A web service, with its loosely-coupled nature and autonomy requirements, represents an environment well suited for this compensation mechanism.
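
As a rough illustration of the idea (not the authors' mechanism, which lives in the database's trigger system), a table can map each forward operation to a rule that records its compensating action, to be replayed in reverse on failure; all names below are invented:

    # Trigger-style compensation sketch: each forward operation fires a rule
    # that records how to undo it; on failure the log is replayed in reverse.
    compensation_log = []   # stack of pending compensating actions
    triggers = {}           # forward operation -> rule building its undo

    def on(operation):
        def register(fn):
            triggers[operation] = fn
            return fn
        return register

    @on("reserve_seat")
    def _undo_reserve(args):
        return ("cancel_seat", {"booking": args["booking"]})

    def execute(operation, **args):
        print("forward:", operation, args)      # the real service call goes here
        if operation in triggers:               # the trigger fires after the call
            compensation_log.append(triggers[operation](args))

    def compensate():
        while compensation_log:                 # undo in reverse (saga-style)
            op, args = compensation_log.pop()
            print("compensating:", op, args)

    execute("reserve_seat", booking=42)
    compensate()   # -> compensating: cancel_seat {'booking': 42}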

Title:

AN ARCHITECTURE OF A SECURE DATABASE FOR NETWORKED COLLABORATIVE ACTIVITIES

Author(s):

Akira Baba, Michiharu Kudo, Kanta Matsuura

Abstract: Open networks can be used for many purposes, such as e-commerce and e-government. In contrast to those conventional applications, we consider networked collaborative activities, for example networked research activities. Such an application could be very useful and could significantly promote research activities; however, many security problems must be taken care of. Among these problems, we focus in this paper on the architecture of a secure database. The design of such an architecture is not a trivial task, since the data sets in the database may comprise a wide range of data types, and each data type needs to satisfy its own security properties, including not only security itself but also appropriate management of intellectual-property rights, and so on. We therefore design an architecture for a secure database that takes data types and various security operations into consideration.

Title:

USING INFORMATION TECHNOLOGIES FOR MANAGING COOPERATIVE INFORMATION AGENT-BASED SYSTEMS

Author(s):

Nacereddine ZAROUR, Mahmoud BOUFAIDA, Lionel SEINTURIER

Abstract: One of the most important problems encountered in cooperation among distributed information systems is heterogeneity, which is often not easy to deal with. This problem requires the use of the best combination of software and hardware components for each organization. However, the few approaches suggested for managing virtual factories have not proved satisfactory. Along with motivating the importance of such systems, this paper describes the major design goals of an agent-based architecture for supporting the cooperation of heterogeneous information systems. It also shows how this architecture can be implemented by combining the XML and CORBA technologies. This combination guarantees the interoperability of legacy systems regardless of the heterogeneity of their data models and platforms, respectively, and therefore improves the cooperation process. Examples are given from the supply chains of manufacturing enterprises.

Title:

MODELING A MULTIVERSION DATA WAREHOUSE: A FORMAL APPROACH

Author(s):

Tadeusz Morzy, Robert Wrembel

Abstract: A data warehouse is a large centralized repository that stores a collection of data integrated from external data sources (EDSs). The purpose of building a data warehouse is to provide integrated access to distributed and usually heterogeneous information and to provide a platform for data analysis and decision making. EDSs are in most cases autonomous; as a consequence, their content and structure change over time. In order to keep the content of a data warehouse up to date after source data have changed, various warehouse refreshing techniques have been developed, mainly based on incremental view maintenance. A data warehouse also needs refreshing after the schema of an EDS has changed. This problem has, however, received little attention so far. The few approaches that have been proposed tackle the problem mainly by means of temporal extensions to a data warehouse; such techniques expose their limitations in multi-period querying. Moreover, in order to support decision makers' trend predictions, what-if analysis is often required. For these purposes, multiversion data warehouses seem very promising. In this paper we propose a model of a multiversion data warehouse and present our prototype implementation of such a warehouse.

Title:

TRADING PRECISION FOR TIMELINESS IN DISTRIBUTED REAL-TIME DATABASES

Author(s):

Bruno SADEG

Abstract: Many information systems need not obtain complete or exact answers to queries submitted via a DBMS (Database Management System). Indeed, in certain real-time applications, incomplete results obtained in time are more valuable than complete results obtained late. When the applications are distributed, the DBMSs on which they are based face the main problem of managing transactions (concurrency control and commit processing). Since these processes must be completed in time (such that each transaction meets its deadline), committing transactions on time is the main issue. In this paper, we deal with the global distributed transaction commit and local concurrency control problems in applications where transactions may be decomposed into a mandatory part and an optional part. In our model, these parts are determined by a weight parameter assigned to each subtransaction, which helps the coordinator process execute the commit phase when a transaction is close to its deadline. Another parameter, the estimated execution time, is used by each participant site in combination with the weight to resolve the conflicts that may occur between local subtransactions. The mechanism used to deal with these issues is called the RT-WEP (Real-Time Weighted Early Prepare) protocol. Simulations have been made to compare the RT-WEP protocol with two other protocols designed for the same purpose. The results show that the RT-WEP protocol can be applied efficiently in a distributed real-time context, allowing more transactions to meet their deadlines.

Title:

A MODEL-DRIVEN APPROACH FOR ITEM SYNCHRONIZATION AND UCCNET INTEGRATION IN LARGE E-COMMERCE ENTERPRISE SYSTEMS

Author(s):

Santhosh Kumaran, Fred Wu, Simon Cheng, Mathews Thomas, Amaresh Rajasekharan, Ying Huang

Abstract: The pervasive connectivity of the Internet and the powerful architecture of the WWW are changing many market conventions and creating a tremendous opportunity for conducting business on the Internet. Digital marketplace business models and the advancement of Web related standards are tearing down walls within and between different business artifacts and entities at all granularities and at all levels, from devices, operating systems and middleware to directory, data, information, application, and finally the business processes. As a matter of fact, business process integration (BPI), which entails the integration of all the facets of business artifacts and entities, is emerging as a key IT challenge. In this paper, we describe our effort in exploring a new approach to address the complexities of BPI. More specifically, we study how to use a solution template based approach for BPI and explore the validity of this approach with a frequently encountered integration problem, the item synchronization problem for large enterprises. The proposed approach can greatly reduce the complexities of the business integration task and reduce the time and amount of effort of the system integrators. Different customers are deploying the described Item Synchronization system.

Title:

DATA POSITION AND PROFILING IN DOMAIN-INDEPENDENT WAREHOUSE CLEANING

Author(s):

Ajumobi Udechukwu, Christie Ezeife

Abstract: A major problem that arises from integrating different databases is the existence of duplicates. Data cleaning is the process of identifying two or more records within a database that represent the same real-world object (duplicates), so that a unique representation of each object is adopted. Existing data cleaning techniques rely heavily on full or partial domain knowledge. This paper proposes a positional algorithm that achieves domain-independent de-duplication at the attribute level. The paper also proposes a technique for field weighting through data profiling which, when used with the positional algorithm, achieves domain-independent cleaning at the record level. Experiments show that the positional algorithm achieves more accurate de-duplication than existing algorithms.
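
A position-sensitive field comparison combined with profiled field weights might look like the following sketch; the scoring scheme and threshold are illustrative assumptions, not the paper's algorithm:

    # Illustrative position-sensitive field matcher for domain-independent
    # de-duplication; scoring and threshold are assumptions, not the paper's.
    def positional_similarity(a, b):
        a, b = a.lower(), b.lower()
        if not a or not b:
            return 0.0
        matches = sum(1 for x, y in zip(a, b) if x == y)  # same char, same slot
        return matches / max(len(a), len(b))

    def records_match(r1, r2, weights, threshold=0.75):
        total = sum(w * positional_similarity(r1[f], r2[f])
                    for f, w in weights.items())
        return total / sum(weights.values()) >= threshold

    weights = {"name": 2.0, "city": 1.0}   # e.g. derived from data profiling
    print(records_match({"name": "Jon Smith", "city": "Ottawa"},
                        {"name": "Jon Smyth", "city": "Ottawa"}, weights))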

Title:

OPTIMIZING ACCESS IN A DATA INTEGRATION SYSTEM WITH CACHING AND MATERIALIZED DATA

Author(s):

Bernadette Farias Lóscio, Ana Carolina Salgado, Maria da Conceição Moraes Batista

Abstract: Data integration systems are designed to offer uniform access to data from heterogeneous and distributed sources. Two basic approaches have been proposed in the literature to provide integrated access to multiple data sources. In the materialized approach, data are accessed in advance, cleaned, integrated and stored in a data warehouse, and the queries submitted to the integration system are evaluated against this repository without direct access to the data sources. In the virtual approach, the queries posed to the integration system are decomposed into queries addressed directly to the sources, and the data obtained from the sources are integrated and returned to the user. In this work we present a data integration environment for integrating data distributed over multiple web data sources which combines features of both approaches, supporting the execution of both virtual and materialized queries. Another distinguishing feature of our environment is the use of a cache system to answer the most frequently asked queries. All these resources are put together with the goal of optimizing the overall query response time.

Title:

GLOBAL QUERY OPTIMIZATION BASED ON MULTISTATE COST MODELS FOR A DYNAMIC MULTIDATABASE SYSTEM

Author(s):

Qiang Zhu

Abstract: Global query optimization in a multidatabase system (MDBS) is a challenging issue since some local optimization information such as local cost models may not be available at the global level due to local autonomy. It becomes even more difficult when dynamic environmental factors are taken into consideration. In our previous work, a qualitative approach was suggested to build so-called multistate cost models to capture the performance behavior of a dynamic multidatabase environment. It has been shown that a multistate cost model can give a good cost estimate for a query run in any contention state in the dynamic environment. In this paper, we present a technique to perform query optimization based on multistate cost models for a dynamic MDBS. Two relevant algorithms are proposed. The first one selects a set of representative system environmental states for generating an execution plan with multiple versions for a given query at compile time, while the second one efficiently determines the best version to invoke for the query at run time. Experiments demonstrate that the proposed technique is quite promising for performing global query optimization in a dynamic MDBS. Compared with related work on dynamic query optimization, our approach has an advantage of avoiding the high overhead for modifying or re-generating an execution plan for a query based on dynamic run-time information.
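
The run-time side of the idea can be pictured as choosing, among the versions compiled for the representative states, the one whose state is closest to the observed state. The sketch below models contention states as numeric load vectors, which is our simplification of the paper's qualitatively derived states:

    # Pick the plan version compiled for the contention state nearest to the
    # state observed at run time. State representation is an assumption.
    def pick_version(plan_versions, observed_state):
        """plan_versions: {state (tuple of loads): plan}."""
        def distance(state):
            return sum((a - b) ** 2 for a, b in zip(state, observed_state))
        return plan_versions[min(plan_versions, key=distance)]

    versions = {(0.1, 0.2): "plan_for_low_load", (0.8, 0.9): "plan_for_high_load"}
    print(pick_version(versions, (0.7, 0.6)))   # -> plan_for_high_load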

Title:

A DATA, COMPUTATION, KNOWLEDGE GRID: THE CASE OF THE ARION SYSTEM

Author(s):

Spyros Lalis, Manolis Vavalis, Kyriakos Kritikos, Antonis Smardas, Dimitris Plexousakis, Marios Pitikakis, Catherine Houstis, Vassilis Christophides

Abstract: The ARION system provides basic e-services for the search and retrieval of objects in scientific collections, such as datasets, simulation models, and the tools necessary for statistical and/or visualization processing. These collections may represent the application software of various scientific areas; they reside in geographically dispersed organizations and constitute the system content. The user may invoke on-line computations of scientific datasets when the latter are not found in the system. Thus, ARION provides the basic infrastructure for accessing and deriving scientific information in an open, distributed and federated system.

Title:

SCANNING A LARGE DATABASE ONCE TO MINE ASSOCIATION RULES

Author(s):

Frank Wang

Abstract: Typically, 95% of the data in transaction databases are zero. With data this sparse, performance quickly degrades due to the heavy I/O overheads of sorting and merging intermediate results. In this work, we first introduce a list representation in main memory for storing and computing datasets: the sparse transaction dataset is compressed by removing the empty cells. On top of this list representation we propose a ScanOnce algorithm for association rule mining, which needs to scan the transaction database only once to generate all the possible rules. In contrast, the well-known Apriori algorithm requires repeated scans of the database, resulting in heavy I/O accesses, particularly for large candidate sets. Owing to the integrity of the data structure, the complete itemset counter tree can be stored in a (one-dimensional) vector without gaps, whose direct-addressing capability ensures fast access to any counter. In our opinion, this new algorithm economizes both storage space and accesses. Experiments show that the ScanOnce algorithm beats the classic Apriori algorithm for large problem sizes, by factors ranging from 2 to more than 6.
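
The single-scan principle can be illustrated with plain in-memory counters: each transaction, read exactly once, updates the counts of all its itemsets. The sketch below enumerates subsets explicitly, which is only feasible for short transactions; the compressed counter tree stored in a vector is precisely what lets the paper avoid this blow-up.

    # One-pass itemset counting in the spirit of a scan-once miner.
    # A naive subset enumeration stands in for the paper's counter tree.
    from collections import Counter
    from itertools import combinations

    def scan_once(transactions, min_support):
        counts = Counter()
        for t in transactions:                # the single database scan
            items = sorted(set(t))
            for k in range(1, len(items) + 1):
                for itemset in combinations(items, k):
                    counts[itemset] += 1
        return {s: c for s, c in counts.items() if c >= min_support}

    db = [["bread", "milk"], ["bread", "beer"], ["bread", "milk", "beer"]]
    print(scan_once(db, min_support=2))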

Title:

INTEGRATION OF DISTRIBUTED SOFTWARE PROCESS MODELS

Author(s):

Mohamed Ahmed-nacer, Nabila Lardjane

Abstract: Developing software-in-the-large involves many developers, including experts in various aspects of software development and in various aspects of the application area. This paper presents an approach to integrating software process models in a distributed context, based on the fusion of process fragments (components) defined with the UML (Unified Modelling Language) notation. The integration methodology presented unifies the various fragments at both the static and the dynamic (behavioural) levels. We consider various possible semantic conflicts: formal definitions of the inter-fragment properties are formulated and solutions for these conflicts are proposed. This integration approach provides multiple solutions to the integration conflicts and makes it possible to improve and design new software process models by merging reusable process fragments.

Title:

A BITEMPORAL STORAGE STRUCTURE FOR A CORPORATE DATA WAREHOUSE

Author(s):

Alberto Abelló, Carme Martín

Abstract: This paper brings together two research areas involving the representation of time, namely Data Warehouses and Temporal Databases. Looking at temporal aspects within a data warehouse, more similarities than differences between temporal databases and data warehouses have been found. The first point of contact between these areas is the possibility of redefining a data warehouse in terms of a bitemporal database, and a bitemporal storage mechanism is proposed in this paper. To meet this goal, a temporal study of the data sources is developed. Moreover, we show how Object-Oriented temporal data models contribute the integration and subject-orientation that a data warehouse requires.
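
The core of a bitemporal record is a pair of independent intervals: valid time (when the fact holds in reality) and transaction time (when the warehouse believed it). A minimal sketch follows, with field names that are our assumptions:

    # Minimal bitemporal row and an "as of" query over both time dimensions.
    # Field names and the open-ended MAX convention are assumptions.
    from dataclasses import dataclass
    from datetime import date

    MAX = date.max

    @dataclass
    class BitemporalRow:
        key: str
        value: float
        valid_from: date          # when the fact starts holding in reality
        tx_from: date             # when the warehouse recorded this version
        valid_to: date = MAX      # open-ended: fact still true
        tx_to: date = MAX         # open-ended: version still current

    def as_of(rows, valid_at, known_at):
        """What the warehouse believed at 'known_at' about time 'valid_at'."""
        return [r for r in rows
                if r.valid_from <= valid_at < r.valid_to
                and r.tx_from <= known_at < r.tx_to]

    rows = [
        BitemporalRow("price", 9.9, date(2003, 1, 1), date(2003, 1, 1),
                      tx_to=date(2003, 3, 1)),        # belief later superseded
        BitemporalRow("price", 11.5, date(2003, 1, 1), date(2003, 3, 1)),
    ]
    print(as_of(rows, valid_at=date(2003, 2, 1), known_at=date(2003, 2, 1)))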

Title:

TOWARD A DOCUMENTARY MEMORY

Author(s):

Christine JULIEN, Max CHEVALIER, Kais Khrouf

Abstract: An organisation must enable its employees to share knowledge and information in order to optimise their tasks. The volume of information contained in documents is of major importance to companies: they must be fully reactive to any new information and must follow its fast evolution. A documentary memory, which stores this information and allows end-users to access and analyse it, therefore constitutes a necessity for every enterprise. In this paper, we propose the architecture of such a system, based on a document warehouse, allowing the storage of relevant documents and their exploitation via information retrieval, factual data querying and multidimensional analysis techniques.

Title:

DISTRIBUTED OVERLOAD CONTROL FOR REAL-TIME REPLICATED DATABASE SYSTEMS

Author(s):

Samia Saad-Bouzefrane, C. Kaiser

Abstract: In order to meet their temporal constraints, current applications such as Web-based services and electronic commerce use the technique of data replication. To benefit from replication, we need to develop concurrency control mechanisms that perform well even when the distributed system is overloaded. In this paper, we present a protocol that uses a new notion called the importance value, which is associated with each real-time transaction. Under overload conditions, this value is used to select the transactions most important to the application so that they may pursue their execution; the other transactions are aborted. Our protocol RCCOS (Replica Concurrency-Control for Overloaded Systems) augments MIRROR, a concurrency control protocol designed for firm-deadline applications operating on replicated real-time databases, in order to manage transactions efficiently when the distributed system is overloaded. A platform has been developed to measure the number of transactions that meet their deadlines when the processor load of each site is controlled.

Title:

INCREMENTAL HORIZONTAL FRAGMENTATION OF DATABASE CLASS OBJECTS

Author(s):

Christie Ezeife, Pinakpani Dey

Abstract: Horizontal fragments of a class in an object-oriented database system contain subsets of the class extent, or instance objects. These fragments are created from a set of system inputs consisting of the application queries, their access frequencies, and the object database schema with its components: the class inheritance and class composition hierarchies as well as the instance objects of classes. When these inputs to the fragmentation process change enough to affect system performance, re-fragmentation is usually done from scratch. This paper proposes an incremental re-fragmentation method that uses mostly the updated part of the input data and the previous fragments to define new fragments more quickly, saving system resources and making the data at distributed sites more available for network and web access.

Title:

GEONIS - FRAMEWORK FOR GIS INTEROPERABILITY

Author(s):

Leonid Stoimenov, Slobodanka Djordjevic-Kajan

Abstract: This paper presents research in Geographic Information Systems interoperability. It describes our development work and introduces an interoperability framework called GeoNis, which uses the proposed technologies to perform integration tasks between GIS applications and legacy data sources over the Internet. Our approach provides integration of distributed GIS data sources and legacy information systems in a local community environment.

Title:

BUSINESS CHANGE IMPACTS ON SYSTEM INTEGRATION

Author(s):

Fabio Rollo, Gabriele Venturi, Gerardo Canfora

Abstract: Large organizations have disparate legacy systems, applications, processes, and data sources, which interact by means of various kinds of interconnections. Merging of companies can increase the complexity of system integration, with the need to integrate applications like Enterprise Resource Planning and Customer Relationship Management. Even if sometimes these applications provide a kind of access to their underlying data and business logic, Enterprise Application Integration (EAI) is still a challenge. In this paper we analyse the needs that drive EAI with the aim of identifying the features that EAI platforms must exhibit to enable companies to compete in the new business scenarios. We discuss the limitations of current EAI platforms and their evaluation methods, mainly economies of scale and economies of scope, and argue that a shift is needed towards the economies of learning model. Finally, we outline an EAI architecture that addresses current limitations enabling economies of learning.

Title:

TECHNICAL USE QUALITY IN A UNIVERSITY ENTERPRISE RESOURCE PLANNING SYSTEM: PERCEPTIONS OF RESPONSE TIME AND ITS STRATEGIC IMPORTANCE

Author(s):

Michelle Morley

Abstract: Enterprise Resource Planning systems (ERPs) are large, complex enterprise-wide information systems that offer the benefits of integration and data-richness to organisations. This paper explores the quality issue of response times, and the impact of poor response times on the ability of the organisation studied to achieve its strategy. The PeopleSoft ERP was implemented within the International Centre (for international student recruitment and support) at an Australian university, as part of a university-wide implementation. To achieve the goal of increased international student enrolments, fast turnaround times on student applications are critical. The ERP offers poor response times, which makes it difficult for the International Centre to achieve high conversion rates (from applications to enrolments) and hence reduces the perceived value, or ‘business quality’ (Salmela 1997), of the system to the organisation. The paper uses a quality model developed from Eriksson and Toern's (1990) SOLE model, Lindroos' (1997) Use Quality model and Salmela's (1997) Business Quality model.

Title:

INTEGRATING AUTOMATION DESIGN INFORMATION WITH XML

Author(s):

Seppo Kuikka, Mika Viinikkala

Abstract: Due to the number of parties participating in the design phase of an automation project, various design, engineering and operational systems are needed. At the moment, the means of transferring information from one system to another, so that it can be further processed or reused, are not efficient. An integration approach is introduced in which XML technologies are utilized to implement systems integration. The data content of each system is defined by XML Schema instances, and XML messages containing automation design information are transformed using transformation stylesheets that employ a generic standard vocabulary. XML technologies enable loosely coupled, platform-independent, data-content-oriented integration. A case study that proceeds according to the approach is also described. It consists of a software prototype responsible for communication and of data content definitions, including XML Schema instances and transformation stylesheets, for the systems covered in the study. We find that XML technologies seem to be part of the right solution; however, some issues related to schema design and transformations are problematic, and if complex systems are to be integrated, XML technologies alone are not sufficient. Future developments include a general-purpose web-service solution intended to answer the questions not dealt with by this case study.

Title:

IMPRECISION BASED QUERIES OVER MATERIALIZED AND VIRTUAL INTEGRATED VIEWS

Author(s):

Alberto Trombetta, Danilo Montesi

Abstract: The Global-As-View approach to data integration has focused on the (semi-automatic) definition of a global schema starting from a given set of known information sources. In this paper, we investigate how to employ concepts and techniques for modelling imprecision in defining mappings between the global schema and the source schemas and in answering queries posed over the global schema. We propose an extended relational algebra based on fuzzy sets for defining SQL-like query mappings. Such mappings explicitly take into account the similarities between global and source schemas in order to discard source data items with low similarity and to express the relevance of different sources in populating the global schema. In case the global schema is not materialized, we propose a query rewriting technique for expressing over the sources the queries posed over the global schema.
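
The following toy sketch (ours, not the authors' fuzzy algebra) shows the two roles similarity plays in such mappings: a threshold discards low-similarity source items, and a combined grade expresses each source's relevance in populating the global relation. All names and numbers are invented.

    # Toy sketch of similarity-thresholded population of a global relation.
    SIM_THRESHOLD = 0.6

    # Each source carries a schema-similarity score w.r.t. the global schema
    # and a relevance weight (both invented for illustration).
    sources = [
        {"name": "S1", "similarity": 0.9, "relevance": 1.0,
         "tuples": [("ACME", "Boston"), ("Foo", "Paris")]},
        {"name": "S2", "similarity": 0.4, "relevance": 0.7,   # below threshold
         "tuples": [("Bar", "Rome")]},
    ]

    def populate_global(sources, threshold=SIM_THRESHOLD):
        """Keep tuples only from sources similar enough to the global schema,
        grading each tuple by similarity * relevance (a simple t-norm)."""
        global_relation = []
        for src in sources:
            if src["similarity"] < threshold:
                continue  # discard source data items with low similarity
            grade = src["similarity"] * src["relevance"]
            for t in src["tuples"]:
                global_relation.append((t, grade))
        return global_relation

    print(populate_global(sources))  # only S1's tuples survive, graded 0.9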

Title:

THE HAMLET DILEMMA ON EXTERNAL DATA IN DATA WAREHOUSES

Author(s):

Mattias Strand, Marcus Olsson

Abstract: Data warehouses are currently given a lot of attention by both academics and practitioners, and the amount of literature describing different aspects of data warehousing is ever-increasing. Much of this literature covers the characteristics and the origin of the data in the data warehouse, and the importance of external data is often pinpointed. Still, descriptions of external data remain at a general level, and the extent of external data usage is not given much attention. Therefore, in this paper, we describe the results of an interview study aimed in part at outlining the current usage of external data in data warehouses. The study was directed towards Swedish data warehouse developers, and the results show that external data is not used in data warehouses as frequently as expected: only 58% of the respondents had worked in projects with an objective of integrating external data. The reasons given for this rather low usage were problems in assuring the quality of the external data and a lack of data warehouse maturity among the user organizations.

Title:

PERFORMANCE IMPROVEMENT OF DISTRIBUTED DATABASE MANAGEMENT SYSTEMS

Author(s):

Josep Maria Muixi, August Climent

Abstract: Distributed databases offer a complete range of desirable features: availability, reliability, and responsiveness. However, all of these benefits come at the expense of some extra management; the main issues considered in the literature as the basis of a well-tuned distributed database system are data replication and synchronization, concurrent access, distributed query optimization, and performance improvement. The work presented here aims to provide some clues to the last point by considering an issue which, in our opinion, has not been taken sufficiently into account: load balancing in these distributed systems. We show how the right load-balancing policy influences the performance of a distributed database management system, and more specifically of a shared-nothing one.
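
As a minimal illustration of the kind of policy at stake, the sketch below dispatches queries to the currently least-loaded node. Node names and the scalar cost model are our invention; a real shared-nothing system would additionally be constrained by data placement.

    # Minimal sketch of a least-loaded dispatch policy for a cluster.
    import heapq

    class Balancer:
        def __init__(self, nodes):
            # heap of (current load, node) pairs, least-loaded node on top
            self.heap = [(0.0, n) for n in nodes]
            heapq.heapify(self.heap)

        def dispatch(self, query_cost):
            """Send the next query to the currently least-loaded node."""
            load, node = heapq.heappop(self.heap)
            heapq.heappush(self.heap, (load + query_cost, node))
            return node

    b = Balancer(["node1", "node2", "node3"])
    for cost in [3.0, 1.0, 2.0, 1.0]:
        print(b.dispatch(cost))   # node1, node2, node3, node2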

Title:

EMPIRICAL VALIDATION OF METRICS FOR UML STATECHART DIAGRAMS

Author(s):

David Miranda, Marcela Genero, Mario Piattini

Abstract: It is widely recognised that the quality of Object Oriented Software Systems (OOSS) must be assessed from the early stages of their development. OO conceptual models are key artifacts produced in these early phases, covering not only static aspects but also dynamic aspects. Therefore, focusing on quality aspects of conceptual models could contribute to producing better quality OOSS. While quality aspects of structural diagrams, such as class diagrams, have been widely researched, the quality of behavioural diagrams such as statechart diagrams has been neglected. This fact led us to define a set of metrics for measuring their structural complexity. In order to gather empirical evidence that the structural complexity of statechart diagrams is related to their understandability, we carried out a controlled experiment in a previous work. The aim of this paper is to present a replication of that experiment. The findings obtained in the replication corroborate the results of the first experiment, in the sense that, to some extent, the number of transitions, the number of states and the number of activities influence statechart diagram understandability.
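
For concreteness, here is a sketch of the three size counts such metrics reduce to; the statechart encoding and the metric names NS/NT/NA are our assumptions, not necessarily the authors' exact definitions.

    # Sketch of the three structural-complexity counts the experiments relate
    # to understandability: states (NS), transitions (NT), activities (NA).
    statechart = {
        "states": ["Idle", "Running", "Paused"],
        "transitions": [("Idle", "Running"), ("Running", "Paused"),
                        ("Paused", "Running"), ("Running", "Idle")],
        "activities": ["start_motor", "stop_motor"],
    }

    def structural_complexity(sc):
        return {"NS": len(sc["states"]),
                "NT": len(sc["transitions"]),
                "NA": len(sc["activities"])}

    print(structural_complexity(statechart))  # {'NS': 3, 'NT': 4, 'NA': 2}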

Title:

A SOLUTION FOR CONTEXTUAL INTEGRATION BASED ON THE CALCULATION OF A SEMANTIC DISTANCE

Author(s):

Fabrice JOUANOT, Kokou Yétongnon, Nadine Cullot

Abstract: Achieving the interoperation of heterogeneous data sources with respect to their context and rich semantics remains a real challenge. Users need to integrate useful information and query coupled data sources in a transparent way. We propose a solution that helps integrate heterogeneous sources according to their context. We present a model for defining contextual information associated with local data, and a mechanism which uses these semantics to compare local contexts and integrate relevant data. Our contextual integration approach, using a rule-based language, allows us to build virtual objects in a semi-automatic way. These objects play the role of transparent interfaces for end users.

Title:

DATA WAREHOUSE – PROCESS TO DEVELOP

Author(s):

Prasad  N. Sivalanka , Rakesh Agarwal

Abstract: Building a data warehouse involves complex analysis and design of an enterprise-wide decision support system. Dimensional modeling can be used to design effective and usable data warehouses. The paper highlights the steps in the implementation of a data warehouse in a client project. All the observations and phases mentioned in this document refer to a project carried out on medium-to-large multi-dimensional databases for a client in a controlled test environment. The recommendations, conclusions and observations made in this document should not be generalized to all cases unless verified and tested.

Title:

CREATING THE DOCSOUTH PUBLISHER

Author(s):

Tony Bull

Abstract: In this case study, a systems integration problem is solved using object-oriented Perl, XML/XSLT, and Java. Over the last two years, the world-renowned digitization project ‘Documenting the American South’ has been gradually converting its SGML-based legacy system to an XML-centric system. As of September 2002, the “DocSouth Publisher” is the latest step in realizing the new XML environment.

Title:

A COMPARISON OF DATABASE SYSTEMS FOR STORING XML DOCUMENTS

Author(s):

Roger Davies, Miguel Mira da Silva, Rui Cerveira Nunes

Abstract: As the need to store large quantities of increasingly complex XML documents grows, so do the requirements for database products that claim to support XML. For example, it is no longer acceptable to store XML documents without using indices for the efficient retrieval of large collections. In this paper we analyse current versions of products representing the three main approaches to XML storage: native XML databases, XML support in relational databases, and object-oriented databases with XML support. Several products are analysed and compared, including performance tests. Our main conclusion is that the market urgently needs a standard query language and API, analogous to SQL and ODBC, which were probably the main drivers of the success of relational databases.

Title:

AUTOMATED DATA MAPPING FOR CROSS ENTERPRISE DATA INTEGRATION

Author(s):

Stefan Böttcher, Sven  Groppe

Abstract: Currently, multiple different classifications for product descriptions are used in enterprise-internal applications and cross-enterprise applications, e.g. e-procurement systems. A key problem is running applications developed for one catalogue on product descriptions that are stored under a different classification. A common solution is for a catalogue specialist to manually map the different classifications onto each other. Our approach avoids unnecessary manual mapping work and automatically generates mappings between different classifications wherever possible. This allows us to run e-procurement applications on different catalogues with considerably reduced manual mapping work, which we consider an important step towards enterprise application integration.

Title:

XML-BASED OLAP QUERY PROCESSING IN FEDERATED DATA WAREHOUSES

Author(s):

Wolfgang Essmayr, Edgar Weippl, Johannes Huber, Oscar  Mangisengi

Abstract: Today, XML is the format of choice for implementing interoperability between systems. This paper addresses XML-based query processing for heterogeneous OLAP data warehouses in a federated architecture. In our approach, XML, as an intermediary representation, can be used as a basis both for federated queries and for queries to local OLAP data warehouses, whereas an XML DTD can be used for query language definition and for validation of an XML federated query.

Title:

THE ENHANCED GREEDY INTERCHANGE ALGORITHM FOR THE SELECTION OF MATERIALIZED VIEWS UNDER A MAINTENANCE COST CONSTRAINT IN DATA WAREHOUSES

Author(s):

Omar Karam, Osman Ibrahim, Rasha Ismail, Mohamed El-Sharkawy

Abstract: A data warehouse is a central repository of integrated information available for the purpose of efficient decision support or OLAP queries. One of the important decisions when designing a data warehouse is the selection of views to materialize and maintain in the warehouse. The goal is to select an appropriate set of materialized views so as to minimize the total query response time and the cost of maintaining the selected views, under the constraint of a given total view maintenance time. In this paper, the maintenance cost is incorporated into the Greedy Interchange Algorithm (GIA). The performance and behavior of the Greedy Algorithm considering maintenance costs (GAm) and the proposed Greedy Interchange Algorithm considering maintenance costs (GIAm) are examined through experimentation. The GIAm improves on the results of the GAm by 56.5%, 60.6% and 80% for maintenance time constraints of 100%, 75% and 40% of the total maximum maintenance time, respectively. An enhancement to the GIAm is also proposed: it selects a subset of views to which the GIA is applied, rather than all the views of a view graph. This selection is based upon view dependencies and results in a substantial reduction in run time.
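
For intuition, the sketch below implements only the generic greedy baseline under a maintenance budget (best query benefit per unit of maintenance cost that still fits); the interchange step that distinguishes the GIAm is omitted, and all numbers are invented.

    # Illustrative greedy selection of materialized views under a
    # maintenance-time budget (a GAm-style baseline, not the paper's GIAm).
    views = {
        # view: (query-time benefit if materialized, maintenance cost)
        "v1": (100, 30),
        "v2": (80, 10),
        "v3": (55, 25),
        "v4": (20, 5),
    }

    def greedy_select(views, budget):
        selected, spent = [], 0
        remaining = dict(views)
        while remaining:
            # candidates that still fit within the maintenance budget
            fitting = {v: bc for v, bc in remaining.items()
                       if spent + bc[1] <= budget}
            if not fitting:
                break
            # pick the best benefit/cost ratio
            v = max(fitting, key=lambda v: fitting[v][0] / fitting[v][1])
            selected.append(v)
            spent += remaining.pop(v)[1]
        return selected, spent

    print(greedy_select(views, budget=40))  # (['v2', 'v4', 'v3'], 40)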

Title:

RANKING AND SELECTING COMPONENTS TO BUILD SYSTEMS

Author(s):

Alberto Sillitti, Paolo Predonzani, Giampiero Granatella, Tullio Vernazza, Giancarlo Succi

Abstract: Component-Based Software Engineering (CBSE) allows developers to build systems using existing components. Developers need to find the set of components that best implements the required features. Retrieving components manually can be very complicated and time-consuming. Tools that partially automate this task help developers build better systems with less effort. This paper proposes a methodology for ranking and selecting components in order to build an entire system rather than retrieving just a single component. This methodology was developed in the European project CLARiFi (CLear And Reliable Information For Integration).

Title:

A CASE STUDY FOR A QUERY-BASED WAREHOUSING TOOL

Author(s):

Rami Rifaieh, Nabila Aicha Benharkat

Abstract: Data warehousing is an essential element of decision support. In order to supply a decisional database, meta-data is needed to enable communication between the various functional areas of the warehouse, and an ETL (Extraction, Transformation, and Load) tool is needed to define the warehousing process. Developers use a mapping guideline to specify, for each attribute, the mapping expression the ETL tool must apply. In this paper, we define a model covering different types of mapping expressions and use this model to create an active ETL tool. In our approach, we use queries to carry out the warehousing process: SQL queries represent the mapping between the source and the target data. We thus allow the DBMS to play an expanded role as a data transformation engine as well as a data store. This approach enables complete interaction between the mapping meta-data and the warehousing tool. In addition, this paper investigates the efficiency of a query-based data warehousing tool and describes a query generator for reusable and more efficient data warehouse (DW) processing. Besides exposing the advantages of this approach, the paper presents a case study based on real-scale commercial data to verify our tool's features.
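
A minimal sketch of the central idea that the DBMS itself executes the mapping: the generated ETL step is just an INSERT ... SELECT carrying the mapping expression. Table and column names are invented, and sqlite3 merely stands in for the target DBMS.

    # Sketch of a query-based warehousing step: the per-attribute mapping
    # expression (cents -> euros, grouped by day) is carried by plain SQL,
    # so the DBMS acts as the transformation engine.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE src_orders(id INTEGER, amount_cents INTEGER, day TEXT);
        INSERT INTO src_orders VALUES (1, 1250, '2003-01-15'),
                                      (2, 990, '2003-01-15');
        CREATE TABLE dw_daily_sales(day TEXT, total_eur REAL);
    """)

    # The generated "ETL program" is a single INSERT ... SELECT statement.
    con.execute("""
        INSERT INTO dw_daily_sales(day, total_eur)
        SELECT day, SUM(amount_cents) / 100.0 FROM src_orders GROUP BY day
    """)
    print(con.execute("SELECT * FROM dw_daily_sales").fetchall())
    # [('2003-01-15', 22.4)]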

Title:

EXTENDING TREE AUTOMATA TO MODEL XML VALIDATION UNDER ELEMENT AND ATTRIBUTE CONSTRAINTS

Author(s):

D. Laurent, D. Duarte, B. Bouchou, Mírian Halfeld Ferrari Alves

Abstract: Validation algorithms play a crucial role in the use of XML as the standard for interchanging data among heterogeneous databases on the Web. Although much effort has gone into formalizing the treatment of elements, attributes have been neglected. This paper presents a validation model for XML documents that takes into account both the element and the attribute constraints imposed by a given DTD. Our main contribution is the introduction of a new formalism to deal with both kinds of constraints. Our formalism has several interesting characteristics: it allows dealing with finite trees with attributes and elements; it is simple, being just an extension of regular tree automata; and it allows the construction of a deterministic automaton having the same expressive power as a DTD. Moreover, our formalism can be implemented easily, giving rise to an efficient validation method.
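
The simplified sketch below mirrors the two kinds of constraints the formalism handles, checking each element's children against a content-model regular expression and its attributes against a required set. It is only a top-down illustration under invented rules, not the authors' automaton construction.

    # Simplified DTD-style validation of both content models and attributes.
    import re
    import xml.etree.ElementTree as ET

    RULES = {
        # element: (regex over the space-joined child tags, required attrs)
        "library": (r"(book ?)*", set()),
        "book":    (r"title( author)+", {"isbn"}),
        "title":   (r"", set()),
        "author":  (r"", set()),
    }

    def validate(elem):
        if elem.tag not in RULES:
            return False                          # unknown element
        regex, required = RULES[elem.tag]
        children = " ".join(child.tag for child in elem)
        if re.fullmatch(regex, children) is None:
            return False                          # element constraint violated
        if not required <= set(elem.attrib):
            return False                          # attribute constraint violated
        return all(validate(child) for child in elem)

    doc = ET.fromstring('<library><book isbn="0-13-110362-8">'
                        '<title/><author/></book></library>')
    print(validate(doc))  # True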

Title:

AN ARCHITECTURAL FRAMEWORK FOR WEB APPLICATIONS

Author(s):

Stefan Jablonski, Ilia  Petrov, Christian Meiler

Abstract: The Web has an ever-changing technological landscape. The standards and techniques used to implement Web applications, as well as the platforms on which they are deployed, are subject to constant change. In order to develop Web applications in a structured and systematic manner regardless of these dynamics, a clear development methodology that treats flexibility and extensibility as central goals is needed. This paper proposes a definition of the term Web application and a conceptual architectural framework for Web applications. In addition, some important characteristics of such a framework are investigated and a construction methodology is presented.

Title:

A MINIMAL COVER FOR DECLARATIVE EXPRESSIONS

Author(s):

Margaret Miró, Josep Miró

Abstract: Descriptive knowledge about a multivalued data table or Information System can be expressed in declarative form by means of a binary Boolean-based language. This paper presents a contribution to the study of arbitrary multivalued Information Systems by introducing a non-binary array algebra that allows the treatment of multiple-valued data tables with systematic algebraic techniques. An Information System can be described by several distinct, but equivalent, array expressions. Among these, the all-prime-ar expression is singled out. The all-prime-ar expression is unique, although it is not necessarily minimal in the number of prime-ars. Finally, a completely intensional technique that determines a cover, i.e. a minimal prime-ar expression, is presented.

Title:

INTEGRATING DISTRIBUTED HETEROGENEOUS DATABASES AND DISTRIBUTED GRID COMPUTING

Author(s):

Tapio Niemi

Abstract: The aim of this paper is to present a middleware that combines the flexibility of distributed heterogeneous databases with the performance of local data access. The middleware supports both the XML and relational database paradigms and applies Grid security techniques. The computing and database access facilities are implemented using Grid and Java technologies. In our system, data can be accessed in the same way independently of its location, storage system, and even storage format. The system also supports distributed queries and transaction management over heterogeneous databases. Our system can be utilised in many applications related to the storage, retrieval, and analysis of information. Because of its advanced security components, e-commerce is a potential application area, too. The implementation is based on the principle that each node on the computing grid that contains a database also contains a Java agent. Database requests are first sent to the agent, which takes care of security tasks, possibly performs some preprocessing or translation of the query, and finally transmits the request to the database system. The agents also take care of distributed transaction management. The system does not have a dedicated master; instead, each agent is capable of handling distributed transactions by sending requests to other agents.

Title:

FORMALIZING TYPES WITH ULTIMATE CLOSURE FOR MIDDLEWARE TOOLS IN INFORMATION SYSTEMS ENGINEERING

Author(s):

Brian Nicholas Rossiter, David Nelson, Michael A Heather

Abstract: A definition of types in an information system is given, ranging from real-world abstractions, through the constructs employed for data and function descriptions, through data schemas and definitions, down to the physical data values held on disk. This four-level architecture of types is considered from the real-world interpretation of the types and the level-pairs between types: in terms of mappings between the types at each level, and formally in terms of a composition of functors, adjoints and natural transformations across the various types. The theory suggests that four levels are sufficient to provide ultimate closure for the computational types needed to construct information systems. The Godement calculus can be used to compose mappings at different levels. Examples of information systems are examined in terms of the four-level architecture, including the Information Resource Dictionary Standard (IRDS), the Grid, the semantic web using data exchange languages such as XML/RDF, and MOF/MDA with meta objects in a model-driven architecture. Only the IRDS and MOF are genuinely based on four levels. The IRDS appears to be the more open at the top level but does not support two-way mappings.

Title:

UPDATING GIS DATA USING PERSONAL DIGITAL ASSISTANTS

Author(s):

Alexandre Sousa, João Lopes, Henrique Silva

Abstract: Geo-referenced data is acquired in the field and only later integrated into an existing GIS. With the advent of mobile computing devices, namely Personal Digital Assistants (PDAs), this deferred integration task can be avoided. We extended a PDA GIS display system (Mordomo) to allow metadata updates. In this way, the task of updating geo-referenced data can be done on-site, in the place where the data is acquired, and its integration into the GIS can be done automatically. In order to make the system cope with many different applications, we decided to provide a converter from and to GML, the standard proposed by the OGC.

Title:

CONSTRAINTS AND MULTIDIMENSIONAL DATABASES

Author(s):

Franck Ravat, Faiza Ghozzi, Gilles Zurfluh, Olivier Teste

Abstract: The model we define organises data in a constellation of facts and dimensions with multiple hierarchies. In order to ensure data consistency and reliable data manipulation, we extend this constellation model with intra- and inter-dimension constraints. The intra-dimension constraints allow the definition of exclusions and inclusions between hierarchies of the same dimension. The inter-dimension constraints relate hierarchies of different dimensions. We also study the effects of these constraints on multidimensional operations. In order to validate the solutions we provide, we describe the integration of these constraints within the GEDOOH prototype.

Title:

CQSERVER: AN EXAMPLE OF APPLYING A DISTRIBUTED OBJECT INFRASTRUCTURE FOR HETEROGENEOUS ENTERPRISE COMPUTATION OF CONTINUAL QUERIES

Author(s):

Jennifer Leopold, Tyler Palmer

Abstract: The revolution in computing brought about by the Internet is changing the nature of computing from a personalized computing environment to a ubiquitous computing environment in which both data and computational resources are network-distributed. Client-server communications protocols permit parallel ad hoc queries of frequently-updated databases, but they do not provide the functionality to automatically perform continual queries to track changes in those data sources through time. The lack of persistence of the state of data resources requires users to repeatedly query databases and manually compare the results of searches through time. To date, continual query systems have lacked both external and internal scalability. Herein we describe CQServer, a scalable, platform- and implementation-independent system that uses a distributed object infrastructure for heterogeneous enterprise computation of both content- and time-based continual queries.

Title:

AN INTEGRATED APPROACH FOR EXTRACTION OF OBJECTS FROM XML AND TRANSFORMATION TO HETEROGENEOUS OBJECT ORIENTED DATABASES

Author(s):

Uzair Ahmad

Abstract: XML is widely used by database management systems for data representation and transportation. In this paper we focus on the integration of the latest W3C XML Schema specifications with hash maps for the efficient retrieval of objects from XML documents and their transformation into heterogeneous object-oriented databases. Rebuilding XML-ized databases from sizeable XML documents faces memory limitations. Besides the incorporation of XML Schema, this research also provides new options for handling large XML-ized database documents.

Title:

CONSTRUCTING FEDERATED ENTERPRISE SCHEMAS WITH CONCEPTUALIZED DATA WRAPPERS

Author(s):

Thiran Philippe

Abstract: The ubiquity of the Internet gives organizations the possibility of forming virtual alliances. This not only implies that business transactions must be linked, but also requires that business applications be integrated to support them. In this paper, we present an integral approach for blending modern business data requirements with existing legacy data resources that offers techniques at both the conceptual and the implementation level. To this end, we rely on access/integration reverse engineering technologies, including conceptual wrappers and gateways. The reverse engineering strategy allows modernized business data systems to co-exist with legacy repository systems. In particular, the methodology aims at constructing a conceptual Federated Enterprise Schema (FES) for supporting the complicated task of data wrapper integration throughout the development cycle: from specification down to the actual implementation. The FES model plays a pivotal role in the creation of virtual alliances by representing a unified data view for all participants. This unified data model serves as the foundation for the actual integration of the wrapped legacy data systems, possibly together with modernized data systems. Thus, in contrast to other available approaches, the FES is not developed from scratch but composed out of pre-existing legacy wrappers. The methodology is validated by an experimental prototype that is still under development and sits on top of DB-MAIN.

Title:

HEURISTIC METHOD FOR A REAL WORLD TRANSPORT

Author(s):

Meriam Kefi, Khaled Ghédira

Abstract: Within the framework of an international sporting event involving 23 countries in 23 disciplines and gathering no fewer than 15000 participants (VIPs, officials, athletes, judges, referees, doctors, journalists, technicians, volunteers), the central organizing committee was obliged to automate its activities and to distribute them among 16 committees, in order, above all, to guarantee the best conditions of organization and safety. In this context, we were asked to develop a prototype dealing with the transport activity.

Title:

METADATA-DRIVEN MODELING OF TOOL-INDEPENDENT DATA TRANSFORMATIONS

Author(s):

Heiko Tapken, Arne Harren

Abstract: Owing to their analytically oriented and cleansed integration of data from several operational and external data sources, data warehouse systems serve as a substantial technical foundation for decision support. Within the scope of our research we are seeking novel solutions for handling data acquisition within such environments. In this paper we present some aspects of our approach to data acquisition. We briefly sketch our framework and outline the underlying process model. We then introduce in detail an approach for tool-independent modeling of data transformations at a logical design layer, including a partial description of our meta-model and an introduction to the transformation language TL2.

Title:

EXTENDED PROTECTED DATABASES: A PRACTICAL IMPLEMENTATION

Author(s):

Steve  Barker, Paul Douglas

Abstract: We show how logic programs may be used to protect secure databases that are accessed via a web interface from the unauthorized retrieval of positive and negative information, and from unauthorized insert and delete requests. To achieve this protection, we use a deductive database expressed in a form that is guaranteed to permit only authorized access requests to be performed. The protection of the positive information that may be retrieved from a database and of the information that may be inserted is treated in a uniform way, as is the protection of the negative information in the database and of the information that may be deleted. The approach we describe has a number of attractive technical results associated with it: it enables access control information to be seamlessly incorporated into a deductive database, and it enables security information to be used to help optimize the evaluation of access requests. These properties are particularly useful in the context of a database accessed via the Internet, since this form of access requires a practical access control method that is both powerful and flexible. We describe our implementation of a web-server front end to a deductive database which incorporates our access authorization proposals.
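
A toy sketch of the flavour of rule-governed access (ours, not the authors' logic-program formulation): authorizations are themselves data, and retrieval or insertion is answered only when a covering authorization exists.

    # Toy authorization-filtered evaluation over a fact base; facts are
    # (relation, arguments) tuples, mimicking a deductive-database encoding.
    FACTS = {("salary", ("alice", 50000)), ("salary", ("bob", 60000))}

    # (subject, operation, relation) triples that are authorized
    AUTH = {("hr", "read", "salary"), ("hr", "insert", "salary")}

    def query(subject, relation):
        if (subject, "read", relation) not in AUTH:
            return []        # unauthorized: reveal nothing, not even absence
        return [args for rel, args in FACTS if rel == relation]

    def insert(subject, relation, args):
        if (subject, "insert", relation) not in AUTH:
            return False     # unauthorized insert request is rejected
        FACTS.add((relation, args))
        return True

    print(query("hr", "salary"))     # both tuples
    print(query("guest", "salary"))  # []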

Title:

USABILITY AND WEB SITE EVALUATION: QUALITY MODELS AND USER TESTING EVALUATIONS

Author(s):

Francisco Montero

Abstract: As the Internet expands, and the amount of information that we can find on the web grows along with it, the usability of pages becomes more important. Many sites still receive quite low evaluations from participants when it comes to certain aspects of usability. This paper proposes a set of quantitative and qualitative metrics under a usability-centred quality model, together with a usability testing experiment in which this model can be validated. Usability tests do a great job of showing what is not working in a design, but one should not fall into the trap of asking testers to suggest design improvements: creating Web sites is easy, but creating sites that truly meet the needs and expectations of the wide range of online users is quite another story.

Title:

COPLA: A PLATFORM FOR EAGER AND LAZY REPLICATION IN NETWORKED DATABASES

Author(s):

Francesc Daniel Muñoz-Escoí, Jose Manuel Bernabeu-Auban, Luis Irún-Briz, Hendrik Decker

Abstract: COPLA is a software tool that provides an object-oriented view of a network of replicated relational databases. It supports a range of consistency protocols, each of which supports different consistency modes. The resulting scenario is a distributed environment where applications may start multiple database sessions, which may use different consistency modes, according to their needs. This paper describes the COPLA platform, its architecture, its support for database replication and one of the consistency algorithms that have been implemented on it. A system of this kind may be used in the development of applications for companies that have several branch offices, such as banks, hypermarkets, etc. In such settings, several applications typically use on-site generated data in local branches, while other applications also use information generated in other branches and offices. The services provided by COPLA enable an efficient catering for both local and non-local data querying and processing.

Title:

ONTOLOGIES: SOLVING SEMANTIC HETEROGENEITY IN A FEDERATED SPATIAL DATABASE SYSTEM

Author(s):

Villie  Morocho Zurita, Lluis Pérez Vidal

Abstract: Information integration has been an important area of research for many years, and the problem of integrating geographic data has recently emerged. This paper presents an approach based on the use of ontologies for solving the problem of semantic heterogeneity in the process of constructing a Federated Schema in the framework of geographic data. We make use of standard technologies (UML-based OMT-G, XML-based XMI, GML from OpenGIS).

Title:

OLAPWARE: ONLINE ANALYTICAL PROCESSING MIDDLEWARE

Author(s):

Fernando  Souza, Valeria Times, Robson Fidalgo

Abstract: This paper presents OLAPWare, a Java middleware providing OLAP services compliant with the OLE DB for OLAP standard. OLE DB for OLAP is an industrial standard for enabling interoperability among OLAP tools. However, it is limited to the Windows/COM platform, even when a Java API is used. OLAPWare aims to overcome this limitation by allowing its Java clients to query the objects of a dimensional data cube without depending on the chosen implementation platform. In addition, OLAPWare can be used as a server by other applications requiring online analytical processing, such as Geographical Information Systems and Data Mining.

Title:

A METHODOLOGICAL FRAMEWORK FOR BUSINESS MODELLING

Author(s):

Judith Barrios Albornoz, Jonás Montilva Calderón

Abstract: The globalisation phenomenon has created a very competitive environment for modern business organisations. In order to survive and remain competitive in that environment, an organisation has to adapt quickly, with minimal negative impact on its current ways of working and organising. A business model contains the knowledge needed not only to support managers’ decisions concerning change and adaptation, but also to ensure the timeliness and relevance of the information produced by the automated systems supporting them. The purpose of this paper is to present a methodological framework for business modelling. This framework allows its users to represent an organisation’s elements from different perspectives, taking their relationships into account. A business model is presented as a set of three interrelated models: the Business Goals model, the Business Processes model, and the Information Systems model. The main contribution of our paper is to make visible and explicit the relationships among the three levels: goals, business processes and information systems. These relationships are commonly hidden or implicit in most business modelling methods. Our proposal has proven its usefulness as a strategic management tool in two case studies.

Title:

MODELLING DATA WAREHOUSING MANAGEMENT IN ENTERPRISE PERFORMANCE

Author(s):

Alberto Carneiro

Abstract: This paper aims to contribute to a better understanding of the process through which data warehouses (DW), information technology, other technical tools, and organisational actors can contribute to enterprises’ effectiveness in facing the challenges continuously arising in the information technology domain. It first presents some researchers’ opinions about the role of Data Warehousing Management (DWM) in the decision-making process. It then argues that a set of variables influences the relationship between decision effectiveness and a valuable utilisation of DWM’s potential. A conceptual model for the optimisation of enterprise performance as a function of DWM is suggested.

Title:

DIA: DATA INTEGRATION USING AGENTS

Author(s):

Ulrich Schiel, Philip Medcraft, Cláudio Baptista

Abstract: The classic problem of information integration has been addressed for a long time. The Semantic Web project is aiming to define an infrastructure that enables machine understanding. This is a vision that tackles the problem of semantic heterogeneity by using ontologies for information sharing. Agents have an important role in this infrastructure. In this paper we present a new solution, known as DIA (Data Integration using Agents), for semantic integration using mobile agents and ontologies.

Title:

DATA WAREHOUSE REFRESHMENT MAINTAINING TEMPORAL CONSISTENCY

Author(s):

Araque Francisco

Abstract: The refreshment of a data warehouse is an important process which determines the effective usability of the data collected and aggregated from the sources. Indeed, the quality of the data provided to decision makers depends on the capability of the data warehouse system to convey, in a reasonable time, the changes made at the data sources from the sources to the data marts. We present our current work on: maintaining temporal consistency between the data extracted from semi-structured information sources and the data loaded into the data warehouse, according to the temporal requirements of the data warehouse designer; and monitoring the web in accordance with those requirements. We use different approaches to maintain the temporal coherency of data gathered from web sources, and wrappers extended with temporal characteristics to keep temporal consistency. We also present an integrated database architecture in which data warehouses are part of the database, extended in order to express temporal concepts.

Title:

PATTERNS AND COMPONENTS TO CAPITALIZE AND REUSE A COOPERATIVE INFORMATION SYSTEM ARCHITECTURE

Author(s):

Magali SEGURAN, Vincent COUTURIER

Abstract: The growth and variety of distributed information sources imply a need to exchange and/or share information extracted from various heterogeneous databases. Cooperation between legacy information systems requires advanced architectures able to resolve the conflicts arising from data heterogeneity: technical, syntactic, structural and semantic conflicts. We therefore propose a multi-level architecture based on object orientation and distributed artificial intelligence to resolve these conflicts. Through cooperation patterns and components, we propose to capture the knowledge embodied in this architecture and reuse it to develop new cooperative applications.

Title:

SOFTWARE PROCESS IMPROVEMENT DEFINED

Author(s):

Ivan Aaen

Abstract: This paper argues in favor of the development of explanatory theory on software process improvement. The commitment over the last one or two decades to prescriptive approaches in software process improvement theory may have contributed to the emergence of a gulf dividing theorists and practitioners. It is proposed that this divide be addressed by developing theory that evaluates prescriptive approaches and informs practice, with a focus on the policymaking and process control aspects of software process improvement efforts.

Title:

SANGAM: A FRAMEWORK FOR MODELING HETEROGENEOUS DATABASE TRANSFORMATIONS

Author(s):

Kajal Claypool

Abstract: A broad spectrum of data is available on-line in distinct heterogeneous sources and stored under different formats. As the number of systems that utilize these heterogeneous data sources grows, the importance of data translation and conversion mechanisms increases greatly. The goal of our work is to design a framework that simplifies the task of translation specification and execution. Translation specification between the source and the target schema can be accomplished via (1) the discovery of matches between the source and the target schemata; (2) the application of pre-defined translation templates; or (3) manual user specification. In this paper we present a flexible, extensible and re-usable translation modeling framework wherein users can (1) explicitly model the translations between schemas; (2) compose translations from an existing library of modeled translation patterns; (3) choose from a library of translation operators; (4) generate translation models based on a match process; (5) edit such translation models; and (6), for all of these translation models, choose automated execution strategies that transform the source schema and data to the desired target schema and data. We also present the system architecture for such a translation modeling framework.

Title:

A COMPONENT-BASED METHOD FOR DEVELOPING WEB APPLICATIONS

Author(s):

Jonas Montilva, Judith Barrios

Abstract: We describe, in this paper, a component-based software engineering method for helping development teams plan, organize, control, and carry out the development of web applications. The method is described in terms of three methodological elements: a product model that captures the architectural characteristics of web applications, a team model that describes the different roles to be played by the members of a team during the development of web applications, and a process model that integrates the managerial and technical activities required to develop componentized web applications of high quality. The main features of the method are its component-based approach, which helps reduce costs and development time; its integration of managerial and development processes into a single process model; and its emphasis on business modelling as a way of gaining a better understanding of the application domain's objectives, functions and requirements.

Title:

ENTERPRISE MIDDLEWARE FOR SCIENTIFIC DATA

Author(s):

Judi Thomson

Abstract: We describe an enterprise middleware system that integrates, from a user’s perspective, data located on disparate data storage devices without imposing additional requirements upon those storage mechanisms. The system provides advanced search capabilities by exploiting a repository of metadata that describes the integrated data. The search mechanism integrates information from a collection of XML documents with diverse schemas. Users construct queries using familiar search terms, and the enterprise system uses domain representations and vocabulary mappings to translate the user’s query, expanding the search to include other potentially relevant data. The enterprise architecture allows flexibility with respect to domain-dependent processing of user data and metadata.

Title:

RECURSIVE PATCHING - AN EFFICIENT TECHNIQUE FOR MULTICAST VIDEO STREAMING

Author(s):

Jack Y. B. Lee, Y. W. Wong

Abstract: Patching and transition patching are two techniques proposed for building efficient video-on-demand (VoD) systems. Patching works by letting a client play back video data from a patching stream while caching video data from another multicast video stream for later playback. The patching stream can be released once video playback reaches the point where the cached data begins, and playback continues via the cache and the shared multicast channel for the rest of the session. Transition patching takes this technique one step further by allowing a new client to cache video data not only from a full-length multicast channel but also from a nearby in-progress patching channel, further reducing resource consumption. This study generalizes these techniques into a recursive patching scheme in which a new client can cache video data recursively from multiple patching streams, reducing resource consumption still further and unifying the existing patching schemes as special cases. Simulation results show that it can achieve significant reductions (e.g. 60%~80%) in startup latency at the same load and with the same system resources.
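
For intuition about the baseline the recursion builds on, the sketch below computes the patch-stream length a client needs under basic (non-recursive) patching; the stream schedule and arrival times are invented.

    # Back-of-the-envelope sketch of basic patching: a client arriving at
    # time t joins the latest full-length multicast stream, started at time
    # s <= t, and needs a dedicated patch stream only for the first (t - s)
    # seconds of the video. Transition and recursive patching shorten this
    # patch further by also caching from nearby in-progress patch streams.
    def patch_length(arrival, full_stream_starts):
        """Seconds of patch stream a client needs under basic patching."""
        s = max(start for start in full_stream_starts if start <= arrival)
        return arrival - s

    full_starts = [0, 300, 600]        # a new full stream every 300 s
    for t in [10, 290, 310]:
        print(t, patch_length(t, full_starts))
    # 10 -> 10 s patch, 290 -> 290 s patch, 310 -> 10 s patch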

Title:

DESIGN OF A LARGE SCALE DATA STREAM RECORDER

Author(s):

Roger Zimmermann

Abstract: Presently, digital continuous media (CM) are well established as an integral part of many applications. In recent years, a considerable amount of research has focused on the efficient retrieval of such media, while scant attention has been paid to servers that can record such streams in real time. However, more and more devices produce direct digital output streams; hence the need arises to capture and store these streams with an efficient data stream recorder that can handle both recording and playback of many streams simultaneously and provide a central repository for all data. We propose a design for a large-scale data stream recorder. Our goal is to introduce a unified architecture that integrates multi-stream recording and retrieval in a coherent manner. The discussion raises practical issues such as support for multi-zone disk drives, variable bit rate media, and disk drives whose write bandwidth differs from their read bandwidth. We provide initial solutions for some issues, while others will need to be investigated further.

Title:

DATA CLEANSING FOR FISCAL SERVICES: THE TAVIANO PROJECT

Author(s):

Antonella Longo, Mario Bochicchio

Abstract: Fiscal incomes are vital for governments, both for central and local agencies; therefore data quality policies and on-line fiscal services will play a key role in the e-Government scenario. In the authors’ opinion, no matter how well an agency implements innovative services, poor data quality can destroy their utility and cost real money. The original contribution of this paper concerns the Taviano project, a real experience of data quality management for on-line fiscal services in Italy. First, we introduce the architecture of the system used to clean fiscal data. Second, we show how appropriate data analysis procedures can reduce the need for clerical review of fiscal data (manual inspection implies higher costs). The proposed system is based on an innovative variant of the well-known LCS (Longest Common Subsequence) approximate string matching algorithm.
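
Since the project's variant is not spelled out here, the sketch below gives only the textbook dynamic-programming LCS plus a simple similarity ratio of the kind used to rank candidate matches before clerical review; the example strings are invented.

    # Reference dynamic-programming LCS, the building block behind the
    # approximate matcher described above (the project's variant differs).
    def lcs_length(a: str, b: str) -> int:
        m, n = len(a), len(b)
        dp = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                if a[i - 1] == b[j - 1]:
                    dp[i][j] = dp[i - 1][j - 1] + 1
                else:
                    dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
        return dp[m][n]

    def similarity(a: str, b: str) -> float:
        """LCS-based score in [0, 1]; 1.0 means one string contains the
        other as an in-order subsequence."""
        return lcs_length(a, b) / max(len(a), len(b)) if a or b else 1.0

    # Typical fiscal-data use: match noisy taxpayer names for review.
    print(similarity("ROSSI MARIO", "ROSI MARIO"))   # ~0.91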

Title:

ASSUMING A ROADMAP STRATEGY FOR E-BUSINESS

Author(s):

Luis Borges Gouveia, Feliz Ribeiro Gouveia

Abstract: Current developments towards the adoption of e-business practices within existing organisations show that a number of requirements must be met before internal and external satisfaction, integration and success are achieved. In order to provide a clear and straightforward adoption of e-business practices, a roadmap strategy is proposed, based on the accomplishment of a number of steps, to provide a more stable environment for conducting electronic-based business. The paper proposes a roadmap strategy based on the organisation's need to gain in-house experience in topics such as technology management, information systems, information management and knowledge management before approaching e-business practices. The discussion is framed using the e-readiness concept and considers electronic business to be the conduct of business using a mostly electronic support for interaction between the organisation and the people involved: suppliers, customers, partners and the organisation’s professionals.

Title:

TRY AND PATCH: AN APPROACH TO IMPROVE THE TRUST IN SOFTWARE COMPONENTS

Author(s):

Philippe Mauran, Gerard Padiou, Pham Loc

Abstract: We investigate the adaptability of components to their clients' use. This goal implies specifying user behavior in order to control the effective use of components. Furthermore, this control may be complemented by the dynamic adaptation of components to improve the provided service. Through an illustrative example, we first define the problem of safe use. We then propose an approach to ensure this safety through the notion of a profile. Lastly, a pattern is proposed for the implementation of such a safety service.

Title:

BUSINESS MODELLING WITH UML: IMPLEMENTING CUSTOMER ORIENTED IT SYSTEMS

Author(s):

Ashok Ranchhod, Calin Gurau, Pauline Wilcox

Abstract: The World Wide Web has allowed companies to reach customers in markets which were previously inaccessible, and to compete efficiently with traditional store-based retailers (de Kare-Silver, 1998). However, the low barriers to entry, the size of the market and the relatively low costs of on-line business activities have created a situation of intense competition. The only answer to this situation is to build a strong brand name and to obtain the customers' long-term loyalty (Novo, 2001a). The Internet empowers the customer by offering accessibility and ease of communication to previously inaccessible (global) markets (Chaston, 2001). The Internet user has the opportunity to switch suppliers with several mouse clicks, to compare prices and products on a worldwide basis, and to select the best available offer without external pressure. The classical offer of a low-price/high-quality product does not work properly on the Internet, because the same offer may be available from hundreds of other on-line retailers (Wundermann, 2001). One of the main ways in which on-line retailers can create competitive advantage is by offering customer-firm satisfaction (by developing customer relationship strategies) in addition to product-related satisfaction. The adoption of a customer-oriented strategy is referred to as Customer Relationship Management (CRM). In the on-line retailing environment, the introduction and maintenance of CRM requires a complex process of planning, analysis, strategy design and implementation. This paper discusses the importance of business modelling to support this process in the digital retailing arena and advocates the use of the Unified Modelling Language (UML) as a standard modelling language to support business modelling.

Title:

AN ONTOLOGY-BASED APPROACH FOR EXCHANGING DATA BETWEEN HETEROGENEOUS DATABASE SYSTEMS

Author(s):

Yamine AIT-AMEUR, Mourad Mimoune, Guy PIERRA

Abstract: This paper presents an approach which allows data exchange between heterogeneous databases, simultaneously targeting semantic and structural heterogeneity. From the semantic point of view, it is an ontology-based approach: on the one hand, the ontology can be referenced through universal identifiers and accessed by queries; on the other hand, it can be exchanged between heterogeneous database systems. From the structural point of view, the approach is based on the use of a generic meta-schema, formalised in the EXPRESS language, which allows the exchange of instances of any database schema. Exchanged instances reference, as much as needed, the globally unique identifiers defined by the ontology. Moreover, the conversion of exchange files to the various target systems can be achieved in a generic manner (i.e. independently of the particular exchanged model). The suitability of the EXPRESS language for implementing such a program directly is presented as well.

Title:

AN XML VIEW OF THE "WORLD"

Author(s):

Leonardo Mariani, Emanuela Merelli, Ezio Bartocci

Abstract: The paper presents "Any Input XML Output" (AIXO), a general and flexible software architecture for wrappers. The architecture has been designed to present data sources as collections of XML documents. The use of XSLT as extraction language permits extensive reuse of standards, tools and knowledge. A prototype developed in Java has been effectively proven in several case studies. The tool has also been successfully integrated as a wrapper service into BioAgent, a mobile agent middleware specialized for use in the molecular biology domain.

Title:

EXPLOITATION OF THE DATA WAREHOUSE AT THE SERVICE OF HOTELS: A PROPOSAL FOR CLIENT ORIENTATION

Author(s):

Rosario Berriel, Antonia Gil, Isabel Sánchez, Zenona González

Abstract: Tourist businesses are seeing their work conditioned by the changes and transformations derived from the global environment in which they operate. Facing this new situation, they need to consider changes in their business methods and replace process-oriented management with client-oriented management. To do so, they should focus on a strategy of information integration which allows them to bring the client into the business's value chain. With this strategy, they can exploit the full potential of tools like the Data Warehouse and Customer Relationship Management to obtain knowledge of their clients and offer services adapted to demand.

Title:

ADAPTIVE SOFTWARE QUALITY

Author(s):

Jeffrey Voas

Abstract: In this paper, I discuss what I believe is the grand challenge facing the software quality research community: the ability to accurately determine, in the very earliest stages of development, the techniques that will be needed to achieve desired levels of non-functional attributes such as: reliability, availability, fault tolerance, testability, maintainability, performance, software safety, and software security. I will further consider the associated technical and economic tradeoffs that must be made in order to: (1) achieve these desired qualities, and (2) to certify that these qualities will be exhibited when the software is deployed. And I will also take into account the fact that satisfying a particular level of each attribute requires specific cost expenditures, some of these attributes conflict with each other, and when the environment or usage profile of the software is modified, all guarantees or claims of quality should be viewed suspiciously until additional evidence is provided.

Title:

SOFTWARE APPLICATION PACKAGES SELECTION: AN EVALUATION PROCESS BASED ON THE SPIRAL MODEL

Author(s):

Claudine Toffolon, Salem Dakhli

Abstract: Cost overruns, late deliveries, poor quality, and user resistance are examples of the seemingly intractable problems encountered in software development and maintenance activities and related to the “software crisis”. In particular, maintenance of existing software systems results in visible and invisible application backlogs that create ill-will between software users and software developers. To reduce the problems related to application backlogs, two strategies have been proposed: software productivity improvement and reduction of the amount of work. The use of standard application packages implements the second strategy. Although software packages are available quickly and are usually less expensive than software developed in-house, the procurement of such packages involves many risks. In this paper, we propose a tool evaluation process based on the spiral model to cope with software package selection. This process rests on the global software engineering model elaborated by the authors in a previous work. It has been applied in a French insurance company to select three categories of tools: a software project management tool, a software development tool, and a software package to support the litigation department's activity.

Title:

A FORMAL MODEL FOR OBJECT-RELATIONAL DATABASES

Author(s):

Valéria Magalhães Pequeno

Abstract: This paper describes a new object-relational data model that will be used for modeling the integrated view schema and the source database schemas of a data warehouse environment. The model distinguishes between object classes and literal classes. Furthermore, it divides a schema into a structure schema and a behaviour schema. The main contribution of this work is to define a formal object-relational data model which is general enough to encompass the constructs of any object-oriented data model and of most value-oriented models.

Title:

QUEROM: AN OBJECT-ORIENTED MODEL FOR REWRITING QUERIES USING VIEWS

Author(s):

Abdelhak Seriai

Abstract: We propose in this article an object-oriented approach to rewriting queries using views. Our approach aims to mitigate certain limitations of existing query rewriting approaches, among them the failure to consider certain types of complex object-oriented queries and the lack of uniformity of these approaches with respect to the object-oriented model. The proposed approach is based, on the one hand, on an object-oriented representation model of queries and, on the other hand, on the object-oriented classification mechanism used to determine query containment. Classes representing the queries that define existing views are organized in an inheritance hierarchy; the classification of a class representing a new query within this hierarchy is then exploited to generate possible rewritings of that query.

Title:

REPLICATION MANAGEMENT IN DISTRIBUTED DATABASES

Author(s):

Dejan Damnjanovic, Miodrag Stanic, Ivana Mijajlovic, Anastasija Kokanovic

Abstract: Data replication is basically defined as the maintenance of copies of data. In a replicated database, where copies of the same data are stored on multiple sites, replication can provide faster data access and fault tolerance. One of the main challenges in introducing replication is maintaining consistency without affecting performance. Since synchronous replication techniques degrade system performance considerably, asynchronous replication is implemented as a built-in solution in a great number of commercial database management systems. Data replication is widely used in several types of applications that work with distributed databases, such as data warehouses, mobile environments and large-scale systems. These systems, quite different in nature, impose different requirements and various problems that replication technology has to solve. In this paper, a replication framework is explained. The framework is meant to provide a basis for configuring a replication environment. Building on the solution offered by Oracle, a special algorithm is developed. The algorithm aims to solve the problem of updating data on every site with guaranteed data consistency by avoiding conflicts.

Title:

A DESIGN OF A DISTRIBUTED APPLICATION: UTILISING MICO IN A PROTOTYPE OODB

Author(s):

Wei Tse Chew

Abstract: In distributed systems, objects are distributed in an environment that utilises different hardware architectures, operating systems and programming languages. Communication between objects in a heterogeneous distributed environment is accomplished via middleware. Middleware resides between the application and the operating system, hiding some of the underlying complexities of both. The increasing diversity of computer platforms in the worldwide IT infrastructure has helped make middleware popular. Today many organizations use the Internet, which is a very large distributed environment, to integrate the various systems used within them. Hence there is a need for a standard specification such as CORBA to describe the basic infrastructure required to support distributed objects. The design process of a distributed application consists of several steps, which can be divided into three main groups: the OODB section, the application that utilises the OODB, and the IDL that enables objects to be transferred from one OODB to another.

Title:

DYNAMIC SYSTEM OF MANAGEMENT AND BUSINESS PROCESSES

Author(s):

Arminda Guerra, Eurico Ribeiro Lopes

Abstract: A great deal of valuable information hidden in industrial environments is barely exploited, since we need abstract, high-level information that is tailored to the user's needs (Staud et al, 1998). The real value of an organization's legacy information technology systems consists in the “accumulation of years of business rules, policies, expertise and ‘know-how’” embedded in those systems. In many cases it may be necessary to build and test a prototype to develop a good understanding of a system’s needs and requirements (Jurison, 1999) (Christina, 2000). In this paper we describe a system which consolidates the database systems of the legacy systems with the business process rules. Once this is done, information can easily be retrieved by any business section, independently of the legacy systems.

Area 2 - ARTIFICIAL INTELLIGENCE AND DECISION SUPPORT SYSTEMS

Title:

THE ESSENCE OF KNOWLEDGE MANAGEMENT

Author(s):

Marco Bettoni, Sibylle Schneider

Abstract: We contend in this presentation that more sustainable and successful Knowledge Management (KM) solutions can be built by using the principles of Knowledge Engineering (KE) to understand knowledge in a more appropriate way. We explore five aspects of practical knowledge relevant to promoting the essential Human Factors (HF) involved in KM tasks: the value and function of knowledge, the motor and mechanism of knowledge, the two states and three conversions of individual knowledge, the logic of experience (the organisation of knowledge) and knowledge processes (the wheel of knowledge). We explain their consequences in the form of five principles which, we suggest, could be used as leading criteria for designing and evaluating KM solutions and systems in a new way, more appropriate for successfully implementing the old insight about the essential role of people.

Title:

CONVENTIONAL VERSUS INTERVAL CLUSTERING USING KOHONEN NETWORKS

Author(s):

Mofreh Hogo, Pawan Lingras, Miroslav Snorek

Abstract: This paper provides a comparison between conventional and interval set representations of clusters obtained using the Kohonen neural networks. The interval set clustering is obtained using a modification of the Kohonen algorithm based on the properties of rough sets. The paper includes experimental results for a web usage mining application. Clustering is one of the important functions in web usage mining. The clusters and associations in web usage mining do not necessarily have crisp boundaries. Researchers have studied the possibility of using fuzzy sets in web mining clustering applications. Recent attempts have adapted genetic algorithms, K-means clustering algorithm, and Kohonen neural networks based on the properties of rough sets to obtain interval set representation of clusters. The comparison between interval and conventional clustering, provided in this paper, may be helpful in understanding the usefulness of some of the non-conventional clustering algorithms in certain data mining applications.
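To make the interval-set idea concrete, the minimal Python sketch below applies a Kohonen-style competitive update in which a clear winner claims an object alone, while near-ties (decided by a simple distance-ratio test, our own assumption) place the object in the upper approximations of several clusters. This illustrates the rough-set modification in spirit only, not the authors' algorithm; the ratio threshold and learning schedule are placeholders.

    import numpy as np

    def interval_kohonen(X, n_clusters=3, epochs=50, lr=0.5, ratio=1.3, seed=0):
        """Toy rough/interval Kohonen clustering: near-tied prototypes share
        the update (upper approximations); clear winners take it alone."""
        rng = np.random.default_rng(seed)
        W = X[rng.choice(len(X), n_clusters, replace=False)].astype(float)
        for t in range(epochs):
            alpha = lr * (1 - t / epochs)              # decaying learning rate
            for x in X:
                d = np.linalg.norm(W - x, axis=1)
                first, second = np.argsort(d)[:2]
                winners = [first]
                if d[second] < ratio * max(d[first], 1e-12):
                    winners.append(second)             # near-tie: shared update
                for j in winners:
                    W[j] += alpha * (x - W[j]) / len(winners)
        lower = [[] for _ in range(n_clusters)]        # certain members
        upper = [set() for _ in range(n_clusters)]     # possible members
        for i, x in enumerate(X):
            d = np.linalg.norm(W - x, axis=1)
            first, second = np.argsort(d)[:2]
            if d[second] < ratio * max(d[first], 1e-12):
                upper[first].add(i); upper[second].add(i)
            else:
                lower[first].append(i); upper[first].add(i)
        return W, lower, upper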

Title:

PARTIALLY CONNECTED NEURAL NETWORKS FOR MAPPING PROBLEMS

Author(s):

Can Isik, Sanggil Kang

Abstract: In this paper, we use partially connected feedforward neural networks (PCFNNs) for input-output mapping problems, avoiding the difficulty of determining the number of training epochs that arises when fully connected feedforward neural networks (FCFNNs) are trained. PCFNNs can also, in some cases, improve generalization. Our method is applicable to real input-output mapping problems such as blood pressure estimation.

Title:

MAPPING DESIGNS TO USER PERCEPTIONS USING A STRUCTURAL HMM: APPLICATION TO KANSEI ENGINEERING

Author(s):

Jun Tan, D. Bouchaffra

Abstract: This paper presents a novel approach for mapping designs to user perceptions. We show how this interaction can be expressed using three classification techniques. We introduce a novel classifier called the "structural hidden Markov model" (SHMM) that enables learning and prediction of user perceptions. We have applied this approach to Kansei engineering in order to map car external contours (shapes) to customer perceptions. The accuracy obtained using the SHMM is 90%. This model has outperformed the neural network and the k-nearest-neighbor classifiers.

Title:

IMPROVING SELF-ORGANIZING FEATURE MAP (SOFM) TRAINING ALGORITHM USING K-MEANS INITIALIZATION

Author(s):

Abdel-Badeeh Salem, Mostafa Syiam, Ayad Fekry Ayad

Abstract: The Self-Organizing Feature Map (SOFM) is a competitive neural network in which neurons are organized in an l-dimensional lattice (grid) representing the feature space. The principal goal of the SOFM is to transform an incoming pattern of arbitrary dimension into a one- or two-dimensional discrete map, and to perform this transformation adaptively in a topologically ordered fashion. Usually, the SOFM is initialized using random values for the weight vectors. This paper presents a different approach for initializing the SOFM, which uses the K-means algorithm as an initialization step. The K-means algorithm is used to select N² (the size of the feature map to be formed) cluster centers from the data set. Then, depending on the interpattern distances, the N² selected cluster centers are organized into an N x N array so as to form the initial feature map. Finally, the initial map is fine-tuned by the traditional SOFM algorithm. Two data sets are used to compare the proposed method with the traditional SOFM algorithm. The comparison indicated that, on the first data set, the proposed method required 5,000 epochs to fine-tune the map while the traditional SOFM required 20,000 epochs (4 times faster); on the second data set, the traditional SOFM required 10,000 epochs while the proposed method required only 1,000 epochs (10 times faster).
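The initialization step described above can be pictured with the following sketch: K-means supplies the N² centers, a greedy nearest-neighbour chain stands in for the paper's interpattern-distance arrangement (an assumption on our part), and the ordinary Kohonen update fine-tunes the map. Epoch counts, learning rate and neighbourhood width are placeholder values, not the paper's settings.

    import numpy as np
    from sklearn.cluster import KMeans

    def kmeans_init_sofm(X, N=4, epochs=2000, lr=0.5, sigma=1.0, seed=0):
        """K-means centers arranged on an N x N grid, then Kohonen fine-tuning."""
        km = KMeans(n_clusters=N * N, n_init=10, random_state=seed).fit(X)
        centers = km.cluster_centers_
        order, used = [0], {0}                 # greedy nearest-neighbour chain
        while len(order) < len(centers):
            d = np.linalg.norm(centers - centers[order[-1]], axis=1)
            d[list(used)] = np.inf
            j = int(np.argmin(d)); order.append(j); used.add(j)
        W = centers[order].reshape(N, N, -1)   # initial feature map
        grid = np.stack(np.meshgrid(np.arange(N), np.arange(N),
                                    indexing="ij"), axis=-1)
        rng = np.random.default_rng(seed)
        for t in range(epochs):                # traditional SOFM fine-tuning
            x = X[rng.integers(len(X))]
            bmu = np.unravel_index(np.argmin(np.linalg.norm(W - x, axis=2)),
                                   (N, N))     # best-matching unit
            h = np.exp(-((grid - np.array(bmu)) ** 2).sum(axis=2)
                       / (2 * sigma ** 2))     # neighbourhood function
            W += lr * (1 - t / epochs) * h[..., None] * (x - W)
        return W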

Title:

MODEL-BASED NEURAL NETWORKS FOR BRAIN TUMOR DIAGNOSIS

Author(s):

A. Salem, Safaa Amin, M. Tolba

Abstract: This study aims to develop an intelligent neural network based system to automatically detect and classify brain tumors from head Magnetic Resonance Images (MRI) to help non-expert doctors in diagnosing brain tumors. Three types of brain tumors have been investigated: acoustic neuroma, a benign tumor occurring in the acoustic canals; optical glioma, which occurs in the optic nerve or in the area connecting the two nerves; and astrocytoma. Two NN-based systems were developed for brain tumor diagnosis. The first system uses Principal Component Analysis (PCA) for dimensionality reduction and feature extraction, extracting the global features of the MRI cases. The second system uses manual segmentation and the expectation-maximization segmentation algorithm to extract the local features of the MRI cases. A Multi-Layer Perceptron (MLP) network is then used to classify the features obtained from the PCA and from the segmentation, and a comparison study is made between the performance of the two systems. Experimental results on real cases show that a peak recognition rate of 100% is achieved using PCA, and 96.7% when applying the segmentation algorithm before classification.

Title:

AGENTS FOR HIGH-LEVEL PROCESS MANAGEMENT: THE RIGHT ACTIVITIES, PEOPLE AND RESOURCES TO SATISFY PROCESS CONSTRAINT

Author(s):

John Debenham

Abstract: Multiagent systems are an established technology for managing high-level business processes. High-level business processes are considerably more complex to manage than production workflows. They are opportunistic in nature whereas production workflows are routine. Each stage in a high-level process usually has a well-defined sub-goal, but the best way to achieve that sub-goal within value, time and cost constraints may not be known for certain. To achieve each sub-goal, resources, including human resources, must be found and brought together in an appropriate way. Alternatives include face-to-face meetings, and email exchanges. In a multiagent system for high-level process management each player is assisted by a personal agent. The system manages goal-driven sub-processes and manages the commitments that players make to each other. These commitments will be to perform some task and to assume some level of responsibility. The way in which the selection of tasks and the delegation of responsibility is done attempts to reflect high-level corporate principles and to ‘sit comfortably’ with the humans involved. Commitments are derived through a process of inter-agent negotiation that considers each individual’s constraints and performance statistics. The system has been trialed on business process management in a university administrative context.

Title:

A COMPARISON OF AUSTRALIAN FINANCIAL SERVICE FAILURE MODELS:HYBRID NEURAL NETWORKS, LOGIT MODELS AND DISCRIMINANT ANALYSIS

Author(s):

Juliana Yim, Heather Mitchell

Abstract: This study investigated whether two artificial neural networks (ANNs), a multilayer perceptron (MLP) and hybrid networks combining statistical and ANN approaches, can outperform traditional statistical models for predicting Australian financial service failures one year prior to financial distress. The results suggest that hybrid neural networks outperform all other models one and two years before failure. The hybrid neural network model is therefore a very promising tool for failure prediction in terms of predictive accuracy. This supports the conclusion that for researchers, policymakers and others interested in early warning systems, hybrid networks would be useful.

Title:

THREE-DIMENSIONAL OBJECT RECOGNITION USING SUPPORT VECTOR MACHINE NEURAL NETWORK BASED ON MOMENT INVARIANT FEATURES

Author(s):

Doaa Hegazy, Ashraf Ibrahim, Mohamed Said Abdel Wahaab, Sayed Fadel

Abstract: A novel scheme using a combination of moment invariants and a Support Vector Machine (SVM) network is proposed for recognition of three-dimensional (3-D) objects from two-dimensional (2-D) views. The moment invariants are used in the feature extraction process since they are invariant to translation, rotation and scaling of objects. Support Vector Machines (SVMs) have recently been proposed as a new technique for pattern recognition. In the proposed scheme, an SVM neural network, trained using the Kernel Adatron (KA) with a Gaussian kernel, is used for the training (classification) and testing steps. The proposed scheme is applied to a database of 1440 different views of 20 complex 3-D objects, and very good results are achieved without adding noise to the test views. Using noisy test data also yielded promising results.
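To make the feature-extraction step concrete, the sketch below computes the first two Hu moment invariants, which are invariant to translation, rotation and scale, and feeds them to an off-the-shelf RBF-kernel SVM. The sklearn SVC trainer stands in for the Kernel Adatron used in the paper, and the random images and labels are placeholders.

    import numpy as np
    from sklearn.svm import SVC

    def hu_moments(img):
        """First two Hu moment invariants of a 2-D grayscale image."""
        y, x = np.mgrid[:img.shape[0], :img.shape[1]]
        m00 = img.sum()
        xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
        def mu(p, q):                       # central moments
            return ((x - xc) ** p * (y - yc) ** q * img).sum()
        def eta(p, q):                      # scale-normalized moments
            return mu(p, q) / m00 ** (1 + (p + q) / 2)
        h1 = eta(2, 0) + eta(0, 2)
        h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
        return np.array([h1, h2])

    # toy usage: features from random "views", RBF-kernel SVM as classifier
    rng = np.random.default_rng(0)
    imgs = rng.random((40, 16, 16)); labels = rng.integers(0, 2, 40)
    F = np.array([hu_moments(im) for im in imgs])
    clf = SVC(kernel="rbf", gamma="scale").fit(F, labels)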

Title:

A QUALITY-OF-SERVICE-AWARE GENETIC ALGORITHM FOR THE SOURCE ROUTING IN AD-HOC MOBILE NETWORKS

Author(s):

Said Ghoniemy, Mohamed Hashem, Mohamed Hamdy

Abstract: A QoS-aware, delay-constrained unicast source routing algorithm for ad-hoc networks based on a genetic algorithm is proposed in this paper. The proposed algorithm is based on a new chromosomal encoding which depends on the network links instead of the nodes. The advantages of link-based encoding in the ad-hoc routing problem were studied. Promising results were obtained when the proposed algorithm was compared to other routing algorithms. The results also show that the proposed algorithm performs better under heavy QoS constraints on the average delay requirements and cost.
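A minimal sketch of the link-based encoding idea follows: a chromosome is a list of directed links forming a source-to-destination path, and the fitness penalizes paths that violate the delay bound. The toy graph, delays, costs and penalty weight are our own assumptions; the paper's crossover and mutation operators are not reproduced.

    import random

    # a chromosome is a list of directed links (u, v) forming a src->dst path,
    # rather than a list of nodes: the link-based encoding idea in miniature
    def random_path_links(adj, src, dst, rng):
        """Random walk without node repetition, returned as a link list."""
        path, node, seen = [], src, {src}
        while node != dst:
            choices = [v for v in adj[node] if v not in seen]
            if not choices:
                return None                   # dead end: caller retries
            nxt = rng.choice(choices)
            path.append((node, nxt)); seen.add(nxt); node = nxt
        return path

    def fitness(links, delay, cost, max_delay):
        """Penalize paths whose total delay violates the QoS bound."""
        d = sum(delay[l] for l in links); c = sum(cost[l] for l in links)
        return c + (1000 if d > max_delay else 0)

    rng = random.Random(0)
    adj = {0: [1, 2], 1: [2, 3], 2: [3], 3: []}
    delay = {(0, 1): 2, (0, 2): 5, (1, 2): 1, (1, 3): 6, (2, 3): 2}
    cost = {k: 1 for k in delay}
    pop = [p for p in (random_path_links(adj, 0, 3, rng) for _ in range(20)) if p]
    best = min(pop, key=lambda p: fitness(p, delay, cost, max_delay=8))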

Title:

SUPPORTING STRATEGIC ALLIANCES THE SMART WAY

Author(s):

Iain Bitran, Steffen Conn

Abstract: The Network Economy forces managers to pursue opportunities and engage competition through alliances and networks of alliances. Managers and organisations must therefore nurture the skills that successful alliance development and management require, and attain the “partnering mindset” pertinent to this new industrial paradigm. Studies indicate that alliance success remains an elusive aspiration for the majority of organisations, with up to seventy percent failing to meet their initial objectives. The SMART Project addresses this issue by developing a systematic managerial method for strategic alliance formation and management. This method provides the structure for a software-based decision support system that includes extensive learning and support materials for manager and business consultant training. Following a brief introduction, this paper provides an overview of the concepts and issues relating to strategic alliances and networks. Subsequently, the requirements and functioning of the SMART System are described. Finally, the future direction and validation strategy of the project are relayed.

Title:

A HYBRID APPROACH FOR HANDWRITTEN ARABIC CHARACTER RECOGNITION: COMBINING SELF-ORGANIZING MAPS (SOMS) AND FUZZY RULES

Author(s):

E. Moneer, Mohamed Hussien, Abdel-Badeeh Salem, Mostafa Syiam

Abstract: This paper presents a hybrid approach combining self-organizing feature maps (SOMs) and fuzzy rules to develop an intelligent system for handwritten Arabic character recognition. In the learning phase, the SOM algorithm is used to produce prototypes which, together with the corresponding variances, are used to determine fuzzy regions and membership functions. Fuzzy rules are then generated by learning from training characters. In the recognition phase, an input character is classified by a fuzzy rule based classifier. An unknown character is then re-classified by an SOM classifier. Experiments were conducted on a database of 41,033 handwritten Arabic characters (20,142 used for training and 20,891 used for testing). The experimental results achieve a classification rate of 93.1%.
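The sketch below illustrates, under our own assumptions, how SOM prototypes and their per-dimension variances can define Gaussian membership functions, each prototype acting as one fuzzy rule whose activation is the product of its memberships. The paper's actual rule generation and two-stage (fuzzy then SOM) classification are not reproduced.

    import numpy as np

    def fuzzy_rules_from_prototypes(protos, variances, labels):
        """Each prototype + variance pair defines Gaussian memberships; a rule
        fires with the product of the memberships over all dimensions."""
        def classify(x):
            act = np.exp(-((x - protos) ** 2) / (2 * variances)).prod(axis=1)
            return labels[int(np.argmax(act))]
        return classify

    protos = np.array([[0.0, 0.0], [3.0, 3.0]])      # two toy SOM prototypes
    variances = np.array([[1.0, 1.0], [0.5, 0.5]])
    classify = fuzzy_rules_from_prototypes(protos, variances, np.array([0, 1]))
    print(classify(np.array([2.6, 3.2])))            # -> 1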

Title:

KNOWLEDGE MANAGEMENT IN ENTERPRISES: A RESEARCH AGENDA

Author(s):

Konstantinos Karnezis, Konstantinos Ergazakis

Abstract: Knowledge Management is an emerging area which is gaining interest from both enterprises and academics. The effective implementation of a KM strategy is considered a "must" and a precondition of success for contemporary enterprises as they enter the era of the knowledge economy. However, the field of Knowledge Management has been slow in formulating a universally accepted conceptual framework and methodology, due to the many pending issues that have to be addressed. This paper attempts to propose a novel taxonomy for Knowledge Management research by simultaneously presenting its current status alongside some major themes of Knowledge Management research. The discussion of these issues should be of value to researchers and practitioners.

Title:

AN ALGORITHM FOR MINING MAXIMAL FREQUENT SETS BASED ON DOMINANCY OF TRANSACTIONS

Author(s):

Srikumar Krishnamoorthy, Bharat Bhasker

Abstract: Several algorithms for mining maximal frequent sets have been proposed in the recent past, mostly following a bottom-up approach. In this paper, we present a top-down algorithm for mining maximal frequent sets. The proposed algorithm uses the concept of the dominancy factor of a transaction to limit the search space, and is especially efficient for longer patterns. We theoretically model and compare the proposed algorithm with MaxMiner (an algorithm for mining long patterns) and show it to be more efficient.
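As a point of reference for the top-down idea, here is a deliberately naive sketch that scans candidate itemsets from largest to smallest and keeps frequent sets not subsumed by an already-found maximal set. The dominancy-factor pruning that makes the paper's algorithm efficient is not reproduced; this is only the unpruned baseline.

    from itertools import combinations

    def support(itemset, transactions):
        s = set(itemset)
        return sum(1 for t in transactions if s <= t)

    def maximal_frequent(items, transactions, minsup):
        """Naive top-down search: largest itemsets first, skipping any
        candidate already contained in a known maximal frequent set."""
        maximal = []
        for k in range(len(items), 0, -1):
            for cand in combinations(items, k):
                if any(set(cand) <= m for m in maximal):
                    continue                  # subsumed by a known maximal set
                if support(cand, transactions) >= minsup:
                    maximal.append(set(cand))
        return maximal

    T = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
    print(maximal_frequent(["a", "b", "c"], T, minsup=3))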

Title:

THE STRATEGIC AND OPERATIONAL ROLES OF MICROCOMPUTERS IN SMES: A PERCEPTUAL GAP ANALYSIS

Author(s):

Zelealem Temtime

Abstract: Although strategic planning and information technology are key concepts in management research, they have been widely studied only in relation to large firms. Only a few studies have attempted to examine the perceptions of small and medium enterprises (hereafter, SMEs) about the role of IT in strategy making. Moreover, these studies are of less significance for developing countries, as the definition and environment of SMEs vary between developed and developing countries. This article analyses the strategic use of microcomputers and software packages in corporate planning and decision-making in SMEs. Data were collected from 44 SMEs in 3 cities in the Republic of Botswana to study their perceptions about the use of computer-based technology to solve managerial problems, and analysed using simple descriptive statistics. The findings indicate that SMEs in Botswana engage in both strategic and operational planning activities. However, microcomputers and software packages were used primarily for operational and administrative tasks rather than for strategic planning; SMEs perceive strategic planning as costly, time-consuming, and hence appropriate only for large firms. The study also showed that firm size and strategic orientation have a direct and positive relation to the use of computer technology for strategic decision making. The major implications of the findings for future research are identified and presented.

Title:

THE USE OF NEUROFUZZY COMPUTABLE SYSTEM TO IDENTIFY PROMINENT BEHAVIOR CHARACTERISTICS IN SUCCESSFUL ENTREPRENEURS

Author(s):

Rogério Bastos, Angelita Ré, Lia Bastos

Abstract: At the head of small and medium companies are individuals responsible for the company's creation and development. It is highly important to identify the characteristics and attributes that help determine the success of these entrepreneurs. In the present work, a neurofuzzy computable system is used to identify prominent characteristics of individuals who have succeeded in their enterprises and are therefore considered successful entrepreneurs. To that end, a survey was conducted among entrepreneurs in the textile and furniture industries of Santa Catarina State.

Title:

KNOWLEDGE ACQUISITION THROUGH CASE-BASED ADAPTATION FOR HYDRAULIC POWER MACHINE DESIGN

Author(s):

Chi-man VONG, Yi-ping Li, Pak-kin WONG

Abstract: Knowledge acquisition is the first, but usually the most important and difficult, stage in building an intelligent decision-support system. Existing intelligent systems for hydraulic system design use production rules as their source of knowledge. However, this leads to problems of knowledge acquisition and knowledge base maintenance. This paper describes the application of CBR to hydraulic circuit design for production machines, which helps acquire knowledge and solve problems by reusing the acquired knowledge (experience). A technique called Case-Based Adaptation (CBA) is implemented in the adaptation stage of CBR so that adaptation becomes much easier. A prototype system has been developed to verify the usefulness of CBR in hydraulic power machine design.

Title:

KNOWLEDGE MANAGEMENT AND DATA CLASSIFICATION IN PELLUCID

Author(s):

Tung Dang, Baltazar Frankovic

Abstract: The main aim of the Pellucid project is to develop a platform based on multi-agent technology for assisting public employees in their organization. This paper deals with one of the many problems associated with building such a system: the classification and identification of the information required for the agents' performance. Pellucid agents use historical experience and information to assist newly arriving employees, so searching for specific data in the database is a routine task that they often have to perform. This paper presents methods for encoding data and creating the database so that agents have easy access to the required information. Furthermore, two methods for the classification and selection of historical information, applicable to every type of database, are presented.

Title:

SCALING UP INFORMATION UPDATES IN DISTRIBUTED CONDITION MONITORING

Author(s):

Sanguk Noh, Paul  Benninghoff

Abstract: Monitoring complex conditions over multiple distributed, autonomous information agents can be expensive and difficult to scale. Information updates can lead to significant network traffic and processing cost, and high update rates can quickly overwhelm a system. For many applications, significant cost is incurred responding to changes at an individual agent that do not result in a change to an overriding condition. But often we can avoid much work of this sort by exploiting application semantics. In particular, we can exploit constraints on information change over time to avoid the expensive and frequent process of checking for a condition that cannot yet be satisfied. We motivate this issue and present a framework for exploiting the semantics of information change in information agents. We partition monitored objects based on a lower bound on the time until they can satisfy a complex condition, and filter updates to them accordingly. We present and implement a simple analytic model of the savings that accrue to our methods. Besides significantly decreasing the workload and increasing the scalability of distributed condition monitoring for many applications, our techniques can appreciably improve the agents' response time between a condition occurrence and its recognition.
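A minimal sketch of the filtering idea, under our own assumptions: each tracked object carries a lower bound on the earliest time it could satisfy the condition, and updates arriving before that bound are discarded without evaluating the condition. The accumulating-sensor example and the single global bound below are simplifications of the paper's per-partition scheme.

    import heapq, itertools

    class ConditionMonitor:
        """Filter updates using a lower bound on when each object could next
        satisfy the condition; the condition is checked only after that."""
        def __init__(self, lower_bound_fn, condition_fn):
            self.lb, self.cond = lower_bound_fn, condition_fn
            self.heap, self.counter = [], itertools.count()

        def track(self, now, obj):
            # partition objects by the earliest time they could fire
            heapq.heappush(self.heap, (now + self.lb(obj), next(self.counter), obj))

        def on_update(self, now, obj):
            if self.heap and now < self.heap[0][0]:
                return None          # cheap filter: no object can fire yet
            return self.cond(obj)

    # toy usage: a sensor must accumulate 10 units; rate <= 2/sec gives the bound
    mon = ConditionMonitor(lambda o: (10 - o["v"]) / 2.0, lambda o: o["v"] >= 10)
    obj = {"v": 3}
    mon.track(now=0.0, obj=obj)
    print(mon.on_update(now=1.0, obj=obj))   # None: bound says earliest is t=3.5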

Title:

A WEB-BASED DECISION SUPPORT SYSTEM FOR TENDERING PROCESSES

Author(s):

Noor Maizura Mohamad Noor, Brian Warboys, Nadia Papamichail

Abstract: A decision support system (DSS) is an interactive computer-based system that helps decision makers utilise data and models to solve complex and unstructured problems. Procurement is a decision problem of paramount importance for any business. A critical and vital procurement task is to select the best contractor during the tendering or bidding process. This paper describes a Web-based DSS that aids decision makers in choosing among competitive bids for building projects. The system is based on a framework of a generic process approach and is intended to be used as a general decision-making aid. The DSS is currently being implemented as a research prototype in a process-support environment. It coordinates the participants of tendering processes and supports the submission, processing and evaluation of bids. A case study is drawn from the construction business to demonstrate the applicability of our approach.

Title:

ONE APPROACH TO FUZZY EXPERT SYSTEMS CONSTRUCTION

Author(s):

Dmitry Vetrov, Dmitry Kropotov

Abstract: Some pattern recognition tasks contain expert information which can be expressed in terms of linguistic rules. The theory of fuzzy sets presents one of the most successful ways of using these rules. However, two main problems then appear, forming the fuzzy sets and generating the fuzzy rules, which in some areas cannot be fully solved by the expert. These are the two "weak points" that hold back the spread of fuzzy expert systems. The article below proposes one possible solution based on the use of precedent information.

Title:

A CAPABILITY MATURITY MODEL-BASED APPROACH TO THE MEASUREMENT OF SHARED SITUATION AWARENESS

Author(s):

Edgar Bates

Abstract: Emerging technologies for decision aids offer the potential for large volumes of data to be collected, processed, and displayed without overloading users, and have tremendous implications for the ability of decision makers to approach total situation awareness and achieve a dominant competitive advantage. In industry, measures of effectiveness are clearly linked to performance in the marketplace, but in the military, measures of shared situational awareness generally lack analogous objective rigor. This paper thus attempts to provide a framework for assessing shared situational awareness using fundamental systems engineering and knowledge management paradigms.

Title:

THE COMMUNIGRAM: MAKING COMMUNICATION VISIBLE FOR ENTERPRISE MANAGEMENT

Author(s):

Piotr Lipinski, Jerzy Korczak, Helwig Schmied, Kenneth Brown

Abstract: The Communigram is a new methodological approach to project and process management which illustrates the information flows in the enterprise in a simple and intuitively comprehensible manner. It complements currently existing information systems by providing a means to plan organizational communication explicitly, such that the crucial exchange of information may be suitably controlled. This considerably improves the usefulness of information systems both in terms of information transmission effectiveness and user acceptance. In this paper, the practical implementation of the Communigram in information systems is described, with some notes on technical details and on the practical experience gained in its use.

Title:

THE DESIGN AND IMPLEMENTATION OF IMPROVED INTELLIGENT ANSWERING MODEL

Author(s):

Ruimin Shen, Qun Su

Abstract: Based on an analysis of the main technical problems in the design of intelligent answering systems, the traditional answering system model and its working mechanism are presented. Building on an analysis of that model, an improved intelligent answering model is proposed and implemented; it is based on data generalization over a pattern tree, association rule mining of patterns, and the merging and deletion of rules based on a knowledge tree. Finally, the model's improvement in intelligence is analyzed and demonstrated with data from an experiment.

Title:

INTEGRATED KNOWLEDGE BASED PROCESS IN MANUFACTURING ENVIRONMENT

Author(s):

Jyoti K, Dino Isa, Peter Blanchfield, V.P. Kallimani

Abstract: Industries in Malaysia are facing the threat of survival in this globally competitive world. This is most evident in small-scale industries, which are unable to sustain themselves due to factors such as expensive labour, market fluctuations and technology additions. Hence there is a need for a structure through which an industry can leverage its own tacit and explicit knowledge for its betterment and survival. This paper focuses on the various factors in designing a knowledge platform for the manufacturing sector using environments such as J2EE, artificial intelligence and Prolog programming, thus supporting the decisions taken in the industry.

Title:

ACT E-SERVICE QUESTION ANSWERING SYSTEMS BASED ON FAQ CORPUS

Author(s):

Ben Chou, Hou-Yi Lin, Yuei-Lin Chiang

Abstract: The World Wide Web (WWW) is a huge platform for information interchange, and users rely on search engines to locate and exchange information on the Internet. Nowadays there are at least five hundred million web pages in the world. With information overload everywhere on the Internet, users of keyword-based search engines are often swamped, and waste much time on irrelevant web pages merely because the keywords appear in them. After several generations of search engine innovation, search results have become more precise and intelligent. In the future, semantic processing and intelligent sifting and ranking technologies will be integrated into third-generation search engines, bringing results closer to what users actually need. In this research, we combine text mining, concept space, and related technologies to implement a search engine with an appropriate capability for understanding natural language questions, and we demonstrate it with ACT e-Service.

Title:

TME: AN XML-BASED INFORMATION EXTRACTION SYSTEM

Author(s):

Shixia Liu, Liping Yang

Abstract: Information extraction is a form of shallow text processing that locates a specified set of relevant information in a natural-language document. In this paper, a system, the Template Match Engine (TME), is developed to extract useful information from unlabelled texts. The main feature of this system is that it describes the extraction task by an XML template profile, which is more flexible than traditional pattern match methods. The system first builds an initial template profile by utilizing domain knowledge. The initial template profile is then used to extract information from electronic documents, a step which produces feedback words by enlarging and analyzing the extracted information. Next, the template profile is refined using the feedback words and the concept knowledge related to them. Finally, the refined profile is used to extract the specified information from electronic documents. The experimental results show that the TME system increases recall without loss of precision.
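As an illustration only, the sketch below encodes a hypothetical template profile as XML slots paired with regular expressions and applies each slot to a document. The profile name, slot names and patterns are all invented, and the paper's profile-refinement loop is not reproduced.

    import re
    import xml.etree.ElementTree as ET

    # hypothetical template profile: each <slot> pairs a name with a regex
    PROFILE = """
    <template name="company-report">
      <slot name="company" pattern="([A-Z][A-Za-z]+ (?:Inc|Corp|Ltd))"/>
      <slot name="revenue" pattern="revenue of \\$([0-9.]+) (?:million|billion)"/>
    </template>
    """

    def extract(profile_xml, text):
        """Apply every slot pattern of the XML template profile to the text."""
        result = {}
        for slot in ET.fromstring(profile_xml).findall("slot"):
            m = re.search(slot.get("pattern"), text)
            if m:
                result[slot.get("name")] = m.group(1)
        return result

    doc = "Acme Corp announced a revenue of $3.2 million for the quarter."
    print(extract(PROFILE, doc))   # {'company': 'Acme Corp', 'revenue': '3.2'}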

Title:

A GENERAL KNOWLEDGE BASE FOR COMPARING DESCRIPTIONS OF KNOWLEDGE

Author(s):

Susanne Dagh, Harald Kjellin

Abstract: The complexity associated with managing knowledge bases makes it necessary to use a simple syntax when formalising knowledge for a knowledge base. If a large number of people contribute with descriptions of objects to such a knowledge base and if it is necessary to make precise comparisons between the objects of the knowledge base, then some important requirements must be fulfilled; 1) It is necessary that all contributors of knowledge descriptions perceive the knowledge in a similar way; 2) It is crucial that the definitions in the descriptions are on the right level of abstraction; 3) It must be easy for the contributors of knowledge descriptions to create knowledge structures and also to remove them. We propose principles for creating a general knowledge base that fulfils these requirements. We constructed a prototype to test the principles. The tests and inquiries showed that the prototype satisfies the requirements, and thus our conclusion is that the proposed general knowledge base facilitates comparisons of knowledge descriptions.

Title:

CONSTRAINT-BASED CONTRACT NET PROTOCOL

Author(s):

Alexander Smirnov, Nikolai Chilov, Tatiana Levashova, Michael Pashkin

Abstract: The paper describes and analyses a constraint-based contract net protocol designed as part of the KSNet approach currently under development. This approach addresses the problem of knowledge logistics and considers it as a problem of configuring a knowledge source network. Utilizing intelligent agents is motivated by the distributed and scalable nature of the problem. The improvements made to the contract net protocol concern the formalism of the agents' knowledge representation and the scenario of the agents' interaction. For the agents' knowledge representation and manipulation, a formalism of object-oriented constraint networks was chosen. Modifications related to the interaction scenarios include the introduction of iterative negotiation, concurrent confirmation of proposals, an extended set of available messages, an additional role for agents, and the agents' ability to change their roles during scenarios. Examples of the modifications are shown via UML diagrams. A short scenario at the end of the paper illustrates the advantages of the developed modifications.

Title:

SIMULATING DATA ENVELOPMENT ANALYSIS USING NEURAL NETWORKS

Author(s):

Pedro Gouvêa Coelho

Abstract: This article studies the creation of efficiency measurement structures for Decision-Making Units (DMUs) by using high-speed optimisation modules, inspired by the idea of an unconventional Artificial Neural Network (ANN) and numerical methods. The Linear Programming Problem (LPP) inherent in the Data Envelopment Analysis (DEA) methodology is transformed into an optimisation problem without constraints by using a pseudo-cost function that includes a penalty term, incurring a high cost every time one of the constraints is violated. The LPP is converted into a system of differential equations, and a non-standard ANN implements a numerical solution based on the gradient method.
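The transformation described can be sketched as follows: a toy linear program is minimized by plain gradient descent on a pseudo-cost that adds quadratic penalties for violated constraints. The penalty weight, learning rate and iteration count are arbitrary choices for this example, and plain gradient descent stands in for the paper's differential-equation ANN.

    import numpy as np

    def lp_penalty_gradient(c, A, b, lr=0.002, penalty=10.0, iters=20000):
        """Solve min c.x s.t. Ax <= b, x >= 0 by descending a pseudo-cost:
        the objective plus quadratic penalties for violated constraints."""
        x = np.zeros(len(c))
        for _ in range(iters):
            viol = np.maximum(A @ x - b, 0)          # constraint violations
            neg = np.maximum(-x, 0)                  # nonnegativity violations
            grad = c + 2 * penalty * (A.T @ viol) - 2 * penalty * neg
            x -= lr * grad
        return x

    # toy LP: min -x1 - x2  s.t.  x1 + 2 x2 <= 4,  3 x1 + x2 <= 6,  x >= 0
    c = np.array([-1.0, -1.0])
    A = np.array([[1.0, 2.0], [3.0, 1.0]])
    b = np.array([4.0, 6.0])
    print(lp_penalty_gradient(c, A, b))   # close to the optimum (1.6, 1.2)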

Title:

SET-ORIENTED INDEXES FOR DATA MINING QUERIES

Author(s):

Janusz Perek, Zbyszko Krolikowski, Mikolaj Morzy

Abstract: One of the most popular data mining methods is frequent itemset and association rule discovery. Mined patterns are usually stored in a relational database for future use. Analyzing discovered patterns requires extensive subset-search querying over large numbers of database tuples. The indexes available in relational database systems are not well suited for this class of queries. In this paper we study the performance of four different indexing techniques that aim at speeding up data mining queries, particularly set inclusion queries in relational databases. We investigate the performance of these indexes under varying factors, including the size of the database, the size of the query, the selectivity of the query, etc. Our experiments show significant improvements over traditional database access methods using standard B+ tree indexes.

Title:

USING KNOWLEDGE ENGINEERING TOOL TO IDENTIFY THE SUBJECT OF A DOCUMENT - RESEARCH RESULTS

Author(s):

Offer Drori

Abstract: Information databases today contain many millions of electronic documents, and locating information on the Internet is problematic due to the enormous number of documents it contains. Several studies have found that associating documents with a subject or list of topics can improve the locatability of information on the Internet [5] [6] [7]. Effective cataloguing of information is performed manually, requiring extensive resources; consequently, most information is currently not catalogued. This paper presents a software tool that automatically locates the subject of a document, and shows the results of a test performed using the software tool (TextAnalysis) specially developed for this purpose.

Title:

SUMMARIZING MEETING MINUTES

Author(s):

Carla Lopo

Abstract: This paper analyzes the problem of summarization, specifically the summarization of meeting transcripts. To solve it, an approach is proposed that consists of structuring the meeting data together with complementary data about the environment in which the meeting takes place. The creation of possible summaries is then based on the identification of summary genres and on SQL queries.

Title:

ON FAST LEARNING OF NEURAL NETWORKS USING BACK PROPAGATION

Author(s):

Kanad Keeni

Abstract: This study discusses the selection of training data for neural networks trained with back propagation. We make only one assumption: that there is no overlap between training data belonging to different classes, in other words, the training data is linearly or semi-linearly separable. The training data is analyzed and the data that affect the learning process are selected based on the idea of critical points. The proposed method is applied to a classification problem where the task is to recognize the characters A, C and B, D. The experimental results show that in batch mode the proposed method takes almost 1/7 of the real time and 1/10 of the user time required by the conventional method. In online mode the proposed method takes 1/3 of the training epochs and 1/9 of the real, 1/20 of the user, and 1/3 of the system time required by the conventional method. The classification rates on training and testing data are the same as with the conventional method.
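A rough sketch of the selection idea, assuming that "critical points" can be approximated by the samples lying closest to the opposite class: keep only those samples and train back propagation on them. The keep ratio and the nearest-neighbour margin criterion are our own stand-ins for the paper's definition.

    import numpy as np

    def select_critical(X, y, keep_ratio=0.3):
        """Keep the samples nearest to the opposite class: a simple stand-in
        for 'critical points' that shape the decision boundary."""
        margins = np.empty(len(X))
        for i, x in enumerate(X):
            other = X[y != y[i]]
            margins[i] = np.min(np.linalg.norm(other - x, axis=1))
        keep = np.argsort(margins)[: int(len(X) * keep_ratio)]
        return X[keep], y[keep]

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
    y = np.array([0] * 100 + [1] * 100)
    Xs, ys = select_critical(X, y)    # train back propagation on Xs, ys only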

Title:

A PORTAL SYSTEM FOR PRODUCTION INFORMATION SERVICES

Author(s):

Yuan-Hung Chen, Jyi-Shane  Liu

Abstract: Production data are usually voluminous, continuous, and tedious. Human effort to derive production information from raw data often results in extra workload, lag, and errors. Undesirable results may occur when related functional units are not updated in parallel with the same information. Successful production information management must therefore address two significant problems: the speed of information and the effect of information. We propose a production information portal (PIP) architecture to facilitate information derivation efficiency and information utilization performance. The architecture is developed by integrating concepts of data and information management, event monitoring, configurable services, decision support, and information portals. A rigorous system analysis and modelling process is conducted to produce detailed specifications of functional modules, operation procedures, and data/control flows. The utility of the architecture and the prototype system was verified in a semiconductor fabrication domain and was tested by actual users on real data from a world-class semiconductor company.

Title:

AN EXPERT SYSTEM FOR PREVENTING AND CORRECTING BURDEN SLIPS, DROPS AND HANGS IN A BLAST FURNACE

Author(s):

David Montes, Raquel Blanco, Eugenia Diaz, Javier Tuya, Faustino Obeso

Abstract: This paper describes an expert system for preventing and correcting burden slips, drops and hangs inside a blast furnace. The system monitors and takes the decisions through the analysis and evaluation of more than a hundred parameters considered as input variables. The main difference between the system proposed here and a classical diagnostic system is the coexistence of three different models of behaviour: one based on a theoretical model of behaviour of permeability, a second empirical model based on the considerations given by the human experts, and a third model derived from the study of the real behaviour observed in the furnace over time, obtained by means of the study of historical files, using machine learning techniques.

Title:

PREDICTING OF CUSTOMER DEFECTION IN ELECTRONIC COMMERCE:USING BACK-PROPAGATION NEURAL NETWORKS

Author(s):

Ya-Yueh Shih

Abstract: Since the cost of retaining an existing customer is lower than that of acquiring a new one, exploring potential customer defection has become an important issue in the fiercely competitive environment of electronic commerce. Accordingly, this study used artificial neural networks (ANNs) to predict customers' repurchase intentions, and thus anticipate defection, based on a set of quality-attribute satisfaction criteria and the three beliefs of the theory of planned behavior (TPB). The repurchase intentions predicted by the ANNs were compared with those of traditional analytic tools such as multiple discriminant analysis (MDA). Finally, a t-test analysis indicated that the predictive accuracy of the ANNs is better in both the training and testing phases.

Title:

KNOWLEDGE MANAGEMENT SYSTEMS FOR LEVERAGING ENTERPRISE DATA RESOURCES: TAXONOMY AND KEY ISSUES

Author(s):

Mahesh S. Raisinghani

Abstract: With today's emphasis on competitiveness, team-based organizations, and responsiveness, top management cannot separate its responsibilities for people management from those of traditional/e-business management, since both are interrelated in knowledge management systems (KMS). Understanding how to manage under conditions of rapid change is a critical skill in the knowledge economy. Work in KMS organizations is increasingly organized around teams rather than traditional organization charts. As the workforce becomes increasingly diverse and global, it is important for top management to recognize that diversity is a positive force for KMS. Today's team-based, geographically dispersed employees are increasingly guided by a network of values and traditions as part of an organizational culture in KMS, and managing that culture and establishing those changed values are crucial KMS management tasks. This paper explores, describes, and assesses the integration, impact, and implications of KMS for theory and practice.

Title:

CONTENT-BASED REASONING IN INTELLIGENT MEDICAL INFORMATION SYSTEMS

Author(s):

Marek Ogiela

Abstract: This paper describes an innovative approach to the use of linguistic methods of structural image analysis in intelligent systems of visual data perception. These methods are directed at understanding medical images and a deeper analysis of their semantic contents. This type of image reasoning and understanding is possible owing to the use of specially defined graph grammars, enabling both the correct recognition of significant disease lesions and a deeper analysis of the discovered irregularities at various specific levels. The proposed approach is described on selected examples of images obtained in radiological diagnosis.

Title:

KNOWLEDGE BASE GRID: TOWARD GLOBAL KNOWLEDGE SHARING

Author(s):

Wu Zhaohui, Xu Jiefeng

Abstract: Grid technologies enable widespread sharing and coordinated use of networked resources. Bringing knowledge into the Grid can be more challenging because in such settings we encounter difficulties such as standardizing knowledge representation, developing standard protocols to support semantic interoperability, and developing a methodology to construct on-demand intelligent services. In this paper, we present an open Knowledge Base Grid architecture that addresses these challenges. We first discuss the requirements of knowledge representation in the Internet, then argue for the importance of developing standard protocols in such a knowledgeable Internet, and finally present some inference services which provide high-level knowledge services such as correlative semantic browsing, knowledge query, and forward and backward chaining inference. KB-Grid provides a platform for Distributed Artificial Intelligence.

Title:

FACE PATTERN RECOGNITION AND EXTRACTION FROM MULTIPLE PERSONS SCENE

Author(s):

Tetsuo Hattori

Abstract: A method for recognizing an acquaintance's face as a subpattern in a given image is proposed. We consider that the face pattern to be recognized in the input image is approximately an affine-transformed (rotated, enlarged and/or reduced, and translated) version of a registered original. In order to estimate the parameters of the affine transformation, the method uses a Karhunen-Loeve (KL) expansion, spatial correlation, and an approximate equation based on the Taylor expansion of the affine transformation. In this paper, we deal with two types of pattern representation: an ordinary grey level representation and a normalized gradient vector field (NGVF) representation. The experimental results show that our method using the NGVF representation is considerably effective.

Title:

EXTRACTION OF FEELING INFORMATION FROM CHARACTERS USING A MODIFIED FOURIER TRANSFORM

Author(s):

Tetsuo Hattori

Abstract: An automated method for extracting and evaluating feeling information from printed and handwritten characters is proposed, based on image processing and pattern recognition techniques. First, an input binarized pattern is transformed by a distance transformation. Second, a two-dimensional vector field is composed from the gradient of the distance distribution. Third, a divergence operator extracts source and sink points from the field, together with the vectors at those points. Fourth, the Fourier transform is applied to the vector field as a complex-valued function; unlike conventional methods, we use the Fourier transform with a Laplacian-operated phase. Fifth, applying the KL expansion to the complex vectors obtained from several character fonts, we extract common feature vectors for each font. Using those common vectors and a linear multiple regression model, an automated quantitative evaluation system can be constructed. The experimental results show that our vector field method, combining the Fourier transform and KL expansion, is considerably more efficient at discriminating printed characters (fonts) than the conventional method using grey level (or binarized) character patterns and KL expansion. Moreover, the evaluation system based on the regression model agrees comparatively well with human assessment.
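Steps one to three of the pipeline can be sketched directly with numpy and scipy, as below; the divergence thresholds used to flag source and sink points are arbitrary choices of ours, and the Fourier and KL stages are omitted.

    import numpy as np
    from scipy import ndimage

    def source_sink_points(binary_char):
        """Distance transform inside the stroke, gradient vector field, then
        divergence; strongly positive/negative divergence marks sources/sinks."""
        dist = ndimage.distance_transform_edt(binary_char)        # step 1
        gy, gx = np.gradient(dist)                                # step 2
        div = np.gradient(gx, axis=1) + np.gradient(gy, axis=0)   # step 3
        sources = (div > 0.5) & (binary_char > 0)    # arbitrary thresholds
        sinks = (div < -0.5) & (binary_char > 0)
        return dist, (gx, gy), sources, sinks

    # toy "stroke": a filled rectangle standing in for a character
    img = np.zeros((32, 32)); img[8:24, 12:20] = 1
    dist, field, sources, sinks = source_sink_points(img)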

Title:

A CONCEPTUAL MODEL FOR A MULTIAGENT KNOWLEDGE BUILDING SYSTEM

Author(s):

Barbro Back, Adrian Costea, Tomas Eklund, Antonina Kloptchenko

Abstract: Financial decision makers are challenged by the access to massive amounts of both numeric and textual financial information made achievable by the Internet. They are in need of a tool that makes possible rapid and accurate analysis of both quantitative and qualitative information, in order to extract knowledge for decision making. In this paper we propose a conceptual model of a knowledge-building system for decision support based on a society of software agents, and data and text mining methods.

Title:

BRIDGING THE GAP BETWEEN SOCIAL AND TECHNICAL PROCESSES TO FACILITATE IT ENABLED KNOWLEDGE DISSEMINATION

Author(s):

James Cunningham, Yacine Rezgui, Brendan Berney, Elaine Ferneley

Abstract: The need for organizations to encourage collaborative working through knowledge sharing, in order to better exploit their intellectual capital, is well recognized. However, much of the work to date suggests that despite the intuitive appeal of a collaborative approach, significant knowledge remains locked away. It has been argued that the problem is both technological and cultural. Whilst technologically mature, sophisticated information and communication technologies (ICTs) exist, providing a technological medium to support a collaborative culture in which knowledge can be elicited, stored, shared and disseminated is still elusive. This paper presents the work being undertaken as part of the IST-funded e-COGNOS project, which is developing an open, model-based infrastructure and a set of web-based tools that promote consistent knowledge management within collaborative construction environments. The e-COGNOS project has adopted an approach which moves away from the notion of technology managing information and toward the idea of social processes and technological tools evolving reciprocally: the notion of co-construction. Within this co-construction metaphor the project is developing a set of tools that mimic the social process of knowledge discovery, thus aiming to bridge the gap between social and technological knowledge discovery and dissemination.

Title:

THE DEVELOPMENT OF A PROTOTYPE OF AN ENTERPRISE MARKETING DECISION SUPPORT SYSTEM

Author(s):

Junkang Feng, Xi Wang, Fugen Song

Abstract: Against the background of the increasing importance of marketing decision making for manufacturing enterprises, and the relatively weak and insufficient research on systematic methodologies for overall marketing decision making, we build a model-based framework for marketing decision making. The framework offers an approach that fuses quantitative calculation with qualitative analysis. Our review of the literature on Decision Support System (DSS) architecture suggests that there is a gap between the theory of DSS architecture, which consists mainly of a database (DB), a model base (MB) and a knowledge base (KB), and the use of this architecture in practically designing and implementing a DSS. To fill this gap, we put forward the notion of "Tri-Base Integration", upon which we have developed and tested an innovative DSS architecture. We have built a prototype of an Enterprise Marketing Decision Support System based on these ideas. This prototype would seem to demonstrate the feasibility of our model-based framework for overall marketing decision making and of our innovative DSS architecture.

Title:

APPLICATION OF NEURAL NETWORKS TO WATER TREATMENT: MODELING OF COAGULATION CONTROL

Author(s):

M. Salem, Hala Abdel-Gelil, L. Abdel All

Abstract: Water treatment includes many complex phenomena, such as coagulation and flocculation. These reactions are hard or even impossible to control by conventional methods. The paper presents a new methodology for determining the optimum coagulant dosage in the water treatment process. The methodology is based on a neural network model; the learning process is implemented with the error backpropagation algorithm, using raw water quality parameters as input.

Title:

USING KNOWLEDGE DISCOVERY IN DATABASES TO IDENTIFY ANALYSIS PATTERNS

Author(s):

Paulo Engel, Carolina Silva, Cirano Iochpe

Abstract: Geographic information systems (GIS) are becoming more popular, increasing the need to implement geographic databases (GDB). But GDB design is not easy and requires experience. To support it, the use of analysis patterns has been proposed. Although very promising, the use of analysis patterns in GDB design is still very restricted. The main problem is that patterns are based on specialists' experience. In order to help and speed up the identification of new and valid patterns that are less dependent on specialists' knowledge than those now available, this paper proposes identifying analysis patterns through the process of knowledge discovery in databases (KDD).

Title:

SEMANTIC ANNOTATIONS AND SEMANTIC WEB USING NKRL (NARRATIVE KNOWLEDGE REPRESENTATION LANGUAGE)

Author(s):

Gian Zarri

Abstract: We suggest that it should be possible to come closer to the Semantic Web goals by using 'semantic annotations' that enhance the traditional ontology paradigm by supplementing ontologies of concepts with 'ontologies of events'. We then present some of the properties of NKRL (Narrative Knowledge Representation Language), a conceptual modelling formalism that makes use of ontologies of events to take into account the semantic characteristics of the 'narratives' that represent a very large percentage of global Web information.

Title:

INDUCTION OF TEMPORAL FUZZY CHAINS

Author(s):

Jose Jesus Castro Sanchez, Luis Rodriguez Benitez, Luis Jimenez Linares, Juan Moreno Garcia

Abstract: The aim of this paper is to present an algorithm to induce Temporal Fuzzy Chains (TFCs) (Eurofuse 2002). TFCs are used to model dynamic systems in a linguistic manner. TFCs make use of two different concepts: the traditional method of representing dynamic systems, namely state vectors, and the linguistic variables used in fuzzy logic. Thus, TFCs are qualitative and represent "temporal zones" using linguistic states and linguistic transitions between them.

Title:

THE PROTEIN STRUCTURE PREDICTION MODULE OF THE PROT-GRID

Author(s):

Dimitrios  Frossyniotis, George Papadopoulos, Dimitrios Vogiatzis

Abstract: In this work, we describe the protein secondary structure prediction module of a distributed bio-informatics system. Protein databases contain over a million sequenced proteins, yet structural information exists for at most 2% of that number. The challenge is to reliably predict the structure using classifiers. Our contribution is the evaluation of multiple classifier system architectures on a standard dataset (CB-396) containing protein sequencing information. We compare the results of a single classifier system based on SVMs with our version of an SVM-based AdaBoost algorithm and a novel fuzzy multi-SVM classifier.

Title:

WITH THE "DON'T KNOW" ANSWER IN RISK ASSESSMENT

Author(s):

Luigi Troiano, Canfora Gerardo

Abstract: Decision making often deals with incomplete and uncertain information. Uncertainty concerns the level of confidence associated with the value of a piece of information, while incompleteness derives from the unavailability of data. Fuzzy numbers capture the uncertainty of information, but they are not able to explicitly represent incompleteness. In this paper we discuss an extension of fuzzy numbers, called fuzzy numbers with indeterminateness, and show how they can be used to model decision processes involving incomplete information. In particular, the paper focuses on the "Don't Know" answer to questionnaires and develops an aggregation model that accounts for this type of answer. The main contribution lies in the formalization of the interrelationship between the risk of a decision and the incompleteness of the information on which it is based.
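A minimal sketch of one plausible reading of the extension: each answer is a triangular fuzzy number plus an indeterminateness degree k, a pure "Don't Know" being k = 1, and aggregation averages the determinate parts weighted by 1 - k while propagating the residual indeterminateness. The paper's exact formalization may well differ.

    from dataclasses import dataclass

    @dataclass
    class IFuzzy:
        """Triangular fuzzy number (a, b, c) plus an indeterminateness degree
        k in [0, 1]; k = 1 stands for a pure 'Don't Know' answer."""
        a: float; b: float; c: float; k: float = 0.0

    def aggregate(answers):
        """Average the determinate parts, weighting each answer by how much
        it actually says (1 - k); propagate the residual indeterminateness."""
        w = sum(1 - x.k for x in answers)
        if w == 0:                        # everybody answered 'Don't Know'
            return IFuzzy(0, 0, 0, 1.0)
        a = sum((1 - x.k) * x.a for x in answers) / w
        b = sum((1 - x.k) * x.b for x in answers) / w
        c = sum((1 - x.k) * x.c for x in answers) / w
        return IFuzzy(a, b, c, sum(x.k for x in answers) / len(answers))

    risk = aggregate([IFuzzy(2, 3, 4), IFuzzy(3, 4, 5), IFuzzy(0, 0, 0, k=1.0)])
    print(risk)   # determinate part averages the two real answers; k = 1/3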

Title:

FUZZY INFERENCING IN WEB PAGE LAYOUT DESIGN

Author(s):

Abdul-Rahim Ahmad, Otman Basir, Khaled  Hassanein

Abstract: Web page layout design is a complex and ill-structured problem where evolving tasks, inadequate information-processing capabilities, cognitive biases and socio-emotional facets frequently hamper the procurement of a superior alternative. An important aspect in the selection of a superior Web page layout design is the evaluation of its fitness value. Automating the fitness evaluation of layouts would be a significant step forward; it requires the quantification of highly subjective Web page design guidelines in the form of a fitness measure. Web usability and design guidelines come from experts who provide vague and conflicting opinions. This paper proposes exploiting fuzzy technology to model such subjective, vague, and uncertain Web usability and design guidelines.

Title:

MAPPING DOCUMENTS INTO CONCEPT DATABASES FOR THRESHOLD-BASED RETRIEVAL

Author(s):

REGHU RAJ PATINHARE COVILAKAM, RAMAN S

Abstract: The trajectory of topic description in text documents such as news articles generally covers a small number of domain-specific concepts, and domain-specific phrases are excellent indicators of these concepts. Any representation of the concepts must invariably use finite strings of some finite representation language, so the design of a grammar with good selectivity and coverage is a viable solution to the problem of content capturing. This paper deals with the design of such a grammar for a small set of domains, which supports representing the concepts in a relational framework. This paradigm throws light on the possibility of denoting the text portion of web pages as a relational database, which can facilitate information retrieval using simple SQL queries obtained by translating a user's query. The advantage is that highly relevant results can be retrieved by applying a threshold to a specific attribute column.

Title:

A NEW METHOD OF KNOWLEDGE CREATION FOR KNOWLEDGE ORGANIZATIONS

Author(s):

Mingshu Li, Ying Dong

Abstract: Knowledge creation is an interesting problem in knowledge management (KM). Topic maps, especially the XML Topic Map (XTM), are used to organize information in a way that can be optimized for navigation. In this paper, we adopt XTM as a new method for addressing the problem of knowledge creation. Since an XTM can be modeled as a formal hypergraph, we study the problem based on the XTM hypergraph. New XTM knowledge operations have been designed, based on graph theory, for knowledge creation; they have been implemented as a toolkit and applied on our KM platform. When the XTM knowledge operations are applied, new knowledge can be generated for knowledge organizations. The application of the operations can serve users' requests for intelligent retrieval of knowledge, or for analysis of the system's knowledge structure.

Title:

AN ARTIFICIAL NEURAL NETWORK BASED DECISION SUPPORT SYSTEM FOR BUDGETING

Author(s):

Barbro Back, Eija Koskivaara

Abstract: This paper introduces an artificial neural network (ANN) based decision support system for budgeting. The proposed system estimates the future revenues and expenses of the organisation. We build models based on four to six years of monthly account values of a large organisation. The monthly account values are regarded as a time series, and the target is to predict the following year's account values with the ANN; thus, the ANN's output is based on similar information from prior periods. The prediction results are compared to the actual account values and to the account values budgeted by the organisation. We found that an ANN can be used to model the dynamics of the account values on a monthly basis and to predict the yearly account values.
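The prediction setup can be sketched with a sliding window over synthetic monthly values: twelve prior months predict the next month, and the last year is held out as the "budget" to be forecast. The window width, network size and synthetic series are placeholders, not the authors' data or configuration.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def windows(series, width=12):
        """Twelve prior monthly values -> next month's account value."""
        X = np.array([series[i:i + width] for i in range(len(series) - width)])
        y = series[width:]
        return X, y

    # synthetic monthly account values: trend + yearly seasonality + noise
    rng = np.random.default_rng(0)
    t = np.arange(72)                                   # six years of months
    series = 100 + t + 20 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 2, 72)

    X, y = windows(series)
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
    net.fit(X[:-12], y[:-12])                           # hold out the last year
    pred = net.predict(X[-12:])                         # "budget" for that year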

Title:

A DATA MINING METHOD TO SUPPORT DECISION MAKING IN SOFTWARE DEVELOPMENT PROJECTS

Author(s):

José Luis Álvarez-Macías

Abstract: In this paper, we present a strategy for inducing knowledge to support decision making in Software Development Projects (SDPs). The motivation for this work is the great number of SDPs that fail to meet their initial cost requirements, delivery dates, and final product quality. The main objective of the strategy is to support the manager in deciding which management policies to establish when beginning a software project. To this end, we apply a data mining tool called ELLIPSES to SDP databases generated by simulating a dynamic model of SDP management. ELLIPSES is a new method oriented to discovering knowledge according to the expert's needs by detecting the most significant regions. The essence of the method is an evolutionary algorithm that finds these regions one after another; the expert decides which regions are significant and determines the stopping criterion. The extracted knowledge is offered through two types of rules, quantitative and qualitative models, and the tool also offers a visualization of each rule using parallel coordinates. To illustrate the strategy, ELLIPSES is applied to a database obtained by simulating a dynamic model of a completed project.

Title:

USABILITY ISSUES IN DATA MINING SYSTEMS

Author(s):

Fernando Berzal

Abstract: When we build data mining systems, we should reflect upon some design issues which are often overlooked in our quest for better data mining techniques. In particular, we usually focus on algorithmic details whose influence is minor when it comes to users’ acceptance of the systems we build. This paper tries to highlight some of the issues which are usually neglected and might have a major impact on our systems usability. Solving some of the usability problems we have identified would certainly add to the odds of successful data mining stories, improve user acceptance and use of data mining systems, and spur renewed interest in the development of new data mining techniques. Our proposal focuses on integrating diverse tools into a framework which should be kept coherent and simple from the user's point of view. Our experience suggests that such a framework should include bottom-up dataset-building blocks to describe input datasets, expert systems to propose suitable algorithms and adjust their parameters, as well as visualization tools to explore data, and communication and reporting services to share the knowledge discovered from the massive amounts of data available in actual databases.

Title:

PLANNING COOPERATIVE HOMOGENEOUS MULTIAGENT SYSTEMS USING MARKOV DECISION PROCESSES

Author(s):

Bruno Scherrer, François Charpillet, Iadine Chadès

Abstract: This paper proposes a decision-theoretic approach for designing a set of situated agents so that they can solve a cooperative problem. The approach we propose is based on reactive agents. Although they do not negotiate, reactive agents can solve complex tasks such as surrounding a mobile object: agents self-organize their activity through interaction with the environment. The design of each agent's behavior results from solving a decentralized partially observable Markov decision process (DEC-POMDP). But, as solving a DEC-POMDP is NEXP-complete, we propose an approximate solution to this problem based on both subjectivity and empathy. An obvious advantage of the proposed approach is that we are able to design agents' reactive policies starting from the features of a cooperative problem (top-down conception) and not the opposite (bottom-up conception).

Title:

AN EFFICIENT PROCEDURE FOR ARTIFICIAL NEURAL NETWORKS RETRAINING

Author(s):

Razvan Matei, Dumitru Iulian Nastac

Abstract: The ability of artificial neural networks (ANNs) to extract significant information from an initial set of data allows both interpolation between the a priori defined points and extrapolation outside the range bounded by the extreme points of the training set. The main purpose of this paper is to establish how an ANN structure that was viable at a previous moment in time can be retrained efficiently to accommodate modifications of the input-output function. To fulfill this goal, we use an anterior memory, scaled by a conveniently chosen factor. An evaluation of the computing effort involved in retraining an ANN shows that a good choice of the scaling factor can substantially reduce the number of training cycles, independently of the learning method.
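As a rough illustration of the scaled-memory idea, assuming (our reading, not necessarily the authors' exact scheme) that retraining starts from the previous weights multiplied by a scaling factor rather than from scratch, one can compare convergence speed on a slightly modified task. The model, data and factors below are invented for this sketch.

```python
# Sketch of retraining with a scaled "anterior memory" (our illustration).
import numpy as np

rng = np.random.default_rng(1)

def train(X, y, w, lr=0.5, tol=0.35, max_epochs=5000):
    """Logistic regression by gradient descent; returns (weights, epochs used)."""
    for epoch in range(1, max_epochs + 1):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
        if loss < tol:
            return w, epoch
        w -= lr * X.T @ (p - y) / len(y)
    return w, max_epochs

X = rng.normal(size=(400, 5))
w_true = np.array([1.5, -2.0, 1.0, 0.5, -1.0])
y_old = (X @ w_true > 0).astype(float)
y_new = (X @ (w_true + 0.4) > 0).astype(float)   # slightly modified I/O function

w_old, _ = train(X, y_old, np.zeros(5))
for alpha in (0.0, 0.5, 1.0):                    # 0.0 = retrain from scratch
    _, epochs = train(X, y_new, alpha * w_old)
    print(f"scaling factor {alpha}: {epochs} epochs to converge")
```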

Title:

PROMAIS: A MULTI-AGENT DESIGN FOR PRODUCTION INFORMATION SYSTEMS

Author(s):

Lobna Hsairi, Khaled Ghédira, Faiez Gargouri

Abstract: In the age of information proliferation and communication technology advances, Cooperative Information System (CIS) technology has become a vital factor in production system design in every modern enterprise. In fact, current production systems must adapt to new strategic, economic and organizational structures in order to face new challenges. Consequently, intelligent software based on agent technology has emerged to improve system design on the one hand, and to increase production profitability and the enterprise's competitive position on the other. This paper starts with an analytical description of the logical and physical flows involved in manufacturing, then proposes a Production Multi-Agent Information System (ProMAIS). ProMAIS is a collection of stationary, intelligent agent-agencies with specialized expertise, interacting to carry out shared objectives: cost-effective production within promised deadlines and adaptability to change. To bring out ProMAIS's dynamic aspects, the interaction protocols, based on cooperation, negotiation and the Contract Net protocol, are examined in detail.

Title:

TEXT SUMMARIZATION: AN UPCOMING TOOL IN TEXT MINING

Author(s):

S. Raman, M. Saravanan

Abstract: As the Internet's user base expands at an explosive rate, it provides great opportunities as well as grand challenges for text data mining. Text summarization is a core functional task of text mining and text analysis; it consists of condensing documents while presenting their content in a coherent order. This paper discusses the application of term distribution models to text summarization for the extraction of key sentences, based on the identification of term patterns in the collection. The evaluation of the results uses human-generated summaries as the point of reference. Our system outperforms the other auto-summarizers considered at different percentage levels of summarization, and the final summary is close to the intersection of the frequently occurring sentences found in the human-generated summaries at the 40% summarization level.
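The abstract does not give the term distribution model itself, so the following is only a minimal frequency-based extractive summarizer in the same spirit: sentences are scored by the average collection frequency of their terms and the top fraction is kept in original order. The toy document and scoring are ours.

```python
# Minimal sketch of frequency-based extractive summarization (our
# illustration of the general idea, not the authors' term-distribution model).
import re
from collections import Counter

def summarize(text, ratio=0.4):
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'[a-z]+', text.lower()))
    # Score each sentence by the average collection frequency of its terms.
    def score(sentence):
        terms = re.findall(r'[a-z]+', sentence.lower())
        return sum(freq[t] for t in terms) / max(len(terms), 1)
    k = max(1, int(len(sentences) * ratio))
    top = sorted(sentences, key=score, reverse=True)[:k]
    return ' '.join(s for s in sentences if s in top)  # keep original order

doc = ("Text mining extracts patterns from text. Summarization condenses "
       "documents. Term distribution models score sentences by term patterns. "
       "Frequent terms mark key sentences in text mining.")
print(summarize(doc))
```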

Title:

AUTOMATION OF CORE DESIGN OPTIMIZATION IN BWR

Author(s):

Yoko Kobayashi

Abstract: This paper deals with the application of an evolutionary algorithm and a multi-agent algorithm to information systems in the nuclear industry. The core design of a boiling water reactor (BWR) is a hard optimization problem with nonlinear multi-objective functions and nonlinear constraints. We have developed an integrated two-stage genetic algorithm (GA) for the optimum core design of a BWR and have realized the automation of a complex core design process. In this paper, we further propose a new algorithm for combinatorial optimization using multiple agents, which we call the multi-agent algorithm (MAA). In order to improve the convergence of the BWR core design optimization, we introduce this new algorithm into the first stage of the previously developed two-stage GA. The performance of the new algorithm is also compared with that of the conventional two-stage GA.

Title:

LEARNING BAYESIAN NETWORKS FROM NOISY DATA

Author(s):

Mohamed BENDOU, Paul MUNTEANU

Abstract: This paper analyzes the effects of noise on learning Bayesian networks from data. It starts with the observation that limited amounts of noise may cause a significant increase in the complexity of the learned networks. We show that, unlike classical over-fitting, which affects other classes of learning methods, this phenomenon is theoretically justified by the alteration of the conditional independence relations between the variables, and that it is beneficial for the predictive power of the learned models. We also discuss a second effect of noise on learning Bayesian networks: the instability of the structures learned from DAG-unfaithful noisy data.

Title:

BUILDING INTELLIGENT CREDIT SCORING SYSTEMS USING DECISION TABLES

Author(s):

Manu De Backer, Rudy Setiono, Christophe Mues, Jan  Vanthienen, Bart  Baesens

Abstract: Accuracy and comprehensibility are two important criteria when developing decision support systems for credit scoring. In this paper, we focus on the second criterion and propose the use of decision tables as an alternative knowledge visualization formalism which lends itself very well to building intelligent and user-friendly credit scoring systems. Starting from a set of propositional if-then rules extracted by a neural network rule extraction algorithm, we develop decision tables and demonstrate their efficiency and user-friendliness for two real-life credit scoring cases.

Title:

EVALUATING THE SURVIVAL CHANCES OF VERY LOW BIRTHWEIGHT BABIES

Author(s):

Anália Lourenço, Ana Cristina Braga, Orlando Belo

Abstract: Scoring systems that quantify neonatal mortality risk play an important role in health services research, planning and clinical auditing. They provide a means of monitoring, in a more accurate and reliable way, the quality of care among and within hospitals. Classical analyses based on a simple comparison of mortality, or dealing solely with the newborns' birthweight, have proved insufficient. A large number of variables influence the survival of newborns and must be taken into account: from strictly physiological information to more subjective data concerning medical care, there are many variables to attend to. Scoring systems try to embrace such elements, providing more reliable comparisons of the outcome. Notwithstanding, if a clinical score is to gain widespread acceptance among clinicians, it must be simple and accurate and use routinely collected data. In this paper, we present a neonatal mortality risk evaluation case study, pointing out data specificities and how different data preparation approaches (namely, feature selection) affect the overall outcome.

Title:

THE USE OF NEURAL NETWORK AND DATABASE TECHNOLOGY TO REENGINEER THE TECHNICAL PROCESS OF MONITORING COAL COMBUSTION EFFICIENCY

Author(s):

Farhi Marir

Abstract: Monitoring the combustion process for electricity generation using coal as a primary resource is of major concern to the pertinent industries, power generation companies in particular. The carbon content of fly ash is indicative of the combustion efficiency, and determining this parameter is useful for characterizing the efficiency of coal-burning furnaces. Traditional methods such as thermogravimetric analysis (TGA) and loss on ignition, which are based on ash collection and subsequent analysis, have proved tedious, time consuming and costly. Thus, a new technology was needed to monitor the process more efficiently, yielding better exploitation of resources at low cost. The main aim of this work is to introduce a new automated system which can be bolted onto a furnace and work online. The system consists of three main components: a laser instrument for signal acquisition, a neural network tool for training, learning and simulation, and a database system for storage and retrieval. The components have been designed, adapted and tuned to communicate for knowledge acquisition in this multidimensional problem. The system has been tested on a range of coal ashes and proved to be efficient, reliable, fast and cost effective.

Title:

A KNOWLEDGE MANAGEMENT TOOL FOR A COLLABORATIVE E-PROJECT

Author(s):

Luc Lamontagne, Tang-Ho Lê

Abstract: In this paper, we provide an overview of our software tool for exploiting and interchanging procedural knowledge represented as networks of semi-structured units. First, we introduce the notion of a Procedural Knowledge Hierarchy; then we present the modeling of procedural knowledge with our software. We claim that the “bottom-up” approach carried out with this tool is appropriate for gathering new candidate terms for the construction of a new domain ontology. We also argue that KU modeling, together with a pivot KU structure (rather than individual keywords), could contribute to improving search engines on the Web. We detail the updating technique based on the distributed tasks of an e-project. We also discuss some ideas pertaining to the identity issue for the Web, based on space and time representations.

Title:

STRUCTURED CONTEXTUAL SEARCH FOR THE UN SECURITY COUNCIL

Author(s):

Irineu Theiss, Ricardo Barcia, Marcelo Ribeiro, Eduardo Mattos, Andre Bortolon, Tania C. D.  Bueno, Hugo Hoeschl

Abstract: This paper presents a generic model of a methodology that emphasises the use of information retrieval methods combined with the Artificial Intelligence technique named CBR – Case-Based Reasoning. In knowledge-based systems, this methodology allows human knowledge to be automatically indexed. This type of representation makes the user's language compatible with the language found in the data contained in the system's knowledge base, retrieving answers better suited to the user's search question. The paper describes the Olimpo system, a knowledge-based system that retrieves, from textual files, information similar to the search context described by the user in natural language. For the development of the system, 300 resolutions of the UN Security Council available on the Internet were indexed.

Title:

APPLYING FUZZY LOGIC AND NEURAL NETWORK FOR QUANTIFYING FLEXIBILITY OF SUPPLY CHAINS

Author(s):

Bjørn Solvang, Ziqiong  Deng, Wei Deng Solvang

Abstract: Fuzzy Logic (FL) is a method that deals with uncertainty and vagueness in the model or description of the systems involved, as well as in the variables. A fuzzy logic system is unique in that it can handle numerical and linguistic knowledge simultaneously. This is precisely the method we have been looking for now that the quantification of supply chain flexibility has become an urgent task. This paper first elaborates the necessity of quantifying supply chain flexibility. Thereafter, a methodological framework for measuring supply chain flexibility is introduced to provide the research background of this paper. A fuzzy logic system is applied to quantify six types of supply chain flexibility, as each depends on both qualitative and quantitative measures. Further, since the value of supply chain flexibility also depends on the degree to which it relies on each type of flexibility, and deciding these degrees requires the incorporation of expert knowledge, we apply an Artificial Neural Network (ANN) to conduct that task.
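As a hedged illustration of the fuzzification step such a system needs, the sketch below maps a crisp flexibility score onto linguistic labels with triangular membership functions. The 0..10 scale, the labels and the set shapes are our assumptions; the paper's actual membership functions are not given in the abstract.

```python
# Sketch of fuzzifying a flexibility measure with triangular membership
# functions (our illustration; the paper's labels and sets are not specified).
def triangular(x, a, b, c):
    """Membership of x in a triangular fuzzy set with support [a, c], peak b."""
    if x < a or x > c:
        return 0.0
    if x == b:
        return 1.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzify_flexibility(score):            # score on a 0..10 scale (assumed)
    return {
        "low":    triangular(score, 0, 0, 5),
        "medium": triangular(score, 2, 5, 8),
        "high":   triangular(score, 5, 10, 10),
    }

print(fuzzify_flexibility(6.0))  # e.g. partly "medium", partly "high"
```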

Title:

AN APPROACH OF DATA MINING USING MONOTONE SYSTEMS

Author(s):

Rein Kuusik, Grete Lind

Abstract: This paper treats data mining as a part of the process called knowledge discovery in databases (KDD, in short), which applies particular data mining algorithms and, under acceptable computational efficiency limitations, produces a particular enumeration of patterns. A pattern is an expression (in a certain language) describing facts in a subset of facts. The data mining step is the most implemented step of the whole KDD process, which also involves preparing data for analysis and interpreting the results found in the data mining step. The main approach to data mining and its main disadvantage are shown, and a new method, called the generator of hypotheses, together with its base algorithm MONSA, is presented.

Title:

DEVELOPMENT OF AN ORGANIZATIONAL SUBJECT

Author(s):

Chamnong Jungthirapanich, Parkpoom Srithimakul

Abstract: As globalized markets become increasingly competitive, skillful employees are in high demand, which is reflected in high turnover rates in every organization. This research creates a pattern to retain the knowledge of those employees, called "the organizational subject model". This pattern captures the inner capabilities of the employees and develops them into contents for the organization, then uses educational methods to transform these contents into a subject called "the organizational subject". The organizational subject model is a new strategy for retaining the knowledge of skillful employees. This research also shows the statistical method used to evaluate the efficiency and effectiveness of the organizational subject, and the hypothesis testing used to evaluate the achievement of the organizational subject model. This model saves knowledge capital investment and time, and furthermore helps to identify the unity of the organization.

Title:

MINING VERY LARGE DATASETS WITH SUPPORT VECTOR MACHINE ALGORITHMS

Author(s):

François Poulet, Thanh-Nghi Do

Abstract: In this paper, we present new support vector machine (SVM) algorithms that can classify very large datasets on standard personal computers. The algorithms extend three recent SVM algorithms: least squares SVM classification, the finite Newton method for classification, and incremental proximal SVM classification. The extension consists in building incremental, parallel and distributed SVMs for classification. Our three new algorithms are very fast and can handle very large datasets. An example of the effectiveness of these new algorithms is the classification into two classes of one billion points in a 10-dimensional input space in a few minutes on ten personal computers (800 MHz Pentium III, 256 MB RAM, Linux).
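The abstract does not reproduce the algorithms, but the incremental idea can be sketched in the spirit of proximal SVM classification: only a small (d+1)x(d+1) matrix and a (d+1)-vector are accumulated chunk by chunk, so arbitrarily many points can be processed with constant memory. The data stream and parameters below are invented; see the incremental proximal SVM literature for the exact method.

```python
# Sketch in the spirit of incremental proximal SVM classification: only the
# accumulators E'E and E'y are kept in memory, updated one chunk at a time
# (our simplified illustration with a made-up data stream).
import numpy as np

d, nu = 10, 1.0                       # input dimension, regularization
EtE = np.zeros((d + 1, d + 1))        # accumulators (bias column included)
Ety = np.zeros(d + 1)

rng = np.random.default_rng(42)
w_true = rng.normal(size=d)
for _ in range(100):                  # 100 chunks of 10,000 points each
    A = rng.normal(size=(10_000, d))
    y = np.sign(A @ w_true)           # labels in {-1, +1}
    E = np.hstack([A, -np.ones((len(A), 1))])   # append bias term
    EtE += E.T @ E
    Ety += E.T @ y

wg = np.linalg.solve(np.eye(d + 1) / nu + EtE, Ety)
w, gamma = wg[:-1], wg[-1]

A_test = rng.normal(size=(1000, d))
accuracy = np.mean(np.sign(A_test @ w - gamma) == np.sign(A_test @ w_true))
print(f"test accuracy: {accuracy:.3f}")
```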

Title:

EXTENSION OF THE BOX-COUNTING METHOD TO MEASURE THE FRACTAL DIMENSION OF FUZZY DATA

Author(s):

Antonio B. Bailón

Abstract: Box-counting is a well-known method for estimating the dimension of a set of points that define an object. Those points are expressed as exact numbers that, in many cases, do not reflect the uncertainty that affects them. In this paper we propose an extension of the box-counting method that allows measuring the dimension of sets of fuzzy points, i.e. sets of points affected by some degree of uncertainty. The fuzzy box-counting method allows algorithms that use the fractal dimension of sets of crisp points to be extended to work with fuzzy data.
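For reference, the crisp box-counting estimate that the paper generalizes can be sketched as follows: count the occupied boxes at several scales and fit the slope of log N(eps) against log(1/eps). This shows the classical method only; the fuzzy extension is not reproduced here.

```python
# Sketch of the classic (crisp) box-counting dimension estimate.
import numpy as np

def box_counting_dimension(points, scales=(1/4, 1/8, 1/16, 1/32, 1/64)):
    points = (points - points.min(0)) / (np.ptp(points, axis=0) + 1e-12)
    counts = []
    for eps in scales:
        boxes = {tuple(idx) for idx in np.floor(points / eps).astype(int)}
        counts.append(len(boxes))
    # Slope of log N(eps) vs log(1/eps) estimates the fractal dimension.
    slope, _ = np.polyfit(np.log(1 / np.array(scales)), np.log(counts), 1)
    return slope

# A filled unit square should have dimension close to 2.
rng = np.random.default_rng(0)
print(box_counting_dimension(rng.uniform(size=(100_000, 2))))
```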

Title:

TRACKER: A FRAMEWORK TO SUPPORT REDUCING REWORK THROUGH DECISION MANAGEMENT

Author(s):

Andy Salter, Phil Windridge, Alan Dix, Rodney Clarke, Caroline Chibelushi, John Cartmell, Ian Sommerville, Victor Onditi, Hanifa Shah, Devina Ramduny, Amanda Queck, Paul Rayson, Bernadette  Sharp, Albert Alderson

Abstract: The Tracker project is studying rework in systems engineering projects. Our hypothesis is that providing decision makers with information about previous relevant decisions will assist in reducing the amount of rework in a project. We propose an architecture for the flexible integration of the tools implementing the variety of theories and models used in the project. The techniques include ethnographic analysis, natural language processing, activity theory, norm analysis, and speech and handwriting recognition. In this paper, we focus on the natural language processing components, and describe experiments which demonstrate the feasibility of our text mining approach.

Title:

EVALUATION OF AN AGENT-MEDIATED COLLABORATIVE PRODUCTION PROTOCOL IN AN INSTRUCTIONAL DESIGN SCENARIO

Author(s):

Ignacio Aedo, Paloma Díaz, Juan Manuel Dodero

Abstract: Distributed knowledge creation or production is a collaborative task that needs to be coordinated. A multiagent architecture for collaborative knowledge production tasks is introduced, where knowledge-producing agents are arranged into knowledge domains or marts, and a distributed interaction protocol is used to consolidate knowledge that is produced in a mart. Knowledge consolidated in a given mart can be in turn negotiated in higher-level foreign marts. As an evaluation scenario, the proposed architecture and protocol are applied to facilitate coordination during the creation of learning objects by a distributed group of instructional designers.

Title:

SYMBOLIC MANAGEMENT OF IMPRECISION

Author(s):

Mazen EL-SAYED, Daniel PACHOLCZYK

Abstract: This paper presents a symbolic model for handling nuanced information such as "John is very tall". The model is based on a symbolic M-valued predicate logic. The first objective of this paper is to present a new representation method for handling nuanced statements of natural language containing linguistic modifiers; these modifiers are defined in a symbolic way within a multiset context. The second objective is to propose new Generalized Modus Ponens rules for dealing with nuanced statements.

Title:

LIVE-REPRESENTATION PROCESS MANAGEMENT

Author(s):

Daniel  Corkill

Abstract: We present the live-representation approach for managing and working in complex, dynamic business processes. In this approach, important aspects of business-process modeling, project planning, project management, resource scheduling, process automation, execution, and reporting are integrated into a detailed, on-line representation of planned and executing processes. This representation provides a real-time view of past, present, and anticipated process activities and resourcing. Changes resulting from process dynamics are directly reflected in the live representation, so that, at any point in time, the latest information about process status and downstream expectations is available. Managers can directly manipulate the live representation to change process structure and execution. These changes are immediately propagated throughout the environment, keeping managers and process participants in sync with process changes. A fundamental aspect of the live-representation approach is obtaining and presenting current and anticipated activities as an intrinsic and organic part of each participant's daily work. By becoming an active partner in these activities, the environment provides tangible benefits in keeping everyone informed and coordinated without adding duties and distractions. Equally important are giving individuals the flexibility to choose when and how to perform activities and allowing them to provide informative details of their progress without intruding into the details of their workdays. In this paper, we describe the technical and humanistic issues associated with the live-representation approach and summarize the experience gained in providing a commercial implementation used in the automotive and aerospace industries.

Title:

MR-BRAIN IMAGE SEGMENTATION USING GAUSSIAN MULTIRESOLUTION ANALYSIS AND THE EM ALGORITHM

Author(s):

Mohammed A-Megeed, Mohammed F. Tolba, Mostafa Gad, Tarek Gharib

Abstract: We present an MR image segmentation algorithm based on the conventional Expectation Maximization (EM) algorithm and multiresolution analysis of images. Although the EM algorithm has been used in MRI brain segmentation, as well as in image segmentation in general, it fails to utilize the strong spatial correlation between neighboring pixels. Multiresolution-based image segmentation techniques, which have emerged as a powerful method for producing high-quality segmentations, are combined here with the EM algorithm to overcome its drawbacks while retaining its advantage of simplicity. Two data sets are used to test the performance of the EM algorithm and the proposed Gaussian Multiresolution EM (GMEM) algorithm. The results, which show more accurate segmentation by the GMEM algorithm than by the EM algorithm, are presented statistically and graphically.
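As background for the proposed GMEM, the conventional EM building block can be sketched for a two-class Gaussian mixture over pixel intensities. This 1-D toy version is ours and omits the spatial and multiresolution machinery that the paper adds.

```python
# Sketch of the conventional EM step for a two-class Gaussian mixture over
# pixel intensities (simplified 1-D illustration, no spatial terms).
import numpy as np

rng = np.random.default_rng(3)
pixels = np.concatenate([rng.normal(60, 10, 4000),    # e.g. grey matter
                         rng.normal(130, 15, 6000)])  # e.g. white matter

pi, mu, sigma = np.array([.5, .5]), np.array([50., 150.]), np.array([20., 20.])
for _ in range(50):
    # E-step: posterior responsibility of each class for each pixel.
    dens = np.exp(-0.5 * ((pixels[:, None] - mu) / sigma) ** 2) / sigma
    resp = pi * dens
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: re-estimate mixture weights, means and standard deviations.
    nk = resp.sum(axis=0)
    pi = nk / len(pixels)
    mu = (resp * pixels[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (pixels[:, None] - mu) ** 2).sum(axis=0) / nk)

labels = resp.argmax(axis=1)          # hard segmentation of the intensities
print("estimated means:", mu.round(1), "weights:", pi.round(2))
```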

Title:

EPISTHEME: A SCIENTIFIC KNOWLEDGE MANAGEMENT ENVIRONMENT

Author(s):

Julia  Strauch, Jonice Oliveira, Jano Souza

Abstract: Nowadays, researchers create and exchange information faster than in the past. Although a great part of this exchange takes place in documental form, a great deal of informal or tacit knowledge is also exchanged through personal interaction. For a scientific activity to succeed, researchers must be provided with all the knowledge necessary to execute their tasks, make decisions, collaborate with one another and disseminate individual knowledge so that it can be transformed into organizational knowledge. In this context, we propose a scientific knowledge management environment called Epistheme. Its goals are: to support organizational knowledge management, to serve as a learning environment, to facilitate communication among people in the same research domain, and to unify different perspectives and expertise in a single environment. This article presents the Epistheme framework with its modules for knowledge identification, creation, validation, integration, acquisition and dissemination.

Title:

A PROCESS-CENTERED APPROACH FOR KDD APPLICATION MANAGEMENT

Author(s):

Karin Becker

Abstract: KDD is a knowledge-intensive task consisting of complex interactions, protracted over time, between a human and a (large) database, possibly supported by a heterogeneous suite of tools. Managing this complex process, its underlying activities, resources and results, is a laborious and complex task. In this paper, we present a documentation model to structure and organize the information necessary to manage a KDD application, based on the premise that documentation is important not only for better managing efforts, resources, and results, but also for capturing and reusing project and corporate experiences. The documentation model is very flexible and independent of the particular process methodology and tools applied, and its use through a supporting environment allows the capture, storage and retrieval of information at any desired level of detail, making it adaptable to any analyst profile or corporate policy. The approach presented is based on process-oriented organizational memory information systems, which aim at capturing the informal knowledge generated and used during corporate processes. The paper presents the striking features of the model and discusses its use in a real case study.

Title:

A HYBRID CASE-BASED ADAPTATION MODEL FOR THYROID CANCER DIAGNOSIS

Author(s):

Abdel-Badeeh M. Salem, Khaled A. Nagaty, Bassant Mohamed  El Bagoury

Abstract: Adaptation in Case-Based Reasoning (CBR) is a very difficult knowledge-intensive task, especially for medical diagnosis. This is due to the complexities of medical domains, which may lead to uncertain diagnosis decisions. In this paper, a new hybrid adaptation model for cancer diagnosis is developed. It combines transformational and hierarchical adaptation techniques with certainty factors (CFs) and artificial neural networks (ANNs). The model consists of a hierarchy of three phases that simulates an expert doctor's reasoning phases for cancer diagnosis: the Suspicion, To-Be-Sure and Stage phases. Each phase uses the learning capabilities of a single ANN to learn the adaptation knowledge for performing the main adaptation task. Our model first formalizes the adaptation knowledge using IF-THEN transformational rules and then maps these rules into numeric or binary vectors for training the ANN at each phase. The transformational rules of the Suspicion phase encode assigned CFs that reflect the expert doctors' degree of suspicion of cancer. The model is applied to thyroid cancer diagnosis and tested with 820 patient cases obtained from expert doctors at the National Cancer Institute of Egypt. A cross-validation test has shown a very high diagnosis performance rate, approaching 100% with an error rate of 0.53%. The hybrid adaptation model is described in the context of a prototype, Cancer-C, a hybrid expert system that integrates neural networks into the CBR cycle.

Title:

DYNAMICS OF COORDINATION IN INTELLIGENT SOCIAL MULTI-AGENTS ON AN ARTIFICIAL MARKET MODEL

Author(s):

Junko SHIBATA, Wataru SHIRAKI, Koji OKUHARA

Abstract: We propose market selection problems that take agents' preferences into consideration. The artificial market is based on the Hogg-Huberman model with a reward mechanism. Using our model, agents can not only make use of imperfect and delayed information but also take their own preferences into account in market selection. Our model includes, as a special case, the conventional model in which benefit is the only factor for selection. Finally, the dynamical behaviors of our system are investigated numerically. The simulation results show how agents' preferences and uncertainty affect market selection.

Title:

PARTIAL ABDUCTIVE INFERENCE IN BAYESIAN NETWORKS BY USING PROBABILITY TREES

Author(s):

Jose A. Gámez

Abstract: The problem of partial abductive inference in Bayesian networks is, in general, more complex to solve than other inference problems such as probability/evidence propagation or total abduction. When join trees are used as the graphical structure over which propagation is carried out, the problem can be decomposed into two stages: (1) obtaining a join tree containing only the variables included in the explanation set, and (2) solving a total abduction problem over this new join tree. In De Campos et al. (2002), different techniques for approaching this problem are studied, with the result that the methods which obtain join trees of smaller size are not always those requiring the least CPU time during the propagation phase. In this work we propose to use (exact and approximate) probability trees as the basic data structure for representing the probability distributions used during propagation. Our experiments show that the use of exact probability trees improves the efficiency of the propagation; moreover, with approximate probability trees the method obtains very good approximations while the required resources decrease considerably.

Title:

ONTOLOGY LEARNING THROUGH BAYESIAN NETWORKS

Author(s):

Mario Vento, Francesco Colace, Pasquale Foggia, Massimo De Santo

Abstract: In this paper, we propose a method for learning the ontologies used to model a domain in the field of intelligent e-learning systems. The method is based on the formalism of Bayesian networks for representing ontologies, together with a learning algorithm that obtains the corresponding probabilistic model from the results of the evaluation tests associated with the didactic contents under examination. Finally, we present an experimental evaluation of the method using real-world data.

Title:

LOGISTICS BY APPLYING EVOLUTIONARY COMPUTATION TO MULTICOMMODITY FLOW PROBLEM

Author(s):

Koji OKUHARA, Wataru SHIRAKI, Eri DOMOTO, Toshijiro TANAKA

Abstract: In this paper, we propose an application of genetic algorithms, a form of evolutionary computation, to logistics formulated as a multicommodity flow problem. We chose a multicommodity flow problem whose congestion can be evaluated by the traffic arrival ratio on each link. In simulation, we show that the proposed network control method using a genetic algorithm is superior to the usual method, which performs path selection with Dijkstra's algorithm and traffic control with the gradient method.
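A minimal sketch of GA-based path selection for a toy multicommodity instance may help: each gene chooses one candidate path for one commodity, and fitness penalizes the load of the busiest link. The network, demands and GA parameters are invented and do not reproduce the authors' formulation.

```python
# Sketch of GA-based path selection for a toy multicommodity flow problem
# (our illustration; network, demands and parameters are invented).
import random

random.seed(7)
# Candidate paths per commodity, each path given as a list of links.
candidates = [
    [["A-B"], ["A-C", "C-B"]],              # commodity 0, demand 1.0
    [["A-B"], ["A-D", "D-B"]],              # commodity 1, demand 1.0
    [["A-C", "C-B"], ["A-D", "D-B"]],       # commodity 2, demand 1.0
]

def max_link_load(genome):
    load = {}
    for paths, gene in zip(candidates, genome):
        for link in paths[gene]:
            load[link] = load.get(link, 0) + 1.0
    return max(load.values())               # congestion = busiest link

def offspring(p1, p2):
    child = [random.choice(g) for g in zip(p1, p2)]      # uniform crossover
    i = random.randrange(len(child))                     # point mutation
    child[i] = random.randrange(len(candidates[i]))
    return child

pop = [[random.randrange(len(c)) for c in candidates] for _ in range(20)]
for _ in range(50):
    pop.sort(key=max_link_load)
    pop = pop[:10] + [offspring(random.choice(pop[:10]), random.choice(pop[:10]))
                      for _ in range(10)]

best = min(pop, key=max_link_load)
print("best assignment:", best, "max link load:", max_link_load(best))
```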

Title:

TOOL FOR AUTOMATIC LEARNING OF BAYESIAN NETWORKS FROM DATABASES: AN APPLICATION IN THE HEALTH AREA

Author(s):

Cristiane Koehler

Abstract: The process of learning Bayesian networks is composed of two stages: learning the topology and learning the parameters associated with that topology. Currently, one of the most important lines of research in Artificial Intelligence is the development of efficient inference techniques for use in intelligent systems. However, such techniques require a valid knowledge model to be available. The need to extract knowledge from databases is increasing exponentially: more and more, the amount of information exceeds the analysis capacity of traditional methods, which do not analyse the information from a knowledge perspective. It is therefore necessary to develop new techniques and tools for extracting knowledge from databases. In this article, concepts of data mining and knowledge discovery based on Bayesian network technology are used to extract valid knowledge models. Several Bayesian learning algorithms were studied, and problems were found, mainly in generating the network topology with all the variables available in the database. The application domain of this research is the health area, where it was observed that, in clinical practice, experts reason only with the variables most important to the decision at hand. After analysing several algorithms, a new algorithm is proposed that extracts Bayesian models using only the most relevant variables in the construction of the network topology.

Title:

COMPUTER GAMES AND ECONOMICS EXPERIMENTS

Author(s):

Kay-Yut Chen, Ren Wu

Abstract: HP Labs has developed a software platform, called MUMS, for moderating economics games between human and/or robot participants. The primary feature of this platform is a flexible scripting language that allows a researcher to implement any economics game in a relatively short time. This scripting language eliminates the need to program low-level functions such as networking, databases and interface components. The scripts are descriptions of games, including definitions of roles, timing rules, the game tree (in a stage format), and input and output (with respect to a role, not the client software). Definitions of variables and the use of common mathematical and logical operations are also allowed, to provide maximum flexibility in handling the logic of games. This platform has been used to implement a wide variety of business-related games, including variations of a retailer game with simulated consumers and complex business rules, a double-sided call market, and negotiation in a procurement scenario. These games are constructed to accurately simulate HP business environments. Carefully calibrated experiments, with human subjects whose incentives were controlled by monetary compensation, were conducted to test how different business strategies result in different market behavior. For example, the retailer game was used to test how the market reacts to changes in HP's contract terms, such as return policies. Experiment results were used in major HP consumer businesses to make policy decisions.

Title:

MINING WEB USAGE DATA FOR REAL-TIME ONLINE RECOMMENDATION

Author(s):

Stephen Rees, Mo Wang

Abstract: A user's browser history contains a great deal of information about the relationships between web pages and users. If this information can be fully exploited, it can provide better knowledge about users' online behaviour and support better customer service and site performance. In this paper, an online recommendation model based on web usage data is proposed. A special data structure for storing the discovered item sets is described; this data structure is especially suitable for online, real-time recommendation systems. Users are first classified using a neural network algorithm. Then, within each group, an association rule algorithm is employed to discover common user profiles. In this process, the web sections users are interested in are traced and modeled. Multiple support levels for different types of page views and varying window sizes are also considered. Finally, recommendation sets are generated based on the user's active session. A demo website is provided to demonstrate the proposed model.
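The special item-set structure is not described in the abstract, so the sketch below illustrates only the recommendation idea, using plain pairwise co-occurrence counts and a confidence threshold; the session data, threshold and scoring are our assumptions.

```python
# Sketch of session-based recommendation from pairwise page co-occurrence
# counts, a simplified stand-in for the paper's association-rule mining.
from collections import defaultdict
from itertools import combinations

sessions = [["home", "laptops", "cart"],
            ["home", "laptops", "accessories"],
            ["home", "phones"],
            ["laptops", "accessories", "cart"]]

cooc = defaultdict(int)
support = defaultdict(int)
for s in sessions:
    for page in set(s):
        support[page] += 1
    for a, b in combinations(sorted(set(s)), 2):
        cooc[(a, b)] += 1
        cooc[(b, a)] += 1

def recommend(active_session, min_conf=0.5):
    """Rank pages by confidence(visited -> candidate), excluding visited."""
    scores = defaultdict(float)
    for page in active_session:
        for (a, b), n in cooc.items():
            if a == page and b not in active_session:
                scores[b] = max(scores[b], n / support[a])
    return [p for p, c in sorted(scores.items(), key=lambda x: -x[1])
            if c >= min_conf]

print(recommend(["home", "laptops"]))   # e.g. ['accessories', 'cart']
```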

Title:

TEXT MINING FOR ORGANIZATIONAL INTELLIGENCE

Author(s):

Hercules do Prado, Edilberto  Silva, Edilson Ferneda

Abstract: This article presents a case study on the creation of organisational intelligence in a Brazilian news agency (Radiobras) through the application of text mining tools. Starting from the question of whether Radiobras is fulfilling its social role, we constructed an analysis model based on the enormous volume of texts produced by its journalists. The CRISP-DM method was applied, including the acquisition of the news produced during 2001, the preparation of this material with the cleansing and formatting of the archives, the creation of a clustering model, and the generation of many views. The views were supplied to the administration of the company, allowing it to develop more accurate self-knowledge. Radiobras is an important company of the Brazilian State that disseminates the acts of the public administration and needs a self-evaluation based on knowledge of its results. Like any other company, Radiobras is subject to the increasing demands for competitiveness imposed on modern organisations. In this setting, the generation and retention of organisational intelligence have been recognised as a competitive differential that can lead to more adequate management of the business, including its relationships with customers and the adequacy of its work structure. The importance of information for the elaboration of knowledge and, consequently, the synthesis of intelligence is widely recognised; it requires proper treatment to reach insights that can activate the mental processes leading to that synthesis. Many internal and external views of the organisation can be built with tools for extracting patterns from large amounts of data, decisively assisting managers in the decision-making process. These views, constructed to answer specific questions, constitute knowledge in a process of organisational learning that radically influences the way the organisation is managed. The contributions of IT in this field were initially developed for the extraction of patterns from transactional databases containing well-structured data; however, considering that most of the information in organisations is found in textual form, recent developments allow the extraction of interesting patterns from this type of data as well. Some patterns extracted in our case study are: (i) measures of the production and geographic distribution of Radiobras news, (ii) a survey of the most used words, (iii) the discovery of the coverage areas of the news, (iv) an evaluation of how the company is fulfilling its role, according to the subjects covered in its news, and (v) an evaluation of the journalistic coverage of the company.

Title:

STAR – A MULTIPLE DOMAIN DIALOG MANAGER

Author(s):

Márcio Mourão, Nuno Mamede, Pedro Madeira

Abstract: In this work we propose to achieve not only a dialogue manager for a single domain, but also the aggregation of multiple domains in the same dialogue management system. With this in mind, we have developed a dialogue manager that consists of five modules. One of them, the Task Manager, deserves special attention. Each domain is represented by a frame, which is in turn composed of slots and rules. Slots define the relationships among the domain data, and rules define the system's behavior. Rules are composed of operators (logical, conditional, and relational) and functions that can reference frame slots. The use of frames makes it possible for all the remaining modules of the dialogue manager to be domain independent. This is, beyond any doubt, a step ahead in the design of conversational systems.
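A minimal sketch of the frame idea, with slots holding domain data and rules driving behavior, might look as follows; the slot names, rule format and bus-timetable domain are invented, not taken from STAR.

```python
# Sketch of a frame with slots and rules for a domain-independent dialogue
# manager (our illustration of the frame idea).
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Frame:
    domain: str
    slots: dict = field(default_factory=dict)        # slot name -> value
    rules: list = field(default_factory=list)        # (condition, action)

    def add_rule(self, condition: Callable, action: Callable):
        self.rules.append((condition, action))

    def step(self):
        """Fire the first rule whose condition holds for the current slots."""
        for condition, action in self.rules:
            if condition(self.slots):
                return action(self.slots)
        return "How can I help you?"

bus = Frame("bus-timetable", {"origin": None, "destination": None})
bus.add_rule(lambda s: s["origin"] is None,
             lambda s: "Where are you leaving from?")
bus.add_rule(lambda s: s["destination"] is None,
             lambda s: "Where are you going?")
bus.add_rule(lambda s: True,
             lambda s: f"Looking up buses {s['origin']} -> {s['destination']}.")

print(bus.step())                     # asks for the origin
bus.slots["origin"] = "Setubal"
bus.slots["destination"] = "Lisboa"
print(bus.step())                     # performs the domain action
```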

Title:

REQUIREMENTS OF A DECISION SUPPORT SYSTEM FOR CAPACITY ANALYSIS AND PLANNING IN ENTERPRISE NETWORKS

Author(s):

Américo Azevedo, Abailardo Moreira

Abstract: Capacity analysis and planning is a key activity in the provision of adequate customer service levels and the management of the company's operational performance. Traditional capacity analysis and planning systems have become inadequate in the face of several emerging manufacturing paradigms. One such paradigm is production in distributed enterprise networks, consisting of subsets of autonomous production units within supply chains working in a collaborative and coordinated way. In these distributed networks, capacity analysis and planning becomes a complex task, especially because it is performed in a heterogeneous environment where the performance of individual manufacturing sites and of the network as a whole must be considered simultaneously. Therefore, the use of information system solutions is desirable in order to support effective and efficient planning decisions. Nevertheless, there seems to be no clear definition of the most important requirements that must be met by supporting solutions. This paper attempts to identify some general requirements of a decision support system for capacity analysis and planning in enterprise networks. Adaptability of capacity models, computational efficiency, monitoring mechanisms, support for distributed order promising, and integration with other systems are some of the important requirements identified.

Title:

A SUBSTRATE MODEL FOR GLOBAL GUIDANCE OF SOFTWARE AGENTS

Author(s):

Guy Gouardères, Nicolas Guionnet

Abstract: We try to understand how large groups of software agents can be given the means to achieve global tasks while their view of the situation is only local (reduced to a neighbourhood). To understand the duality between local abilities and global constraints, we introduce a formal model. We use it to evaluate the possible existence of an absolute criterion by which a local agent can detect global failure (in order to change the situation). The study of a sample of examples shows that such a criterion does not always exist, and when it does exist, it is often too global for local agents to apply (it demands too large a field of view to be employed). That is why we leave, for a moment, the sphere of absolute criteria to look for something more flexible. We propose a tool for domain globalisation inspired by continuous physical phenomena: if the domain is too partitioned, we can add a propagation layer that lets the agents access data concerning the global state. This layer can be a pure simulation of the wave or heat equations, or an exotic generalisation of them. We applied the concept to a maze obstruction problem.
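The propagation layer can be illustrated with an explicit heat-equation iteration on a grid: a condition injected at one cell diffuses until agents reading only their own cell can sense it. The grid size, diffusion constant and periodic boundaries (via np.roll) are our sketch choices, not the authors' model.

```python
# Sketch of a heat-equation propagation layer on a grid: a global event
# injected at one cell diffuses until local agents can sense it anywhere.
import numpy as np

N, alpha = 20, 0.2                 # grid size, diffusion coefficient
field = np.zeros((N, N))
field[0, 0] = 100.0                # global event happens far from most agents

for _ in range(500):               # explicit finite-difference diffusion step
    lap = (np.roll(field, 1, 0) + np.roll(field, -1, 0) +
           np.roll(field, 1, 1) + np.roll(field, -1, 1) - 4 * field)
    field += alpha * lap           # (np.roll gives periodic boundaries,
    field[0, 0] = 100.0            #  which is fine for a sketch)

# An agent in the middle of the grid reads only its own cell, yet senses
# the distant event through the propagation layer.
print(f"signal sensed at (10, 10): {field[10, 10]:.4f}")
```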

Title:

APPLYING CASE-BASED REASONING TO EMAIL RESPONSE

Author(s):

Luc Lamontagne

Abstract: In this paper, we describe a case-based reasoning approach for the semi-automatic generation of responses to email messages. This task poses some challenges from a case-based reasoning perspective, especially regarding the precision of the retrieval phase and the adaptation of textual cases. We are currently developing an application for the investor relations domain. This paper discusses how some particularities of the domain corpus, such as the presence of multiple requests in incoming email messages, can be addressed by inserting natural language processing techniques into different phases of the reasoning cycle.

Title:

THE INNOVATION PLANNING TASK FOR PRODUCTS AND SERVICES

Author(s):

Alfram Albuquerque, Marcelo Barros, Agenor Martins, Edilson Ferneda

Abstract: Innovation is crucial for business competitive intelligence and the knowledge-based society. In this context, companies tend to base their activities on the efficiency of their processes for supporting the innovation of products and services. Knowledge-based systems should leverage the innovation process and its planning by storing internal and external user information. In this paper, the authors detail this innovation process by presenting and discussing an architecture for the task of user support oriented to the innovation planning process. The proposed architecture is based on QFD, a methodology that translates the client's voice into engineering requisites for products and services. Our methodological proposal increases efficiency by integrating both knowledge-based processes (KBPs) and mechanical processes (MPs) used to transform quality specifications or requisites into engineering requirements.

Title:

DECISIO: A COLLABORATIVE DECISION SUPPORT SYSTEM FOR ENVIRONMENTAL PLANNING

Author(s):

Julia Strauch, Manuel de Castro, Jano de Souza

Abstract: Environmental planning projects often face problems such as difficulties in managing spatial data as a component of the process, lack of coordination among the different areas, difficulties of knowledge access, badly defined decision processes, and absence of documentation of the entire process and its relevant data. Our proposal is a web-based system that provides a workflow tool to design and execute the decision process, together with group decision support tools that help decision makers find similar solutions, analyze and prioritize alternatives, and interact with one another. The main goals of the proposal are to document the environmental process and data, to provide tools supporting collaboration, conflict management and alternative analysis, and to make available previous similar cases, both successful and failed. These functionalities have their human-computer interaction adapted to incorporate spatial data manipulation and geo-referencing. The tool is being used in agro-meteorological projects with the purpose of improving the effectiveness and efficiency of the decision process and its results, maximizing profit and preserving natural resources.

Title:

CLASSIFYING DATABASES BY K-PROPAGATED SELF-ORGANIZING MAP

Author(s):

Takao Miura, Taqlow Yanagida, Isamu Shioya

Abstract: In this investigation, we discuss classifying databases by means of neural networks. In particular, we introduce the k-propagated Self-Organizing Map (SOM), which incorporates a neighborhood learning mechanism, and we show the feasibility of this approach. We also evaluate the tool from the viewpoint of statistical tests.
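For orientation, the standard SOM update with a neighborhood function is sketched below; the paper's k-propagation variant is not specified in the abstract, so this shows only the baseline mechanism it builds on.

```python
# Sketch of the basic SOM update with neighborhood propagation
# (our illustration of standard SOM learning, toy data).
import numpy as np

rng = np.random.default_rng(5)
data = rng.uniform(size=(500, 3))            # toy records with 3 attributes
grid = rng.uniform(size=(8, 8, 3))           # 8x8 map of weight vectors

for t in range(2000):
    lr = 0.5 * (1 - t / 2000)                # decaying learning rate
    radius = 3.0 * (1 - t / 2000) + 0.5      # decaying neighborhood radius
    x = data[rng.integers(len(data))]
    # Best-matching unit: the node whose weights are closest to the sample.
    dist = np.linalg.norm(grid - x, axis=2)
    bi, bj = np.unravel_index(dist.argmin(), dist.shape)
    # Pull the BMU and its grid neighbors toward the sample.
    ii, jj = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
    h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * radius ** 2))
    grid += lr * h[:, :, None] * (x - grid)

print("trained map shape:", grid.shape)
```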

Title:

MAKE OR BUY EXPERT SYSTEM (MOBES): A KNOWLEDGE-BASED DECISION SUPPORT TOOL TO MAXIMISE STRATEGIC ADVANTAGE

Author(s):

Noornina Dahlan, Ai Pin Lee, Reginald Theam Kwooi See, Teng Hoon Lau, Eng Han Gan

Abstract: This paper presents a knowledge-based tool that aids strategic make-or-buy decisions, which are key components in enhancing an organization's competitive position. Most companies have no firm basis for evaluating the make-or-buy decision, and thereby use inaccurate costing analyses for sourcing strategies, which are directly responsible for the flexibility, customer service quality, and core competencies of an organization. As a result, a prototype of the Make or Buy Expert System (MOBES) with multi-attribute analytic capability has been developed. The proposed model comprises four main dimensions: identification and weighting of performance categories; analysis of technical capability categories; comparison of retrieved internal and external technical capability profiles; and analysis of supplier categories. This model aims to enable an organisation to enhance its competitiveness by improving its decision-making process as well as leveraging its key internal resources to move further forward in its quest for excellence.

Title:

AGENT TECHNOLOGY FOR DISTRIBUTED ORGANIZATIONAL MEMORIES: THE FRODO PROJECT

Author(s):

Ludger  van Elst, Andreas Abecker, Ansgar Bernardi

Abstract: Comprehensive approaches to knowledge management in the modern enterprise are confronted with scenarios which are heterogeneous, distributed, and dynamic by nature. Pro-active satisfaction of information needs across intra-organizational boundaries requires dynamic negotiation of shared understanding and adaptive handling of changing and ad-hoc task contexts. We present the notion of a Distributed Organizational Memory (DOM) as a meta-information system with multiple ontology-based structures and a workflow-based context representation. We argue that agent technology offers the software basis necessary to realize DOM systems. We sketch a comprehensive Framework for Distributed Organizational Memories which enables the implementation of scalable DOM solutions and supports the principles of agent-mediated knowledge management.

Title:

USING THE I.S. AS A (DIS)ORGANIZATION GAUGE

Author(s):

Pedro Araujo, Pedro Mendes

Abstract: The textile and garment industry in Portugal is struggling, largely because many companies lack organization. This situation, together with the increasing dynamics of products and markets, considerably complicates decision-making, and information systems can be a precious aid. But unlike academics, managers must be shown evidence of the advantages of using information technology. To help attain this objective, we propose the definition of an index quantifying the level of disorganization of the company's productive sector. Continuously using the information system to monitor this index allows managers to improve the performance of the company's operations.

Title:

HELPING USERS TO DISCOVER ASSOCIATION RULES: A CASE IN SOIL COLOR AS AGGREGATION OF OTHER SOIL PROPERTIES

Author(s):

Manuel Sanchez-Marañon, Jose-Maria Serrano, Gabriel Delgado, Julio Calero, Daniel Sanchez, Maria-Amparo Vila

Abstract: As the size of commercial and scientific databases increases dramatically, with little control over the overall application of this huge amount of data, knowledge discovery techniques are needed in order to obtain relevant and useful information that can be properly used later. Data mining tools, such as association rules and approximate dependencies, have proven effective and useful when users are looking for implicit or non-intuitive relations between data. The main current disadvantage of rule-extraction algorithms is the sometimes excessive number of results obtained. Since human expert aid is needed to interpret the results, a very interesting task is to ease the expert's work. A user interface and a knowledge discovery management system would provide a comfortable way to easily sort rules according to their utility. An example of this necessity is shown in a case involving soil color as an aggregation of other soil properties and as an interesting descriptor of soil-forming processes.

Title:

PRODUCTION ACTIVITY CONTROL USING AUTONOMOUS AGENTS

Author(s):

Eric Gouardères, Mahmoud Tchikou

Abstract: The need for adaptability in production structures is continuously increasing due to shorter product life cycles and growing competition. The efficiency of a production system is now described not only in terms of cycle time, due dates and inventory levels, but also in terms of flexibility and reactivity, in order to keep pace with the evolution of the market. Current methods for real-time control of production systems do not provide sufficient tools for effective production activity control; the origin of this problem lies in the existing control structures. This work details the design of a production activity control system based on a distributed structure, built on distributed artificial intelligence concepts. After introducing the context and the reasoning behind this work, we describe the different parts of our multi-agent model. Lastly, we illustrate the approach on a practical example of a production cell.

Title:

HUMAN IRIS TEXTURE SEGMENTATION ALGORITHM BASED ON WAVELET THEORY

Author(s):

Taha El-Arief, Nahla El-Haggar, M. Helal

Abstract: Iris recognition is an exceptionally accurate biometric technology that relies on stable and distinctive features for personal identification. For iris classification it is important to isolate the iris pattern by locating its inner (pupil) and outer (limbus) boundaries. This paper presents a texture segmentation algorithm for segmenting the iris from the human eye in a more accurate and efficient manner. A quad-tree wavelet transform is first constructed to extract the texture features. The fuzzy c-means (FCM) algorithm is then applied to the quad tree in a coarse-to-fine approach. Finally, the results demonstrate the method's potential usefulness.
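The FCM iteration at the heart of the approach can be sketched generically; this 1-D version over synthetic feature values is ours and leaves out the quad-tree wavelet features the paper actually clusters.

```python
# Sketch of the fuzzy c-means (FCM) iteration (generic 1-D illustration).
import numpy as np

rng = np.random.default_rng(9)
x = np.concatenate([rng.normal(2, .4, 100), rng.normal(7, .6, 100)])  # features
c, m = 2, 2.0                                   # clusters, fuzzifier
centers = rng.uniform(x.min(), x.max(), c)

for _ in range(50):
    d = np.abs(x[:, None] - centers) + 1e-12          # point-center distances
    u = 1.0 / (d ** (2 / (m - 1)))                    # membership degrees
    u /= u.sum(axis=1, keepdims=True)
    centers = (u ** m * x[:, None]).sum(0) / (u ** m).sum(0)  # update centers

print("cluster centers:", np.sort(centers).round(2))  # ~ [2.0, 7.0]
```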

Title:

AN EXPERT SYSTEM FOR CREDIT MANAGEMENT FOLLOW-UP

Author(s):

Nevine Labib, Ezzat Korany, Hamdy Latif, Mohamed Abderabu

Abstract: Commercial risk assessment has become a major concern of banks, since they face severe losses from unrecoverable credit. The proposed system is an expert system prototype for credit management follow-up. The system uses a rule-based inference mechanism. The knowledge was obtained from experts working in six commercial Egyptian banks. The system starts by following up the granted loan; if the customer refrains from paying, it calculates his credit rating, and if the rating is bad, it analyzes the causes of the problem and accordingly takes the suitable remedial action. When tested, the system proved to be efficient.

Title:

APPLICATION OF GROUP METHOD OF DATA HANDLING TO VIRTUAL ENVIRONMENT SIMULATOR

Author(s):

Wataru SHIRAKI

Abstract: In this paper, we propose a decision support system that selects the most useful development plan, from two or more candidate plans, for the preservation of the natural environment and target species. For this purpose, after recognizing the environmental situation and the impacts among environmental factors where the species exist, we select a sustainable development plan based on the evaluation and prediction of an environmental assessment, reconstructing the dynamics in computer simulation. We then present a hybrid system using artificial life techniques, namely cellular automata and the group method of data handling, which can be applied to environmental assessment. The results of a numerical example show that the proposed system approximates coefficients with sufficient accuracy when the structure of a model is known, and that near-future dynamics can be predicted even when the structure of the model is unknown.

Title:

AN EFFICIENT CLASSIFICATION AND IMAGE RETRIEVAL ALGORITHM BASED ON ROUGH SET THEORY

Author(s):

Jafar Mohammed, Aboul Ella Hassanien

Abstract: With an enormous amount of image data stored in databases and data warehouses, it is increasingly important to develop powerful tools for analysing such data and mining interesting knowledge from it. In this paper, we study the classification problem for image databases and provide an algorithm for classifying and retrieving image data in the context of the Rough Set methodology. We present an efficient distance function, the quadratic distance, which works efficiently for image retrieval. We also demonstrate that, by choosing a useful subset of rules based on a simple decision table, the algorithm achieves high classification accuracy.
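A quadratic-form distance of the kind the abstract mentions is commonly written d(x, y) = sqrt((x - y)^T A (x - y)), with A encoding cross-bin similarity of feature histograms. The sketch below is a generic illustration; the paper's actual similarity matrix is not given.

```python
# Sketch of a quadratic-form distance between feature histograms
# (our generic illustration; the paper's exact A matrix is not specified).
import numpy as np

def quadratic_distance(x, y, A):
    diff = x - y
    return float(np.sqrt(diff @ A @ diff))

# Similarity matrix: neighboring histogram bins count as partially similar.
n = 4
A = np.array([[max(0.0, 1 - abs(i - j) / 2) for j in range(n)] for i in range(n)])

h1 = np.array([0.7, 0.3, 0.0, 0.0])   # two color histograms whose mass sits
h2 = np.array([0.0, 0.0, 0.3, 0.7])   # in clearly different bins
h3 = np.array([0.6, 0.4, 0.0, 0.0])   # close to h1

print(quadratic_distance(h1, h2, A))  # large: histograms differ strongly
print(quadratic_distance(h1, h3, A))  # small: histograms are similar
```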

Title:

USING SPECIALIZED KNOWLEDGE IN AUTOMATED WEB DOCUMENT SUMMARIZATION

Author(s):

Zhiping Zheng

Abstract: Automated text summarization is a natural language processing task that generates short, concise, and comprehensive descriptions of the essential content of documents. This paper describes some new features of a real-time automated web document summarization system used in the Seven Tones Search Engine, a search engine specialized in linguistics and languages. The main feature of this system is the use of algorithms designed specifically for web pages in a specific knowledge domain in order to improve the quality of summarization. It also considers the unique characteristics of search engines; in particular, linguistic features are very important for linguistics documents. The documents are assumed to be either HTML or plain text. A good HTML parser strongly affects summarization quality, even though it is not part of the summarization algorithm itself.

Title:

A NEW APPROACH TO DATA MINING

Author(s):

Stéphane Prost, Claude Petit

Abstract: This paper describes a trajectory classification algorithm (each trajectory is defined by a finite number of values) which gives, for each class of trajectories, a characteristic trajectory: the meta-trajectory. Pathological trajectories are removed by the algorithm. Classes are built by an ascending method: two classes are built, then three, and so on, since a partition containing n classes allows building a partition with n+1 classes. For each class a meta-trajectory is determined (for example, the centre of gravity). The number of classes depends on the minimum number of trajectories allowed per class and on a user-given parameter, which is compared with the inter-class inertia gain; other dispersion measures may be chosen.

Title:

EXPERIENCE MANAGEMENT IN THE WORK OF PUBLIC ORGANIZATIONS: THE PELLUCID PROJECT

Author(s):

Simon LAMBERT, Sabine DELAITRE, Gianni VIANO, Simona STRINGA

Abstract: One of the major issues in knowledge management for public organisations is the organisational mobility of employees, that is, the continual movement of staff between departments and units. As a consequence, the capture, capitalisation and reuse of experience become very important. In the PELLUCID project, three general scenarios have been identified from studies of the pilot application cases: contact management, document management and critical timing management. These scenarios are outlined, and a corresponding approach to experience formalisation is described. Requirements on a technical solution able to support experience management are also set out.

Title:

USING GRAMMATICAL EVOLUTION TO DESIGN CURVES WITH A GIVEN FRACTAL DIMENSION

Author(s):

Manuel Alfonseca, Alfonso Ortega, Abdel Dalhoum

Abstract: Lindenmayer grammars have been applied to represent fractal curves. In this work, Grammatical Evolution is used to automatically generate and evolve Lindenmayer grammars that represent curves whose fractal dimension approximates a pre-defined target value. For many dimensions, this is a non-trivial task to perform manually. The procedure parallels biological evolution, acting through three different levels: a genotype (a vector of integers subject to random modifications across generations), a protein-like intermediate level (a Lindenmayer grammar with a single rule, generated from the genotype by a transformation algorithm), and a phenotype (the fractal curve).
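The phenotype-producing step, expanding a single-rule Lindenmayer grammar, can be sketched as follows. The rule shown is the classic quadratic Koch rule (dimension log 5 / log 3, approximately 1.465), chosen for illustration; it is not a grammar evolved by the authors.

```python
# Sketch of expanding a single-rule Lindenmayer grammar (L-system),
# the step that turns a grammar into a curve (our illustration).
def expand(axiom, rule, generations):
    """Rewrite every 'F' in parallel using the given production rule."""
    s = axiom
    for _ in range(generations):
        s = "".join(rule if ch == "F" else ch for ch in s)
    return s

# Quadratic Koch curve: F -> F+F-F-F+F ('+'/'-' are 90-degree turtle turns).
# Five copies at scale 1/3 give fractal dimension log 5 / log 3 ~ 1.465.
curve = expand("F", "F+F-F-F+F", 3)
print(len(curve), curve[:40], "...")
```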

Title:

DETECTION OF CARDIAC ARRHYTHMIAS BY NEURAL NETWORKS

Author(s):

Noureddine Belgacem, F. Reguig, M. Chikh

Abstract: The classification of heart beats is important for automated arrhythmia monitoring devices. This study describes a neural classifier for the identification and detection of cardiac arrhythmias in surface electrocardiograms (ECGs). Traditional features for the classification task are extracted by analyzing the heart rate and the morphology of the QRS complex and P wave of the ECG signal. The performance of the classifier is evaluated on the MIT-BIH database. The method achieved a sensitivity of 94.60% and a specificity of 96.49% in discriminating six classes.

Area 3 - INFORMATION SYSTEMS ANALYSIS AND SPECIFICATION

Title:

CODE GENERATION FOR DISTRIBUTED SYSTEMS

Author(s):

Ralf Gitzel, Markus Aleksy

Abstract: Due to the complexity of distributed code, as opposed to the ease with which the corresponding designs can be described graphically, interest in code generators that create applications from abstract system descriptions is high; the many commercial products are an indicator of this. This paper explores the theoretical foundations of code generation for distributed systems with regard to data structures and template language syntax. Several existing approaches are analysed and a new hybrid-form data structure is proposed. The goal of this paper is an adaptable, middleware-independent way to produce software with minimal hand-written code.
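As a minimal illustration of template-driven generation from an abstract description, the sketch below fills string templates to emit a Java-style remote interface; the template language, data structure and service description are our invented stand-ins, not the paper's hybrid-form data structure.

```python
# Sketch of template-based code generation from an abstract system
# description, using Python's built-in string templates (our illustration).
from string import Template

interface_template = Template("""\
public interface ${name}Service {
${methods}
}""")

method_template = Template("    ${rtype} ${mname}(${params});")

# Abstract description of one remote service (invented example).
service = {
    "name": "Order",
    "methods": [
        {"rtype": "OrderStatus", "mname": "getStatus", "params": "long orderId"},
        {"rtype": "void", "mname": "cancel", "params": "long orderId"},
    ],
}

methods = "\n".join(method_template.substitute(m) for m in service["methods"])
print(interface_template.substitute(name=service["name"], methods=methods))
```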

Title:

SOFTWARE DEVELOPMENT GUIDED BY MODELS - THE XIS UML PROFILE

Author(s):

Miguel Luz, Alberto Silva

Abstract: UML is used to detail high-level software specifications that are translated into XMI and XIS (XML Information Systems) as interchange formats based on XML. UML and XML are expected to be the next generation of modeling and data interchange standards, respectively. In this paper, we describe the UML profile for the XIS architecture as a proposal for software development guided by UML models. The XIS system is based on a multi-phase generative programming approach, starting from high-level UML models and ending in software artifacts (such as Java code and SQL scripts), passing through different representations, namely OMG's XMI and our XIS-specific XML vocabulary. The main contribution of this paper is the overview of the XIS system and the proposal and discussion of the XIS UML profile.

Title:

KEY ISSUES IN INFORMATION SYSTEMS AND SOFTWARE ENGINEERING - VIEWS FROM A JOINT NETWORK OF PRACTITIONERS AND ACADEMICS

Author(s):

M. Ramage, D. Targett, Kecheng Liu, R. Harrison, D. Avison, K. Bennett, R. Bishop

Abstract: SEISN (Software Engineering and Information Systems Network), a research project supported by the British research council EPSRC, aims to promote mutual understanding between the two research communities and practitioners. The network focuses on the exchange of ideas and enables these communities to clarify their beliefs and present experiences, findings and views. This paper summarizes the work of this research network, and investigates where there is common ground between the IS and SE communities and practitioners and where differences remain. Through discussion of the key issues, the paper shows future directions for research in software engineering and information systems.

Title:

SUPPORTING DELIBERATION PROCESS MECHANISM IN SOFTWARE SYSTEMS DEVELOPMENT

Author(s):

Osman Ebrahim, Ranai Elgohary, Ahmed Hamad

Abstract: A model for providing automated support for the deliberation process inherent in software requirements engineering is proposed. The model provides the representation and formal mechanisms that support stakeholders in evaluating the available alternatives and choosing among them, based on specified criteria, before the final decision. The same mechanism is used to quantify and formalize the independent judgment of each stakeholder and then to combine these individual judgments into a decision that expresses the group's final choice. A database capable of representing and encompassing this large amount of process knowledge, in a way close to the conceptual data model, has been designed. The model also provides a representation mechanism for capturing and recording design rationale. This can assist in design replay or justification of decisions, as well as providing an important history trail for management reference. The developed model is applied and validated in a software requirements engineering case study of an Air Traffic Control (ATC) system.
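
A weighted-sum aggregation is one simple way to combine individual judgments into a group decision. The Python sketch below uses invented stakeholders, weights and scores; it illustrates the general idea, not the paper's formal mechanism.

    # A minimal sketch of aggregating stakeholder judgments; data is illustrative.
    scores = {                         # alternative -> stakeholder -> score (0..10)
        "COTS package": {"pilot": 7, "controller": 5, "engineer": 8},
        "custom build": {"pilot": 6, "controller": 9, "engineer": 4},
    }
    weights = {"pilot": 0.5, "controller": 0.3, "engineer": 0.2}

    def group_score(judgments):
        """Weighted sum of independent stakeholder scores."""
        return sum(weights[s] * v for s, v in judgments.items())

    for alt in scores:
        print(alt, round(group_score(scores[alt]), 2))
    print("group decision:", max(scores, key=lambda alt: group_score(scores[alt])))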

Title:

ANALYSIS ON RELATION BETWEEN SERVICE PARAMETERS FOR SERVICE LEVEL MANAGEMENT AND SYSTEM UTILIZATION

Author(s):

Norihisa Komoda, Shoji Konno, Masaharu Akatsu

Abstract: Accompanying the rise of IT service providers such as ASPs (Application Service Providers) and iDCs (internet Data Centers), it has become common to define the quality of information systems in an SLA (Service Level Agreement). The true goal of an SLA is to guarantee a valuable service to users. Hence, it is desirable to identify the service parameters that are most highly related to user satisfaction; the relevant parameters differ from system to system. We focus on system utilization to select the parameters. Our hypothesis is that, by investigating the characteristics of system utilization, we can statistically predict the parameters that are most critical to the satisfaction of the system users. In this paper, we examine parameters for availability and responsiveness, which are known as the two major factors in SLAs. First, we provide three parameters each for availability and responsiveness. Next, we prepare a questionnaire about system utilization. To analyze the relation between service parameters and system utilization, we had several experienced system engineers answer the questions for each system they had developed. Quantification theory type II is applied for the analysis, and the validity of our hypothesis is demonstrated. We also clarify the characteristics of systems that emphasize each service parameter.

Title:

ALIGNING AN ENTERPRISE SYSTEM WITH ENTERPRISE REQUIREMENTS: AN ITERATIVE PROCESS

Author(s):

Pnina Soffer

Abstract: Aligning an off-the-shelf software package with the business processes of the enterprise implementing it is one of the main problems in the implementation of enterprise systems. The paper proposes an iterative alignment process, which takes a requirement-driven approach. It benefits from reusing business process design without being restricted by predefined solutions and criteria. The process employs an automated matching between a model of the enterprise requirements and a model of the enterprise system capabilities. It identifies possible matches between the two models and evaluates the gaps between them despite differences in their completeness and detail level. Thus it provides the enterprise with a set of feasible combinations of requirements that can be satisfied by the system as a basis for making implementation decisions. The automated matching is applied iteratively, until a satisfactory solution is found. Object Process Methodology (OPM) is applied for modeling both the system and the enterprise requirements, which are inputs for the automated matching. The alignment process has been tested in an experimental study, whose encouraging results demonstrate its ability to provide a satisfactory solution to the alignment problem.

Title:

TRACKING BUSINESS RULE EVOLUTION TO SUPPORT IS MAINTENANCE

Author(s):

Marko Bajec

Abstract: Business rules describe how organisations do business. Their value has also been recognised within the information system (IS) domain, mostly because of their ability to make applications flexible and amenable to change. In this paper we argue that business rules can be used as a link between organisations and their ISs. We show that business rules originate in organisations and that many business rules are explicitly or implicitly captured in enterprise models. We advocate, based on our research work, that if business rules are managed in an appropriate manner they can help keep an IS aligned and consistent with the business environment. In the paper we propose a business rule management scenario for managing business rules from an organisational perspective. The scenario recognises business rule management as an interface between enterprise modelling and IS development and maintenance.

Title:

DESIGN AND REALIZATION OF POWER PLANT SUPERVISORY INFORMATION SYSTEM (SIS) BASED ON INFI 90

Author(s):

Guozhong Zhang, Zhen Ye

Abstract: To improve the management level of power plants and to adapt to the requirements of market-oriented reform for electric enterprises, a method for the design and realization of a Supervisory Information System (SIS) based on the INFI 90 DCS for manufacturing management in a power plant is put forward in this paper. By adding a CIU to the INFI net, real-time production-process data are retrieved into the historical data platform through interface PCs, fibres and switches. The system uses OpsCon as the interface driver, iHistorian as the historical data platform, iFIX as the configuration software, and infoAgent as a tool for decision support such as on-line historical data analysis, equipment status monitoring and malfunction diagnosis, and equipment reliability and life management. Practice shows that the SIS makes full use of the resources of the DCS and MIS, forms a synthetic automation system integrating DCS, SIS, and MIS, and realizes automatic control covering the whole process of electricity production.

Title:

TOWARDS A DEFINITION OF THE KEY-PROBLEMS IN INFORMATION SYSTEM EVOLUTION - FORMULATING PROBLEMS TO BETTER ADDRESS INFORMATION SYSTEM PROJECTS

Author(s):

Virginie Goepp, François Kiefer

Abstract: Over the years, many methods and approaches have been proposed in the information system design (ISD) field. In spite of this variety and number of propositions, over 80% of information system projects fail (Clancy 1995). On the one hand this situation seems very surprising, but on the other hand this diversity of work leaves the research area in a state of fragmentation. A basic problem is the lack of consensus on the notion of an information system (IS). However, according to Alter, its comprehension is essential to better understand project failures. We therefore revisit this notion and show that an IS has to fulfil contradictory roles linked to individual and collective aspects of information. This contradiction is the starting point for establishing a key-problem framework for IS in general. Indeed, the contradiction notion is an integral part of TRIZ (the Russian acronym for "Theory of Inventive Problem Solving"), which is effective for formalizing and addressing problems in technical system design. We analyse its potential contributions for developing modular and contingent IS project approaches, which are project success factors. Then we apply the TRIZ approach to our first contradiction in order to obtain the key-problem framework. This framework, based on three contradiction classes, is developed and presented. Each class of contradiction is linked with the semiotic features of information and makes it possible to formalize the intrinsic problems of information systems. The potential applications of such a framework are also discussed.

Title:

AN ENVIRONMENT FOR SOFTWARE DEVELOPMENT BASED ON A DISTRIBUTED COLLABORATIVE MODEL

Author(s):

Angélica de Antonio, Marco Villalobos

Abstract: In this paper, we present a model for collaborative software design and an environment, called Sinergia, that is being constructed based on this model. We describe the different concepts and components of the proposed model, as well as the functional features that the environment under development includes. The Sinergia tool uses a combination of technologies, such as distributed CORBA objects, Java servlets or relational databases, that make it useful in the context of a distributed multidisciplinary software development team.

Title:

DEONTIC CONSTRAINTS: FROM UML CLASS DIAGRAM TO RELATIONAL MODEL

Author(s):

Pedro Ramos

Abstract: Sometimes, because of one atypical situation, an important mandatory association between classes in a UML Class Diagram must be replaced by an optional one. This semantic and functional impoverishment happens because the mandatory constraint must have a boolean value. In this paper we propose the introduction of a deontic constraint in the UML Class Diagram and its automatic repercussion in the corresponding Relational Model. The deontic constraint allows the formal representation of requirements that ideally should always be fulfilled, but that can be violated in atypical situations. If the violable requirement is explicitly represented, it is possible to maintain both the requirement and its violation and, consequently, to resort to monitoring procedures for violation warnings. We present our proposal in the general context of automatically mapping object models into relational ones. We adopt a formal approach, based on predicate calculus, because, apart from its soundness properties, it is an easy and understandable way to integrate both models and the transposition rules.
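
The following Python sketch illustrates the core idea under illustrative names: the association is stored as optional (nullable), while the ideal rule is kept alongside it and monitored, so that violations raise warnings instead of being rejected outright.

    # A minimal sketch of a monitored deontic constraint; schema is invented.
    employees = [
        {"id": 1, "name": "Ana", "dept_id": 10},
        {"id": 2, "name": "Rui", "dept_id": None},   # atypical: not yet assigned
    ]

    def deontic_violations(rows, attr):
        """Rows violating the ideal rule 'attr ought to be filled in'."""
        return [r for r in rows if r[attr] is None]

    for r in deontic_violations(employees, "dept_id"):
        print(f"warning: employee {r['name']} violates 'must belong to a department'")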

Title:

STRUCTURAL CONFLICT AVOIDANCE IN COLLABORATIVE ONTOLOGY ENGINEERING

Author(s):

Ziv Hellman

Abstract: Given the increasing importance of ontologies in enterprise settings, mechanisms enabling users working simultaneously to edit and engineer ontologies in a collaborative environment are required. The challenges in preventing structural conflicts arising from simultaneous user editing of ontologies are not trivial, given the high level of dependency between concepts in ontologies. In this paper we identify and classify these dependencies. Sophisticated ontology locking mechanisms, based on a graph depiction of the dependencies, that are sufficient for preventing structural conflicts in collaborative settings are proposed. Applications of this research to the Semantic Web are also considered.
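
A minimal sketch of dependency-aware locking, assuming a toy dependency graph and a whole-closure locking granularity (the paper's mechanisms are more sophisticated): a user locking a concept also locks everything it transitively depends on.

    # A minimal sketch; graph, users and granularity are illustrative assumptions.
    depends_on = {                     # concept -> concepts it depends on
        "Invoice": ["Document", "Customer"],
        "Document": [],
        "Customer": ["Party"],
        "Party": [],
    }
    locks = {}                         # concept -> user holding the lock

    def lock_closure(concept):
        """A concept and everything it transitively depends on."""
        seen, stack = set(), [concept]
        while stack:
            c = stack.pop()
            if c not in seen:
                seen.add(c)
                stack.extend(depends_on.get(c, []))
        return seen

    def try_lock(concept, user):
        needed = lock_closure(concept)
        if any(locks.get(c, user) != user for c in needed):
            return False               # someone else holds a conflicting lock
        for c in needed:
            locks[c] = user
        return True

    print(try_lock("Invoice", "alice"))   # True: locks Invoice..Party
    print(try_lock("Customer", "bob"))    # False: Customer held by alice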

Title:

BEYOND END USERS COMPUTING

Author(s):

Michael Heng

Abstract: Three central problems in the development, introduction, maintenance, operation and upgrading of computer-based information systems are that (1) users and IS designers fail to understand each other well, (2) it takes too long to introduce IS into organizations, and (3) maintenance requires dedication associated with a sense of ownership, while upgrading requires continuous attention and in-depth knowledge of the business and the potential of IT. These three problems are often tackled separately. It is argued that, based on recent advances in IS development tools, environments and methods as well as increased IT literacy, a new way of handling all these problems in an integrated way is possible in certain types of organizational setting. The key idea is to form a team of computer enthusiasts from the users' group in a big organization to function as the core of a continuing team that takes over the responsibilities of developing, introducing, maintaining, and upgrading the IS. The approach is essentially a synthesis of the structured way of building systems and end-user computing. Some problems related to this approach are also surveyed. The approach is very much in the spirit of the idea of growing systems in emergent organizations as propounded by Truex, Baskerville and Klein (1999) in their CACM paper.

Title:

USING LOTOS IN WORKFLOW SPECIFICATION

Author(s):

Alessandro Longheu

Abstract: The complexity of business processes is getting higher and higher, due to the rapid evolution of markets and technologies and to the reduced time-to-market for new products. It is also essential to check workflow (WF) correctness, as well as to guarantee specific business rules. This can be achieved using specific tools within workflow management systems (WFMSs), but a formal (mathematically based) approach is a more effective way to guarantee workflow requirements. Formal description techniques (FDTs) based on process algebra allow one both to formally describe WFs at any level of abstraction and to formally verify properties such as correctness and business rules. In this paper, we apply FDTs to production workflow specification using the LOTOS language. In particular, we first show how the most recurrent WF patterns can be described in LOTOS, and then show an application example of how WFs can be defined from LOTOS patterns, allowing more effective verification of correctness and business rules.

Title:

ANALYSIS OF LEVEL OF INVOLVEMENT OF SIX BEST PRACTICES OF RUP IN OOSP

Author(s):

Muhammad Saeed, Faheem Ahmed

Abstract: No one in the software industry can deny the overwhelming importance of the software process model in increasing the quality of software products. It is generally agreed that better quality software can be produced through a well-defined and refined process model. The Rational Unified Process (RUP) has emerged as a leading software process model, incorporating the best of industry practice. The adoption of these best practices has almost fulfilled the requirements of an ideal software process model, which should be based upon the actual practices and theoretical concerns of software engineering in the development of software products. In this paper we analyze RUP against other object-oriented process models, such as the Object-Oriented Software Process (OOSP), with respect to the level of incorporation of six proven industry best practices: iterative development, requirements management, use of component architectures, visual modeling, quality verification and change control. The analysis gives a true picture of the involvement of the six best practices in OOSP, which ultimately enables us to perform a comparative analysis of the two process models with respect to performance and quality.

Title:

A TEMPORAL REASONING APPROACH OF COMMUNICATION BASED WORKFLOW MODELLING

Author(s):

Andres Aguayo, Antonio Carrillo, Sergio Galvez, Antonio Guevara, Jose L. Caro

Abstract: The implementation of formal techniques to aid the design and implementation of workflow management systems (WfMSs) is still needed. We believe that formal methods can be applied to prove properties of a workflow specification. This paper develops a formalization of the communication-based workflow paradigm (speech-act theory) using a temporal logic, namely the Temporal Logic of Actions (TLA). This formalization provides the basic theoretical foundation for the automated demonstration of the properties of a workflow map, its simulation, and its fine-tuning by managers.

Title:

REFACTORING USE CASE MODELS: A CASE STUDY

Author(s):

Gregory Butler

Abstract: Refactoring is a behavior-preserving program transformation. Our research shows that refactoring as a concept can be broadened to apply to use case models to improve their understandability, changeability, reusability and traceability. In this paper we describe a metamodel for use case modeling in detail. Based on this metamodel we define and categorize a list of use case refactorings. We then present a case study to illustrate the practical use of these refactorings. Several examples are described to show different views on refactoring use case models.

Title:

ON THE SYSTEMIC ENTERPRISE ARCHITECTURE METHODOLOGY (SEAM)

Author(s):

Alain Wegmann

Abstract: For companies to be more competitive, they need to align their business and IT resources. Enterprise Architecture is the discipline whose purpose is to align more effectively the strategies of enterprises with their processes and their resources (business and IT). Enterprise architecture is complex because it involves different types of practitioners with different goals and practices during the lifecycle of the required changes. Enterprise Architecture can be seen as an art and is largely based on experience, but it does not have strong theoretical foundations. As a consequence, it is difficult to teach, difficult to apply, and has no true computer-supported tools. This lack of tooling is unfortunate, as such tools would make the discipline much more practical. This paper presents how system sciences, by defining the concept of the systemic paradigm, can provide these theoretical foundations. It then gives a concrete example of the application of these foundations by presenting the SEAM paradigm. With the systemic paradigm, enterprise architects can improve their understanding of the existing methodologies, and thus find explanations for the practical problems they encounter. With the SEAM paradigm, architects can use a methodology that alleviates most of these practical problems and can be supported by a tool.

Title:

THE RELEVANCE OF A GLOBAL ACCOUNTING MODEL IN MULTI-SITE ERP IMPLEMENTATIONS

Author(s):

Ksenca Bokovec, Talib Damij

Abstract: ERP systems and their processes are cross-functional. They transform companies' practice from traditional functionally and locally oriented environments to global operations, integrating functions, processes and locations. If properly implemented, they can support company-specific processes within the framework of globally defined organisational structures and procedures. This paper seeks to contribute to the area of multi-site ERP implementations. A case study from several companies in a large retail corporation is presented, focusing on the global accounting model from the perspective of an ERP implementation project. The case study analyses the most important elements of a globally designed financial and management accounting model and their 'translation' into the structures and processes of the ERP system. Moreover, it demonstrates the importance of the application methodology in early project phases. Central standardisation and maintenance issues of the global accounting model are also outlined.

Title:

IMPLEMENTING USER CENTRED PARTNERSHIP DESIGN - CHANGE IN ATTITUDE MADE A DIFFERENCE

Author(s):

Paul Maj, Gurpreet Kohli, Anuradha Sutharshan

Abstract: IT project success depends upon a number of factors. Many in the information systems discipline believe that user participation is necessary for successful development. This paper is primarily concerned with end users and implements a method of incorporating end-user participation in all phases of an IT project. The proposed qualitative, case-based approach aims to achieve a high level of usability of the delivered system and to ensure that the skills and knowledge of the team are better used. The approach enables users to better understand and accept new systems, and ensures that the final deliverable is really what the users required. Significantly, this new method required a change in the attitude and perception not only of the end users but also of the IT development staff. The process involves studying user tasks more closely, having users define what they want, building early and regular prototypes of the user interface, and involving users from the start to the end of the project. The aim of this paper is to identify the user-centred factors involved in the different stages of a project and to understand how the steps involved can make a positive difference to an organisation. The approach was implemented and evaluated in a local government agency in Western Australia, with impressive results. The suggested user-oriented approach was then implemented in three other projects in the same organisation, where it also made a positive difference.

Title:

A THREE-DIMENSIONAL REQUIREMENTS ELICITATION AND MANAGEMENT DECISION-MAKING SCHEME FOR THE DEVELOPMENT OF NEW SOFTWARE COMPONENTS

Author(s):

Andreas Andreou, Andreas Zografos, George Papadopulos

Abstract: Requirements analysis and general management issues within the development process of new software components are addressed in this paper, focusing on factors that result from requirements elicitation and significantly affect management decisions and development activities. A new methodology performing a certain form of requirements identification and collection prior to developing new software components is proposed and demonstrated; its essence lies in a three-entity model that describes the relationship between different types of component stakeholders: developers, reusers and end-users. The model is supported by a set of critical factors analysed in the context of three main directions that orient the production of a new component, namely the generality of the services offered, the management approach and the characteristics of the targeted market. The investigation of the three directions produces critical success factors that are closely connected and interdependent. Further analysis of the significance of each factor, according to the priorities set by component developers, can provide a detailed picture of potential management implications during the development process and, more importantly, can support management decisions related to whether and how development should proceed.

Title:

DEFINING STABILITY FOR COMPONENT INTEGRATION ASSESSMENT

Author(s):

Alejandra Cechich, Mario Piattini

Abstract: The use of commercial off-the-shelf (COTS) products as elements of larger systems is becoming increasingly commonplace. Component-Based Software Development focuses on assembling previously existing components (COTS and other non-developmental items) into larger systems, and on migrating existing systems toward component approaches. Ideally, most of the application developer's time is spent integrating components. We present an approach that can be used to establish component integration quality as an important field for resolving CBS quality problems – problems ranging from CBS quality definition, measurement, analysis, and improvement to tools, methods and processes. In this paper, we describe and illustrate the use of the first phase of our approach to determine relevant perturbations when incorporating a COTS component into a given software system.

Title:

AUGMENTATION OF VIRTUAL OBJECT TO REAL ENVIRONMENT

Author(s):

Felix Kulakov

Abstract: The problem of immersing an arbitrary computer-synthesized virtual body into a real environment at a considerable distance from the observer is considered. The problem under discussion belongs to so-called Augmented Reality, a rapidly developing trend within Virtual Reality. A virtual body in this case is an augmentation of the reality. The problem has "visual" and "tactile-force" aspects. Advanced approaches to the realization of both aspects of immersion are proposed.

Title:

SOFTWARE ENGINEERING ENVIRONMENT FOR BUSINESS INFORMATION SYSTEMS

Author(s):

Alar Raabe

Abstract: There is a growing need to make the business information systems development cycle shorter and independent of underlying technologies. Model-driven synthesis of software offers solutions to these problems. In this article we describe a set of tools and methods applicable to synthesizing business software from technology-independent models. The method and tools are distinguished by the use of extended meta-models, which embody knowledge of the problem domain and the target software architecture of the synthesized software system; by the use of model conversion rules described using the combined meta-model; and by the use of reference models of problem domains and sub-domains, which are combined and extended during the construction of descriptions of the software system. What distinguishes our method from other domain-specific methods is the separate step of solution domain analysis and the use of meta-model extensions. The work presented in this article has been done in the context of developing a product-line architecture for insurance applications.

Title:

ANALYSING SECURITY REQUIREMENTS OF INFORMATION SYSTEMS USING TROPOS

Author(s):

Abdullah Gani, Gordon Manson, Paolo Giorgini, Haralambos Mouratidis

Abstract: Security is an important issue when developing complex information systems; however, very little work has been done on integrating security concerns during the analysis of information systems. Current methodologies fail to adequately integrate security and systems engineering, basically because they lack concepts and models as well as a systematic approach towards security. We believe that security should be considered during the whole development process and should be defined together with the requirements specification. This paper introduces extensions to the Tropos methodology to accommodate security. A description of new concepts is given along with an explanation of how these concepts are integrated into the current stages of Tropos. The above is illustrated using an agent-based health and social care information system as a case study.

Title:

CUSTOMIZING WEB-BASED SYSTEMS WITH OBJECT-ORIENTED VIEWS

Author(s):

Markus Schett, Renate Motschnig-Pitrik

Abstract: Although views have proved their place in relational data models, their role in customizing object-oriented (OO) systems has been severely underestimated. This phenomenon occurs despite the fact that views in the OO paradigm can be designed such that their functionality by far exceeds that of their relational cousins. Based on research in OO databases and on the Viewpoint Abstraction, the purpose of this paper is to integrate views into UML, to sketch the implementation of the tool RoseView, and to illustrate applications of views in web-based systems. We argue that designing system increments or adaptations as view contexts allows for full-fledged customized system versions without ever affecting the code of the original application, meaning significant savings in maintenance. Further, we introduce RoseView, a tool implemented to make views available in UML and thus to extend OO languages by an essential abstraction dimension.

Title:

AN XML BASED ADMINISTRATION METHOD ON ROLE-BASED ACCESS CONTROL IN THE ENTERPRISE ENVIRONMENT

Author(s):

Chang N. Zhang, Chang Zhang

Abstract: In distributed computing environments, users would like to share resources and communicate with each other to perform their jobs more efficiently. It is important to protect resources and information integrity from unexpected use by unauthorized users, and there has therefore been strong demand for access control of distributed shared resources over the past few years. Role-Based Access Control (RBAC) has been introduced and offers a powerful means of specifying access control decisions. In this paper, we propose an object-oriented RBAC model for distributed systems (ORBAC), which represents the real world efficiently. Though ORBAC is a good model, its administration, including building and maintaining access control information, remains a difficult problem. This paper describes a practical method that can be employed in a distributed system for managing security policies using the Extensible Markup Language (XML). Based on the XML ORBAC security policy, an intelligent role assignment backtracking algorithm is also presented; its computational complexity is O(N), where N is the number of roles in the user's authorized role set.
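
The sketch below illustrates the general flavour of an XML-encoded role-permission policy and a permission check. The XML vocabulary and the user-to-role mapping are invented for illustration; they are not the paper's schema or its backtracking algorithm.

    # A minimal sketch of XML-driven RBAC checking, under an assumed vocabulary.
    import xml.etree.ElementTree as ET

    policy = ET.fromstring("""
    <rbac>
      <role name="clerk">  <permit object="order" operation="read"/>  </role>
      <role name="manager"><permit object="order" operation="write"/> </role>
    </rbac>""")

    user_roles = {"ann": ["clerk"], "bob": ["clerk", "manager"]}

    def permitted(user, obj, op):
        """True if any of the user's roles grants (obj, op) in the XML policy."""
        for role in policy.findall("role"):
            if role.get("name") in user_roles.get(user, []):
                for p in role.findall("permit"):
                    if p.get("object") == obj and p.get("operation") == op:
                        return True
        return False

    print(permitted("ann", "order", "write"))   # False
    print(permitted("bob", "order", "write"))   # True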

Title:

DERIVING USE CASES FROM BUSINESS PROCESSES, THE ADVANTAGES OF DEMO

Author(s):

Boris Shishkov, Jan L.G. Dietz

Abstract: The mismatch between the business requirements and the actual functionality of the delivered software application is considered a crucial problem in modern software development. Solving this problem means finding out how to consistently base the software specification model on a previously developed business process model. Considering UML-based software design in particular, we need to answer a fundamental question in this regard, namely: how can all relevant use cases be found, based on sound business process modeling? Adopting business process modeling as a basis for the identification of use cases has been studied from three perspectives – it has been studied how use cases can be derived from DEMO, Semiotic and Petri Net business process models. The goal of the current paper is, considering these results, to study and analyze the strengths of DEMO concerning the derivation of use cases. This could be helpful not only for the investigation of DEMO but also for further activities directed towards finding the most appropriate way(s) of identifying use cases from business processes.

Title:

REQUIREMENTS ENGINEERING VERSUS LANGUAGE/ACTION PERSPECTIVE: DIFFERENT FACETS AND POSSIBLE CONTRIBUTION

Author(s):

Joseph Barjis, Tychon Galatonov

Abstract: In today's Requirements Engineering (RE) it is often assumed that the users of the system-to-be understand their business needs and the overall goals of the system well enough, and therefore that the data provided by them is of utmost importance for engineering system requirements; this strategy is sometimes called the "waiter strategy". While it is often justified, there are approaches that question the validity of this "waiter strategy". One of them is the Language/Action Perspective (hereinafter LAP), an approach to communication analysis in organisational/business systems, i.e. social systems in which information is interchanged between their components, human beings and machines (collectively called actors), with the ultimate goal of fulfilling the mission of the organisation. One of the features of LAP is that, in contrast to the "waiter strategy" approaches, it assumes that it is the deeds the actors perform that are of crucial importance for understanding the nature of the processes in the system. This paper presents an overview of some results as well as a possible new approach to RE using LAP; the following methods are taken into consideration: the DEMO (Dynamic Essential Modelling of Organisations) methodology, the Semiotics approach and Petri nets.

Title:

PRESCRIBED IMPACTS AND IMPACTS STIPULATED FOR ICS

Author(s):

Virginie Govaere

Abstract: The aim of this work is to present the consequences, for users and their organizations, of introducing information and communication systems (ICS) into a company, and thereby to inform and warn designers of new technologies about the impacts of their products. With ICS, the exchange of information in a relationship is never neutral, the ability of information to circulate is never natural, and the fact of being able to exchange it, whatever the means and media used or their quality, has no predictive value concerning the real exchanges. Thus, an analysis of the applications in technical terms (performance and available functionalities) is insufficient; taking the context into account, in the broad sense, is essential to determine their real performance. This analysis aims at bringing out the difference between the performance promised by the designers of ICS and that observed in real situations.

Title:

AN INNOVATIVE APPROACH TO WIRELESS APPLICATIONS DEVELOPMENT: AN EXPLORATION OF PRACTISE

Author(s):

Phillip Olla

Abstract: Due to the development of enabling technologies such as mobile data networks and various types of affordable mobile devices, mobile computing has become widely accepted and applied for both consumer and business initiatives, and it is fuelling a new trend in information systems development. There is evidence that the profile of systems development on innovative projects is very different from that faced in the past when systems development methodologies were first promoted (Sawyer, 2001). There is therefore a need to move on from the documented problems of the past (the 'software crisis'; Hoch, Roeding, Purket, & Lindner, 2000) by deriving new methodological approaches more appropriate to the needs of the current development environment (Fitzgerald, 2000). This paper uses Action Research to study an organisation called the Mobile Application Development and Integration Centre, which created an innovative approach to developing and deploying wireless applications produced by independent third parties.

Title:

SPECIFYING A KNOWLEDGE MANAGEMENT SOLUTION FOR THE CONSTRUCTION INDUSTRY: THE E-COGNOS PROJECT

Author(s):

Yacine Rezgui, Matthew Wetherill, Abdul Samad Kazi

Abstract: The paper focuses upon the contribution that adequate use of the latest developments in Information and Communication Technologies can make to the enhancement, development and improvement of professional expertise in the Construction domain. The paper is based on the e-COGNOS project, which aims at specifying and developing an open model-based infrastructure and a set of tools that promote consistent knowledge management within collaborative construction environments. The specified solution emerged from a comprehensive analysis of the business and information/knowledge management practices of the project end-users, and makes use of a Construction-specific ontology that serves as a basis for specifying adaptive mechanisms that can organise documents according to their contents and interdependencies, while maintaining their overall consistency. The e-COGNOS web-based infrastructure will include services that allow the creation, capture, indexing, retrieval and dissemination of knowledge. It also promotes the integration of third-party services, including proprietary tools. The e-COGNOS approach will be tested and evaluated through a series of field trials, followed by the delivery of business recommendations regarding the deployment of e-COGNOS in the construction sector. The research is ongoing and is supported by the European Commission under the IST programme – Key Action II.

Title:

RELATIONSHIP SUPPORT IN OBJECT MODELS

Author(s):

Mohamed Dahchour, Alain Pirotte

Abstract: Relationships play a central role in information modeling. However, most object models (programming languages and database systems) do not provide a construct to deal with them as autonomous units. They merely treat them as pointer-valued attributes and therefore confine them to second-class status. The paper defines the generic semantics of relationships, addresses a set of requirements to be satisfied to properly manage all kinds of relationships, surveys existing techniques for representing relationships in object models, and compares them with each other according to whether they satisfy the relationship requirements.
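
The following Python sketch contrasts the pointer-attribute encoding with a relationship reified as a first-class object that carries its own attributes and can be queried from either side. Class and attribute names are illustrative, not drawn from the paper.

    # A minimal sketch of a reified relationship, under invented names.
    class Employee: pass
    class Project: pass

    class WorksOn:
        """The relationship itself is an object: it holds its own attribute
        (hours) and can answer queries from either participating side."""
        instances = []

        def __init__(self, employee, project, hours):
            self.employee, self.project, self.hours = employee, project, hours
            WorksOn.instances.append(self)

        @classmethod
        def projects_of(cls, employee):
            return [r.project for r in cls.instances if r.employee is employee]

    e, p = Employee(), Project()
    WorksOn(e, p, hours=12)            # vs. a mere pointer e.project = p
    print(len(WorksOn.projects_of(e)))  # 1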

Title:

OPCAT – A BIMODAL CASE TOOL FOR OBJECT-PROCESS BASED SYSTEM DEVELOPMENT

Author(s):

Arnon Sturm, Iris Reinhartz-Berger, Dov Dori

Abstract: CASE tools have spread at a lower pace than expected. The main reasons for this are their limited support for a particular method, high cost, lack of measurable returns, and unrealistic user expectations. Although many CASE tools implement familiar methods, their model checking and simulation capabilities are limited, if not nonexistent, and the syntax and semantics of their graphic notations may not be clear to novice users. The Object-Process CASE Tool (OPCAT), which supports system development using Object-Process Methodology, meets the challenges of next-generation CASE tools by providing a complete integrated software and system development environment. Based on two human cognition principles, OPCAT enables balanced modeling of the structural and behavioral aspects of systems in a single model through a bimodal visual-lingual representation. Due to this intuitive dual notation, the resulting model is comprehensible to both domain experts and system architects engaged in the development process. Due to its formality, OPCAT also provides a solid basis for implementation generation and an advanced simulation tool, which animates system objects, processes, and states in a balanced way, enabling a complete simulation of system structure and behavior. This paper presents OPCAT and demonstrates its unique features through a small case study of a travel management information system.

Title:

RAPID DEVELOPMENT OF PROCESS MODELING TOOLS

Author(s):

Michele Risi, Andrea De Lucia, Gennaro Costagliola, Genoveffa Tortora, Rita Francese

Abstract: We present an approach for the rapid development and evolution of visual environments for modelling distributed software engineering processes. The definition of the process modeling language takes into account the requirements of the customer, who participates directly in the development process. The development process is supported by VLDesk, an integrated set of grammar-based tools for the definition and automatic generation of visual environments. The produced visual environment enables an organization to quickly design distributed process models and generate the corresponding XML code, which specifies the activities with their elements, including actors and artifacts produced, and the transitions expressed in the form of event-condition-action rules. In this way the designed process model can easily be instantiated for a specific project and enacted by any workflow engine supporting a programmable event-condition-action paradigm.

Title:

BUILDING CONCEPTUAL SCHEMAS BY REFINING GENERAL ONTOLOGIES: A CASE STUDY

Author(s):

Xavier de Palol, Jordi Conesa, Antoni Olivé

Abstract: The approach of deriving conceptual schemas from general ontologies has not been analyzed in detail in the field of information systems engineering. We believe that the potential benefits of this approach make its analysis worthwhile. This paper aims at contributing to this analysis by means of a case study. The scope of the case study is rather limited, but even so we believe that some generally valid conclusions can be drawn. The main result is that deriving conceptual schemas by refining a general ontology may require less effort than building them from scratch, and may produce better schemas. Moreover, an organization may achieve a high level of integration and reuse, at the conceptual level, if it builds all its conceptual schemas as refinements of a general ontology. Our conclusions are similar to those drawn from the development of object-oriented designs and applications using frameworks.

Title:

MANAGING THE COMPLEXITY OF EMERGENT PROCESSES

Author(s):

Igor Hawryszkiewycz

Abstract: Business processes in knowledge-intensive environments often emerge rather than following predefined steps. Such emergence can result in disconnected activities and complex interaction structures, which require ways to maintain awareness across the activities and to coordinate them towards a common goal. The paper suggests that new ways are needed both to model emergent processes and to support and manage them using information technologies. The paper describes a metamodel, which includes the commands needed to create initial processes and to realize emergence. It then describes a prototype system that implements these semantics and realizes the creation of initial structures and their emergence and coordination.

Title:

OPEN SOURCE SECURITY ANALYSIS - EVALUATING SECURITY OF OPEN SOURCE VS. CLOSED SOURCE OPERATING SYSTEMS

Author(s):

Paulo Rodrigues Trezentos, Carlos Serrão, Daniel Neves

Abstract: Open source software is becoming a major trend in the software industry. Operating systems (OSs), Internet servers and several other software applications are available under these licensing conditions. This article assesses the security of open source technology, namely the Linux OS. Since a growing number of critical enterprise information systems are starting to use the Linux OS, this evaluation could be helpful to them. To illustrate the fact that application security depends, above all, on the security of the underlying OS, we present the case of a DRM (Digital Rights Management) solution – MOSES OpenSDRM – implemented on top of the Linux OS, in the scope of the EU MOSES IST RTD programme. Some of the conclusions drawn here are not compatible with certain Microsoft-funded studies claiming that open source OSs are less secure. This main idea was first presented by the authors at the Interactive Broadcasting Workshop – IST concertation meeting hosted by the European Commission in September 2002 (Brussels).

Title:

TRUSTED AUTHENTICATION BETWEEN USER AND MACHINE

Author(s):

EunBae Kong, Soyoung Doo, JongNyeo Kim

Abstract: Authentication is an important issue in computer systems connected to the Internet. This paper describes a method of providing a trusted path between a user and a system using an access control processing technique. The method includes the step of determining whether access to resources of the system will be permitted or refused on the basis of access control rules and stored attributes set by a security administrator in a secure database. Thereafter, the user is notified of permission or refusal of access in accordance with the result of the determination.

Title:

A FRAMEWORK FOR BUSINESS SIMULATOR: A FIRST EXPERIENCE

Author(s):

Ronan Champagnat

Abstract: This paper deals with multi-agent based modeling of a company in order to perform a simulation. The specificity of the simulation is that it concerns not only economic aspects but also the production process, which implies that the model of the company must represent the production processes. The paper focuses on the modeling of a company and the analysis of the model. In order to automatically derive a simulation model from a model of the company, a UML meta-model has been developed and is presented. Then a validation of the components of the simulator is presented, which allows validation of the nominal behaviour of the agents. The paper is structured as follows: first, starting from a description of the company, a multi-agent model is derived; then a meta-model for plant modeling is presented and a validation of the simulator is detailed; finally, the requirements and objectives for a business simulator are discussed.

Title:

PATTERN BASED ANALYSIS OF EAI LANGUAGES - THE CASE OF THE BUSINESS MODELING LANGUAGE

Author(s):

Petia Wohed, Arthur ter Hofstede, Marlon Dumas, Erik Perjons

Abstract: Enterprise Application Integration (EAI) is a challenging area that is attracting growing attention from the software industry and the research community. A landscape of languages and techniques for EAI has emerged and is continuously being enriched with new proposals from different software vendors and coalitions. However, little or no effort has been dedicated to systematically evaluating and comparing these languages and techniques. The work reported in this paper is a first step in this direction. It presents an in-depth analysis of a language specifically developed for EAI, namely the Business Modeling Language. The framework used for this analysis is based on a number of workflow and communication patterns, and provides a basis for evaluating the advantages and drawbacks of EAI languages with respect to recurrent problems and situations.

Title:

DEFENDING ESSENTIAL PROCESSES

Author(s):

Albert Alderson

Abstract: The essential purpose of a program makes up only a small part of the overall task; all of the complications in the program come from addressing what can go wrong. Where the essential business processes remain stable, close examination shows complex defensive mechanisms which change as new threats to the business develop. Norms derive from modelling social behaviour but are not absolute expressions of what will happen; people may act counter to the behaviour described in a norm. Many norms in business are concerned with defending against erroneous or illegal behaviour of staff and third parties. This paper uses a case study to illustrate the development of defensive norms and how these norms may be used in designing processes. Essential business processes cannot be improved by adding defensive norms, but processes are usually more effective where security norms are implemented to prevent norms from being broken.

Title:

TESTING COTS WITH CLASSIFICATION-TREE METHOD

Author(s):

Hareton Leung, Prema Paramasivam

Abstract: This paper presents a new test method for COTS based on the classification-tree method. Information from the system specification and the COTS specification is used to guide the selection of test input. We can generate test cases that verify that (1) input outside the system specification but within the scope of the COTS does not cause problems for the system, (2) input required by the system specification and within the scope of the COTS specification produces correct results, and (3) input outside the scope of the COTS specification is in fact not required by the system specification. This paper presents our test selection method with the help of a case example.
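
A minimal sketch of classification-tree-style test selection follows, with invented classifications and membership predicates standing in for the system and COTS specifications; it only illustrates how leaf-class combinations become test cases tagged against the two specification boundaries.

    # A minimal sketch; classifications and boundaries are hypothetical.
    from itertools import product

    classifications = {                # classification -> its classes (leaves)
        "input_size": ["empty", "typical", "huge"],
        "encoding":   ["ascii", "utf8"],
    }

    def in_system_spec(case):
        return case["input_size"] != "huge"            # assumed system boundary

    def in_cots_spec(case):
        return case["encoding"] == "ascii" or case["input_size"] != "huge"

    # One test case per combination of classes, tagged by specification scope:
    for combo in product(*classifications.values()):
        case = dict(zip(classifications, combo))
        print(case,
              "system" if in_system_spec(case) else "outside-system",
              "cots" if in_cots_spec(case) else "outside-cots")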

Title:

ORGANIZATIONAL MULTI-AGENT ARCHITECTURES FOR INFORMATION SYSTEMS

Author(s):

Stephane Faulkner, Thanh Tung Do, Manuel Kolp

Abstract: A Multi-Agent System (MAS) architecture is an organization of coordinated autonomous agents that interact in order to achieve particular, possibly common goals. Considering real-world organizations as an analogy, this paper proposes MAS architectural patterns for information systems which adopt concepts from organizational theories. The patterns are modeled using the i* framework which offers the notions of actor, goal and actor dependency, specified in Formal Tropos and evaluated with respect to a set of software quality attributes, such as predictability or adaptability. We conduct a comparison of organizational and conventional architectures using an e-business information system case study.

Title:

A HIGH RELIABILITY DESIGN FOR NFS SERVER SOFTWARE BY USING AN EXTENDED PETRI NET

Author(s):

Yasunari Shidama, Katsumi Wasaki, Shin'nosuke Yamaguchi

Abstract: In this paper, we present a model design for Network File System processes based on the Logical Coloured Petri Net (LCPN). The LCPN is an extended Petri net which solves problems of system description found in previously proposed place/transition nets and coloured Petri nets. This extension of Petri nets is suitable for designing complex control systems and for discussing methods of evaluating such systems. In order to study the behavior of the server system modeled with this net, we provide simulations in a Java program. From this work, we confirmed that this extended Petri net is an effective tool for modelling file server processes.
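
For readers unfamiliar with the base formalism, the sketch below simulates a plain place/transition net (the model family the LCPN extends) on a toy request-serving net. It is not the paper's NFS model or its Java simulator.

    # A minimal place/transition-net simulator; the net itself is a toy example.
    marking = {"request_queued": 1, "server_idle": 1, "serving": 0, "done": 0}
    transitions = {                    # name -> (pre-places, post-places)
        "start":  ({"request_queued": 1, "server_idle": 1}, {"serving": 1}),
        "finish": ({"serving": 1}, {"done": 1, "server_idle": 1}),
    }

    def enabled(name):
        pre, _ = transitions[name]
        return all(marking[p] >= n for p, n in pre.items())

    def fire(name):
        pre, post = transitions[name]
        for p, n in pre.items():
            marking[p] -= n            # consume input tokens
        for p, n in post.items():
            marking[p] += n            # produce output tokens

    for t in ("start", "finish"):
        if enabled(t):
            fire(t)
    print(marking)   # {'request_queued': 0, 'server_idle': 1, 'serving': 0, 'done': 1}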

Title:

HOW DIGITAL IS COMMUNICATION IN YOUR ORGANIZATION? A METRICS AND AN ANALYSIS METHOD

Author(s):

Tero Päivärinta, Pasi Tyrväinen

Abstract: Innovations in the area of digital media are changing the ways we communicate and organize. However, few practical measures exist for analysing the digitalisation of organizational communication as an intermediate factor in initiatives to adopt new information and communication technologies (ICT). Building upon the genre theory of organizational communication, a categorization of communication forms, and quantitative measures, we suggest such metrics and a measurement method. A case study applying them in an industrial organization suggests that the method and metrics are applicable for quantifying how new information systems affect organizational communication, as well as for anticipating their digitalisation impact prior to implementation. The metrics provide a basis for further work on analysing the correlation between organizational performance and the adoption of information and communication technology.

Title:

A TWO TIER, GOAL-DRIVEN WORKFLOW MODEL FOR THE HEALTHCARE DOMAIN

Author(s):

Eric Browne

Abstract: Workflow models define a set of tasks to be undertaken to achieve a set of goals. Very often, the set of goals is not articulated explicitly, let alone modelled in such a way as to link the workflow schema(s) to the goal schema(s). In this paper, we introduce a two tier model, which clearly delineates the higher level goals (business model) from the lower level tasks (process model), whilst elucidating the relationships between the two tiers. We utilise a goal-ontology to represent the upper level (business model) and decompose this to an extended Petri-Net model for the lower level workflow schema. The modelling of business processes, and the management of subsequent changes, both become an integral part of the workflow itself. Healthcare is a domain where it is quite common for goals not to be realized, or not to be realized fully, and where alterations to the goals have to be undertaken on a case by case (instance-level) basis. Consequently any workflow schema will need to include tasks that both assess the degree to which a goal has been achieved, and also allow for new goals to be established, or for the workflow to be altered. We term such workflow schemas self-managing.

Title:

TWO APPROACHES IN SYSTEM MODELING AND THEIR ILLUSTRATIONS WITH MDA AND RM-ODP

Author(s):

Alain  Wegmann, Andrey Naumenko

Abstract: We explain two approaches to the design of system modeling frameworks and perform their comparative analysis. The analysis familiarizes the reader with strengths and weaknesses of the approaches, and thus helps to grasp the preferences for their practical applications. The first of the approaches is illustrated with the example of Model Driven Architecture (MDA), and the second – with the example of Reference Model of Open Distributed Processing (RM-ODP).

Title:

A DISTRIBUTED WORKFLOW SYSTEM ON OGSA: WORKFLOW GRID SERVICES

Author(s):

Kai Wei, Zhaohui Wu

Abstract: The Open Grid Services Architecture (OGSA) tries to address the challenges of integrating services across distributed, heterogeneous, dynamic "virtual organizations" formed from the disparate resources within a single enterprise and/or from external resource sharing and service provider relationships. In this paper, we attempt to bring forward a new model of workflow system built on Grid Services, which have more valuable characteristics than Web Services. In discussing the model, we present some new concepts in distributed workflow systems, all of which deal with Grid services within the OGSA framework.

Title:

AN APPROACH TO DFD MODELLING

Author(s):

Katja Damij

Abstract: The objective of this work is to introduce an approach to developing Data Flow Diagrams. The approach discussed enables the analyst to create a DFD in an easy manner based on identifying the system’s activities. The approach has three steps. The first step develops the activity table. The second step transforms the activity table into an elementary-level DFD. The third step deals with creating DFDs at work and business process levels. The approach discussed represents a well-defined algorithm, which leads the analyst through a few prescribed and very simple steps to achieve the goal of DFD development. This approach is independent of the systems analyst and his/her experience.
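
A minimal sketch of the second step, deriving elementary-level data flows from an activity table, follows; the table columns (activity, data in, data out) and the example rows are assumed for illustration and are not the paper's notation.

    # A minimal sketch: each activity-table row yields in-flows and out-flows.
    activity_table = [
        # (activity,        inputs,           outputs)
        ("Register order", ["order form"],   ["order record"]),
        ("Check stock",    ["order record"], ["pick list"]),
        ("Ship goods",     ["pick list"],    ["delivery note"]),
    ]

    flows = []
    for activity, inputs, outputs in activity_table:
        flows += [(src, activity) for src in inputs]     # data flowing in
        flows += [(activity, dst) for dst in outputs]    # data flowing out

    for src, dst in flows:
        print(f"{src} --> {dst}")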

Title:

TOWARDS A VIEW BASED UNIFIED MODELING LANGUAGE

Author(s):

Abdelaziz Kriouile, Sophie Ebersold, Xavier Cregut, Mahmoud Nassar, Bernard Coulette

Abstract: To model complex software systems, we propose a user-centred approach based on an extension of UML (Unified Modeling Language) called VUML. VUML provides the concept of a multiviews class, whose goal is to store and deliver information according to the user's profile (viewpoint), whether the user is final or not. A multiviews class consists of a base class (default view) and a set of views corresponding to different viewpoints on the class. Each view is defined as an extension of the default view. VUML allows the dynamic change of viewpoint and offers mechanisms to manage consistency among dependent views. In this paper, we focus on the static architecture of VUML modelling: the multiviews class and its use in a VUML class diagram, and static relations such as specialisation and dependency in VUML. We give an extract of VUML's semantics through an extension of the UML metamodel. Finally, we give an overview of dynamic aspects and the principle of an implementation pattern to generate object-oriented code from a VUML model.

Title:

IDENTIFYING PATTERNS OF WORKFLOW DESIGN RELYING ON ORGANIZATIONAL STRUCTURE ASPECTS

Author(s):

Lucinéia Thom, Cirano Iochpe

Abstract: Modern organizations demand the automation of their business processes, since these are highly complex and need to be executed efficiently. For this reason the number of information technology systems able to provide better documentation, standardization and co-ordination of business processes is increasing. In this context, workflow technology has been quite effective, mainly for the automation of business processes. However, as an emergent technology in constant evolution, workflow presents some limitations. One of the main limitations is the lack of techniques that guarantee correctness and efficiency of the workflow project in the requirements analysis and modeling phases. Taking these problems into account, and having carried out several studies, we formulated the hypothesis that it is possible to infer the structure of specific workflow (sub)processes from knowledge of structural aspects of the organization, and vice versa. We verified this hypothesis through the identification of a set of dependency rules between aspects of the organizational structure and workflow (sub)processes. This paper presents the set of rules and describes the technique used for their identification.

Title:

A NEW LOOK AT THE ENTERPRISE INFORMATION SYSTEM LIFE CYCLE - INTRODUCING THE CONCEPT OF GENERATIONAL CHANGE

Author(s):

Jon Davis, Stephan Chalup, Elizabeth Chang

Abstract: This paper discusses the Enterprise Information System (EIS) life cycle and the phases of the EIS development life cycle. It details the stages in the EIS life cycle and the characteristics of the phases in the system development life cycle, and explains where these differ from traditional concepts of software engineering. In particular, it defines the concept of generational change and when it applies to a system. It also describes the nature of the rapid evolution of an EIS, how it results in version or generational change of the system, and how the EIS development life cycle involves a multitude of engineering processes, not just one. This new perspective could lead to new EIS development methodologies in business modelling, analysis, design, project management and project estimation.

Title:

IPM: AN INCREMENTAL PROCESS MODEL FOR DISTRIBUTED COMPONENT-BASED SOFTWARE DEVELOPMENT

Author(s):

Antonio Francisco do Prado, Eduardo Santana de Almeida, Calebe de Paula Bianchini

Abstract: In spite of recent and ongoing research in the Component-Based Development (CBD) area, there is still a lack of patterns, process models and methodologies that effectively support both development “for reuse” and development “with reuse”. Considering the accelerated growth of the Internet over the last decade, where distribution has become an essential non-functional requirement of most applications, the problem becomes bigger. This paper proposes a novel Incremental Process Model (IPM) that integrates the concepts of Component-Based Software Engineering (CBSE), frameworks, patterns, and distribution. The process model is divided into two stages: development “for reuse” and development “with reuse”. A CASE tool is the main mechanism for applying the process model, supporting, among other things, the code generation of components and applications. A distributed components environment is proposed for realizing the results of the process model. A case study shows how the process model works on a given problem domain.

Title:

A DYNAMIC ROLE BASED ACCESS CONTROL MODEL FOR ADAPTIVE WORKFLOW MANAGEMENT SYSTEMS

Author(s):

Dulce Domingos, Pedro Veiga

Abstract: Workflow management systems (WfMSs) support the definition, management and execution of business processes inside organisations. However, business processes are dynamic by nature, meaning that they must cope with frequent changes. As a consequence, we have witnessed the development of new types of WfMSs supporting adaptive workflows. These systems have specific access control requirements that are not answered by traditional WfMS access control models. In this paper we present a new dynamic role-based access control model for adaptive WfMSs that adapts and extends the well-accepted role-based access control principles in two directions. The first direction is a modified interpretation of permissions: the objects of adaptive WfMSs (e.g. process definitions, process instances, activities and activity instances) and the operations performed on them, such as execute, change and read, have to cope with dynamic permission updates, and the function that maps a set of permissions onto a role is extended to support existing relations between workflow components. The second direction makes execution more adaptive to dynamic changes in organizations by separating the role model from the resource model and supporting the definition of dynamic roles as functions that may access external resource information systems. Our model also adapts the RBAC administrative model to support dynamic authorizations, that is, authorizations that can be defined and modified at run-time.
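
As a rough sketch of the second direction (a minimal Python illustration in which a plain dictionary stands in for the external organizational information system; all names are hypothetical, not the paper's model), a dynamic role can be a predicate evaluated at access time rather than a static member list:

    # Minimal sketch of a dynamic role: rather than a static member list,
    # the role is a function evaluated at run time against an external
    # resource/organizational information system (here a plain dict).
    org_directory = {
        "alice": {"department": "claims", "on_duty": True},
        "bob":   {"department": "claims", "on_duty": False},
    }

    # Dynamic role: current on-duty members of the claims department.
    def claims_handler(user):
        rec = org_directory.get(user, {})
        return rec.get("department") == "claims" and rec.get("on_duty", False)

    # Permissions on adaptive-workflow objects mapped to dynamic roles.
    permissions = {("process-definition", "change"): claims_handler}

    def check(user, obj, op):
        role = permissions.get((obj, op))
        return role is not None and role(user)

    print(check("alice", "process-definition", "change"))  # True
    print(check("bob", "process-definition", "change"))    # False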

Title:

A THREE PERSPECTIVE APPROACH TO GROUPWARE IMPLEMENTATION QUALITY MANAGEMENT: WITHIN AN AUSTRALIAN UNIVERSITY

Author(s):

Dion Gittoes

Abstract: Implementing groupware in organisations to support communication and collaboration is highly problematic and often approached from a single perspective. Groupware implementation management is influenced by individual, socio-organisational and technical issues, and high-quality implementation management leads to system success. The quality literature is investigated to align the three-perspective approach to groupware design (Palen 1999) with a three-perspective approach to information systems (IS) quality (Salmela 1997), in which IS quality is influenced by business integration quality, IS user quality and IS work quality. A study of a groupware implementation highlights the need for a synthesis of all three perspectives to manage implementation quality and to understand the adoption challenges groupware systems face. Investigating IS quality from all three perspectives leads to a holistic understanding of groupware implementation quality management. Groupware quality is investigated from the user perspective, employing ethnographic techniques in an interpretative case study of a Lotus Notes (email and electronic calendar) implementation within an Australian university.

Title:

COMPUTING MESSAGE DEPENDENCIES IN SYSTEM DESIGNS AND PROGRAMS

Author(s):

Leszek Maciaszek, Bruc Lee Liong

Abstract: This paper explains the metrics for the computation of class dependencies and introduces new metrics for the computation of message dependencies in system designs and programs. The metrics are used in the design-coding cycle of software production and define the quality of an architectural solution. The system architecture assumed in this paper is an extension of the MVC (Model-View-Controller) framework nicknamed BCEMD (Boundary-Control-Entity-Mediator-DBInterface). The paper demonstrates how the BCEMD framework minimizes object dependencies in synchronous message passing. We compute message dependencies from parsed bytecode; the metrics are then reflected in UML models representing the system design. The paper starts by briefly explaining the BCEMD architecture and its advantages. We summarize our earlier paper to show how the BCEMD approach minimizes the cumulative class dependency. We then introduce the new metrics, resulting in a cumulative message dependency for the system. The metrics measure the complexity of a program's run-time behavior. Each metric is defined, given an algorithm for its computation, and then exemplified. We demonstrate how the new metrics reinforce our claim that the BCEMD architecture delivers understandable, maintainable and scalable software solutions.
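
The exact BCEMD metric definitions are given in the paper; as a hedged sketch of the general idea only, a cumulative message dependency could be tallied from (sender class, receiver class) pairs extracted from parsed bytecode:

    # Illustrative sketch: tallying a cumulative message dependency from
    # (sender class, receiver class) pairs, e.g. extracted from parsed
    # bytecode. The actual BCEMD metric definitions are in the paper.
    from collections import Counter

    messages = [
        ("Control", "Entity"), ("Boundary", "Control"),
        ("Control", "Entity"), ("Mediator", "DBInterface"),
    ]

    def cumulative_message_dependency(msgs):
        links = Counter(msgs)          # distinct sender -> receiver links
        return len(links), sum(links.values())

    distinct, total = cumulative_message_dependency(messages)
    print(distinct, "distinct dependencies over", total, "messages")
    # -> 3 distinct dependencies over 4 messages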

Title:

INTRUSION DETECTION BASED ON DATA MINING

Author(s):

Xuan Dau Hoang, Peter Bertok, Jiankun Hu

Abstract: Traditional computer misuse detection techniques can identify known attacks efficiently, but perform very poorly in other cases. Anomaly detection has the potential to detect unknown attacks; however, it is a very challenging task, since it aims to detect unknown attacks without any a priori knowledge about specific intrusions, and the technology is still in its early stages. Existing research in this area focuses either on user activity (macro-level) or on program operation (micro-level), but not on both simultaneously. In this paper, an attempt to look at both concurrently is presented. Based on an intrusion detection framework (Lee, 2001), we implemented a user anomaly detection system and conducted several intrusion detection experiments by analysing macro-level and micro-level activities. User behaviour modelling is based on data mining; frequent episode algorithms are used to build the user's normal profiles. The experimental results show that the system can effectively detect anomalies and changes in the user's normal working patterns.
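
As a hedged sketch of the frequent-episode idea (not the paper's actual algorithm; the event names, window size and support threshold are illustrative), a normal profile can be built by counting how often short ordered event patterns occur within a sliding window over the audit stream:

    # Sketch of frequent-episode profiling: count how often ordered
    # 2-event patterns occur within a sliding window of the audit
    # stream; rare deviations from the profile flag anomalies.
    from collections import Counter

    def frequent_episodes(events, window=3, min_support=2):
        episodes = Counter()
        for i in range(len(events) - window + 1):
            win = tuple(events[i:i + window])
            # count ordered pairs inside the window as 2-event episodes
            for a in range(window):
                for b in range(a + 1, window):
                    episodes[(win[a], win[b])] += 1
        return {e: c for e, c in episodes.items() if c >= min_support}

    audit = ["login", "read", "write", "login", "read", "write", "logout"]
    print(frequent_episodes(audit))
    # -> {('login', 'read'): 3, ('login', 'write'): 2,
    #     ('read', 'write'): 4, ('write', 'login'): 2}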

Title:

AN AUTHENTICATION SCHEME USING A SECRET SHARING TECHNIQUE

Author(s):

Mohamed Al-Ibrahim

Abstract: We introduce an authentication scheme based on Shamir's threshold secret sharing technique. The scheme, in general, is used for authenticating peer-to-peer communication. In particular, it is used for authenticating a host joining a multicast group.
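
For reference, a minimal Python sketch of the underlying (t, n) threshold technique follows; the field modulus and parameters are illustrative, and the paper's actual protocol, in which shares serve as authentication material, is not reproduced here:

    # Minimal sketch of Shamir's (t, n) threshold secret sharing over a
    # prime field; any t shares reconstruct the secret via Lagrange
    # interpolation at x = 0.
    import random

    P = 2**127 - 1  # a Mersenne prime used as the field modulus

    def make_shares(secret, t, n):
        coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
        def f(x):
            return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, n + 1)]

    def reconstruct(shares):
        secret = 0
        for j, (xj, yj) in enumerate(shares):
            num = den = 1
            for m, (xm, _) in enumerate(shares):
                if m != j:
                    num = num * (-xm) % P
                    den = den * (xj - xm) % P
            secret = (secret + yj * num * pow(den, P - 2, P)) % P
        return secret

    shares = make_shares(123456789, t=3, n=5)
    print(reconstruct(shares[:3]) == 123456789)  # True with any 3 shares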

Title:

A UNIFIED TOOL FOR EDITING INFORMATION OF DIFFERENT LEVELS OF ABSTRACTION

Author(s):

Alexander Kleschev, Vassili Orlov

Abstract: Ontology specification languages, ontologies of different levels of abstraction, enterprise knowledge and data are all useful in the life cycle of enterprise information systems. Since methods of working with these kinds of information are still at a research stage, new experimental models for representing information keep being proposed. As a result, extra effort is needed to develop editing tools for information represented using the new models, and these tools often turn out to be mutually incompatible. Meanwhile, the mentioned kinds of information are closely interrelated, which demands additional effort to make the editing tools compatible. Moreover, users of the editing tools need essentially different intellectual support in the editing process. Consequently, considerable effort and resources are spent developing experimental information editing tools that provide the respective classes of users with intellectual support, and on establishing compatibility among the tools. This paper presents a model of a unified editing tool intended to solve this problem.

Title:

PORTUGUESE LOCAL E-GOVERNMENT

Author(s):

Sílvia Dias, Luis Amaral, Leonel Santos

Abstract: The Internet, the World Wide Web and electronic commerce are transforming the way of doing business. These changes are impacting every industry in our country, including local government. The Internet offers a wide variety of opportunities to improve services to citizens and to disseminate information about their communities. In Portugal, the adherence of local government to the Internet is increasing visibly, but much more has to be done. In 1999 a first study was carried out to evaluate the situation of e-government in our country, and two years later a new study was undertaken, this time to evaluate the evolution registered in this area. In this paper we describe some of the conclusions of these studies, comparing the evolution over these two years.

Title:

A SYNTHESIS OF BUSINESS ROLE MODELS

Author(s):

Alain Wegmann, Pavel Balabko

Abstract: Modern Information and Communication Technology opens the door to innovations that can improve the functioning of companies. Many innovations can come from the analysis of business processes, and today modeling is widely used for such analysis. In this work we propose a process modeling technique based on role modeling. To specify a process where one business object may play several roles, a synthesis operation (the composition of two base roles into a third role) has to be specified. All role-based techniques have difficulties specifying role synthesis: synthesis is never specified without describing the actual messages passing between business roles. Such implementation details complicate the understanding of the model, and the semantics of synthesis becomes implicit. Specifying a business process of a complex system at a higher level of abstraction requires a proper understanding of the relationships between roles when they are put together in one common context. In this paper we define the concept of “synthesis constraints”, which makes the relations between roles explicit. Using “synthesis constraints” allows a business modeler to make explicit his decisions about how the synthesis is done, in an abstract and implementation-independent way. This approach can be used for building a BPR CASE tool that enables the discovery of new business processes by means of different disassembling and assembling of roles.

Title:

AGGREGATING EXPERT PROFILES FOR USER QUERYING AID

Author(s):

Miguel Delgado, Maria-Jose Martin Bautista, Daniel Sanchez, Maria Amparo Vila

Abstract: We present two different models to aggregate document evaluations and user profiles in the field of Information Retrieval. The main aim of this work is to discuss a general methodology to establish the most relevant terms to characterize a given “topic” in an Information Retrieval System. We start from a set of documents from which a set of characteristic terms is selected, in such a way that the presence of any term in each document is known, and we want to establish the most significant ones in order to select “relevant” documents about a given “topic” Π. For that, some experts are required to assess the set of documents. By aggregating these assessments with the presence weights of terms, a measurement of their relevance in relation to Π may be obtained. The two presented models differ in whether the experts query the system with the same terms (a single query) or with different terms (several queries). In each of these cases, there are two possibilities: first aggregate the opinions of the experts about the documents and then obtain the topic profile, or generate the expert profiles first and then aggregate these profiles to obtain the topic profile. Several different situations arise according to the form in which the experts state their opinion, as well as from the approach used to aggregate the opinions. An overview of these situations and a general methodology to cope with them in our model is presented here.
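
As a hedged numerical sketch of one of the two aggregation orders (aggregate the experts' opinions per document first, then build the topic profile; all data below is illustrative and not taken from the paper's models):

    # Illustrative sketch: average the experts' relevance assessments per
    # document, then weight each term's presence by that consensus to
    # score terms for the topic profile.
    presence = {             # term -> presence weight in each document
        "claim":  [0.9, 0.1, 0.8],
        "policy": [0.2, 0.7, 0.3],
    }
    expert_assessments = [   # per expert: relevance of each document
        [1.0, 0.0, 0.8],
        [0.9, 0.2, 1.0],
    ]

    def topic_profile(presence, assessments):
        n_docs = len(next(iter(presence.values())))
        consensus = [sum(e[d] for e in assessments) / len(assessments)
                     for d in range(n_docs)]
        return {t: sum(w * c for w, c in zip(ws, consensus)) / n_docs
                for t, ws in presence.items()}

    print(topic_profile(presence, expert_assessments))
    # higher score = term more characteristic of the topic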

Title:

TOWARDS THE ENTERPRISE ENGINEERING APPROACH FOR INFORMATION SYSTEM MODELLING ACROSS ORGANISATIONAL AND TECHNICAL BOUNDARIES

Author(s):

Prima Gustiene, Remigijus Gustas

Abstract: Enterprise Engineering has proved to be useful when a generally accepted intentional description of an information system is not available. A blueprint of the enterprise infrastructure provides a basis for system analysis of the organizational and technical processes; it is sometimes referred to as enterprise architecture. The major effort of this paper is to demonstrate how to bridge the gap among the various levels (syntactic, semantic and pragmatic) of enterprise engineering. Most information system engineering methodologies are heavily centred on system development issues at the implementation level. Such methodologies are thus restrictive, in that a supporting technical system specification cannot be motivated or justified in the context of organizational process models. Enterprise models provide a basis for gradually understanding why and how various technical system components come about. Some undesirable qualities of enterprise engineering are also sketched in this paper.

Title:

EXTENDING UML FOR MODELLING QUERIES TO OBJECT-RELATIONAL DATABASES

Author(s):

Carmen Costilla, Esperanza Marcos, Belén Vela

Abstract: Object-Relational Databases (ORDB) have become a main alternative for the management of complex data and relationships. It is also very common to access these databases through the web. Besides, new product versions integrate the object-relational model along with XML data management. The framework of this paper is MIDAS, an object-relational and XML-based methodology for the design of Web Information Systems. MIDAS uses UML as the modelling language for the definition of the whole system, extending and specialising it for the definition of systems based on the recommended technology. In MIDAS we have proposed a UML extension for the design of ORDB focused on the structural issues of the system. Due to the importance of queries in every information system, in this paper we extend that work to ORDB queries based on UML.

Title:

ANALYSING REQUIREMENTS FOR CONTENT MANAGEMENT

Author(s):

Virpi Lyytikäinen

Abstract: The content to be managed in organisations is in textual or multimedia formats. A major part of the content is, however, stored in documents. In order to find out the needs of the people and organisations producing and using the content, a profound requirements analysis is needed. In this paper, a method for requirements analysis for content management purposes is introduced. The new method combines different techniques from two existing methods, which were used in various content management development projects. The paper also describes a case study where the new method is applied.

Title:

USING MODEL OF ENTERPRISE AND SOFTWARE FOR DEPLOYMENT OF ERP SOFTWARE

Author(s):

Franck DARRAS

Abstract: In the deployment of ERP (Enterprise Resource Planning) software, suitable modelling of needs leads to better analysis and a good-quality specification of requirements; this modelling can be the key to success. Enterprise modelling develops a formal framework in which each element of the enterprise is identified, and seeks to include all viewpoints in the representation of the enterprise's operation. But the diversity of formalisms does not facilitate their use in project management. The aim of this paper is to show the use of the concepts of enterprise modelling, with a formalism close to software engineering, in order to improve the analysis and deployment of ERP systems.

Title:

A CONTEXT-AWARE USER-ADAPTIVE SUPPORTING SYSTEM FOR GOAL-ORIENTED REQUIREMENTS ELICITATION PROCESS

Author(s):

Chao Li, Han Liu, Jizhe Wang, Qing Wang, Mingshu Li

Abstract: Goal-oriented requirements elicitation is recognized as an important elicitation method by both research and industry. In a complex multi-user environment, however, many problems arise in performing goal-oriented requirements elicitation because support for user participation is neglected. We present a supporting system to assist users in taking part in the goal-oriented requirements elicitation (GRE) process. The system, which takes the user factor seriously into account, is believed to offer users a better participation experience in the GRE process.

Title:

A PEER-TO-PEER KNOWLEDGE SHARING APPROACH FOR A NETWORKED RESEARCH COMMUNITY

Author(s):

Yang Tian

Abstract: Over the past few years, interest in the potential of peer-to-peer computing, and in different approaches to knowledge sharing involving the development of networked communities, has grown rapidly. This paper investigates the potential that a peer-to-peer community may have for effective and efficient knowledge sharing. It starts with an introduction to networked communities and the knowledge sharing activities they support. A comparison between centralized and decentralized approaches to supporting networked communities is made. A case study using a networked Journal Club is discussed in detail, including the design and implementation of the supporting peer-to-peer prototype using JXTA as the development platform. The paper concludes with a discussion of the peer-to-peer architecture as the direction of future knowledge sharing systems.

Title:

TELEWORK: EMPLOYMENT OPPORTUNITIES FOR A DISABLED CITIZEN

Author(s):

Nelson Rocha, Silvina Santana

Abstract: Disabled citizens have been considered potential beneficiaries of teleworking. However, the subject raises several questions. Specifically, it is important to determine companies' willingness to adopt this new work modality, the activities they would consider passing to external entities, and the most appropriate model to adopt when dealing with teleworkers with special needs. On the other hand, it is necessary to determine and analyse perceptions and expectations, in order to manage possible resistance and provide solutions likely to be adopted and used efficiently. This work reports the results of a study designed to find answers to these questions. The study also identified the competences potential teleworkers need to have, enabling the design of training actions and the development of insertion strategies adapted to the teleworkers and to the needs and expectations of employing companies.

Title:

DISTRIBUTED WORKFLOW MANAGEMENT IN OPEN GRID SERVICES ARCHITECTURE

Author(s):

Zhen Yu, Zhaohui Wu

Abstract: Vast resources in a grid can be managed flexibly and effectively by workflow management systems. Here a structure for a workflow management system in the Open Grid Services Architecture is proposed. In this structure, the main components of conventional workflow management systems are made into high-level Grid Services and distributed in the grid. Those services then compose a distributed workflow management system, which can make full use of workflow resources and ensure that processes execute efficiently and reliably in the grid. The interfaces required by those workflow services and some implementation details of the system are also discussed.

Title:

MODELLING AND GENERATING BUSINESS-TO-BUSINESS APPLICATIONS USING AN ARCHITECTURE DESCRIPTION LANGUAGE - BASED APPROACH

Author(s):

Ilham Alloui

Abstract: The emergence of the Internet and the World Wide Web, together with new technological advances, has led organisations to seize the opportunities offered by electronic business, in particular the opportunity to co-operate within the context of electronic (virtual or networked) enterprises, communities or alliances based on open networks and current information and communication technologies. Among the various kinds of electronic alliances, we target inter-organisational ones that aim at co-operating to fulfil clients' orders while preserving the autonomy of the involved organisations and enabling concurrency of their activities, flexibility of their negotiations and dynamic evolution of their environment. Members of such alliances may have either similar or complementary competencies. In this context, the paper presents a software architecture-based approach to model and generate business-to-business (B2B) applications that support decentralised and dynamic electronic alliances. The approach is founded on modelling the alliance's life-cycle using an Architecture Description Language (ADL) called Zeta and generating executable code from the description into a target implementation environment called Process/Web. The benefits of such an approach are manifold: (i) using an ADL provides high-level abstractions hiding implementation details, (ii) having a language means that several life-cycle models can be defined and modified according to changing requirements, (iii) executable code can be generated from abstract models in several target implementation languages. The work presented is being developed and validated within the framework of the X French regional project.

Title:

STUDY ON CHINESE ENTERPRISE E-READINESS INDEX AND APPLICATION

Author(s):

Jian Chen, Yucun Tian, Yan Zhu

Abstract: The information industry has become a leading global industry of the 21st century, a mainstay of national economies and a powerful driver of economic development. How to use information technology to enhance core competitive ability is one of the most important factors of national and enterprise competitiveness. Since enterprises are the foundation of the national economy, the construction of e-enterprises must be greatly accelerated to improve the informatization of the national economy. Therefore, many scholars and experts are currently investigating this area. Surveys of nearly 100 typical Chinese enterprises are analyzed and several new analysis algorithms are considered in this paper. Through these methods, a set of Chinese enterprise e-readiness indexes is put forward.

Title:

INTERORGANIZATIONAL WORKFLOW IN THE MEDICAL IMAGING DOMAIN

Author(s):

Schahram Dustdar

Abstract: Interorganizational workflows are increasingly gaining relevance in enterprise information systems, particularly in the development of internet-based applications. A process model has to be shared to enable work items to be managed in different workflow engines. The state of the art for three interorganizational workflow models (capacity sharing, case transfer, and the loosely coupled model) is discussed in this paper. Further, the medical imaging domain made early progress in workflow standardization; its main concepts and software components are introduced, and key workflows and protocols of the domain are described. Next, the interorganizational workflow models are applied to the domain and the advantages of certain models are pointed out. Finally, the required infrastructure for a Web-service-based design is discussed, conclusions for Web-service-based implementations are drawn, and further research areas are identified.

Title:

FUNCTIONAL SIZE MEASUREMENT OF LAYERED CONCEPTUAL MODELS

Author(s):

Geert Poels

Abstract: This paper builds on previous work showing a way to map the concepts used in object-oriented business domain modelling onto (a subset of) the concepts used by the COSMIC Full Function Points (COSMIC-FFP) functional size measurement method for modelling and sizing a software system from the point of view of its functional user requirements. In this paper we present a refined set of measurement rules resulting from a careful revision of our previous proposal, based on ‘field trials’, feedback from function points experts and the forthcoming transition from COSMIC-FFP version 2.1 to the ISO/IEC standard version 2.2. The main contribution of the paper is, however, an extended set of rules to be used when applying COSMIC-FFP to multi-layer conceptual models, including at least an enterprise layer and, on top of this, an information system services layer. We also outline the approach that will be used to further verify and validate the proposed measurement rules and to evaluate their efficiency and effectiveness.
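
COSMIC-FFP sizes software by counting data movements of four types (Entry, Exit, Read and Write), each contributing one unit of functional size. As a hedged sketch of how such counting extends to a layered model (the layer, process and movement lists below are illustrative, not the paper's rules):

    # Sketch of the COSMIC-FFP counting principle: each data movement
    # (Entry, Exit, Read, Write) contributes one unit of functional size;
    # with layered models, movements are counted per layer.
    MOVEMENTS = {"Entry", "Exit", "Read", "Write"}

    def size_per_layer(model):
        """model: {layer: {process: [movement, ...]}} -> size per layer."""
        return {layer: sum(sum(1 for m in moves if m in MOVEMENTS)
                           for moves in procs.values())
                for layer, procs in model.items()}

    model = {
        "information-system-services": {
            "register order": ["Entry", "Read", "Write", "Exit"],
        },
        "enterprise": {
            "check stock": ["Entry", "Read", "Exit"],
        },
    }
    print(size_per_layer(model))
    # -> {'information-system-services': 4, 'enterprise': 3}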

Title:

FÖDERAL: MANAGEMENT OF ENGINEERING DATA USING A SEMISTRUCTURED DATA MODEL

Author(s):

Christoph Mangold, Ralf Rantzau, Bernhard Mitschang

Abstract: The Föderal system is a flexible repository for the management, integration and modeling of product data. Current systems in this domain employ object-oriented data models; whereas this is adequate for the management of product data, it proves insufficient for integration and modeling. Present semistructured data models, however, are ideally suited for integration, but data management and also modeling are a problem. In this paper we describe our approach to narrowing the gap between structured and semistructured data models. We present the Föderal information system, which employs a new semistructured data model, and show how this model can be employed in the context of management, integration, and modeling of engineering data.

Title:

SCORING WWW PAGES WITH SOCIAL CREDIBILITY IN A HYPERLINK ENVIRONMENT

Author(s):

Hidenari Kiyomitsu, Junya Morisita, Tatsuya Kinugawa, Masami Hirabayashi, Kazuhiro Ohtsuki, Shinzo Kitamura

Abstract: In this paper, we propose an approach to web page scoring. We introduce an evaluation based on the credibility of page creators, assuming that all Web pages are latently evaluated on this criterion; we call this evaluation Social Credibility (SC). We propose a dynamic scoring approach that combines the SC evaluation with an analysis of the link structure, defining a degree of recommendation for each link. We show the convergence of the calculation based on our approach under certain conditions. We also show the diversity of this evaluation using externally given SC, by regarding the SC evaluation and the propagation of scores as independent. We experiment with this approach and discuss the results.
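
A hedged sketch of the flavour of such a calculation (the mixing weight, link degrees and SC values below are illustrative, not the paper's formulation): scores are propagated along weighted recommendation links, mixed with the externally given SC value, and iterated until they converge:

    # Illustrative sketch: propagate scores along weighted recommendation
    # links, mixing in an externally supplied Social Credibility (SC)
    # value per page; iterate until the scores converge.
    links = {  # page -> {target page: degree of recommendation}
        "a": {"b": 1.0},
        "b": {"a": 0.5, "c": 0.5},
        "c": {"a": 1.0},
    }
    sc = {"a": 0.9, "b": 0.3, "c": 0.6}  # creator credibility, given externally
    alpha = 0.5                           # weight of SC vs. propagated score

    scores = {p: 1.0 for p in links}
    for _ in range(100):
        new = {p: alpha * sc[p] + (1 - alpha) *
                  sum(scores[q] * w
                      for q, out in links.items()
                      for r, w in out.items() if r == p)
               for p in links}
        done = max(abs(new[p] - scores[p]) for p in links) < 1e-9
        scores = new
        if done:
            break
    print(scores)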

Title:

PRODUCING DTB FROM AUDIO TAPES

Author(s):

Luís Carriço, Teresa Chambel, Nuno Guimarães, Carlos Duarte

Abstract: This paper presents a framework for the conversion of audiotape spoken books to full-featured digital talking books, developed within the context of the IPSOM project. The introduction of search, cross-referencing and annotation mechanisms, with multimedia and through multimodal capabilities, is considered. Different formats and standards are taken into consideration, as well as different interaction alternatives. The resulting digital talking books aim at the visually impaired community, but also at situated applications and studies of cognitive aspects. The framework is part of a larger setting enabling the authoring, by reuse and enrichment of multimedia units, of digital multimedia and multimodal documents.

Title:

SOFTWARE CONFEDERATIONS AND MANUFACTURING

Author(s):

Michal Zemlicka

Abstract: Modern information systems of large companies and other human organizations (like state administration or complex health care systems) must have a specific architecture called a software confederation: a peer-to-peer network of large autonomous software units behaving like permanent services. The confederation architecture is a notion and technology orthogonal to the more popular object orientation; the two are used in different problem or scale domains. Confederative systems require specific experts. Such experts can be found among those with positive experience of manufacturing systems, but not among experts with a strong object-orientation background. Some technical problems simplifying the design of EAI are discussed, and not-yet-solved issues are formulated.

Title:

IMPLEMENTING A GENERIC COMPONENT-BASED FRAMEWORK FOR TELE-CONTROL APPLICATIONS

Author(s):

Avraam Chimaris, George Papadopoulos

Abstract: In this paper, we design and implement a generic framework of components that can be used for the realization of tele-control applications. This category of applications focuses particularly on the issues of managing distributed units on remote end-systems. Such applications contain remote and administrative units that are connected and exchange data and control messages. In the analysis of our framework, we used UML for the specification, analysis and presentation of system operations. The distributed units of our framework use XML messages and TCP channels for exchanging data and control messages, and we implement a communication “protocol” that contains the basic messages that can be used in tele-control systems. Finally, we present two different applications implemented by reusing the generic components of our framework.
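
As a hedged sketch of the kind of XML control message such units might exchange over a TCP channel (the element names, command and endpoint are hypothetical, not the paper's actual protocol):

    # Hedged sketch of an XML control message sent over a TCP channel;
    # element and field names are hypothetical, not the paper's protocol.
    import socket
    import xml.etree.ElementTree as ET

    def build_message(unit_id, command):
        msg = ET.Element("control")
        ET.SubElement(msg, "unit").text = unit_id
        ET.SubElement(msg, "command").text = command
        return ET.tostring(msg, encoding="utf-8")

    def send_message(host, port, payload):
        with socket.create_connection((host, port), timeout=5) as conn:
            conn.sendall(payload)

    payload = build_message("remote-unit-07", "report-status")
    print(payload)  # b'<control><unit>remote-unit-07</unit>...'
    # send_message("192.0.2.10", 9000, payload)  # example endpoint only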

Title:

LANGUAGES AND MECHANISMS FOR SOFTWARE PROCESSES AND MANUFACTURING ENTERPRISE PROCESSES: SIMILARITIES AND DIFFERENCES

Author(s):

Franck Theroude, Selma Arbaoui, Hervé Verjus

Abstract: This paper confronts two wide and deep process fields: software processes and enterprise manufacturing processes (called, for short, manufacturing processes). It analyses them, presents a state of the art according to a set of process requirements, and concludes with their similarities and differences.

Title:

DESIGNING TOWARDS SUPPORTING AND IMPROVING CO-OPERATIVE ORGANISATIONAL WORK PRACTICES

Author(s):

M. Cecilia C. Baranauskas, Rodrigo Bonacin

Abstract: The literature in CSCW and related fields has acknowledged the importance of understanding the social context in which prospective computational systems for the workplace will be embedded. Participatory Design (PD) approaches share these concerns and offer several techniques to commit the design process to considerations that take people's work practices and participation into account. Although participatory techniques provide mechanisms to capture important concepts of the organisational context, their results are not well represented by traditional methods of system modelling. Organisational Semiotics (OS) understands the whole organisation as a semiotic system and provides methods for considering the social aspects of organisations in modelling and deriving the system. In this paper we propose an approach that combines PD techniques and OS methods to design CSCW systems. The approach is illustrated with Pokayoke, a system designed to support problem solving in the context of a lean manufacturing organisation.

Title:

TOWARDS ADAPTIVE USER INTERFACE GENERATION: ONE STEP CLOSER TO PEOPLE

Author(s):

María Lozano, Antonio Fernández-Caballero, Francisco Montero, Víctor López-Jaquero

Abstract: User interface generation has become a Software Engineering branch of increasing interest, probably due to the great amount of money, time and effort spent developing user interfaces and the increasingly demanding user requirements for usability (Nielsen, 1993) and accessibility (W3C, 2002) compliant interfaces. There are different kinds of users, and that is a fact we cannot ignore. Human society is full of diversity, and that must be reflected in human-computer interaction design. Thus, we need to engage users in a new kind of interaction concept where user interfaces are tailor-made, intelligent and adaptive. A new generation of specification techniques is necessary to face these challenges successfully. Model-based design has proved to be a powerful tool to achieve these goals. A first step towards adaptive user interface generation is introduced by means of the concept of connector applied to model-based design of user interfaces.

Title:

TOWARDS AN AGENT ARCHITECTURAL DESCRIPTION LANGUAGE FOR INFORMATION SYSTEMS

Author(s):

Manuel Kolp

Abstract: This paper identifies the foundations for an architectural description language (ADL) to specify multi-agent system architectures for information systems. We propose a set of system architectural concepts based on the BDI agent model and existing classical ADLs. We then conceptualize SKwyRL-ADL, aimed at capturing a “core” set of structural and behavioral concepts, including the relationships that are fundamental in architecture description for BDI-MAS. We partially apply our ADL to a peer-to-peer document sharing example.

Title:

E-COMMERCE AUTHENTICATION: AN EFFECTIVE COUNTERMEASURES DESIGN MODEL

Author(s):

Victor Sawma

Abstract: Existing authentication models for e-commerce systems take into account satisfying the legitimate user requirements described in security standards. Yet the process of introducing countermeasures to block malicious user requirements is ad hoc and relies completely on the security designer's expertise. This leads to expensive implementation life cycles when defects related to the design model are discovered during the system-testing phase. In this paper, we describe an authentication countermeasures design model for e-commerce systems. The model includes effective countermeasures against all known malicious user requirements and attacks. The described model is preventive in nature and can be used with other authentication models or implemented as a stand-alone module for e-commerce systems.

Title:

USER INTERFACE COMPLEXITY ASSESSMENT IN LARGE-SCALE SAFETY-CRITICAL ENVIRONMENTS

Author(s):

Erman Coskun, Martha Grabowski

Abstract: In order to design understandable and usable interfaces, the human-computer interaction, computer-supported cooperative work, psychology, cognitive sciences, and human factors disciplines have developed methodologies and determined critical elements for successful user interfaces. The importance of the user interface increases particularly in safety-critical or mission-critical systems, where the user has limited time within which to make correct decisions. User interfaces for these types of systems should be well designed and easy to understand and use; otherwise mishaps or accidents may occur, and the consequences may include loss of human life, large financial losses, and environmental damage. All this suggests that examining the complexity of user interfaces in safety-critical large-scale systems is important. In this paper, we study user interface complexity in safety-critical environments and report the results of a study conducted with an Embedded Intelligent Real-Time System and its operators.

Title:

REAL-TIME DATABASE MODELING CONSIDERING QUALITY OF SERVICE

Author(s):

Maria Lígia Barbosa Perkusich, Angelo Perkusich

Abstract: Recent research points to real-time database systems (RTDB) as a key functional unit contributing to the success of emergent applications such as electronic commerce, notice for demand, telephony systems and on-line trading. This research is motivated by the fact that these applications deal with large amounts of data, as well as with data and transactions subject to timing constraints. Due to the high service demand, many transactions may miss their deadlines. To address these problems, we present an RTDB model that considers quality of service (QoS) to support performance guarantees. A simulation study shows that our model can achieve a significant performance improvement in terms of deadline misses and of the accumulated maximum imprecision resulting from the negotiation between logical and temporal consistency. Furthermore, we show a model analysis generated by the Design/CPN tool.

Title:

DISTRIBUTED SOFTWARE DEVELOPMENT: TOWARD AN UNDERSTANDING OF THE RELATIONSHIP BETWEEN PROJECT TEAM, USERS AND CUSTOMERS

Author(s):

Roberto Evaristo, Jorge Audy, Rafael Prikladnicki

Abstract: The objective of this paper is to propose a typology for distributed software development comprising the relationship among the three main stakeholders: the project team (developers, analysts, managers, testers, system administrators, graphic artists, etc.), customers and users. With this objective, we propose a set of criteria to define geographically distributed environments. As a result, a model to define the distribution level of an organization in a DSD (Distributed Software Development) environment is presented. This model is applied in two case studies and the results are discussed. The case studies involve two companies with headquarters in the United States (U.S.) and a development unit in Brazil; data from these two exploratory case studies are presented to support the proposed model. Advantages of this representation, as well as some aspects of the increasing distribution of software development, particularly in a few Brazilian organizations, are discussed.

Title:

DEVELOPING DOCUMENT AND CONTENT MANAGEMENT IN ENTERPRISES USING A ‘GENRE LENS’

Author(s):

Anne Honkaranta

Abstract: A great deal of organizational information content is organized and produced as text and stored, understood and managed as documents – logical (sets of) content units meant for human comprehension. On some occasions the content needed by human actors is of a smaller or larger grain size than that of a document. The dynamic nature of digital documents, alongside the multiple technologies used for enacting them, has made the document a fuzzy object of analysis, raising the possibility that important information content is overlooked in enterprise document management systems and development initiatives. I argue that enterprises need means for analyzing their information content independently of technologies and media, whether the content is identified and managed as documents or not. For this purpose I introduce the theory of genres - typified communicative actions characterized by similar substance and form - as a ‘lens’ that can be operationalized for document and content analysis. In the paper I discuss how genres can be employed for document and content analysis, drawing on a literature review I carried out. The research literature shows that the theory of genres has been applied in multiple ways, such as for requirements analysis, for identifying documents used in enterprise workgroups along with their metadata, for the analysis and design of information coordination, and so on. Multiple metadata frameworks have also been developed for classifying the communicational content within enterprises. The findings of the literature review indicate that the genre ‘lens’ can be used for document and content analysis in multiple ways.

Title:

THE IMPACT OF INFORMATION AND COMMUNICATION TECHNOLOGIES IN SMALL AND MEDIUM ENTERPRISES

Author(s):

Silvina Santana

Abstract: This work presents part of the results of an empirical investigation carried out in small and medium enterprises of Portuguese industry. One of its goals was to determine the impact that the use of Information and Communication Technologies (ICT) may have on the way companies deal with the external environment and on organizational factors like culture, strategy, management and leadership, structure, people, processes, routines and procedures, and financial resources. The investigation followed a previously developed model for the study of organizational learning and involved 458 companies of the District of Aveiro, those that had responded to a preliminary survey of all the industrial companies of the district (3057) and affirmed that they hold and use ICT. The collected data was submitted to multivariate data analysis procedures, namely Principal Components Analysis and Cluster Analysis.

Title:

A PROMISING MANIFESTO FOR E-BUSINESS VALUE MODELING PROBLEMS

Author(s):

Midori Takao, Masao Matsumoto

Abstract: Continuing from the Value Proposition Innovation method presented on the occasion of the ICEIS’02 panel on “Problems and Promises of e-business”, this paper explores another crucial subject matter relevant to the theme, namely, what the root problem is among the many difficulties encountered in recent e-business modeling projects. The series of surveys undertaken by Japan’s IEICE Software Enterprise Modeling research thrust identifies immature support for a dual discipline such as e-business as the root cause that generates numerous problems. One breakthrough is to provide an evaluation framework that allows one to decide whether the e-business modeling under consideration is beneficial. The framework will become a key component, essentially needed to form a feedback loop in “model and go” support for e-business.

Title:

SOFTWARE PROTOTYPING CLASSIFICATION

Author(s):

Claudine Toffolon, Salem Dakhli

Abstract: Many academics and practitioners consider software prototyping a solution to many symptoms of the software crisis. As software prototyping may be costly and complex, many typologies have been proposed in the literature to help in understanding this approach. The main weakness of such typologies is their technical orientation. In this paper, we propose an alternative classification of software prototyping which takes into account all the aspects of software. Our typology is based upon two frameworks proposed by the authors in a previous work: the software dimensions theory and the software global model.

Title:

REUSING A TIME ONTOLOGY

Author(s):

H. Sofia Pinto, Duarte Nuno Peralta, Nuno Mamede

Abstract: Ontologies are becoming crucial in several disparate areas, such as the Semantic Web or Knowledge Management. Ontology building is still more of an art than an engineering task, and none of the available methodologies to build ontologies from scratch has been widely accepted. One cost-effective way of building ontologies is by means of reuse. In this article we describe the development of an ontology of Time by means of reuse, following an evolving prototyping life cycle. This process involved several complex subprocesses: knowledge acquisition and requirement specification using Natural Language techniques, reverse engineering, knowledge representation translation, and technical evaluation. As far as we know, this is the first time that all these processes have been combined. We describe the techniques and best practices that were successfully used.

Title:

IS DEVELOPMENT WITH IS COMPONENTS

Author(s):

Slim TURKI, Michel LEONARD

Abstract: In this paper, we present our vision of component-based information systems (IS) development in a data-intensive application context. A hyperclass is a large class, formed from a subset of the conceptual classes of the global schema of a database, forming a unit with precise semantics. This concept introduces a kind of modularity in the definition and management of a database schema, and a powerful kind of independence between the methods and the schema. We present our global approach to reuse and give our definition of an IS component (ISC). For us, an ISC is an autonomous IS respecting a set of conformity rules; it is defined through a static space, a dynamic space and an integrity-rules space. We use the hyperclass concept to implement the static space. Applied to an ISC, it facilitates the handling of the component when it is refined or integrated.
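
As a hedged sketch of the hyperclass idea (class names and the conformity check below are illustrative, not the paper's definition): a named subset of the global schema's classes is treated as one semantic unit, and component methods may only reference classes inside it:

    # Illustrative sketch: a hyperclass as a named subset of the global
    # schema's conceptual classes, with a simple conformity check that
    # methods only reference classes inside the unit.
    global_schema = {"Customer", "Order", "Invoice", "Product", "Supplier"}

    class Hyperclass:
        def __init__(self, name, classes):
            assert classes <= global_schema, "must be a subset of the schema"
            self.name, self.classes = name, classes

        def conforms(self, method_refs):
            """A method conforms if it only touches classes of the unit."""
            return set(method_refs) <= self.classes

    billing = Hyperclass("Billing", {"Customer", "Order", "Invoice"})
    print(billing.conforms(["Order", "Invoice"]))   # True
    print(billing.conforms(["Order", "Supplier"]))  # False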

Title:

SYSTEMS DEVELOPMENT METHOD RATIONALE: A CONCEPTUAL FRAMEWORK FOR ANALYSIS

Author(s):

Pär Ågerfalk, Kai Wistrand

Abstract: Information systems development methods are inherently influenced by the rationale of their creators. This rationale, which is always based on the creator’s values and assumptions about the problem domain, implicitly or explicitly motivates the different modelling activities and primitives prescribed by a method. The method, and hence its inherited rationale, directs method users’ attention toward certain kinds of phenomena and away from others. Today we see a trend towards standardizing systems development in terms of standard modelling languages and standard development processes. When using an existing (standard) method, developers are forced to rely on the rationale of that particular method. Sometimes, however, there are reasons to enhance the standard method to reflect aspects of the world held as important by the method users but not emphasized by the method creator. Hence, there is a need to integrate the rationale of the method users with that of the existing method. In this paper, we investigate what method rationale is and how it can be modelled and analysed. The paper employs a method engineering approach in that it proposes method support for analysing, describing and integrating method rationale, an emerging essential task for method engineers in a world of standardization.

Title:

A METHODOLOGY FOR THE INTEGRATION OF CSCW APPLICATIONS

Author(s):

Anne James, Richard Gatward, Rahat Iqbal

Abstract: Organizations rely on a wide variety of collaborative applications to support their everyday activities and to share resources. Collaborative applications are typically designed from scratch when existing applications do not meet organizational needs, which requires more budget and time. This paper reports on the integration of existing collaborative, or computer-supported cooperative work (CSCW), applications in order to support the collaborative activities of organizations and meet their requirements at low cost. This is part of our research towards investigating and developing an integrative framework for CSCW applications, flexible enough to accommodate the various and varying needs of the organization community. We discuss different types of integration models and interoperability in CSCW and consider different models of CSCW systems. A framework for CSCW integration is presented, and a methodology based on this framework is also proposed.

Title:

EXTRACTING THE SOFTWARE ELEMENTS AND DESIGN PATTERNS FROM THE SOFTWARE FIELD

Author(s):

Mikio Ohki, Yasushi Kambayashi

Abstract: Deriving the class structure of object-oriented software has been studied intensively. We have proposed a methodology that divides the conceptual model used in object-oriented analysis into basic elements, such as classes, attributes, methods and relations, and defines constraint characteristics and constructing operations on each element. In that methodology, we applied field theory from quantum physics to software and proposed the software field concept (Ohki and Kambayashi, 2002a). Our thesis is that software is a kind of field in which software elements, such as methods and attributes, interact with each other to produce certain behavioral patterns. The methodology explains well the characteristics of class libraries (Ohki and Kambayashi, 2002b). Once the software elements are extracted from the software field, the methodology allows design patterns to be constructed from the characteristics of the elements (Ohki and Kambayashi, 2002a). Although we defined the extraction operations to elicit the software elements, we failed to show that those operations are justified and correct (Ohki and Kambayashi, 2002a). To overcome this problem, in this paper we introduce distribution functions to represent the software elements and formulate the interactions between the functions. Using the distribution functions and their interactions, we suggest how to extract the software elements from the software field and how to derive design patterns from the characteristics of the extracted elements. This paper first describes the basic concepts of the software field and then introduces the distribution functions representing the software elements; the latter part of the paper describes how the approach can be applied to derive typical design patterns.

Title:

FLEXIBLE PROCESSES AND METHOD CONFIGURATION: OUTLINE OF A JOINT INDUSTRY-ACADEMIA RESEARCH PROJECT

Author(s):

Kjell Möller, Pär Ågerfalk, Kai Wistrand, Gregor Börjesson, Fredrik Karlsson, Martin Elmberg

Abstract: This paper outlines a joint industry-academia research project in the area of method engineering. Founded in practical experiences and emerging theoretical constructs, the project aims at developing theories, methods and tools to support the adaptation, integration and construction of method components for flexible configuration of system development methods. By explicating the possibilities of using a method's inherent rationale, the possibilities of adapting rigorous methods (such as the Rational Unified Process) to comply with increasing demands for flexibility will be exploited. The paper also addresses the approach to technology transfer adopted in the project, viewing the project as existing in two different intellectual spheres, one academic and one industrial. The two spheres overlap in a space of conceptualization and interpretation shared by the practitioners and academic researchers involved. In this way the project adopts an iterative process of reflection and application, generating knowledge directly applicable in industry as well as knowledge of theoretical and scientific importance.

Title:

A NEW USER-CENTERED DESIGN OF DISTRIBUTED COLLABORATION ENVIRONMENTS: THE RÉCIPROCITÉ PROJECT

Author(s):

Alain Derycke, Frédéric Hoogstoel, Xavier Le Pallec, Ludovic Collet

Abstract: Designing collaborative applications is a hard task. Indeed, anticipating users' needs and helping users understand the process of the proposed services are more difficult due to the group dimension. The Réciprocité project is a new way of designing collaborative applications that tries to reduce these two difficulties. In this paper, we present the strong points of our approach: Peer-to-Peer (P2P), a full-XML architecture, and tailorability mechanisms.

Title:

ASPECT-ORIENTED SOFTWARE DEVELOPMENT: AN OVERVIEW

Author(s):

Isabel Brito, Ana Moreira

Abstract: Separation of concerns is a software engineering principle that calls for the clear identification of all the elements that participate in a system. Some concerns, such as security and performance, cut across many other concerns. Classical approaches do not support the modularisation and further integration of these crosscutting concerns with the functional requirements of a system, producing scattered and tangled representations (e.g. specifications, code) that are difficult to understand, maintain and evolve. Aspect-Oriented Programming (AOP) aims at handling these problems. Recently we have noticed a clear interest in propagating AOP ideas and concepts to earlier activities of the software development process. This paper gives an overview of aspect-oriented software development, with special emphasis on aspect-oriented requirements engineering.
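
As a small language-level illustration of the idea (a hedged Python sketch; AOP languages such as AspectJ weave advice far more generally), a crosscutting logging concern can be defined once and applied to functional code instead of being tangled through every function body:

    # Separating a crosscutting concern: the logging "aspect" is defined
    # once and woven onto functional code via a decorator, instead of
    # being repeated inside every function body.
    import functools
    import logging

    logging.basicConfig(level=logging.INFO)

    def logged(func):                      # the crosscutting concern
        @functools.wraps(func)
        def advice(*args, **kwargs):       # "advice" run at each call
            logging.info("calling %s", func.__name__)
            return func(*args, **kwargs)
        return advice

    @logged                                # "weaving" the aspect in
    def transfer(src, dst, amount):        # purely functional concern
        return f"moved {amount} from {src} to {dst}"

    print(transfer("A", "B", 100))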

Title:

COMPLEMENTARY ADOPTION AND OPTIMIZATION OF ENTERPRISE RESOURCE PLANNING SYSTEMS

Author(s):

C. Sophie Lee

Abstract: Enterprise Resource Planning (ERP) systems emphasize the integrative, platform adoption of system software, instead of piecemeal upgrades. However, technological changes can bring only limited benefit to the organization if other factors in the organization -- such as strategy and organizational structure -- are not changed in coordinated or complementary directions. Failure to understand the complementarity between technology and organization may cause a low payoff of IT investment. Customer Relationship Management (CRM) software is the latest addition to the ERP family. It is rooted in relationship marketing, or customer relationship management, which emphasizes the need to build a long-term relationship with customers. Spending on CRM software has grown six-fold over the past years, but the customer satisfaction of American consumers did not grow. An examination of the literature reveals that the current CRM model tends to focus more on customer “service” than on customer “relationship”. This study proposes to combine the American Customer Satisfaction Index (ACSI) model and the complementarity framework to provide optimal design and adoption of CRM software.

Title:

INFORMATION SYSTEM FAILURE: ANALYSIS AND COMMENTARY

Author(s):

John Wise, Ata Nahouraii, Anthony Debons

Abstract: Contemporary events of considerable significance to national and public welfare suggest that information was a significant force in the character and outcome of each event, such as the seizure of more than 700 hostages by Chechen rebels at the Dubrovka theater in Moscow on October 23, 2002, the terrorist attack of 9/11, the Challenger shuttle accident, and the explosion at Bhopal. The need to identify the successes and failures of information systems in these events is deemed relevant to national and private interests. In 1986, an aggregation of distinguished scholars met in Bad Windsheim, Federal Republic of Germany, to serve as lecturers in an Advanced Study Institute sponsored by NATO's Science division. A number of issues were addressed, including the prevailing methods used in the assessment of information system failure, the organizational factors pertaining thereto, the role of human cognitive variables, the capacity of the system to induce or resist failure, and many other socio-economic-political variables considered pertinent to an understanding of information system failure. The paper summarizes these dimensions of information system failure as presented at the institute and, in addition, comments on the importance of such systems in the light of contemporary socio-political circumstances.

Title:

NEW APPROACH TO TEST THE SAP SYSTEM DATA SECURITY

Author(s):

Jen-Hao Tu

Abstract: The SAP system is the most widely used ERP (Enterprise Resource Planning) system in the world, with thousands of seamlessly linked components and subsystems. Conducting security tests in such a complicated ERP system is still a major challenge. Based on a study of SAP system data security testing at the author's company, this paper discusses issues related to legal and regulatory requirements, IT security governance and segregation of duties in order to meet these emerging security challenges. A practical SAP data security framework is proposed to link these requirements to the business units. AIS (Audit Information System) is an integrated audit tool originally provided by SAP to facilitate both SAP system audits and the business audit process. The functionality of AIS is explored to ensure that the tests meet the security requirements in the SAP data security framework.

Area 4 - SOFTWARE AGENTS AND INTERNET COMPUTING

Title:

THE RESOURCE FRAMEWORK FOR MOBILE APPLICATIONS: ENABLING COLLABORATION BETWEEN MOBILE USERS

Author(s):

Jörg Roth

Abstract: Mobile devices are becoming more and more interesting for several kinds of field workers, such as sales representatives or maintenance engineers. When in the field, mobile users often want to collaborate with other mobile users or with stationary colleagues at home. Most established collaboration concepts are designed for stationary scenarios and often do not sufficiently support mobility. Mobile users are only weakly connected to the communication infrastructure by wireless networks, and small mobile devices like PDAs often do not have sufficient computational power to handle the demanding tasks of coordinating and synchronizing users; they have, for example, very limited user interface capabilities and reduced storage capacity. In addition, mobile devices are subject to different usage paradigms than stationary computers and are often turned on and off during a session. In this paper, we introduce a framework for mobile collaborative applications based on so-called resources. The resource framework leads to a straightforward functional decomposition of the overall application. Our platform Pocket DreamTeam provides a runtime infrastructure for applications based on resources. We demonstrate the resource concept with the help of two applications built on top of the Pocket DreamTeam platform.

Title:

A SEMANTIC FRAMEWORK FOR DISTRIBUTED APPLICATIONS

Author(s):

Liang-Jie Zhang, Wei-Tek Tsai, Bing Li

Abstract: The .XIN technology is a novel approach to building and integrating existing distributed applications. The essence of a .XIN is business logic descriptions. Based on the concept of .XIN, developers' effort is minimized because their development work is concentrated on mapping business logic to .XINs. The adaptor layer is an interpreter that translates .XINs into implementations for particular distributed domains; this layer hides the details of the implementation techniques of distributed applications. Moreover, applications built with .XIN can share their services over the Internet via RXC (Remote .XIN Call), and a remote .XIN-based service can be blended into a local .XIN-based application via RXI (Remote .XIN Interchange). Finally, an object interface can be mapped to a .XIN interface. With the support of this mapping, both non-.XIN applications and .XIN applications have the same interface, the .XIN interface, so it is possible for them to share their respective services over the Internet. This is also a new approach to integrating heterogeneous applications. The .XIN technology is thus a semantic framework for distributed applications.

Title:

OPEN TRADING - THE SEARCH FOR THE INFORMATION ECONOMY'S HOLY GRAIL

Author(s):

Graham Scriven

Abstract: This paper examines the concept of Open Trading, establishing its crucial importance in achieving comprehensive benefits for all trading partners as a result of the move towards the Information Economy. The rationale for interoperability is also examined and placed in perspective. The paper considers how Open Trading can be achieved and suggests ten principles as a practical guide for both vendors and business organisations.

Title:

EVALUATION OF MAINFRAME COMPUTER SYSTEM USING WEB SERVICE ARCHITECTURE

Author(s):

Yukinori Kakazu, Mitsuyoshi Nagao

Abstract: In this paper, we propose a mainframe computer system using a web service architecture, in order to realize a mainframe computer system that permits users to access it conveniently and perform flexible information processing. A web service is a system architecture in which applications communicate through the Internet using SOAP (Simple Object Access Protocol). SOAP is a simple protocol based on XML and HTTP. It has the advantages that communication can pass through the firewalls provided to promote network security and that it can be used on various platforms; the web service architecture inherits these advantages. It is therefore likely that an effective and convenient mainframe computer system used over the Internet can be implemented with a web service architecture. Moreover, the implementation of the proposed system can bring about a new application model: applications that let users draw on the mainframe transparently and perform large-scale information processing can be implemented on low-performance clients such as mobile platforms. In addition, applications combining the high-performance libraries of a mainframe computer system can be implemented on such clients. We report the construction of the proposed system and confirm its effectiveness through a computational experiment. The experimental results revealed that effective information processing could be performed over the Internet using the proposed system.
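Since the abstract centres on client applications exchanging SOAP envelopes over HTTP, a minimal sketch may help. It assumes a hypothetical endpoint mainframe.example.com/compute exposing an Add operation; the host, path, namespace and operation names are illustrative only, not the authors' actual service.

```python
# A minimal sketch of a SOAP 1.1 call over HTTP, assuming a hypothetical
# mainframe web service endpoint; all names here are illustrative.
import http.client

SOAP_ENVELOPE = """<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <Add xmlns="urn:example:mainframe">
      <a>2</a>
      <b>3</b>
    </Add>
  </soap:Body>
</soap:Envelope>"""

conn = http.client.HTTPConnection("mainframe.example.com", 80)
conn.request("POST", "/compute", body=SOAP_ENVELOPE.encode("utf-8"),
             headers={"Content-Type": "text/xml; charset=utf-8",
                      "SOAPAction": "urn:example:mainframe#Add"})
response = conn.getresponse()
print(response.status, response.read().decode("utf-8"))
```

Because the request is plain XML over HTTP port 80, it can typically traverse firewalls that block other RPC mechanisms, which is the portability advantage the abstract describes.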

Title:

WHAT IS THE VALUE OF EMOTION IN COMMUNICATION? IMPLICATIONS FOR USER CENTRED DESIGN.

Author(s):

Robert Cox

Abstract: This research presents an investigation into the question: what is the value of emotion in communication? To gain a greater appreciation of this question, the paper deconstructs it into its component nouns (Value, Emotion and Communication) and studies them both in isolation from each other and as a total construct. Further, the everyday use of communications technology (e-mail, chat lines, mobile and fixed-line telephones) has changed human communication norms. To identify the significance of this change, an investigation into whether emotions continue to play an important role in effective human-to-human communication is warranted.

Title:

COMBINING WEB BASED DOCUMENT MANAGEMENT AND EVENT-BASED SYSTEMS - MUDS AND MOOS TOGETHER WITH DMS FORM A COOPERATIVE OPEN SOURCE KNOWLEDGE SPACE

Author(s):

Thorsten  Hampel

Abstract: The WWW has developed into the de facto standard for computer-based learning. However, as a server-centered approach it confines readers and learners to passive, non-sequential reading. Authoring and web-publishing systems aim at supporting the authors' design process. Consequently, learners' activities are confined to selecting and reading (downloading) documents, with almost no possibility of structuring and arranging their own learning spaces, let alone doing so cooperatively. This paper presents a learner-centered, completely web-based approach built on virtual knowledge rooms. Our technical framework allows us to study different technical configurations within the traditional university setting. In terms of system design, the concept of virtual knowledge rooms combines the event-based technology of virtual worlds with classical document management functions in a client-server framework. Knowledge rooms and learning materials such as documents or multimedia elements are represented in a fully object-oriented model of objects, attributes and access rights. We do not focus on interactive systems managing individual access rights to knowledge bases, but rather on the cooperative management and structuring of distributed knowledge bases.

Title:

USER-TAILORED E-BUSINESS THROUGH A THREE-LAYER PERSONALIZATION MODEL BASED ON AGENTS

Author(s):

Irene Luque Ruiz, Miguel Angel Gómez-Nieto, Gonzalo Cerruela Garcia, Enrique López Espinosa

Abstract: The explosion of the Internet, together with the advantages that electronic commerce offers today, is driving an important growth in the number of Web sites devoted to this activity; as a result, the quantity of information reaching users about the products or services these sites offer keeps increasing. Faced with so much information, only some of which is of interest, users end up not processing it at all. This situation has led researchers to seek solutions, among which the use of Artificial Intelligence stands out. From this idea arises the personalization of Web sites, whose objective is to provide users with the information they need. In this paper a multi-level personalization model is proposed which, applied to the Business Virtual Centre (BVC) portal, personalizes the services, the information, and the activities that each user may carry out in it. The personalization model is based on the stereotypes existing in the system, the information introduced by the user, and the knowledge extracted from the information generated by the user during his stay in the BVC.

Title:

PERSONALIZATION MEETS MASS CUSTOMIZATION - SUPPORT FOR THE CONFIGURATION AND DESIGN OF INDIVIDUALIZED PRODUCTS

Author(s):

Martin Lacher, Thomas  Leckner, Michael Koch, Rosmary Stegmann

Abstract: Using electronic media for customer interaction enables enterprises to better serve customers by cost-efficiently offering personalized services to all customers. In this paper we address the area of providing help for customers in selecting or designing individualized products (mass customization) by using personalization technologies. The paper provides an introduction to the application area and presents a system for supporting the customization and design of individualized products. The support solution is presented and discussed from a process (customer) point of view and from a system point of view.

Title:

E-COMMERCE PAYMENT SYSTEMS - AN OVERVIEW

Author(s):

Pedro Fonseca, Joaquim Marques, Carlos Serrao

Abstract: Electronic Commerce plays a growing role in the modern economy, since it provides a convenient way for consumers to acquire goods and services through electronic means, of which the Internet and the WWW are the most important. However, this new way of trading raises important problems in the way payments are made, and trust is one of the most important. This paper starts by presenting some of the complexities related to Electronic Commerce payments in this New Economy, from both a consumer and a seller perspective. Next, differences between traditional and electronic payment systems are identified, along with how each deals with the identified complexities. Electronic payment systems (EPS) are then surveyed, noting the advantages they present for Electronic Commerce. Finally, a comparative EPS table is presented identifying the strong and weak points of each EPS, and conclusions are drawn.

Title:

CONTENT ANALYSIS OF ONLINE INTERRATER RELIABILITY USING THE TRANSCRIPT RELIABILITY CLEANING PERCENTAGE (TRCP): A SOFTWARE ENGINEERING CASE STUDY

Author(s):

Peter Oriogun

Abstract: In this paper the author presents a case study of online discourse analysed by message unit using quantitative content analysis, with particular emphasis on the author's proposed interrater agreement percentage, referred to here as the Transcript Reliability Cleaning Percentage (TRCP). The paper examines the ratings of participants' messages in terms of level of engagement within a negotiation forum, in line with the author's Negotiated Incremental Architecture, Oriogun (2002), using the Win-Win Spiral Model, Boehm (1988). The variables investigated are participation and interaction. The paper is divided into six sections: the rationale for the study; a brief introduction to the Negotiated Incremental Architecture; the study itself; a definition of what we mean by the Transcript Reliability Cleaning Percentage (TRCP) of online discourse by message unit; the interpretation of individual participants' results; and a conclusion recommending a follow-on paper using our SQUAD approach to online posted messages. The SQUAD approach is a semi-structured categorisation of online messages. The paper also discusses the reasons why there has been very little research on interrater reliability with respect to content analysis of online discourse; furthermore, a comparison is made between the Cohen's kappa value reported in Rourke, Anderson, Garrison & Archer (2000) and the author's proposed TRCP. It is argued that the proposed TRCP will better enhance interrater reliability (percentage agreement between coders) in the rating of online transcripts. The author suggests that under certain circumstances it is not possible to obtain 100% agreement between coders after discussion, while noting that this was achieved by Hara, Bonk & Angeli (2000).
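The TRCP measure itself is defined in the paper; as a point of reference, the sketch below computes the two standard quantities the abstract compares it against: raw percentage agreement between two coders and Cohen's kappa. The category labels and data are invented for illustration.

```python
# A minimal sketch of the standard interrater-reliability measures discussed
# above; the coding categories and ratings are illustrative only.
from collections import Counter

def percent_agreement(coder_a, coder_b):
    """Share of message units that both coders rated identically."""
    return sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(coder_a)
    p_o = percent_agreement(coder_a, coder_b)
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n)
              for c in set(coder_a) | set(coder_b))
    return (p_o - p_e) / (1 - p_e)

a = ["engaged", "engaged", "neutral", "engaged", "off-task"]
b = ["engaged", "neutral", "neutral", "engaged", "off-task"]
print(percent_agreement(a, b))   # 0.8
print(cohens_kappa(a, b))        # 0.6875
```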

Title:

ARCHCOLLECT: A SET OF COMPONENTS TOWARDS WEB USERS’ INTERACTIONS

Author(s):

Julio Ferreira, Edgar Yano, Joao  Sobral, Joubert  Castro, Tiago Garcia, Rodrigo Pagliares

Abstract: This paper describes an example of a system that emphasizes web users' interactions, called ArchCollect. One JavaScript component and five Java components gather information coming only from the user, independently of the web application being monitored and of the web server used to support it. This improves the portability of the software and its capacity to deal with many web applications in a data center at the same time, for example. The ArchCollect relational model, which is composed of several tables, supports analyses of factors such as purchases, business results, the time taken to serve each interaction, user, process, service or product. In this software, data extraction and data analysis are performed either by personalization mechanisms provided by internal algorithms, by commercial decision-making tools focused on such services (OLAP, Data Mining and Statistics), or by both.

Title:

INTEGRATION OF OBJECT-ORIENTED FRAMEWORKS HAVING IDL AND RPC-BASED COMMUNICATIONS

Author(s):

Debnath Mukherjee

Abstract: This paper proposes a software architecture to unify disparate application frameworks that have Interface Definition Language (IDL) and RPC-based communication between client and server, thus enabling distributed computation using disparate frameworks. The architecture also demonstrates how multiple inheritance from classes belonging to disparate object-oriented frameworks is possible.

Title:

THE SECURE TRUSTED AGENT PROXY SERVER ARCHITECTURE

Author(s):

Michelangelo Giansiracusa

Abstract: Concerns about malicious host system attacks against agents have been a significant factor in the absence of investment in agent technologies for e-commerce in the greater Internet. However, as this paper shows, agent systems represent a natural evolution in distributed system paradigms, and, as in other distributed systems, applying traditional distributed systems security techniques and incorporating trusted third parties can discourage bad behaviour by remote systems. The concept and properties of a trusted proxy server host, a 'middle-man' host anonymising authenticated agent entities in agent itineraries, is introduced, along with its inherent benefits. It is hoped that this fresh secure agent architecture will inspire new directions in tackling the very challenging malicious agent platform problem.

Title:

SECURE SMART CARD-BASED ACCESS TO AN E-LEARNING PORTAL

Author(s):

Josef von Helden, Ralf Bruns, Jürgen Dunkel

Abstract: The purpose of the project OpenLearningPlatform is the development of an integrated E-learning portal in order to support teaching and learning at universities. Compared to other E-learning systems the originality of the OpenLearningPlatform is the strong smart card-based authentication and encryption that significantly enhances its usefulness. The secure authentication of every user and the encryption of the transmitted data are the prerequisites to offer personalized and authoritative services, which could not be offered otherwise. Hence, the smart card technology provides the basis for more advanced E-learning services.

Title:

TOWARDS WEB SITE USER'S PROFILE: LOG FILE ANALYSIS

Author(s):

Carlos Alberto de Carvalho, Ivo Pierozzi Jr., Eliane Gonçalves Gomes, Maria de Cléofas Faggion Alencar

Abstract: The Internet is a remote, innovative, extremely dynamic and widely accessible communication medium. As in all other forms of human communication, we observe the development and adoption of its own language, inherent to its multimedia aspects. Embrapa Satellite Monitoring has been using the Internet for more than a decade as a medium for disseminating its research results and for interaction with clients, partners and web site users. In order to evaluate web site usage and the performance of the e-communication system, the Webalizer software has been used to track and calculate statistics based on web server log file analysis. The objective of this study is to analyze the data and evaluate indicators related to the origin of requests (search string, country, time), the actions performed by users (entry pages, agents) and system performance (error messages). This will help to remodel the web site design to improve interaction dynamics, and to develop a customized log file analyser that retrieves coherent, accurate information.
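As background to this kind of study, the sketch below tallies requested pages and error responses from a web server log. It assumes the Apache common/combined log format that tools like Webalizer consume; the file name is illustrative.

```python
# A minimal sketch of web server log analysis over the Apache combined
# log format; "access.log" is an assumed file name.
import re
from collections import Counter

LOG_PATTERN = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3}) \S+'
)

pages, errors = Counter(), Counter()
with open("access.log") as f:
    for line in f:
        m = LOG_PATTERN.match(line)
        if not m:
            continue                      # skip malformed lines
        if m.group("status").startswith(("4", "5")):
            errors[m.group("status")] += 1   # error-message indicator
        else:
            pages[m.group("path")] += 1      # entry-page indicator

print(pages.most_common(10))
print(errors.most_common())
```

A customized analyser of the kind the abstract proposes would extend the same loop to referrer search strings, countries (from the host field) and user agents.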

Title:

SCALABLE AND FLEXIBLE ELECTRONIC IDENTIFICATION

Author(s):

David Shaw, S. Maj

Abstract: Verification of network service requests may be based on widely available identification and authentication services. Complexity or multiple access requirements may call for access control artefacts such as hardware-based signature generators. Characteristics of artefact-generated signatures include security and repeatability. Electronic signatures used in remote transactions need to be graded, flexible and scalable to permit appropriate user selection. Further, inherent error detection may reduce inadvertent errors and misconduct, and aid arbitration.

Title:

A SURVEY RESEARCH OF B2B IN IRAN

Author(s):

Javad Karimi Asl

Abstract: EC is a relatively new concept in the business domain (Wigand, 1997). While the consumer side of the Web explosion has been much touted, it is the Business-to-Business (B2B) market that has quietly surpassed expectations. This paper is based on a survey of 102 business (or IS) managers in Iran and discusses the management practices, applications, problems and technical situation of EC development in the country, evaluating the current B2B situation in Iran. It also discusses these managers' experiences with, and satisfaction with, the electronic commerce (EC) solutions currently in use. The findings are useful for both researchers and practitioners, as they provide insight into critical management issues for non-governmental organizations and policy makers in developing countries. The results of this study show that there are substantial differences between the conditions of EC in developed and developing countries.

Title:

WHITE PAPER FOR FLOWAGENT PLATFORM

Author(s):

Wenjun Wang

Abstract: FlowAgent is a network platform that implements a "Streamline Bus" with Jini network technology. The "Streamline Bus" addresses the problems that prevent us from integrating different applications across enterprises and organizations: it realizes task scheduling among different applications through pre-defined data requiring/providing relations between tasks, and it provides automatic workload balancing, dynamic fail-over and run-time data/performance tracking. One critical issue of the FlowAgent platform is how to define the internal format for the task running/scheduling data, so that it (1) provides the isolated applications with the data they need to run, while (2) controlling the flow through the "Streamline service". Based on the "Streamline Bus", large-scale scheduling systems can be built that integrate applications from different business fields. Systems based on the "Streamline Bus" follow a fully distributed model and thus differ markedly from traditional workflow systems, which depend on a centralized rule engine and impose many limitations on the types of application that can be integrated.
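The data requiring/providing relations described above amount to dependency-driven scheduling. The sketch below illustrates the general idea under that reading; the task names and the schedule helper are invented for illustration and are not FlowAgent's API.

```python
# A minimal sketch of scheduling driven by "requiring / providing" data
# relations between tasks, assuming each task declares the data keys it
# needs and the keys it produces; all names are illustrative.
def schedule(tasks, available=()):
    """tasks: {name: (requires, provides)}; returns an execution order."""
    data = set(available)
    done, order = set(), []
    while len(done) < len(tasks):
        ready = [n for n, (req, _) in tasks.items()
                 if n not in done and set(req) <= data]
        if not ready:
            raise RuntimeError("deadlock: unsatisfiable data dependencies")
        for name in ready:
            order.append(name)
            done.add(name)
            data |= set(tasks[name][1])   # its outputs become available
    return order

tasks = {
    "order_entry":  ([], ["order"]),
    "credit_check": (["order"], ["approval"]),
    "shipping":     (["order", "approval"], ["tracking_id"]),
}
print(schedule(tasks))  # ['order_entry', 'credit_check', 'shipping']
```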

Title:

A MISSION-BASED MULTIAGENT SYSTEM FOR INTERNET APPLICATIONS

Author(s):

Glenn Jayaputera, Seng Loke, Arkady Zaslavsky

Abstract: Software agents have been one of the most active research areas in the last decade. As a result, new agent technologies and concepts are emerging. Mobile agent technology has been used in real life environments, such as on-line auctions, supply chain, information gatherings, etc. In most situations, mobile agents must be created and carefully crafted to work together almost from scratch. We believe that this is quite inefficient for application developers and users, and hence propose a system for generating and coordinating agents based on the notion of agent missions. The prototype system is called eHermes and its architecture and components are discussed in the paper.

Title:

KNOWLEDGE CONSTRUCTION IN E-LEARNING - DESIGNING AN E-LEARNING ENVIRONMENT

Author(s):

Lily Sun, Kecheng Liu, Shirley Williams

Abstract: In the traditional classroom, students tend to depend on tutors for their motivation, direction, goal setting, progress monitoring, self-assessment, and achievement. A fundamental limitation is that students have little opportunity to conduct and manage the learning activities that are important for knowledge construction. E-Learning approaches and applications, supported by pervasive technologies, have brought great benefits to society as a whole, but they have also raised many challenging questions. One issue of which researchers and educators are fully aware is that technology alone cannot drive courseware design for e-Learning. Effective, high-quality learning requires the employment of appropriate learning theories and paradigms, organisation of contents, and suitable methods and techniques of delivery. This paper introduces our research work in designing an e-Learning environment, with emphasis on the instructional design of courseware for e-Learning.

Title:

THE FUTURE OF TELEPHONY: THE IP SOLUTION

Author(s):

Sílvia Fernandes

Abstract: Enterprises have begun to transform their working environments to meet not only the business world of today but also that of tomorrow. Working methods are more flexible than ever before: some workers collaborate entirely from home and others work in several different offices, circulating between remote workplaces. In a short time the way we work will be so radically different that working will be just what we do, and no longer where we are. As globalisation becomes a business reality and technology transforms communications, the world of data transmission, together with wireless networks, has progressed enormously, whereas fixed, wire-line voice communications have barely changed. Yet tariffs are still based on time and distance, even though this no longer makes sense in today's global marketplace, despite the reduced costs that have resulted from the deregulation of public telephone networks.

Title:

TOWARD A CLASSIFICATION OF INTERNET SCIENTIFIC CONFERENCES

Author(s):

Abed Ouhelli, Prosper Bernard, Michel Plaisent, Lassana Maguiraga

Abstract: Since 1980, the classification of scientific production has been a constant concern for academics. Despite its growing importance in the last decade, the Internet has not been investigated as an autonomous domain. This communication relates our efforts to develop a first classification of themes, based on calls for papers submitted to the ISWORLD community over the last two years. The distribution of themes and sub-themes is presented and compared.

Title:

WEB NAVIGATION PATTERNS

Author(s):

Eduardo Marques, Ana Cristina Bicharra  Garcia

Abstract: Many Internet service providers and online services require you to manually enter information, such as your user name and password, to establish a connection. With Scripting support for Dial-Up Networking, you can write a script to automate this process. A script is a text file that contains a series of commands, parameters, and expressions required by your Internet service provider or online service to establish the connection and use the service. You can use any text editor, such as Microsoft Notepad, to create a script file. Once you've created your script file, you can then assign it to a specific Dial-Up Networking connection by running the Dial-Up Scripting Tool.

Title:

DYNAMICALLY RECONSTRUCTIVE WEB SERVER CLUSTER USING A HIERARCHICAL GROUPING MECHANISM

Author(s):

Myong-soon  Park, Sung-il  Lim

Abstract: The Internet is growing quickly and the number of people using the WWW is increasing exponentially, so companies offering Web services want to serve clients around the clock, all year round, and therefore use cluster systems for availability and performance. Previous work gave the dispatcher a static position, so that if that node failed, the whole system crashed. The dispatcher role should instead be assigned dynamically, as in the SASHA (Scalable Application-Space Highly-Available) architecture, which is composed of COTS components, application-space software, agents and the Tokenbeat protocol for system administration. Because SASHA organizes the nodes of the system into a single virtual ring, however, it incurs system administration overhead. This paper proposes improved fault detection and reconfiguration performance within the SASHA architecture, using a hierarchical grouping mechanism.
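As a rough illustration of heartbeat-style failure detection over a virtual ring of nodes (the general idea behind Tokenbeat-like protocols), the sketch below drops nodes whose heartbeats go stale. The timeout, class interface and semantics are illustrative assumptions, not SASHA's actual protocol.

```python
# A minimal sketch of ring membership with heartbeat-based failure
# detection; all parameters and names are illustrative.
import time

class Ring:
    def __init__(self, nodes, timeout=2.0):
        self.nodes = list(nodes)
        self.timeout = timeout
        self.last_seen = {n: time.time() for n in self.nodes}

    def heartbeat(self, node):
        """Called whenever a node's heartbeat (or token) is observed."""
        self.last_seen[node] = time.time()

    def reconfigure(self):
        """Drop nodes whose heartbeat is stale, shrinking the ring."""
        now = time.time()
        alive = [n for n in self.nodes
                 if now - self.last_seen[n] < self.timeout]
        failed = set(self.nodes) - set(alive)
        self.nodes = alive
        return failed

# ring = Ring(["n1", "n2", "n3"])
# ... after 2 seconds with heartbeats only from n1 and n2:
# ring.reconfigure() would return {"n3"} and remove it from the ring.
```

A hierarchical grouping mechanism, as the title suggests, would run such detection within small groups and aggregate upward, rather than circulating state around one large ring.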

Title:

CUSTOMER LOYALTY IN E-BUSINESS

Author(s):

Bel G. Raggad, Jim Lawler

Abstract: This study examines from simulation the effects of the privacy sensitivity of customers, the personalization practices or standards of retailers and the difficulty in locating favorable sites, on the loyalty of consumers to a Web site. The key finding of the study is that customer privacy sensitivity is a critical success factor that significantly impacts loyalty to a retailer. Customers have higher loyalty to sites that request the least information, while they have lower loyalty to sites that request the most information. Web retailers considering expanded personalization of products or services to customers, through increased personal information, need to rethink their practices. The study also found that difficulty in locating a favorable site is a success factor that impacts retailer loyalty, and that customers have higher loyalty to difficult to locate favorable sites on the Web. These findings are important at a time when consumers are empowered with Web technology to immediately shop competitor sites. The significance of privacy to loyalty is a factor that needs to be considered seriously by retailers, if they are to compete for loyal customers, and this study furnishes a framework to effectively research loyalty, personalization and privacy on the Web.

Title:

OPERATION-SUPPORT SYSTEM FOR LARGE-SCALE SYSTEM USING INFORMATION TECHNOLOGY

Author(s):

Seiji Koide, Riichiro Mizoguchi, Akio Gofuku

Abstract: We are developing an operation support system for large-scale systems, such as rocket launches, using Information Technology. In the project, we are building a multimedia database that organizes the diverse information and data produced in design, testing and actual launches; developing case-based and model-based troubleshooting algorithms and systems that automatically detect anomalies and rapidly diagnose their causes; and providing a fast networking environment that allows us to work with experts at a distance. The distributed collaborative environment, in which human operators and software agents can all work together, is being developed by means of Web service technologies such as UDDI, WSDL and SOAP, and Semantic Web technologies such as RDF, RDFS, OWL and Topic Maps. This project was prepared under contract within the Japanese IT program of the Ministry of Education, Culture, Sports, Science and Technology.

Title:

SIMULATION STUDY OF TCP PERFORMANCE OVER MOBILE IPV4 AND MOBILE IPV6

Author(s):

Jiankun Hu, Damien Phillips

Abstract: Mobile IPv6 (MIPv6) is a protocol for handling mobility in the next generation Internet (IPv6). However, the performance of MIPv6 has not yet been extensively investigated. Knowledge of how MIPv6 affects TCP performance, especially in comparison with MIPv4, can provide directions for further improvement. In this report, an intensive simulation study of TCP performance over MIPv4 and MIPv6 has been conducted. Simulation with the well-known network simulator NS-2 is used to highlight the differences when TCP is run over these two Mobile IP protocols in a hybrid wired/wireless environment. Initial simulations have shown a solid improvement in performance for MIPv6 when the IPv6 Route Optimisation features are used. During the course of the simulations, a consistent event causing drops in TCP throughput was identified: out-of-order arrival of packets when the mobile node initiates a handover. This out-of-order arrival falsely invokes TCP congestion control, which reduces throughput. The difference in overall throughput between MIPv4 and MIPv6 is roughly proportional to the difference in packet size attributable to IPv6's larger header. Another contribution of this work is to provide modifications and new functions to the NS-2 simulator, such as node processing time, to make such investigations possible. To the best of our knowledge, no similar study has been reported.
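Throughput figures in a study like this are typically post-processed from simulator trace files. The sketch below estimates per-interval TCP throughput from an NS-2 trace, assuming the classic wired trace format ("r <time> <from> <to> <type> <size> ..."); field positions may need adapting to the exact scenario, and the sink-node argument is illustrative.

```python
# A minimal sketch that post-processes an NS-2 trace file to estimate TCP
# throughput at a sink node; assumes the classic wired trace format and
# illustrative field positions.
def throughput(trace_path, sink_node, interval=1.0):
    buckets = {}
    with open(trace_path) as f:
        for line in f:
            fields = line.split()
            # keep only receive ("r") events for TCP packets
            if len(fields) < 6 or fields[0] != "r" or fields[4] != "tcp":
                continue
            t, to_node, size = float(fields[1]), fields[3], int(fields[5])
            if to_node == sink_node:
                bucket = int(t / interval)
                buckets[bucket] = buckets.get(bucket, 0) + size
    # bytes per interval -> bits per second
    return {k * interval: v * 8 / interval for k, v in sorted(buckets.items())}

# print(throughput("out.tr", sink_node="3"))
```

A sustained dip in the per-interval values around a handover time would correspond to the false congestion-control invocation the abstract identifies.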

Title:

COLLABORATIVE ENGINEERING PORTAL

Author(s):

KRISHNA KUMAR RAMALEKSHMI SANKAR KUMAR, COLIN TAY, KHENG YEOW TAN, STEVEN CHAN, YONGLIN LI, SAI KONG CHIN, ZIN MYINT THANT

Abstract: The Collaborative Engineering Portal (CE-Portal) is envisioned to be a comprehensive state-of-the-art infrastructure for facilitating collaborative engineering over the Web. The system offers Web-based collaborative use of High Performance Computing and Networking technology for product/process design, helping enterprises shorten design cycles. The platform allows government professionals and engineers to share information among themselves and to work together with their private-sector counterparts as a virtual project team. The Collaborative Engineering Portal is developed as a multi-tiered system implemented using VNC and other Java technologies. In conclusion, we analyze the strengths, weaknesses, opportunities and threats of the approach.

Title:

A SURVEY OF KNOWLEDGE BASE GRID FOR TRADITIONAL CHINESE MEDICINE

Author(s):

Jiefeng Xu, Zhaohui Wu

Abstract: A knowledge base grid is a kind of grid that takes many knowledge bases as its foundation and as its knowledge sources. All these knowledge sources follow a public ontology standard defined by a standards organization. A knowledge base grid has its own specific domain knowledge, and so can be browsed at the semantic level; it also supports correlative browsing and knowledge discovery. In this paper, we introduce a generic knowledge base grid for Traditional Chinese Medicine. Its framework consists of three main parts: a Virtual Open Knowledge Base, a Knowledge Base Index, and a Semantic Browser. We examine the implementation in detail. Furthermore, knowledge presentation and the services of the knowledge base grid are discussed.

Title:

TOWARDS A SECURE MOBILE AGENT BASED M-COMMERCE SYSTEM

Author(s):

Ning Zhang, Omaima Bamasak

Abstract: It is widely agreed that mobile agent technology, with its useful features, will provide the technical foundation for future m-commerce applications, as it can overcome the wireless network limitations of limited bandwidth and frequent disconnections, as well as the weaknesses of mobile devices. For mobile agents to be accepted as a primary technology for enabling m-commerce, proper security mechanisms must be developed to address the new security issues they bring to the fore. The most challenging and difficult problem among them is protecting mobile agents against malicious hosts. Although, to the best of our knowledge, there is as yet no general solution to this problem, mechanisms that provide effective protection against specific attacks from hostile hosts have been proposed. This paper analyses the security requirements for a mobile agent in the context of m-commerce, surveys related work against the requirements specified, and suggests the development of a framework that provides confidentiality of the data carried by a mobile agent by using a secret sharing scheme together with fair exchange and non-repudiation services.
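Secret sharing of the kind suggested here is commonly Shamir's (k, n) threshold scheme: the secret is the constant term of a random degree k-1 polynomial over a prime field, shares are points on the polynomial, and any k shares recover the secret by Lagrange interpolation. A minimal sketch follows; the prime and parameters are illustrative, and the paper's own construction may differ.

```python
# A minimal sketch of Shamir's (k, n) threshold secret sharing; the prime
# and example secret are illustrative choices.
import random

PRIME = 2**127 - 1  # a Mersenne prime large enough for a short session key

def make_shares(secret, k, n):
    """Split secret into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):          # Horner evaluation mod PRIME
            y = (y * x + c) % PRIME
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(PRIME)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = make_shares(123456789, k=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 of the 5 suffice
```

In the m-commerce setting sketched by the abstract, an agent would carry shares distributed across hosts so that no single (possibly malicious) host ever holds enough to recover the protected data.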

Title:

NON-REPUDIATION AND FAIRNESS IN ELECTRONIC DATA EXCHANGE

Author(s):

Aleksandra Nenadic, Ning Zhang

Abstract: In this paper we discuss two security issues, non-repudiation and fairness, in association with e-commerce applications. In particular, these issues are addressed in the context of electronic data exchange, one of the most common e-commerce applications. The paper gives a detailed survey of approaches to non-repudiation and to fair electronic data exchange protocols. We additionally discuss current technologies that propose solutions to these issues, and the emerging standards in the area of business data formats and protocols for the exchange of such data. Finally, we discuss the architectural layer at which protocols for non-repudiation and fair data exchange should be implemented.

Title:

SOMEONE: A COOPERATIVE SYSTEM FOR PERSONALIZED INFORMATION EXCHANGE

Author(s):

Layda Agosto, Laurence Vignollet, Pascal Bellec, Michel Plu

Abstract: This paper presents a user-centric, social-media service: SoMeONe. Its goal is to build an information exchange network using Web informational networks. It should allow the construction of personal knowledge bases whose quality is improved by collaboration. It tries to increase users' commitment by helping them to establish and maintain interesting interactions with enriching people. Although many users are individualists, the rules we define for this medium should encourage cooperative behaviour. The functionality it offers lies between a bookmark management system and mailing lists. With SoMeONe, users exchange information with semantic addressing: they only need to annotate information for it to be diffused to appropriate users. Each user interacts only through a manually controlled contact network composed of known and trusted users. However, to keep contact networks open, SoMeONe helps each user send information to new appropriate users; in return, the user expects these users to send him new information as well. In companies, where the Intranet is composed of huge amounts of heterogeneous and diverse information, such collective behaviour should increase the personal efficiency of each collaborator. Thus, SoMeONe provides solutions to some knowledge management problems, particularly for companies aware of the value of their social capital.

Title:

POTENTIAL ADVANTAGES OF SEMANTIC WEB FOR INTERNET COMMERCE

Author(s):

Yuxiao Zhao

Abstract: The past decade saw much hype in the area of information technology, and the emergence of the Semantic Web makes us ask whether it is another instance. This paper focuses on its potential application in Internet commerce and intends to answer that question to some degree. The contributions are threefold: first, we identify and examine twelve potential advantages of applying the Semantic Web to Internet commerce; second, we conduct a case study of e-procurement to show its advantages for each process of e-procurement; lastly, we identify critical research issues that may turn the potential advantages into tangible benefits.

Title:

BUSINESS MODEL ANALYSIS APPLIED TO MOBILE BUSINESS

Author(s):

Giovanni Camponovo

Abstract: Mobile business is a young, promising industry created by the emergence of wireless data networks. Like other emerging industries, it is characterized by a large number of uncertainties at different levels, in particular concerning technology, demand and strategy. This paper focuses on the strategic uncertainties: a large number of actors are trying various strategic approaches to position themselves most favourably in the value system and, as a consequence, are experimenting with a number of innovative business models. We argue that the successful business models are likely to be the ones that best address the economic peculiarities underlying this industry, such as mobility, network effects and natural monopolies. The paper presents the principal classes of actors that will participate in the mobile business industry and gives an overview of their business models based on a formalized ontology.

Title:

VOICEXML APPLIED TO A WIRELESS COMMUNICATION SYSTEM

Author(s):

FRANK WANG

Abstract: The project aims to develop a wireless online communication system (Wireless Messenger) to aid communication in small-to-medium enterprises. By expressing automated voice services in VoiceXML, a voice-accessible Web site is created in addition to the visual WML Web site. This wireless system links an out-of-office mobile phone to an in-house server. The functions of the system include the posting and notification of internal messages, the posting and notification of notices, the setting and notification of events, calendar reference modules and administrative controls.

Title:

A NEW SOLUTION FOR IMPLEMENTATION OF A COLLABORATIVE BUSINESS PROCESS MODEL

Author(s):

Takaaki Kamogawa, Masao Matsumoto

Abstract: This paper presents a Collaborative Business Process Model based on a Synchronized Theory. The Cisco case of co-working with suppliers is viewed in terms of business-process collaboration to identify issues concerning collaboration with suppliers. The authors also discuss past and present concepts of collaboration, and propose that it is necessary to combine a synchronized theory with a collaborative business process model. We propose a new solution for implementation of the Collaborative Business Process Model from the viewpoint of open infrastructure.

Title:

A DESIGN PROCESS FOR DEPLOYING B2B E-COMMERCE

Author(s):

Youcef Baghdadi

Abstract: This paper focuses on an architecture and a design process for developing applications that support B2B electronic commerce, given its growth and its difference from other categories of e-commerce in many aspects. It first describes current architectures, reference models, approaches and implementing technologies. It then proposes an architecture with four abstraction levels: business process; decomposition and coordination; supporting electronic commerce services; and implementing technology, together with the interfaces between them. This abstraction aims to make B2B e-commerce process-driven rather than technology-driven, making business processes independent of the implementing technologies. Finally, a five-step design process in accordance with this architecture is described.

Title:

AN OBJECT ORIENTED IMPLEMENTATION OF BELIEF-GOAL-ROLE AGENTS

Author(s):

Walid  Chainbi

Abstract: One of the main driving forces behind multi-agent systems research and development is the Internet. Agents are populating the Internet at an increasingly rapid pace; unfortunately, they are almost universally asocial. Adequate agent concepts will accordingly be essential for agents in such open environments. To address this issue, we show in the first part of this paper that agents need both communication concepts and organization concepts. We argue that, instead of the usual approach of starting from a set of intentional states, the intentional structure should be deduced in terms of interaction. To this end, we come up with ontologies related to communication and organization. The second part of this paper presents a study comparing the agent paradigm with the object paradigm, showing the capabilities as well as the limits of the object paradigm in dealing with the agent paradigm. We illustrate our work with the well-known prey/predator game.

Title:

BUILDING SUPPLY CHAIN RELATIONSHIPS WITH KNOWLEDGE MANAGEMENT: ENGINEERING TRUST IN COLLABORATIVE SYSTEMS

Author(s):

John  Perkins, Ann-Karin Jorgensen, Lisa Barton, Sharon Cox

Abstract: Collaborative systems are essential components of electronic supply chains. Barriers to collaboration are identified and a preliminary model for evaluating its characteristic features is proposed. Some features of knowledge management and knowledge management systems are briefly reviewed and the application of these to the needs of collaborative system evaluation is explored. A process for iterative evaluation and review of collaborative system performance is proposed. Finally, a case study in the retail industry is used to assess the contribution of knowledge management concepts and systems to develop improved e-commerce performance in collaborative value networks.

Title:

WIDAM - WEB INTERACTION DISPLAY AND MONITORING

Author(s):

Hugo Gamboa, Vasco Ferreira

Abstract: In this paper we describe the design and implementation of a system called Web Interaction Display and Monitoring (WIDAM). We have developed a web-based client-server application that offers several services: (i) real-time monitoring of user interaction for use in synchronous playback (Synchronous Monitoring Service); (ii) real-time observation by other users (Synchronous Playback Service); (iii) storage of user interaction information in the server database (Recording Service); and (iv) retrieval and playback of a stored monitored interaction (Asynchronous Playback Service). WIDAM allows an interaction monitoring system to be used directly on a web page, without any installation, and requires low bandwidth compared with image-based remote display systems. We discuss several applications of the system, such as intelligent tutoring systems, usability analysis, system performance monitoring, and synchronous or asynchronous e-learning tools.
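At their core, the recording and playback services described above reduce to storing time-stamped interaction events and replaying them with the original delays. A minimal sketch of that idea follows; the event structure and handler interface are illustrative assumptions, not WIDAM's actual protocol.

```python
# A minimal sketch of interaction recording and timed playback; event
# fields and the handler interface are illustrative only.
import time

class InteractionRecorder:
    def __init__(self):
        self.events = []              # (elapsed_seconds, event) pairs
        self.start = time.time()

    def record(self, event):
        """Store an event together with its offset from session start."""
        self.events.append((time.time() - self.start, event))

    def playback(self, handler, speed=1.0):
        """Replay events with the original inter-event delays."""
        previous = 0.0
        for elapsed, event in self.events:
            time.sleep((elapsed - previous) / speed)
            handler(event)
            previous = elapsed

rec = InteractionRecorder()
rec.record({"type": "mousemove", "x": 10, "y": 20})
rec.record({"type": "click", "x": 10, "y": 20})
rec.playback(print, speed=2.0)   # replay at double speed
```

Transmitting such compact event records, rather than screen images, is what gives the low-bandwidth advantage the abstract claims over image-based remote display systems.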

Title:

AN AGENT-MEDIATED MARKETPLACE FOR TRANSPORTATION TRANSACTIONS

Author(s):

Alexis Lazanas, Pavlos Moraitis, Nikos Karacapilidis

Abstract: This paper reports on the development of an innovative agent-mediated electronic marketplace, which is able to efficiently handle transportation transactions of various types. Software agents of the proposed system represent and act for any user involved in a transportation scenario, while they cooperate and obtain the related information in real time. Our overall approach aims at the development of a flexible framework that achieves efficient communication among all parties involved, constructs the possible alternative solutions and performs the required decision-making. The system is able to handle the complexity inherent in such environments, which is mainly due to the frequent need to find a "modular" transportation solution, that is, one that fragments the requested itinerary into a set of sub-routes that may involve different transportation means (trains, trucks, ships, airplanes, etc.). The system's agents cooperate upon well-specified business models, and are thus able to manage all the necessary freighting and fleet scheduling processes in wide-area transportation networks.

Title:

ENGINEERING MULTIAGENT SYSTEMS BASED ON INTERACTION PROTOCOLS: A COMPOSITIONAL PETRI NET APPROACH

Author(s):

Sea Ling, Seng Wai Loke

Abstract: Multiagent systems are useful in distributed systems where autonomous and flexible behaviour with decentralized control is advantageous or necessary. To facilitate agent interactions in multiagent systems, a set of interaction protocols for agents has been proposed by the Foundation for Intelligent Physical Agents (FIPA). These protocols are specified diagrammatically for agent communication in AUML (Agent UML), an extension of UML. In this paper, we informally present a means of translating these protocols into equivalent Petri net specifications. Our Petri nets are compositional, and we contend that compositionality is useful since multiagent systems and their interactions are inherently modular, and it allows mission-critical parts of a system to be analysed separately.
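A place/transition net with the standard firing rule is enough to illustrate the kind of target specification the paper describes. The sketch below models a tiny request protocol in that style; the net itself is invented for illustration and is not one of the paper's FIPA translations.

```python
# A minimal sketch of a place/transition Petri net with the standard
# firing rule; the example net is an illustrative request protocol.
class PetriNet:
    def __init__(self, transitions, marking):
        # transitions: {name: (input_places, output_places)}
        self.transitions = transitions
        self.marking = dict(marking)     # {place: token count}

    def enabled(self, t):
        inputs, _ = self.transitions[t]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, t):
        """Consume one token per input place, produce one per output place."""
        if not self.enabled(t):
            raise ValueError(f"{t} is not enabled")
        inputs, outputs = self.transitions[t]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Initiator sends a request; the participant either agrees or refuses.
net = PetriNet(
    {"send_request": (["ready"], ["requested"]),
     "agree":        (["requested"], ["agreed"]),
     "refuse":       (["requested"], ["refused"])},
    {"ready": 1},
)
net.fire("send_request")
net.fire("agree")
print(net.marking)   # one token in "agreed"
```

Compositionality then amounts to gluing such nets at shared places, so that each protocol fragment can be analysed on its own before being combined.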

Title:

ENHANCING NEWS READING EXPERIENCE THROUGH PERSONALIZATION OF NEWS CONTENT AND SERVICES USING INTELLIGENT AGENTS

Author(s):

Logandran Balavijendran, Soon Nyean Cheong, Azhar Kassim Mustapha

Abstract: One of the most common uses of the Internet is reading news. But there is so much news, catering for so many people, that it often gets confusing and difficult to find what you want to read. This system uses an intelligent agent to infer what the user is interested in and personalizes the news content accordingly, by observing the user and determining short-term and long-term interests. To further enrich the experience, it provides features that allow the user to track specific news events and receive instant alerts; summarize news for a quick look before committing; find background information on a story; and search and filter results according to the user profile. It also provides a smart download tool that makes viewing heavy multimedia content practical without large bandwidth, by exploiting the irregular nature of Internet traffic and use. The agent is designed to work on the News on Demand Kiosk Network [1] and is implemented primarily in J2EE.

Title:

AN INTERNET ENABLED APPROACH FOR MRO MODELS AND ITS IMPLEMENTATION

Author(s):

Dennis F Kehoe, Zenon Michaelides, Peiyuan  Pan

Abstract: This paper presents an Internet-enabled approach for MRO applications, based on a discussion of different MRO models and their implementation architectures. The approach focuses on using an e-business philosophy and Internet technology to meet the requirements of MRO services. The proposed e-MRO models are framework techniques. Different system architectures for this new approach are described, and available technologies for system implementation are also presented.

Title:

A NEW USER-ORIENTED MODEL TO MANAGE MULTIPLE DIGITAL CREDENTIALS

Author(s):

José Oliveira, Augusto Silva, Carlos Costa

Abstract: E-Commerce and e-Services are becoming a major reality. Aspects like electronic identification, authentication and trust are core elements in these web market areas. The use of electronic credentials and the adoption of a single, worldwide-accepted digital certificate stored in a smart card would provide a higher level of security while allowing total mobility with secure transactions over the web. Until this adoption takes place, the widespread use of digital credentials will inevitably lead to each service client having to hold the different electronic credentials needed for all the services he uses. We present a new approach that provides a user-oriented model for managing multiple electronic credentials, based on the use of only one smart card per user as the basis for the secure management of web-based services, thus contributing to a more generalized use of the technology.

Title:

INTELLIGENT AGENTS SUPPORTED COLLABORATION IN SUPPLY CHAIN MANAGEMENT

Author(s):

Minhong WANG, Huaiqing WANG, Huisong ZHENG

Abstract: In today's global marketplace, individual firms no longer compete as independent entities but rather as integral parts of supply chain links. This paper addresses the application of intelligent agent technology in supply chain management to cater for the increasing demand for collaboration between supply chain partners. A multi-agent framework for collaborative planning, forecasting and replenishment in supply chain management is developed. With a concern for exception handling and flexible collaboration between partners, functions such as product activity monitoring, negotiation between partners, supply performance evaluation, and collaboration plan adjustment are proposed in the system.

Title:

FIDES - A FINANCIAL DECISION AID THAT CAN BE TRUSTED

Author(s):

Sanja Vranes, Snezana Sucurovic, Violeta Tomasevic, Mladen Stanojevic, Vladimir Simeunovic

Abstract: FIDES is aimed at evaluating investment projects in accordance with the well-known UNIDO standard and at making recommendations on a preferable investment, based on a multicriteria analysis of the available investment options. FIDES provides a framework for analyzing key financial indicators using the discounted cash-flow technique, and also allows non-monetary factors to enter the multicriteria assessment process, whilst retaining an explicit, relatively objective and consistent set of evaluation conventions and clear decision criteria. Moreover, since virtually every investment and financing decision involving the allocation of resources under uncertain conditions is associated with considerable risk, FIDES integrates a risk management module. The basic principle governing risk management is intuitive and well articulated, taking into account the investor's subjective appetite for and aversion to risk, and the decision's sensitivity to the uncertainty and/or imprecision of the input data. Thus, with FIDES, financial analysts and decision-makers are provided with effective modeling tools in the absence of complete or precise information and in the significant presence of human involvement. The decision aid is implemented using multiple programming paradigms (Internet programming, production rules, fuzzy programming, multicriteria analysis, etc.), with a three-tier architecture as a backbone. Being Web-based, the application is especially convenient for large, geographically dispersed corporations.

Title:

AGENT-BASED GENERIC SERVICES AND THEIR APPLICATION FOR THE MOBILE WORKFORCE

Author(s):

Makram Bouzid

Abstract: In this paper we propose an architecture of agent-based services for the easy development of multi-agent applications. It is based on the notion of service components, which can be installed ("plugged") into a communicative agent, and which can be composed to offer more sophisticated services. This architecture was validated through the design and development of a set of generic services for mobile workforce support, within the European project LEAP. These generic services were also used to build two multi-agent applications that assist the mobile workers of British Telecommunications and the German automobile club ADAC. Both have been tested under real-world conditions in the UK and Germany.

Title:

AN EXTENSIBLE TOOL FOR THE MANAGEMENT OF HETEROGENEOUS REPRESENTATIONS OF XML DATA

Author(s):

Riccardo Torlone, Marco Imperia

Abstract: In this paper we present a tool for the management and exchange of structured data in XML format, described according to a variety of formats and models. The tool is based on a novel notion of "metamodel" that embeds, on the one hand, the main primitives adopted by different schema languages for XML and, on the other hand, the basic constructs of traditional database conceptual models. The metamodel is used as a level of reference for translation between heterogeneous data representations. The tool enables users to deal, in a uniform way, with various schema definition languages for XML (DTD, XML Schema and others) and with the ER model, as a representative of traditional conceptual models. The model translation facility allows the user to switch from one representation to another and accounts for possible loss of information in this operation. Moreover, the tool is easily extensible, since new models and translations can be added to the basic set in a natural way. The tool can be used to support a number of e-Business activities, such as information exchange between different organizations, integration of data coming from heterogeneous information sources, XML data design, and re-engineering of existing XML repositories.

Title:

USER AUTHENTICATION FOR E-BUSINESS

Author(s):

James P H Coleman

Abstract: There are many factors that need to be addressed before e-business is seen as a truly usable service by the ordinary customer. The best-known factors are the speed of access to the Internet and service providers; the cost of access to the Internet infrastructure; and the poor quality of a large number of e-business/e-commerce web sites, in particular aspects such as the interface and design. A less well-known, but perhaps equally important, factor is user authentication. User authentication is the process whereby the Service Provider (SP) is able to identify the person using the web site, normally by a username/password combination. User authentication is important for SPs because if a product is ordered or a service is requested, the supplier needs to be reasonably confident that the order/request is valid and not a hoax. Unfortunately, a frequent web user may have accounts with many different SPs, e.g. their bank, telephone company, ISP, superannuation/pension fund, insurance company and government (often with different departments within the government). In these cases the SPs use a registration process in which the user has a username and password, and the username and password combinations usually differ between sites. This is a deterrent to the whole registration process, as people end up with multiple registrations. There are many e-Gateway systems that offer a single point of logon, for example within the UK e-Government Project, which aims to solve the problem at least within its own infrastructure; the very large private sector has no such mechanism. This paper investigates current e-Gateway systems (including those whose primary purpose is not necessarily user authentication) and proposes a model for a more universal e-Gateway.
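One common mechanism behind single-point-of-logon gateways is a signed, expiring ticket that member sites can verify without holding the user's password. A minimal sketch under that assumption; the field layout and key handling are illustrative, not the model the paper proposes.

```python
# A minimal sketch of an HMAC-signed logon ticket, as a gateway might
# issue and a service provider might verify; all details are illustrative.
import base64, hashlib, hmac, time

SECRET = b"shared-gateway-key"   # key shared by gateway and service provider

def issue_ticket(user_id, ttl=3600):
    """Create a ticket 'user|expiry|signature' (user_id must not contain '|')."""
    payload = f"{user_id}|{int(time.time()) + ttl}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(f"{payload}|{sig}".encode()).decode()

def verify_ticket(ticket):
    """Return the user id if the signature is valid and not expired."""
    user_id, expires, sig = base64.urlsafe_b64decode(ticket).decode().split("|")
    expected = hmac.new(SECRET, f"{user_id}|{expires}".encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    return user_id if time.time() < int(expires) else None

ticket = issue_ticket("alice")
print(verify_ticket(ticket))   # 'alice'
```

A production gateway would use asymmetric signatures or per-provider keys rather than one shared secret, but the flow (issue once, verify everywhere) is the essence of a single point of logon.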

Title:

ON ESTIMATING THE AMOUNT OF LEARNING MATERIALS - A CASE STUDY

Author(s):

Matti Järvenpää, Pasi Tyrväinen, Ari Sievänen

Abstract: E-learning has been studied as a means of applying digital computers to educational purposes. Although the benefits of information and communication technology are obvious in several cases, there is still a lack of convincing measures of the value of using computers in education. This reflects the general difficulty of evaluating investments in information systems, known as the "IT investment paradox", which has not been solved so far. In this paper we approach the problem by estimating the amount of teaching and learning material in a target organisation, a university faculty. As expected, the volume of learning material dominates the communication of the faculty, forming about 95% of all communication volume and 78% to 82% of communication when measured with other metrics. The use of alternative communication forms in the target organisation was also analysed quantitatively. The study further indicates that the communication forms dominating the volume of communication are likely to be highly organisation-specific.

Title:

E-COMMERCE ENGINEERING: A SHORT VS LONG SOFTWARE PROCESS FOR THE DEVELOPMENT OF E-COMMERCE APPLICATIONS

Author(s):

Andreas Andreou, Stephanos Mavromoustakos, Chrysostomos  Chrysostomou, George  Samaras , Andreas  Pitsillides, Christos  Schizas, Costas Leonidou

Abstract: The immediacy of developing e-commerce applications, the quality of the services offered by these systems and the need for continuous evolution are primary issues that must be fully analysed and understood prior to and during the development process. In this context, the present work suggests a new development framework which aims at estimating the level of complexity a certain e-commerce system encompasses and at driving the selection of a long or short software process in terms of time and effort. The proposed framework utilizes a special form of Business Process Re-engineering (BPR) to define and assess critical business and organizational factors within small-to-medium enterprises (SMEs) wishing to go into e-commerce. This set of factors is enriched with other critical issues belonging to the quality requirements of the system and to the type of services it aspires to offer. The set of critical factors identified is used to estimate the average complexity level of the system, using numerical values to describe the contribution of each factor to the overall complexity. The complexity level estimated dictates the adoption of either a short or a long version of the well-known WebE process for analysing, designing and implementing the e-commerce system required by an SME.

Title:

ARCHITECTURE OF AUTOMATIC RECOMMENDATION SYSTEM IN E-COMMERCE

Author(s):

Rajiv Khosla, Qiubang Li

Abstract: Automatic recommendation systems will become an indispensable tool for customers shopping online. This paper proposes an architecture for an automatic recommendation system in e-commerce. The response time of the system, which is its bottleneck, is addressed by high-performance computing. The architecture has already been applied to an online banking system.
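The paper's architecture is only outlined here, but user-based collaborative filtering is a representative technique such systems build on, so a minimal sketch may be useful. The ratings data and banking-flavoured item names are invented for illustration; this is not the authors' method.

```python
# A minimal sketch of user-based collaborative filtering with cosine
# similarity; the ratings data are illustrative only.
from math import sqrt

ratings = {
    "alice": {"fund_a": 5, "fund_b": 3, "card_x": 4},
    "bob":   {"fund_a": 4, "fund_b": 3, "card_y": 5},
    "carol": {"fund_b": 2, "card_x": 5, "card_y": 4},
}

def similarity(u, v):
    """Cosine similarity between two users' rating vectors."""
    common = set(ratings[u]) & set(ratings[v])
    if not common:
        return 0.0
    num = sum(ratings[u][i] * ratings[v][i] for i in common)
    den = sqrt(sum(r * r for r in ratings[u].values())) * \
          sqrt(sum(r * r for r in ratings[v].values()))
    return num / den

def recommend(user):
    """Score items the user has not rated, weighted by user similarity."""
    scores = {}
    for other in ratings:
        if other == user:
            continue
        w = similarity(user, other)
        for item, r in ratings[other].items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + w * r
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("alice"))   # items alice has not rated, ranked
```

Scoring loops like this are embarrassingly parallel across users, which is one plausible reading of the abstract's claim that high-performance computing addresses the response-time bottleneck.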

Title:

ELECTRONIC JOB MARKETPLACES: A NEWLY ESTABLISHED MANIFESTATION OF E-BUSINESS

Author(s):

Georgios Dafoulas, Mike Turega, Athanasios Nikolaou

Abstract: Finding suitable candidates for critical job posts is currently an issue of concern for most organizations. Considerations of cultural fit, experience, ability to adapt to the company's marketplace and ability to grow with the organisation all weigh heavily on the minds of most human resource professionals. Since the mid-90s a significant number of recruiting firms have exploited the Internet, mainly because its global nature provides access to an unlimited pool of skills. Optimistic estimates see the Internet as a medium for conducting the entire recruitment and selection process in an online environment. This paper suggests developing an integrated Electronic Job Marketplace offering a new service in the Internet job market: online interviewing for screening candidate employees. In order to meet hiring objectives and control the increasing cost of recruiting, organisations could implement an online recruiting and selection process. The critical requirements of the new model are: eliminating paperwork, improving time-to-hire, reducing turnover, creating a resume- and position-centric environment, and using the Internet as a recruitment and selection tool.

Title:

ONE-TO-ONE PERSONALIZATION OF WEB APPLICATIONS USING A GRAPH BASED MODEL

Author(s):

Georg Sonneck, Thomas Mück

Abstract: Due to the maturity of current web technology, a large fraction of non-technically oriented IT end users are confronted with increasingly complex web applications. Such applications should help these end users fulfill their tasks in the most effective and efficient way. From this perspective there is little doubt that personalization issues play an important role in the era of web applications. Several approaches already exist to support so-called Adaptive Hypermedia Systems, i.e., systems which are able to adapt their output behaviour to different user categories. In this paper, we focus on the personalization and customization issues of web applications raised by task-driven user interaction, taking as an example the interaction patterns caused by different users of a financial advisor system. To achieve this goal we propose, in a first step, a graph-based model representing the logical structure of web applications, a fully extensible XML Schema description modelling the structure of the nodes in the graph, and a document type definition to store user profiles. In a second step, this basic model is augmented by process graphs corresponding to specific business tasks the web application can be used for, leading to a first form of personalization by assigning a user to a process task. We then show, in a final step, how matches between the skills stored in the user profile and the node descriptions can lead to one-to-one personalization of the process graph.
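One simple reading of matching stored skills against node descriptions is to skip process-graph nodes that teach nothing the user lacks. The sketch below illustrates that reading; the node names, skill sets and matching rule are assumptions for illustration, not the paper's actual model.

```python
# A minimal sketch of skill-based pruning of a linear process graph in a
# financial-advisor setting; all names and the rule are illustrative.
process_graph = {
    "intro":       {"teaches": {"basics"},        "next": ["risk_basics"]},
    "risk_basics": {"teaches": {"risk"},          "next": ["portfolio"]},
    "portfolio":   {"teaches": {"funds", "risk"}, "next": []},
}

def personalize(graph, start, user_skills):
    """Walk the process graph, skipping nodes that teach nothing new."""
    path, node = [], start
    while node:
        if graph[node]["teaches"] - user_skills:   # something new to offer
            path.append(node)
        nxt = graph[node]["next"]
        node = nxt[0] if nxt else None
    return path

print(personalize(process_graph, "intro", {"risk"}))
# ['intro', 'portfolio'] -- 'risk_basics' is skipped for this user
```

In the paper's fuller model the per-node descriptions come from an XML Schema and the user profile from a DTD-defined document; the dictionary above stands in for both.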

Title:

AN INVESTIGATION OF THE NEGOTIATION DOMAIN FOR ELECTRONIC COMMERCE INFORMATION SYSTEMS

Author(s):

Zlatko Zlatev, Pascal van Eck

Abstract: To fully support business cycles, information systems for electronic commerce need to be able to conduct negotiation automatically. In recent years, a number of general frameworks for automated negotiation have been proposed. Applying such a framework in a specific negotiation situation entails selecting the proper framework and adapting it to the situation, a selection and adaptation process driven by the specific characteristics of the situation. This paper presents a systematic investigation of these characteristics and surveys a number of frameworks for automated negotiation.

Title:

COLLABORATOR - A COLLABORATIVE SYSTEM FOR HETEROGENEOUS NETWORKS AND DEVICES

Author(s):

Agostino Poggi, Matteo Somacher, Socrates Costicoglou, Federico Bergenti

Abstract: This paper presents a software framework, called Collaborator, that provides a shared workspace supporting the activities of virtual teams. The system exploits the seamless integration of standard Web technologies with agent technologies, enhancing classic Web communication mechanisms to support synchronous sharing of applications and their use through emerging technologies such as third-generation mobile networks and terminals and a new generation of home appliances. The system presented in the paper is the main result of an on-going European research project, Collaborator (IST-2000-30045), which aims at specifying and developing a distributed software environment to support efficient synchronous collaborative work between virtual teams, and will test such an environment in the construction and telecommunication sectors.

Title:

SOFTWARE AGENTS TO SUPPORT ADMINISTRATION IN ASYNCHRONOUS TEAM ENVIRONMENTS

Author(s):

Roger Tagg

Abstract: Current economic pressures are causing severe problems for many enterprises in maintaining service standards with shrinking headcounts. Front-line workers have had to shoulder runaway workloads. Software agent technologies have been widely advocated as a solution, but there are few reported success stories. In the author's previous work, a design was proposed for a system to support front-line staff in a team-teaching environment. This system is based on a domain-specific screen desktop with drop boxes supported by a number of types of agent. This paper analyses the work these agents have to do and the technology needed to support them.

Title:

IT INFRASTRUCTURE FOR SUPPLY CHAIN MANAGEMENT IN COMPANY NETWORKS WITH SMALL AND MEDIUM-SIZED ENTERPRISES

Author(s):

Marcel Stoer, Joerg Nienhaus, Nils Birkeland, Guido Menkhaus

Abstract: The current trend of extending supply chain management beyond the company's walls focuses on the integration of suppliers and consumers into a single information network. The objective is to optimize costs and opportunities for everyone involved. However, small-sized enterprises can rarely carry the high acquisition and introduction costs of hardware and software. This reduces the attractiveness of the small-sized enterprise as a partner in a logistics and production network. This article presents a lean IT infrastructure that targets small-sized enterprises. It allows flexible and configurable integration with the Internet and ERP systems, and the secure communication of supply chain management data.

Title:

AGENTS-MIDDLEWARE APPROACH FOR CONTEXT AWARENESS IN PERVASIVE COMPUTING

Author(s):

Karim Djouani, Abdelghani Chibani, Yacine Amirat

Abstract: With the emergence of wireless distributed systems, embedded computing is becoming more pervasive. Users in continuous transition between handheld devices and fixed computers expect to maintain the same QoS. Thus, applications need to become increasingly autonomous by reducing interactions with users. The present paper addresses user mobility, context-aware embedded applications, distributed systems, and, in the general case, access to remote services through embedded middleware. The context in which such applications exist exhibits constraints such as low bandwidth, frequent disconnections, and resource-poor devices (low CPU speed, little memory, low battery power, etc.). The first objective of our work is to show that the agent paradigm and its technologies have great potential to blossom in this new area, allowing the building of new and more effective pervasive applications. Our vision, going beyond earlier research on middleware and agents for pervasive computing, is to include context-awareness capability in the previously introduced agents-middleware approach. We have therefore proposed an agents-middleware architecture that is FIPA standard compliant. This approach is a logical continuation of a line of research results, from embedded middleware approaches through lightweight agent platform approaches to a context-aware agents-middleware approach. We present the usefulness of the notion of context through two derived concepts: pervasive context and user profile. We have introduced two specialized agents within the agents-middleware that infer meta-data describing context information extracted from sources such as sensors, the user, system resources and the wireless network; on top of this agents-middleware, context-aware pervasive applications can be built. We also present our ongoing work and the applications targeted by our approach.

Title:

TOXIC FARM: A COOPERATIVE MANAGEMENT PLATFORM FOR VIRTUAL TEAMS AND ENTERPRISES

Author(s):

Hala Skaf-Molli, Pascal Molli, Pradeep Ray, Fethi Rabhi, Gerald Oster

Abstract: The proliferation of the Internet has revolutionized the way people work together for business. People located at remote places can collaborate across organizational and national boundaries. Although the Internet provides the basic connectivity, researchers all over the world are grappling with the problems of defining, designing and implementing web services that would help people collaborate effectively in virtual teams and enterprises. These problems are exacerbated by a number of issues, such as coordination, communication, data sharing, mobility and security. Hence there is a strong need for multiple services (addressing the above issues) delivered through an open cooperative management platform to support the design and implementation of virtual teams and enterprises in this dynamic business environment. This paper presents a cooperative management platform called Toxic Farm for this purpose and discusses its use in business applications.

Title:

LEARNING USER PROFILES FOR INTELLIGENT SEARCH

Author(s):

Pasquale Lops, Marco Degemmis

Abstract: The recent evolution of e-commerce has emphasized the need for services that are ever more responsive to the unique and individual requests of users. Personalization has become an important strategy in business-to-consumer commerce, where a user explicitly wants the e-commerce site to take his own information, such as preferences, into account in order to improve access to relevant products. By analyzing the information provided by a customer, together with his browsing and purchasing history, a personalization system can learn the customer's personal preferences and store them in a personal profile used to provide intelligent search support. In this work, we propose a two-step profile generation process: in the first step, the system learns coarse-grained profiles in which the preferences are the product categories the user is interested in. In the second step, the profiles are refined by a probabilistic model of each preferred product category, induced from the descriptions of the products the user likes. Experimental results demonstrate the effectiveness of the proposed strategy.
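
The second, refinement step could look roughly like the following minimal sketch, assuming a bag-of-words category model with add-one smoothing; the class and method names are illustrative, not the authors' actual learner:

    import java.util.*;

    // Sketch of a probabilistic model of one preferred product category,
    // induced from the descriptions of products the user liked. The
    // smoothing choice and representation are assumptions.
    class CategoryProfile {
        private final Map<String, Integer> wordCounts = new HashMap<>();
        private int total = 0;

        void learn(String likedDescription) {
            for (String w : likedDescription.toLowerCase().split("\\W+")) {
                if (w.isEmpty()) continue;
                wordCounts.merge(w, 1, Integer::sum);
                total++;
            }
        }

        // Log-likelihood of a new product description under this category,
        // with add-one smoothing over an assumed vocabulary size.
        double score(String description, int vocabularySize) {
            double logLik = 0;
            for (String w : description.toLowerCase().split("\\W+")) {
                if (w.isEmpty()) continue;
                int c = wordCounts.getOrDefault(w, 0);
                logLik += Math.log((c + 1.0) / (total + vocabularySize));
            }
            return logLik;
        }
    }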

Title:

AGENT COMMUNICATION CHANNELS: TRANSPORT MECHANISMS

Author(s):

Qusay Mahmoud

Abstract: Most of the work that has been done on agent communication has concentrated on ontologies and Agent Communication Languages (ACLs), which are used to describe the objects that agents manipulate. Little attention, if any, has been given to agent communication channels: the transport layer through which data is sent between agents. Here we describe the different transport techniques that can be used to send data between agents, and then compare and contrast them. This is important because the way agents communicate can have a significant effect on the performance of agent-based systems.
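
For concreteness, one of the simplest transport options such a comparison would cover is a plain TCP socket channel carrying length-prefixed message strings between agents. The following sketch only illustrates that option; the framing and names are assumptions, not taken from the paper:

    import java.io.*;
    import java.net.*;

    // Toy point-to-point transport: one agent sends an ACL message string
    // over TCP, another receives it. Host, port and framing are illustrative.
    class SocketTransport {
        static void send(String host, int port, String aclMessage) throws IOException {
            try (Socket s = new Socket(host, port);
                 DataOutputStream out = new DataOutputStream(s.getOutputStream())) {
                out.writeUTF(aclMessage);   // simple framing: length-prefixed UTF string
            }
        }

        static String receiveOne(int port) throws IOException {
            try (ServerSocket server = new ServerSocket(port);
                 Socket s = server.accept();
                 DataInputStream in = new DataInputStream(s.getInputStream())) {
                return in.readUTF();
            }
        }
    }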

Title:

IMPLEMENTING AN INTERNET-BASED VOTING SYSTEM - A PROJECT EXPERIENCE

Author(s):

Alexander Prosser, Robert Krimmer, Robert Kofler

Abstract: Research groups worldwide have developed remote electronic voting systems using several different approaches, so far with no legal basis. In 2001 the Austrian Parliament passed a law allowing electronic voting with digital signatures for public elections. Besides these legal requirements, an algorithm has to solve the basic technical problem of how to identify the voter uniquely while still guaranteeing the anonymity of the vote and, further, preventing fraud by the election administration. In this paper the authors give an experience report on the implementation of the first phase of an algorithm that fulfills these requirements by strictly separating the registration phase from the vote submission phase.
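
The separation principle alone can be sketched as follows. This toy illustration deliberately omits the cryptography (a real system would use blind signatures so that the registration authority cannot link a token to a voter), and all names are illustrative:

    import java.util.*;

    // Phase 1: the registration authority checks eligibility and issues an
    // opaque one-time token that carries no voter identity.
    class RegistrationAuthority {
        private final Set<String> eligible;
        private final Set<String> alreadyRegistered = new HashSet<>();
        RegistrationAuthority(Set<String> eligible) { this.eligible = eligible; }

        Optional<UUID> register(String voterId) {
            if (!eligible.contains(voterId) || !alreadyRegistered.add(voterId))
                return Optional.empty();           // unknown voter or double registration
            return Optional.of(UUID.randomUUID()); // token reveals nothing about the voter
        }
    }

    // Phase 2: the ballot box accepts each token exactly once, never
    // seeing the voter's identity.
    class BallotBox {
        private final Set<UUID> usedTokens = new HashSet<>();
        private final Map<String, Integer> tally = new HashMap<>();

        boolean castVote(UUID token, String choice) {
            if (!usedTokens.add(token)) return false; // each token votes once
            tally.merge(choice, 1, Integer::sum);
            return true;
        }
    }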

Title:

TOWARDS THE ENTERPRISES INFORMATION INFRASTRUCTURE BASED ON COMPONENTS AND AGENTS

Author(s):

Manuel Chi, Ernesto German, Matias Alvarado, Leonid Sheremetov, Miguel Contreras

Abstract: An information infrastructure, as the means to bring together software applications within the enterprise, is the key component enabling cooperation and the exchange of information and knowledge in an open distributed environment. In this article, component and agent paradigms for the integration of virtual enterprises are analyzed, and the advantages and drawbacks of the proposed solution are discussed. As an example of an infrastructure integrating both technologies, a Component Agent Platform (CAP) that uses DCOM as a particular component model for its implementation is described. Finally, we discuss the interoperability issues of the proposed solution and outline directions for future work.

Title:

GUARDIAN KNOWLEDGE FARM AGENTS AND SECURITY ARCHITECTURES: WEB SERVICES, XML, AND WIRELESS MAPPINGS

Author(s):

Britton Hennessey, Girish Hullur, Mandy McPherson, George Kelley

Abstract: This paper merges the BDIP (beliefs, desires, intentions, and plans) rational agent model into the Jungian rational behavioral model. It also defines the key framework design dimensions and classified intelligences of knowledge farm network agents having the necessary know-how to function as trust and security guardians. The paper presents four practical example application mappings of the converged BDIP-Jungian framework into (1) seven design principles of computer systems security, (2) the web services security architecture, (3) the XML family systems security architecture, and (4) the wireless security architecture.

Title:

ICS - AN AGENT-MEDIATED E-COMMERCE SYSTEM: ONTOLOGIES USAGE

Author(s):

Sofiane Labidi

Abstract: Electronic commerce has shown exponential growth in both the number of users and the volume of commercial transactions. Recent advances in software agent technology allow agent-based electronic commerce, where agents are entities acting autonomously (or semi-autonomously) on behalf of companies or people negotiating in virtual environments. In this work, we propose ICS (Intelligent Commerce System), a B2B e-commerce system based on intelligent and mobile software agent technology following the OMG MASIF standard. Three important features of ICS are emphasized here: the e-commerce lifecycle approach, the user modeling, and a proposed ontology for each phase of the lifecycle.

Title:

IMPLEMENTATION OF MOBILE INFORMATION DEVICE PROFILE ON VIRTUAL LAB

Author(s):

Aravind Kumar Alagia Nambi

Abstract: The rate at which information is produced in today's world is mind-boggling. Information changes by the minute, and today's corporate mantra is not "knowledge is power" but "timely knowledge is power". Millions of dollars are won or lost because of information or the lack of it. Business executives and corporate managers push their technology managers to provide information at the right time in the right form. They want information on the go and want to be connected to the Internet or their corporate network all the time. The rapid advancement of technology in miniaturization and communications has introduced many roaming devices through which people can connect to the network, such as laptops, PDAs, mobile phones and numerous embedded devices. Programming for these devices was cumbersome and limited, since each device supported its own I/O ports, screen resolution and specific configurations. The introduction of Java 2 Micro Edition (J2ME) has solved this problem to some extent. J2ME is divided into configurations and profiles, which provide specific capabilities for a group of related devices, and mobile phones can be programmed using it. If the mobility offered by cellular phones is combined with electrical engineering, many new uses can be found for existing electrical machines. It will also enable remote monitoring of electrical machines and of the various parameters involved in electrical engineering.
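
For reference, a minimal MIDP MIDlet skeleton of the kind the abstract alludes to looks like this; the lifecycle methods are dictated by the profile, while the class name and monitoring text are purely illustrative:

    import javax.microedition.midlet.MIDlet;
    import javax.microedition.lcdui.Display;
    import javax.microedition.lcdui.Form;

    // Smallest useful MIDP application: the application manager drives the
    // startApp/pauseApp/destroyApp lifecycle; a remote-monitoring client
    // for electrical machines would build on this skeleton.
    public class MonitorMIDlet extends MIDlet {
        protected void startApp() {
            Form form = new Form("Machine Monitor");
            form.append("Connecting to lab server...");
            Display.getDisplay(this).setCurrent(form);
        }
        protected void pauseApp() { }                        // release scarce resources here
        protected void destroyApp(boolean unconditional) { } // clean up before exit
    }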

Title:

GLI-BBS: A GROUPWARE BASED ON GEOGRAPHICAL LOCATION INFORMATION FOR FIELD WORKERS

Author(s):

Tatsunori Sasaki, Naoki Odashima, Akihiro Abe

Abstract: Geographical Location Information (GLI) is information showing in which geographical position a person or an object is located. Using digital maps and digital photographs, we have developed a GLI-based Bulletin Board System (GLI-BBS), and we are promoting its application to various public works in local communities. Fieldworkers who participate in public works can use the GLI-BBS effectively to share information and to reach mutual agreement. As an example of a concrete GLI-BBS application, a support system for road maintenance and management operations is taken up to examine important points in operation.

Title:

SECURING INTERNET SERVERS FROM SYN FLOODING

Author(s):

Riaz Mahmood

Abstract: Denial-of-service (DoS) attacks exploit vulnerabilities in current Internet protocols and target end server machines with a flood of bogus requests, thus blocking services to legitimate users. In this paper a counter-denial-of-service method called the Dynamic Ingress Filtering Algorithm (DIFA) is introduced. This algorithm aims to remedy the network periphery's inability to counter spoof-based denial-of-service attacks originating from valid network prefixes. By virtue of its design, the dynamic ingress filtering mechanism protects against spoof-based attacks originating from both valid and invalid network prefixes. This is because the rate at which the source IP addresses of incoming traffic change is compared with a predefined threshold: if the addresses from a particular source change rapidly, the packets arriving from that host are not forwarded. Advantages of DIFA include design simplicity, scalability and reasonable implementation costs.
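
The rate test described above might be sketched as follows; the sliding window, the threshold parameter and all names are assumptions, not values or identifiers from the paper:

    import java.util.*;

    // Hedged sketch of rate-based ingress filtering: if the source IP
    // addresses seen on an ingress link change faster than a threshold
    // within a time window, that link's packets are no longer forwarded.
    class IngressFilter {
        private final long windowMillis;
        private final int maxChangesPerWindow;
        private final Map<String, String> lastAddress = new HashMap<>();
        private final Map<String, Deque<Long>> changeTimes = new HashMap<>();

        IngressFilter(long windowMillis, int maxChangesPerWindow) {
            this.windowMillis = windowMillis;
            this.maxChangesPerWindow = maxChangesPerWindow;
        }

        boolean shouldForward(String link, String srcIp, long now) {
            Deque<Long> times = changeTimes.computeIfAbsent(link, k -> new ArrayDeque<>());
            if (!srcIp.equals(lastAddress.put(link, srcIp)))
                times.addLast(now);                      // record an address change
            while (!times.isEmpty() && now - times.peekFirst() > windowMillis)
                times.removeFirst();                     // forget changes outside the window
            return times.size() <= maxChangesPerWindow;  // rapid changes suggest spoofing
        }
    }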

Title:

WEB SERVICES SECURITY MODEL BASED ON TRUST

Author(s):

Luminita Vasiu

Abstract: The concept of Web services is the latest step in the evolution towards ever more modular and distributed computing. Web services represent a fairly simple extension to existing component models, such as Microsoft's Component Object Model (COM) or Sun's Enterprise JavaBeans (EJB) specification. It is obvious that Web services have what it takes to change something important in the distributed programming field, but until they do, developers will have difficulties in figuring out how to solve and eliminate the problems that appear when building heterogeneous applications. In an open environment, security is always an issue, and the main challenge in overcoming it is to understand and assess the risk involved in securing a Web-based service. How do you guarantee the security of a bank transaction service? Efforts are being made to develop security mechanisms for Web services; standards like SAML, XKMS and SOAP security will probably be used in the future to guarantee protection for both consumers and services. In this paper we analyse some security issues faced by Web services and present a security model based on trust which supports more specific models such as identity-based security and access control lists.

Title:

A MULTI-AGENT ARCHITECTURE FOR DYNAMIC COLLABORATIVE FILTERING

Author(s):

Gulden Uchyigit, Keith Clark

Abstract: Collaborative filtering systems suggest items to a user because they are highly rated by other users with similar tastes. Although these systems are achieving great success in web-based applications, the tremendous growth in the number of people using these applications requires performing many recommendations per second for millions of users. Technologies are needed that can rapidly produce high-quality recommendations for a large community of users. In this paper we present an agent-based approach to collaborative filtering in which agents work on behalf of their users to form shared "interest groups", a process of pre-clustering users based on their interest profiles. These groups are dynamically updated to reflect users' evolving interests over time. We further present a multi-agent-based simulation of the architecture as a means of evaluating the system.
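
The pre-clustering step could be sketched roughly as below, assuming flat keyword-weight profiles, cosine similarity and a fixed threshold; none of these choices are claimed to be the authors' exact design:

    import java.util.*;

    // Sketch: an agent joins the first existing interest group whose
    // centroid profile is similar enough to its user's profile; otherwise
    // it founds a new group.
    class InterestGroups {
        private final double threshold;
        private final List<Map<String, Double>> centroids = new ArrayList<>();
        InterestGroups(double threshold) { this.threshold = threshold; }

        static double cosine(Map<String, Double> a, Map<String, Double> b) {
            double dot = 0, na = 0, nb = 0;
            for (Map.Entry<String, Double> e : a.entrySet()) {
                dot += e.getValue() * b.getOrDefault(e.getKey(), 0.0);
                na += e.getValue() * e.getValue();
            }
            for (double v : b.values()) nb += v * v;
            return (na == 0 || nb == 0) ? 0 : dot / (Math.sqrt(na) * Math.sqrt(nb));
        }

        int assign(Map<String, Double> profile) {
            for (int i = 0; i < centroids.size(); i++)
                if (cosine(profile, centroids.get(i)) >= threshold) return i;
            centroids.add(new HashMap<>(profile)); // no close group: start one
            return centroids.size() - 1;
        }
    }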

Title:

POLICIES COMPOSITION THROUGH GRAPHICAL COMPONENTS

Author(s):

Rui Lopes, Vitor Roque, Jose Luis Oliveira

Abstract: Policy-based management has gained increasing importance over the last two years. New demands on internetworking, service specification, QoS achievement and network management functionality in general have raised this paradigm to a very important level. The main idea is to provide services that allow management and operational rules to be specified in the same way people do business. Although the main focus of this technology has been network management solutions, its generality allows these principles to be extended to any business process inside an organization. In this paper we discuss the main proposals in the field, namely the IETF/DMTF model, and we present a proposal that allows the specification of policy rules through a user-friendly, component-oriented graphical interface.
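
In the IETF/DMTF spirit, a policy rule reduces to a condition/action pair that a graphical composer could assemble from components. A minimal sketch, with illustrative generic types not taken from the paper:

    import java.util.function.Consumer;
    import java.util.function.Predicate;

    // A rule fires its action whenever its condition matches an event.
    // Graphical components would supply the condition and action parts.
    class PolicyRule<T> {
        private final Predicate<T> condition;
        private final Consumer<T> action;
        PolicyRule(Predicate<T> condition, Consumer<T> action) {
            this.condition = condition;
            this.action = action;
        }
        void evaluate(T event) {
            if (condition.test(event)) action.accept(event); // fire on match
        }
    }

A rule such as "if traffic originates from subnet X outside business hours, assign it a low QoS class" would then be one such condition/action pair.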

Title:

TOWARDS AGENT BASED BUSINESS INFORMATION SYSTEMS AND PROCESS MANAGEMENT

Author(s):

Johann Sievering, Jean-Henry Morin

Abstract: Today's business information systems and business intelligence applications have become key instruments of corporate management. They have evolved over time into a mature discipline within IT departments. However, they appear to be slow at integrating emerging technologies offering major improvements to trading partners in the global networked ecosystem. The Internet is slowly evolving towards peer-to-peer architectures and grid computing, agent-oriented programming, digital rights and policy management, trusted computing, ontologies and semantics. These evolutions are setting the ground and requirements for the future of corporate IT. This paper reports on current investigations and developments on this issue, making the case for the integration of emerging technologies in business information systems. In particular, mobile agents and peer-to-peer computing offer major advantages in terms of technical architectures as well as a programming paradigm shift. We are currently working on a framework addressing these issues, moving towards Active Business Objects.

Title:

ANALYSIS OF BUSINESS TO BUSINESS ELECTRONIC MARKETS IN CHINA: THEORETICAL AND PRACTICAL PERSPECTIVES

Author(s):

Jing Zhao

Abstract: In China, electronic markets (e-markets) are in the early stages of development. They have unique characteristics in e-commerce activities and market mechanisms, which are largely a function of the current industry structure, financial infrastructure and organizational structure. This paper takes an interactive e-market space view and proposes an interactive e-commerce model for studying e-commerce activities and strategies in China's e-markets. Building on this theoretical insight, the model draws attention to the e-commerce process, in which buyers and sellers, the virtual market manager and its business partners are linked and in which web-based communication and collaboration take place, and to the innovative market mechanisms adopted. The e-commerce process can be modelled by separating the main business activities into four phases designed to exploit business opportunities. The model is applied to analyse one successful B2B exchange in China. It offers an effective approach to studying the dynamic structure of transactions and a high-performance e-commerce strategy. Our research identifies four levers of e-market capability. These capabilities imply an e-market's potential for achieving and sustaining a new level of e-commerce strategy performance, and a more competitive position in the rapidly changing B2B electronic market of China.

Title:

MEMBERSHIP PORTAL AND SERVICE PROVISIONING SYSTEM FOR AN INFRASTRUCTURE OF HUBS: MANAGED E-HUB

Author(s):

Liang-Jie Zhang, Henry Chang, Zhong Tian, Shun Xiang Yang, Ying Nan Zuo, Jing Min Xu, Tian Chao

Abstract: The goal of the Managed e-Hub research prototype is to build a common infrastructure of hubs on which businesses can develop B2B exchanges that meet their business needs. In this paper, an open and extensible framework for Managed e-Hub is presented, and the fundamental hub services are discussed in detail. The service provisioning system of Managed e-Hub not only provides a way of integrating other services into the hub by means of service on-boarding and subscription, but also provisions these services with their required provisioning information.

Title:

APPLICATION SCENARIOS FOR DISTRIBUTED MANAGEMENT USING SNMP EXPRESSIONS

Author(s):

Rui Lopes

Abstract: Management distribution is, one might say, an old topic in terms of the number of proposed solutions and publications. Recently, the DISMAN working group suggested a set of MIB modules to address this matter in the context of SNMP. One of the DISMAN modules, the Expression MIB, provides the capability of using expressions to perform decentralized processing of management information. Although it has existed for some time now, its capabilities are not very well known. In fact, other DISMAN MIBs, such as the Schedule MIB and the Script MIB, have already received attention in several papers and are the target of very solid work, whereas there are hardly any papers describing the Expression MIB and its functionality. This paper contributes to filling this gap by describing our implementation effort around it as well as some real-world applications for it.

Title:

AGENTAPI: AN API FOR THE DEVELOPMENT OF MANAGED AGENTS

Author(s):

Rui Lopes

Abstract: Managed agents, namely SNMP agents, cost too much to develop, test and maintain. Although simplicity has been a goal since its origins, the SNMP model has several intrinsic aspects that make the development of management applications a complex task. There are tools available that intend to simplify this process by generating code automatically from the management information definition; unfortunately, these tools are usually complicated to use and require a strong background in programming and network management. This paper describes an API for managed agent development which also provides multiprotocol capabilities: without changing the code, the resulting agent can be managed by SNMP, web browsers, WAP browsers, CORBA or any other access method, either simultaneously or individually.
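
The abstract does not give the API itself; purely as a hypothetical illustration of the idea, an agent developer might register managed objects once and let per-protocol adapters expose them. None of these names come from the actual AgentAPI:

    import java.util.Collection;

    // One implementation per managed variable, written once by the developer.
    interface ManagedObject {
        String name();
        String get();
        void set(String value);
    }

    // One adapter per access method (SNMP, HTTP, WAP, CORBA, ...); each
    // exposes the same collection of managed objects, so the agent code
    // stays protocol-independent.
    interface ProtocolAdapter {
        void expose(Collection<ManagedObject> objects);
    }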

Title:

AUTOMATIC E-COMMERCE USING A MOBILE AGENTS MODEL

Author(s):

Francesco Maria Raimondi, Salvatore Pennacchio

Abstract: Business-to-business electronic commerce using mobile agents is one of the most important future promises, and one good result, of globally mobile code. As we will show, the classic commerce model and the electronic commerce model both have advantages and disadvantages. Electronic commerce through mobile agents aims to eliminate the defects and combine the advantages of the previous models. In particular it takes its cue from sales negotiation, in which decisions must be taken.

Title:

A TIME ZONE BASED DYNAMIC CACHE REPLACEMENT POLICY

Author(s):

Srividya Gopalan, Kanchan Sripathy, Sridhar Varadarajan

Abstract: This paper proposes a novel time-zone-based cache replacement policy, LUV, intended for web traffic in the context of a hybrid cache management strategy. The LUV replacement policy is based on ranking web objects on a set of metrics intercepted by a proxy server. Further, in order to maximize the hit rate, the cache replacement policy makes use of the immediate past access patterns of individual web objects with respect to various time zones.
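
A rank-based eviction in this spirit might look like the following sketch; the scoring formula, the per-zone activity weighting and all names are assumptions, not the paper's LUV definition:

    import java.util.*;

    // Sketch: each cached object is scored from its recent access count,
    // weighted by how active its time zone of origin currently is; the
    // lowest-ranked object is the eviction victim.
    class ZoneAwareCache {
        static class Entry {
            int recentHits;  // accesses in the immediate past window
            int timeZone;    // UTC-offset bucket the requests come from
        }
        private final Map<String, Entry> entries = new HashMap<>();
        private final double[] zoneActivity = new double[24]; // observed load per zone

        void access(String key, int zone) {
            Entry e = entries.computeIfAbsent(key, k -> new Entry());
            e.recentHits++;
            e.timeZone = zone;
            zoneActivity[zone] += 1.0;
        }

        String chooseVictim() {
            String victim = null;
            double worst = Double.MAX_VALUE;
            for (Map.Entry<String, Entry> e : entries.entrySet()) {
                Entry v = e.getValue();
                double score = v.recentHits * (1.0 + zoneActivity[v.timeZone]);
                if (score < worst) { worst = score; victim = e.getKey(); }
            }
            return victim; // lowest expected near-term value
        }
    }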

Title:

BOOTSTRAPPING THE SEMANTIC WEB BY DEVELOPING A MULTI-AGENT SYSTEM TO FACILITATE ONTOLOGY REUSE: A RESEARCH AGENDA

Author(s):

Abir Qasem

Abstract: Ontologies are basic components of the Semantic Web but are difficult to build, and this acts as a bottleneck in the spread of the Semantic Web. Reuse is seen as one solution to this problem. This paper addresses the feasibility of a multi-agent system that would automatically identify appropriate reusable ontologies and thereby greatly reduce the burden on its users. First, the area of automated software component reuse is reviewed and borrowed from in order to develop an appropriate framework. Next, a research agenda is proposed for developing this type of multi-agent system for ontology reuse. Finally, it is argued that the proposed multi-agent system would enable faster deployment of the Semantic Web by making the ontology development process more efficient and the developed ontologies more robust and interoperable. This use of agents may help to bootstrap the Semantic Web itself by leveraging the emerging Semantic Web architecture and contributing to its growth.

Title:

A DYNAMIC AND SCALABLE AGENT-BASED APPROACH FOR KNOWLEDGE DISCOVERY: WEB SITE EXPLORATION

Author(s):

Aurelio López López, Alberto Méndez Torreblanca

Abstract: The World Wide Web has become an open world of information in continuous growth. This dynamic nature causes several difficulties for discovering potentially useful knowledge from the web. The techniques of web mining and software agents can be combined to address this problem. In this paper, we propose a dynamic and scalable agent-based approach for knowledge discovery from specific web sites where information is constantly added or removed, or whose structure is continually modified. We also report preliminary results of the approach for the exploration of web sites.

Title:

INTELLIGENT SOFTWARE AGENTS IN THE KNOWLEDGE ECONOMY

Author(s):

Mahesh S. Raisinghani

Abstract: Intelligent agent technology is emerging as one of the most important and rapidly advancing areas in information systems and e-business. There is a tremendous explosion in the development of agent-based applications in a variety of fields such as electronic commerce, supply chain management, resource allocation, intelligent manufacturing, industrial control, information retrieval and filtering, collaborative work, decision support, and computer games. While research on various aspects of intelligent agent technology and its application is progressing at a very fast pace, this is only the beginning. There are still a number of issues that have to be explored in terms of agent design, implementation, and deployment. For example, salient characteristics of agents in different domains, formal approaches for agent-oriented modeling, designing and implementing agent-oriented information systems, agent collaboration and coordination, and organizational impact of agent-based systems are some of the areas in need of further research. The purpose of this paper is to identify and explore the issues, opportunities, and solutions related to intelligent agent modeling, design, implementation, and deployment.


Copyright © Escola Superior de Tecnologia de Setúbal, Instituto Politécnico de Setúbal