ICEIS 1999 Abstracts

 

Abstracts of Accepted Papers



Co-organized by:

École Supérieure d'Électronique de l'Ouest

and

Departamento de Sistemas e Informática da EST-Setúbal/IPS
Escola Superior de Tecnologia de Setúbal
Instituto Politécnico de Setúbal

 


 

Area 1 - DATABASE TECHNOLOGY AND ITS APPLICATIONS
Area 2 - ARTIFICIAL INTELLIGENCE AND DECISION SUPPORT SYSTEMS
Area 3 - SYSTEM ANALYSIS AND SPECIFICATION
Area 4 - INTERNET AND INTRANET COMPUTING

Area 1 - DATABASE TECHNOLOGY AND ITS APPLICATIONS

Title:

VALIDATING REFERENTIAL INTEGRITY AS A DATABASE QUALITY METRIC

Author(s):

Coral Calero, Mario Piattini, Macario Polo and Francisco Ruiz

Abstract:

This paper describes two metrics based on referential integrity. The first is defined as the maximum number of levels of referential integrity among tables and the second as the number of foreign keys. An empirical study demonstrating that these metrics can affect the understandability of a relational database schema is presented. Four cases were designed in order to validate empirically the influence of the two metrics. From the results obtained we conclude that referential integrity affects the understandability of the relational database schema.
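
Both metrics can be computed directly from schema metadata. The sketch below is a rough illustration of our own (not the authors' implementation): it counts foreign keys and the longest chain of referential dependencies over a toy schema; the schema representation and the metric names NFK/DRT are assumptions made for the example.

```python
# Hypothetical sketch: compute the two referential-integrity metrics described
# above (number of foreign keys, depth of the referential chain) on a toy
# schema. The representation and the names NFK/DRT are illustrative only.

# table -> list of tables it references through foreign keys
schema = {
    "order_line": ["order", "product"],
    "order": ["customer"],
    "product": ["supplier"],
    "customer": [],
    "supplier": [],
}

def number_of_foreign_keys(schema):
    """NFK: total count of foreign keys in the schema."""
    return sum(len(refs) for refs in schema.values())

def referential_depth(schema):
    """DRT: longest chain of referential-integrity links (assumes no cycles)."""
    memo = {}
    def depth(table):
        if table not in memo:
            refs = schema.get(table, [])
            memo[table] = 0 if not refs else 1 + max(depth(r) for r in refs)
        return memo[table]
    return max(depth(t) for t in schema)

if __name__ == "__main__":
    print("NFK =", number_of_foreign_keys(schema))   # 4
    print("DRT =", referential_depth(schema))        # 2
```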


Title:

A PROPOSAL FOR CONTROL DATABASE SOFTWARE APPLICATION DEVELOPMENT

Author(s):

Antonio Martínez and Mario Piattini

Abstract:

In this paper, we propose a measurement set for CA-OpenIngres/4GL, obtained by adapting classical metrics (size, length, complexity, cohesion, and coupling), which can be generalized to other relational database fourth generation languages. These measures are characterized using the mathematical framework developed by Briand et al. (1996) and Morasca and Briand (1997). Note that the classical measures and metrics were conceived to be applied to traditional languages, which are more homogeneous than 4GLs, and do not take into account that 4GLs include sentences (sub-languages) of a different nature.
We propose a classification of fourth generation languages into sub-languages (procedural control sentences, visual control sentences, exception handling sentences, database object manipulation sentences, data manipulation sentences, security control sentences, transaction control sentences). The application of these measures would provide a more accurate view of the characteristics of the evaluated code.


Title:

GATHERING THE RIGHT INFORMATION AT THE RIGHT TIME: AN AGENT BASED APPROACH TO DATA WAREHOUSES LOADING PROCESSES

Author(s):

Orlando Belo

Abstract:

In order to build accessible and effective means for decision-making, enterprises gather large amounts of data from their distributed information sources and, in some cases, from external repositories. The information gathered is then analysed, selected, filtered, transformed and integrated into special data storage units, commonly known as data warehouse systems. To perform all of these tasks properly, significant effort must be spent on analysing the enterprise's functional and operational processes and on defining and implementing mechanisms that effectively ensure the operations involved in the "migration" of data into the data warehouse system. Moreover, it is also necessary to guarantee consistency, permanent data availability, and high quality levels for the information stored in the data warehouse. Thus, by combining information systems and data warehousing techniques with agent-based technology, we designed an intelligent data warehousing system management architecture. The architecture's model includes all the passive and active components - data resources, end-users, and agents - related to system management, information gathering, and fault tolerance mechanisms for an intelligent data warehousing system.


Title:

DB-GRAPH: A TOOL FOR DEVELOPMENT OF DATABASE SYSTEMS BASED ON THE EXTENDED ENTITY RELATIONSHIP LOGICAL MODEL

Author(s):

Marcos Aurélio Alves, Marcelo Ossamu Honda and Fábio Lúcio Meira

Abstract:

The main goal of this paper is the presentation of db-Graph, a graphic tool for the development of database systems. This tool allows the construction of a logical database model based on the extended entity relationship model, following the pattern defined by Elmasri and Navathe in [ELMAS94]. The main strength of the tool is that it allows the manipulation of the database (insertion, deletion, modification, queries) directly in the graphic model as well as in the relational model, which is generated automatically. The tool comprises three modules: a Publishing module, for the construction and manipulation of the logical model; a Converter module, to generate the relational model automatically; and a Query Generator module, for manipulation of the database directly in the relational model.


Title:

A SYSTEM FOR ANALYSIS & VISUALIZATION OF DOMAIN SPECIFIC DATA

Author(s):

Srinivas Narasimha Kini, Srikantha and K. Poulose Jacob

Abstract:

Computerization and the use of digital devices in recent years have made data available in digital form throughout the corporate world. Despite this, most corporate executives are not fully using their invaluable asset, data. A visual form of presentation can enhance executives' understanding of the data and their interrelationships. This paper discusses how complementary technologies such as networking, data warehousing, data mining, OLAP and data visualization can be used to develop a system architecture for domain specific data, with the election domain as an example. A system with such an architecture can be used efficiently in a corporate scenario to help executives make strategic decisions. This paper also explains the creation of an efficient data warehouse based on the concept of static and dynamic data.


Title:

DESIGN AND IMPLEMENTATION OF AN OBJECT-ORIENTED ROAD NETWORK DATABASE

Author(s):

Muhammad Abaidullah Anwar and Takaichi Yoshida

Abstract:

One of the problems raised by the transportation industry is how to arrange the data related to a road in such a way that its retrieval, especially of a part of the road, is efficient and unnecessary data processing is avoided. This paper presents an effort to handle this problem by dividing the road into road segments according to the levels in which a country is divided into administrative areas, i.e. prefectures, cities, etc. The Responsibility-Driven Approach of Object-Oriented Analysis and Design (OOAD) is used in designing the Road Network Data Model. The design process is divided into two phases. In the Exploratory Phase, the classes needed for the data model, the overall responsibilities of the system and the responsibilities of the individual classes are identified. In the Analysis Phase, the hierarchies are analysed to arrange the identified classes into containment and composite hierarchies.


Title:

HOW DECISION SUPPORT SYSTEMS INFLUENCE MANAGEMENT DECISION EFFECTIVENESS?

Author(s):

Alberto Carneiro

Abstract:

This article is concerned with the decision process and examines the relationships between management decision effectiveness, information attributes, and decision support systems in the context of database management. By considering how a database and information technology affect the consequences of managers' decisions, this study attempts to provide useful insights into the linkages between strategic management, information technology, and a number of information attributes commonly used in dealing with strategic needs. This study also proposes an interpretative linear programming model, with special focus on the relationships between decision effectiveness and decision support systems. Results indicate that middle and top managers increasingly take into account the importance of information resources and the role of decision support systems in assuring decision effectiveness. The major findings are discussed and directions for future research are suggested according to the proposed model.


Title:

MODELLING BACKGROUND PROCESSES IN PARALLEL DATABASE SYSTEMS FOR PERFORMANCE PREDICTION

Author(s):

K. J. Lü

Abstract:

Performance prediction is a valuable technique for the application sizing, capacity planning and performance tuning of database systems. It allows design choices to be tested easily and cheaply. It can highlight potential performance problems in designs before any construction is conducted. Background processes are among the most important activities involved in database operation. The way the cost of background processes is estimated will greatly affect the accuracy of performance prediction for the entire system. An approximation modelling approach for background processes in parallel database systems is introduced. This approach could be used in analytical tools for performance prediction of shared-nothing and shared-disk parallel database systems.


Title:

A COORDINATION MODEL FOR WORKFLOW MANAGEMENT SYSTEMS

Author(s):

Danilo Montesi, David Beaumont, Peter Dearnley and Dan Smith

Abstract:

Systems coordination is essential to achieve a common goal through cooperative systems. Active rules provide a simple and powerful approach to high-level coordination of activities and can easily and flexibly express coordination policies among different tasks executed by different systems. We propose a coordination model based on active rules where rule semantics can be immediate, deferred or decoupled. Under the immediate semantics a rule is executed as soon as it is triggered. Under the deferred semantics rule execution is postponed. Under the decoupled semantics rule executions form a separate sequence that can run concurrently. All these rule semantics can be expressed within a simple and uniform formal model to define several coordination policies. Transactional behaviour can be expressed within the same coordination model to realize workflow management systems, where all-or-nothing behaviour is often required, running on top of a database system. We describe a conceptual architecture based on database systems to control access, a discussion of the toolkit, and a description of the workflow interface to the Web. This approach follows previous experience at BT Laboratories where customized solutions have been developed for several data-intensive cooperative systems.
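
The three coupling modes can be pictured with a small sketch. The code below is a hedged illustration of our own, not the authors' coordination model: a toy rule engine runs immediate rules at trigger time, queues deferred rules until commit, and spawns decoupled rules in a separate thread; all class and method names are invented.

```python
# Hypothetical sketch of the three rule-coupling modes (immediate, deferred,
# decoupled). Names and structure are invented for illustration only.
import threading

class Rule:
    def __init__(self, name, action, mode="immediate"):
        self.name, self.action, self.mode = name, action, mode

class RuleEngine:
    def __init__(self):
        self.rules = {}          # event -> list of rules
        self.deferred = []       # actions postponed until commit

    def register(self, event, rule):
        self.rules.setdefault(event, []).append(rule)

    def signal(self, event):
        for rule in self.rules.get(event, []):
            if rule.mode == "immediate":
                rule.action()                       # run as soon as triggered
            elif rule.mode == "deferred":
                self.deferred.append(rule.action)   # run at end of transaction
            elif rule.mode == "decoupled":
                threading.Thread(target=rule.action).start()  # separate sequence

    def commit(self):
        for action in self.deferred:
            action()
        self.deferred.clear()

# Usage: coordinate three tasks triggered by an "order_placed" event.
engine = RuleEngine()
engine.register("order_placed", Rule("bill", lambda: print("billing task"), "immediate"))
engine.register("order_placed", Rule("ship", lambda: print("shipping task"), "deferred"))
engine.register("order_placed", Rule("log",  lambda: print("audit task"),    "decoupled"))
engine.signal("order_placed")
engine.commit()
```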


Title:

TWO FAST ALGORITHMS FOR REPEATED MINING OF ASSOCIATION RULES BASED ON RESOURCE REUSE

Author(s):

Xiaoping Du, Kunihiko Kaneko and Akifumi Makinouchi

Abstract:

This paper proposes new algorithms for mining association rules in a large database of sales transactions. In order to obtain results that satisfy the users, data mining has to be repeated several times with different minimum supports. If the existing mining algorithms are used for each repeated mining, similar and unnecessary processing is repeated, which may lead to overhead. We propose two more effective algorithms. They store the intermediate results used in the following mining runs and reuse them to improve the performance of repeated mining. The empirical evaluation shows that, when mining is repeated, the performance of our algorithms is much better than that of rerunning the existing algorithms.
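
The reuse idea can be illustrated with a minimal sketch of our own (not the paper's algorithms): count itemset supports once, cache them, and answer later mining requests with different minimum supports from the cache instead of rescanning the database. The transactions and the restriction to 1- and 2-itemsets are assumptions made for the example.

```python
# Hypothetical illustration of reusing intermediate results across repeated
# association-rule mining runs with different minimum supports. Simplified to
# 1- and 2-itemsets; not the paper's algorithms.
from itertools import combinations
from collections import Counter

transactions = [
    {"bread", "milk"},
    {"bread", "beer", "milk"},
    {"beer", "milk"},
    {"bread", "milk", "butter"},
]

_support_cache = Counter()   # itemset (frozenset) -> support count

def _count_once():
    """First mining run: count supports of all 1- and 2-itemsets and cache them."""
    if _support_cache:
        return
    for t in transactions:
        for item in t:
            _support_cache[frozenset([item])] += 1
        for pair in combinations(sorted(t), 2):
            _support_cache[frozenset(pair)] += 1

def frequent_itemsets(minsup):
    """Later runs with a different minsup just filter the cached counts."""
    _count_once()
    return {itemset: count for itemset, count in _support_cache.items() if count >= minsup}

print(frequent_itemsets(3))   # counts and caches on the first call
print(frequent_itemsets(2))   # reuses the cache; no database rescan
```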


Title:

AUTOMATIC DOCUMENT ANALYSIS AND UNDERSTANDING SYSTEM

Author(s):

Xuhong Li, Jianshun Hu, Zhenfu Cheng, D.C. Hung, and Peter A. Ng

Abstract:

Document processing is a crucial process in office automation. Document image processing begins with the "OCR" phase and faces the difficulty of document "analysis" and "understanding". In this paper, we describe an automatic document classification and extraction system (ADoCES), which is a component of TEXPROS (Text Processing System). This component proceeds by scanning a given paper document into the system, classifying it as a particular type, which is characterized in terms of attributes to form a frame template, and then extracting the pertinent information from the document to form its corresponding frame instance, which is an effective digital form of the original document. Given an incoming document, document analysis proceeds to define its layout structure in terms of a directed and weighted graph (DWG) and its logical structure in terms of a frame template. Document understanding proceeds to extract information from the document based on the layout structure and the frame template of the document. We describe briefly how the system can "learn" from experience and function in an operational stage.


Title:

PUBLIC TRANSPORT TRAVEL PLANNING APPLICATION

Author(s):

Marisol Correia

Abstract:

This paper presents an application that plans travel on public transport and chooses the best plans according to preference criteria provided by the user. These criteria are: the time spent on the journey, the price of the tickets and the quality of the transport. The application combines different means of transport. Algorithms and heuristics were developed to draw up transport plans and to choose the best ones. The best plans are determined using multi-attribute decision techniques. The application uses a database developed in a relational database management system. To design the database at the conceptual and application levels, an object-based model, the Entity-Relationship Model, was used.
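
As a rough illustration of the multi-attribute selection step (not the paper's actual algorithm), the sketch below scores candidate plans with a simple weighted sum over normalized time, price and quality attributes; the plan data, attribute names and weights are invented for the example.

```python
# Hypothetical multi-attribute scoring of travel plans (weighted sum over
# normalized criteria). Weights and plan data are invented for illustration.

plans = [
    {"name": "bus+metro",   "time_min": 55, "price_eur": 2.4, "quality": 0.6},
    {"name": "train",       "time_min": 40, "price_eur": 4.1, "quality": 0.8},
    {"name": "express bus", "time_min": 35, "price_eur": 5.0, "quality": 0.7},
]

# User preference weights: lower time/price and higher quality are better.
weights = {"time_min": 0.5, "price_eur": 0.3, "quality": 0.2}

def score(plan):
    total = 0.0
    for attr, w in weights.items():
        values = [p[attr] for p in plans]
        lo, hi = min(values), max(values)
        norm = (plan[attr] - lo) / (hi - lo) if hi > lo else 0.0
        if attr != "quality":          # cost criteria: smaller is better
            norm = 1.0 - norm
        total += w * norm
    return total

best = max(plans, key=score)
print(best["name"], round(score(best), 3))
```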


Title:

A NOVEL APPROACH FOR THE SEAMLESS INTEGRATION OF MEDIA MANAGEMENT INTO DISTRIBUTED ENTERPRISE INFORMATION SYSTEMS

Author(s):

Gabor Szentivanyi, Waltraud Gerhardt and Mohammad Abolhassani

Abstract:

Current approaches to enriching distributed enterprise information systems with media functionality do not provide a high degree of openness, flexibility and configurability. The deficiencies are to be found in the models that serve as a foundation for the management and for the information to be managed, and in the architecture that provides a realisation environment for the models. On the one hand, the models used are usually not full-fledged distributed object models, but interchange scripts, proprietary protocols and local views. On the other hand, the architectures used cannot provide the infrastructure to seamlessly incorporate distributed object models with all their features. This paper defines the foundations of distributed enterprise information systems as business management and media management. It examines existing architectures and points out their deficiencies concerning seamless integration of media management. Finally, it proposes a distributed object model for media management and an architecture for distributed information systems into which the media management model can be seamlessly integrated.


Title:

SUPPORTING DECISIONS CONCERNING LOCATION-ROUTING OF OBNOXIOUS FACILITIES: AN APPLICATION EXAMPLE OF VISUALIZATION AND INTERACTION TECHNIQUES

Author(s):

Carlos Ferreira, Andreia Barbosa De Melo and Beatriz Sousa Santos

Abstract:

The problem of locating obnoxious facilities has become a major social concern. In these cases the traditional optimality criterion of "closeness" is replaced by the opposite criterion; moreover, location and routing decisions are usually intimately related, and combining both provides better modelling. The problem's already high complexity is further increased when other criteria of interest are considered simultaneously. The use of multiobjective location-routing models, which corresponds to better modelling of real cases, can be jeopardised by the huge effort demanded. Consequently, the use of graphical and interactive methods becomes of great importance. This paper addresses specific issues, and the corresponding solutions developed, for the visualization and interaction problems related to the use of a multiobjective location-routing model that could be used in a Decision Support System for Location-Routing of Obnoxious Facilities. A mock-up user interface was developed in order to test whether the visualization and interaction solutions prove adequate.


Title:

CANDIDATE DROP ALGORITHM FOR MINING ASSOCIATION RULES

Author(s):

S. Raman and T. S. Appan

Abstract:

Knowledge Discovery in Databases (KDD) helps in realizing the potential value of the implicit information stored in large databases. One specific approach is the mining of association rules. An example of an association rule is that "90% of customers who buy Rufies also buy Coke". The main problem is that of discovering the relationships between such items. In the widely used Apriori algorithm, the frequent itemsets are generated through an iterative process which requires a large number of I/O operations. The partition algorithm, a later development, scans the database only twice. In the first scan, it generates a huge list of potential candidate itemsets which are used in the second scan to generate the frequent itemsets. In this paper, a new approach called the candidate drop method is proposed; it reduces the number of potential candidates through the use of Virtual Large Partitions. Empirical evaluation shows that the reduction is between 24% and 41%. Also, the number of scans over the database is reduced to between 1 and (2n-1)/n, where n is the number of partitions.
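
For readers unfamiliar with the baseline the paper refines, the sketch below illustrates the two-scan partition idea under assumptions of our own: itemsets that are locally frequent in some partition form the candidate set, whose exact supports are then counted in a second scan. It is deliberately simplified (itemsets of size at most 2) and is not the candidate drop method itself.

```python
# Hypothetical sketch of the two-scan partition approach referred to above.
# Simplified to itemsets of size <= 2; not the paper's candidate drop method.
from itertools import combinations
from collections import Counter

def local_itemsets(partition, min_frac):
    counts = Counter()
    for t in partition:
        for size in (1, 2):
            for s in combinations(sorted(t), size):
                counts[frozenset(s)] += 1
    threshold = min_frac * len(partition)
    return {s for s, c in counts.items() if c >= threshold}

def partition_mine(partitions, min_frac):
    # Scan 1: union of locally frequent itemsets forms the global candidate set.
    candidates = set().union(*(local_itemsets(p, min_frac) for p in partitions))
    # Scan 2: count candidate supports over the whole database.
    total = sum(len(p) for p in partitions)
    counts = Counter()
    for p in partitions:
        for t in p:
            for c in candidates:
                if c <= t:          # candidate itemset contained in transaction
                    counts[c] += 1
    return {c: n for c, n in counts.items() if n >= min_frac * total}

db = [[{"a", "b"}, {"a", "c"}, {"a", "b"}], [{"b", "c"}, {"a", "b"}, {"a"}]]
print(partition_mine(db, min_frac=0.5))
```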


Title:

AN INFORMATION SYSTEM FOR DISTRIBUTED MANUFACTURING ENTERPRISES

Author(s):

Américo Lopes Azevedo and Cesar Toscano

Abstract:

A component-based information system supporting the requirements of an optimisation-based global Order Promise system developed for use in distributed manufacturing enterprises is described. The system is fully distributed and object-oriented, and is part of a broader decision support system for production and operations planning in the semiconductor industry. In this manufacturing sector, planning and control activities are very complex and have to take place both within the enterprise and across the whole supply network in order to achieve high levels of performance. This leads to new, more challenging requirements for information systems, which are partially fulfilled by the system described in this paper.


Title:

USING TRANSFORMATION PATHS TO TRANSLATE AND MIGRATE DATA

Author(s):

Christophe Nicolle, Nadine Cullot and Kokou Yétongnon

Abstract:

The increasing need for exchanging data between heterogeneous distant sites and for managing new database systems has led to the development of new mechanisms and architectures that allow the interoperability of information systems. One of the main problems of these architectures is handling heterogeneous data translation between cooperative databases. Many translation methodologies have been developed, but they are often too specific or not extensible to new models. This paper presents a CASE tool, called TIME, to aid the construction and management of heterogeneous cooperative information systems. The key features of this solution are 1) a set of abstract metatypes that capture the characteristics of various modelling concepts, 2) the organisation of the metatypes in a generalisation hierarchy to allow unification and correlation of data models, and 3) a set of transformation rules coupled with the hierarchy that ensure the translation between instances of metatypes. Rules are combined into translation paths used for schema translation and data migration. A Data Translation Module uses these translation paths to build data translators, which can format data requested from one system for another system.


Title:

UPDATING MULTIPLE DATABASES THROUGH MEDIATORS

Author(s):

Vânia Maria Ponte Vidal and Bernadette Farias Lóscio

Abstract:

A mediator is a facility that supports an integrated view over multiple information sources and allows queries to be made against the integrated view. In this paper, we extend the mediator architecture to support updates against the integrated view. Updates expressed against the mediator's integrated view need to be translated into updates of the underlying local databases. We developed algorithms to generate translators for the basic types of mediator update operations. In our approach, a translator is a function that receives an update request and generates the update's translation. In this paper, we present the algorithm that generates translators for monovalued attribute modification operations.
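
The shape of such a translator can be sketched in a few lines. The example below is an assumption-laden illustration, not the paper's algorithm: a view-to-source mapping is used to generate a function that turns a monovalued attribute modification on the integrated view into an update addressed to the owning source; all table, column and database names are invented.

```python
# Hypothetical sketch of translating a monovalued-attribute modification on an
# integrated view into an update on the underlying source. Mapping structure
# and names are invented; this is not the paper's translator-generation algorithm.

# Mapping: view attribute -> (source database, table, column)
view_mapping = {
    "customer_name":  ("db_sales", "clients",  "name"),
    "customer_phone": ("db_crm",   "contacts", "phone"),
}

def make_translator(view_attr):
    """Return a translator function for modifications of one view attribute."""
    source_db, table, column = view_mapping[view_attr]
    def translate(key, new_value):
        # The translation is an SQL update addressed to the owning source.
        sql = f"UPDATE {table} SET {column} = :v WHERE id = :k"
        return {"target": source_db, "sql": sql, "params": {"v": new_value, "k": key}}
    return translate

translator = make_translator("customer_phone")
print(translator(key=42, new_value="+351 265 000 000"))
```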


Title:

AN INTELLIGENT RETAIL ANALYSIS SYSTEM

Author(s):

M. Fátima Rodrigues and Carlos S. Ramos

Abstract:

The progress of data-collection technology, such as bar-code scanners in commercial domains, generates huge amounts of data. Moreover, pressure to improve profitability has caused retail companies to spend more energy on identifying sales opportunities. To aid this task, companies increasingly store huge amounts of data in data warehouses for decision support purposes. The needs of decision-support systems are evolving into finer and finer-grain requirements. In the 60's the requirements were at the market level; in the 70's, at the niche level; in the 80's, at the segment level; and in the 90's, at the customer level (Druker 1995). These finer-grain requirements obviously lead to the use of more data in decision support systems. It is not realistic to expect all this data to be carefully analysed by human experts. A new generation of tools and techniques for automated intelligent database analysis is needed. These tools and techniques are the subject of the rapidly emerging field of Knowledge Discovery in Databases, which is the subject of this article. This article presents two data mining exercises that we have added to DECADIS - "DEscoberta de Conhecimento em Armazéns de Dados da DIStribuição" (Knowledge Discovery in Retail Data Warehouses), an integrated system for understanding customer behaviour and consumption patterns in a Portuguese retail company.


Title:

ON INCORPORATING MULTIMEDIA INTO SPATIOTEMPORAL SYSTEMS

Author(s):

Cláudio de Souza Baptista

Abstract:

Advances in multimedia technology have changed the way information systems are designed. The number of applications requiring the use of this new data type is increasing. This work focuses on the integration of multimedia into spatial information systems. It is argued that the use of metadata to express the semantics of those data types is essential for effectively indexing, retrieving, transporting and storing them. A metamodel is defined which encompasses spatial, temporal, aspatial and multimedia data types. This model is defined using a hierarchical approach with different levels of abstraction and is implemented on top of an object-relational database system. A key characteristic of the model is the flexibility of querying the database objects by content, by spatial and temporal features, and by keywords. The model is designed to satisfy the following properties: a unified and transparent view of the underlying data, query facilities, indexing requirements, distribution, interoperability, and extensibility.


Title:

ANALYZING THE IMPACT OF SCHEMA CHANGE ON APPLICATION PROGRAMS

Author(s):

M. Bouneffa, H. Basson and L. Deruelle

Abstract:

A database is generally shared by several application programs. This makes any schema change critical, and often prohibited, because existing application programs frequently become invalid when run against the modified schema. Many approaches have been developed to deal with this problem, and the main idea has been to make the change transparent to the programs. In the case of object-oriented databases, transparency is often achieved by using a kind of ad hoc polymorphism, which may be implemented by the instances, the query language or the application programs. Such approaches are often very useful for a transition period or for a limited taxonomy of schema changes. However, they cannot be sufficient to deal with all kinds of schema changes. Moreover, polymorphism-based approaches may increase system complexity and can have bad side effects on both the maintenance process and system performance. In this paper, we deal with adapting object-oriented application programs to database schema changes. We assume that such programs must evolve to meet the new schema versions. Our approach is based on the use of a software change impact analyser that we developed to manage the change of object-oriented programs.


Title:

PROTOTYPE VALIDATION OF THE TRAPEZOIDAL ATTRIBUTE CARDINALITY MAP FOR QUERY OPTIMIZATION IN DATABASE SYSTEMS

Author(s):

Murali Thiyagarajah and B. John Oommen

Abstract:

Current database systems utilize histograms to approximate the frequency distributions of attribute values of relations. These are used to efficiently estimate query result sizes and access plan costs and thus minimize the query response time for business and non-commercial database systems. In two recent works (Oommen and Thiyagarajah 1999a; Oommen and Thiyagarajah 1999b) we proposed two new histogram-like techniques called the Rectangular and Trapezoidal Attribute Cardinality Maps (ACM) respectively. Since these techniques are based on the philosophies of numerical integration, they provide much more accurate result size estimations than the traditional equi-width and equi-depth histograms currently used by many commercial database systems. In (Oommen and Thiyagarajah 1999a; Oommen and Thiyagarajah 1999b) we also provided a fairly extensive mathematical analysis of their average and worst case errors for frequency estimates, which were in turn verified on synthetic data. This paper reports the prototype validation of the Trapezoidal ACM (T-ACM) for query optimization in real-world database systems. Using an extensive set of experiments with real-life data (U.S. Census 1997, NBA 1992), we demonstrate that the T-ACM scheme is much more accurate than traditional histograms for query result size estimation. We anticipate that it could become an invaluable tool for query optimization in the future.
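
The numerical-integration intuition can be shown with a minimal sketch of our own: an equi-width style estimate assumes a flat frequency inside each bucket, whereas a trapezoidal map interpolates linearly between frequencies at the bucket boundaries. The bucket data and function names below are invented assumptions; this is not the authors' ACM construction.

```python
# Hypothetical sketch contrasting flat (equi-width histogram) and trapezoidal
# result-size estimation for a range query. Bucket data are invented and chosen
# so that each trapezoid integrates exactly to the bucket's tuple count.

# Each bucket: (low, high, total_count, freq_at_low, freq_at_high)
buckets = [(0, 10, 200, 30, 10), (10, 20, 100, 10, 10), (20, 30, 400, 10, 70)]

def estimate_flat(lo, hi):
    """Equi-width style: assume uniform frequency inside each bucket."""
    est = 0.0
    for b_lo, b_hi, count, _, _ in buckets:
        overlap = max(0.0, min(hi, b_hi) - max(lo, b_lo))
        est += count * overlap / (b_hi - b_lo)
    return est

def estimate_trapezoidal(lo, hi):
    """Trapezoidal style: integrate a linear frequency ramp inside each bucket."""
    est = 0.0
    for b_lo, b_hi, _, f_lo, f_hi in buckets:
        left, right = max(lo, b_lo), min(hi, b_hi)
        if left >= right:
            continue
        def f(x):  # linearly interpolated frequency density at x
            return f_lo + (f_hi - f_lo) * (x - b_lo) / (b_hi - b_lo)
        est += (f(left) + f(right)) / 2.0 * (right - left)   # trapezoid area
    return est

print(estimate_flat(5, 25), estimate_trapezoidal(5, 25))   # 400.0 vs 300.0
```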


Title:

MEDINFORM: AN ENTERPRISE-WIDE MEDICAL INFORMATION AND TELEMEDICINE SYSTEM

Author(s):

Pang Pingli and Micheal Fenton

Abstract:

MedInform is a distributed medical imaging system. It links scattered medical resources and provides cheap, scalable, personalized, secure and fast access to them, not only within an organization but in other institutions as well. The system consists of a set of autonomous local systems, each of which specializes in a certain medical domain such as craniofacial surgery and planning or cardiology. Each of these domain systems has a distributed three-tier web-based client/server architecture. The database provides local storage to facilitate faster and specialized access to medical data and other information from PACSs, imaging devices, existing systems, or outside institutions. The service layer performs computation-intensive jobs such as image analysis, 3-D visualization and telemedicine, which allows the client computer to be simple, with less computational power. This gives cheap image access to general users, including patients. Each interface component serves a certain application rendering purpose. The role-oriented interface picks the related interface components needed by the user to form the most suitable working environment. Existing systems and point-of-care devices are linked to the system through DICOM and HL7 interfacing services. Communication among the different systems is achieved through an Information Repository Center (IRC) using interface components or service objects, which gives more efficient system integration than the present method through HL7 interfacing. The IRC links the shared PACSs and other medical devices. It keeps indexes of various information in PACSs, domain systems and outside institutions. Upon any information change, these systems inform the IRC so that it updates its indexes. Levels of information access control and security are also maintained in the IRC and the domain systems.


Title:

THREE-LEVEL ARCHITECTURE FOR QUERY CLOSURE IN DATABASE SYSTEMS

Author(s):

B. Nick Rossiter, David A. Nelson and Michael A. Heather

Abstract:

In a database query, closure is achieved when the output from its execution is a first-class database structure. Such structures should be capable of being further manipulated or queried by the database system according to its model, without limitations. Closure comes easily to relational databases because there is only one type of data - the table. In more expressive database models, such as the object-oriented, closure is not achieved naturally. Such systems allow new objects to be created, but it is much more difficult to create new class-object structures which rank equally in all respects with other such structures already existing in the database. For ODMG, the problem of closure is acknowledged and some tentative solutions have been proposed.


Area 2 - ARTIFICIAL INTELLIGENCE AND DECISION SUPPORT SYSTEMS

Title:

STRUCTURE LEARNING OF BAYESIAN NETWORKS FROM DATABASES BY GENETIC ALGORITHMS: APPLICATION TO TIME SERIES PREDICTION IN FINANCE

Author(s):

Jérôme Habrant

Abstract:

This paper outlines a genetic algorithm based method for constructing Bayesian networks from databases. Our method permits the generation of a complete structure when there is no expert for the domain studied. It also allows advantage to be taken of knowledge about the domain by specifying connections in the network. To test our method, we applied it to time series prediction in finance with five shares. We experimented with three different genetic algorithms: first, we used classical syntactical genetic operators; second, we added two high-level genetic operators that take the semantics of the structures into consideration; and third, we added a last, powerful operator. Furthermore, we studied three constraints on the structures: assuming an ordering between the nodes, releasing the ordering assumption, and forcing the structures to use all available information to build the forecasts. For each of the three genetic algorithms and the three constraints, we present our results concerning the convergence of the genetic algorithms and the predictive power of the best structures obtained. Our results are encouraging.
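
The general search scheme can be pictured with a small sketch, written under assumptions of our own rather than the paper's operators: network structures are encoded as upper-triangular adjacency matrices over a fixed node ordering and evolved by mutation and crossover against a scoring function. The scorer below is a trivial stand-in for a data-based criterion, and all parameters are invented.

```python
# Hypothetical sketch of GA-based Bayesian-network structure search. Structures
# are upper-triangular adjacency matrices over a fixed node ordering; the score
# is a dummy stand-in for a data-based criterion (e.g. a penalized likelihood).
import random

N = 5                      # number of variables
random.seed(0)

def random_structure():
    return [[random.random() < 0.3 if j > i else False for j in range(N)]
            for i in range(N)]

def score(structure):
    # Stand-in scorer: rewards an arbitrary target pattern, penalizes extra edges.
    target = {(0, 4), (1, 4), (2, 3)}
    edges = {(i, j) for i in range(N) for j in range(N) if structure[i][j]}
    return len(edges & target) - 0.2 * len(edges - target)

def mutate(structure, rate=0.05):
    return [[(not structure[i][j]) if (j > i and random.random() < rate)
             else structure[i][j] for j in range(N)] for i in range(N)]

def crossover(a, b):
    # Row-wise crossover: each node inherits its parent set from one parent.
    return [random.choice((a[i], b[i]))[:] for i in range(N)]

population = [random_structure() for _ in range(20)]
for generation in range(50):
    population.sort(key=score, reverse=True)
    parents = population[:10]
    population = parents + [mutate(crossover(random.choice(parents),
                                             random.choice(parents)))
                            for _ in range(10)]

best = max(population, key=score)
print("best score:", score(best))
```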


Title:

A GENERAL METHODOLOGY FOR ROBOTIC HAPTIC RECOGNITION OF 3-D OBJECTS

Author(s):

E. Faldella and M. Prandini

Abstract:

Three-dimensional object recognition is a fundamental prerequisite for building versatile robotic systems. This paper describes an approach to the recognition problem that exploits tactile sensing, which can be conveniently integrated into an advanced robotic end-effector. The adopted design methodology is based on the training and classification activities typical of unsupervised Kohonen neural networks, with a learning phase of the geometric properties of the objects followed by the operative phase of actual recognition, in which the robot explores the objects with its end-effector, correlating the sensorial data with the preceding perceptive experiences. The validity of the novel approach pursued for the design of the haptic recognition system has been ascertained with reference to a high-dexterity 3-finger, 11-degree-of-freedom robotic hand (the University of Bologna hand), but the underlying methodological issues can be specialized to any dexterous robotic end-effector. The developed prototype system, even though currently restricted to a simulated environment, has already shown a satisfactory operative level in recognizing objects belonging to a set of significant cardinality, independently of their pose in the working space.


Title:

LINGUISTIC ENGINEERING FOR CONCEPTION OF MULTIAGENTS SYSTEMS

Author(s):

Jean-Philippe Kotowicz and Xavier Briffault

Abstract:

The actors of a company must collaborate in an efficient way to achieve the common purposes of the company (the Project). Therefore, they must have competencies, manage and produce knowledge, perform tasks, and communicate with each other. The objective of the EUREKA project "MERCURE", in which this work fits, is to propose a tool to assist collaboration and communication between actors, based on a multi-agent platform (software agents) communicating via speech acts in advanced languages. The software agents are built by analysis of the actors' competencies, from interviews in natural language.


Title:

DAMAS: AN INTEGRATED BUSINESS MODELLING INFORMATION SYSTEM TO SUPPORT MANAGEMENT ENTERPRISE DECISIONS

Author(s):

Luigi Lavazza and Habib Sedehi

Abstract:

DAMAS aims at building a decision support system based on the integration of legacy data repositories and system dynamics modelling tools. The latter are used to simulate the behaviour of different business areas, such as marketing, finance, production, etc. Moreover, an enterprise-wide, company-specific model, integrating the aforementioned areas, is being built. The main target is the wine production industry. Nevertheless, the DAMAS consortium is pursuing the applicability of the proposed approach to other industrial sectors. DAMAS features a business object architecture, encapsulating legacy repositories, as well as business intelligence and common functions. Managers are provided with a high-level dashboard which co-ordinates, controls and monitors the underlying business objects.


Title:

SIPO: A SYSTEM FOR THE ESTABLISHMENT OF OBJECTIVES

Author(s):

Paulo Rogério Perfeito Tomé and Luís Alfredo Martins do Amaral

Abstract:

The explicit or implicit definition of the objectives of an organisation is usually performed by the individuals of that organisation. Individuals have their own personal objectives, departmental objectives and also objectives for the organisation in its entirety. Knowledge of all the objectives is important for the future development of the organisation. In order that the objectives might be achieved, it is important that they be understood. For this reason they must be expressed in a clear and precise way. The coexistence of a multitude of objectives, pursued by different individuals, causes conflicts, redundancies and cross-influences that confirm the relations among these objectives. This article describes an intelligent system that allows the totality of objectives of an organisation to be determined. To this end an attempt is made to establish a language and a set of relations to be found among these objectives.


Title:

WHAT’S IN A NODE: NODES AND AGENTS IN LOGISTIC NETWORKS

Author(s):

Joaquim Reis and Nuno Mamede

Abstract:

In this article we describe components of a representation scheme for modeling a logistics environment, where production and distribution of products can both occur and must be coordinated, following from previous work of the authors. These components model enterprises (e.g., factories, warehouses, etc.) in a Production/Distribution network (P/D network, for short), as well as capacity management at enterprise facilities and agents which act as enterprise managers, taking decisions that affect the available capacity at the facilities. In the near future, our goals include approaching the multi-agent coordination problems that occur in scheduling activity in this kind of environment.


Title:

NEURAL NETWORKS FOR X-RAY IMAGE SEGMENTATION

Author(s):

Su Linying, Bernadette Sharp and Darryl Davis

Abstract:

The paper addresses the application and challenges of using neural networks to segment gray-level images, an approach we term direct perception. The work described here is part of the Intelligent Multi-Agent Image Analysis System, which is being developed to promote the automated diagnosis and classification of digital images. In this paper we show how neural networks may successfully segment medical X-ray images of the thigh. The networks used are the Back Propagation neural network, the Counter Propagation neural network, the Self-Organizing Feature Map, the Bi-directional Associative Memory, and a hybrid network consisting of BP and SOFM. Comparisons of their performance are made, and some feature extraction techniques used here are presented. A one-layer neural network, known as WISARD, may be used to validate the segmentation performance based on the segmentation results. A highly general piece of validation information, the centroid curve of the segmented images, is proposed here.


Title:

SCALABLE INTELLIGENCE DECISION SUPPORT SYSTEMS

Author(s):

Carlos Ramos

Abstract:

In this paper we compare two different approaches: decision support technology and Artificial Intelligence (AI) technology. After a discussion of these approaches, we conclude that they are not alternatives, being most of the time complementary. A new concept is introduced: "Scalable Intelligence Decision Support Systems" (SIDSS). The idea is to have a new generation of Decision Support Systems that gives the user the opportunity to use as much system intelligence as he or she wants. Some ongoing projects in this direction are presented in the paper.


Title:

TOWARDS AN EXECUTABLE SPECIFICATION OF MULTIAGENT SYSTEMS

Author(s):

V. Hilaire, T. Lissajoux and A. Koukam

Abstract:

Facing the spread of Multi-Agent Systems and their numerous implementations, the gap in engineering methods is becoming acute. This paper proposes a formal specification method in order to fulfil that need. The method is based upon two existing formalisms: Object-Z and statecharts. We compose them to build a specification framework and illustrate the approach with an example drawn from the radiomobile network field.


Title:

THE STL++ COORDINATION LANGUAGE: APPLICATION TO SIMULATING THE AUTOMATION OF A TRADING SYSTEM

Author(s):

Michael Schumacher, Fabrice Chantemargue, Simon Schubiger, Béat Hirsbrunner and Oliver Krone

Abstract:

This paper introduces the STL++ coordination language, a C++-based language binding of the ECM coordination model. STL++ applies theories and techniques known from coordination theory and coordination languages in distributed computing in order to better formalise communication and coordination in distributed multi-agent applications. STL++, as such, may be seen as a preliminary agent language which allows the organisational structure or architecture of a multi-agent system to be described, with means to dynamically reconfigure it. It is aimed at giving basic constructs for distributed implementations of generic multi-agent platforms, to be run on a LAN of general-purpose workstations. We illustrate the application of STL++ with a real case study, namely simulating the automation of a trading system.


Title:

WHAT RIGHT DO YOU HAVE TO DO THAT? INFUSING ADAPTIVE WORKFLOW TECHNOLOGY WITH KNOWLEDGE ABOUT THE ORGANISATIONAL AND AUTHORITY CONTEXT OF A TASK

Author(s):

Peter Jarvis, Jussi Stader, Ann Macintosh, Jonathan Moore and Paul Chung

Abstract:

To achieve more widespread application, workflow systems need to be developed to operate in dynamic environments where they are expected to ensure that users are supported in performing flexible and creative tasks while maintaining organisational norms. We argue that in order to cope with these demands, the systems must be provided with knowledge about the organisational structure and authority context of tasks. We support this argument by identifying a number of decision points that an adaptive workflow system must support, discussing how these decisions can be supported with technically oriented capability specifications, and describing how this support can be enhanced with the inclusion of knowledge about organisational structure and authority. We outline how such knowledge can be captured, structured, and represented in a workflow system. We then demonstrate the use of such knowledge by describing how the task initiation, task planning, activity scheduling, and agent interaction functions within a workflow system can be enhanced by it.


Title:

HOLONIC DYNAMIC SCHEDULING ARCHITECTURE AND SERVICES

Author(s):

Nuno Silva and Carlos Ramos

Abstract:

Manufacturing systems are changing their structure and organisation. Supply chains are evolving into more tightly coupled organisations, like virtual enterprises, while maintaining the single entities' autonomy, adaptability and dynamism. Such organisations are very different, which implies an organisational and technological shift towards agility, distribution, decentralisation, reactivity and flexibility. New organisational and technological paradigms are needed in order to respond to the needs of modern manufacturing systems. This paper presents a holonic manufacturing system architecture and complementary services supplied to assist overall Communication, Security, Reliability, Information Management, Co-operation and Co-ordination.


Title:

CONTOUR ESTIMATION ON PIECEWISE HOMOGENEOUS RANDOM FIELDS

Author(s):

José A. Moinhos Cordeiro and José M. Bioucas Dias

Abstract:

This paper addresses contour estimation in images modeled as piecewise homogeneous random fields. It is therefore assumed that images are samples of random fields composed of a set of statistically homogeneous regions; pixels within each region are assumed to be independent samples of a given random variable. Particular attention is given to Gaussian, Rayleigh, and Poisson densities. The model just described accurately fits many classes of problems in image modalities such as optical, ultrasound, X-ray, emission tomography, and confocal microscopy, to name only a few. The approach followed is Bayesian: contours are assumed to be non-causal Markov random fields. This description is appropriate for including a priori information such as continuity, smoothness, elasticity, and rigidity. The selected estimation criterion is the maximum a posteriori (MAP). In the present context, MAP estimation, although simpler than others (e.g., minimum mean square error or minimum absolute error), leads to a huge non-linear optimization problem. By using dynamic programming associated with a multigrid resolution technique, quasi-optimal contour estimates are computed with acceptable complexity. A set of tests using synthetic and real images illustrates the appropriateness of the proposed methodology.
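
To illustrate how dynamic programming can attack this kind of optimization, the sketch below (an illustration under assumptions of our own, not the paper's multigrid MAP formulation) finds, column by column, the contour row that minimizes a data cost plus a smoothness penalty; the toy cost image and the penalty weight are invented.

```python
# Hypothetical illustration of contour estimation by dynamic programming:
# choose one contour row per column minimizing data cost + smoothness penalty.
# Toy cost image and penalty are invented; not the paper's formulation.

H, W = 6, 8
# data_cost[r][c]: how badly row r in column c fits the contour (toy values).
data_cost = [[abs(r - (2 + c % 2)) for c in range(W)] for r in range(H)]
SMOOTH = 0.5   # penalty per row of vertical jump between adjacent columns

best = [[0.0] * W for _ in range(H)]
back = [[0] * W for _ in range(H)]
for r in range(H):
    best[r][0] = data_cost[r][0]
for c in range(1, W):
    for r in range(H):
        # best predecessor row in the previous column
        prev = min(range(H), key=lambda p: best[p][c - 1] + SMOOTH * abs(p - r))
        back[r][c] = prev
        best[r][c] = data_cost[r][c] + best[prev][c - 1] + SMOOTH * abs(prev - r)

# Backtrack the optimal contour from the cheapest endpoint.
r = min(range(H), key=lambda rr: best[rr][W - 1])
contour = [r]
for c in range(W - 1, 0, -1):
    r = back[r][c]
    contour.append(r)
contour.reverse()
print("estimated contour rows per column:", contour)
```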


Title:

SPARSE-IT: AN INTELLIGENT TUTOR FOR POWER SYSTEM CONTROL CENTER OPERATOR TRAINING

Author(s):

Zita A. Vale, António Silva, Luiz Faria, Nuno Malheiro, Carlos Ramos and Albino Marques

Abstract:

This paper deals with the use of the intelligent tutor SPARSE-IT as an example of an engineering application of Artificial Intelligence techniques in the power systems Control Center environment. This tutor works in close association with SPARSE, an expert system specialized in the interpretation of the alarm messages produced by the SCADA system during serious incidents. Its main purpose is the training of Control Center operators in this precise task. This paper discusses issues such as the representation of an adequate user model, the acquisition of the domain knowledge to be taught to the operators, and techniques for the accurate evaluation of the trainee's progress.


Title:

AN EXPERT SYSTEM FOR INTELLIGENT INFORMATION PROCESSING IN PORTUGUESE POWER SYSTEM CONTROL CENTERS

Author(s):

Zita A. Vale, Carlos Ramos, Luiz Faria, Jorge Santos, Nuno Malheiro, António Silva and Albino Marques

Abstract:

Power System Control Centers receive real-time information about the power system that they operate. In incident situations, a huge volume of information can be received, requiring intelligent means of processing and interpreting it. Knowledge Based Systems developed for this purpose have to be integrated with the existing hardware and software and must be able to assure real-time performance under incident conditions. This paper discusses the requirements of Knowledge Based Systems for control center applications. SPARSE, an expert system developed to assist the operators of Portuguese Transmission Control Centers, is used as an example throughout the paper. Knowledge maintenance and knowledge verification and validation are considered important issues for the success of this kind of application.


Title:

AGENT MEDIATED MULTI PERSPECTIVE SOFTWARE DEVELOPMENT

Author(s):

Gulden Uchyigit

Abstract:

In this paper we present a distributed CSCW tool with an agent-based infrastructure which enables stakeholders to work together co-operatively, from different sites, on developing a requirements specification. We discuss the roles and functionalities of the agents that provide support for consistency checking, task delegation and graphical display of the user-selected mode. The agents communicate using a variant of KQML, a proposed standard inter-agent communication language.


Title:

USING INTELLIGENT RETINA WITH CELLULAR AUTOMATA IN ACCESS CONTROL

Author(s):

Eduard Franti, Monica Dascalu, George Stefan and Mihai Stanescu

Abstract:

This paper presents the design of an access control device realized with an intelligent retina based on the cellular automata model, dedicated to a particular traffic control application: controlling access to spaces with significant dimension restrictions such as tunnels, bridges or narrow streets, at the entry of garages or car parks, and so on. The device processes the information given by a (simplified) digital camera, detecting the dimensions of incoming vehicles and comparing them with the accepted limits. Depending on the result, a traffic light signals whether or not access is permitted. The paper describes the operation and the possibilities of implementation with cellular automata, the results of the simulations and the basic design for the hardware implementation.


Title:

VEHICLE DETECTION IN TRAFFIC IMAGES

Author(s):

Luigi Di Stefano and Enrico Viarani

Abstract:

This paper describes a method for detecting vehicles in traffic images which relies on motion estimation provided by a Block Matching Algorithm (BMA). Once motion has been estimated via the BMA, the motion field is regularised by means of a vector median filter. Finally, vehicles are detected by grouping together image blocks with similar motion. The main contributions of the paper consist of an effective approach to preventing wrong matches in static areas, which also yields a significant computational saving, and the use of an adaptive vector median filter aimed at avoiding vehicle erosion.
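
Block matching itself is a standard building block and can be sketched briefly. The code below is a generic SAD-based illustration of the kind of BMA step the method builds on, not the authors' implementation; frame data, block size and search range are invented.

```python
# Hypothetical sketch of block-matching motion estimation with a SAD criterion.
# For each block of the current frame, return the offset of its best match in
# the previous frame. Data and parameters are invented for illustration.
import numpy as np

def block_match(prev, curr, block=8, search=4):
    """Return, for each block of curr, the (dy, dx) offset of its best match in prev."""
    h, w = curr.shape
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            target = curr[by:by + block, bx:bx + block].astype(np.int32)
            best, best_sad = (0, 0), None
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue
                    candidate = prev[y:y + block, x:x + block].astype(np.int32)
                    sad = np.abs(target - candidate).sum()   # sum of absolute differences
                    if best_sad is None or sad < best_sad:
                        best, best_sad = (dy, dx), sad
            vectors[(by, bx)] = best
    return vectors

# Toy frames: a bright 8x8 "vehicle" moving 2 pixels to the right.
prev = np.zeros((16, 16), dtype=np.uint8); prev[4:12, 2:10] = 200
curr = np.zeros((16, 16), dtype=np.uint8); curr[4:12, 4:12] = 200
print(block_match(prev, curr))
```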


Title:

INTELLIGENT AGENTS GENERATING PERSONAL NEWSPAPERS

Author(s):

D. Cordero, P. Roldan, S. Schiafino and A. Amandi

Abstract:

NewsAgent is an intelligent agent that has the capability of generating personal newspapers from particular user preferences extracted by observation and feedback. This agent generates personal newspapers using static word analysis to extract a global classification and case-based reasoning for dynamic subclassification. The agent observes users through an applet capable of detecting page changes. It also records each user's newspaper-reading routine in order to analyse readings in terms of those routines. The contributions of this work are both a software architecture for interface agents operating on web pages and the classification of specific themes using case-based reasoning.


Title:

SOLVING THE TIMETABLING PROBLEM WITH SIMULATED ANNEALING

Author(s):

F. Melício, P. Caldeira and A. Rosa

Abstract:

School timetabling is an optimisation problem which consists in assigning lectures to timeslots while satisfying a set of constraints of various kinds. Due mostly to the constraints, this problem falls in the category of NP-complete problems. Simulated Annealing (SA) has been applied with significant success to different combinatorial optimisation problems. Nevertheless, any implementation of the SA algorithm is highly dependent on how its structural elements are defined, i.e., the solution space, the generation of new solutions and the cost function. In this paper, we try to solve the timetabling problem using simulated annealing and compare several parameters of the algorithm.
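
Those three structural elements (solution space, neighbour generation, cost function) can be seen in a minimal SA skeleton. The sketch below is an assumption-laden toy, not the paper's formulation or parameters: lectures are assigned to timeslots and the cost counts clashes between lectures sharing a teacher.

```python
# Hypothetical simulated-annealing skeleton for a toy timetabling instance.
# Data, moves and cooling schedule are invented for illustration only.
import math, random

random.seed(1)
lectures = {"L1": "Ana", "L2": "Ana", "L3": "Rui", "L4": "Rui", "L5": "Ana"}
timeslots = [0, 1, 2]

def cost(assignment):
    clashes = 0
    items = list(lectures.items())
    for i, (a, teacher_a) in enumerate(items):
        for b, teacher_b in items[i + 1:]:
            if teacher_a == teacher_b and assignment[a] == assignment[b]:
                clashes += 1            # same teacher, same timeslot
    return clashes

def neighbour(assignment):
    new = dict(assignment)
    new[random.choice(list(lectures))] = random.choice(timeslots)   # move one lecture
    return new

current = {lec: random.choice(timeslots) for lec in lectures}
temperature = 5.0
while temperature > 0.01:
    candidate = neighbour(current)
    delta = cost(candidate) - cost(current)
    # Accept improvements always, worse solutions with Boltzmann probability.
    if delta <= 0 or random.random() < math.exp(-delta / temperature):
        current = candidate
    temperature *= 0.95                  # geometric cooling

print("final cost:", cost(current), current)
```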


Title:

A DISTRIBUTED APPROACH TOWARDS DESIGNING INTELLIGENT TUTORING SYSTEMS FOR THE WORLD WIDE WEB

Author(s):

Codrin Ionut Zolti, Stefan-Lucian Voinea, Gabriel Dima, Marcel Profirescu, Ion Miu and Gheorghe Olteanu

Abstract:

This original approach to ITS is defined by several features: a WWW-dedicated architecture, a distributed way of working in both the design and exploitation phases, and a heuristic method to gather the needed information from all the people involved in the project rather than from a limited number of experts. The development process is enhanced by shifting it toward the Open Source Software community. Inter-human communication receives a great deal of attention, as it is an important speed-up factor for the learning process.


Title:

GDOS – A GRAPHICAL DIAGNOSIS-ORIENTED EXPERT SYSTEM DEVELOPMENT TOOL

Author(s):

Joaquim Filipe, Ana Fred and Mário Fernandes

Abstract:

This paper presents an integrated environment for the development of diagnosis-oriented expert systems, comprising graphical support for knowledge acquisition, editing, explanation and validation. GDOS (Graphical Diagnosis-Oriented Shell), a Windows-based software tool, extends one of the most popular shells - CLIPS (C Language Integrated Production System) - with the following features: a backward chaining engine; graph-based explanation facilities; a knowledge editor including a fuzzy fact editor and a rules editor, with fact-rule integrity checking; a belief revision mechanism; and a built-in case generator and validation module. Most tools emphasise knowledge acquisition and knowledge base construction without much concern for validation. GDOS's distinctive feature, besides its emphasis on graphical orientation in all phases of system development, consists of a specially designed tool for assisting in the validation phase. GDOS has supported the development of the PSG Expert System, under the ENN project, where it has proved to be a valuable tool for the development of medical expert systems. It is currently being used for developing a new system, also in the medical domain.


Title:

ADVANTAGES OF A HIERARCHICAL PRESENTATION OF DATA STRUCTURES

Author(s):

J.A. Bakker

Abstract:

Using the development of a scheduling system as an example, we demonstrate that the usability of graphical data models can be improved by a hierarchical presentation of object types. Such abstraction hierarchies clearly show the presence of semantic links. Their perception is essential for the evaluation of data models, the specification of additional rules, the design of applications and queries, and the design of data distribution schemes as well. We conclude that system design can be improved significantly by using a hierarchical presentation of object types.


Title:

TRAFFIC SIMULATION OF LARGE REGIONS

Author(s):

Pedro Mendes and João Menano

Abstract:

The application of artificial life methods to traffic simulation allows us to experiment with several solutions to traffic problems. Former strategies have a considerable number of problems and a different approach is needed. What is intended with this article is the description of a new method: macroscopic simulation using inter- and intra-zonal traffic.


Title:

PROCESS MODELLING WITH NATURAL LANGUAGE INPUT

Author(s):

Geetha K. Abeysinghe and Christian R. Huyck

Abstract:

In this paper we focus on the process elicitation stage of process modelling, which is crucial. The aim is to aid both the modeller and the process user in process elicitation. For those who carry out the process (process actors), it is much more natural and easy to describe what they do in natural language, but it is inevitable that many of these descriptions will contain ambiguity. If the natural language description can be automatically converted into a graphical form which is also executable, then these executable models can be presented to the process actors and easily verified by demonstration.


Title:

EXPERT SYSTEM OBJECT-ORIENTED COMBINING SYSTEMATIC AND HEURISTIC REASONING

Author(s):

Lucimar F. de Carvalho, Hugo J. T. de Carvalho, Júlio C. Nievola, Celso A. Kaestner, Cristiane Koehler, Charles T. Batezini, Raquele Z. Grazziotin and Vinícius S. Borguetti

Abstract:

This paper presents a Knowledge Based System (KBS) to support the clinical diagnosis of epileptic seizures. It is based on the seizure-type classification of the International League Against Epilepsy (1981) and on the KADS methodology. The classification uses symbolic Artificial Intelligence techniques, is implemented in the KAPPA-PC shell (heuristic reasoning) and follows the Object Orientation paradigm (systematic reasoning). The KADS methodology provides a flexible life-cycle model for structuring and controlling the development process; rules and guidelines for guiding the knowledge engineer through the life-cycle model; and methods, tools and techniques for supporting various life-cycle model activities. Further work is being conducted on the following issue: using this model to treat uncertainty in medical diagnosis with Bayes' Theorem and the conditional probability of data. Bayes' Theorem is a quantitative method for the revision of known probabilities in the light of new information.


Title:

TYPE INFERENCE IN LOGIC PROGRAMMING CONTEXT

Author(s):

Cedric Luiz De Carvalho, Eduardo Costa Pereira and Rita Maria Da Silva Julia

Abstract:

This paper presents an algorithm for type inference in logic languages at compile time. It consists of a symbolic execution of the program in order to find the types corresponding to all predicate argument positions. It handles ground type arguments, such as integers and strings, and arbitrary compound type arguments, such as f(integer) or f(integer, g(string)). The algorithm exhibits parametric polymorphism and does not require any kind of declaration. It was implemented as part of the logic language NetProlog. It is a well known fact that verifying type consistency at compile time allows the detection of possible programming errors. Once type verification is concluded, the system has no further need for information about data types. One can then assume that only type-correct arguments are passed down to predicates. The absence of runtime type verification greatly improves the performance of the generated code. Notice that, if type verification is not carried out at compile time, the generated code must contain filters and guards to prevent wrong data from reaching predicate calls. The programmer must distribute these guards throughout the source code.


Title:

SYSTEM FOR OPERATIONAL PROCESS MANAGEMENT

Author(s):

Alexandre Bragança and Carlos Ramos

Abstract:

This abstract presents a system for operational process management, namely of manufacturing processes. The presented system is intended to be used mainly in small and medium-sized enterprises with productive systems such as batch or job-shop. The main purposes in developing this system were flexibility, accessibility and ease of use. The system is divided into two areas: the area supporting the specification of productive processes and the area dedicated to scheduling, dispatching and control of production. The system is based on a graphic tool that is used to specify processes, schedule process instances and monitor those instances.


Title:

MANAGING KNOWLEDGE FOR AN INTELLIGENT DOCUMENT PROCESSING SYSTEM

Author(s):

Jianshun Hu, Xuhong Li, D.C. Hung, Simon Doong and Peter A. Ng

Abstract:

In this paper, we describe the representation and organization of the knowledge about the infrastructure for storing documents and about the document base itself, which supports fast retrieval of documents and of information from various documents. Several components of the knowledge base of TEXPROS, such as the system catalog, the frame template base and the frame instance base, are discussed.


Title:

OPTIMIZING A REAL-WORLD SCHEDULING PROBLEM IN THE STEEL INDUSTRY

Author(s):

J. Psarras, G. Mitrou, I. Makarouni and D. Askounis

Abstract:

The steel industry's scheduling is characterized by the problem volume, the complexity of the objective function and the multiplicity of the constraints. The problem at hand is a flow shop scheduling problem and requires the effective use and programming of the factory resources. It incorporates the following elements: a) a group of machines, ruled by specific functional and physical constraints; b) a group of "jobs", each of which is processed by the machines following a strict order; c) a group of constraints: physical, functional, hard, soft, preferences, etc.; d) an objective function, used to evaluate the quality of the solution. The problem is one of the most difficult NP-hard problems. In order to handle the inherent intractability of this problem, a system based on Constraint Logic Programming (CLP) and heuristics, representing the empirical knowledge of this specific industry, has been designed and implemented on the ECLiPSe (ECLiPSe Common Logic Programming System) platform. ECLiPSe is a development environment for constraint programming applications, containing several constraint solver libraries that allow efficient programs to be developed for solving combinatorial problems in planning, scheduling, resource allocation, timetabling, transport, etc. It is a single powerful tool with extended Prolog technology, a persistent knowledge base, constraint handling facilities and parallelism.


Title:

NEGOTIATION AMONG INTENTIONAL AGENTS

Author(s):

Fernando Lopes, Nuno Mamede, Helder Coelho and A. Q. Novais

Abstract:

This paper presents a model for intentional agents operating in multi-agent domains and a mechanism for generating negotiation proposals. Intentional agents are defined as autonomous computational processes with cognitive structures composed of beliefs, desires, intentions and expectations, which act rationally to pursue their own goals. They perform means-end reasoning and generate alternative plans of action, which they select according to their preferences and adopt for further execution. In this paper a set of mental states of intentional agents is defined, namely beliefs, desires, intentions and expectations, and the relationships between them are specified. A detailed framework for means-end reasoning based on planning from second principles is also presented. Agents are assumed to have a library of plan templates or schemata, which are defined as frame-like structures with five components: a header, an argument list, a type, a body and a list of constraints. Means-end reasoning is based on retrieving from this library alternative plan templates that match a top-level goal and processing their constituent body components. The retrieved plan templates represent different ways to achieve the goal and only one of them is selected for execution. The selected plan template is used to start the construction of a plan structure: a hierarchical and temporally constrained AND-tree. The header component is made the root node of the plan structure. The body component is then processed and its body steps are placed at appropriate points in the plan structure. The alternative plan templates are stored for negotiation purposes and placed in the plan structure alongside the selected one. The whole process is then repeated for every body step of the selected plan template.
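
A minimal, hypothetical Java sketch of the frame-like plan template and the retrieval step described above (the five components and the matching of a top-level goal); all names and fields here are illustrative only.

    import java.util.*;

    // Hypothetical sketch of the frame-like plan template described in the abstract:
    // header, argument list, type, body and constraints.
    record PlanTemplate(String header,            // goal the template achieves
                        List<String> arguments,   // argument list
                        String type,              // e.g. "primitive" or "composite"
                        List<String> bodySteps,   // body: sub-goals to be expanded in turn
                        List<String> constraints) // e.g. temporal constraints between steps
    {}

    final class PlanLibrary {
        private final List<PlanTemplate> templates = new ArrayList<>();

        void add(PlanTemplate t) { templates.add(t); }

        // Means-end reasoning step: retrieve every template whose header matches the goal.
        // All matches are kept as alternatives; only one is selected for execution.
        List<PlanTemplate> retrieve(String goal) {
            return templates.stream().filter(t -> t.header().equals(goal)).toList();
        }
    }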


Title:

TASK MODELING IN A MULTI-AGENT LOGISTIC DOMAIN

Author(s):

Joaquim Reis and Nuno Mamede

Abstract:

This work concerns scheduling in a broad sense, i.e., planning/scheduling at the logistic inter-enterprise level, and is also related to earlier work in coordinated planning, management planning, and modern work in supply-chain management. We describe logistic tasks, to be scheduled by agents playing the roles of interdependent enterprises (managing renewable resources such as factories, warehouses and transportation fleets) acting as clients and as suppliers in multi-product cooperative production/distribution networks, the constraints on those tasks, and how uncertain knowledge about tasks can be represented. In our opinion the scheduling problem described can be seen as a dynamic Distributed Constraint Satisfaction Problem (DCSP). The scheduling activity can be viewed, through the DAI multi-agent paradigm, as a multi-agent coordination problem in a semi-cooperative environment because, in scheduling their interdependent tasks, agents will cooperate to avoid violating hard (capacity and temporal) constraints but will try to satisfy as many of their own scheduling preferences as possible.


Area 3 - SYSTEM ANALYSIS AND SPECIFICATION

Title:

AN ENTERPRISE-WIDE WORKFLOW MANAGEMENT SYSTEM

Author(s):

Lawrance M. L. Chung and Keith C. C. Chan

Abstract: Advancement of workflow technology has resulted in many different standalone and unrelated workflow applications developed by different departments in an organization. Some of them have common workflow components that can be modeled by a generic workflow. Others may require a tailor-made component for specific applications. Since they are developed independently, it is not uncommon for conflicts or inconsistencies to be found among them. To better manage these applications, a global workflow application needs to be developed so that data and information from all workflow applications in an organization can be consolidated. Towards this goal, we propose here a structured enterprise-wide workflow model with a number of unique features to handle enterprise-wide business. We focus, in particular, on the project management aspects.

Title:

GENERATING OBJECT-Z SPECIFICATIONS FROM USE CASES

Author(s):

Ana Moreira and João Araújo

Abstract: The importance of use cases has been growing for the last few years. We believe they are important in helping developers capture requirements. The work presented here formalises use cases using basic temporal logic to define history invariants within Object-Z class schemas. This is accomplished by proposing a set of formal frameworks integrated within a process.

Title:

AN AGGREGATION BASED APPROACH TO RECONCILE EXPRESSIVENESS AND UNAMBIGUITY IN OBJECT ORIENTED MODELLING LANGUAGES

Author(s):

Luc Goossens

Abstract: The fact that an OO modelling language like UML enjoys a much higher acceptance by the software industry than any formal language, despite its informal and ambiguous semantic definition, indicates that expressiveness is a much more desired feature than formality. However, we believe that expressiveness does not a priori exclude formality. In this paper we present an approach to formalising UML. The most distinctive feature of our approach is that, although framed in an OO context, it is not based upon classification but upon aggregation. This reflects our conviction that aggregation combined with abstraction, much more than classification, can contribute to managing the complexity of applications. The foundation for our formal semantic model is the implementation-level trace. The basic idea is to group objects and quantities of processing from this trace together into bigger units, respectively termed aggregate-objects and tasks. The trace can then be reformulated in terms of these bigger units and it is possible to express properties of these bigger units at this new level of abstraction.

Title:

ALLIANCE: AN AGENT-BASED CASE ENVIRONMENT FOR ENTERPRISE PROCESS MODELLING, ENACTMENT AND QUANTITATIVE CONTROL

Author(s):

Ilham Alloui, Sorana Cîmpan, Flavio Oquendo and Hervé Verjus

Abstract: Nowadays enterprise processes are characterized by their: (a) cooperative nature; (b) geographical distribution/decentralisation; (c) permanent change. CASE environments to model and manage enterprise processes must meet these requirements. The paper presents the ALLIANCE (ALgebra and Logic for Interoperable AgeNts in Cooperative Environments) framework, an agent-based CASE environment for enterprise process modelling, enactment and quantitative control. The framework, originally intended to support process definition, instantiation and enactment, is extended with support for quantitative process management. It provides project managers with advanced fuzzy-logic-based monitoring facilities, support for decision making and change control mechanisms. Its main feature is that it relies, on the one hand, on a goal-oriented approach for quantitative process management and, on the other hand, on interoperable decentralized cooperative software agents for achieving goals. In our approach, both software-intensive enterprise processes and quantitative management are carried out by software agents. One important feature of such a framework is its adaptivity, as it provides facilities for evolution: software agents are interchangeable and may also evolve themselves.

Title:

IDEF0-BASED SOFTWARE PROCESS ANALYSIS FOR SCHEDULING SYSTEMS DEVELOPMENT

Author(s):

Akihiro Abe and Tetsuo Tamai

Abstract: Scheduling systems must solve combinatorial optimization problems on which various large-scale constraints are placed, based on an understanding of complicated production and distribution environments. Their scope of application has expanded and they are becoming increasingly important as business applications. In the development field, however, the study of scheduling system development methodologies is lagging behind, relying on experimental or trial-and-error-based development. This paper reports an analysis performed using IDEF0 to extract the characteristics of software development processes in four actual cases of development, as a basic study for the design of scheduling system development methodologies. The results of analysis clarified certain software process characteristics and development bottlenecks common to the examples.

Title:

QUANTITATIVE MANAGEMENT OF OO DEVELOPMENT

Author(s):

Marjan Hericko, Matjaz B. Juric, Tomaz Domajnko, Ales Zivkovic and Ivan  Rozman

Abstract: Business and project decisions should be based on factual, quantitative information. Unfortunately, assessments of software processes confirm that many companies have not yet recognized the importance of measurement and of gathering metrics data. Without a doubt, automated metrics collection tools and environments should provide the infrastructure for the necessary improvements and encourage quantitative management. In this paper the basic concepts of an environment based on a central metrics repository are presented, as well as implemented tools that support the evaluation and improvement of the OO development process. Special attention is paid to the integration of different aspects of measurement.

Title:

ADAPTIVE WORKFLOW

Author(s):

W.M.P. van der Aalst, T. Basten, H.M.W. Verbeek, P.A.C. Verkoulen and M. Voorhoeve

Abstract: Today’s information systems do not support adaptive workflow: either the information system abstracts from the workflow processes at hand and focuses on the management of data and the execution of individual tasks via applications or the workflow is supported by the information system but it is hard to handle changes. This paper addresses this problem by classifying the types of changes. Based on this classification, issues such as syntactic/semantic correctness, case transfer, and management information are discussed. It turns out that the trade-off between flexibility and support raises challenging questions. Only some of these questions are answered in this paper; most of them require further research. Since the success of the next generation of workflow management systems depends on the ability to support adaptive workflow, it is important to provide answers for the questions raised in this paper.

Title:

RE-ARRANGING THE CHAIRS: TACKLING THE OWNERSHIP ASPECTS OF ORGANIZATIONAL TRANSITION AND INFORMATION SYSTEMS SUPPORT

Author(s):

Dennis Hart and Greg Whymark

Abstract: The development of information systems and information management continue to present considerable challenges for many organizations, and more often than not for reasons other than technological ones. Politically sensitive issues that are frequently raised by and bound up in such matters are an important cause of difficulties. This paper proposes that perceptions of ownership of business processes and data by various groups within an organization can be a potent contributing factor in the occurrence of such political troubles. Using the concept of information wards, a graphical model that links the scope of system development or organizational change and ownership perceptions to the likelihood of political difficulties is outlined. The model in turn forms the basis for a prototype specialized group support system called Info*Warder, also briefly described in the paper. This software allows representatives of organizational stakeholders to stake their claims to business processes and data that are within the scope of systems or change proposals, thus permitting early detection of differences of opinion and potential conflicts. Finally, an action research study involving three Australian State Government departments undergoing significant change both in their roles and information systems support arrangements, and using the Info*Warder software, is described.

Title:

SOFTWARE EFFORT ESTIMATION: THE ELUSIVE GOAL IN PROJECT MANAGEMENT

Author(s):

J. Javier Dolado, Luis Fernández, M. Carmen Otero and Leire Urkola

Abstract: The estimation of the effort to be spent in a software project is still an open problem. Having a good estimate of the variables at the beginning of a project makes the project manager confident about the future course of action, since many of the decisions taken during development depend on, or are influenced by, the initial estimates. The root of the problems can be attributed to the different methods of analysis used and to the way in which they are applied. On the one hand, we may not use adequate independent variables for prediction and/or we may not build the correct predictive equations. On the other hand, the method of prediction may itself have an effect on the predictions, meaning that it is not the same to use classical regression as other methods of analysis. We have applied linear regression, neural networks and genetic programming to several datasets. We infer that the problem of accurate software estimation is not going to be solved immediately by mathematical analysis of simple relationships alone.
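
As a minimal sketch of the kind of simple relationship the abstract refers to (not the authors' actual models or data), the following Java code fits an ordinary least-squares line of effort against a single size measure.

    // Minimal sketch: ordinary least-squares fit of effort against one size measure.
    final class EffortRegression {
        // Returns {intercept, slope} for effort = a + b * size.
        static double[] fit(double[] size, double[] effort) {
            int n = size.length;
            double sx = 0, sy = 0, sxx = 0, sxy = 0;
            for (int i = 0; i < n; i++) {
                sx += size[i]; sy += effort[i];
                sxx += size[i] * size[i]; sxy += size[i] * effort[i];
            }
            double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
            double intercept = (sy - slope * sx) / n;
            return new double[] { intercept, slope };
        }

        public static void main(String[] args) {
            double[] kloc   = { 10, 20, 40, 80 };   // illustrative data only
            double[] months = { 6, 11, 23, 44 };
            double[] ab = fit(kloc, months);
            System.out.printf("effort ~ %.2f + %.2f * KLOC%n", ab[0], ab[1]);
        }
    }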

Title:

INTEGRATED APPROACH FOR INFORMATION SYSTEM ANALYSIS AT THE ENTERPRISE LEVEL

Author(s):

Remigijus Gustas

Abstract: The ability to describe a process in a clear and sufficiently rich way is acknowledged as crucial to information system analysis. Current workflow models used in business process re-engineering offer limited analytical capabilities. Entity-Relationship models and Data Flow Diagrams are closer to the technical system development stage and, therefore, do not capture organisational aspects. Although object-oriented models are quite comprehensible for users, they do not provide rules of reasoning or complete integration between static and dynamic aspects. The ultimate goal of this paper is to introduce principles of integration for some approaches to information system analysis. Such principles should also build a common basis for a non-traditional approach to business process modelling and integration.

Title:

USING CRITICALITY AS A BASIS FOR DETERMINING INFORMATION REQUIREMENTS FOR AN EIS

Author(s):

Gregory K. Whymark

Abstract: This paper describes a process for identifying the content required of an executive information system (EIS). Too many EIS fail to achieve their potential because they do not deliver what the user needs; they rely instead on the technical solution of a highly user-friendly interface and merely deliver what is available in the corporate systems (the data availability syndrome). Based on the concept of criticality, previous research is used to develop and illustrate a methodology for identifying what executives need from an EIS. Lastly, the suitability of the methodology for executives is demonstrated with examples drawn from case studies.

Title:

A METHOD FOR INTEGRATING LEGACY SYSTEMS WITHIN DISTRIBUTED OBJECT ARCHITECTURE

Author(s):

Matjaz B. Juric, Ivan Rozman and Marjan Hericko

Abstract: The ability of a new technology to reuse legacy systems is very important for its economic success. This paper presents a method for integrating legacy systems within distributed object architectures. The necessary steps required for integration are defined, it is explained how to define object interfaces, and a detailed overview of how to implement the wrappers is given. The paper also answers the question of which distributed object model is most suitable for legacy integration: a decision model is defined and the evaluation results are presented.

Title:

THE SOFTWARE DIMENSIONS THEORY

Author(s):

Claudine Toffolon

Abstract: Since early times, software academics and practitioners have talked about a "crisis" in the software industry. Nowadays this crisis is even more critical, as organizations cannot exist without operational software to support their production and decision-making processes. Solutions proposed to date to deal with this software crisis have partly failed, on the one hand because of the increasing complexity of software systems and, on the other hand, because they do not take into account all the aspects of software engineering: economical, technical, functional and structural as well as human and organizational aspects. This paper proposes a theory, called the "software dimensions theory", which permits a deep analysis of the software crisis. In particular, this theory may be used to elaborate software development methodologies that help cope with the effects of the software crisis. Our work rests on an organizational model based on the Leavitt model, which takes into account behavioural aspects and the impact of information technology on modern organizations. With the help of this model we determine ten software dimensions. Case studies, lessons learned and problems encountered while using the software dimensions theory are synthesized in the last part of this paper.

Title:

INCREASING OPPORTUNITIES FOR REUSE THROUGH TOOL AND METHODOLOGY SUPPORT FOR ENTERPRISE-WIDE REQUIREMENTS REUSE AND EVOLUTION

Author(s):

K. Suzanne Barber, Thomas J. Graser, Stephen R. Jernigan and Col. John Silva

Abstract: Gathering, monitoring, and managing requirements is a significant aspect of a successful integration and reuse effort, and software development failures can often be attributed to poorly defined and poorly managed requirements. This paper discusses a tool suite under development at the Laboratory for Intelligent Processes and Systems designed to foster reuse by aiding requirements management and evolution and supporting traceability throughout the software development lifecycle.

Title:

THE FUTURE OF ENTERPRISE GROUPWARE APPLICATIONS

Author(s):

S. Terzis, P. Nixon, V. Wade, S. Dobson and J. Fuller

Abstract: This paper provides a review of groupware technology and products. The purpose of this review is to investigate the appropriateness of current groupware technology as the basis for future enterprise systems and to evaluate its role in realising the currently emerging Virtual Enterprise model for business organisation. It also identifies the ways in which current technological developments will transform groupware technology and drive the development of the enterprise systems of the future.

Title:

AN INTERPRETIVE APPROACH TO ORGANISATIONAL INQUIRY AND DESCRIPTION FOR INFORMATION SYSTEMS DEVELOPMENT IN INNOVATION CENTRES

Author(s):

Pedro José Leonardo and António Lucas Soares

Abstract: The work presented in this paper addresses the analysis and specification phases of information systems (IS) development from the social and organisational perspectives. Some preliminary experiences in combining a well-known organisational inquiry methodology (SSM) with a novel conceptual modelling framework for the representation of social and organisational requirements in IS development are described. The application of the resulting analytical framework is illustrated in a case study involving an innovation centre.

Title:

INTELLIGENT AGENTS FOR QoS CONTROL AND ADAPTATION IN DISTRIBUTED MULTIMEDIA SYSTEMS

Author(s):

Farid Naït-Abdesselam

Abstract: Presently, many distributed multimedia systems adapt to their changing environments and Quality of Service (QoS) requirements by exchanging control and feedback data between servers and clients. In order to realize a more flexible adaptation to QoS in distributed multimedia systems, this paper presents a new approach based on distributed software agents located in both network nodes and end-systems. By having good knowledge of local resources at each component, and with their ability to communicate in order to share their knowledge, the distributed software agents can alleviate major fluctuations in QoS by providing load balancing and resource sharing between competing connections. In order to show the feasibility of our active adaptation approach, simulations have been conducted to adapt a delay-sensitive flow, such as a distributed interactive virtual environment. We have performed our evaluations for short-range and long-range dependent (i.e. self-similar) traffic patterns. Preliminary results show a viable system, which exhibits a smooth and noticeable improvement in perceptual QoS under heavy network load. In addition, our results indicate that the network and the application benefit more from the algorithm when the traffic exhibits long-range dependence.

Title:

INTEGRATION OF INFORMATION SYSTEMS IN LARGE-SCALE ENTERPRISES USING MODEL-INTEGRATED COMPUTING

Author(s):

Amit Misra, Janos Sztipanovits, Gabor Karsai, Michael Moore, Akos Ledeczi and Earl Long

Abstract: The advances in Information System (IS) technology in recent years have allowed manufacturing enterprises to use and apply increasingly sophisticated computer-based systems to run their business and to achieve a competitive advantage. However, these systems mostly exist in isolation, with minimal (and expensive) integration. Of late, primarily due to emerging competitive global enterprises and markets, the need to integrate the global enterprise has become more urgent. There are many dimensions to the integration problem that relate to IS: integration across geographically distributed enterprises and offices of an enterprise, integration with suppliers and customers, integration of various domains of activities, integration of different tools, collaborative design, etc. In this paper, we will identify the different layers and dimensions of the integration problem, the issues, and the challenges involved. We will use Saturn Site Production Flow (SSPF), a system developed using the Model-Integrated Computing approach, as an example of a global application. Then we will examine the issues that arise when a number of different tools and applications have to be integrated in the IS of a large-scale and distributed enterprise.

Title:

SPECIFYING SEMANTIC CONSTRAINTS FOR A HEALTHCARE SCHEDULER

Author(s):

J. Artur Vale Serrano, Marta Jacinto and João Paulo Cunha

Abstract: The scheduling activity in a hospital environment is an essential task. It must comply with a number of constraints that rule the various clinical acts, such as surgical operations or medical examinations. Because these constraints are characteristic of the semantics of the application domain, we call them semantic constraints. It is vital that semantic constraints are satisfied if the safety of a clinical act being scheduled is to be guaranteed. Most current hospital scheduling systems rely on the knowledge of the operator to ensure compliance with such constraints. This procedure is error-prone and can be unsafe. We believe that the use of formality in a safety-critical domain such as healthcare can lead to better, more reliable and, ultimately, safer systems. In this paper we propose a novel system for healthcare scheduling in which semantic constraints are formally specified. The constraints are then automatically checked by the system at execution time to ensure the correctness of the scheduling activity.
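
A minimal sketch, in Java and with invented names and an invented example rule, of what checking a semantic constraint at execution time might look like; the paper itself specifies constraints formally rather than as code.

    import java.time.Duration;
    import java.time.LocalDateTime;
    import java.util.List;

    // Hypothetical sketch of execution-time constraint checking.
    record Booking(String patient, String act, LocalDateTime when) {}

    interface SemanticConstraint {
        // Returns an error message if the new booking would violate the constraint, null otherwise.
        String check(Booking candidate, List<Booking> agenda);
    }

    // Example semantic constraint: two acts for the same patient must be at least
    // 24 hours apart (an invented illustration, not a clinical rule).
    final class MinimumSeparation implements SemanticConstraint {
        public String check(Booking candidate, List<Booking> agenda) {
            for (Booking b : agenda) {
                if (b.patient().equals(candidate.patient())
                        && Duration.between(b.when(), candidate.when()).abs().toHours() < 24) {
                    return "violates minimum separation with " + b.act();
                }
            }
            return null;   // constraint satisfied
        }
    }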

Title:

FACILITATING ORGANISATIONAL ACTIVITIES USING PLANS AND AUDITS

Author(s):

Carlos J. Costa, Tânia Ho and Pedro Antunes

Abstract: This paper departs from the observation that Group Decision Support Systems (GDSS) present important limitations that constrain their usage in current organisations. An approach to widespread GDSS usage is proposed, based on: (1) supporting the facilitation of decision-making processes; and (2) supporting follow-up processes, intended to integrate decisions throughout organisations. The proposed approach leads to the specification of two software components designated Plans and Audits. Plans foster and guide the planning of group decision-making activities, while Audits support monitoring and corrective actions. A framework for simulating the functionality of Plans and Audits is also proposed.

Title:

BUSINESS OBJECTS IN CONCEPTUAL INFORMATION ARCHITECTURE MODELLING

Author(s):

Rui Gomes and António Dias de Figueiredo

Abstract: There is no standard definition of a business object; however, we consider the OMG's (Object Management Group) definition to be the closest to the business language. Although it mentions that a business object can be represented in a programming language (an implementation perspective), it understands the object as an active thing in the business domain, defining its characteristics in a natural or modelling language. This definition does not establish links between an active thing in the business domain and the software object, and does not suggest how to derive the business object from the business. This paper presents, from a representation perspective, the types of business objects we can use to build a conceptual model of an information architecture during organizational information system planning, pointing out in each specification the way they relate within the business.

Title:

TRANSPORTATION IN POSTAL ENTERPRISE OF SERBIA: APPLICATION DEVELOPMENT AND INTRANET IMPLEMENTATION

Author(s):

Nada Milosavljevic, Dragoslav Rasinac, Ranko R. Nedeljkovic and Dejan Damnjanovic

Abstract: This paper presents some of the results obtained during the execution of an enterprise information system project, built for one of the largest companies in Yugoslavia, the Postal Enterprise of Serbia. The paper is centred on the set of problems treated and solved in the transportation and vehicle fleet subsystem areas, in particular: vehicle operation, vehicle maintenance, supplying and administrative tasks. The results obtained using object-oriented analysis and design (OOAD), as well as rapid application development (RAD) tools, enabled the production of the information system and its smooth connection with other parts of the postal information system via a large internal WAN, named the Postnet. The focus of the paper is on the set of problems which connect the results obtained by OOAD with those achieved through the use of RAD tools.

Title:

A HIGH SPEED ACCESS METHOD TO DATA STRUCTURES WITH INHERITANCE HIERARCHY

Author(s):

Shuichi Nishioka, Fumikazu Konishi, Jun'Ichi Kuroiwa, Makoto Onizuka and Jinnosuke Nakamura

Abstract: In recent years, ORDBMS, which offers the functions of both RDBMS and OODBMS, has been receiving more attention for the management of complex data. Accordingly, we extended our RDBMS to cover the object-oriented model. Our RDBMS is structured around a memory-based architecture for telecommunication applications. The object-oriented model has many functions to extend the data type with the goal of enhancing data management and access. Inheritance is one of the features of the object-oriented model. There are two patterns of queries when searching over instances of inherited classes: searching among the instances of a single class, and searching among the instances of a parent class and its subordinate classes. As a way to speed up these queries, two kinds of indices have been suggested, an index for each class and one overall index, but they were designed to optimize disk I/O and so cannot be applied to the memory-based architecture. In this paper, we propose an efficient index for data structures with an inheritance hierarchy in a memory-based architecture. In addition, we implement the index to evaluate its effectiveness.
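
To make the two query patterns concrete, here is a hypothetical in-memory sketch in Java of an index kept per class, with a hierarchy walk for queries over a class and its subclasses; it only illustrates the access patterns and is not the index proposed in the paper.

    import java.util.*;

    // Hypothetical in-memory index over a class hierarchy: one sorted index per class,
    // plus a hierarchy walk for queries over a class and its subordinate classes.
    final class HierarchyIndex<K extends Comparable<K>, V> {
        private final Map<String, NavigableMap<K, V>> perClass = new HashMap<>();
        private final Map<String, List<String>> subclasses = new HashMap<>();

        void defineClass(String name, String parent) {
            perClass.put(name, new TreeMap<>());
            if (parent != null) subclasses.computeIfAbsent(parent, p -> new ArrayList<>()).add(name);
        }

        void insert(String cls, K key, V instance) { perClass.get(cls).put(key, instance); }

        // Pattern 1: search within a single class.
        Collection<V> querySingle(String cls, K from, K to) {
            return perClass.get(cls).subMap(from, true, to, true).values();
        }

        // Pattern 2: search over a class and all of its subordinate classes.
        List<V> queryHierarchy(String cls, K from, K to) {
            List<V> result = new ArrayList<>(querySingle(cls, from, to));
            for (String sub : subclasses.getOrDefault(cls, List.of())) {
                result.addAll(queryHierarchy(sub, from, to));
            }
            return result;
        }
    }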

Title:

CUSTOMER ORIENTED ENTERPRISE INFORMATION MANAGEMENT: A CUSTOMER RELATIONSHIP LIFE CYCLE APPROACH

Author(s):

Satya P. Chattopadhyay

Abstract: Retention of existing customers is a priority for businesses that wish to survive and prosper in the present marketplace. The extremely high cost of acquiring new customers in a mature market has pushed organizations into actively seeking to build and sustain long-term relationships with customers. Such relationships are expected to be strong enough to provide a substantial barrier to switching business over to the competition except in the most extreme cases. There are cognitive, affective and behavioral aspects of such relationships that are relevant. The transition of a neutral or negative relationship into positive territory is based on changes in one or more of these dimensions. Information exchanges between an organization and its customers play a significant role in such a change. The nature and scope of the relationship with a customer change as the customer's needs evolve over the course of the product life cycle in the customer's industry. A framework merging the concepts of "customer relationship management" and "product life cycle" into a "customer relationship life cycle" is proposed.

Title:

BEYOND RELATIONSHIP MARKETING Technologicalship Marketing: Relationship marketing based on technology

Author(s):

Mosad Zineldin

Abstract: Today's business environment is characterized by explosive technological growth, continuous reorganization of economic boundaries, and an endless array of new technological communication tools. Relationship marketing has been devised by organizations to navigate through this disorder. This paper is part of a research effort whose ultimate objective is to better understand how information technology can be used as a source of competitive advantage in marketing activities in order to cope with the challenges of the 21st century. Our theoretical framework is based on the current understanding of recent developments in relationship marketing and information technology theories and concepts, together with their application in practice by some organizations. It is very obvious that nowadays organizations and people (consumers) will find it difficult to separate a relationship from information technology and other technological advances. Thus, these new types of relationships could be called technologicalship (relationships based on the use of technology). The paper argues that relationship marketing is not a complete paradigm shift. Technologicalship customers expect new kinds of relationships and solutions because the technology makes them possible. Hence, relationship marketing based on technological advances can be considered a new paradigm. The paper shows that traditional, relationship, and technologicalship marketing are fundamentally different.

Title:

INFORMATION MODELLING FOR RESOURCE-ORIENTED BUSINESS PROCESS

Author(s):

Jamal El-Den and J.P. Briffaut

Abstract: As companies tear down the walls between the different parts of their work, they are realising that these various parts need to share the same flow of information. Putting all the company's information into one giant software system is not without risks. Enterprise Resource Planning (ERP) programs are presented as being able to support the "supply chain management" concept, facilitating the flows of goods and information with suppliers upstream and customers downstream and enabling a reduction in the throughput time of goods along the pipeline.

Title:

THE INTEGRATED FRAMEWORK FOR FAULT-TOLERANT SYSTEM SIMULATION AND DESIGN

Author(s):

A. E. Alexandrovich, R. M. Nikitin and V. O. Chouckanov

Abstract: The paper presents an integrated framework for the analysis, simulation and design of critical applications. The proposed framework is used for building redundant real-time systems featuring high reliability and availability. The core of the proposed framework is the system model, reflecting the most significant factors and parameters influencing system dependability. The model consists of two parts: a hardware structural model and a software structural model. The framework is validated through a number of critical designs implemented in the field of process control and avionics.

Title:

MULTI LANGUAGE AND MULTI PLATFORM SUPPORT FOR AGENT EXECUTION USING CORBA PLATFORM – AN IMPLEMENTATION

Author(s):

Orandi Mina Falsarella

Abstract: In this work a software entity is considered an agent if it has the mobility to move through a computer network and the autonomy to perform tasks delegated to it by a software application or user. The majority of software agent projects found in the literature aim to solve or eliminate code heterogeneity problems by adopting the Java language to build such agents. Adopting the Java language to solve heterogeneity issues can restrict the use of many programming languages and make the software agent paradigm less attractive. For example, agents that should have a high level of autonomy and intelligence to negotiate and take decisions while travelling through the network could not be written in Lisp or Prolog, which are more suitable for adding intelligence skills to agents. Also, an agent that has to perform a lot of calculation could not be written in the most appropriate language for that task. From this perspective, agents should be able to travel through the network and receive local support to be run independently of the language in which they have been written. The agent execution management services developed and implemented in this work support the following basic operations: preparing, scheduling and activating agent execution.

Title:

MAINTENANCE TYPES IN THE MANTEMA METHODOLOGY

Author(s):

Francisco Ruiz, Mario Piattini, Macario Polo and Coral Calero

Abstract: Maintenance is the most expensive stage in the software life-cycle. For this reason, it is important to have methodologies and tools so that we can approach this problem in the best possible way. The MANTEMA methodology is being developed with this objective. We present diverse aspects of this methodology, focusing on the maintenance types and the factors that determine the type that must be carried out in each situation. The needs of the development and maintenance phases of the software life-cycle are quite different. Therefore, it becomes necessary to give software maintenance the importance it deserves, keeping in mind its special characteristics and its differences from the development phase.

Title:

TOWARDS A VISUAL ENVIRONMENT FOR ENTERPRISE SYSTEMS

Author(s):

Stewart Thomson and John A. W. McCall

Abstract: The Oncology WorkbenCH (OWCH) is a decision support tool for cancer chemotherapy. Oncologists can create multi-drug schedules and see how the tumour and the patient react to the drug schedule. The Workbench is made up of several components: a Treatment Editor, Toxicity and Results Viewers, a Simulation Engine, an Optimisation Engine, and Communications and Data Collection and Retrieval tools [Boyle 1998]. OWCH is currently used over the Internet; the main purpose of this work is to develop an environment that allows oncologists to take the workbench components and create an application to meet their own individual needs.

Title:

FACTORISATION OF OMT MODELS

Author(s):

Viviane Jonckers, Bruno Van Damme and Katja Verbeeck

Abstract: In object-oriented modeling techniques, objects are modeled from the real world. Concepts and components of the application domain are identified and organised in suitable models, each with an appropriate graphical notation. The process of acquiring these models is not always straightforward. In practice, modeling from the real world does not guarantee that the granularity and structure of the model are right from the start. This paper investigates how coherent sets of selected states of a flat state chart can be factored out. The goal is to improve the structure and granularity of the dynamic model and, as a consequence, also the structure of the static models. Initially lean models become overloaded very quickly when other use cases are covered. The idea arose when modeling multimedia services, since similar patterns were soon found in related applications. This principle of factorisation is also very useful when trying to reuse parts of the model elsewhere. The components and subsystems we identify and generate can be reused in other models. System models that describe an application as a configuration of basic components can be obtained in this way.

Title:

OBJECTS COUNT FOR SOFTWARE ESTIMATION

Author(s):

E. Chang, T. Dillon and M. Ilkov

Abstract: Today there are a number of estimation models that have been proposed to help predict the needs of a project and produce estimates. One such estimation technique is the Function Points method [2]; other methods include the COCOMO model [1][3] and the Putnam model [4]. Most of these models rely on parameters such as estimates of lines of code or other low-level functions, which are not appropriate to modern software development based on the object-oriented methodology using class libraries or shareware that facilitate reuse and specialisation. Because these methods rely on lines of code, low-level functions and counts of inputs or outputs, they are not useful in modern automatic or semi-automatic software engineering.

Title:

GUIDE TO DEVELOP AN EUROMETHOD COMPATIBLE INFORMATION SYSTEM METHODOLOGY A PRACTICAL EXPERIENCE OF ITS DEVELOPMENT

Author(s):

Antonio de Amescua Seco, Adoración de Miguel Castaño, Javier García Guzmán, Juan Llorens Morillo and Luis Fernández Sanz

Abstract: In this paper a practical experience in developing the Spanish information system methodology (Metrica V3, MV3), which had to be Euromethod compatible, is presented. Euromethod is the standard that an organisation has to follow in order to acquire or supply an information system for European public administration. This work details all the key areas that have to be considered for the development of a new IS methodology that meets the Euromethod standard, or for the adaptation of an existing methodology to Euromethod. For each key area, the basic elements to be considered are stated. This compatibility has great benefits: first, an organisation that supplies information systems or related services has the European market open to it; second, project managers get better planning and monitoring management; third, the acquisition of information systems is improved by taking full account of the problem situation and associated risks.

Title:

THE DECISION PROCESS IN SOFTWARE ENGINEERING: A FRAMEWORK BASED ON THE CALL OPTIONS THEORY

Author(s):

Claudine Toffolon and Salem Dakhli

Abstract: Software engineering is composed of four processes: a production process, a support process, a meta-process and a decision process. These four processes are interdependent and may be considered as meta-activities belonging to an iterative process which takes place according to Boehm's spiral model. Existing software development processes, approaches and methods used in software engineering have many weaknesses. In particular, the decision process inherent in software engineering has been ignored by the majority of methods and tools suggested to date. Many authors have tried to take into account some aspects related to the decision-making process during software development and maintenance lifecycles. Their work, which rests on economic decision theory, has three disadvantages. Firstly, it relates to only one period of uncertainty. Secondly, it assumes that the software production process under uncertainty and risk does not change as managers make decisions. Thirdly, it is based on the Net Present Value technique, which is not compliant with Simon's Bounded Rationality Principle and leads to "now or never" decisions.

Title:

ORGANIZATIONAL LEVEL OF INTELLIGENT ROBOTS - AN IMPLEMENTATION POINT OF VIEW

Author(s):

Ariana Popescu and Gheorghe Musca

Abstract: The paper presents an implementation point of view for the Organizational Level of an Intelligent Robot System. The implementation is based on Saridis' Probabilistic Model of the Organizational Level (Saridis 1992), proposed for intelligent robots, according to the Principle of Increasing Precision with Decreasing Intelligence (IPDI) and Jaynes' principle of maximum entropy. The paper focuses on the Machine Planning (MP) function for an industrial robot integrated with a vision system, and all the rules, tests and the entire algorithm and software specifications are presented and implemented.

Title:

AN ASSISTANT FOR SELECTING OBJECT ORIENTED METHODS FOR APPLICATIONS DEVELOPMENT

Author(s):

Gilene Do Espírito Santo Borges and Maria Elenita Menezes Nascimento

Abstract: Different methods are currently available for software development. However, appropriate strategies are needed to advise developers on the choice of the most suitable method, which depends upon the characteristics of the application to be developed. A suitable strategy is one which provides a system with high quality and good results at minimum maintenance cost. Extensive research has been carried out with the goal of making systems development a more productive, controllable and effective activity. A wide range of methods and techniques has been delivered in recent years claiming to solve the development problems. However, developers and managers have difficulty choosing the most appropriate method for the application domain. Each method has specific objectives and models certain application domains.

Title:

TRANSPORTATION IN POSTAL ENTERPRISE OF SERBIA: A CASE STUDY

Author(s):

Vladimir Papic, Milan Brujic, Jovan Popovic and Olivera Medar

Abstract: The Postal Enterprise of Serbia is in a process of modernization. As a part of it, a large communication network, called Postnet, has been installed. It is organized as an intranet and it will gradually connect all parts of the postal system and include all possible activities performed in that system. The building of a new postal information system is also well under way. An important part of it covers the transportation subsystem, and in this paper we present some results obtained during the realization of the project, which encompasses the creation of an information system for transportation and vehicle maintenance functions.

Title:

MODELING ENTERPRISE ORGANIZATIONAL AND DECISION STRUCTURES

Author(s):

Hélène Bestougeff

Abstract: Generally, enterprise management is carried out by personnel organized into some management structure. On the other hand, the decision structure of the enterprise models decisions taken at different times, with respect to the type of decision: strategic, tactical or operational. The relationship between these two structures reflects the interplay between individual personnel desires and global constraints on enterprise performance. We present an original approach, based on a previously developed Task/Communication model (Bestougeff 1997), which allows the modeling and simulation of the interplay between the organizational and decision structures. Communication between management members or teams is analyzed as recurrent dyadic conversations with respect to a business objective, as introduced by Winograd (Winograd 1988).

Title:

A COMPREHENSIVE ORGANISATIONAL MODEL FOR ENTERPRISE KNOWLEDGE MANAGEMENT

Author(s):

Glen Duncan, Ron Beckett and Fawzy Soliman

Abstract:

Title:

STRATEGIC ROLE OF IT LEADER IN INTERNATIONAL BUSINESS PROCESS CHANGE

Author(s):

Fawzy Soliman and John Politis

Abstract:

Title:

ROLE OF ERP SYSTEMS IN INTERNATIONAL BUSINESS PROCESS RE-ENGINEERING

Author(s):

Zahra Salameh

Abstract:

Title:

CRITICAL SUCCESS FACTORS FOR MANUFACTURING MANAGEMENT THROUGH ‘TEAM BUILDING’

Author(s):

Ergun Gide

Abstract:

Title:

IMPORTANCE OF CONFORMANCE OF DATA COLLECTION TO THE ERP MODEL

Author(s):

George Jamoo

Abstract:

Area 4 - INTERNET AND INTRANET COMPUTING

Title:

BLIND SIGNATURES WITH DOUBLE-HASHED MESSAGES FOR FAIR ELECTRONIC ELECTIONS AND OWNERSHIP CLAIMABLE DIGITAL CASH

Author(s):

Chun-I Fan, Wei-Kuei Chen and Yi-Shiung Yeh

Abstract: Fair electronic voting makes it possible to ensure that the contents of cast votes are not known until all votes have been cast. In addition, in an anonymous electronic cash system, it is usually necessary for a cash owner to convince the bank or others of his ownership of his cash in some situations, such as claiming and identifying lost money. In this paper we propose a generic blind signature scheme with double-hashed messages to cope with these two problems. Not only does the proposed method preserve the anonymity of voters or payers, but it can also be easily implemented on the electronic voting or electronic cash schemes in the literature without affecting their infrastructures. Most important of all, the additional overhead of the proposed method is just a few hashing operations.
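
A minimal sketch, assuming a textbook RSA blind signature combined with a double application of SHA-256 to the message; the key size, hash function and protocol details are illustrative and need not match the scheme proposed in the paper.

    import java.math.BigInteger;
    import java.security.MessageDigest;
    import java.security.SecureRandom;

    // Generic RSA blind-signature sketch with a double-hashed message, for illustration only.
    final class BlindSignatureDemo {

        static BigInteger doubleHash(byte[] message, BigInteger n) throws Exception {
            MessageDigest sha = MessageDigest.getInstance("SHA-256");
            byte[] h = sha.digest(sha.digest(message));        // H(H(m))
            return new BigInteger(1, h).mod(n);
        }

        public static void main(String[] args) throws Exception {
            SecureRandom rnd = new SecureRandom();
            // Toy RSA key (far too small for real use).
            BigInteger p = BigInteger.probablePrime(512, rnd);
            BigInteger q = BigInteger.probablePrime(512, rnd);
            BigInteger n = p.multiply(q);
            BigInteger e = BigInteger.valueOf(65537);
            BigInteger d = e.modInverse(p.subtract(BigInteger.ONE).multiply(q.subtract(BigInteger.ONE)));

            BigInteger m = doubleHash("ballot or coin".getBytes(), n);

            // Requester blinds the double-hashed message with a random factor r.
            BigInteger r;
            do { r = new BigInteger(n.bitLength() - 1, rnd); } while (!r.gcd(n).equals(BigInteger.ONE));
            BigInteger blinded = m.multiply(r.modPow(e, n)).mod(n);

            // Signer signs the blinded value without learning m.
            BigInteger blindSig = blinded.modPow(d, n);

            // Requester unblinds and obtains an ordinary RSA signature on H(H(m)).
            BigInteger sig = blindSig.multiply(r.modInverse(n)).mod(n);
            System.out.println("valid signature: " + sig.modPow(e, n).equals(m));
        }
    }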

Title:

APPLICATIONS OF STATELESS CLIENT SYSTEMS IN COLLABORATIVE ENTERPRISES

Author(s):

Sheng Feng Li, Quentin Stafford-Fraser and Andy Hopper

Abstract: This paper identifies the current difficulties faced by IT professionals working for collaborative enterprises and explains how we exploit and extend so-called stateless client systems to support those individuals in cooperative work. Stateless client systems are software tools that separate the display interface from the application logic in windowing systems. They embody a client/server architecture, where the server executes all applications and the client simply presents the frame buffers or screen images to the user and accepts user input. Since the entire system state is preserved in the server, the client becomes stateless. By providing these stateless clients with suitable coordination mechanisms, we enable geographically separated users to share workspaces and applications in a work session. And by recording the messages flowing between the client and the server, we enable temporally separated users to search for and play back previous work sessions to share knowledge and experience.

Title:

MAXIMISING THE BENEFITS OF ELECTRONIC COMMERCE: AUTOMATING THE PROCESS OF VIRTUAL STORE GENERATION & MARKETING FOR SMES

Author(s):

Colin Charlton, Jim Geary, Janet Little and Irene Neilson

Abstract: Evolution is required in the services offered to businesses by regional centres for the promotion of electronic commerce if the potential benefits and new opportunities presented by the latter are to be successfully exploited. Public access to and training in use of Internet related technologies is required if a local consumer base is to be established. Software tools are also required. Centres need to be able to effectively and efficiently generate the key functionality of a generic on-line store that can be customised and maintained by any SME without the need for specialist programming knowledge. Tools are also required to automate the registration of on-line businesses with the appropriate, quality search engines. This paper reviews the suite of software tools and strategies used by a regional centre, Connect, to achieve such objectives.

Title:

DEVELOPING DATABASE APPLICATION IN INTERNET - AN ASP FRAMEWORK

Author(s):

Gheorghe Musca, Ariana Popescu and Florin Munteanu

Abstract: The rapid growth in the use of the Internet by both individuals and businesses has resulted in a wide range of applications, of which those involving database access and control are by far the most rapidly evolving. On the other hand, intranet applications are becoming a more and more attractive solution for a wide range of problems, as an alternative to classical local area network applications. The paper presents a framework for developing database applications for the Internet/intranet in one of the most powerful technologies, Active Server Pages (ASP), on Windows NT servers. It analyses the benefits and limitations of this approach, together with the limitations of HTML in providing a full database user interface. The paper presents several ASP modules for general database Web programming, implemented as local or remote scripts. These modules permit the rapid development of new applications. The second part discusses a case study, the design and implementation of an on-line database system for a medical application (NEFROROM) involving patients with renal disease who need dialysis support. The paper discusses the advantages of the proposed solution, which can be used for similar applications.

Title:

A ‘SEMANTIC’ APPROACH FOR IMPLEMENTING METADATA ON THE WORLD WIDE WEB

Author(s):

Gian Piero Zarri

Abstract: Several current proposals about metadata maintain they make use of a true ‘semantic’ approach in the description of the essential characteristics of the original documents. In reality, they are largely based on the use of some ‘external’, ‘physical’ features, and address only the ‘external identification framework’ of these documents, and not their real ‘meaning’. In this paper, we describe some of the main data structures proper to NKRL (Narrative Knowledge Representation Language), a language expressly designed for representing, in a standardised way (metadata), the semantic content (the ‘meaning’) of complex multimedia documents.

Title:

VIDEO COMMUNICATIONS OVER IP/ATM NETWORKS IMPLEMENTATION ISSUES AND PERFORMANCE

Author(s):

Luis Orozco Barbosa

Abstract: Many recent studies have been conducted involving the transport of constant and variable bit rate MPEG-2 video in Asynchronous Transfer Mode (ATM) networks; however, many of these studies have considered fairly homogeneous scenarios in which the only other traffic present in the ATM network, if any, consists of other MPEG-2 video sources. In this study the traffic pattern of MPEG-2 video communications in an ATM network under heavily loaded network conditions, in which the generated traffic sources are bursty in nature, is considered. To complete the study, an experimental VoD testbed, developed as part of a collaborative research effort between the Communications Research Centre of Canada (CRC), Nortel and the University of Ottawa, was employed. To determine the characteristics of the MPEG-2 video traffic generated by the VoD application in the ATM network in the presence of other traffic, cell interarrival time measurements were considered. The results obtained show that the end-to-end flow control implemented in the application layer of the VoD system and the traffic controls implemented in intermediate network elements (e.g., routers, switches) have significant impacts on the characteristics of the MPEG-2 traffic carried in the ATM network. Moreover, the impact of the intermediate network elements on the characteristics of the MPEG-2 traffic increases with the amount of non-MPEG-2 video traffic present in the network.

Title:

SYSTEM SUPPORT FOR INTRANET-BASED WORKFLOW PROCESSING

Author(s):

Alexander Schill and Christian Mittasch

Abstract: Recently, middleware based on CORBA and Java has gained major importance for practical applications. This paper presents a higher-level middleware approach for supporting workflow management in an Intranet with a specific emphasis on distribution and security. The concepts are based on objects that encapsulate resources and basic business processes. They are accessible via CORBA interfaces. As opposed to more conventional workflow approaches, control is fully decentralised, and existing objects and resources can easily be integrated. The implementation is based on Orbix and is illustrated by concrete examples. Moreover, a security platform is described that provides basic security characteristics such as encryption or integrity for these applications. Its particular feature, however, is that varying security interests of heterogeneous partners can be specified and semi-automatically negotiated. This can be useful in large workflow applications as addressed by our platform, but also in electronic commerce systems and various other scenarios.

Title:

BUILDING A WORKFLOW ENACTMENT SERVICE FOR TELEWORK CO-ORDINATION

Author(s):

Diogo Ferreira, João Rei, José M. Mendonça and J. J. Pinto Ferreira

Abstract: In its ongoing effort to define, specify and build a telework co-ordination system, the Telework Interest Group at FEUP - DEEC has realised the need for a workflow management system able to support business processes that rely on geographically distributed co-operative work. Telework is an innovative form of work organisation for decentralised or information-based organisational structures whose tasks are independent of their location of execution. However, this organisational practice demands efficient business process co-ordination or, to be more specific, demands a workflow management system. The work we present is a prototype of the workflow enactment service, a core component of the management system whose construction is the ultimate goal of the Telework Interest Group. The workflow enactment service, which is currently being built, is a software service containing a workflow engine capable of creating, managing and executing workflow instances.
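
To fix ideas about what "creating, managing and executing workflow instances" involves, here is a deliberately tiny, hypothetical Java sketch of an enactment service; it is not the group's prototype, and real engines are driven by work-item completion events rather than a simple loop.

    import java.util.*;

    // Very small sketch: a definition is an ordered list of activities,
    // and the engine creates and advances instances of it.
    record WorkflowDefinition(String name, List<String> activities) {}

    final class WorkflowInstance {
        private final WorkflowDefinition def;
        private int next = 0;

        WorkflowInstance(WorkflowDefinition def) { this.def = def; }

        boolean finished() { return next >= def.activities().size(); }

        // Executes (here: prints) the next activity and advances the instance state.
        void step() {
            if (!finished()) {
                System.out.println("performing " + def.activities().get(next++));
            }
        }
    }

    final class EnactmentService {
        private final List<WorkflowInstance> running = new ArrayList<>();

        WorkflowInstance create(WorkflowDefinition def) {
            WorkflowInstance wi = new WorkflowInstance(def);
            running.add(wi);
            return wi;
        }

        // Drives every running instance one step and discards finished ones.
        void tick() {
            running.forEach(WorkflowInstance::step);
            running.removeIf(WorkflowInstance::finished);
        }
    }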

Title:

A NEW METHOD TO BLOCK ACCESS TO ILLEGAL AND HARMFUL CONTENT ON THE INTERNET

Author(s):

Byung-Jeon Yoo, Hyun-Gyoo Yook and Myong-Soon Park

Abstract: Filtering software is most commonly used to protect children and young people from illegal and harmful content on the Internet. Most of this software blocks access to Internet sites that contain material such as pornography, violence, drugs, gambling, and so on, using the black list filtering (BLF) method. In this method, the filter uses a huge encrypted black list on the client PC. BLF accordingly causes three major problems: black list updating, black list security, and performance degradation of the client PC, all because of the huge encrypted black list. In this paper, we describe a new method called access list filtering on client/server (ALFCS). The method can solve all three problems of BLF and improve filtering performance. The main idea is to use a small access list on the client PC and, if needed, to query a black list server that holds the huge black list. Experiments with this method show that we achieve a significant filtering speedup as compared to BLF without any performance degradation.
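
A minimal sketch of the two-level lookup described above, assuming invented class and method names: the client consults its small local access list first and falls back to a black-list server only on a miss, caching the verdict.

    import java.util.*;

    // Sketch of the client-side filter; the server interface stands in for the
    // remote host that holds the full black list.
    interface BlackListServer {
        boolean isBlocked(String url);             // consults the huge black list
    }

    final class AccessListFilter {
        private final Map<String, Boolean> accessList = new LinkedHashMap<>(); // small client-side list
        private final BlackListServer server;

        AccessListFilter(BlackListServer server) { this.server = server; }

        boolean isBlocked(String url) {
            Boolean cached = accessList.get(url);
            if (cached != null) return cached;      // fast path: no server round trip
            boolean blocked = server.isBlocked(url);
            accessList.put(url, blocked);           // remember the verdict locally
            return blocked;
        }
    }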

Title:

AN EFFICIENT PROTOCOL FOR RADIO PACKET NETWORKS

Author(s):

A-Rum Jun, Se-Jin Hwang, Hae-Sun Shin, Gun-Hee Kim and Myong-Soon Park

Abstract: Among the various kinds of wireless networks, the major concern of the packet radio network is data communication, whereas other wireless networks such as cellular technologies are designed for real-time voice communications. Nowadays, packet radio network users not only wish to communicate within the packet radio network but also want to navigate the Internet, an ocean of information. To meet this demand, we designed and implemented the Brown, a communication architecture connecting a mobile host to a fixed host on the Internet. However, in our previous implementation we did not thoroughly exploit the difference between the wireless link and the wireline link. The major difference between them is that the latency of the wireless link is much longer than that of the wireline one. We did not consider this point sufficiently, and thus TCP/IP performed poorly in our previous performance evaluation of the Brown. In this paper, we introduce an efficient data transmission method that can be adapted to wireless environments to handle such problems. It is designed for packet radio networks and works well especially in the case of interactive deliveries, which are very common when browsing the Internet.

Title:

NETPROLOG - A LOGIC PROGRAMMING SYSTEM FOR THE JAVA VIRTUAL MACHINE

Author(s):

Cedric Luiz De Carvalho, Eduardo Costa Pereira and Rita Maria Da Silva Julia

Abstract: NetProlog is a system that combines the main features of logic programming with those found in Java and other mobile code languages. The logic programming side of the system does not require as much explicit control as Prolog. At compile time, a partial evaluator provides cuts, ordering of literals and other control facilities that it deems suitable to an efficient execution of the generated code. It also performs type inference, in order to detect type clashing and unwanted operations. This partial evaluator has some knowledge about the real world, which it uses to sort out the resolution steps in such a way that they create the data-flow necessary to feed arithmetic expressions and database queries with instantiated variables. The authors tried to keep the language compatible with ISO Prolog. Therefore, the compiler does not require an explicit declaration of constructors for compound types, like in ML, Haskell, or Clean. Besides this, the programmer can use cuts, if he/she wants to.

Title:

IMPLEMENTING NETWORK PROTOCOLS IN JAVA - A FRAMEWORK FOR RAPID PROTOTYPING

Author(s):

Matthias Jung, Ernst W. Biersack and Alexander Pilger

Abstract: This paper presents JChannels, a framework to support the implementation of network protocols in Java. The goals of JChannels are the rapid development of structured, reusable, and configurable protocol stacks, profiting from Java features such as built-in concurrency, portability, and runtime class loading. We present the JChannels architecture, show how to work with JChannels, give an example implementation of a simple transport protocol, and provide some performance results.
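
The sketch below illustrates the general idea of composing a protocol stack from stackable layers in Java; the interface and class names are invented for illustration and are not the JChannels API.

    // Hypothetical sketch of stackable protocol layers; not the JChannels API.
    interface Layer {
        void send(byte[] payload);                 // travels down the stack
        void deliver(byte[] payload);              // travels up the stack
    }

    abstract class StackedLayer implements Layer {
        protected Layer below, above;

        StackedLayer stackOn(Layer lower) {
            this.below = lower;
            if (lower instanceof StackedLayer s) s.above = this;
            return this;
        }

        public void send(byte[] p) { if (below != null) below.send(p); }
        public void deliver(byte[] p) { if (above != null) above.deliver(p); }
    }

    // Example layer: prepends a one-byte sequence number on the way down
    // and strips it on the way up.
    final class SequenceLayer extends StackedLayer {
        private byte nextOut = 0;

        public void send(byte[] p) {
            byte[] framed = new byte[p.length + 1];
            framed[0] = nextOut++;
            System.arraycopy(p, 0, framed, 1, p.length);
            super.send(framed);
        }

        public void deliver(byte[] framed) {
            byte[] p = java.util.Arrays.copyOfRange(framed, 1, framed.length);
            super.deliver(p);                      // a real layer would reorder or drop here
        }
    }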

Title:

COMBINING GRAPHIC AND ALPHANUMERIC INFORMATION IN JAVA APPLICATIONS

Author(s):

Ricardo João Cruz Correia and José Paulo Leal

Abstract: The integration of graphic information and alphanumeric data plays a fundamental role in several fields such as Geographical Information Systems (GIS) and Facilities Management Systems. With the advent of the Internet there is an increasing demand for distributed applications in those areas. To implement such applications we propose an approach to integrating vector graphics and databases in Java applications and applets. Java is particularly suited to implementing distributed applications that integrate graphical and alphanumeric data, since it is a cross-platform language and its API provides packages for graphics, networking and database connectivity. In this paper we describe BlueBase, a class of Java objects that combines both data types and integrates them in final applications, and BindData, a utility program to interactively generate BlueBase objects from the source data. We also present two applications that use BlueBase objects and were developed to test the proposed system.

Title:

SOFTWARE AGENTS IN NETWORK MANAGEMENT

Author(s):

Rui Pedro Lopes and José Luís Oliveira

Abstract: The globalisation of Internet technology has had a strong impact on technology price and availability, which has resulted in the emergence of more opportunities and more services in distance learning, electronic commerce, multimedia and many other areas. From the user perspective, there is a need for more comprehensive interfaces. From the network perspective, the introduction of QoS and the advent of new and more complex services have led to additional bandwidth and management requirements. The amount of management information produced by applications, workstations, servers and all kinds of network components is increasingly hard to process. All this information can be filtered by an intelligent network management system capable of proactive management. This paper intends to highlight the role of newer paradigms, such as Software Agents, in Network Management frameworks. The use of intelligent programs that substitute for the user in tedious, repetitive or information-intensive tasks can help in the resolution of problems such as congestion, reliability, real-time response and many others.

Title:

ADVANTAGES AND PITFALLS OF OO BUSINESS FRAMEWORKS

Author(s):

Simon Beloglavec, Tomaz Domajnko, Marjan Hericko and Ivan Rozman

Abstract: This article describes practical experiences with framework-based development and gives an overview of the unique features of object-oriented business frameworks that distinguish them from other business software frameworks. We introduce a pilot project involving the development of a personal finance management application based on an object-oriented framework. We chose IBM San Francisco, a Java-based object-oriented framework which not only provides system-level reuse but also encourages domain-specific reuse. The emphasis is on the problems we encountered while using the framework. We raise concerns about the framework documentation and discuss different ways of using the framework. Finally, we evaluate the design and development tools applied, in terms of their suitability for framework-based development.

Title:

ISSUES ON INTERNET EVOLUTION AND MANAGEMENT

Author(s):

Rui L. Aguiar and José Luís Oliveira

Abstract: This paper discusses the current challenges being presented to the Internet. Starting from the service point of view, we discuss new enterprise requirements being laid upon the net, and the Internet's potential to fulfil them. In particular, the introduction of quality of service in the network and the differentiated services perspective are addressed from the point of view of the introduction of new network characteristics. The challenges that management evolution faces in order to cope with this new integrated network are presented. The final conclusions indicate a growing need for more active participation of enterprises in the current developments of the Internet.

Title:

A REMOTE METEOROLOGICAL DATA SERVICE SUPPORTED ON CORBA

Author(s):

Jorge S. C. Mota, José Luís Oliveira and Fernando M. S. Ramos

Abstract: The continuous improvement of development tools for the Internet is giving rise to new and challenging fields of services. A promising new field consists of small applications that allow the remote control of equipment parameters over the Internet. This paper discusses how CORBA technology can be used to support a Meteorological Data Service based on the Internet. The presentation is primarily focused on implementation and validation issues, in order to evaluate the overhead introduced by a middleware such as CORBA in “thin” communication processes. The work presented is part of a national research project that aims to develop a forest fire monitoring system.

Title:

A WEB BROWSING AGENT

Author(s):

Aimilia Tzanavari

Abstract: Recently the Internet has been growing at such a rate that more efficient ways of finding information are needed. Search engines have long been an extremely useful tool for every Internet user. However, they tend to give rather general and numerous results. In addition, their static nature, due to the index of Web pages that they maintain and consult in every search, limits their capabilities, because the Internet literally changes constantly. Our Web Browsing Agent is an application that helps the user retrieve interesting HTML pages from the Internet by letting him define what he is looking for in great detail. He defines the strings that he considers to be of interest in both pages and links, and the weightings to be assigned to each one. The agent then performs a search, applying heuristics to the links to determine which pages to search further. Since the search is done in real time, the Internet is viewed as it actually is at the time of the search. Furthermore, there are no limitations on the search string's format: it can be any string of characters, even one that a conventional search engine may not be able to index.
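
As a rough illustration of the kind of weighted scoring the abstract describes (the scoring rule and names below are invented, not taken from the paper), pages and links can be ranked by summing the user-assigned weights of the terms they contain:

    import java.util.Map;

    // Hypothetical sketch of weighted keyword scoring for pages and links,
    // in the spirit of the agent described above; not the paper's actual heuristic.
    public class InterestScorer {

        private final Map<String, Double> weightedTerms; // term -> user-assigned weight

        public InterestScorer(Map<String, Double> weightedTerms) {
            this.weightedTerms = weightedTerms;
        }

        // Sums the weights of all user-defined terms that occur in the text;
        // links whose score exceeds a threshold would be followed further.
        public double score(String text) {
            String lower = text.toLowerCase();
            double score = 0.0;
            for (Map.Entry<String, Double> e : weightedTerms.entrySet()) {
                if (lower.contains(e.getKey().toLowerCase())) {
                    score += e.getValue();
                }
            }
            return score;
        }
    }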

Title:

TIBLEUS: A MODEL TO BUILD INTERNET ONLINE SUPPORT SERVICES USING ASP

Author(s):

José García-Fanjul, Cristina Monteserín, Claudio De La Riva and Javier Tuya

Abstract: TIBLEUS is a model of Internet-based support services for small and medium-sized software companies. One of its objectives is, of course, to offer the usual functionality of an online support service for the customers of the firm: cataloguing content about software products and bug reports, maintaining patches and listing “frequently asked questions” documents. Its main goals, however, are to facilitate the creation of a new support site from scratch, to help maintain and enhance this site easily, and to offer advanced features based on the use of Internet technology, such as dynamic page generation, complete searches, and user registration and feedback. The recent appearance of Active Server Pages (ASP) as a software platform for developing applications on the Web helps us deploy all these features.

Title:

DILEMMA: A MEDIATION METHODOLOGY TO INTEROPERATE INFORMATION SYSTEMS

Author(s):

Fabrice Jouanot, Nadine Cullot, Christophe Nicolle and Kokou Yétongnon

Abstract: DILEMMA (Dynamic Interoperable and Logical Extended Mediation Model Architecture) introduces a solution that allows the cooperation of large-scale information sources, as in a Web environment. It is an interoperable architecture that makes it possible to resolve the semantic and structural heterogeneity of information, based on a hybrid mediation approach combining schema and context mediation techniques. DILEMMA proposes a mediation methodology based on the definition of Informative Objects, which are represented in a logical object-oriented language, and on the specification of semantic contexts, which are described in description logic.

Title:

REALIZATION OF EXPOSITION-LIKE EVENTS IN CYBER-SPACE

Author(s):

Adérito Marcos, Jürgen Bund, Luís Grave, Eduardo Tsen and Rosa Ferreira

Abstract: The realisation of exposition-like events, such as conferences, exhibitions, etc., involves a multifaceted process covering aspects such as general planning, administration of the event's grounds, support for exhibitors and visitors, local scheduling of the event during the exposition, or simple staff management. Depending on the complexity of the exposition, this usually requires considerable logistics effort, which could be decisively facilitated by information technologies such as Internet/Intranet/Web-based systems. The aim of this paper is to propose a generic web-based solution to support exposition-like events. A prototype designed to support any type of exposition-like event, the Exvent System, is described and discussed in detail. It supports specific services and interfaces for different users such as organisers, exhibitors and visitors. Users have a multimedia environment available, displaying a graphical model of the exposition grounds along with specific on-line facilities.

Title:

STATEWIDE ENTERPRISE COMPUTING WITH THE PURDUE UNIVERSITY NETWORK-COMPUTING HUBS

Author(s):

Nirav H. Kapadia, José A. B. Fortes and Mark S. Lundstrom

Abstract: The Purdue University Network Computing Hubs of Indiana, or PUNCH(I), is an Internet-based software infrastructure under development that enables students and faculty at all Purdue campuses in the state of Indiana to use and share unique computational resources. It leverages an existing software infrastructure, the Purdue University Network Computing Hubs (PUNCH). This paper explains the need for a statewide computing system, describes the environment in which PUNCH(I) will be used, outlines the requirements and the goals of an enterprise-wide computing system, and describes how they are met through PUNCH. Key transaction types (in addition to document delivery) supported by PUNCH are described and their performance is discussed.

Title:

CLIENT CACHE-INDEX FORWARDING FOR REDUCING NETWORK TRAFFIC OVER WIRELESS NETWORK FOR THE WWW

Author(s):

Hae-Sun Shin, Gyeong-Hun Kim, Se-Jin Hwang, A-Rum Jun, Gun-Hee Kim and Myong-Soon Park

Abstract: Caching and prefetching are both well-known solutions for reducing round-trips and network traffic. In general, ‘client-caching’ and ‘caching-proxy’ are the caching methods implemented for the WWW. However, since these methods are not suitable for wireless networks, a specialized caching and prefetching method, a combination of client-caching and caching-proxy, has been proposed. In this method, the base station must have the caching information of the mobile hosts so that it does not retransmit already cached data. Therefore, mobile hosts must transfer the cache-index, which contains information about the cached data, to the base station, and the base station must keep and manage it. However, if the cache-index is not small, transmitting it may cause serious network traffic, and the storage overhead on the base station may cause trouble. Here, we propose a cache-index forwarding and management method that reduces network traffic, storage space, and control workload. This method sends only the cache-index entries relevant to the requested data, whenever cached data is requested by the Web browser.
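
A minimal sketch of the idea of forwarding only the cache-index entry relevant to the current request follows (class names, the request format and the version tags are assumptions for illustration, not the paper's protocol):

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical sketch: the mobile host attaches only the cache-index entry
    // for the requested URL, so the base station can decide whether to skip
    // retransmitting data the client already holds.
    public class MobileHostCache {

        // URL -> version tag (e.g. a last-modified value) of the locally cached copy.
        private final Map<String, String> cacheIndex = new HashMap<>();

        // Builds a request carrying only the index entry for this URL,
        // instead of forwarding the whole cache-index to the base station.
        public String buildRequest(String url) {
            String cachedVersion = cacheIndex.get(url);
            if (cachedVersion == null) {
                return "GET " + url;                                  // nothing cached, plain request
            }
            return "GET " + url + " IF-CHANGED " + cachedVersion;     // per-request index entry
        }

        public void remember(String url, String versionTag) {
            cacheIndex.put(url, versionTag);
        }
    }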

Title:

A JAVA DISTRIBUTED ARCHITECTURE FOR REMOTE AND AUTOMATIC MANAGEMENT OF TELEPHONIC COMMUNICATIONS

Author(s):

J. Arribi and V. Carneiro

Abstract: Thanks to the popularity of the WWW and to the increasing acceptance of Java as a language for development on the Internet, the number of distributed applications on this important platform has increased. The advantages of an intuitive graphical user interface, such as web pages, and the facilities that Java provides for distributed programming via RMI and for sending classes through serialization, have changed the traditional monolithic applications, which have given way to more advanced, distributed architectures. This paper shows the convergence of Web and Java technologies in the development of a distributed architecture that provides telephonic services. These services are, on the one hand, the remote and automatic programming of a PBX (Private Branch Exchange) and, on the other hand, the storing and querying of telephone costs via the Web.
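
As a hedged illustration of the RMI side of such an architecture (the interface and method names are invented, not the authors' API), the remote services could be exposed through an RMI interface like the following:

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.util.Date;
    import java.util.List;

    // Hypothetical RMI interface sketch in the spirit of the described architecture:
    // remote programming of the PBX and querying of call costs.
    public interface PbxManager extends Remote {

        // Reprogram an extension's class of service on the PBX remotely.
        void configureExtension(String extension, String classOfService) throws RemoteException;

        // Return the cost records of an extension within a time interval.
        List<String> queryCallCosts(String extension, Date from, Date to) throws RemoteException;
    }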

Title:

AN APPLET BASED QUERY TOOL

Author(s):

Nuno Valero Ribeiro

Abstract: With the exponential development of the WWW there is a growing need and demand for applications which integrate information from existing databases into a single point of access, such as a common browser. This paper describes an applet-based 3-tier client/server Java™ application which enables Internet users to easily access and query, via the JDBC application programming interface, a remote database using an HTML page.
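
For reference, the JDBC access pattern such a middle tier would typically use looks like the sketch below; the JDBC URL, credentials, table and column names are placeholders, not details from the paper.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Minimal JDBC sketch of the query path exposed to the applet; all
    // connection details and the query itself are placeholder values.
    public class RemoteQuery {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:subprotocol://dbhost/exampledb", "user", "password");
                 Statement stmt = con.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT name FROM example_table")) {
                while (rs.next()) {
                    System.out.println(rs.getString("name"));  // rows returned to the applet
                }
            }
        }
    }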

Title:

BUSINESS PROCESS MODELING AND ANALYSIS USING GERT NETWORKS

Author(s):

Joseph Barjis and Jan L.G. Dietz

Abstract: In this paper we discuss the application of the GERT (Graphical Evaluation and Review Technique) network to business processes. Since the introduction of GERT, many papers have been published showing its application in various fields of engineering, management and system study. Its application to information systems in general and business processes in particular has not yet been shown in the literature. One of the strengths of GERT networks is their graphical representation, which is intuitive and easy to understand. The GERT network model proposed in this paper is adapted for business process and information system modeling, which means extending the graphical representation of GERT networks with respect to the peculiarities of business processes. We propose a new approach to developing mathematical models of business processes using the Z-transform and Mason's rule. We apply DEMO (Dynamic Essential Modeling of Organization) to produce business process models, which are then further analyzed using GERT. As a real-world application, we illustrate our method with the case of the work of the Conciliation Board for Consumers in the Netherlands.
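
For readers unfamiliar with the technique: in GERT analysis, the equivalent transmittance of a network is commonly obtained with Mason's gain formula, which in its standard form (not specific to this paper) reads

    W_E(s) = \frac{\sum_k P_k(s)\,\Delta_k(s)}{\Delta(s)},
    \qquad
    \Delta(s) = 1 - \sum_i L_i(s) + \sum_{i<j} L_i(s)\,L_j(s) - \cdots

where P_k is the transmittance of the k-th forward path from source to sink, the L_i are the loop transmittances, and \Delta_k is \Delta recomputed with all loops touching path k removed.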

Title:

A CAUSAL MODEL FOR THE INTENTION TO USE A WEB SITE: A STUDY BASED ON THE TECHNOLOGY ACCEPTANCE MODEL AND THE USES AND GRATIFICATIONS PERSPECTIVE

Author(s):

Chaoming J. Wu

Abstract: It is interesting to note that while some commercial web sites benefit from continuously attracting consumers, others do not. Questions related to web customers attract a lot of attention not only from the business community but also from researchers. Recently, a stream of research has focused on WWW-related marketing issues, but there seems to be a lack of studies from the user-intention viewpoint. Information technology (IT) acceptance and intention to use have been studied extensively by researchers in the MIS field (Davis, 1989; Taylor and Todd, 1995). This study attempts to construct a model for analyzing web use intention based on the Technology Acceptance Model (TAM), which has been studied and accepted as a powerful model for studying the usage of information technology.

Title:

BASE ARCHITECTURE FOR NETWORK APPLICATION DEVELOPMENT

Author(s):

Cecilia Sosa Arias Peixoto and Beatriz Mascia Daltrini

Abstract: Over the last couple of years, the Internet has led companies to use the Web as a low-cost communication and distribution channel with their customers, as well as to facilitate information sharing among employees within the company. Today, building applications that can deliver dynamic content in web pages and provide interactivity is one of the key problems faced by webmasters. The main difficulties for system implementation include HTTP statelessness, difficult dialog control, the lack of user profiles to control access rights, the lack of structure and extensibility in HTML, and CGI scripting overhead. For this reason, many independent software vendors have designed their own frameworks to design and deploy web applications.

Title:

DISTRIBUTED NETWORKING COMPUTER MODEL: SUN’S JINI AS AN ADVANCE IN THE TECHNOLOGY: AN OVERVIEW OF JAVA’S DISTRIBUTED PLATFORM

Author(s):

Hugo José P. B. Paulino Pinto

Abstract: As a result of the increasingly demanding information needs of our age, computer systems have grown enormously, both in functionality and in complexity. In the world we live in, connecting and managing such systems is a hard and unproductive task, and a simple, dynamic, intuitive way of dealing with these now omnipresent devices is needed in order to face this trend effectively. However, problems such as the lack of interoperability between different platforms and the operating systems’ increasingly overwhelming complexity arise, often turning minimal, non-critical incompatibility issues into huge gaps. Sun Microsystems’ Jini addresses these issues based on simplicity and efficiency criteria. Jini technology is separated into three main categories: Infrastructure, Programming Model and Services.

Title:

3270 WEBCLIENT - A JAVA IMPLEMENTATION OF DYNAMIC REVAMPING

Author(s):

Duarténn, C. J., Bontchev, B., Azevedo, J., Cabeleira, J., Granja, R. and Stocker de Sousa, M.

Abstract: Historically, host-to-desktop integration has been achieved through a number of revamping solutions. The revamping process actively transforms the host screen into a set of GUI components, either closely emulating the original host application or radically transforming it through centralized customization. Traditional static revamping solutions do not provide an acceptable answer to the problem at hand because of their high distribution costs and their session consistency and synchronization problems. Believing that adopting Web technology must go beyond simply extending legacy applications onto the Web, we created and implemented the concept of dynamic revamping.

Title:

ALEPH: AN ENVIRONMENT FOR MANAGING WEB DATABASE APPLICATIONS

Author(s):

José Paulo Leal

Abstract: Aleph is an environment for developing distributed applications on the World Wide Web (WWW). Its main design goal is to assist non-programmers in the creation of WWW applications involving the cooperative management of small databases. A typical application developed with Aleph manages a database of a few thousand records, containing text and images in formats supported by the WWW. The records can be inserted by any Internet user and, in general, can only be edited, after authentication, by the same user that produced them. The updated data is immediately accessible to all users and in some cases may require validation by a privileged user. Navigation of the application's data uses hypertext links and is generally based on a hierarchical structure that is also recorded in the database.

Title:

WEB INTERFACE TO A METEOROLOGICAL STATION

Author(s):

F. Melo Rodrigues, J. Braz Gonçalves and J. Furtado Gomes

Abstract: The Instituto Politécnico da Guarda (IPG) has on campus a meteorological station that acquires several meteorological data such as air temperature, humidity, precipitation, barometric pressure, solar radiation, and wind direction and speed, from a total of 14 sensors, used for internal research projects only. The Departamento de Informática of the IPG proposed the development of a web interface to the station that would allow the meteorological data to be presented on the Internet in real time. Nowadays, monitoring and data acquisition systems are in general based on dataloggers, which appear on the market with a wide range of characteristics. The datalogger used in this project was a Datataker 500 from Data Electronic, Inc., which communicates with the PC through the serial port and uses the American National Standard Code for Information Interchange (ASCII) data format. The data from the sensors is acquired by the datalogger and sent, at specific time intervals, to a local computer via the RS-232 interface. The local computer stores the data, through the local network, on a web server. Once the data is on the web server, it is possible to respond to hypertext transfer protocol (HTTP) requests made by hypertext markup language (HTML) clients.
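
A minimal sketch of the kind of processing the local computer performs, reading ASCII records line by line from the serial link and splitting them into sensor values, is shown below; the comma-separated record layout and the way the stream is obtained are assumptions for illustration, not details from the paper.

    import java.io.BufferedReader;
    import java.io.InputStream;
    import java.io.InputStreamReader;

    // Hypothetical sketch: parse ASCII records arriving from the datalogger over
    // the serial link; the comma-separated record layout is an assumption.
    public class DataloggerReader {

        // Reads lines such as "2003-05-06 12:00,17.3,65,1013.2" and prints the fields.
        public static void process(InputStream serialIn) throws Exception {
            BufferedReader reader = new BufferedReader(new InputStreamReader(serialIn, "US-ASCII"));
            String line;
            while ((line = reader.readLine()) != null) {
                String[] fields = line.split(",");
                // fields[0] = timestamp, remaining fields = sensor readings
                System.out.println("record with " + fields.length + " fields: " + line);
            }
        }
    }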


Copyright © Escola Superior de Tecnologia de Setúbal, Instituto Politécnico de Setúbal