Area 1 - DATABASES AND INFORMATION SYSTEMS INTEGRATION
Area 2 - ARTIFICIAL INTELLIGENCE AND DECISION SUPPORT SYSTEMS
Area 3 - INFORMATION SYSTEMS ANALYSIS AND SPECIFICATION
Area 4 - INTERNET COMPUTING AND ELECTRONIC COMMERCE
Area 1 - DATABASES AND INFORMATION SYSTEMS INTEGRATION
Title: |
INTEGRATING
ENTERPRISE MODELS FOR BANKS |
Author(s): |
Khaled Naboulsi
|
Abstract: |
The demand for change in the banking industry has been
continuous. During the past decade alone, the banking industry has
encountered events that have forced banking to an enterprise level never
before seen. These events include increased competition from both banks
and non-bank competitors, increased awareness of customer preferences,
and technological advances in distributed systems. Banking institutions
need to adopt new system approaches to compete, or face their own demise.
Traditional integration models focus on application logic to solve
compatibility issues. We examined the Application Development Model (AD)
and the Workflow Management System that uses CORBA-Event Services as two
alternative approaches to resolving the banking industry’s problems.
However, due to the volatility of banking as an industry, it would appear
more appropriate for developers to seek out alternatives that offer more
flexibility, especially in light of the problems related to legacy
systems. Utilizing a middleware such as CORBA provides an industry such as
banking with solutions that address the issues of scalability,
performance, and flexibility. |
|
Title: |
DOMAIN
KNOWLEDGE AS CORPORATE RESOURCE OF FINANCIAL FIRMS |
Author(s): |
Michael
S. H. Heng and Steve C.A. Peters |
Abstract: |
It is almost a cliché to say that we live in a knowledge
society and that knowledge is an important resource of firms in their
production of goods and provision of services to their clients. However, it
is not so easy to find many examples showing the use of domain knowledge
in financial companies. This paper reports the case of how a vehicle lease
company in the Netherlands combines the use of domain knowledge and
information technology into an expert system to automate the control of
vehicle maintenance activity. The results benefit all parties concerned.
The lease company can control the vehicle repair and maintenance works
more efficiently. For the customers, it is higher rate of vehicle
utilisation, greater safety and lower costs. The dealers who carry out the
repairs and maintenance can do their work faster, and are paid immediately
and automatically via the banks, resulting in lower administrative costs.
The case study suggests that domain knowledge can be perceived as a
corporate resource and that its utilisation can produce value for the
stakeholders. We propose that financial firms (1) see themselves as
knowledge systems, i.e. networks of knowledge nodes serving their customers,
(2) consider knowledge intensive firms as their role models. We draw on an
idea of Friedrich Hayek who perceives the economic problem of society as a
problem of the utilisation of knowledge not given to anyone in its
totality. The paper concludes by discussing some organizational obstacles
on the road to re-inventing banks as knowledge systems. |
|
Title: |
EVALUATION
OF A VISUALISATION DESIGN FOR KNOWLEDGE SHARING AND INFORMATION DISCOVERY |
Author(s): |
Luis
Borges Gouveia and Feliz Ribeiro Gouveia |
Abstract: |
This paper presents a tool using a 3D interactive
visualisation system that allows knowledge sharing and information
discovery. The tool proposes a visualisation design using direct
manipulation techniques to convey information about a structure for
knowledge sharing. The structure describes a knowledge theme described as
a set of concepts. The set of concepts provides a particular context
description about the knowledge being shared. The application tested and
presented in this paper uses the set of concepts to direct searches in the
World Wide Web. The visualisation design is briefly presented and
preliminary evaluation results are reported. These results show that the
system tends to better support people who have some expertise in the
knowledge being shared, even if they have little World Wide Web
expertise. These results suggest some potential for the visualisation
design as an interface for both knowledge sharing and information
discovery for people who already have some expertise in a
given theme, but usually suffer from information overload or lack of
knowledge about the structure of large information spaces, such as the
World Wide Web. |
|
Title: |
A
DECISION SUPPORT SYSTEM MODEL FOR SUBJECTIVE DECISIONS |
Author(s): |
Vishv Malhotra
|
Abstract: |
Modern government and business units routinely collect
and store structured data of general interest to them. In the course of
their operations, these organisations often need to take decisions that do
not directly follow from the available data. Specialised managerial skills
are needed to interpret the data and derive useful conclusions. Subjective
assumptions and judgments are made by the managers to interpret the data.
Where the data volume is large, it may be difficult to sift the data, as
the managerial skills may not be available for the repeated evaluation of
every entity in the database. A decision support system is needed that can
be easily reprogrammed to cater for the subjective judgments and biases of
the decision-makers. In this paper, we develop a model for a decision
support system to identify promising entities based on the subjective
preferences. The model can easily be integrated with a relational database
system/tool such as Microsoft Access to examine entities in the database
and to highlight those that have superior potential based on the
decision-makers' subjective judgments. |
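The abstract does not give the model's details; the following is only a minimal sketch of the general idea of a reprogrammable, preference-weighted ranking over a relational table. The `entities` table, its attributes and the weight values are all hypothetical, and SQLite stands in for the Microsoft Access back end mentioned in the abstract.

```python
import sqlite3

# Hypothetical schema and data; the paper targets MS Access, SQLite is used
# here only so the sketch is self-contained.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE entities (name TEXT, growth REAL, risk REAL, liquidity REAL)")
con.executemany("INSERT INTO entities VALUES (?, ?, ?, ?)",
                [("A", 0.8, 0.3, 0.6), ("B", 0.5, 0.1, 0.9), ("C", 0.9, 0.7, 0.4)])

# Subjective judgments: attribute weights chosen by the decision-maker.
# A negative weight means the attribute counts against the entity (e.g. risk).
weights = {"growth": 0.5, "risk": -0.3, "liquidity": 0.2}

def promising(con, weights, top=2):
    """Rank entities by a weighted score reflecting subjective preferences."""
    rows = con.execute("SELECT name, growth, risk, liquidity FROM entities")
    scored = []
    for name, growth, risk, liquidity in rows:
        attrs = {"growth": growth, "risk": risk, "liquidity": liquidity}
        score = sum(weights[a] * v for a, v in attrs.items())
        scored.append((score, name))
    return sorted(scored, reverse=True)[:top]

print(promising(con, weights))   # entities with superior potential come first
```

Re-running the ranking under different weights is all that "reprogramming for new subjective judgments" amounts to in this toy version.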
|
Title: |
XML
INTERFACE FOR OBJECT ORIENTED DATABASES |
Author(s): |
Frank Van
Lingen |
Abstract: |
Within CERN (European Organization for Nuclear Research)
there are different data sources (different in format, type and
structure). Several of these sources will be flat files or data sources
other than databases. To create a common interface to these sources, we
decided to use XML (eXtensible Markup Language). XML is becoming the de
facto standard for data exchange, and as a result a large number of
tool developers support it. This document describes a mechanism to
access object-oriented databases as an XML document by using serialization
on demand. Furthermore, it discusses an extension of XML to achieve
this. |
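As an illustration only (the paper's CERN-specific mechanism and XML extension are not reproduced here), the sketch below shows the general "serialization on demand" idea: objects become XML elements only as far as a caller asks to descend into the object graph. The `Detector`/`Channel` classes and the `depth` parameter are invented for the example.

```python
import xml.etree.ElementTree as ET

class Detector:
    def __init__(self, name, channels):
        self.name = name
        self.channels = channels          # nested objects

class Channel:
    def __init__(self, cid, gain):
        self.cid, self.gain = cid, gain

def to_xml(obj, depth=1):
    """Serialize an object (and its children up to `depth`) on demand."""
    elem = ET.Element(type(obj).__name__)
    for attr, value in vars(obj).items():
        if isinstance(value, list) and depth > 0:
            for child in value:                    # descend only on request
                elem.append(to_xml(child, depth - 1))
        elif not isinstance(value, list):
            elem.set(attr, str(value))
    return elem

db_object = Detector("ECAL", [Channel(1, 0.98), Channel(2, 1.02)])
print(ET.tostring(to_xml(db_object, depth=1), encoding="unicode"))
```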
|
Title: |
METADATA
FOR THE SEMI-STRUCTURED MONO-MEDIA DOCUMENTS |
Author(s): |
Ikram
Amous and Anis Jedidi |
Abstract: |
One of the main information retrieval problems on the Web
is the poor description and cataloguing of information of any
type. One proposal to cope with this lack consists of introducing the
metadata concept to enrich and structure information descriptions and
improve search relevance. We propose here a contribution that extends
existing metadata with a set of metadata describing documents built from
various media (text, image, audio and video). These metadata, structured
with XML, make it possible to index documents by their content
and/or structure and to process them with query languages. |
|
Title: |
NOVEL DATA VISUALISATION AND EXPLORATION IN MULTIDIMENSIONAL DATASETS |
Author(s): |
Nikolaos Kotsis, George R S Weir, John D Ferguson and Douglas R MacGregor |
Abstract: |
Commonly, decision support systems require large-scale
data analysis facilities for efficient decision-making. While OLAP tools
provide multidimensional analysis, traditional visualisations prove
inadequate as a means of viewing and exploring complex relationships across
multiple dimensions. Commercial databases often support a variety of two
or three dimensional visualisation facilities, including bar charts,
scatter diagrams, data plots, and cross tabulations. In this context, the
present paper describes a novel approach to the problem of access and
visualisation for data in complex hierarchical datasets. The approach
integrates a novel browsing technique, which affords user navigation in
several levels of summary information, with a modified scatter projection
of the measure of interest. A prototype implementation of the proposed
approach has been developed. This is described and advocated as a
multidimensional interface for OLAP that addresses several significant
issues in visualisation and exploration of multidimensional
datasets. |
|
Title: |
AN INSIGHT INTO THE AUSTRALIAN ERP MARKET |
Author(s): |
Paul Hawking and Andrew Stein |
Abstract: |
The global ERP industry blossomed in the 1990s, automating
back office operations. Research to date has been limited, especially in
relation to the market penetration of these products around the world.
This paper presents an analysis of the Australasian ERP market place. It
looks at the market movement and demographics of companies that have
implemented SAP software, the dominant ERP vendor within the Australasian
marketplace. The 387 SAP customers are classified by industry sector,
size, and software implemented to establish metrics for ERP implementation
pertaining to the region. |
|
Title: |
OBJECT MODELS FOR MODEL BASED APPLICATIONS |
Author(s): |
Giorgio Bruno and Marco Torchiano |
Abstract: |
Web based systems in general, and e-commerce applications
in particular, are required to face a very high pace of change. The
evolution of such systems is driven both by adaptation to customer
needs and by continuous enterprise improvement strategies. Such rapid change
can be achieved by adopting a model-based approach in which the application
is customized according to a model. The concept of model proposed in this
paper is wider than the one adopted in most modeling languages such as
UML. We propose an object model that allows an application to access the
model according to different perspectives and abstraction levels. |
|
Title: |
ARCHITECTURE FOR REENGINEERING LEGACY DATABASES |
Author(s): |
Prasad N. Sivalanka, S. V. Subrahmanya and Rakesh Agarwal |
Abstract: |
There exist different methods to facilitate database
design recovery under the framework of software engineering and
reengineering. These tools and methods are usually limited to a particular
scenario and requirement, and thus, not generic. In most cases, new tools
and methods need to be redeveloped to suit these scenarios. This can
result in a significant waste of effort and increased costs. In this paper
we describe a generic architecture for reengineering legacy databases,
which is an outcome of working on a real software project for one of our
customers. The goal of this research is to formalize a process that is
applicable to different database reengineering scenarios and requirements.
We elaborate on the steps that were actually carried out to implement the
project. |
|
Title: |
REDUCING INCONSISTENCY IN DATA WAREHOUSES |
Author(s): |
Sergio Luján-Mora and Enrique Montenegro |
Abstract: |
One of the main problems in integrating databases into a
common repository is the possible inconsistency of the values stored in
them, i.e., the very same term may have different values, due to
misspelling, a permuted word order, spelling variants and so on. In this
paper, we present an automatic method for reducing inconsistency found in
existing databases, and thus, improving data quality. All the values that
refer to the same term are clustered by measuring their degree of
similarity. The clustered values can be assigned to a common value that,
in principle, could substitute the original values. We evaluate different
similarity measures for clustering. The method we propose gives good
results with a considerably low error rate. |
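A minimal illustration of the clustering step described above, not the authors' method: spelling variants are grouped greedily whenever a string-similarity score exceeds a threshold. `difflib.SequenceMatcher` and the 0.8 threshold are stand-ins for whatever similarity measures the paper actually evaluates.

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Ratio in [0, 1]; 1.0 means identical strings."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def cluster(values, threshold=0.8):
    """Greedy single-pass clustering of values that refer to the same term."""
    clusters = []
    for v in values:
        for c in clusters:
            if similarity(v, c[0]) >= threshold:   # compare with cluster representative
                c.append(v)
                break
        else:
            clusters.append([v])
    return clusters

names = ["Hewlett Packard", "Hewlet Packard", "Hewlett-Packard", "Microsoft", "Micro$oft"]
for c in cluster(names):
    print(c[0], "<-", c)   # the first member could substitute the other variants
```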
|
Title: |
A STRATEGY TO DEVELOP ADAPTIVE WORKERS'
INFORMATION MANAGEMENT SYSTEMS |
Author(s): |
Irene Luque Ruiz, Enrique López Espinosa, Gonzalo Cerruela García and Miguel Ángel Gómez-Nieto |
Abstract: |
Difficulties arising in the management of the information
corresponding to the workers (Curriculum Vitae) are created on one hand by
the heterogeneity (multimedia data), lack of structure and sheer volume of
this information, and on the other by the need to analyze and make best
use of these data through complex queries whose results must comply with
predetermined standards or preferences ---delicate processes indeed, given
the nature of the information under scrutiny. A system for the management
of this information must thus combine a multitude of features to permit
the manipulation of highly-structured data (classical business management
systems), immense volumes of information (electronic libraries) and
multimedia (image databases, sound, etc.). The present study is centered
on an analysis of the characteristics of this information and presents a
model which, by focusing on the nature of basic information units and
taking into account the independence between structure and information,
strives to afford adaptability to the requirements and standards of the
organizational environment, while remaining highly adaptable to subsequent
changes. |
|
Title: |
ESTABLISHING THE IMPORTANCE OF ERP IMPLEMENTATION
CRITICAL SUCCESS FACTORS ALONG ASAP METHODOLOGY PROCESSES |
Author(s): |
José Esteves and Joan Pastor |
Abstract: |
This research-in-progress paper seeks to establish the
relationship between critical success factors of ERP implementations and
the ASAP methodology processes. Applying the process quality management
method and the grounded theory method, we derived a matrix of critical
success factors versus ASAP processes. This relationship will help
managers to develop better strategies for supervising and controlling SAP
implementation projects. |
|
Title: |
KNOWLEDGE-POWERED EXECUTIVE INFORMATION SYSTEM DESIGN |
Author(s): |
Sander Nijbakker and Bob Wielinga |
Abstract: |
Why do we want an Enterprise Ontology (Architecture)? If
we want to design a worldwide/global enterprise information system, we
need a rigid (ordered) structure at the top and adaptability at the
bottom. Most ontologies stop at the level of best practices or activities.
To find the root structure we have to abstract a little further, to the
most abstract (atomic) ontology. This generic ontology is filled with the
elementary business concepts comprising the atomic intelligent enterprise
architecture. The paper presents the ideas, the methodology and an application thereof. |
|
Title: |
MySDI: A GENERIC ARCHITECTURE TO DEVELOP SDI PERSONALISED SERVICES |
Author(s): |
João Ferreira and Alberto Silva |
Abstract: |
We introduce in this paper a generic architecture to deal
with the general problem: “How to Deliver the Right Information to the
Right User?”. We discuss this issue through the proposal of our SDI
(Selective Dissemination of Information) Personalised Architecture, called
MySDI, which is based on the software agent paradigm as well as on
information retrieval techniques. In order to clarify and validate this
proposal, we also introduce in this paper a prototype service, called
MyGlobalNews, which is intended to be a public service providing personalised
news. |
|
Title: |
DATA WAREHOUSE STRIPING: IMPROVED QUERY RESPONSE TIME |
Author(s): |
Jorge Bernardino and Henrique Madeira |
Abstract: |
The increasing use of decision support systems has led to an
explosion in the amount of business information that must be managed by
data warehouses. Therefore, data warehouses must have efficient Online
Analytical Processing (OLAP) that provides tools to satisfy the
information needs of business managers, helping them to make faster and
more effective decisions. Different techniques are used to improve query
response time in data warehouses, ranging from classical database indexes
and query optimization strategies to typical data warehouse materialized
views and partitioning. However, until now most work has concentrated on
centralized data warehouses, where a single computer contains all the data.
However, a large centralized data warehouse is very expensive because of
the great setup costs and does not take advantage of the distributed
nature of actual organizations operating worldwide. In this paper, we
propose a data partitioning approach specially designed for distributed
data warehouse environments called data warehouse striping (DWS). This
technique takes advantage of the specific characteristics of star schemas
and typical OLAP query profiles, guaranteeing an optimal load balance of query
execution and assuring high scalability. The proposed schema is evaluated
experimentally for most typical OLAP operations using different types of
queries and it is shown that an optimal speedup can be obtained. |
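The following toy sketch shows only the general striping idea, under the assumption (consistent with the abstract) that fact rows are spread row by row over the nodes while each node answers the query on its own stripe and a coordinator merges the partial aggregates; it is not the authors' DWS implementation, and the fact table and node count are invented.

```python
from collections import defaultdict

N_NODES = 3   # assumed number of computers in the striped warehouse

# Fact rows: (product, region, sales_amount) -- a toy star-schema fact table.
facts = [("p1", "north", 10), ("p2", "north", 5), ("p1", "south", 7),
         ("p2", "south", 3), ("p1", "north", 2), ("p2", "south", 8)]

# Striping: fact rows are spread row by row (round robin) over the nodes,
# while the small dimension tables would be replicated on every node.
nodes = [[] for _ in range(N_NODES)]
for i, row in enumerate(facts):
    nodes[i % N_NODES].append(row)

def local_query(rows):
    """Each node aggregates its own stripe independently (in parallel)."""
    agg = defaultdict(int)
    for product, region, amount in rows:
        agg[(product, region)] += amount
    return agg

def merge(partials):
    """The coordinator merges the partial aggregates into the final answer."""
    total = defaultdict(int)
    for p in partials:
        for key, value in p.items():
            total[key] += value
    return dict(total)

print(merge(local_query(stripe) for stripe in nodes))
```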
|
Title: |
IODBCON: AN INTEGRATED OBJECT-ORIENTED DATABASE SYSTEM FOR
INTEGRATING INFORMATION ABOUT ARCHITECTURE DESIGN AND
CONSTRUCTION PROCESSES |
Author(s): |
Farhi Marir and Yau Jim Yip |
Abstract: |
This paper presents IODBCON (Integrated Object Database
for Construction), an interactive system for integrating CAD and
construction related applications to address the problems of design
fragmentation and the gap that exists between construction and design
processes. It provides a vehicle for storing architectural design
information in an integrated construction object-oriented database that
can be shared by a range of computer applications. The IODBCON model is
characterised by several new features. It uses the object-oriented
modelling approach to establish standard models for architectural design
that comply with Industry Foundation Classes (IFC) for common
interpretation of construction design objects and with CORBA (Common
Object Request Broker Architecture) for distribution of the objects
amongst the construction applications. It aims to achieve independence
from the display environment by providing a set of Abstract Factory and
Abstract Design Classes, which provide abstractions that the design model
classes can use to draw and render themselves in any display environment.
More importantly, graphical and textual information about the building
design components is directly saved as instances in an object-oriented
database without passing through the existing CAD databases. To
demonstrate the independence from the display environment, two
applications using IODBCON models are implemented. The first is an
interactive AutoCAD application, which creates instances of the IODBCON
design model and stores them directly in the distributed object database.
The second is a web-based application using VRML (Virtual Reality Modelling
Language) for remotely interrogating information stored within the
integrated database, visualising and manipulating the design components in
a 3D environment. Also, to demonstrate the feasibility and practicability of
the OSCON (Open Systems for Construction) object-oriented product model,
three OSCON construction applications that access and share the IODBCON
building design instances are presented. |
|
Title: |
THE CONCEPT OF INFORMATIONAL ECOLOGY |
Author(s): |
Luigi Lancieri |
Abstract: |
Through the metaphor of informational ecology, this
paper reports on the various information re-use strategies in
the company. We propose a segmentation model of these strategies and a
method for evaluating their benefits. We focus on the re-use of the
information contained in intermediate storage spaces such as
proxy caches, news servers and other types of shared memory available
in the company. We show, with examples, that informational ecology can be
advantageous in more than one way, and in particular to optimize
information management in the company. |
|
Title: |
MODELLING DATA INTEGRATION IN AN OBJECT BASED GEOGRAPHICAL INFORMATION
SYSTEM |
Author(s): |
Maria Luisa Damiani |
Abstract: |
The integration of geographical data from multiple and
heterogeneous sources is a complex process generally performed in an
incremental way. Current GIS systems provide a wide range of services to
support at least physical data integration. A critical point in the
development of a GIS application is the design of the integration process.
Such a design is necessary to plan out the integration operations which
are to be performed to achieve an effective database. Integration process
modeling can simplify both the analysis and the documentation of the
database construction. In the paper a possible and general approach to the
problem is discussed. A case study of spatial data integration in the
nautical field is finally presented. |
|
Title: |
INTEGRATING ASSOCIATION RULE MINING ALGORITHMS WITH
RELATIONAL DATABASE SYSTEMS |
Author(s): |
Jochen Hipp, Ulrich Güntzer and Udo Grimmer |
Abstract: |
Mining for association rules is one of the fundamental
data mining methods. In this paper we describe how to efficiently integrate
association rule mining algorithms with relational database systems.
From our point of view direct access of the algorithms to the database
system is a basic requirement when transferring data mining technology
into daily operation. This is especially true in the context of large data
warehouses, where exporting the mining data and preparing it outside the
database system becomes annoying or even infeasible. The development of
our own approach is mainly motivated by shortcomings of current solutions.
We investigate the most challenging problems by contrasting the
prototypical but somewhat academic association mining scenario from basket
analysis with a real-world application. We thoroughly compile the
requirements arising from mining an operative data warehouse at
DaimlerChrysler. We generalize the requirements and address them by
developing our own approach. We explain its basic design and give the
details behind our implementation. Based on the warehouse, we evaluate our
own approach together with commercial mining solutions. It turns out that
regarding runtime and scalability we clearly outperform the commercial
tools accessible to us. More importantly, our new approach supports mining
tasks that are not directly addressable by commercial mining
solutions. |
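The paper's own system and the DaimlerChrysler warehouse are of course not reproduced here; the sketch below merely illustrates the underlying point that support counting can be pushed into the database with SQL instead of exporting the mining data first. SQLite and the tiny `baskets` table are assumptions made for the example.

```python
import sqlite3
from itertools import combinations

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE baskets (tid INTEGER, item TEXT)")
con.executemany("INSERT INTO baskets VALUES (?, ?)", [
    (1, "bread"), (1, "milk"), (2, "bread"), (2, "butter"),
    (3, "bread"), (3, "milk"), (3, "butter"), (4, "milk")])

def support(itemset):
    """Count supporting transactions directly in the database (no export)."""
    placeholders = ",".join("?" for _ in itemset)
    sql = (f"SELECT COUNT(*) FROM ("
           f"SELECT tid FROM baskets WHERE item IN ({placeholders}) "
           f"GROUP BY tid HAVING COUNT(DISTINCT item) = ?)")
    return con.execute(sql, (*itemset, len(itemset))).fetchone()[0]

items = [r[0] for r in con.execute("SELECT DISTINCT item FROM baskets")]
min_support = 2
for pair in combinations(sorted(items), 2):        # 2-itemsets only, for brevity
    s = support(pair)
    if s >= min_support:
        print(pair, "support =", s)
```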
|
Title: |
A UNIVERSAL TECHNIQUE FOR RELATING HETEROGENEOUS DATA MODELS |
Author(s): |
David Nelson,
Michael Heather and Brian Nicholas Rossiter |
Abstract: |
Interoperability is considered in the context of the ISO
standards for the Information Resource Dictionary System (IRDS) which
provide a complete definition of an information system from real-world
abstractions through constructs employed for data and function
descriptions to the physical data values held on disk. The IRDS gives a
four-level architecture which is considered 1) informally in terms of an
interpretation of the levels and the level-pairs between them, 2) in terms
of mappings between the levels and 3) formally in terms of a composition
of functors and adjoints across the various levels. An example is given of
the application of IRDS in a categorical context comparing the mappings
from abstractions to values in relational and object-based systems. Such
comparisons provide a route for interoperability between heterogeneous
systems. |
|
Title: |
EXPLOITING TEMPORAL GIS AND SPATIO-TEMPORAL DATA TO ENHANCE
TELECOM NETWORK PLANNING AND DEVELOPMENT |
Author(s): |
Dragan
Stojanovic, Slobodanka Djordjevic-Kajan and Zoran Stojanovic |
Abstract: |
In this paper, the modeling and management of
spatio-temporal-thematic data within an object-oriented GIS application
framework are presented. Object-oriented modeling concepts have been
applied to integrate spatial, thematic and temporal geographic
information in a spatio-temporal object model. Based on the model implementation
and the development of appropriate components for temporal GIS application
functionality, a temporal GIS application framework has been developed. A
description of its architecture and of the functional components dedicated to
the management of the temporal aspect of geographic information is given. The planning
and development of telecommunication networks using a temporal GIS
application developed around the framework are presented. The significant
performance and qualitative improvements and benefits enabled by such GIS
developments are described. |
|
Title: |
STRATEGIC IS PLANNING PRACTICES |
Author(s): |
Mario Spremic and Ivan Strugar |
Abstract: |
The role of information technology has grown in
importance in recent years. Thus strategic IS planning is becoming part of
companies' strategic business plans. This paper presents the results of a
survey on the strategic IS planning practices of Croatian companies. The
results of the survey are compared with similar surveys in Slovenia and
Singapore. In large Croatian companies IT is still regarded merely as a
tool for automating existing business processes, completely
neglecting the challenging role of IT as a competitive resource in the market. The
source of this problem appears to be a lack of knowledge and
interest among the top management of large corporations. Significant
effort must therefore be made on the management side, and the solution may
lie in the need to develop a completely new hybrid manager profile.
Evidently, this type of manager must acquire additional knowledge in
strategic business planning and IT management. |
|
Title: |
PRESENTATION OF AN INFORMATION SYSTEMS ARCHITECTURE MODEL
FOR PUBLIC SECTOR |
Author(s): |
Tania Fatima Calvi Tait and Roberto C. S. Pacheco |
Abstract: |
This paper presents an information systems architecture
(ISA) model that comprises the integration of information systems,
technology, business processes and users in the public sector environment. The
ISA model considers specific aspects of the public sector as
well as characteristics of information systems development and use in this
sector found in the literature and in studies carried out in Brazilian
state public informatics enterprises. The proposed model is based
on the integration of business, information technology (IT) and information
systems, and it is structured into five components: (a) Government
structure (mission and organizational culture, planning and government
platform); (b) Public services (considered the "business" of the public
structure, with information for citizens, for top-level government and for
administrative and technical staff); (c) Information systems (including the
legacy systems and executive information systems); (d) Information
technology (centered on investment policies and the government
computational platforms, such as the microcomputer and mainframe relationship);
and (e) Users (centered on public needs: training,
adaptation and use of the IS). The proposed model was submitted for
validation to Brazilian state public informatics enterprises.
The results point out the relevance of an integrated vision of the
components and make it possible to establish strategies for implementing the ISA
model while observing public sector specificities. |
|
Title: |
SUPPORTING QUERY PROCESSING ACROSS APPLICATION SYSTEMS |
Author(s): |
Klaudia Hergula, Gunnar Beck and Theo Härder |
Abstract: |
With the emergence of so-called application systems which
encapsulate databases and related application components, pure data
integration using, for example, a federated database system is not
possible anymore. Instead, access via predefined functions is the only way
to get data from an application system. As a result, the combination of
generic query as well as predefined function access is needed in order to
integrate heterogeneous data sources. In this paper, we present a
middleware approach supporting this novel and extended kind of
integration. Starting with the overall architecture, we explain the
functionality and cooperation of its core components: a federated
database system (FDBS) and a workflow management system (WfMS) connected
via a wrapper. Afterwards, we concentrate on essential aspects of query
processing across these heterogeneous components. Motivated by
optimization demands for such query processing, we describe the native
functionality provided by the WfMS. Moreover, we discuss how this
functionality can be extended within the wrapper in order to obtain
salient features for query optimization. |
|
Title: |
OBJECT EVOLUTION MECHANISMS IN OBJECT DATABASES |
Author(s): |
Slimane Hammoudi |
Abstract: |
Nowadays, most Object-Oriented Database Systems (OODBs)
display serious shortcomings in their ability to model the evolving and
multifaceted nature of common real-world entities. While researchers in
knowledge representation systems have been aware of this problem for some
time, database systems represent an environment in which this problem is
particularly severe. OODBs store objects over long periods, during which
the represented entities evolve. The intimate and permanent binding of an
object to a single type (class) inhibits the tracking of real-world
entities over time. In recent years, various mechanisms for supporting
many-faceted and evolving objects in the context of object-oriented data
models have been proposed in the literature. The first part of this paper examines
the semantics associated with object evolution and explores the
implications of these semantics for object-oriented concepts. The second
part discusses the functionality of various
object evolution mechanisms proposed in the literature in the context of
OODBs, and finally compares four representative and more recent works in
this area. |
|
Title: |
INTEGRATION OF DIFFERENT DATA SOURCES AND MESSAGE QUEUING SYSTEM
IN POSTNET-CASE STUDIES |
Author(s): |
Dejan Damnjanovic, Zoran Ristic, Miodrag Stanic, Petar Opacic, Boban Peric and Maja Radovanovic |
Abstract: |
In the last 3 years the Post of Serbia has built a large
enterprise network called Postnet. It is organized like an intranet,
connecting all parts of the postal system nationwide. Besides connecting
internal users, the goal of Postnet was to connect various external
systems such as banks, the Telecom of Serbia, Internet providers, tourist
agencies etc. Many of them require on-line real time transactions to
different data sources like Oracle, IBM DB2 or MS SQL Server DBMS. Since
each post office uses a local SQL Server database with its copy of the
necessary data, it was necessary to ensure that each transaction could span
two, most likely different, databases. To solve this problem we
evaluated several technologies, such as Microsoft DNA, Sun's EJB and
OMG CORBA. We decided to use a multi-tiered architecture based on Microsoft
Transaction Server (MTS) where each involved system is wrapped into one or
more MTS components. Each component exposes corporate data and business
logic through well-defined interfaces. The most interesting applications
are those that integrate Postnet as infrastructure for the PostBank in its
participation in the new national payment system. These applications use MTS
as transaction integrator of IBM DB2, SQL Server and Microsoft Message
Queue (MSMQ) transactions. |
|
Title: |
INFORMATION SYSTEMS INTEGRATION |
Author(s): |
Fredrik Ericsson |
Abstract: |
In this paper we address organizational and technological
issues on information systems integration and architectures when it comes
to self-developed relational database management systems, and how these
issues can be identified. Organizational and technological issues have
been identified using three levels of analysis that incorporate the
organizational context, application, and data source. We introduce the
concepts of data source interface and application interface in order to
analyze the relation between data sources and the conformity in how systems
appear and behave on users' commands. The empirical research represented in
this paper has been conducted through a case study. The unit of analysis
is a Swedish small to medium-sized manufacturing company. The
organizational and technological issues outlined in this paper are valid
in a context where the developers' primary activity does not reside in the
field of information systems development and where the organization does
not have an IT function responsible for the organization’s use of
IT. |
|
Title: |
MIDEA: A MULTIDIMENSIONAL DATA WAREHOUSE METHODOLOGY |
Author(s): |
José María Cavero, Mario Piattini and Esperanza Marcos |
Abstract: |
Developing a Data Warehouse has become a critical factor
for many companies. Specific issues, such as conceptual modeling, schema
translation from operational systems, physical design, etc., have been
widely treated. Unfortunately, there is no generally accepted, complete
methodology for data warehouse design. In this work we present MIDEA, a
multidimensional data warehouse development methodology based on a
multidimensional data model. The methodology is integrated within a
general software development methodology. |
|
Title: |
INSERTING DATA WAREHOUSE IN CORPORATIONS |
Author(s): |
Walter Adel Leite Pereira and Karin Becker |
Abstract: |
Particular interest in Data Warehouse (DW)
technology has been observed among corporations aiming to improve their decision
processes. A large number of corporations that have no tradition in the
use of computer systems for decision support have to rely on a team
qualified in the development of traditional operational systems and
database technology, but inexperienced in DW development issues. Moreover,
for a number of reasons (e.g. availability, costs, privacy), it is not
always possible to count on external development teams or consultants.
This work presents a methodology targeted at the development of DW pilot
projects, which aims at the smooth adoption of DW technology by
corporations. The methodology has been successfully tested in a military
DW pilot project, and the results obtained so far confirm its adequacy and
consistency towards the established goals. The paper describes the
striking features of the methodology and analyses its application in a real
case study. |
|
Title: |
FRONT-END TOOLS IN DATA WAREHOUSING: INFORMIX METACUBE (ROLAP) VS.
COGNOS POWERPLAY (MOLAP) |
Author(s): |
Enrique Medina and Juan C. Trujillo |
Abstract: |
A Data Warehouse (DW) is a subject-oriented, integrated,
nonvolatile, time-variant collection of data in support of management
decisions. On-Line Analytical Processing (OLAP) tools based on the
Multidimensional (MD) model are the predominant front-end tools to analyze
data in DW. Current OLAP servers can be either relational systems (ROLAP)
or proprietary multidimensional systems (MOLAP). This paper presents the
underlying semantics of the MD model and compares how current ROLAP and
MOLAP front-end tools provide these semantics. |
|
Title: |
AUTOMATION QUERY PROCESSING SELECTION ALGORITHMS |
Author(s): |
Ying Wah Teh, Abu Bakar Zaitun and Sai Peck Lee |
Abstract: |
In a manufacturing environment, most processing data is
stored in commercial databases, and the database design depends on the
database designer. Given a query, the database management system picks
one of several query processing strategies to fulfil the user's requirement. A user
is thus given the impression that it is hard to optimize the response time of a query.
In this paper, we introduce the reader to current research activities
pertaining to query processing in the manufacturing environment. We list
the possible query results and recommend the appropriate query
processing technique for each. We then derive an
automated query processing selection algorithm that proposes the
appropriate query processing technique to users in order to improve the response
time. |
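The abstract does not disclose the actual selection algorithm, so the sketch below is only a generic, hypothetical illustration of rule-based strategy selection from a crude selectivity estimate; the thresholds and strategy names are not taken from the paper.

```python
def choose_strategy(estimated_rows, table_rows, has_index):
    """Pick a query processing technique from a crude selectivity estimate.
    Thresholds are purely illustrative, not taken from the paper."""
    selectivity = estimated_rows / table_rows
    if has_index and selectivity < 0.05:
        return "index lookup"            # few rows expected: use the index
    if selectivity < 0.30:
        return "index scan" if has_index else "full table scan"
    return "full table scan"             # large result: sequential scan wins

# Examples: (expected result size, table size, index available?)
for query in [(50, 100_000, True), (40_000, 100_000, True), (500, 100_000, False)]:
    print(query, "->", choose_strategy(*query))
```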
|
Title: |
DESIGN OF TEXTUAL DATAWEB |
Author(s): |
Kais Khrouf |
Abstract: |
The development of the Internet has generated an increase in
the volume of information available on this network. This information is
used more and more by companies for economic, strategic, scientific or
technical development. The most common way for a user to search for this
information across the web is to use "search robots". However, the
results of this kind of tool often do not satisfy users. This is the
reason why the dataweb today constitutes a need for companies in
order to take maximum advantage of the web and the information it contains.
The proposed warehouse allows us to store and analyze any type of
information extracted from the web. |
|
Title: |
AN ARCHITECTURE FOR INCORPORATING BUSINESS RULES IN AN
OBJECT-ORIENTED SYSTEM |
Author(s): |
Permanand Mohan and Sheik Yussuff |
Abstract: |
This paper discusses the design of an object-oriented
system with business rules where the rules are treated as objects in their
own right. The paper argues that an architecture-centric approach is
essential in developing any software system and puts forward a
multi-tiered architecture for incorporating business rules in an
object-oriented environment. In the architecture, rule objects are
completely separate from the domain objects, promoting rule maintenance.
No assumption is made about other layers such as the persistence layer,
allowing domain objects to be stored in any manner such as in an object
database, a relational database, a flat file, or some other format. The
architecture also lends itself to object distribution of both domain and
rule objects using standards such as the Common Object Request Broker
Architecture. |
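A minimal Python rendering of the architectural idea, assuming nothing beyond what the abstract states: rule objects are first-class and kept apart from domain objects, so rules can be added or changed without touching the domain classes. The `Order`, `Rule` and `RuleEngine` names are invented for the example; persistence and distribution would sit in separate layers, as the abstract suggests.

```python
class Order:                       # domain object: knows nothing about the rules
    def __init__(self, total, customer_age):
        self.total = total
        self.customer_age = customer_age

class Rule:                        # rule object: a first-class condition + action pair
    def __init__(self, name, condition, action):
        self.name, self.condition, self.action = name, condition, action

    def apply(self, obj):
        if self.condition(obj):
            self.action(obj)

class RuleEngine:                  # separate tier that evaluates rules on demand
    def __init__(self, rules):
        self.rules = list(rules)   # rules can be added/removed without touching Order

    def run(self, obj):
        for rule in self.rules:
            rule.apply(obj)

rules = [
    Rule("bulk-discount", lambda o: o.total > 1000,
         lambda o: setattr(o, "total", o.total * 0.9)),
    Rule("age-check", lambda o: o.customer_age < 18,
         lambda o: setattr(o, "rejected", True)),
]

order = Order(total=1500, customer_age=30)
RuleEngine(rules).run(order)
print(order.total)                 # 1350.0 -- discount rule fired, domain class untouched
```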
|
Title: |
TOWARD MEASURING THE SCALABILITY OF ENTERPRISE INFORMATION SYSTEMS |
Author(s): |
Ronald E. Giachetti, Chin-Shen Chen and Oscar A. Saenz |
Abstract: |
Scalability describes a desirable characteristic of a
system that has come to the forefront with the emergence of the Internet
as a platform for business systems. Scalability, broadly defined as the
ability of a system to adapt to change while maintaining an acceptable
level of performance, is a critical characteristic that any system dealing with
the Internet must possess. Almost all software vendors, system
architects, and system designers claim to have a scalable system. This
article develops a mathematical foundation on which scalability
measurement of enterprise information systems can be based. The
mathematical foundation is based on measurement theory and this paper
shows how this can be applied to measure scalability of enterprise systems
in the context of business-to-business commerce in supply chains. |
|
Title: |
SPINO: A DISTRIBUTED ARCHITECTURE FOR MASSIVE TEXT STORAGE |
Author(s): |
José Guimarães and Paulo Trezentos |
Abstract: |
In this paper we introduce a framework for text data
storage and retrieval. As an alternative to proprietary solutions, a
distributed architecture based on a Linux Beowulf cluster and open
source tools is used. In order to validate the proposed solution, a prototype
(Spino - Serviço de Pesquisa INteligente ou Orientada) has been built.
The prototype is oriented towards retrieving USENET articles through a Web
interface. Most of the points described can be applied to scenarios not
related to the Internet. The proposed framework falls within the Databases and
Information Systems Integration area and describes both the
conceptual and practical aspects of its implementation. The suggested solution
is presented from three different perspectives: web serving, off-line
processing and database querying. |
|
Title: |
INTERACTIVE SEARCH IN WEB CATALOGUES |
Author(s): |
G. Ciocca, I. Gagliardi, R. Schettini and S. Zuffi |
Abstract: |
E-commerce is one of the most challenging fields of
application of the new Internet technologies. It is clear that the larger
the number of items available to be presented, the more difficult it is to
guide the user towards the product he is looking for. In this article we
present a prototype for the interactive search of images in high-quality
electronic catalogues. The system is based on a visual information search
engine, and integrates a Color Management System for the faithful display
of images. |
|
Title: |
SPECIALIZING AGENTS ON DATA INTEGRATION IN A DATA WAREHOUSING
SYSTEM ENVIRONMENT |
Author(s): |
Joaquim Gonçalves, Anália Lourenço and Orlando Belo |
Abstract: |
Data selection, extraction and processing are frequent
tasks in enterprise information systems. Their relevance emerged from the
information requirements of enterprise managers to have permanently
available all the possible information about their working area. They try
to reach important pieces of data that will allow them to extract useful
and valuable knowledge in order to correct past misleads, to improve their
performance inside the organization, or even to foresee market
opportunities. Nowadays competitiveness between enterprises makes such
process crucial. It is necessary, and convenient, to surround the
decision-makers with all possible elements that might help them in their
daily activities. In fact, most of the enterprises' decision-making
processes need to be global and effective, as the enterprises’ success
depends on it. However, the data maintained in their operational systems
is not commonly arranged according to their analytical needs and
management perspectives, which does not contribute significantly to
decision-makers' effectiveness. Usually, the data is structured and treated
with the aim of supporting operational tasks and backing up daily activities.
Obviously, some problems arise when such data is oriented to
analytical purposes. Many times its structure has to be rebuilt and its
quality needs to be improved. With today's data growth, selecting,
gathering and treating data are increasingly complex and time-consuming
tasks. In order to make these tasks easier, more reliable and faster, a
specialized agent-based tool was conceived and developed that provides a
distributed computational platform especially designed and conceived to
support such kind of tasks and provide a set of special means of bridging
to integrate operational data into specific data warehousing systems. This
paper describes it, presenting its main functional architecture and
components, and emphasizing the aspects related to its development and
implementation. |
|
Title: |
SUPPORTING DECENTRALISED SOFTWARE-INTENSIVE PROCESSES
USING ZETA COMPONENT-BASED ARCHITECTURE DESCRIPTION LANGUAGE |
Author(s): |
Ilham Alloui and Flavio Oquendo |
Abstract: |
In order to provide a large variety of software products
and to reduce time to market, software engineering is nowadays moving
towards an architecture-based development where systems are built by
composing or assembling existing components that are often developed
independently. In this paper we advocate that a similar approach can be
adopted to build software-intensive processes that are characterised by
their complexity, heterogeneity and decentralisation. Indeed considering a
software-intensive process as monolithic is no longer true since parts of
it may exist and run separately from the others. Therefore those parts
referred to as processlets must be considered as the building blocks for
larger software processes. The paper presents ZETA, an interaction-based
architecture description language for building software-intensive
processes starting from existing processlets. ZETA is intended to be used
by process engineers in order to design the “software glue” that connects
processlets so that their interactions can be managed and/or controlled at
runtime. ZETA addresses the contents of interactions through an
intentional approach where interactions are guided by intentions that are
revoked/maintained given some defined conditions. The underlying theory of
the proposed approach is logic-based. The industrial relevance of the
proposed approach, in a decentralised and heterogeneous context, is
currently being demonstrated based on case studies from the automotive
industry within the framework of an ESPRIT IV LTR Project. |
|
Title: |
A UNIFIED FRAMEWORK TO INCORPORATE SOFT QUERY INTO IMAGE RETRIEVAL SYSTEMS |
Author(s): |
Cyrus Shahabi and Yi-Shin Chen |
Abstract: |
We explore the use of soft computing and user-defined
classifications in multimedia database systems for content-based queries.
With multimedia databases, due to the subjectivity of human perception, an
object may belong to different classes with different probabilities ("soft"
membership), as opposed to the "hard" membership supported by conventional
database systems. Therefore, we propose a unified model that captures both
hard and soft memberships. In practice, however, our model significantly
increases the computation complexity (both online and off-line) and the
storage complexity of content-based queries. Previously, we introduced a
novel fuzzy-logic based aggregation technique to address the online
computation complexity. In this paper, we propose novel techniques to
cluster sparse user profiles (i.e., items with missing data) to reduce
both the off-line computation complexity and the storage complexity. |
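A toy illustration of the unified membership idea (not the authors' fuzzy aggregation or profile-clustering techniques): hard membership is treated as the degenerate case where one class degree is 1, and a query is scored against both kinds of objects in the same way. Class names, degrees and query weights are invented for the example.

```python
# Each image carries membership degrees over user-defined classes.
# Hard membership is the special case where one degree is 1 and the rest 0.
images = {
    "img1": {"sunset": 1.0},                       # hard membership
    "img2": {"sunset": 0.7, "beach": 0.3},         # soft membership
    "img3": {"beach": 0.9, "forest": 0.1},
}

def score(memberships, query_weights):
    """Relevance of one object to a content-based query over class weights."""
    return sum(query_weights.get(cls, 0.0) * deg
               for cls, deg in memberships.items())

query = {"sunset": 0.8, "beach": 0.2}              # the user's (soft) query preferences
ranked = sorted(images, key=lambda k: score(images[k], query), reverse=True)
for name in ranked:
    print(name, round(score(images[name], query), 2))
```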
|
Title: |
THE DECOR TOOLBOX FOR WORKFLOW-EMBEDDED ORGANIZATIONAL MEMORY ACCESS |
Author(s): |
Andreas Abecker, Ansgar Bernardi, Spyridon Ntioudis, Gregory Mentzas, Rudi Herterich, Christian Houy, Stephan Müller
and Maria Legal |
Abstract: |
We briefly motivate the idea of business-process oriented
knowledge management (BPOKM) and sketch the basic approaches to achieve
this goal. Then we describe the DECOR (Delivery of context-sensitive
organisational knowledge) project which develops, tests, and consolidates
new methods and tools for BPOKM. DECOR builds upon the KnowMore framework
[34,35] for organizational memories (OM), but tries to overcome some
limitations of this approach. In the DECOR project, three end-user
environments serve as test-beds for validation and iterative improvement
of innovative approaches to build: - knowledge archives organised around
formal representations of business processes to facilitate navigation and
access, - active information delivery services which - in collaboration
with a workflow tool to support weakly-structured knowledge-intensive work
- offer the user in a context-sensitive manner helpful information from
the knowledge archive, and - methods for an organisation analysis from the
knowledge perspective, required as supporting methods to design and
introduce the former two systems. In this paper, we present the basic
modules of the DECOR toolkit and elaborate on their current status of
development. |
|
Title: |
INTEGRATING LEGACY APPLICATIONS WITHIN A JAVA/CORBA ENVIRONMENT |
Author(s): |
Rod Fatoohi and Lance Smith |
Abstract: |
This paper examines the design and implementation process
of applying a Java/CORBA solution to legacy code support environments. A
support environment is defined as a collection of programs and scripts
that support a monolithic application. The goal in developing an object
oriented support environment is to allow monolithic legacy codes to accept
inputs from and provide outputs to highly distributed applications and
databases. The monolithic codes for this research are Computational Fluid
Dynamics (CFD) applications. Batches of CFD jobs are currently run via a
collection of C shell scripts. As production demands have grown, the
scripts have become more complex. A Java/CORBA solution has been developed
in order to replace the scripting system with a flexible, extensible
production system. The implementation of four technologies is explored
here: Java, CORBA, UML, and software design patterns. The architecture,
implementation and design issues are presented. Finally, concluding
remarks are provided. |
|
Title: |
CONCEPTUAL MODELING OF FUZZY OBJECT-ORIENTED DATABASE SYSTEMS |
Author(s): |
A. Goswami and Prabin Kumar Panigrahi |
Abstract: |
In this paper we propose a new approach for the
development of a Fuzzy Object-Oriented Database model. Real-world database
applications require users to specify their need to represent, store and
manipulate imprecise, uncertain and vague information in a natural language
such as English. The development of a model in a Relational Database
Management System starts with user requirement specifications,
normalization and then conversion to tables. Our aim is to develop an
equivalent methodology for Fuzzy Object Oriented Database System. |
|
Title: |
THE MOMIS APPROACH TO INFORMATION INTEGRATION |
Author(s): |
D. Beneventano, S. Bergamaschi, F. Guerra and M. Vincini |
Abstract: |
The web explosion, both at the internet and intranet level,
has transformed electronic information systems from single isolated
nodes into entry points to a worldwide network of information exchange
and business transactions. |
|
Area 2 - ARTIFICIAL INTELLIGENCE AND DECISION SUPPORT SYSTEMS
Title: |
MOVING
CODE (SERVLET STRATEGY) VS. INVITING CODE (APPLET STRATEGY) |
Author(s): |
Zakaria
Maamar |
Abstract: |
In this position paper, we aim at describing two
strategies that could enhance the functioning of software agents. The terms servlet
and applet denote these strategies respectively. In the servlet strategy,
the flow takes place from the client to the server. The applet strategy
performs differently; the flow takes place from the server to the client.
Applying both strategies to workflows, as a potential application domain,
is also discussed. |
|
Title: |
A
LAYERED ARCHITECTURE FOR MULTI-AGENT SYSTEMS TO SUPPORT A WATER MAIN
REHABILITATION STRATEGY |
Author(s): |
Bernadette
Sharp, E. Robert. Edwards and Angela Dean |
Abstract: |
In complex applications like the water utilities, the
expertise that we are trying to integrate into our system consists of the various kinds of knowledge, skills and
strategies possessed by different groups who collaborate to develop an efficient and pro-active water
main rehabilitation strategy. A single knowledge-based system would fail to integrate this diversity of
knowledge and strategy. A multi-agent system is an ideal setup to represent this diversity. This paper
describes how a layered architecture can bring the local expertise and strategies of various groups together to
meet the business objectives of an organisation and produce a coherent and pro-active rehabilitation strategy.
|
|
Title: |
AGENT-BASED
APPLICATION ENGINEERING |
Author(s): |
Rosario
Girardi |
Abstract: |
The main difficulties in the practice of reuse techniques are
due to the lack of reusable software abstractions for the development of specific applications in various and
rapidly changing domains. Therefore, current research has been centered on problems related to building
reusable software artifacts at a high level of abstraction - like language and domain languages, reusable
software architectures and software patterns. Application Engineering is the main discipline addressing
solutions to these problems. This work proposes a model for developing reusable software using the agent
paradigm. Reusable software abstractions generated through Agent-based Application Engineering are analyzed
considering both their abstraction level and domain dependence.
|
|
Title: |
A MULTIAGENT SYSTEM APPLIED TO THE DESIGN OF
PETROLEUM
OFF-SHORE PLATFORMS |
Author(s): |
José
Avelino Placca and Ana Cristina Bicharra Garcia |
Abstract: |
This paper presents a model for cooperative multi-agents
that interact in a closed environment, i.e., an environment where the rules are well known by the agents
and do not change dynamically. In a computational environment where several different software
agents share resources to reach their goals, the occurrence of conflicts is inevitable. Therefore it is of
highest importance to have an efficient mechanism to solve such conflicts. While other models in the literature
present solutions for conflict resolution based on centralized and distributed algorithms, our proposal is to
solve the conflicts in a hierarchical manner. The proposed model is based on the application of Social Laws
and it was inspired by the theory of Jean Jacques Rousseau’s Social Contract, being denominated Tri-Coord
Model, that is, of Triple Coordination. We implemented a prototype based on the Tri-Coord Model, and
applied it to the task of oil and gas process floorplan design on a petroleum off-shore platform. In
that domain the project is divided into several subsystems, with an agent being responsible for each sub-system. Our
experiments indicate that the Tri-Coord Model can indeed yield better performance in the
development of a project. First, because it decreases the interruptions caused by meetings held to solve
conflicts related to the customization of sub-systems' parameters. Second, because it removes the bottleneck
associated with a centralized model for conflict resolution. Third, because through the regulating
environment of Tri-Coord the agents' behavior can be controlled implicitly by the environment. In that way the
agents learn how to interact correctly, and the independence between the environment and the agents makes
the system more flexible.
|
|
Title: |
MULTI-AGENT DYNAMIC SCHEDULING AND RE-SCHEDULING
WITH
GLOBAL TEMPORAL CONSTRAINTS |
Author(s): |
Joaquim
Reis and Nuno Mamede |
Abstract: |
A co-ordination mechanism is proposed for multi-agent
production-distribution co-operative scheduling problems, based on a purely temporal perspective. This
mechanism is based on communication among pairs of client-supplier agents involved in the problem, and
allows agents to locally perceive hard global temporal constraints. By exchanging limited specific information,
the agents are able to recognise non over-constrained problems and, in that case, rule out non
temporally-feasible solutions and establish an initial solution. The information is then used to guide re-scheduling and repair
the initial solution and converge to a final one.
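As a rough, centralized sketch of the temporal reasoning involved (the paper's mechanism is distributed and message-based between client-supplier agent pairs, which is not reproduced here), the code below propagates earliest-start and latest-finish windows along a supplier-to-client chain and detects over-constrained problems. Task names, durations, release time and due date are hypothetical.

```python
# Tasks along a production-distribution chain: supplier -> ... -> client.
# Each agent's task has a duration; the final client imposes a due date.
durations = {"supplier": 3, "producer": 5, "distributor": 2}
chain = ["supplier", "producer", "distributor"]
release_time, due_date = 0, 12          # hard global temporal constraints

# Forward pass: earliest start/finish times.
earliest = {}
t = release_time
for task in chain:
    earliest[task] = (t, t + durations[task])
    t += durations[task]

# Backward pass: latest start/finish times compatible with the due date.
latest = {}
t = due_date
for task in reversed(chain):
    latest[task] = (t - durations[task], t)
    t -= durations[task]

for task in chain:
    es, ef = earliest[task]
    ls, lf = latest[task]
    if es > ls:
        print(task, ": over-constrained, no temporally feasible schedule")
    else:
        print(f"{task}: start in [{es}, {ls}], finish in [{ef}, {lf}]")
```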
|
|
Title: |
AN
INTELLIGENT AGENT-BASED ORDER PLANNING FOR DYNAMIC NETWORKED ENTERPRISES |
Author(s): |
Américo
L. Azevedo, César Toscano and João Bastos |
Abstract: |
There is currently an increasing interest in exploring the
opportunities for competitive advantage that can be gained by reinforcing core competencies and innovative
capabilities through networks of industrial and business partners.
This paper firstly identifies some of the gaps that exist
within current information systems that claim to support eBusiness and eWork in networked enterprises and
describes some of the general requirements of distributed and decentralised information systems for
companies operating in networks. It goes on to cover some principles for the design of a distributed IS
providing an advanced infrastructure to support general co-operation, particular methodologies for co-operative
and collaborative planning and guidelines for network set-up and support.
The present work is one of the areas currently being
delivered as part of the European IST consortium called Co-OPERATE. A distributed and decentralised information
system, based on an architecture of agents and extensively using the internet, is being designed and
implemented as a means to provide new and more powerful decision support tools for networked enterprises.
|
|
Title: |
MACHINE
LEARNING APPROACHES FOR IMAGE ANALYSIS: RECOGNITION OF HAND ORDERS BY A
MOBILE ROBOT |
Author(s): |
Basilio
Sierra, Iñaki Rañó, Elena Lazkano and Unai Gisasola |
Abstract: |
The work described in this paper is part of a project in
which the aim is to deliver hand signals to a mobile B21 robot equipped with a color digital camera that
captures these signals in order to be interpreted as robotic instructions. To carry out this objective, we
first need to distinguish the hand and its position in order to achieve the semantics of the order given to the
robot. To identify the hand from the background, we search for the skin colored pixels in the picture. In this
paper the task is presented as a machine learning classification problem in which the goal is to determine,
given a general picture, which of the pixels corresponds to the hand and which of them compose the
background. Results of the classification algorithms used are converted to black and white images,
where the white pixels indicate the hand and the black pixels indicate the background. We have used
different Machine Learning algorithms and compared the accuracy of different approaches used by the Machine
Learning community.
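The paper compares several learned classifiers, which are not reproduced here; the sketch below only shows the per-pixel classification step that produces the black-and-white hand mask, using a crude hand-written RGB rule as a stand-in for a trained model. NumPy, the threshold values and the tiny synthetic image are assumptions of the example.

```python
import numpy as np

def skin_mask(rgb_image):
    """Classify each pixel as hand (skin) or background.
    A crude RGB rule stands in for the learned classifiers compared in the paper."""
    r = rgb_image[..., 0].astype(int)
    g = rgb_image[..., 1].astype(int)
    b = rgb_image[..., 2].astype(int)
    skin = (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & ((r - g) > 15)
    # White (255) marks the hand, black (0) the background.
    return np.where(skin, 255, 0).astype(np.uint8)

# Tiny synthetic 2x2 "image": one skin-like pixel, three background pixels.
image = np.array([[[200, 120, 90], [30, 30, 30]],
                  [[10, 200, 10], [90, 90, 200]]], dtype=np.uint8)
print(skin_mask(image))
```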
|
|
Title: |
A
KNOWLEDGE-ACQUISITION METHODOLOGY FOR A BLAST FURNACE EXPERT SYSTEM USING
MACHINE LEARNING TECHNIQUES |
Author(s): |
Eugenia
Díaz, Javier Tuya and Faustino Obeso |
Abstract: |
This paper describes a methodology for obtaining the
knowledge that an expert system needs for controlling a blast furnace as if it were an expert operator. The
methodology separates useful knowledge from erroneous knowledge and refines the resulting rules so as
to achieve a manageable and effective rules system.
|
|
Title: |
USING
VIRTUAL REALITY DATA MINING FOR NETWORK MANAGEMENT |
Author(s): |
K.E.
Thornton and C. Radix |
Abstract: |
In this paper a Virtual Reality Data Mining (VRDM) tool is
described which has been developed over the past two years in order to provide network management of
ATM networks. The tool enables network data capture, and visualization of this data. A demonstration
of the tool may be found at http://www.durham.ac.uk/CompSci/research/dmg.
In our work we have reduced the time costs of the use of VR environments by combining approaches taken in
Functional Programming and Data Mining. This has led to the production of a VRDM tool which enables rapid,
focused analysis of network information at a required layer, and element, of a network. The tool we
describe is the first application of Virtual Reality for network management via the use of embedded, distributed
(parallelized), Data Mining algorithms.
|
|
Title: |
ONTOLOGY-DRIVEN
VIRTUAL PRODUCTION NETWORK CONFIGURATION: A CONCEPT AND
CONSTRAINT-OBJECT-ORIENTED KNOWLEDGE MANAGEMENT |
Author(s): |
Alexander
V. Smirnov |
Abstract: |
Interest in global businesses and in such a new form of co-operation as the Virtual Production Network (VPNet) is growing along with the increasing use of Internet-based
engineering and management technologies and the trend towards VPNet data & knowledge management.
The ontology-driven approach to VPNet configuration is a new approach to configuring a global production
network in order to improve the efficiency of knowledge use for decision making over the total VPNet facility life-time. The
kernel of this approach is a distributed multi-level constraint satisfaction technology based on a shared
constraint-object-oriented knowledge domain model "product – process – resources". This paper
discusses a generic methodology for VPNet configuration management and ontology-driven information support based on
Knowledge Management Technology for distributed decision making.
|
|
Title: |
CONFLICT
AND NEGOTIATION AMONG INTENTIONAL AGENTS |
Author(s): |
Fernando
Lopes, Nuno Mamede, A. Q. Novais and Helder Coelho |
Abstract: |
Negotiation has been extensively discussed in management
science, economics, and social psychology. Recent growing interest in artificial agents and their
potential application in areas such as electronic commerce has given increased importance to automated
negotiation. This paper defines the social concept of conflict of interests, presents a formal prenegotiation
model that acknowledges the role of conflict as a driving force for negotiation, and introduces a generic
negotiation mechanism that handles multi-party, multi-issue negotiation over single or repeated rounds.
|
|
Title: |
QUANTIFICATION
OF THE EMOTIONAL QUOTIENT OF THE INTELLIGENCE USING THE CLASSIC AND FUZZY
LOGIC |
Author(s): |
Lucimar
F. de Carvalho, Roberto Rabello, Rosane R. de Morais, Sílvia M. Nassar
and Cristiane Koehler |
Abstract: |
The objective of this paper is to integrate and compare
the results of several Intelligence Quotient (IQ) tests: to integrate them through software that applies
classic logic to identify Emotional Intelligence (EI), and to compare the IQ tests,
after EI has been identified, through the quantification of three scales (the Wechsler Adult
Intelligence Scale (WAIS), Stanford-Binet and Progressive Matrices) using fuzzy-logic modelling. The
software can be used to quantify a person's emotional capacity, an important factor in the development of IQ;
however, it should not be used for a definitive diagnosis, but as an aid to decision making.
|
|
Title: |
IMPLEMENTATION
AND APPLICATION OF FUZZY CASE-BASED EXPERT SYSTEM |
Author(s): |
Du
Jianfeng and Song Junde |
Abstract: |
Simulating the thought and inference of human beings is an
important methodology for developing knowledge-based intelligent systems. This paper investigates
the implementation techniques of a fuzzy case-based expert system (FC-ES). The emphasis is put on the design
and implementation of the fuzzy case inference engine (FC-IE). The paper creates a structure for
frame-based, classified fuzzy case representation, gives the definition of a case searching principle, and puts
forward two kinds of practical fuzzy case matching algorithms. Taking mobile communication network
optimization as an application of FC-ES, an example is presented to demonstrate the feasibility and
efficiency of FC-IE.
|
|
Title: |
WINWIN
DECISION SUPPORT FRAMEWORK: A CASE STUDY ON STUDENTS’
IN-COURSE-ASSESSMENT IN A SOFTWARE ENGINEERING MODULE |
Author(s): |
Peter
K Oriogun and R Mikusauskas |
Abstract: |
This paper investigates how a particular group of students
implemented the WinWin decision support framework in the resolution of conflicts through
negotiation, after the stakeholders have had the opportunity to raise issues, select options and finally arrive at
negotiated agreements satisfying all the parties involved. A statistical analysis of the students'
In-course-Assessment (ICA) for the module Software Engineering for Computer Science (IM283) is presented,
and the paper proposes the students' Negotiated Incremental Model (NIM) as a software process model for
developing software within a semester framework.
|
|
Title: |
MODEL
MANAGEMENT FACILITIES FOR CYBERDSS |
Author(s): |
Jamilin
Jais, H. Selamat, M. Azim A. Ghani ,A. Mamat, Zarina A. Rahman and H.
Hussein |
Abstract: |
Model management in cyber decision support systems (Cyber
DSS) is a new approach to problem-solving. This is because professional judgement and insight are
critical in decision-making. DSS is designed to support a manager's skill at all stages of decision-making
such as problem identification, choosing relevant data to work with, picking an appropriate approach to be
used in making the decision and evaluating the alternative course of action. In complex decision
situations, it will often be necessary to coordinate the application of multiple decision models for solving a
problem. DSS need a model management component that handles the tasks of identifying appropriate models
from a problem description, sequencing their application, and in substantiating them with the necessary
data.
|
|
Title: |
SARPlan:
A DECISION SUPPORT SYSTEM FOR CANADIAN SEARCH AND RESCUE OPERATIONS |
Author(s): |
Irène
Abi-Zeid |
Abstract: |
We present SARPlan, a decision support system designed to
assist search mission coordinators (SMC) of the Canadian Forces in the optimal planning of search and
rescue (SAR) missions. SARPlan is a geographic information system that provides an optimal allocation of
the available search effort. The optimization modules are based on search theory and on Constraint
Satisfaction Programming. Statistical models for determining the search object location distributions are
also included. The anticipated benefits of SARPlan include speeding up search and rescue operations hence
increasing the chances of finding lost aircraft and survivors, resulting in saved lives. In addition to being
a new tool for the SMC, SARPlan introduces new elements to the current working procedures.
|
|
Title: |
THE
FIRST TAX RETURN ASSESSMENT EXPERT SYSTEM IN SWITZERLAND |
Author(s): |
Marco
Bettoni and Georges Fuhrer |
Abstract: |
In realising the first Tax Return Assessment Expert System
in Switzerland we had first to make a convincing business case for an AI innovation in a
traditional governmental environment, secondly to show in the daily business environment that what can be
demonstrated to be viable in theory does also work in practice, thirdly to minimize the gap between the
tacit knowledge in the head of
assessment experts and the explicit model
of expertise that specifies the domain knowledge and finally to minimize
the gap between the specified expert knowledge and the system knowledge
formalized in the knowledge base. We present the project, our approach to meeting these challenges, the
current state of the productive system and a sketch of the final system under development.
|
|
Title: |
A
PRELIMINARY STUDY OF A NEW CLASSIFICATION TO BUILD HOMOGENEOUS PATIENT’S
GROUPS IN HOME-BASED CARE |
Author(s): |
Céline
Robardet and Christine Verdier |
Abstract: |
In the context of home-based care evaluation, we propose a
new methodology for creating homogeneous groups of patients. This tool has the particularity of
providing two related partitions, one that gathers patients with similar descriptors, and the other that
groups together the descriptors which characterize similar patients. The two partitions are linked by the
fact that each set of descriptors corresponds to a set of patients. These groups can be used for any type of evaluation in
health care: cost, quality of care and so on.
|
|
Title: |
THE
EFFECTS OF DIFFERENT FEATURE SETS ON THE WEB PAGE CATEGORIZATION PROBLEM
USING THE ITERATIVE CROSS-TRAINING ALGORITHM |
Author(s): |
Nuanwan
Soonthornphisaj and Boonserm Kijsirikul |
Abstract: |
The paper presents the effects of different feature sets
on the Web page categorization problem. These features are words appearing in the content of a Web page,
words appearing on the hyperlinks that link to the page, and words appearing in every heading in the
page. The experiments are conducted using a new algorithm called the Iterative Cross-Training algorithm
(ICT), which was successfully applied to Thai Web page identification. The main concept of ICT is to
iteratively train two sub-classifiers by using unlabeled examples in a crossing manner. We compare ICT against
a supervised naïve Bayes classifier and a Co-Training classifier. The experimental results show that ICT obtains
the highest performance and that the heading feature is considerably successful in helping
classifiers to build the correct model used in the Web page categorization task.
|
|
Title: |
COMPUTER
AUGMENTED COMMUNICATION IN COOPERATIVE GROUPS |
Author(s): |
Jean-Philippe
Kotowicz |
Abstract: |
Companies are plunged into an unstable and
internationalised environment. This environment evolves very fast technologically, and companies have to adapt
themselves to it. The methods and tools of work organization can benefit from new communication
technologies to make these adaptations more effective. We think that information and cooperation
mechanisms are going to move massively from a structured information system towards a communication
system without any a priori structure. A new type of software will then be necessary to disambiguate,
facilitate, structure and allow complex searches in the exchanges arising among the actors (human beings
and software agents) of a cooperative sociotechnical system. We thus propose a system for communication
mediation and corporate memory management, based on a multi-agent architecture. This
architecture allows usage adapted to the "professional" activities of tool users,
associated with the various levels of language processing.
|
|
Title: |
RELATIONSHIPS
BETWEEN THE DECISION SUPPORT SYSTEM SUBSPECIALTIES AND ARTIFICIAL
INTELLIGENCE |
Author(s): |
Sean
Eom |
Abstract: |
This is a comprehensive study that, by means of an
empirical assessment of the DSS literature, systematically identifies the DSS reference disciplines
and traces how concepts and findings by researchers in the contributing disciplines have been picked up by DSS
researchers to be applied, extended, and refined in the development of DSS research subspecialties. Cluster
analysis was applied to an author cocitation frequency matrix derived from a comprehensive database of
the DSS literature over the period of 1970
through 1993. Twelve clusters were uncovered consisting of
six major areas of DSS research (group DSS, foundations, model management, user interfaces,
implementation, and multiple criteria DSS) and six contributing disciplines (multiple criteria decision
making, cognitive science, organization science, artificial intelligence, group decision making, and systems science).
This study concludes that artificial intelligence has made important contributions to the development of
foundational concepts, model management, and multiple criteria decision support systems.
|
|
Title: |
CASE-BASED
DECISION SUPPORT SYSTEMS |
Author(s): |
Abdel-Badeeh
M. Salem and Nadia Baeshen |
Abstract: |
Information systems (IS) development methodologies are
aimed at improving the management and control of development process, structuring it, reducing its
complexity, and standardising both the process and the resulting product. Many IS development methodologies
have been developed over the years, some from practice and some from theory. Recently, artificial
intelligence researchers have begun to investigate the usage of the case-based reasoning (CBR) methodology in
improving human decision making. CBR means reasoning from experiences or "old cases" in an
effort to solve problems, critique solutions, and explain anomalous situations. This paper will explore what CBR
methodology involves and examine its processes and knowledge sources. These issues concern
representing, indexing and organizing past cases, retrieving and modifying old cases, and assimilating new
ones. Moreover, the paper discusses the potential role of the CBR methodology in the organizational
knowledge management approach, and in the lessons learned information systems.
|
|
Title: |
IMPROVEMENTS
IN THE DECISION MAKING IN SOFTWARE PROJECTS |
Author(s): |
I.
Ramos Román, J. Riquelme Santos and J. Aroba Páez |
Abstract: |
Software development project simulators based on
dynamic models have represented a significant advance over traditional estimation
techniques. These simulators make it possible to know the evolution of a project before, during and after its
execution. However, their use for estimating a project before execution begins has been held back by the
large number of project attributes that must be known in advance. This paper presents
the improvements that have been added to the simulator developed in our department to facilitate its
use, and a further improvement obtained by applying machine learning and fuzzy logic techniques to the
databases generated by the simulator. In this last case, the project manager can know, as a function of the
decisions that he takes, the degree to which the project objectives will be met.
|
|
Title: |
DEVELOPING
THE INFRASTRUCTURE FOR KNOWLEDGE BASED ENTERPRISES |
Author(s): |
Igor
T. Hawryszkiewycz |
Abstract: |
Knowledge management is now emerging as an important area
of interest in most business systems. It is particularly important in distributed organizations where
knowledge must be shared across distance.
Knowledge management goes beyond simply keeping explicit
documents and providing data mining and search facilities. It must also provide the infrastructure
where people can readily collaborate and combine explicit knowledge with their tacit knowledge to create
new knowledge or to carry out their work more effectively. Furthermore such collaboration must be an
intrinsic part of any business process rather than a process on its own. This paper defines the general
environment of knowledge management within distributed business processes and the kinds of computer
systems to support it. It then describes a way of analyzing knowledge needs followed by an implementation.
|
|
Title: |
DATA
REDUCTION TO IMPROVE KNOWLEDGE EXTRACTION |
Author(s): |
Maria
de Fátima Rodrigues and Pedro R. Henriques |
Abstract: |
Knowledge Discovery in Databases (KDD) is a challenging
machine learning application, because it must be a very efficient process while still meeting high
understandability and reliability requirements. First, the learning task must find all valid and non-redundant rules (rule
learning). Second, the datasets for learning are very large.
When learning from very large databases, the reduction of
complexity is of highest importance. We highlight the advantages of combining attribute value discretization
with rough set theory to find a subset of attributes that lets the KDD process discover more useful
patterns. We present the evaluation of this approach by providing results from the application of a
classification algorithm to various public domain datasets.
|
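The combination the abstract describes can be made concrete with a small sketch. The snippet below is only an illustration, under assumptions of ours, of the two ingredients named above: equal-width attribute value discretization followed by a rough-set dependency check that scores how well a subset of discretized attributes determines the class. It is not the authors' algorithm, and every function name, parameter and data value is hypothetical.

```python
from collections import defaultdict

def equal_width_discretize(values, n_bins=3):
    """Map continuous attribute values to interval indices (equal-width binning)."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0
    return [min(int((v - lo) / width), n_bins - 1) for v in values]

def dependency_degree(rows, attr_subset, decision):
    """Rough-set dependency of the decision attribute on a subset of
    (discretized) condition attributes: |positive region| / |universe|."""
    blocks = defaultdict(set)  # indiscernibility classes w.r.t. attr_subset
    for i, row in enumerate(rows):
        blocks[tuple(row[a] for a in attr_subset)].add(i)
    positive = sum(len(b) for b in blocks.values()
                   if len({rows[i][decision] for i in b}) == 1)
    return positive / len(rows)

# Toy dataset: two numeric attributes and a class label (all values invented).
raw = [(1.0, 10.0, 'yes'), (1.2, 30.0, 'yes'), (5.0, 11.0, 'no'), (5.5, 29.0, 'no')]
a0 = equal_width_discretize([r[0] for r in raw])
a1 = equal_width_discretize([r[1] for r in raw])
data = [{'a0': a0[i], 'a1': a1[i], 'class': raw[i][2]} for i in range(len(raw))]

print(dependency_degree(data, ['a0'], 'class'))  # 1.0: 'a0' alone determines the class
print(dependency_degree(data, ['a1'], 'class'))  # 0.0: 'a1' alone does not
```

In such a scheme, attributes whose removal does not lower the dependency degree are candidates for elimination before the rule-learning step.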
|
Title: |
DESIGNING
INTELLIGENT TUTORING SYSTEMS: A BAYESIAN APPROACH |
Author(s): |
Hugo
Gamboa and Ana Fred |
Abstract: |
This paper proposes a model and an architecture for
designing intelligent tutoring systems using Bayesian Networks. The design model of an intelligent tutoring
system is directed towards the separation between the domain knowledge and the tutor shell. The architecture
is composed of a user model, a knowledge base, an adaptation module, a pedagogical module and a
presentation module. Bayesian Networks are used to assess the user's state of knowledge and preferences, in
order to suggest pedagogical options and recommend future steps in the tutor. The proposed architecture is
implemented on the Internet, enabling its use as an e-learning tool.
An example of an intelligent tutoring system is shown for illustration
purposes.
|
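To make the role of the Bayesian machinery concrete, the following minimal sketch applies Bayes' rule to update the probability that a learner has mastered a single skill after observing one answer. It is a single-node update rather than a full Bayesian network, and the prior, slip and guess parameters are invented for illustration, not taken from the paper.

```python
def update_mastery(p_known, correct, slip=0.1, guess=0.2):
    """Posterior P(skill known | observed answer) via Bayes' rule.

    slip  = P(wrong answer | skill known)
    guess = P(correct answer | skill not known)
    """
    if correct:
        evidence = p_known * (1 - slip) + (1 - p_known) * guess
        return p_known * (1 - slip) / evidence
    evidence = p_known * slip + (1 - p_known) * (1 - guess)
    return p_known * slip / evidence

# A tutor shell could start from a neutral prior and refine it after each exercise,
# then choose the next pedagogical action based on the current estimate.
p = 0.5
for answer in [True, True, False, True]:
    p = update_mastery(p, answer)
    print(f"P(skill known) = {p:.2f}")
```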
|
Title: |
OPTIMIZING
USER PREFERENCES WHILE SCHEDULING MEETINGS |
Author(s): |
Rebecca
Y. M. Wong and Hon Wai Chun |
Abstract: |
Meeting
scheduling is a routine office task, which is highly time-consuming and
tedious. During the scheduling process, we need to consider timing and the
participant preferences (such as preference on meeting time, date or day
of the week). Being able to model and make use of individual user
preferences during scheduling is probably one of the key decisive factors
in the acceptance of these automated systems. This paper describes the
user preference model we have created and the results of simulating our
meeting scheduling algorithms in a set of experiments. Based on a software
agent architecture, meeting scheduling is performed through
negotiation among a set of software agents. From our experiments, we found
that higher quality schedules were generated when knowledge of user
preferences is passed to a coordinator and used during the negotiation.
The results had higher average preference levels, less deviation from the
expected schedule and a more even distribution of these deviations among
participants. |
|
Title: |
AFFECT-SENSITIVE
MULTI-MODAL MONITORING IN UBIQUITOUS COMPUTING: ADVANCES AND CHALLENGES |
Author(s): |
Maja
Pantic and Leon J. M. Rothkrantz |
Abstract: |
The
topic of automatic interpretation of human communicative behaviour, that
is, giving machines the ability to detect, identify, and understand human
interactive cues, has become a central topic in machine vision research,
natural language processing research and in AI research in general. The
catalyst behind this recent ‘human-centred computing hoopla’ is the
fact that automated monitoring and interpretation of human communicative
behaviour is essential for the design of future smart environments, next
generation perceptual user interfaces, and ubiquitous computing in
general. The key technical goals concern determining the context in
which the user acts, that is, disclosing in an automatic way where the
user is, what he is doing, and how he is feeling, so that the computer can
act appropriately. This paper is concerned with the last of these issues,
that is, with providing machines with the ability to detect and interpret
the user’s affective states. It surveys the past work done in tackling this
problem, provides a taxonomy of the problem domain, and discusses the
research challenges and opportunities. |
|
Title: |
EXPERT
SYSTEMS FOR DISORDERS PREDICTION |
Author(s): |
Soliman
A. Edrees |
Abstract: |
This
paper presents an expert system that uses climatic changes such as
temperature, frost, hail, storms, wind, downpour and other hazardous
conditions to predict the infestation severity and populations of plant
disorders. In addition to the prediction process, the expert system gives a
set of advice (i.e. chemical and/or agricultural operations) to the
growers. The system's advice should be applied during the current season to
control the predicted disorders, or applied next season to avoid disorder
infestation. |
|
Title: |
QUALITATIVE
REASONING FOR SOFTWARE DEVELOPMENT PROJECT BY CONSTRAINT PROGRAMMING |
Author(s): |
Antonio
J. Suárez, Pedro J. Abad, Rafael M. Gasca and J. A. Ortega |
Abstract: |
This
paper presents a new approach to the problem of estimation and
planning in software development projects (SDP). A
qualitative simulation of a part of the dynamic system of Abdel-Hamid
(the human resources subsystem) will be carried out. We model this
subsystem as a CSP (Constraint Satisfaction Problem), that is, as a
set of constraints that should be fully satisfied. Next,
the associated program will be generated under the constraint-programming
paradigm. This will simulate the dynamic subsystem and will give as a result
all its possible behaviours. In this way we manage to improve and make
more concrete the qualitative information that can be obtained about such a
subsystem. |
|
Title: |
OPTIMISING
THE GROUPING OF EMAIL USERS TO SERVERS USING INTELLIGENT DATA ANALYSIS |
Author(s): |
Steve
Counsell, Xiaohui Liu, Stephen Swift, Allan Tucker and Janet McFall |
Abstract: |
For
a large commercial business employing as many as twelve thousand staff, an
efficient email service is of paramount importance. This is particularly
true when large volumes of data are being sent by users to other users
located on different servers in the same site, or between users located on
servers in different sites. Choice of how to allocate users to servers is
equally important. A rule-of-thumb approach used to allocate users to
servers, based on the likelihood of one user (or group of users) emailing
other users (or groups), will rarely produce an optimal solution in terms
of minimising internal network traffic. In this paper, intelligent data
analysis is used to optimise the configuration of users to servers in
order to minimise internal traffic flow across the network. A
hill-climbing algorithm is used to identify the optimal arrangement of
users to servers within one site of a large accountancy organisation,
based on data from the log files of all emails sent between users in the
same site over a two month period. A metric is introduced by which each
potential configuration provided by the hill-climbing algorithm can be
measured for its fitness. Results show that for a single site with over
two thousand employees, a forty-two percent reduction in interserver
traffic between users within that site was achievable over that two month
period; this represented approximately seventeen percent of all messages
sent between users in that period. |
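As a rough illustration of the kind of optimisation described above, the sketch below hill-climbs over user-to-server assignments by swapping pairs of users, using the volume of mail that crosses server boundaries as the fitness to minimise. The data, the swap move and the parameters are hypothetical; this is not the authors' metric or implementation.

```python
import random

def inter_server_traffic(assign, traffic):
    """Fitness: number of messages whose sender and receiver sit on different servers."""
    return sum(n for (u, v), n in traffic.items() if assign[u] != assign[v])

def hill_climb(users, n_servers, traffic, iters=10_000, seed=0):
    rng = random.Random(seed)
    # Balanced round-robin start; swap moves keep every server's load unchanged.
    assign = {u: i % n_servers for i, u in enumerate(users)}
    best = inter_server_traffic(assign, traffic)
    for _ in range(iters):
        a, b = rng.sample(users, 2)              # candidate move: swap two users
        if assign[a] == assign[b]:
            continue
        assign[a], assign[b] = assign[b], assign[a]
        cost = inter_server_traffic(assign, traffic)
        if cost <= best:
            best = cost                          # keep improving (or equal) swaps
        else:
            assign[a], assign[b] = assign[b], assign[a]  # revert worsening swaps
    return assign, best

users = ["ann", "bob", "cat", "dan"]
traffic = {("ann", "bob"): 120, ("cat", "dan"): 80, ("ann", "cat"): 5}
print(hill_climb(users, n_servers=2, traffic=traffic))
```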
|
Title: |
TOWARDS
SCALABLE MULTI-AGENT SYSTEMS |
Author(s): |
Ralph
Deters |
Abstract: |
The
multi-agent research community is currently faced with a paradox. While
promoting the use of agents as the silver bullet for various software
engineering problems, it faces difficulties in presenting successful
deployments. Despite the countless multi-agent prototypes that have been
developed, the number of actually deployed and in use MAS is at best very
small [9]. And as long as multi-agent frameworks continue to encounter
difficulties in scaling up, it seems unlikely that this will change. This
paper has two aims. First, it is an attempt to relate the scalability
problem of multi-agent systems with that of executing large numbers of
concurrent threads. Second, it evaluates a CORBA/Java middle-ware layer
for transparent access to distributed resources. Using such a layer, it is
possible to build multi-agent systems that require large numbers of
concurrent threads and significant memory resources. |
|
Title: |
A
GDSS FOR SUPPORTING MANAGEMENT DECISIONS |
Author(s): |
Alberto
Carneiro |
Abstract: |
This
article is concerned with the decision process and examines the
relationships among strategic alternatives’ evaluation, groupware
technology, and group decision support systems (GDSS). The whole
evaluation and comparison process includes two phases: the establishment
and selection of basic strategic evaluation criteria and the appreciation
of the ranking of the strategic alternatives. A GDSS model has been
developed and used to support the evaluation of strategic alternatives.
The experimental results show that the proposed model produces better
results in evaluating strategic alternatives with multicriteria methods.
The major findings are discussed and directions for future research are
suggested. |
|
Title: |
AN
AGENT-BASED ARCHITECTURE OF FUTURE INTEGRATED OPERATIONAL DECISION SUPPORT
SYSTEMS |
Author(s): |
Blaga
N. Iordanova |
Abstract: |
Conflict-free
planning for air traffic is a new Integrated Operational Decision Support
(IODS) policy for airlines and air traffic control and management both in
Oceanic Airspace and in Domestic Airspace. It can be accomplished by an
agent-based architecture of a global network of new generation IODS
Systems for air traffic management. It puts forward an advanced air
space-time management supported by an agent-based monitoring of flights and
planning of their conflict-resolutions off-line. It aims at securing the
efficiency of airspace use and of air traffic control operations supported
by an IODS for pilots and controllers through satellite communications. |
|
Title: |
A
LOGIC PROGRAMMING APPROACH TO NEGOTIATION FOR CONFLICT RESOLUTION IN BDI
AGENTS SYSTEM |
Author(s): |
Myung-Jin
Lee and Jin-Sang Kim |
Abstract: |
In
most Multi-Agent Systems (MAS), each agent needs to be designed to
negotiate with other agents and to reach a mutually acceptable state where
agents can avoid goal conflicts due to their interdependencies on
others, such as price conflicts in electronic commerce systems. Many
problematic situations, however, may arise in which an agent encounters
goal conflicts with other agents and does not possess a complete set of knowledge
about them. Negotiation provides a solution to these problems even
when agents’ knowledge is not complete, and it is also used to
allocate tasks or resources in goal-inconsistency states. We present a
logic programming framework for negotiation in which agents are
represented on the basis of three major components: Belief, Desire, and
Intention (BDI). Further, we suggest a negotiation mechanism to resolve
goal conflicts accompanying each agent’s problem solving activities
in cooperative MAS. We implement the proposed negotiation mechanism and
test it using a simple example in InterProlog, a Java front-end and
functional enhancement for Prolog, based on a variant of the FIPA Agent
Communication Language (ACL) specification and its interaction protocols. |
|
Title: |
MULTIVARIATE
DISTRIBUTION GENERATION |
Author(s): |
Symon
Podvalny, Alexander Kalinin and Irina Chernikova |
Abstract: |
There
is a wide variety of classification methods in decision support systems.
Multivariate test distributions with a given correlation matrix allow the
quality of classification algorithms to be determined in terms of their exactness,
speed, noise robustness and other aspects. In order to generate such
distributions we need to generate the initial random vector with
independent components and a given distribution law, to define the covariance
matrix for the desired distribution, and to find the matrix of the linear
transformation from the initial vector to the desired one. The first problem is widely
known in the literature and solutions are presented in various
mathematical libraries. But the last one (creating the matrix of the linear
transformation) is more complex. In order to solve it we need to apply
the Cholesky decomposition to the initial correlation matrix. These
theoretical calculations are verified by experiments involving comparative
calculations of 2D/3D distributions. For the task of generating one-dimensional
distributions and executing the Cholesky decomposition we use the Java language
and the “Colt” scientific library. |
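The generation scheme the abstract outlines, transforming independent components by the Cholesky factor of the target covariance matrix, can be sketched as follows. The paper reports a Java implementation using the “Colt” library; the snippet below is only an equivalent illustration in Python/NumPy, and the example covariance matrix is arbitrary.

```python
import numpy as np

def correlated_normal_sample(n, cov, seed=None):
    """Draw n zero-mean samples with covariance `cov` by transforming independent
    standard normals with the Cholesky factor L, so that Cov(L z) = L I L^T = cov."""
    rng = np.random.default_rng(seed)
    cov = np.asarray(cov, dtype=float)
    L = np.linalg.cholesky(cov)                  # lower-triangular factor, cov = L @ L.T
    z = rng.standard_normal((n, cov.shape[0]))   # initial vector with independent components
    return z @ L.T

cov = [[1.0, 0.8, 0.3],
       [0.8, 1.0, 0.5],
       [0.3, 0.5, 1.0]]
samples = correlated_normal_sample(100_000, cov, seed=0)
print(np.round(np.cov(samples, rowvar=False), 2))   # should be close to `cov`
```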
|
Title: |
KNOWLEDGE
TRANSFER AS ENTERPRISE PERPETUM MOBILE? |
Author(s): |
Andrea
Kõ and András Gábor |
Abstract: |
Many
companies agree that their success at the micro and macro level depends
on how fast they can respond to the challenges of the knowledge society.
One interesting area is leveraging human capital and the efficient
compensation of knowledge workers. The paper introduces a knowledge-based
model of a compensation system. In building the knowledge base, an enterprise
ontology creation approach was followed. The applied methodology is CommonKADS.
The paper discusses the maintenance conditions of a cafeteria benefit
system. |
|
Title: |
USER
MODEL ISSUES IN WIDE SCOPE I.T.S. |
Author(s): |
António
Silva, Zita A. Vale and Carlos Ramos |
Abstract: |
Typically,
the user models used in systems like Intelligent Tutors tend to be
exclusively controlled by the system itself, due to the constraints posed
by the specific nature of the tutoring process. Therefore, it’s not
common for the trainee to be allowed to inspect and control his/her
model's contents. In order to make the evaluation of the tutoring process
a cooperative task between user and system, adequate techniques should be
devised. This paper describes early attempts to build a user model module
for an Intelligent Tutor to be used in the training of electrical network
Control Center operators. Furthermore, this user model component tries to
address the different demands of two distinct phases of this tutoring
environment: the conceptual/procedural knowledge acquisition phase and the
drill and practice phase. |
|
Title: |
AN
ALGORITHM FOR DETERMINING SUBSPACES CONTAINING CLUSTERS WITH MULTIPLE
MINIMUM DENSITY THRESHOLDS FOR NUMERICAL DATA |
Author(s): |
P.
R. Rao |
Abstract: |
Clustering
algorithms are used in database mining for finding interesting patterns in
high dimensional data. These are useful in many applications of knowledge
discovery in databases. Recently, there have been attempts to find
clusters embedded in the subspaces of high dimensional data. CLIQUE is one
such algorithm. In this algorithm, each attribute is partitioned into a
user-given number of intervals. Each interval is called a unit. A unit is
called dense if it contains a user-given fraction of the total data points.
Thus, the denseness of a unit is determined by a single user-given number,
for all the attributes. A single denseness criterion for all the
attributes implicitly means that all the attributes in the data set have
similar frequencies. This is not the case in some real-life applications.
In this paper the user is allowed to specify multiple denseness thresholds, one for
each attribute, to reflect the nature of their varied frequencies. An
algorithm is designed for identification of subspaces that contain
clusters, given the user-specified denseness value for each attribute. |
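The core idea, one minimum-density threshold per attribute rather than a single global one, can be illustrated with the following sketch. It only shows how per-attribute thresholds change which one-dimensional units count as dense; it is not the algorithm of the paper, and the interval count, thresholds and data are invented.

```python
def dense_units(points, n_intervals, thresholds):
    """For each attribute, return the indices of 1-D units that are dense under
    that attribute's own minimum-density threshold (a fraction of all points)."""
    n, n_attrs = len(points), len(points[0])
    result = []
    for a in range(n_attrs):
        vals = [p[a] for p in points]
        lo, hi = min(vals), max(vals)
        width = (hi - lo) / n_intervals or 1.0
        counts = [0] * n_intervals
        for v in vals:
            counts[min(int((v - lo) / width), n_intervals - 1)] += 1
        result.append([u for u, c in enumerate(counts) if c >= thresholds[a] * n])
    return result

# Attribute 0 is tightly clustered while attribute 1 is spread out, so attribute 1
# gets a lower threshold to compensate for its lower per-unit frequencies.
pts = [(0.1, 3.0), (0.2, 7.5), (0.15, 1.0), (0.9, 9.0), (0.12, 5.0), (0.18, 2.0)]
print(dense_units(pts, n_intervals=4, thresholds=[0.5, 0.25]))   # [[0], [0, 3]]
```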
|
Title: |
FUZZY
REASONING IN JESS: THE FUZZYJ TOOLKIT AND FUZZYJESS |
Author(s): |
Robert
Orchard |
Abstract: |
Jess,
the Java™ Expert System Shell, provides a rich and flexible environment
for creating rule-based systems. Since it is written in Java it provides
platform portability, extensibility and easy integration with other Java
code or applications. The rules of Jess allow one to build systems that
reason about knowledge that is expressed as facts. However, these facts
and rules cannot capture any uncertainty or imprecision that may be
present in the domain that is being modelled. This paper describes an
extension to Jess that allows some forms of uncertainty to be captured and
represented using fuzzy sets and fuzzy reasoning. We describe the NRC
FuzzyJ Toolkit, a Java API that allows one to express fuzzy concepts using
fuzzy variables, fuzzy values and fuzzy rules. Next, we describe a Java
API called FuzzyJess that integrates the FuzzyJ Toolkit and Jess. Finally,
we show the modifications that were made to the Jess code to allow this
extension (and others with similar requirements) to be added with modest
effort and with minimal or no impact as new releases of Jess are
delivered. |
|
Title: |
MODELLING
THE GENERATION OF CUSTOMISED POETRY IN JESS |
Author(s): |
Pablo
Gervás and Raúl Murciano |
Abstract: |
The
present paper presents an application that composes formal poetry in
Spanish in a semiautomatic interactive fashion. JASPER is a forward
reasoning rule-based system that obtains from the user an intended
message, the desired metric, a choice of vocabulary, and a corpus of
verses; and, by intelligent adaptation of selected examples from this
corpus using the given words, carries out a prose-to-poetry translation of
the given message. In the composition process, JASPER combines natural
language generation and a set of construction heuristics obtained from
formal literature on Spanish poetry. |
|
Title: |
IMPLEMENTING
BUSINESS RULES IN AN OBJECT-ORIENTED SYSTEM USING JESS |
Author(s): |
Permanand
Mohan and Sheik Yussuff |
Abstract: |
This
paper describes how the Jess expert system shell can be used to implement
business rules in an object-oriented system. It presents a simple taxonomy
of business rules and provides implementation details for the rules in
this taxonomy. The paper shows that compared to other approaches, the
Jess-based implementation is an attractive alternative since it supports
the separation of domain objects from the business rules. Jess also allows
rules to be specified in a declarative fashion, which is regarded as the
best way to incorporate business rules in an information system. |
|
Title: |
ONTOLOGY
NEGOTIATION USING JESS |
Author(s): |
Sidney
C. Bailin and Walt Truszkowski |
Abstract: |
This
paper describes a framework for ontology negotiation between information
agents. Ontologies are declarative (data driven) expressions of an agent’s
“world”: the objects, operations, facts, and rules that constitute the
logical space within which an agent performs. Ontology negotiation enables
agents to cooperate in performing a task, even if they are based on
different ontologies. We have developed an Ontology Negotiation Protocol
(ONP) and implemented it in the Java Expert System Shell (Jess). In this
paper we describe the ONP and some of the issues that arise in its
implementation in Jess. |
|
Area 3 - INFORMATION SYSTEMS ANALYSIS AND SPECIFICATION
Title: |
USING
XML AND FRAMEWORKS TO DEVELOP INFORMATION SYSTEMS |
Author(s): |
Toacy
C. de Oliveira, Ivan Mathias Filho and Carlos J. P. de Lucena |
Abstract: |
To meet software development time and cost
constraints, development should take place in an environment that
helps the designer to deal with the large number of concepts obtained
during the domain analysis phase and with the semantic gap between those
concepts and the object-oriented design model, which is due to their different
levels of abstraction. This paper describes the main features of an
environment designed to support the development of IS software based on
framework reuse and XML specifications. |
|
Title: |
DEFINING
PATTERN CLASS STEREOTYPES IN UML |
Author(s): |
Ludwik
Kuzniarz and Maciej Piasecki |
Abstract: |
The stereotype was introduced in UML as a means to allow its
extension by defining specific semantics for a chosen modelling element.
The possible usage of a stereotype is constrained by the type of model element
to which it can be applied and usually also by the context defined by the
configuration of other model elements. The paper examines some basic
properties that characterize stereotypes. Three categories of stereotypes
are examined and elaborated in more detail. These include their
characteristics, the way of defining them formally within the UML, and
finally their usage as a means for presenting models in a more compact
form and as a set of ‘reusable’ elements for modelling. |
|
Title: |
AN
OBJECT-ORIENTED FRAMEWORK FOR THE DEVELOPMENT OF DISTRIBUTED INDUSTRIAL
PROCESS MEASUREMENT AND CONTROL SYSTEMS |
Author(s): |
Kleanthis
Thramboulidis, Chris Tranoris and Chris Koulamas |
Abstract: |
The software industry today increasingly faces the challenge of
creating complex custom-made Industrial Process Measurement and Control
System (IPMCS) applications within time and budget, while high competition
forces prices down. A lot of proprietary solutions address the engineering
process, and evolving standards exploit the function block construct as
the main building block for the development of IPMCSs. However existing
approaches are procedural in style and do not exploit the maximum
benefits introduced by the object technology. In the context of this
paper, new technologies in Software Engineering that assist in improving
the efficiency of software development process are considered. An
Object-oriented framework is defined, to improve the engineering process
of IPMCSs in terms of reliability, development time and degree of
automation. This framework embodies an abstract design capable of providing
solutions for the family of distributed IPMCSs. It will attempt to
increase reusability in both architecture and functionality by addressing
issues such as interoperability and integrated development of distributed
IPMCSs. |
|
Title: |
FLOW
COMPOSITION MODELING WITH MOF |
Author(s): |
Marin
Litoiu, Mike Starkey and Marc-Thomas Schmidt |
Abstract: |
With the current unprecedented e-business explosion,
business analysts and enterprise software architects alike are faced with
developing increasingly complex business processes or applications, while
time to market gets shorter and shorter. Separating the application domain
concerns (such as the flow of data and control) from implementation issues
reduces the complexity and allows different roles to coexist in the life
cycle of a software system. This paper describes a meta-model that allows
the problem domain architect to define flows of data and control at any
granularity level. To allow easier interchangeability among the tools or
run times, flows are based on OMG’s Meta Object Facility and use XMI as
an interchange format. |
|
Title: |
DOMAIN
ORIENTED FRAMEWORK CONSTRUCTION |
Author(s): |
Ivan
Mathias Filho, Toacy Cavalcante de Oliveira and Carlos J. P. de Lucena |
Abstract: |
Object-oriented application frameworks are a powerful reuse
technique that allows the sharing of requirements, design and code among a
set of application systems instantiated from an original framework.
Nevertheless, little attention has been given to the role of an explicit
Domain Analysis phase in the framework construction process. This paper
describes an approach where the requirements capture for an entire
application family takes a central role in the framework development, thus
facilitating verification processes and providing reliable documentation
that will assist the instantiation step. |
|
Title: |
MANAGING
PROCESSES THROUGH A BASE OF REUSABLE COMPONENTS |
Author(s): |
Bernard
Coulette, Xavier Cregut, Dong Thi Bich Thuy and Tran Dan Thu |
Abstract: |
RHODES is a Process centred Software Engineering Environment
(PSEE) that allows software development processes to be described and their
enactment to be controlled. To make process reuse efficient in such a PSEE, we
think that it is necessary to move towards an engineering of reusable
process components. To efficiently store and retrieve such components, we
have defined a Process Component Base offering classical database
functionalities, especially consistency properties. In this paper, we
first describe the RHODES PSEE principle, then we define reusable process
components (patterns), and we focus on the definition of component
consistency, particularly the topological consistency based on
relationships among constituents of a component. The Process Component
Base is implemented over the Jasmine object-oriented database. |
|
Title: |
ENTERPRISE
INFORMATION SYSTEMS: SPECIFYING THE LINKS AMONG PROJECT DATA MODELS USING
CATEGORY THEORY |
Author(s): |
Michael
Johnson and C. N. G. Dampney |
Abstract: |
Major enterprise information systems are frequently
specified by integrating information models which have been developed in
separate divisions of the enterprise. These models, often called project
data models, embody the information that needs to be modelled in the
enterprise, but they can be difficult to link because when common
information is stored in different divisions it is frequently stored in
significantly different forms. This paper describes a new technique, based
on category theory, that uses the specification of logically data
independent views to link project data models. The link mechanism is
powerful because the use of views permits data in radically different
forms to be linked, and a new solution to the view update problem allows
the linking mechanism to be embodied as code, thus allowing the linked
data models to be implemented as interoperating information systems. The
paper is somewhat theoretical since it describes the foundation for a new
technique, but the methods described here are being tested in large
consultancies including the enterprise models for a government department,
an oil company, and a telecommunications carrier. |
|
Title: |
COORDINATES:
A LANGUAGE FOR ENTERPRISE MODELING |
Author(s): |
G.
Mannarino, H. Leone and G. Henning |
Abstract: |
Information requirements identification and specification
are among the most important phases of the software development process.
Both, the context in which the information system will be implemented and
the impact it will have on the domain have to be evaluated, if the right
system is to be constructed. Models are common tools for abstracting a
domain. In particular, the complexity of production organizations makes
enterprise models a prerequisite for deriving the organization's information
requirements. This paper presents a language for enterprise modeling. The
language integrates the Task, Domain and Dynamic views of an organization.
The Task view abstracts business and production processes in terms of a
set of tasks that transform different resources in order to achieve their
goals. The Domain view describes the static relationships among the
organization entities and the Dynamic view puts emphasis on the
interaction and evolution of resources when they participate in different
tasks, assuming specific roles. |
|
Title: |
A
KNOWLEDGE CREATION STRATEGY TO ENRICH ENTERPRISE INFORMATION SYSTEMS WITH
ENTERPRISE-SPECIFIC TACIT KNOWLEDGE |
Author(s): |
Syed
Sibte Raza Abidi and Yu-N Cheah |
Abstract: |
Enterprise information systems need to leverage enterprise
knowledge management methodologies and tools to formally manage and
capitalize on enterprise-wide knowledge resources. In this paper, we
propose a novel knowledge creation strategy, together with its
computational implementation, to (a) capture tacit knowledge possessed by
domain experts in an enterprise; and (b) crystallize the captured tacit
knowledge so that it can be added to the enterprise’s existing knowledge
info-structures for usage by front-end enterprise information or knowledge
systems. The formulation of the strategy proposes a synergy between
artificial intelligence techniques, used for representation, reasoning and
learning purposes, and existing concepts and practices in knowledge
management. |
|
Title: |
CEM:
COLLABORATIVE ENTERPRISES MODELING |
Author(s): |
Kayo
Iizuka and M. J. Matsumoto |
Abstract: |
Collaborative enterprises modeling (CEM) is defined as
enterprise modeling for multiple and collaborative enterprises. In this
paper, we propose a framework for collaborative business process modeling.
The model emphasises the supply chain relationship connections among firms.
This framework suggests analysing effective methods of information
exchange between enterprises in a supply chain, because this method must
differ for different inter-enterprise relationships under the given
conditions. To develop the model, we discuss previously conducted survey
studies of enterprise relationships in Japan. The trends shown in the
main findings of the survey are that trust and accountability affect the
quality and precision of the information. That means that those factors
will affect supply chain performance. CEM with supply chain relationship
connections will result in the ability to build effective supply chain
business processes and information systems in the real world. |
|
Title: |
A
CONTRACT-BASED THEORY OF INFORMATION SYSTEMS |
Author(s): |
Claudine
Toffolon and Salem Dakhli |
Abstract: |
Heavy investments in IT made by organizations have to be
questioned regarding their effectiveness, notably in terms of productivity
growth. In that vein, the term “software crisis” has been used frequently
since the 1960s to allude to a set of problems encountered in IS
development activities. Since the hardware aspects of computers and
networks are well mastered within almost all organizations, the problems
with IT are in great part related to information systems (IS) and in
particular to their computerized part called software systems. Well-known
models of IS present some weaknesses related on the one hand, to the gap
separating theories and frameworks of IS and their computerization process
and on the other hand, to the development tools and languages which
generally do not rely on theoretical foundations. In this paper, we
propose a framework which analyses IS as contracts linking organizational
actors involved in operational and decision-making processes. Such
contracts are related to goods and services and information flows
exchanged. This framework provides instruments which may be used to define
a software development process which eliminates, at least partly, many
important causes of the “software crisis”. |
|
Title: |
DESIGNING
USABLE SOFTWARE PRODUCTS |
Author(s): |
Nuno
Jardim Nunes and João Falcão e Cunha |
Abstract: |
This paper describes a UML-based lightweight software
development method that provides integration between usability engineering
and conventional object-oriented development. Here we briefly introduce
the Wisdom method and the different techniques used to improve the
usability of software products developed by small software companies
(SSDs). We present two UML based architectural models to leverage the
usability aspects of software products and discuss the importance of
participatory techniques to improve requirements gathering, taking
advantage of the increased communication and access to end-users that we
observed in SSDs. Finally, we provide several Wisdom artifacts based on a
real-world web-application, illustrating the different UML notational
extensions to support interactive system design. |
|
Title: |
HANDLING
MUTUAL EXCLUSION IN UML CLASS DIAGRAMS |
Author(s): |
João
Araújo and Ana Moreira |
Abstract: |
UML is a standard modelling language that is able to specify
a wide range of object-oriented concepts. However, there are some aspects
that UML does not fully discuss. For example, UML has no mechanism to
prevent the specification, for semantic reasons, of undesirable
relationships. With the fast evolution of the requirements of today's
applications, we cannot simply rely on omitting what we do not want to
happen. We explicitly have to specify unwanted concepts. We are referring
to the concept of mutually excluding classes. Moreover, the lack of
formalisation compromises the precision of the specification of the
concepts. By using formal description techniques, such as Object-Z, we can
reason about the requirements and identify ambiguities and inconsistencies
earlier in the development process. Particularly, the formal specification
can be used throughout software evolution. In general, we can say that
formalising helps in obtaining a more reliable system. Our aim is to specify
mutually excluding classes precisely. |
|
Title: |
AN
INTEGRATED COMPONENT-BASED APPROACH TO ENTERPRISE SYSTEM SPECIFICATION AND
DEVELOPMENT |
Author(s): |
Zoran
Stojanovic, Ajantha Dahanayake and Henk Sol |
Abstract: |
Component-Based Development (CBD) represents an advanced
system development approach, capable of managing complexity and
ever-changing demands in the business and IT environment. While many of
the component technology solutions have already become established in practice,
of equal importance to their success are the methods and techniques
closely aligned with CBD principles. Current methods do not offer a
systematic and complete support for component-based way of thinking. This
paper presents a new approach to CBD, integrating the component concept
consistently into all phases and aspects of the enterprise system
development. The approach combines the CBD paradigm and ISO Reference
Model for Open Distributed Processing (RM-ODP), providing a comprehensive
component-based specification and development framework for building
today's enterprise systems. |
|
Title: |
BUSINESS
PROCESSES EXTENSIONS TO UML PROFILE FOR BUSINESS MODELING |
Author(s): |
Pedro
Sinogas, André Vasconcelos, Artur Caetano, João Neves, Ricardo Mendes
and José Tribolet |
Abstract: |
In today’s highly competitive global economy, the demand
for high quality products manufactured at low costs with shorter cycle
times has forced various industries to consider new product design,
manufacturing and management strategies. To fulfill these requirements
organizations have to become process-centered so they can maximize the
efficiency of their value chain. The concept of business process is a key
issue in the process-centered paradigm. In order to take the most out of
the reengineering efforts and from the information technology, business
processes must be documented, understood and managed. One way to do that
is by efficiently modeling business processes. This paper proposes an
extension to UML Profile for Business Modeling to include the concepts of
business process. |
|
Title: |
BUSINESS
PROCESS MODELING WITH UML |
Author(s): |
Nuno
Castela, José Tribolet, Alberto Silva and Arminda Guerra |
Abstract: |
This paper focuses on the reasons for and advantages of the
application of the Unified Modeling Language (UML) in organizational
architecture modeling. The methodology for applying business modeling is
presented and described, namely the organization of the modeling into views
and the application of those views. A case study is
presented as an illustration. |
|
Title: |
RSHP:
A SCHEME TO CLASSIFY INFORMATION IN A DOMAIN ANALYSIS ENVIRONMENT |
Author(s): |
Juan
Llorens, José Miguel Fuentes and Irene Diaz |
Abstract: |
This work presents the theoretical aspects of a new domain
analysis technique. This technique tries to improve fundamental
aspects of domain analysis: it joins knowledge acquisition and knowledge
classification in only one automatic step. This knowledge acquisition and
organization process is performed automatically. This
property is important in two respects: first, the process has until now been
semi-automatic, with human help in key steps; secondly, most
domain analysis methods are manual, implying huge costs. The repository
used in this technique is adapted to the Unified Modeling Language (UML). That
characteristic allows all the information used in previous coding
steps to be reused easily. |
|
Title: |
AN
INTEGRATED APPROACH OF MODELLING, TRANSFORMATION AND MEASUREMENT TO
EVALUATE BUSINESS PROCESS RE-ENGINEERING |
Author(s): |
Geert
Poels and Guido Dedene |
Abstract: |
We present an approach that combines systems modelling with
complexity measurement to evaluate business process complexity changes
that are caused by Business Process Re-engineering (BPR). Reduced business
process complexity is one of the criteria typically used to evaluate the
effectiveness of BPR. In our approach conceptual schemata are used to
model the current and the envisioned business process. The complexity
properties of the AS-IS and TO-BE schemata are measured using a suite of
conceptual schema measures presented in the paper. The complexity changes
caused by generic patterns of BPR, modelled as schema transformations, are
also measured. As a 'proof of concept' the approach is applied to a
reference framework for business transformation that is used in the
context of a high-level, strategic approach to BPR. |
|
Title: |
A
FRAMEWORK FOR EVALUATING WIS DESIGN METHODOLOGIES |
Author(s): |
Cernuzzi
Luca and González Magalí |
Abstract: |
Currently, a vast range of methodologies is available to
Web-based Information Systems (WIS) designers. It may therefore be very interesting
for WIS designers to analyze or evaluate the existing methodologies, looking
for the appropriate one to use in each case. This study presents a proposal of
a framework for the evaluation process of WIS design methodologies. The
proposal, based on previous works, takes into consideration qualitative
evaluation criteria employing quantitative methods. In order to clarify
the proposal, this framework is also applied to a case study and some
interesting aspects are analyzed from both a qualitative and a
quantitative perspective. |
|
Title: |
AN
INFORMATION SYSTEM VIEW OF CONSISTENCY AND INTEGRITY IN ENTERPRISE
OPERATIONS |
Author(s): |
Yoshiyuki
Shinkawa and Masao J. Matsumoto |
Abstract: |
Mining for association rules is one of the fundamental data
mining methods. In this paper we describe how to efficiently integrate
association rule mining algorithms with relational database systems. From
our point of view direct access of the algorithms to the database system
is a basic requirement when transferring data mining technology into daily
operation. This is especially true in the context of large data
warehouses, where exporting the mining data and preparing it outside the
database system becomes annoying or even infeasible. The development of
our own approach is mainly motivated by shortcomings of current solutions.
We investigate the most challenging problems by contrasting the
prototypical but somewhat academic association mining scenario from basket
analysis with a real-world application. We thoroughly compile the
requirements arising from mining an operative data warehouse at
DaimlerChrysler. We generalize the requirements and address them by
developing our own approach. We explain its basic design and give the
details behind our implementation. Based on the warehouse, we evaluate our
own approach together with commercial mining solutions. It turns out that
regarding runtime and scalability we clearly outperform the commercial
tools accessible to us. More important, our new approach supports mining
tasks that are not directly addressable by commercial mining solutions. |
|
Title: |
BUSINESS
PROCESS MODELING BASED ON THE ONTOLOGY AND FIRST-ORDER LOGIC |
Author(s): |
Toshiya
Hikita and Masao J. Matsumoto |
Abstract: |
The current social, economic, and technical environments
surrounding enterprises are undergoing a period of rapid change. Such
changes produce permanent alterations to the information systems within
enterprises. This paper proposes a framework for developing an adaptable
system that anyone can change immediately when a new requirement arises
from those changes. The adaptable system consists of a business process
model, a requirement navigator, and a systems synthesizer. The business
process model is constructed for clear and rigorous specifications of
business processes in an enterprise. The model is structured by the
Ontology, which is a framework to define concepts clearly, and formalized
by First-Order Logic (FOL) to give rigorous syntax and semantics on the
model. One of the important reasons to select FOL for expression of the
model is that FOL has equivalency between provability and validity. This
equivalency allowed verifying consistency of the model with theorem
proving, which is a syntactic operation. |
|
Title: |
GEODA:
A GEOGRAPHICAL OBJECT DIGITISING APPLICATION |
Author(s): |
Jesús
D. Garcia-Consuegra, Luis Orozco, Guillermo Cisneros, Angel Martínez and
Antonio Castillo |
Abstract: |
In recent years, software developers have focused their
efforts in adapting, or designing, novel applications able to take full
advantage of the wide range of facilities offered by the World Wide Web.
Geographical Information System (GIS) applications are no exception to
this trend. By using the World-Wide Web as underlying communications
infrastructure, GIS applications can gain access to the large number of
facilities provided by distributed heterogeneous GIS. In this paper we
describe GEODA (GEographical Object Digitising Application) a prototype
system to be deployed over the Internet. The design of GEODA follows a
groupware approach allowing multiple users to jointly participate in the
visualisation and processing of geographical data distributed across the
Internet. The structure of GEODA has been based on the Object Oriented
Programming paradigm. The use of Java technology ensures portability
across platforms. The paper also summarizes related concepts of CSCW and
Geographic Information Science, in order to understand similar experiences
and justify the design criteria adopted in GEODA. |
|
Title: |
THE
PATTCAR APPROACH TO CAPTURING PATTERNS FOR BUSINESS IMPROVEMENT |
Author(s): |
Isabel
Seruca and Pericles Loucopoulos |
Abstract: |
Patterns as a technology is in its infancy with few
theoretical underpinnings. Most work published about patterns is based on
practice. Indeed, the whole thrust of pattern capturing and writing
derives from experience. Realizing the difficulty associated with pattern
development due to its empirical and knowledge-intensive nature, we
propose a method to aid in the process of capturing and reusing patterns
in a business domain. In this paper, we describe the first stage of the
method dedicated to the capture of patterns. Our approach to pattern
development is based on domain analysis principles and is processoriented,
so as to ensure a progressive and increasing understanding of the business
domain and the awareness of new opportunities for improving business. We
report our experiences in applying the pattern development approach within
the Clothing Manufacturing domain in the context of a business process
improvement project. |
|
Title: |
RELYING
ON THE ORGANIZATIONAL STRUCTURE TO MODEL WORKFLOW PROCESSES |
Author(s): |
Cirano
Iochpe and Lucinéia Heloisa Thom |
Abstract: |
According to the business literature, one can classify a
social organization relying on a set of well-known structural features.
Depending on the values taken by each of these features, one can conclude
whether the type of a specific organization is functional, divisional,
hybrid, matrix-like, or process-oriented. The organization type has strong
influence upon the way business processes are executed. The workflow
technology, on the other hand, aims at supporting the automation of the
organization processes. However, most of today’s workflow modeling
techniques does not consider those structural features in order to assist
designers during the modeling process. The present paper discusses the
possibility of using the knowledge of the organizational structure to
support the workflow modeling process. |
|
Title: |
INTEGRATING
ORGANIZATIONAL SEMIOTIC APPROACH WITH THE TEMPORAL ASPECTS OF PETRI NETS
FOR BUSINESS PROCESS MODELING |
Author(s): |
Joseph
Barjis and Samuel Chong |
Abstract: |
The issue of business process modeling has been debated in
numerous papers. Despite the wide availability of information, this
area of study remains poorly understood. In this paper a
methodology is proposed for business process modeling. The methodology
introduced in this paper aims to create a clearer, more formalized and
comprehensible approach to business process modeling. In this methodology
we combine two different methods and modeling techniques in order to
propose a more complete approach. These two methods are the well-tested
semiotic approach and the Petri net modeling technique. The aim is
to bridge the gap between the precision of capturing business requirements
using the semiotic approach and the graphical time sequencing of the Petri
net method. In order to demonstrate the applicability and appropriateness of
the proposed approach, the paper considers a real-life example. The
example case was conducted at Electro-Medics International b.v. (EMI). |
|
Title: |
THE
USE OF FORMAL AND INFORMAL MODELS IN OBJECT-ORIENTED REQUIREMENTS
ENGINEERING |
Author(s): |
Linda
Dawson |
Abstract: |
Little is understood, or reported on the basis of research,
about the use of object-oriented models and methods by practising
professionals in the production of requirements specifications for
commercial or industrial-sized projects. This paper describes a research
project and the findings from a set of six case studies, undertaken to
examine the use of object-oriented models in professional
requirements engineering practice. In these studies, it was found that the
more formal models of object-orientation were rarely used to validate, or
even clarify, the specification with clients or users. Rather, analysts
tended to use informal models such as use cases or ad hoc diagrams, to
communicate the specification to users. Formal models are more often used
internally within the analysis team and for communicating the
specification to the design team. |
|
Title: |
A
CASE TOOL APPROACH FOR SOFTWARE PROCESS EVOLUTION |
Author(s): |
Mohamed.
Ahmed-Nacer |
Abstract: |
Software processes not only have a very long life; they are also
incomplete and non-deterministic. This explains why, throughout a
project, one is led to modify the process during execution, to redefine
it or to complete it. This paper discusses evolution, an important feature
of software processes. We present an evolution model that allows the
software process to be adapted dynamically to new needs in order to correct
inconsistencies found during execution, to modify some constraints or to
act directly on the process execution. This approach is complementary to
the modification of software process models that we have developed, which
supports different evolution strategies. |
|
Title: |
ORGANIZATION
OF ANALYSIS PATTERNS FOR EFFECTIVE REUSE |
Author(s): |
Maria
João Ferreira and Pericles Loucopoulos |
Abstract: |
Since the introduction of patterns in Computer Science, a
large number of libraries of patterns for different domains have been
identified. In most cases these patterns are represented in a ‘flat’
fashion, making their use difficult, especially when there is a large number
of patterns to consider in a particular application. In this paper we
propose both an analysis pattern classification scheme and an analysis
pattern representation (usage perspective) for enhancing the reuse of
analysis patterns. The proposed classification scheme associates a problem
(embodied in an analysis pattern) with a set of pre-defined terms, whereas
the representation scheme (pattern template) provides the necessary
information for a designer to evaluate and revise a solution embodied in
an analysis pattern. |
|
Title: |
FROM
SYSTEM TO TEXT |
Author(s): |
Rodney
J. Clarke |
Abstract: |
Using a semiotic model of language called Systemic
Functional Linguistics, this paper identifies and describes the function,
structure and features of two text patterns or genres, the Taxonomic
Report and the Instructional Procedure, that are commonly employed in
computer application and tool documentation. A familiarity with these and
other relevant genres constitutes a significant aspect of computer
literacy for documentation users and producers. These specific genres can
be used in isolation to organise the overall structure of small texts, or
they can be used in combination to form a composite structure called a
macrogenre. The structure of the so-called Computer Training or CT
macrogenre is identified, described and exemplified. Genre theory suggests
that readers who are familiar with particular kinds of texts expect the
specific staging of the appropriate genre or set of genres. Conforming to
an appropriate genre or combinations of genres increases the likelihood of
the computer documentation being judged as useful by the community for
which it is written. The identification of specific genres can also be useful
for writers, who would then have templates to assist them
in the process of creating useful documentation. |
|
Title: |
METHOD
FOR OBTAINING CORRECT METRICS |
Author(s): |
Coral
Calero, Mario Piattini and Marcela Genero |
Abstract: |
Metrics can be used as a mechanism for assuring product
quality. However, metrics will have this application only if they are
well-defined. To obtain correct metrics a number of steps must be
followed. In this paper we present the method we have designed for
obtaining correct metrics. This method is composed of the metrics
definition, formal validation and empirical validation of the metrics.
After these steps we know whether or not a metric is correct. However, this
information is not sufficient, and we must be able to make some kind of
interpretation regarding the value that a metric takes. For this reason,
we have added the psychological explanation step to the method. |
|
Title: |
ENTERPRISE
MODELLING FOR AN EDUCATIONAL INFORMATION INFRASTRUCTURE |
Author(s): |
Ing
Widya, Cees Volman, Stanislav Pokraev, Italo De Diana and Eddie Michiels |
Abstract: |
This paper reports the modelling exercise of an educational
information infrastructure that aims to support the organisation of
teaching and learning activities suitable for a wide range of didactic
policies. The modelling trajectory focuses on capturing invariant
structures of relations between entities in educational organisations into
Enterprise object models. An Educational Model Space has been introduced
to define the problem domain context for the modelling. In this space,
educational requirements have been elaborated towards the Open Distributed
Processing Enterprise Viewpoint object models expressed in terms of the
Unified Modelling Language. Recursive structures, which are uniform for
the planning, performance and evaluation activities of education, have
been used to capture the dynamic needs of education. |
|
Title: |
SUPPORTING
HUMAN ACTIVITIES |
Author(s): |
Grégory
Bourguin and Xavier Le Pallec |
Abstract: |
Because we have been involved for many years in both the
Computer Supported Cooperative Work (CSCW) and the Computer Supported
Cooperative Learning (CSCL) research domains, we take a particular interest
in results coming from both the human and the computer sciences.
Thanks to this cross-disciplinary culture, we have understood that computer
systems aim at supporting human activities and that these activities need
systems that better support their emergence. In other words, the systems we
traditionally design fall short in supporting users’ inevitable emerging
needs. This paper presents our new approach, founded on the human science
framework called Activity Theory and on some advanced software design
techniques. It shows the results and promise we have found in intensively
using the meta-level of the systems we design, thus better taking into
account the expansiveness property of the human activities we want to
support. |
|
Title: |
MANAGING
RISK IN EXTENDED ENTERPRISES |
Author(s): |
B.
Vassiliadis, A. Tsakalidis, K.Giotopoulos, S. Likothanassis, N.
Bogonikolos, P. Gatomatis and K. Platikostas |
Abstract: |
|
Title: |
TOWARDS
E-MANAGEMENT AS ENABLER FOR ACCELERATED CHANGE |
Author(s): |
Hugo
Lérias, João Luz, Pedro Moura, Ana Mendes, Isabel Teixeira and J Paulo
Teixeira |
Abstract: |
The new economy is the result of the information revolution
that promotes the emergence of efficient, ubiquitous and virtual-based
business models. It is a common belief that the Extended Enterprise (EE)
model has the potential to be a competitive advantage, particularly
nowadays when the globalisation of trade has increased the number of
competitors. Nevertheless, it is necessary to consider the EE as another
business model, which has not yet overcome classic problems, such as the
management of risk. In this paper we examine the notion of risk, its
impact on EE functions and propose a framework for its control. |
|
Title: |
BUSINESS
PROCESS REENGINEERING AND E-COMMERCE: CROATIAN PERSPECTIVE |
Author(s): |
Vesna
Bosilj Vuksic |
Abstract: |
The Internet is altering the ways in which businesses
operate and interact with customers, suppliers and partners. According to
recognized trends, business process reengineering should influence not
only internal, but also interorganizational processes in order to support
the demands of e-commerce. The paper stresses the necessity for
organizational restructuring in the context of global information
connectivity (ecommerce). The characteristics and perspectives of business
process reengineering efforts in Croatia are presented. The research is
based on a questionnaire about Business Process Reengineering (BPR)
projects implemented in Croatian companies. The main goal of the paper was
to identify whether managers in a country in transition have
recognized the importance of business process reengineering and e-commerce
for their companies. |
|
Title: |
THE
MEETING REPORT PROCESS: BRIDGING EMS WITH PDA |
Author(s): |
Carlos
J. Costa, Pedro Antunes and João Ferreira Dias |
Abstract: |
Personal Digital Assistants (PDA) are important tools to
support personal processes. However, their contribution to co-operative
processes, like meetings, is limited. On the other hand, the link between
meetings and other processes that exist in organizations is also a problem.
This paper discusses the integration of results produced during meeting
sessions supported by Electronic Meeting Systems (EMS) with other
processes, especially the ones supported by scheduling tools implemented
on PDA. The paper develops a framework linking “personal data” and “meeting
data.” The framework gave rise to a combined PDA-EMS system, which
was trialled by an organisation in order to evaluate the concept. |
|
Title: |
MODELING
EXTENSIONS FOR OBJECT-ORIENTED WEB APPLICATION DESIGN |
Author(s): |
Ronald
E.Giachetti, Mayankkumar Patel and Maneli Rodriguez-Medina |
Abstract: |
The incorporation of greater functionality into web
applications requires use of technologies beyond HTML that integrate
interfaces, application servers, database servers, and increasingly
back-office applications. Design of these web applications has more in
common with software engineering than with what has evolved as a highly
graphical web site design approach. This paper describes the extension of
the unified modeling language (UML) for modeling web applications during
the analysis and design phases of a project. The target implementation of
the design is done with a new framework called CFObjects, which combines
the benefits of object orientation, Cold Fusion (a tag-based
language), and the utilization of a rapid application development
environment. The UML modeling extensions, web application design
methodology, and application of CFObjects are illustrated through a case
study of building a commercial web application. The modeling extensions
provide a sorely needed tool for web application designers to communicate,
design, and reuse objects to rapidly develop enterprise web applications. |
|
Title: |
FROM
CLIENT’S DREAMS TO ACHIEVABLE PROJECTS |
Author(s): |
Juan
José Escribano, Raúl Murciano and Pablo Gervás |
Abstract: |
This paper presents a web based expert system application
that carries out an initial assessment of the feasibility of a web
project. The system allows detection of inconsistency problems before
design starts, and suggests correcting actions to solve them. The
developed system presents important advantages not only for determining
the feasibility of a web project but also for acting as a means of
communication between the client company and the web development team,
making the requirements specification clearer. |
|
Title: |
BUSINESS
RENOVATION PROJECTS IN SLOVENIA |
Author(s): |
Ales
Groznik, Andrej Kovacic, Jurij Jaklic and Mojca Indihar Stemberger |
Abstract: |
The main goal of the paper is to present the characteristics
of business renovation efforts in Slovenia. The research is based on a
questionnaire about BPR projects and strategic IS planning, methods and
tools implemented in Slovenian organizations. The results of the research
are analysed. The paper focuses on the use of the business renovation concept, as
well as on the necessity of strategic IS planning for developing an
information system that will be able to successfully support renovated
processes. It stresses the necessity for changes in organizations, which must
become more reactive and self-adaptive, faster to respond,
and capable of dealing with the changing environment. |
|
Title: |
EXPERIENCES
IN THE DEVELOPMENT OF INFORMATION SYSTEMS IN AN INDUSTRIAL INNOVATION
CONTEXT |
Author(s): |
António
Lucas Soares, Waldemar Gaida and Christian Schmidt |
Abstract: |
This paper describes the methods and experiences of
developing an information system in an industrial innovation context.
The case analysed concerns a system supporting quality management tasks
of working teams in industrial companies, developed within an EU-funded
research & development project. The initial concept of the system is
described first, followed by the way requirements were elicited and
structured and the system specified. Next, design and prototyping are
analysed, detailing the end-users’ evaluation approach. Conclusions are
drawn on the opportunities and difficulties of R&D involving
industrial enterprises. |
|
Title: |
ANALYSIS
OF SUITABILITY, APPROPRIATENESS AND ADEQUACY OF USE CASES COMBINED WITH
ACTIVITY DIAGRAM FOR BUSINESS SYSTEMS MODELING |
Author(s): |
Boris
Shishkov and Jan L.G. Dietz |
Abstract: |
This paper considers the potential for applying use
cases in business process analysis and modeling. In particular, it
investigates the suitability and appropriateness of use cases as tools for
the elicitation of processes, and the applicability of the use case diagram
for visualizing the models of the business processes under study. It is
shown that use cases represent a promising tool for business process
modeling at the essential level. It is also shown that use cases can
be combined with activity diagrams to build consistent and more
complete models of a system, representing different system-actor
interactions. |
|
Title: |
INTRODUCING
COMMON SYSTEMS IN INTERNATIONAL FINANCIAL FIRMS |
Author(s): |
Steve
C.A. Peters and Michael S.H. Heng |
Abstract: |
In international financial firms, Information Systems
(IS) development is normally centralized, whereas the commercial activities
are decentralized. This has specific effects on the use of information
systems in these organizations. There is a strong movement towards
centralized development of systems because of the expected higher efficiency
in development and maintenance. Ignoring, or being unaware of, the
differences in the system requirements of the foreign branches, by
assuming that all processes should be equal at a non-detailed level, often
leads to the proposal of a common system for the foreign branches. We
investigated three cases where common systems were introduced and failed
over time for similar reasons, although all normal project
conditions were established. The cases involve three global financial
service companies whose holdings reside in the Netherlands. In fact, in
all three cases the management had all the right arguments for
introducing common systems and took all the right actions to make the
projects a success. Despite all this, the projects did not give the desired
results. In all three cases the management could not foresee the failure
because all measures suggested by current know-how about global
projects had been taken. We did our research over a period of 10 years,
starting around 1990. The research is based on open discussions with
central and local management, including the members of the projects. The
results of the research show that the local evolution of the
organization has a major impact on the introduction of common systems. This
result can be used for current projects introducing e-commerce
activities in global financial service companies. |
|
Title: |
TEMPLATE-BASED
REQUIREMENTS SPECIFICATION: A CASE STUDY |
Author(s): |
Linda
Dawson |
Abstract: |
A requirements specification describes the system which will
be built in a software development project independently of design or
implementation detail. In this paper the process of developing a
requirements specification as a stand-alone activity is illustrated by
describing the requirements specification process undertaken for a
government department by a small software development organisation. This
case used a commercial, requirements-only, semi-object-oriented template
method which involved producing a set of requirements cards on which the
specification document was based. A major consideration for the consulting
organisation was convincing the client that completing the requirements
engineering process is vital to a successful working product. The case
study illustrates the process from the individual system
developer/consultant's point of view based on transcripts of
semi-structured interviews. |
|
Title: |
PROMOTING
COMPONENT-BASED SOFTWARE DEVELOPMENT THROUGH DESIGN REUSE |
Author(s): |
Peter
Hornsby and Ian Newman |
Abstract: |
Although component-based development holds great promise,
very little support is available to manage the complexity inherent in an
effective development process based around the reuse of components. One
area that is currently being explored is the use of design materials as a
stimulus for reusing components within the design process. This approach
uses the design materials to provide a rich source of descriptive
information about the components used. As components are used in different
contexts, more information about their range of uses is built up, enabling
components to be reused based on the problem areas to which they may be
applied, as well as the solutions they were initially created to provide.
This approach has been implemented in the DesignMatcher tool, which
operates as a background process during development, notifying the
developer of opportunities for reuse based on the changing state of the
design. |
|
Title: |
INFORMATION
SYSTEMS PLANNING: CONTRIBUTIONS FROM ORGANIZATIONAL LEARNING |
Author(s): |
Jorge
Luis Nicolas Audy, João Luiz Becker and Henrique Freitas |
Abstract: |
A new outlook on the process of information systems planning
is required with the emergence and consolidation of new perceptions and
concepts of organizational learning. The field of organizational learning
offers viable opportunities for gains in the planning processes of
organizations. Several authors point to processing of information as a
relevant source of increased productivity and competitive advantage in our
society. However, within the field of Information Systems (IS), several
problems pertaining to planning and effective use of new Information
Technologies (IT) have challenged researchers to find ways of minimizing
the problems pertaining to IS planning, and implementation. This paper
analyses the impact of Organizational Learning, and the opportunities
generated by Organizational Learning on IS planning, as responses to the
difficulties of implementation of technology-based plans and the resulting
organizational changes. |
|
Title: |
AN
APPROACH FOR COORDINATION PROBLEM SOLVING |
Author(s): |
Patrick
Etcheverry, Philippe Lopistéguy and Pantxika Dagorret |
Abstract: |
This paper focuses on the problem of coordination
specification. Two principles guide our work: on the one hand, coordination
problems are recurrent; on the other hand, tested forms of
coordination exist. We define a typology of coordination problems that can
be solved by the application of well-known coordination forms. We present
a catalogue of coordination patterns that makes an inventory of a set of
coordination problems, and a set of solutions that describe how these
problems can be solved. After describing an example of coordination
pattern, we finally present an approach that uses the catalogue in a
process modelling context. |
|
Title: |
REPRESENTING
BUSINESS STRATEGY THROUGH GOAL MODELING |
Author(s): |
Ricardo
Mendes, André Vasconcelos, Artur Caetano, João Neves, Pedro Sinogas and
José Tribolet |
Abstract: |
This paper focuses on the representation of business
strategy through goal modeling. Traditional approaches to goal modeling
focus on capturing the business goals into an accurate representation.
Business goals originate from the vision and strategy of the company being
modeled. By restricting themselves to modeling the business goals, traditional
approaches often fail to capture the meaning of goals and the managers’
vision of the business. By capturing some of the concepts underlying
management theories such as the Balanced Scorecard, a new approach to goal
modeling is presented. This approach aims at providing a modeling language
that is closer to managers’ and business needs. |
|
Title: |
O.C.:
A NEW CONCEPT FOR MODELLING AND INFORMATION INTEGRATION |
Author(s): |
Claude
Petit and Claude Dussart |
Abstract: |
This paper describes an object-oriented language. A new
object concept, the cellular object, is associated with the concept of an
object class. Other types of knowledge representation can also be
associated easily. The basic object is a matrix of cell objects; it is not
a spreadsheet. Each cell object is located by its Cartesian coordinates in
its matrix object. A cell object is composed of a datum and a method. The
data of a matrix object are private; there are no public data. Some
matrices are specialized: matrix algebra, data acquisition, decision
tables, database interfaces, etc. Each developer can create their own tool
objects. The language has original navigation commands within a matrix
object or between matrices. Inheritance problems are simplified, and new
inheritance capabilities are proposed: reflexive inheritance and
conditional inheritance. The integration of processing is simplified. The
modeling of complex multi-formalism systems is intuitive. The developer
sees the application in three dimensions. Variable-depth reasoning is
easy to set up. Thirty applications have validated this tool. |
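Purely as an illustrative analogy (the class and method names below are assumptions, not the O.C. language itself), a matrix of cell objects addressed by Cartesian coordinates, each cell holding a private datum and a method, can be sketched in plain Python:

```python
# Illustrative sketch only: a matrix object whose cells each carry private
# data and a method, addressed by Cartesian coordinates. This is not the
# O.C. language, just a plain-Python analogy of the concept.
class Cell:
    def __init__(self, data, method=None):
        self._data = data                     # private datum of the cell
        self._method = method or (lambda d: d)

    def run(self):
        """Apply the cell's method to its own datum."""
        return self._method(self._data)

class MatrixObject:
    def __init__(self, rows, cols):
        self._cells = {}                      # (x, y) -> Cell; data stay private
        self.rows, self.cols = rows, cols

    def put(self, x, y, data, method=None):
        self._cells[(x, y)] = Cell(data, method)

    def at(self, x, y):
        return self._cells[(x, y)]

    def row(self, y):
        """Navigate along one row of the matrix."""
        return [self._cells[(x, y)] for x in range(self.cols) if (x, y) in self._cells]

m = MatrixObject(rows=2, cols=3)
m.put(0, 0, 10, lambda d: d * 2)
m.put(1, 0, "net price", str.upper)
print(m.at(0, 0).run(), m.at(1, 0).run())     # 20 NET PRICE
print(len(m.row(0)))                          # 2
```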
|
Title: |
SUPPORTING
DEVELOPMENT OF BUSINESS APPLICATIONS BASED ON ONTOLOGIES |
Author(s): |
Noriaki
Izumi and Takahira Yamaguchi |
Abstract: |
This report proposes an integrated support methodology for
constructing business models, including the adoption of new business models,
the transfer of existing business activities to computers, and decision-making
support when adopting new computing environments. In order to
model enterprises and business activities and to implement them as
software applications, two business repositories of different granularities
are devised based on ontologies: a business specification repository and a
business software repository. By developing a framework that transforms
descriptions in the business specification repository into ones in the
business software repository, our approach achieves the reuse of existing
repositories of business activities and software libraries. We have
implemented a prototype system in Java and confirmed that it supports us
in various phases of business application development, including business
model manifestation, detailed business model definition and the
implementation of business software applications. |
|
Title: |
UNIFIED
RESOURCE MODELLING |
Author(s): |
João
Neves, André Vasconcelos, Artur Caetano, Pedro Sinogas, Ricardo Mendes
and José Tribolet |
Abstract: |
This paper presents an overview of the state of the art of
resource modelling as used by both information systems specialists and
human resources professionals. A generic framework for resource modelling
is used for comparing these two approaches in terms of the context and
description of the work the resource does and is able to do. This is
foundational work for the development of a new tool for human resources management
that makes explicit the connection between human resources and business
processes. |
|
Area 4 - INTERNET COMPUTING AND ELECTRONIC COMMERCE
Title: |
PRIVACY
AUDITS AND TRUST: THE INTERNET DILEMMA |
Author(s): |
John
E. Gochenouer and Michael L. Tyler |
Abstract: |
This paper reviews the current efforts at self-regulation on
the Internet. It examines audit procedures used by Web companies to
protect personal privacy and compares them with those used by Brick and
Mortar companies. A series of focus groups rated consumer insurance and
security technologies as the highest priorities for instilling trust. The conclusion
is that insuring Web users against the damages of lost privacy would be
the most powerful method to gain consumer trust. Surprisingly, there are
no major Web vendors carrying such an insurance policy. |
|
Title: |
VOYEURISM,
EXHIBITIONISM, AND PRIVACY ON THE INTERNET |
Author(s): |
John
E. Gochenouer and Michael L. Tyler |
Abstract: |
This paper contrasts the natural, genetically based tendency
of people to display voyeuristic and exhibitionistic behaviors with issues
pertaining to privacy on the Internet. Results of a survey provide
evidence that suggests personal control of information, not personal
privacy, is the issue and that a trusted Internet-based organization would
be the ideal repository of a person’s authorized and authenticated
information. |
|
Title: |
PLANNING
SECURITY POLICY ON E-COMMERCE |
Author(s): |
María
Martín, Alejandro Carrasco, Joaquín Luque and Rosa Gonzalo |
Abstract: |
This article reflects the need to take steps to
ensure the correct operation of any e-commerce platform. The final aim
is not only to describe the different technical options available
to build up secure commerce, but also to convey the importance of
giving a sense of security and confidence to our clients. We describe the
different options we should take into account in order to implement an
appropriate security policy. All this is based on experience obtained
in electronic banking and on a University’s site. |
|
Title: |
C-ISCAP
: CONTROLLED-INTERNET SECURE CONNECTIVITY ASSURANCE PLATFORM |
Author(s): |
Ji-Hoon
Jeong, Jae-Hoon Nah, Sung-Won Sohn and Jong-Tai Lee |
Abstract: |
IPsec is a standard protocol for offering Internet information
security services. Recently, IPsec has been implemented throughout the
world on the basis of various operating systems. Following interoperability
tests among multiple independently implemented devices, it is now a
mandatory function of Internet equipment. IPsec adds two headers (AH and
ESP) and associated protocol machinery to the legacy IP packet; it
therefore offers not only Internet security services such as secure
communication and authentication, but also safe key exchange and an
anti-replay protection mechanism. In this paper, we propose the design and
implementation of C-ISCAP, an IPsec-based Internet information security
system, and we also present performance measurement data. |
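For orientation only (this is not C-ISCAP code), the sketch below packs the two fixed fields at the start of an ESP header defined by the IPsec RFCs, the Security Parameters Index and the sequence number, and shows a simplified sliding-window anti-replay check of the kind the protocol mandates; the window size and class names are assumptions.

```python
import struct

# Orientation only -- not C-ISCAP code. Pack the two fixed ESP header fields
# (RFC 2406): a 32-bit Security Parameters Index and a 32-bit sequence number.
def esp_header(spi: int, seq: int) -> bytes:
    return struct.pack("!II", spi, seq)

class AntiReplayWindow:
    """Simplified sliding-window anti-replay check (window of 64 sequence numbers)."""
    def __init__(self, size=64):
        self.size = size
        self.highest = 0
        self.seen = set()

    def accept(self, seq: int) -> bool:
        if seq <= self.highest - self.size or seq in self.seen:
            return False                      # too old, or a replay
        self.seen.add(seq)
        self.highest = max(self.highest, seq)
        # drop state that fell out of the window
        self.seen = {s for s in self.seen if s > self.highest - self.size}
        return True

w = AntiReplayWindow()
print(esp_header(spi=0x1234, seq=1).hex())    # 0000123400000001
print(w.accept(1), w.accept(2), w.accept(1))  # True True False (replay rejected)
```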
|
Title: |
A
CORBA/XML-BASED ARCHITECTURE FOR DISTRIBUTED NETWORK PLANNING TOOLS |
Author(s): |
Anton
Riedl |
Abstract: |
This paper presents a novel architecture for network
planning tools where functionality of the tools is distributed over the
network and algorithms can be contributed by various providers. The
concept is based on a client/server architecture with CORBA as the
middleware and XML as common information exchange language. The framework
breaks up the functionalities, which are commonly implemented in one tool,
into several independent modules, which then can run on different machines
in an IP network. This type of realization allows for easy extendibility
and administration of the tool and offers possibilities to implement
features like load sharing, accounting, or authentication. Furthermore,
the platform could be used to provide planning services and algorithm
implementations over the Internet. We describe a first prototype, which
implements the concept of this distributed architecture and discuss
implementation issues. In this context a very generic and versatile Java
client is introduced, which establishes the front end of the network
planning platform. To allow this client to adapt to the dynamic
environment and to integrate different algorithms within one graphical
user interface, we specify a data exchange format, which is based on XML. |
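The paper's actual XML schema is not reproduced here; the fragment below only illustrates the general idea of exchanging a network topology and an algorithm request as an XML document between the Java client and a remote planning module, using invented element and attribute names.

```python
import xml.etree.ElementTree as ET

# Element and attribute names are made up for illustration only; this is not
# the data exchange format defined in the paper.
request = ET.Element("planning-request", algorithm="shortest-path-dimensioning")
topology = ET.SubElement(request, "topology")
for name in ("munich", "vienna", "zurich"):
    ET.SubElement(topology, "node", id=name)
ET.SubElement(topology, "link", src="munich", dst="vienna", capacity="155Mbps")
ET.SubElement(topology, "link", src="vienna", dst="zurich", capacity="622Mbps")

wire_format = ET.tostring(request, encoding="unicode")
print(wire_format)                    # what the client might send to a planning module

# A receiving module parses the same document back into objects.
parsed = ET.fromstring(wire_format)
links = [(l.get("src"), l.get("dst"), l.get("capacity"))
         for l in parsed.find("topology").findall("link")]
print(links)
```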
|
Title: |
AN
XML-BASED MULTIMEDIA MIDDLEWARE FOR MOBILE ONLINE AUCTIONS |
Author(s): |
Matthias
Wagner, Wolf-Tilo Balke and Werner Kießling |
Abstract: |
Pervasive Internet services today promise to provide users
with a quick and convenient access to a variety of commercial
applications. However, due to unsuitable architectures and poor
performance, user acceptance is still low. To be a major success, mobile
services have to provide device-adapted content and advanced value-added
Web services. Innovative enabling technologies like XML and wireless
communication may for the first time provide a facility to interact with
online applications anytime anywhere. We present a prototype implementing
an efficient multimedia middleware approach towards ubiquitous value-added
services using an auction house as a sample application. Advanced
multi-feature retrieval technologies are combined with enhanced content
delivery to show the impact of modern enterprise information systems on
today’s e-commerce applications. |
|
Title: |
AN
ANALYSIS OF B2B CATALOGUE INTEGRATION PROBLEMS |
Author(s): |
Borys
Omelayenko and Dieter Fensel |
Abstract: |
Content management is becoming a cornerstone of successful B2B
electronic commerce. B2B players use different document standards to
represent their business documents, and different content standards to
specify their products. Thousands of players meet at B2B
marketplaces, and the marketplaces must be able to integrate numerous
document and content standards. The large number of standards and
their significant complexity make the integration problems non-trivial and
require the development of a special integration architecture. In the present
paper we discuss the tasks and problems which occur during content
and document integration, and survey possible solutions and available
techniques. |
|
Title: |
A
CORBA AND WEB TECHNOLOGY BASED FRAMEWORK FOR THE ANALYSIS AND OPTIMAL
DESIGN OF COMPLEX SYSTEMS IN THE OIL INDUSTRY |
Author(s): |
Carlos
Arévalo, Juan Colmenares, Nestor Queipo, Nelson Arapé and Jorge
Villalobos |
Abstract: |
This paper discusses the design and implementation of a
CORBA and Web technology-based framework for the analysis and optimal
design of complex engineering systems. The framework provides an
environment for the coupled execution over a network of heterogeneous data
analysis, modeling and optimization software using a Web browser. A
framework application is illustrated by solving a reservoir characterization
problem, critical for devising an optimal strategy for the development of
oil and gas fields. The results suggest that the framework can be
effectively and efficiently used for solving reservoir characterization
problems and holds promise to be useful not only in other areas of
petroleum engineering (e.g. hydraulic fracturing design, enhanced oil
recovery processes) but also in the solution of complex system design
problems found in industries such as electronics, automotive and
aerospace. |
|
Title: |
THE
WEB SERVER USING DYNAMIC HISTOGRAMS |
Author(s): |
Ying
Wah Teh and Abu Bakar Zaitun |
Abstract: |
In a large networked computer system, load balancing
among the machines is usually done by a trial-and-error method, which
gives unsatisfactory results when large amounts of data are fed into the
machines. The increased data traffic volume can badly affect the networked
system. To overcome this, we present a Web server technique using dynamic
histograms for load balancing. Histogram techniques can be divided into
static and dynamic histograms. Static histograms are usually defined by
the users and are plotted by dividing the useful data into different
classes subjectively. Dynamic histograms, in contrast, are built using
query processing techniques that improve over time, like an adaptive
system that learns the patterns of the existing data input. The basic idea
is that a client machine of the system queries data from the server
machine. The server produces dynamic histograms to optimize the
performance of these machines. Based on these histograms, a load-balancing
pattern can be predicted and then studied further to improve the load
distribution, so that the load is evenly distributed among the machines in
the network. |
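As a rough sketch of the general idea (not the authors' algorithm), the code below builds an equi-depth histogram over recently observed request keys and uses the bucket boundaries to spread incoming keys evenly across a set of servers; the workload and server names are invented.

```python
# Rough sketch of the idea (not the paper's algorithm): build an equi-depth
# histogram over observed request keys, then route new keys to servers by bucket.
import bisect
import random

def equi_depth_boundaries(observed_keys, n_buckets):
    """Boundaries that put roughly the same number of observed keys in each bucket."""
    keys = sorted(observed_keys)
    step = len(keys) / n_buckets
    return [keys[int(i * step)] for i in range(1, n_buckets)]

def route(key, boundaries, servers):
    """Pick the server whose bucket the key falls into."""
    return servers[bisect.bisect_right(boundaries, key)]

random.seed(0)
# Skewed synthetic workload: most requests target low key values.
observed = [int(random.paretovariate(1.5) * 10) for _ in range(10_000)]
servers = ["web1", "web2", "web3", "web4"]
boundaries = equi_depth_boundaries(observed, len(servers))

counts = {s: 0 for s in servers}
for key in observed:
    counts[route(key, boundaries, servers)] += 1
print(boundaries)   # bucket edges adapt to the skew
print(counts)       # load ends up roughly even despite the skewed keys
```

Rebuilding the boundaries periodically from the most recent queries is what makes the histogram "dynamic" in the sense described above.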
|
Title: |
WEBMASTER
- AN INTERNET INFORMATION SUPPORT SYSTEM FOR ACADEMIC SERVICES USING ASP
TECHNOLOGY |
Author(s): |
Carlos
Ferreira, Leonor Teixeira and Rui Santiago |
Abstract: |
Nowadays the management and publication of academic and
administrative information should not ignore the new information
technologies. This paper describes the WebMaster application, a Web
Information System for computerizing the academic information concerning a
Master degree within an academic department. It also automates
all administrative jobs and the inherent procedures for the
distribution of this information, increasing its accessibility,
reliability and timeliness while decreasing the bureaucratic burden. With
this application it is possible to speed up some routine activities,
decreasing the need for manual, exhaustive and repetitive tasks. The
general goal of this work is to determine how and to what extent different
groups of users within an academic department can interchange information
using the Web; in order to attain this goal, we have developed a framework
with three essential aspects, which will be briefly presented: contents,
design and implementation. |
|
Title: |
AN
APPROACH FOR TOTALLY DYNAMIC FORMS PROCESSING IN WEB-BASED APPLICATIONS |
Author(s): |
Daniel
J. Helm and Bruce W. Thompson |
Abstract: |
This paper presents an approach for dynamically generating
and processing user input form variants from metadata stored in a
database. Many web-based applications utilize a series of similar looking
input forms as a basis for capturing information from users, such as for
surveys, cataloging, etc. Typical approaches for form generation utilize
static HTML pages or dynamic pages generated programmatically via scripts.
Our approach generates totally dynamic forms (including page controls,
presentation layout, etc.) using only metadata that has been previously
defined in relational tables. This significantly reduces the amount of
custom software that typically needs to be developed to generate and
process form variants. |
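A minimal sketch of the general technique follows; the field metadata here is invented and held in an in-memory list, whereas in the approach described it would be read from relational tables.

```python
from html import escape

# Minimal sketch of metadata-driven form generation. The field metadata below
# is invented for illustration; in the described approach it would come from
# relational tables rather than being hard-coded.
FORM_METADATA = [
    {"name": "full_name", "label": "Full name", "control": "text"},
    {"name": "age",       "label": "Age",       "control": "number"},
    {"name": "region",    "label": "Region",    "control": "select",
     "options": ["North", "South", "East", "West"]},
]

def render_form(metadata, action="/submit"):
    rows = []
    for field in metadata:
        label = f'<label for="{field["name"]}">{escape(field["label"])}</label>'
        if field["control"] == "select":
            opts = "".join(f'<option>{escape(o)}</option>' for o in field["options"])
            control = f'<select name="{field["name"]}">{opts}</select>'
        else:
            control = f'<input type="{field["control"]}" name="{field["name"]}">'
        rows.append(f"<p>{label} {control}</p>")
    return f'<form action="{action}" method="post">' + "".join(rows) + "</form>"

print(render_form(FORM_METADATA))
```

Because the generated markup is driven entirely by the metadata rows, adding or changing a form variant requires no new HTML or script code, only new rows.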
|
Title: |
AN
INTEGRATED ARCHITECTURE FOR WEB-BASED ENTERPRISE DOCUMENT MANAGEMENT
SYSTEMS |
Author(s): |
Weidong
Zhang and Atta Badii |
Abstract: |
The critical challenge facing information technologists
today is to develop integrated computer systems capable of effective and
efficient management of the increasing amount of information and knowledge
that is available to our Information Society. This paper explores system
solutions for enterprise document management systems within distributed
and heterogeneous computing environments. An integrated framework is
presented, which makes use of various emerging technologies of
object-orientation, internet, databases, and expert systems, to
incorporate different components of document management functions, such as
document retrieval and workflow automation. |
|
Title: |
AN
EMPIRICAL STUDY OF THE BUSINESS MODEL IN THE KOREAN INTERNET STOCK TRADING
MARKET |
Author(s): |
Kun
Chang Lee, Jinsung Kim, Namho Chung and Soonjae Kwon |
Abstract: |
The objective of this paper is to empirically test the
validity of the prevailing business model of the Korean Internet stock
trading market, and search for the possibility of updating it to
incorporate the changes in market needs. Since 1998, the Internet stock
trading companies in Korea have been adopting the commission discount
strategy as their major business model so as to attract more customers to
their web site. However, customer needs as well as the Internet technology
have changed drastically in the meantime, which means that their current
business model should be adapted accordingly. Nevertheless, literature
tackling this issue has seldom been reported. In this sense, this paper is
aimed at empirically testing the validity of the current business model
and seeking the possibility to revise it for the purpose of incorporating
the changes in market trend. For this purpose, we collected 83 valid
questionnaires and performed empirical analysis. Results showed that the
commission discount policy, a currently prevailing business model in the
Korean Internet stock trading market, should be modified to incorporate
several additional factors such as Easy to use, Reliability, Relative
Advantage, and Need to Improve Speed. |
|
Title: |
CODING
STANDARDS BENEFITING PRODUCT AND SERVICE INFORMATION IN E-COMMERCE |
Author(s): |
Alea
M. Fairchild and Bruno de Vuyst |
Abstract: |
E-Commerce can be streamlined as products and services are
unambiguously identified with industry-agreed XML tags through the coding
of products and services according to standard classification conventions,
such as the Universal Standard Products and Services Classification
(UNSPSC). Standardized coding allows for more effective electronic
purchasing management, assists marketing and sales functions and provides
better customer and distribution channel services. With its hierarchical
taxonomy and open standards, the UNSPSC is considered superior to existing
product coding schemes. However, care should be taken to further bind up
such systems with taxation harmonization efforts such as those pursuant to
the revised Kyoto International Convention on the simplification and
harmonization of customs procedures under the auspices of the Customs
Co-operation Council (World Customs Organization). |
|
Title: |
INTERNET
PROGRAMMING: TEACHING BY EXAMPLES |
Author(s): |
Nikola
B. Serbedzija |
Abstract: |
The last few years have seen fundamental changes taking
place in teaching Web-based technology. As the World Wide Web develops
rapidly, it is very hard to stay up-to-date, while focusing on sound
concepts and avoiding short-term trendy approaches. Furthermore, teaching
such a pragmatic subject has to be accompanied by practical
demonstrations. Here, an approach to teach Internet programming is
presented with a major emphasis on teaching by examples. In this way,
students can learn Web techniques and experience their application in
practice, by following Web-enabled on-line material. The introduction of a
Web browser as a universal front-end to both learning and teaching process
has prompted a re-evaluation of what is taught and how the lecture
material is presented. The course starts with an introduction to the
client/server programming model. All other techniques are presented in
accordance with this basic principle, from client-side HTML to server-side
Java programming. |
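The course material itself is not reproduced here; as a generic stand-in for the client/server principle it starts from, the following Python sketch runs a tiny HTTP server in one thread and fetches a page from it as a client.

```python
# Generic stand-in for the client/server model, not taken from the course:
# one process serves a page over HTTP, another fetches and prints it.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body>Hello from the server side</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

server = HTTPServer(("127.0.0.1", 0), Hello)      # port 0: let the OS pick a free port
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "client side": request the page and print what came back.
with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as response:
    print(response.status, response.read().decode())

server.shutdown()
```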
|
Title: |
THE
USE OF INFORMATION ON THE WEB TO ENHANCE TEACHING |
Author(s): |
Nouhad
J. Rizk |
Abstract: |
For well over a century, the higher education system has set
the world standard for academic excellence and equitable access for all
people. Today, the higher education sector--by which we mean both public
and private institutions of post-secondary education and training such as
colleges and universities-- should pursue greater mission differentiation
to streamline their services and better respond to the changing needs of
their constituencies. Individual institutions and parts of statewide
systems should focus on their points of comparative advantage rather than
all striving to become full-service campuses. Community colleges,
undergraduate universities, and research universities, for example, should
embrace different missions, give priority to activities central to those
missions, and reduce or eliminate more marginal activities. Colleges and
universities should also develop sharing arrangements to improve
productivity. A greater sharing of resources--requirements, classes,
services, infrastructure, and libraries--could lead to significant savings
and even improve services. In the twenty-first century, all people should
be encouraged to pursue some form of post-secondary education or training,
not only by searching using the Internet but also by taking advantage of
all the services of the World Wide Web. Designing one's web site now means
sharing one's lectures, laboratories, textbooks and handouts with the
entire world. In this research, we will focus on the positive influence of
technology in teaching and learning. Both teacher and student can replace
the traditional methods of teaching and learning by technological tools
starting with the small calculator up to the fastest and the most
efficient way of information retrieval with the latest newcomer of the
Internet: the Web. |
|
Title: |
DESIGN
AND IMPLEMENTATION OF A VISUAL ONLINE PRODUCT CATALOG INTERFACE |
Author(s): |
Juhnyoung
Lee, Ho Soo Lee and Priscilla Wang |
Abstract: |
One of the key elements of e-commerce systems is the online
product catalog. It provides sellers with a content management system that
assembles, aggregates, normalizes, and distributes product information. It
also provides potential buyers with an interactive interface that offers a
multimedia representation of the product information as well as retrieval,
classification and ordering services. In this paper, we discuss the
interface of online product catalogs, focusing on its ability to help
shoppers navigate and analyze product information. Specifically, we
present a new interactive interface for online product catalogs that is
effective in navigating through the product information space and
analytically selecting suitable products. It uses a multi-dimensional
visualization mechanism based on parallel coordinates and augmented by a
number of visual facilities for filtering, tagging, color-coding, and
dynamic querying. To demonstrate the capabilities of the product catalog
interface for online retail stores and marketplaces, we implemented a
prototype displaying a set of automobile data. We explain how the
prototype visualizes the entire product information space in a single page
and supports intuitive exploratory analysis with the visual facilities. |
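The prototype itself is not shown here; as a rough stand-in for the idea, pandas and matplotlib can already draw a parallel-coordinates view of a small, invented table of car attributes, with each product rendered as one polyline across the attribute axes.

```python
# Rough stand-in for the idea, not the authors' prototype: a parallel-coordinates
# view of a small, invented table of car attributes using pandas + matplotlib.
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

cars = pd.DataFrame({
    "model":   ["hatch A", "sedan B", "suv C", "sport D"],
    "price_k": [14, 22, 31, 45],
    "mpg":     [38, 31, 24, 20],
    "hp":      [95, 140, 190, 280],
    "seats":   [5, 5, 7, 2],
})

# Each product becomes one polyline crossing the four attribute axes,
# so trade-offs (e.g. price vs. mpg) are visible at a glance.
parallel_coordinates(cars, class_column="model", colormap="viridis")
plt.title("Illustrative product catalog view (invented data)")
plt.tight_layout()
plt.show()
```

Filtering, tagging and dynamic querying, as described in the abstract, would then amount to hiding, highlighting or recolouring subsets of these polylines.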
|
Title: |
THE
USE OF INFORMATION SYSTEMS MODELLING TO IDENTIFY AND EVALUATE E-BUSINESS
STRATEGIES FOR SMALL AND MEDIUM ENTERPRISES (SMES) |
Author(s): |
José
Miguel Baptista Nunes and David Patterson |
Abstract: |
This paper seeks to illustrate the difficulties faced by
SMEs in identifying a suitable process to develop an E-Business strategy.
An evaluation of the present literature on E-Business is conducted. While
it is possible to identify numerous options for E-Business (and indeed
build a compelling case for its adoption), the principal strategies
offered are for the execution of an already chosen business model, not for
the choice of the components to build an E-Business model. The motives for
the use of E-Business by SMEs are considered, and a model to meet their needs
in developing an E-Business strategy is adapted from the world of
information systems. The model was utilised in working with an SME and its
effectiveness analysed. The model’s utility was tested principally on an
Extranet as a model for E-Business development. The model’s use of
techniques such as ‘value chain analysis’ and ‘critical
success factors’ was considered in evaluating the impact that an
Extranet would have on the organisation. It was possible to conclude that
Extranets will be both a threat and an opportunity to organisations, and
that the issue of trust is the key determinant. The model’s failings (or
those of some of its component parts) were, firstly, its inability to deal
fully with a ‘knowledge-based’ organisation and, secondly, its inability to
represent the importance of human inter-relationships to the business concerned. |
|
Title: |
E-BUSINESS
CHANGE AND ORGANISATIONAL PERFORMANCE |
Author(s): |
Colin
Ash |
Abstract: |
Many ERP enabled organisations have undertaken significant
e-business initiatives over the past two years. Initially these were
concerned with cost savings but now the trend is towards revenue
generation. Also, the earlier thinking on this topic indicated a
significant role for information technology in these initiatives. We
depart from this by emphasising the importance of managing the change of
e-business projects. The paper examines a research model that proposes
various antecedents to successful e-business change management in ERP
environments. A case study of the first B2B e-business integration with
Dell Computer Corporation and its largest corporate customer is examined
in the context of this model. The study demonstrates the integration of
ERP and non-ERP systems, using Web-based technologies, to
optimise an overall B2B value chain. The specific goal is to determine
facilitators that lead to success of an e-business project. Finally the
study is used to emphasise the role of change management and cultural
readiness when adopting e-business solutions. |
|
Title: |
INTEGRATING
NETWORK SERVICES FOR VIRTUAL TEAMS |
Author(s): |
Pascal
Molli, Hala Skaf-Molli, Claude Godart, Pardeep Ray, Rajan Shankaran and
Vijay Varadharajan |
Abstract: |
Virtual team provision is an emerging business on the
Internet. It allows people to work together while distributed across space,
time and organization. Tools like BSCW or SourceForge allow an organization to
host virtual teams. Although these tools deliver functionality, they
lack the features (e.g. security, dependability and quality of
service) required to make them commercially acceptable. In this paper, we
describe the underlying effort needed at the network services level to make
virtual team software commercially viable. |
|
Title: |
A
NEW MECHANISM FOR DISTRIBUTED MANAGERS PERSISTENCE |
Author(s): |
Rui
Pedro Lopes and José Luis Oliveira |
Abstract: |
SNMP is currently a network management framework in
worldwide use. This primacy is based on simple characteristics: it limits
itself to describing the structure of management information and the
procedures for accessing data, i.e. low-level operations. Recent work within
the IETF proposes a management distribution architecture (DISMAN), which
makes it possible to build agents that can cope with rather complex
information structures. The model, however, still fails to define persistence
mechanisms for the management delegates. In this paper we present an
XML-based data model that can provide persistence for distributed managers’
configuration. Moreover, several application scenarios are discussed, such as
the definition of high-level macros to group together elementary SNMP
operations, and the specification of mobile agent policies for SNMP agent
interaction. |
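The paper's data model is not reproduced here; the fragment below merely sketches the flavour of persisting a distributed manager's configuration as XML, with invented element names, so that a delegated manager could be restored after a restart.

```python
import xml.etree.ElementTree as ET

# Element names are invented for illustration; this is not the data model
# proposed in the paper, only the flavour of persisting a distributed
# manager's configuration so it can be restored after a restart.
manager = ET.Element("distributed-manager", name="edge-poller-1")
macro = ET.SubElement(manager, "macro", name="interface-health")
ET.SubElement(macro, "snmp-get", oid="1.3.6.1.2.1.2.2.1.8")    # ifOperStatus
ET.SubElement(macro, "snmp-get", oid="1.3.6.1.2.1.2.2.1.10")   # ifInOctets
ET.SubElement(manager, "schedule", attrib={"interval-seconds": "60"})

ET.ElementTree(manager).write("manager-config.xml")

# After a restart, the delegate reloads its configuration from the same file.
restored = ET.parse("manager-config.xml").getroot()
oids = [g.get("oid") for g in restored.find("macro").findall("snmp-get")]
print(restored.get("name"), oids)
```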
|
Title: |
PORTUGUESE
PARLIAMENTARY RECORDS DIGITAL LIBRARY |
Author(s): |
Joaquim
Sousa Pinto, Joaquim Arnaldo Martins, Helder Troca Zagalo and Rui José
Pereira Costa |
Abstract: |
The first phase of the digitisation of the Portuguese
Parliament proceedings will cover the historical period between 1935 and 2000
and will amount to around 200,000 pages. In this stage we intend to
develop the mechanisms as well as the tools for the entire project of the
Parliament Digital Library, which we predict will have up to one million
pages. The process involves the microfilming of all the available
material, the scanning of the material so as to obtain the image of each
page, as well as the use of OCR (Optical Character Recognition) to
recover the original text, allowing searches within the original documents.
The project concerns printed material only; manuscripts will be handled
separately at a later stage. This project was initially aimed at the
Parliament Intranet; however, it seems likely that most of
its information will be accessible on the Internet. At this stage, the
material is organized in small brochures with an average of 40-50 pages
each and contains the speeches of members of Parliament. Each one of the
pages is treated individually, so when the user is looking for specific
information, he is drawn to the page or pages in which the expression is
used. The visualization of the pages can be either in text mode or in the
original text through digitalized image. Despite the granularity of the
system being “page”, which means that each page is treated as a
complete element, it is possible to print the entire document, in text or
image, obtaining therefore a copy of the original document, because each
page is wrapped with metadata. |
|
Title: |
JINI
TECHNOLOGY IN E-COMMERCE |
Author(s): |
Luminita
Vasiu |
Abstract: |
Jini technology represents a significant step in the
evolution of distributed computing: a simple yet remarkably reliable
and flexible infrastructure that enables all types of devices to simply
connect into impromptu networks, making access to and delivery of new
network services as simple as plugging in a telephone. Built on top of
the Java software infrastructure, Jini technology enables all types of digital
devices to work together in a community organised without
extensive planning, installation or human intervention. In Jini
distributed systems, Java programs interact spontaneously, enabling
services to join or leave the network with ease and allowing clients to
view and access available services with confidence. It is expected that
Jini technology will make a big impact on the embedded devices market by
providing a high-level, object-based protocol instead of low-level data
exchange protocols. The paper presents some findings on the suitability of
Jini technology for E-commerce applications. It also discusses some security
issues associated with Jini technology. |
|
Title: |
ELECTRONIC
VOTING VIA THE INTERNET |
Author(s): |
Alexander
Prosser and Robert Müller-Török |
Abstract: |
Electronic Transactions over the Internet, particularly
using the World Wide Web, have become an integral part of economic life.
Recently, the public sector has also started to use the new medium for its
administrative processes. This paper analyses whether the Internet could
also be used to support voting and other democratic processes. It advances
the hypothesis that voting via the Internet can meet generally accepted
election standards and proposes a protocol for implementing the voting
process. It also analyses the co-existence of conventional and electronic
voting. |
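The protocol proposed in the paper is not reproduced here; purely to make one requirement concrete, the toy below separates registration (who may vote) from casting (what was voted) and accepts each anonymous token at most once. It deliberately omits the cryptography a real scheme needs, and all names are invented.

```python
import secrets

# Toy illustration only -- not the protocol proposed in the paper. It merely
# makes one requirement concrete: registration (who may vote) is separated
# from casting (what was voted), and each token can be used at most once.
class Registrar:
    def __init__(self, electoral_roll):
        self.roll = set(electoral_roll)
        self.issued = set()          # token values only; no link back to the voter

    def issue_token(self, voter_id):
        if voter_id not in self.roll:
            raise PermissionError("not on the electoral roll")
        self.roll.remove(voter_id)   # each voter registers once
        token = secrets.token_hex(16)
        self.issued.add(token)
        return token

class BallotBox:
    def __init__(self, registrar):
        self.registrar = registrar
        self.used = set()
        self.tally = {}

    def cast(self, token, choice):
        if token not in self.registrar.issued or token in self.used:
            return False             # unknown token or double vote
        self.used.add(token)
        self.tally[choice] = self.tally.get(choice, 0) + 1
        return True

reg = Registrar({"alice", "bob"})
box = BallotBox(reg)
t = reg.issue_token("alice")
print(box.cast(t, "yes"), box.cast(t, "yes"))   # True False (double voting rejected)
print(box.tally)                                # {'yes': 1}
```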
|
Title: |
QOS
NEGOTIATION BASED ON MANAGEMENT DELEGATES |
Author(s): |
José
Luís Oliveira and Rui L. Aguiar |
Abstract: |
The introduction of Quality of Service in the Internet will
bring increased needs for efficient service management in the network.
Current approaches to this problem rely on distributed management
protocols with centralized service control points. This paper proposes the
usage of software agents for this task, developed in a two-stage approach
due to its implementation complexity. The usage of agents, supported by
appropriate platforms, would naturally provide delegation and control
mechanisms, improve management flexibility and even transparently support
the existence of multiple management paradigms across network operators. |
|
Title: |
INTENTIONS
VALUE NETWORK |
Author(s): |
Ulrike
Baumöl and Robert Winter |
Abstract: |
By using the Web, service companies gain direct access to
consumers world-wide, and consumers gain direct access to service
companies world-wide. Simplifying distribution and communication of
services, aggregators emerge as an innovative layer of business models.
While traditional service industries are focused on products and economies
of scale, aggregators focus on customer processes. In this paper, business
models in an “Intentions Value Network” are analyzed, and pioneering
intentions value networks are described. |
|
Title: |
AN
INVESTIGATION INTO AGENCY REQUIREMENTS IN E-BUSINESS INFORMATION SYSTEMS |
Author(s): |
Pascal
van Eck and Roel Wieringa |
Abstract: |
In digital marketplaces, companies are present in the form
of their software, which engages in business interactions with other
companies. Each organisation that is active in the marketplace is trying
to reach its own business goals, which may be in conflict with the goals
of other organisations. The software by which an organisation is present
in a digital marketplace must act on behalf of this organisation to reach
these goals. Thus, there is a relation of agency between the software and
the organisation that the software represents. This relation gives rise to
a number of agency requirements on the software, which are identified and
compared with functional requirements. Results in the area of Multi-Agent
Systems may be applicable in the design of information systems for which
agency requirements hold. A number of such results are briefly described,
and further research issues are identified. |
|
Title: |
DICE
SHOPPING MODEL |
Author(s): |
R.
Badri Narayan |
Abstract: |
The ubiquity and flexibility that the Internet lends to any
enterprise is making the electronic marketplace highly competitive and
largely independent of conventional management systems. With the online
presence of more and more electronic shops, the efficiency and success of
any such commercial venture depends largely on its credibility,
reliability, customer relations and the kind of shopping experience it offers.
The kind of interactivity and interface presented to customers is
instrumental in achieving increased sales and traffic. We therefore
investigate the design of DICE (Dynamism and Interactivity Coupled
Environment), a shopping model designed to provide an enhanced
and rewarding shopping experience through increased interactivity, ease
of navigation and dynamic content. DICE aims to be modeled on real-world
shops and, in contrast to conventional e-shops, places emphasis on
inter-consumer interaction; it incorporates features such as real-time
processing and the use of intelligent agents to provide a more “lively”
and “stimulating” shopping environment. Factors known to drive sales
and traffic are also highlighted in the design, and their effect on the
economics of the e-shop is presented. |
|
Title: |
AGENT
BASED WORKFLOWS FOR SMALL TO MEDIUM ENTERPRISES |
Author(s): |
Botond
Virginas |
Abstract: |
This paper examines the development of technologies that
will enable small enterprises to set up and manage intra-enterprise and
cross-enterprise workflow processes in order to access the next generation of
marketplaces. It summarises results of recent research in agent-based
business process management and virtual electronic markets. A proposed
agent-based distributed market and contracting environment is presented.
Such an environment could have a number of benefits, including the
effective participation of SMEs in electronic contracting markets. A
proposed architecture for a prototype market is described, and possible
services offered by such a market are presented. Finally, the paper
predicts that today’s experimental agent-based marketplaces will become
significant technology in the future, and that efficient, just-in-time
enterprises will become significant players in future markets. |
|
Title: |
THE
DIALECTICS OF INTERNATIONAL INTERNET-BASED TEAMS |
Author(s): |
Miguel
Pina e Cunha, João Vieira da Cunha and Ângela Lacerda Nobre |
Abstract: |
Drawing on grounded theory research, we present a grounded
model of improvisation in cross-cultural contexts. Its major
contribution lies in advancing the concept of the dialectical team, in which
a minimal structure and a compatible perception of reality foster
improvisational action, with diverse members responding to a turbulent
environment using simple resources. This arrangement creates the
conditions that allow a team to improvise successfully and remain both
efficient and effective. The model strengthens the argument for a
dialectical perspective of organizations, unearths the presence of
curvilinear relationships in cross-cultural phenomena where linear ones
were thought to prevail, and provides alternative answers to some of the
problems found in cross-cultural research. |
|
Title: |
MAY
YOUR INFORMATION SERVICE LIVE IN INTERESTING TIMES… |
Author(s): |
Steven
Willmott and Bernard Burg |
Abstract: |
Information systems are undergoing a transformation - they
are becoming faster, richer, more intelligent, personalised and mobile.
Furthermore, some of the greatest advances may yet be to come with the
creation of networked environments supporting complex direct interaction
between heterogeneous information systems: enabling services to act
autonomously, discover one another, provide services to each other and
dynamically form composite information services. This paper paints a
vision of such future “Dynamic Service Environments”, outlines the
technologies which may play a role in creating them and discusses some of
the challenges that lie ahead. |
|
Title: |
AGENT-BASED
PERSONALIZED SERVICES FOR MOBILE USERS OVER A VPN |
Author(s): |
Hamid
Harroud, Mohamed Ahmed, Ahmed Karmouch |
Abstract: |
This paper proposes an agent-based service provisioning
system for mobile users. It features a set of cooperative agents
distributed over different sites that work together to provide
personalized services for nomadic users within a Virtual Private Network
(VPN). For instance, when a user moves outside of his office, he would
like to have a similar office environment while he is at home, in a
meeting at another company, on a business trip or at a temporary location
such as a hotel. Agents representing the end-users and the system agents
may engage in a negotiation process to help the user access personalized
services at a particular site. Such access is performed in accordance with
the user’s home policies as well as the policies of his current location.
An Adaptive Service Presentation agent is used to adapt the service
presentation to the capabilities of the user’s device (e.g. workstation,
laptop, phone, PDA). This work is part of the Mobile Agent Alliance
project involving the University of Ottawa, the National Research Council and
Mitel Corporation in Canada. |
|
Title: |
AN
AGENT-BASED KNOWLEDGE SHARING MODEL FOR INFORMATION RETRIEVAL ON THE
INTERNET |
Author(s): |
Bin
Ling, Colin Allison and Kecheng Liu |
Abstract: |
With the proliferation of electronically available
information, and its diverse nature, a critical problem has arisen:
how to locate, retrieve and process relevant information effectively.
It is impractical to build either a unified peer-to-peer system or a
centralised architecture that combines all of these information resources.
A more promising approach is to program specialised informative agents
within an organisational structure that provide public access to
information resources on the Internet. These agents can co-operate with
other agents to share their knowledge when appropriate. This paper
describes an integrated approach to developing this type of agent-based
information retrieval system model, which employs a user-modelling
strategy, co-operation methods and matchmaking middle-agents. The
feasibility of the resultant knowledge-sharing environment is demonstrated
by experimental results from a simulation based on requirements from the
UK National Health Service. |
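The matchmaking step mentioned in this abstract can be pictured with a minimal sketch: provider agents advertise capability keywords to a middle-agent, and a requesting agent asks that matchmaker for providers whose advertisements overlap with its query. All names below (Matchmaker, advertise, find_providers, the sample agent ids) are hypothetical and are not taken from the paper.

# Minimal matchmaking middle-agent sketch (hypothetical names, not the paper's code).
from collections import defaultdict

class Matchmaker:
    """Middle-agent mapping capability keywords to provider agent ids."""
    def __init__(self):
        self._index = defaultdict(set)   # capability keyword -> provider ids

    def advertise(self, provider_id, capabilities):
        """Provider agents register the topics/resources they can serve."""
        for cap in capabilities:
            self._index[cap.lower()].add(provider_id)

    def find_providers(self, query_terms):
        """Return providers ranked by how many query terms they cover."""
        scores = defaultdict(int)
        for term in query_terms:
            for provider in self._index.get(term.lower(), ()):
                scores[provider] += 1
        return sorted(scores, key=scores.get, reverse=True)

mm = Matchmaker()
mm.advertise("nhs-trust-agent", ["waiting lists", "clinics", "referrals"])
mm.advertise("library-agent", ["journals", "clinics"])
print(mm.find_providers(["clinics", "referrals"]))  # ['nhs-trust-agent', 'library-agent']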
|
Title: |
ADOPTION
OF THE WORLD WIDE WEB BY TRADITIONAL AUSTRALIAN MEDIA ORGANISATIONS |
Author(s): |
Darren
Burden, Philip Joyce and Jamie Mustard |
Abstract: |
The aim of this paper is to identify whether Australian
media companies moved online with a clear business model in mind or in an
ad-hoc manner. In-depth interviews were conducted with four Internet media
managers from two large Australian media organisations. All four had been
involved in Web publishing from its early stages and had extensive
knowledge of the development of Web publishing in the industry. The
interviews focused on the period around the mid-1990s, when the early
development of the organisations’ websites took place. |
|
Title: |
ACTIVE
PROXIES FOR STREAMING MEDIA TO MOBILE DEVICES |
Author(s): |
Kevin
Curran and Gerard Parr |
Abstract: |
The Internet at present, with its multiple standards and the
interconnection of components such as decoders, middleware,
synchronisation scripts, databases and QoS modules, requires more than a plug,
try and play mentality. This is especially problematic when mobility is
introduced, due to the proliferation of possible actions. Adaptable systems
provide a means of coping with change in a computing system, as they allow
the reconfiguration of the system to achieve optimal quality. The
communicating objects can be represented as a graph of objects that
together realise the required behaviour. This paper describes the role of
active service proxies within the Chameleon framework for constructing
mobile applications. Active Service Proxies can be dynamically loaded and
activated within the network to provide for individual services within a
heterogeneous multicast group. Innovative steps are the design of
adaptable protocol machinery within each active intermediate system that
allows for efficient and flexible service translations. This paper
discusses this flexible execution environment. |
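As a rough illustration of the kind of per-client service translation such a proxy performs, the sketch below selects a translation function based on each receiver's declared device class before forwarding a media chunk. The function names, device classes and the crude "translations" are invented for illustration; the Chameleon framework's real interfaces are not reproduced here.

# Sketch of an active proxy choosing a service translation per receiver
# (illustrative only; not the Chameleon framework's actual API).

def downscale(chunk):
    return chunk[: len(chunk) // 2]      # crude stand-in: keep only half the bytes

def transcode_audio_only(chunk):
    return b"AUDIO:" + chunk[:64]        # crude stand-in: strip to a short audio payload

def passthrough(chunk):
    return chunk

TRANSLATIONS = {
    "pda": downscale,
    "phone": transcode_audio_only,
    "workstation": passthrough,
}

def deliver(receiver_id, payload):
    print(f"-> {receiver_id}: {len(payload)} bytes")

def forward_to_group(chunk, receivers):
    """Apply a per-receiver translation before delivery to a heterogeneous group."""
    for receiver_id, device_class in receivers.items():
        translate = TRANSLATIONS.get(device_class, passthrough)
        deliver(receiver_id, translate(chunk))

forward_to_group(b"\x00" * 1024, {"alice": "pda", "bob": "workstation"})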
|
Title: |
SOFTWARE
AGENTS FOR DIGITAL TELEVISION |
Author(s): |
João
Ascenso and Alberto Silva |
Abstract: |
Nowadays more and more audiovisual information is available,
from many sources around the world. Computer and data technologies
continue to develop at a rapid rate, providing higher performance while
reducing the cost and size of system components. Users require
assistance to avoid being overwhelmed by this amount of information, and
information providers require assistance in authoring and managing it.
This large volume of highly dynamic and distributed audiovisual
information is an ideal candidate for systems that make use of agent
technology. This paper focuses on a specific problem that the audiovisual
broadcasting and entertainment industry faces today and how agent
technology can aid in solving it. Intelligent agents, personalization and
standards for the communication and representation of audiovisual information
are discussed. A generic reference model that illustrates the principal
concepts and components is presented, and some conclusions are drawn. |
|
Title: |
DESIGN
AND EVALUATION OF SCALABLE VIDEO DELIVERY SERVICES OVER INTRANETS |
Author(s): |
Luis
Orozco-Barbosa and Mohamed Toukourou |
Abstract: |
Packet-switched networks, especially IP-based networks,
continue to grow worldwide at a formidable pace. This is in part due to
their efficiency at carrying all the traffic generated by popular
applications such as the WWW and electronic mail. However, quality of
service (QoS) sensitive applications such as multimedia communication
applications are not as widespread on the Internet because of the
discrepancies between their specific requirements and the inherent
characteristics of IP networks. Indeed, the effective transmission of video
or voice for real-time decoding requires QoS guarantees from the
underlying transport mechanisms. This paper proposes a framework for the
efficient transmission of video traffic through IP-based networks. The
IETF’s Integrated Services architecture is used to provide
specific QoS guarantees to the video flow. Specifically, the Resource
ReSerVation Protocol (RSVP) allows the end user to request deterministic
QoS guarantees from the network. In the experimental work, we evaluate
the performance of an RSVP-aware switching point for video transmission
under different load conditions. In particular, the QoS provided is
evaluated in terms of video frame inter-arrival times and the packet
forwarding delay at the router node. Our experimental results show that
RSVP is able to protect the real-time traffic during times of congestion,
while under light load conditions the traffic control mechanism exhibits
only slight overhead. Finally, the use of scalable video fits well with the
wide diversity of resources that is an inherent characteristic of the
Internet. |
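For readers unfamiliar with the metrics named above, the sketch below shows one plausible way to compute frame inter-arrival times and mean forwarding delay from timestamp logs. It is a generic illustration with made-up timestamps, not the authors' measurement tooling.

# Sketch: frame inter-arrival times and mean forwarding delay from timestamps
# (illustrative; not the paper's experimental setup).

def inter_arrival_times(arrival_ts):
    """Differences between consecutive frame arrival timestamps (seconds)."""
    return [t2 - t1 for t1, t2 in zip(arrival_ts, arrival_ts[1:])]

def mean_forwarding_delay(ingress_ts, egress_ts):
    """Average time a packet spends inside the switching point."""
    delays = [out - inp for inp, out in zip(ingress_ts, egress_ts)]
    return sum(delays) / len(delays)

arrivals = [0.000, 0.041, 0.079, 0.122]                   # hypothetical ~25 fps stream
print(inter_arrival_times(arrivals))                      # ~[0.041, 0.038, 0.043]
print(mean_forwarding_delay([0.0, 1.0], [0.002, 1.003]))  # ~0.0025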
|
Title: |
COLLABORATIVE
SYSTEMS ARCHITECTURE TO REDUCE TRANSACTION COSTS IN E-BUSINESS |
Author(s): |
John
Perkins and Sharon Dingley |
Abstract: |
Collaborative systems are a reality for major manufacturers
and retailers in the UK. This paper reports the current developments in an
action research programme being undertaken in an international e-business
trading network. The paper briefly reviews a typical manufacturer’s
experience of using collaborative systems to trade with supermarkets.
Proposals to extend the inter-firm network to reduce the transaction costs
incurred between manufacturers and small independent retailers are then
presented. The paper concludes by considering the potential changes to the
balance of power in the trading network that may result from engaging in
e-business. |
|
Title: |
AUTOMATIC
ANALYSIS OF CUSTOMER FEEDBACK AND INQUIRIES |
Author(s): |
Ulrich
Bohnacker, Lars Dehning, Jürgen Franke and Ingrid Renz |
Abstract: |
Several business-to-customer (B2C) applications, i.e. the
analysis of customer feedback and inquiries, can be improved by text
mining (TM). TM gives new insights into customers' needs and desires by
automatically processing their messages. Previously unknown facts and
relations can be detected, and organisations as well as employees profit from
these knowledge management tools. The techniques used are rather simple
but robust: they are derived from basic distance calculations between
feature vectors in the vector space model. |
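The distance calculation the authors refer to, between feature vectors in the vector space model, can be sketched with a generic cosine-similarity example over term-frequency vectors. The tokeniser, sample messages and function names below are illustrative; they are not the authors' pipeline.

# Generic vector-space sketch: compare customer messages by cosine similarity
# of term-frequency vectors (illustrative, not the authors' TM system).
import math
import re
from collections import Counter

def tf_vector(text):
    """Term-frequency vector of lower-cased word tokens."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(v1, v2):
    dot = sum(v1[t] * v2[t] for t in v1.keys() & v2.keys())
    norm = math.sqrt(sum(c * c for c in v1.values())) * \
           math.sqrt(sum(c * c for c in v2.values()))
    return dot / norm if norm else 0.0

msg_a = tf_vector("My invoice is wrong, please correct the invoice amount")
msg_b = tf_vector("Wrong amount on the invoice")
msg_c = tf_vector("How do I reset my password?")
print(cosine(msg_a, msg_b) > cosine(msg_a, msg_c))  # True: a and b are closer in topic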
|
Title: |
MODERN
TOOLS FOR DEVELOPMENT AND DESIGN OF VIRTUAL INSTRUMENTS |
Author(s): |
Remigiusz
J. Rak |
Abstract: |
A virtual instrument is a natural evolution of early DSP-based
systems. Its main feature is the exploitation of the ever-increasing
computational power of modern DSP processors to simplify
the development of measurement algorithms, the man-machine interface
(GUI), and the system interconnection. Anyone can build a virtually unlimited
number of instruments using a single personal computer. The computer
becomes a powerful, multipurpose laboratory tool that can replace
expensive, outdated, easily-broken equipment. The cost of instrumentation
goes down and productivity increases. Distributed measurement systems
are easy to build on the basis of local computer networks and the Internet.
The DataSocket control is a flexible, easy-to-use programming interface
for accessing, sharing and saving data. With this software tool one can
load data stored in files or on remote Web and FTP sites on the Internet,
and exchange data with other applications anywhere in the
world. The control can even be embedded in a Web site to build
interactive Web pages that let users remotely view and analyze
data. DataSocket provides a simple, consistent programming interface
for a variety of data handling tasks, replacing more complex programming
methodologies such as TCP/IP and DDE. |
|
Title: |
MULTIUSER
3D LEARNING ENVIRONMENTS IN THE WEB |
Author(s): |
Christmas
Meire Bressan, Simone de Abreu, Regina Borges de Araujo and Celso Goyos |
Abstract: |
Multiuser 3D environments are characterized by a common
virtual environment which is shared among multiple users. These
environments are important because they will allow a whole new range of
applications, from entertainment to education, to be offered in a
ubiquitous web interface. Learning virtual environments are particularly
interesting as a way to consolidate knowledge acquired from (virtual)
classes. This paper describes two such learning virtual environments which
use the MPEG-4 standard as their supporting tool through a proposed
extension to the current APIs of MPEG-J. |
|
Title: |
AGILE:
INTELLIGENT AGENTS TO ASSIST ELECTRONIC AUCTION |
Author(s): |
Ana
Cristina Bicharra Garcia, Anderson Lopes and Cristiana Seidel |
Abstract: |
The overwhelming range of options brought by the Internet's explosive
growth raises new issues for users engaged in buying and/or selling goods
using the net as the business medium. Goods and services can be exchanged,
sold directly or negotiated in auctions. In any of these situations,
finding the required product at the right price is the big challenge for
Internet users. Especially in e-auctions, timing and strategic actions are
vital to a successful deal. In this paper, we propose a model for
e-auctions based on intelligent agent technology. The use of agents makes
it possible to better reflect what happens in real auctions. Agents act
together with buyers, sellers and auctioneers to assist them in obtaining the
best deal, or at least in finding a Nash equilibrium point. |
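A bidding assistant of the kind described could, in its simplest form, be a proxy agent that raises the user's bid by a fixed increment up to a private valuation. The sketch below shows only that basic idea under assumed names and values; it does not reproduce AGILE's agents, negotiation or equilibrium computation.

# Simplest possible proxy-bidding agent sketch (not the AGILE model itself):
# bid one increment above the current price, but never above the buyer's valuation.

class ProxyBidder:
    def __init__(self, name, valuation, increment=1.0):
        self.name = name
        self.valuation = valuation     # buyer's private maximum
        self.increment = increment

    def next_bid(self, current_price, current_leader):
        if current_leader == self.name:
            return None                # already winning, do nothing
        bid = current_price + self.increment
        return bid if bid <= self.valuation else None   # drop out above valuation

# Tiny English-auction loop between two assistants (hypothetical values).
bidders = [ProxyBidder("alice", 105.0), ProxyBidder("bob", 98.0)]
price, leader, active = 90.0, None, True
while active:
    active = False
    for b in bidders:
        bid = b.next_bid(price, leader)
        if bid is not None:
            price, leader, active = bid, b.name, True
print(leader, price)   # alice wins near bob's drop-out price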
|
Title: |
A
TRUSTED BROKERING SERVICE FOR PKI INTEROPERABILITY AND THIN-CLIENTS
INTEGRATION |
Author(s): |
Carlos
M. A. Costa, José Luís Oliveira and Augusto Silva |
Abstract: |
E-commerce seems to be one of the most promising business areas
for the upcoming years. However, for it to be fully realised, security
issues have to be judiciously managed. Information protection and digital
signatures are essential features for documents that represent
commitments. While several solutions already exist on the market, current
problems are mainly related to the lack of interoperability. In this
paper we present a security broker that uses XML to provide the end-user
with a set of security services and tools that are independent of the
client hardware, operating system, PKI solution and network
infrastructure. |
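The kind of client-independent signing service such a broker would expose can be hinted at with a generic sign/verify example using the widely available Python cryptography package. This is only a sketch of detached signing over document bytes; the broker's actual XML message formats, PKI bindings and key management are not shown.

# Generic sign/verify sketch of a service a security broker could offer to
# thin clients (illustrative only; not the paper's XML protocol or PKI glue).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

document = b"<order><item>widget</item><qty>3</qty></order>"   # hypothetical payload

# Broker side: produce a detached signature over the document bytes.
signature = private_key.sign(document, padding.PKCS1v15(), hashes.SHA256())

# Relying party: verify with the corresponding public key (raises on failure).
public_key = private_key.public_key()
public_key.verify(signature, document, padding.PKCS1v15(), hashes.SHA256())
print("signature valid, length:", len(signature))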
|
Title: |
JUSTIFYING
THE BENEFITS OF WEB BASED ELECTRONIC COMMERCE APPLICATIONS: AN AUSTRALIAN
PERSPECTIVE |
Author(s): |
Jasmina
Drinjak, Phil Joyce and Graeme Altmann |
Abstract: |
This paper explores why organizations invest in eCommerce
applications and highlights the benefits they wish to achieve. It outlines
a Delphi study that was used to determine the underlying benefits of investing
in Web applications and discusses issues pertaining to the benefits
derived. |
|
Title: |
NOTIFICATION
AND ROUTING OF ELECTRONIC MAIL TO MOBILE PHONE DEVICES |
Author(s): |
Hans
Weghorn |
Abstract: |
Today the desire to link digital mobile telephony to
Internet services is growing considerably. Besides specialized services
such as the WAP protocol, a traditional approach is to directly route
information from regular Internet services to mobile terminals, for
instance GSM hand-held phones. The latter devices are not only capable of
performing digital voice connections, but also operate a packet
message service – the so-called Short Message Service (SMS) – for
directly interchanging text data. Here, a method is described for routing
headers and selected contents of electronic Internet mails to mobile
phones by using this SMS datagram service. Since the content of each SMS
is limited to a length of 160 bytes, a system concept is described that
employs packaging and compression for convenient data forwarding to
the mobile phone terminal. This system is realized by introducing
additional software subsystems at both ends of this communication
expansion: the regular e-mail client running on a host connected
to the Internet, and the mobile phone acting as a synchronized e-mail
display terminal. |
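The packaging-and-compression step described above can be sketched generically: compress the mail excerpt, then split it into SMS-sized segments carrying a small sequence header for reassembly. The segment size, two-byte header layout and function names below are illustrative assumptions, not the author's wire format.

# Sketch: compress a mail excerpt and split it into SMS-sized segments with a
# tiny "index/total" header for reassembly (assumed layout, not the paper's).
import zlib

SEGMENT_PAYLOAD = 158   # assumed: 160-byte segment minus a 2-byte header

def to_sms_segments(subject, body_excerpt):
    blob = zlib.compress(f"{subject}\n{body_excerpt}".encode("utf-8"))
    chunks = [blob[i:i + SEGMENT_PAYLOAD] for i in range(0, len(blob), SEGMENT_PAYLOAD)]
    return [bytes([idx, len(chunks)]) + c for idx, c in enumerate(chunks)]

def from_sms_segments(segments):
    ordered = sorted(segments, key=lambda s: s[0])   # first header byte = index
    blob = b"".join(s[2:] for s in ordered)
    return zlib.decompress(blob).decode("utf-8")

segs = to_sms_segments("Meeting moved", "The 10:00 review is now at 14:00 in room B." * 5)
print(len(segs), "segment(s)")
print(from_sms_segments(segs).splitlines()[0])   # 'Meeting moved'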
|