|
Area 1 - DATABASES AND INFORMATION SYSTEMS INTEGRATION
Area 2 - ARTIFICIAL INTELLIGENCE AND DECISION SUPPORT SYSTEMS
Area 3 - INFORMATION SYSTEMS ANALYSIS AND SPECIFICATION
Area 4 - INTERNET COMPUTING AND ELECTRONIC COMMERCE
Area 1 - DATABASES AND INFORMATION SYSTEMS INTEGRATION
Title: |
DATA
SOURCES SERVER |
Author(s): |
Pedro
Pablo Alarcón, Juan Garbajosa, Agustín Yagüe and Carlos García |
Abstract: |
A proposal for a multi-platform architecture to work with heterogeneous data sources is presented. It is based on a server that allows client applications to work with heterogeneous data sources (heterogeneous RDBMSs, XML files, text files, etc.) without the client application needing any information about the data source. A prototype based on the proposed architecture and oriented to heterogeneous RDBMSs has been implemented. |
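A minimal sketch, in Java, of the kind of source-independent access layer this abstract describes; the interface and class names are hypothetical illustrations, not the paper's actual API. The client programs against a generic interface, while an adapter hides whether the source is an RDBMS, an XML file, or a text file:

    import java.sql.*;
    import java.util.*;

    // Hypothetical illustration: clients see only this interface and never
    // learn what kind of source (RDBMS, XML file, text file) sits behind it.
    interface HeterogeneousSource {
        List<Map<String, Object>> fetch(String query) throws Exception;
    }

    // One possible adapter, wrapping any JDBC-accessible RDBMS.
    class JdbcSource implements HeterogeneousSource {
        private final String url, user, password;

        JdbcSource(String url, String user, String password) {
            this.url = url; this.user = user; this.password = password;
        }

        public List<Map<String, Object>> fetch(String query) throws SQLException {
            List<Map<String, Object>> rows = new ArrayList<>();
            try (Connection c = DriverManager.getConnection(url, user, password);
                 Statement s = c.createStatement();
                 ResultSet rs = s.executeQuery(query)) {
                ResultSetMetaData md = rs.getMetaData();
                while (rs.next()) {
                    Map<String, Object> row = new LinkedHashMap<>();
                    for (int i = 1; i <= md.getColumnCount(); i++)
                        row.put(md.getColumnLabel(i), rs.getObject(i));
                    rows.add(row);
                }
            }
            return rows;
        }
    }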
|
Title: |
DESCRIPTORS
AND META-DOCUMENTS FOR MONO-MEDIA AND MULTIMEDIA DOCUMENTS |
Author(s): |
Ikram
Amous and Florence Sèdes |
Abstract: |
This paper presents, first, the use of XML to structure media (text, fixed image, sound and animated image) into flexible and extensible descriptors, and, second, the metadata that can be extracted from each medium. These metadata are stored in an XML document called a ‘meta-document’. To query mono-media and/or multimedia documents, our queries use the two XML documents: the descriptor (containing the document structures) and the meta-document (containing the metadata), in order to better answer user needs and requests. These documents can be queried by languages like XML-QL, XQL, etc. |
|
Title: |
ORGANISING
AND MODELLING METADATA FOR MEDIA-BASED DOCUMENTS |
Author(s): |
Ikram
Amous, Anis Jedidi and Florence Sèdes |
Abstract: |
One of the main problems of information retrieval on the Web is the poor description and cataloguing of information of different types. One proposal to cope with this lack consists in introducing the concept of metadata, to enrich and structure information description and improve search relevance. We propose here a contribution to extend the existing media-based metadata with a set of metadata describing documents resulting from various media (text, image, audio and video). These metadata are modeled in UML. The schema instantiation is structured in XML documents describing the media content and structure. The XML documents can be processed by query languages such as XML-QL. |
|
Title: |
XML-BASED
DOCUMENT TO QUERY A RELATIONAL DATABASE |
Author(s): |
Wilmondes
Manzi de Arantes Júnior, Christine Verdier and André Flory |
Abstract: |
This paper deals with the design of a system which creates an XML document for the different users of medical information systems, in order to display medical information on each computer for reading, modifying and querying medical data. The system is built on the main idea of linking a relational database (with structured data) and XML (with semi-structured data). The system works as follows: the medical expert creates a document (with the help of an HMI), and the system checks that the document is semantically correct, creates the XML document and its DTD, and automatically generates the SQL queries to build the document and fill it in. |
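The core step described here, running a generated SQL query and filling an XML document with the rows, can be sketched as follows. This is a hypothetical Java/JDBC illustration, not the paper's system; a real implementation would also escape XML special characters:

    import java.sql.*;

    // Hypothetical sketch: turn the rows of a query result into XML records.
    public class RowsToXml {
        static String toXml(ResultSet rs, String recordTag) throws SQLException {
            ResultSetMetaData md = rs.getMetaData();
            StringBuilder xml = new StringBuilder("<records>\n");
            while (rs.next()) {
                xml.append("  <").append(recordTag).append(">\n");
                for (int i = 1; i <= md.getColumnCount(); i++) {
                    String tag = md.getColumnLabel(i).toLowerCase();
                    xml.append("    <").append(tag).append(">")
                       .append(rs.getString(i))   // naive: real code must escape &, <, >
                       .append("</").append(tag).append(">\n");
                }
                xml.append("  </").append(recordTag).append(">\n");
            }
            return xml.append("</records>").toString();
        }
    }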
|
Title: |
MEDIWEB:
A MEDIATOR-BASED ENVIRONMENT FOR DATA INTEGRATION ON THE WEB |
Author(s): |
Ladjane
S. Arruda, Cláudio S. Baptista and Carlos A. A. Lima |
Abstract: |
Data integration of heterogeneous information systems has been investigated for a long time. However, with the advent of the Internet this problem has gained more attention for many reasons. One of the main aims of interoperable systems is to provide transparent access to distributed data using a unified view of the whole system. It is important to mention that the underlying data sources may be independent and heterogeneous. This paper addresses the problem of data integration in web-based systems. We present the architecture and design of a web-based query system in which users, by using an ontology, can specify their queries and submit them to the underlying data sources. These data sources can be either database systems or XML files. The system interface supports several devices. |
|
Title: |
THE
ROLE OF ENTERPRISE ARCHITECTURE FOR PLANNING AND MANAGING FUTURE
INFORMATION SYSTEMS INTEGRATION |
Author(s): |
Thomas
Birkhölzer and Jürgen Vaupel |
Abstract: |
Complex IT-environments are characterized by deconstruction
of traditional packaging and consolidation of common infrastructure and
services. In a “consolidated” business environment, business success depends crucially on successfully embedding one's own systems and products into the overall environment. This requires more than just external interfaces: it requires coordination with and anticipation of this environment. This task is described in this paper as “Enterprise
Architecture”. The relation to other architectural roles in software
engineering is similar to the well-understood and established relation
between “city planning” and “building blue-prints” in the building
domain. There is a difference in scale, scope, necessary competences and
methodologies. This paper outlines these distinct roles, their tasks, and
scopes in order to stimulate the understanding summarized in the following
two theses:
- Enterprise Architecture is a necessary and distinct architectural role.
Successful large-scale system development requires appreciation and
inclusion of this role in the IT-engineering process.
- Enterprise Architecture means cross-system coordination with similar stakeholders, e.g. system development efforts, outside one's own business ownership. This distinguishes Enterprise Architecture from traditional
architectural roles and implies distinct tasks, methodologies, and
required skills. |
|
Title: |
FSQL:
A FLEXIBLE QUERY LANGUAGE FOR DATA MINING |
Author(s): |
Ramón
Alberto Carrasco, María Amparo Vila and José Galindo |
Abstract: |
At present we have an FSQL server available for Oracle© databases, programmed in PL/SQL. This server allows us to query a fuzzy or classical database with the FSQL language (Fuzzy SQL). The FSQL language is an extension of the SQL language which permits us to write flexible (or fuzzy) conditions in our queries to a fuzzy or traditional database. In this paper we show an extension of the FDBR architecture of FSQL for the fuzzy handling of different types of data. The main advantage is that any user can define his own fuzzy comparator for any specific problem. We consider that this model satisfies the requirements of Data Mining systems (handling of different types of data, high-level language, efficiency, certainty, interactivity, etc.), and this new level of personal configuration makes the system very useful and flexible. |
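The user-defined fuzzy comparators mentioned in the abstract can be illustrated with a small hypothetical sketch: a trapezoidal membership function scores how well a crisp column value matches a flexible condition, and a row qualifies when the degree reaches the threshold attached to the query:

    // Hypothetical sketch of a user-defined fuzzy comparator of the kind the
    // abstract describes: a trapezoid (a, b, c, d) scores how well a crisp
    // value matches a fuzzy condition such as "price is cheap".
    public class TrapezoidComparator {
        final double a, b, c, d;   // feet and shoulders of the trapezoid

        TrapezoidComparator(double a, double b, double c, double d) {
            this.a = a; this.b = b; this.c = c; this.d = d;
        }

        // Membership degree in [0, 1] of value x.
        double degree(double x) {
            if (x >= b && x <= c) return 1.0;    // plateau: full membership
            if (x <= a || x >= d) return 0.0;    // outside the support
            return x < b ? (x - a) / (b - a) : (d - x) / (d - c);
        }

        // A row satisfies the flexible condition if its membership degree
        // reaches the threshold given in the query.
        boolean satisfies(double x, double threshold) {
            return degree(x) >= threshold;
        }

        public static void main(String[] args) {
            TrapezoidComparator cheap = new TrapezoidComparator(0, 0, 50, 80);
            System.out.println(cheap.degree(65));          // 0.5
            System.out.println(cheap.satisfies(65, 0.4));  // true
        }
    }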
|
Title: |
PREDICATE-BASED
CACHING SCHEME FOR WIRELESS ENVIRONMENTS |
Author(s): |
Pauline
Chou and Zahir Tari |
Abstract: |
Demand for wireless computing has recently increased. Although it provides greater convenience and flexibility to end users, wireless communication has limitations such as low bandwidth and long latency. In addition, mobile devices usually have limited power resources. To address such limitations, caching techniques (with consistency control mechanisms) are used to reduce the communication between clients and servers over wireless networks. In this paper we propose a server-based broadcast caching approach that uses predicates to reflect updates in the broadcast reports, called Cache Invalidation Reports (CIRs). A predicate mapping function, which produces a binary representation of the attribute, is associated with each attribute. A matching algorithm is also designed for detecting relevancy between the cache predicate and the predicates in the CIR. The proposed predicate-based CIR has several advantages (e.g. efficiency in overall bandwidth usage), as it informs the cache manager which items need to be refreshed and which ones need to be discarded. |
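The predicate mapping and matching idea can be sketched as follows (a hypothetical Java illustration, not the paper's algorithm): an attribute's domain is cut into ranges, a predicate is encoded as the set of ranges it overlaps, and relevance between the cached predicate and a CIR predicate reduces to a bit intersection:

    import java.util.BitSet;

    // Hypothetical sketch of predicate mapping for cache invalidation.
    public class PredicateCir {
        static final double[] CUTS = {0, 100, 200, 300, 400};  // range boundaries

        // Map a predicate "lo <= attr < hi" to a binary representation:
        // bit i is set iff the predicate overlaps range [CUTS[i], CUTS[i+1]).
        static BitSet encode(double lo, double hi) {
            BitSet bits = new BitSet(CUTS.length - 1);
            for (int i = 0; i < CUTS.length - 1; i++)
                if (lo < CUTS[i + 1] && hi > CUTS[i]) bits.set(i);
            return bits;
        }

        // Cached data may be stale iff the cache predicate and an update
        // predicate from the CIR share at least one range.
        static boolean relevant(BitSet cached, BitSet updated) {
            return cached.intersects(updated);
        }

        public static void main(String[] args) {
            BitSet cacheQuery = encode(50, 150);    // client cached attr in [50, 150)
            BitSet serverUpdate = encode(120, 130); // CIR reports an update there
            System.out.println(relevant(cacheQuery, serverUpdate)); // true -> refresh
        }
    }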
|
Title: |
SEMI-AUTOMATIC
WRAPPER GENERATION AND ADAPTION |
Author(s): |
Michael
Christoffel, Bethina Schmitt and Jürgen Schneider |
Abstract: |
The success of the Internet as a medium for the supply and commerce of various kinds of goods and services leads to a fast growing number of autonomous and heterogeneous providers that offer and sell goods and services electronically. The new market structures have already entered all kinds of markets. Approaches for market infrastructures usually try to cope with the heterogeneity of the providers by special wrapper components, which translate between the native protocols of the providers and the protocol of the market infrastructure. Enforcing a special interface on the providers limits their independence. Moreover, requirements such as direct access to the internal business logic and databases of the providers, or fixed templates for internal data structures, are not suitable for establishing a really open electronic market. A solution is to limit access to the existing Web interface of the provider. This solution keeps the independence of the providers without burdening them with additional work. However, for efficiency reasons, it remains necessary to tailor a wrapper for each provider. What is more, each change in the provider or its Web representation forces the modification of the existing wrapper or even the development of a new wrapper. In this paper, we present an approach for a wrapper for complex Web interfaces, which can easily be adapted to any provider just by adding a source description file. A tool allows the construction and modification of source descriptions without expert knowledge. Common changes in the Web representation can be detected and accommodated automatically. The presented approach has been applied to the market of scientific literature. |
|
Title: |
A SYSTEM FOR DATA CHANGE PROPAGATION IN HETEROGENEOUS INFORMATION SYSTEMS |
Author(s): |
Carmen
Constantinescu, Uwe Heinkel, Ralf Rantzau and Bernhard Mitschang |
Abstract: |
Today, it is common that enterprises manage several mostly
heterogeneous information systems to supply their production and business
processes with data. There is a need to exchange data between the
information systems while preserving system autonomy. Hence, an
integration approach that relies on a single global enterprise data schema
is ruled out. This is also due to the widespread usage of legacy systems.
We propose a system, called Propagation Manager, which manages
dependencies between data objects stored in different information systems.
A script specifying complex data transformations and other sophisticated
activities, like the execution of external programs, is associated with
each dependency. For example, an object update in a source system can
trigger data transformations of the given source data for each destination
system that depends on the object. Our system is implemented using current
XML technologies. We present the architecture and processing model of our
system and demonstrate the benefit of our approach by illustrating an
extensive example scenario. |
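One way to picture the dependency mechanism (a hypothetical sketch; the actual system is XML-based and considerably richer) is a registry that maps a source object to transformation scripts, each run when the object is updated:

    import java.util.*;
    import java.util.function.Function;

    // Hypothetical sketch: each dependency between a source object and a
    // destination system carries a transformation; an update to the source
    // triggers every associated transformation.
    public class PropagationManagerSketch {
        record Dependency(String destination,
                          Function<Map<String, Object>, Map<String, Object>> transform) {}

        private final Map<String, List<Dependency>> deps = new HashMap<>();

        void register(String sourceObjectId, Dependency d) {
            deps.computeIfAbsent(sourceObjectId, k -> new ArrayList<>()).add(d);
        }

        // Called when a source system reports an object update.
        void propagate(String sourceObjectId, Map<String, Object> newState) {
            for (Dependency d : deps.getOrDefault(sourceObjectId, List.of())) {
                Map<String, Object> converted = d.transform().apply(newState);
                System.out.println("send to " + d.destination() + ": " + converted);
            }
        }
    }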
|
Title: |
TEMPORAL
DATA WAREHOUSING: BUSINESS CASES AND SOLUTIONS |
Author(s): |
Johann
Eder, Christian Koncilia and Herbert Kogler |
Abstract: |
Changes in transaction data are recorded in data warehouses, and sophisticated tools allow these data to be analyzed along time and other dimensions. But changes in master data and in structures, surprisingly, cannot be represented in current data warehouse systems, impeding their use in dynamic areas and/or leading to erroneous query results. We propose a
temporal data warehouse architecture to represent structural changes and
permit correct analysis of data over periods with changing master data. We
show how typical business cases involving change in master data can be
solved using this approach and we discuss architectural variants for the
implementation. |
|
Title: |
A
FRAMEWORK TO ANALYSE MOST CRITICAL WORK PACKAGES IN ERP IMPLEMENTATION
PROJECTS |
Author(s): |
José
Esteves and Joan A. Pastor |
Abstract: |
In order to achieve success in a software project, it is
important to define and analyze the most critical processes within the
project. A common approach to define most critical processes is the
Process Quality Management (PQM) method. However, the process structure of
the PQM method is too simple since it only provides one level of process
analysis. Real cases imply project process structures that are more
complex. We have improved the PQM analysis section to provide more depth
to real project structures. This study attempts to analyze this issue in a
specific type of software projects: Enterprise Resource Planning (ERP)
implementation projects. We present a framework to analyze most critical
work packages in ERP implementation projects. We then apply the result of the analysis to SAP implementation projects. The result is a list of critical work packages in each phase of a SAP implementation project. These results show the greater importance of work packages related to organizational and project management aspects compared with technical ones. Therefore, these results highlight the need for project managers to focus on these work packages. |
|
Title: |
INFORMATION
ORGANIZER: A COMPREHENSIVE VIEW ON REUSE |
Author(s): |
Erik
Gyllenswärd, Mladen Kap and Rikard Land |
Abstract: |
Within one organization, there are often many conceptually
related but technically separated information systems. Many of these are
legacy systems representing enormous development efforts, and
containing large amounts of data. The integration of these often requires
extensive design modifications. Reusing applications “as is” with all
the knowledge and data they represent would be a much more practical
solution. This paper describes the Business Object Model, a model
providing integration and reuse of existing applications and cross
applications modelling capabilities and a Business Object Framework
implementing the object model. We also present a product supporting the
model and the framework, Information Organizer, and a number of design
patterns that have been built on top of it to further decrease the amount
of work needed to integrate legacy systems. We describe one such pattern
in detail, a general mechanism for reusing relational databases. |
|
Title: |
A
PROCESS MODEL FOR ENTERPRISE-WIDE DESIGN OF DATA ACQUISITION FOR DATA
WAREHOUSING |
Author(s): |
Arne
Harren and Heiko Tapken |
Abstract: |
Data warehouse systems are nowadays well established as a technical foundation for decision support. Due to their integrated and unified view over data from various operational and external systems, they provide a reliable platform for enterprise-wide, strategic data analyses and business forecasts. Therefore, sound data acquisition with
data from various data sources is crucial at construction time as well as
at maintenance time. Within the scope of this paper we present a process
model for the design of data acquisition processes. Comprehensibility and
maintainability of acquisition processes are achieved by clear distinction
between process descriptions and corresponding implementations.
(Semi-)Automatic derivation of optimized implementations is provided.
Although not limited to a single application domain we mainly focus on the
area of data warehouse systems. In this paper we sketch the underlying
framework and propose the process model. |
|
Title: |
DATA
INTEGRATION USING THE MONIL LANGUAGE |
Author(s): |
Mónica
Larre, José Torres, Eduardo Morales and Sócrates Torres |
Abstract: |
Data integration is the process of extracting and merging
data from multiple heterogeneous sources to be loaded into an integrated
information resource. Solving structural and semantic heterogeneities
between source and target data is the most complex problem for data
integration. With the appearance of Data Warehouse technology, the development of tools for effectively exploiting source data to populate Data Warehouses has become a challenging issue. This paper describes an
integration language called MONIL as an alternative to solve integration
problems. MONIL is an expressive programming language based on: a) An
integration metamodel, b) A set of built-in conversion functions, and c)
An algorithm to automatically suggest integration correspondences. MONIL
language is embedded in a framework with a set of tools to develop, store
and execute integration programs following a 3-phase integration process.
When a MONIL program is executed, MONIL code is translated into both Java
language and JDBC commands. The MONIL language has been successfully used
to integrate several sources with different levels of heterogeneity. |
|
Title: |
DIDAFIT:
DETECTING INTRUSIONS IN DATABASES THROUGH FINGERPRINTING TRANSACTIONS |
Author(s): |
Wai
Lup Low, Joseph Lee and Peter Teoh |
Abstract: |
The most valuable information assets of an organization are
often stored in databases and it is pertinent for such organizations to
ensure the integrity and confidentiality of their databases. With the
proliferation of ecommerce sites that are backed by database systems,
databases that are available online 24/7 are ubiquitous. Data in these
databases ranges from credit card numbers to personal medical records.
Failing to protect these databases from intrusions will result in loss of
customers’ confidence and might even result in lawsuits. Database
intrusion refers to the unauthorized access and misuse of database
systems. Database intrusion detection systems identify suspicious,
abnormal or downright malicious accesses to the database system. However,
there is little existing work on detecting intrusions in databases. We
present a technique that can efficiently identify anomalous accesses to
the database. Our technique characterizes legitimate accesses through
fingerprinting their constituent SQL statements. These fingerprints are
then used to detect illegitimate accesses. We illustrate how this
technique can be used in a typical client-server database system setup.
Experimental results show that the technique is efficient and scales up
well. Our contributions include introducing a novel process for
fingerprinting SQL statements and developing an efficient technique to
detect anomalous database accesses. |
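The fingerprinting idea can be sketched as follows; this hypothetical Java illustration abstracts constants out of SQL text, so legitimate application statements collapse to known fingerprints and structurally different statements (e.g. injected clauses) are flagged:

    import java.util.Set;
    import java.util.regex.Pattern;

    // Hypothetical sketch of fingerprinting: constant values in an SQL
    // statement are abstracted away so that all legitimate invocations of the
    // same application query collapse to one fingerprint.
    public class SqlFingerprint {
        private static final Pattern STRING = Pattern.compile("'[^']*'");
        private static final Pattern NUMBER = Pattern.compile("\\b\\d+(\\.\\d+)?\\b");

        static String fingerprint(String sql) {
            String f = sql.trim().toLowerCase().replaceAll("\\s+", " ");
            f = STRING.matcher(f).replaceAll("?");   // 'Smith' -> ?
            f = NUMBER.matcher(f).replaceAll("?");   // 42 -> ?
            return f;
        }

        public static void main(String[] args) {
            // Fingerprints learned from legitimate application traffic.
            Set<String> legitimate = Set.of(
                "select name from patient where id = ?");

            String incoming = "SELECT name FROM patient WHERE id = 42 OR 1 = 1";
            boolean anomalous = !legitimate.contains(fingerprint(incoming));
            System.out.println(anomalous);  // true: injected clause changed the shape
        }
    }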
|
Title: |
AN
INTEGRATED OBJECT DATABASE AND DESCRIPTION LOGIC SYSTEM FOR ONLINE CONTENT
AND EVENT-BASED INDEXING AND RETRIEVAL OF A CAR PARK SURVEILLANCE VIDEO |
Author(s): |
Farhi
Marir, Kamel Zerzour and Karim Ouazzane |
Abstract: |
This paper addresses the need for a semantic video-object
approach for efficient storage and manipulation of video data to respond
to the needs of several classes of potential applications when efficient
management and deductions over voluminous data are involved. We present
the VIGILANT model for content and event-based retrieval of video images
and clips using automatic annotation and indexing of contents and events
representing the extracted features and recognised objects in the images
captured by a video camera in a car park environment. The underlying
video-object model combines Object-Oriented modelling (OO) techniques and
Description Logics (DLs) knowledge representation. The OO technique models the static aspects of video clips, and instances and their indexes will be stored in an Object-Oriented Database. The DLs model will extend the OO model to cater for the inherently dynamic content descriptions of the video, as events tend to spread over a sequence of frames. |
|
Title: |
A
MODEL FOR ADVANCED QUERY CAPABILITY DESCRIPTION IN MEDIATOR SYSTEMS |
Author(s): |
Alberto
Pan, Paula Montoto, Anastasio Molano, Manuel Álvarez, Juan Raposo and
Ángel Viña |
Abstract: |
Mediator systems aim to provide a unified global data schema over distributed, heterogeneous, structured and semi-structured data
sources. These systems must deal with limitations on the query
capabilities of the sources. This paper introduces a new framework for
representing source query capability along with the algorithms needed to
compute the query capabilities of the global schema from sources. Our
approach for computing query capabilities is able to support a richer
capabilities representation framework than the ones previously presented
in the literature. We show that those approaches are insufficient to
properly represent many real sources, and how our approach can solve those
limitations. |
|
Title: |
USING
FULL MATCH CLASSES FOR SELF-MAINTENANCE OF MEDIATED VIEWS |
Author(s): |
Valéria
Magalhães Pequeno and Vãnia Maria Ponte Vidal |
Abstract: |
Sharing information among multiple heterogeneous and
autonomous data sources has emerged as a new and strategic requirement in
modern enterprises. In this paper, we use a mediator-based approach for
integrating multiple heterogeneous data sources. The mediator supports
materialized views (mediated views) which are stored in a centralized
repository. The queries on the view can be processed directly from the
integrated view, with no need for accessing the remote sources. The main
difficulty with this approach is to maintain the consistency of the
materialized view with respect to source database updates. Usually, match classes are not self-maintainable. In a prior paper, we presented a
technique for self-maintenance of full match classes. In this work, we
show how to make other types of match classes self-maintainable by using
full match classes as auxiliary classes. |
|
Title: |
PROPOSING
A METHOD FOR PLANNING THE MATERIALISATION OF VIEWS IN A DATA WAREHOUSE |
Author(s): |
Alexander
Prosser |
Abstract: |
Data warehouses store multidimensional and aggregate data
for analysis and decision support. The question arises which aggregates
should be materialised given user access profiles. The paper proposes the
Aggregation Path Array (APA) as a framework for (i) systematically
representing all cubes that can be derived from a given set of dimensions
and hierarchy levels in a compact way, (ii) representing the cubes which
are of interest to the users, (iii) finding out which cubes can be derived from a given materialised cube (= view), and (iv) supporting the decision of which cubes to materialise by showing the ceteris paribus “net effect”
of materialising a certain cube. The paper also presents a software tool
to implement the method shown which is available as freeware from
http://erp.wu-wien.ac.at/install.exe. |
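Check (iii), deciding whether a cube can be derived from a materialised one, can be illustrated with a simplified sketch that ignores intra-dimension hierarchy roll-ups: a cube is derivable when it groups by a subset of the materialised cube's dimensions (hypothetical Java, not the APA tool itself):

    import java.util.Set;

    // Simplified sketch: coarser aggregates roll up from finer ones, so a
    // wanted cube is derivable from a materialised cube when its grouping
    // dimensions are a subset of the materialised cube's dimensions.
    public class CubeDerivability {
        static boolean derivable(Set<String> wanted, Set<String> materialised) {
            return materialised.containsAll(wanted);
        }

        public static void main(String[] args) {
            Set<String> materialised = Set.of("product", "month", "region");
            System.out.println(derivable(Set.of("product", "month"), materialised)); // true
            System.out.println(derivable(Set.of("customer"), materialised));         // false
        }
    }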
|
Title: |
DATA
REPRESENTATION IN INDUSTRIAL SYSTEMS |
Author(s): |
Claudia
Raibulet and Claudio Demartini |
Abstract: |
The specification and implementation of data related to heterogeneous resources are still open problems in industrial systems, in spite of the variety of data storage models and technologies available on the market today. This is because industrial resources have associated proprietary specifications and implementations for their related data. The
paper proposes two possible solutions to these problems. The first
specifies a Distributed Repository Model that aims at providing a
unified/common view of the heterogeneous resources in an industrial
system. This approach makes use of the ISO 10303 standard. The second
proposes the definition of an industrial-specific language that provides
the syntax and the rules to create logical data models for industrial
systems. It is based on the eXtensible Markup Language. Both approaches
are independent of any implementation detail and/or storage-model
architecture. A comparison of the two solutions is provided at the end of
the paper. |
|
Title: |
D-ANTICIP:
A PROTOCOL SUITABLE FOR DISTRIBUTED REAL-TIME TRANSACTIONS |
Author(s): |
Bruno
Sadeg, Samia Saad-Bouzefrane and Laurent Amanton |
Abstract: |
Many problems arise when we address issues in distributed real-time database systems (DRTDBMSs). A distributed database generally consists of a database located at a main site, the master, where the coordinator process is executed, and of other databases located at other sites, the participant sites, where cohort processes are executed. The main problem is then to maintain the consistency of the distributed database while ensuring that transactions meet their deadlines. Even in centralized RTDBMSs, this objective is difficult to reach. When the database is distributed, the problem is much more difficult due to communication delays. Hence, one of the problems to solve is to manage real-time subtransactions in participant sites efficiently. A subtransaction is the part of a global transaction that executes within a participant site. In this paper, we present a protocol (D-ANTICIP) that enhances subtransaction performance, thereby enhancing global transaction performance. Simulation results show that the mechanism we have used increases the number of subtransactions that meet their deadlines in comparison with the traditional two-phase commit protocol. |
|
Title: |
USING
DATA MINING TECHNIQUES TO ANALYZE CORRESPONDENCES BETWEEN PARTITIONS |
Author(s): |
D.
Sánchez, J.M. Serrano, M.A.Vila, V. Aranda, J. Calero and G. Delgado |
Abstract: |
On many occasions, the information and knowledge employed to make decisions about a certain topic come from different sources. The fusion of this information is needed in order to facilitate its analysis, comparison and exploitation. One particular case is that of having two different classifications (partitions) of the same set of objects. A first step in integrating them is to study their possible correspondence. In this paper we introduce several kinds of possible correspondences between partitions, and we propose the use of data mining techniques to measure their accuracy. For that purpose, partitions are represented as relational tables, and correspondences are identified with association rules and approximate dependencies. The accuracies of the former are then measured by means of accuracy measures of the latter, and some results relating accuracy values to correspondence cases are shown. Finally, we provide some examples of application of our proposal to a real-world problem, the integration of user and scientific classifications of soils, which is of primary interest for decision making in agricultural environments. |
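The scoring of a correspondence by association-rule measures can be sketched as follows (hypothetical Java with toy data): the two partitions form a two-column table, and the rule "class a in the first partition implies class x in the second" gets the usual support and confidence:

    // Hypothetical sketch: two partitions of the same objects laid out as a
    // two-column table; a candidate correspondence is scored as an
    // association rule via support and confidence.
    public class PartitionCorrespondence {
        public static void main(String[] args) {
            String[] p = {"a", "a", "a", "b", "b", "c"};  // first classification
            String[] q = {"x", "x", "y", "y", "y", "z"};  // second classification

            String antecedent = "a", consequent = "x";
            int both = 0, ant = 0;
            for (int i = 0; i < p.length; i++) {
                if (p[i].equals(antecedent)) {
                    ant++;
                    if (q[i].equals(consequent)) both++;
                }
            }
            double support = (double) both / p.length;   // P(a and x)
            double confidence = (double) both / ant;     // P(x | a)
            System.out.printf("support=%.2f confidence=%.2f%n", support, confidence);
            // support=0.33 confidence=0.67: "a -> x" holds for two of three a-objects
        }
    }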
|
Title: |
A
HIERARCHICAL APPROACH TO COMPLEX DATA CUBE QUERIES |
Author(s): |
Rebecca
Boon-Noi Tan and Guojun Lu |
Abstract: |
The data cube has become a topical issue in the research community for its multidimensional presentation of data. However, there is no existing data cube query classification technique that covers all aspects of the data cube query model. In this paper, we propose a comprehensive study of complex data cube queries in OLAP. A query classification is essential, especially to exploit the full capacity of data cube queries. The classification is also essential for query optimization purposes, as it makes clear which types of data cube queries need to be optimized. Consequently, the domain of query optimization is determined by the scope of data cube queries. |
|
Title: |
IMPLEMENTATION
OF FUZZY CLASSIFICATION QUERY LANGUAGE IN RELATIONAL DATABASES USING
STORED PROCEDURES |
Author(s): |
Yauheni
Veryha |
Abstract: |
A framework of the fuzzy classification query language
(fCQL) for data mining in information systems is presented. The fuzzy
classification query language provides easy-to-use functionality for data
extraction similar to the conventional non-fuzzy classification and SQL
querying. The developed prototype is based on the stored procedures and database extensions of Microsoft SQL Server 2000. It can be used as a data mining tool in large information systems and easily integrated with conventional relational databases. The benefits of using the presented approach include high flexibility for data analysis, user-friendly data presentation at the report generation phase, and additional data security features due to the introduction of an additional view-based data layer. |
|
Title: |
AN
XML-BASED VIRTUAL PATIENT RECORDS SYSTEM FOR HEALTHCARE ENTERPRISES |
Author(s): |
Zhang
Xiaoou and Pung Hung Keng |
Abstract: |
With the advent of shared care, there is a need to integrate patient records that are spread across disparate information systems. In this paper, the design and implementation of an XML-based Virtual Patient Records System, XVPRS, is described. It uses the World Wide Web to consolidate patient data across multiple organizations. The system uses XML-encoded HL7 as the application-level protocol between legacy systems and XML as the main information format in the system itself. XVPRS also demonstrates how to transmit and process a clinical document using CDA. Our experience with XVPRS shows that using XML as the primary information format not only simplifies the development of a single information system but also facilitates information integration among enterprise systems. |
|
Title: |
IMPORTING
XML DOCUMENTS TO RELATIONAL DATABASES |
Author(s): |
Ale
Gicqueau |
Abstract: |
XML has made such a big impression on the technology
industry that many thought that XML databases would eventually replace
more traditional RDBMS. Now that IT professionals have started to
implement viable XML solutions and the first excitement and sensation
generated by this new technology has passed, we are realizing that XML and
RDBMS can be considered complementary technologies. In fact, the value
brought by the intelligent use of these combined technologies is
significant because their individual strengths reside in very different
areas. XML has become the lingua franca for data exchange between heterogeneous systems because it is text-based, platform-independent and license-free, with a self-descriptive way of presenting information and its structure. However, in many instances, you still need a traditional relational database like Oracle, DB2 or SQL Server to store, query and manipulate this data, as XML is still inefficient as a data storage and access mechanism. Relational databases are by far the most commonly used type of database today because they provide superior querying abilities, reduced data set size and richer data type support. For this reason, RDBMS
and XML are here to stay and it is imperative to know how to map XML
documents to relational databases. After reviewing the differences between
XML and RDBMS format, this session will present you with programmatic ways
and methods to import XML documents corresponding to any DTD into any
relational database. |
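A minimal sketch of the basic element-to-row mapping discussed here, assuming an in-memory H2 database, a pre-created customer table, and invented element names; real DTD-driven mapping is considerably more involved:

    import java.io.File;
    import java.sql.*;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.*;

    // Hypothetical sketch: each <customer> element becomes one row, each
    // child element one column value. Table and element names are invented.
    public class XmlToRelational {
        public static void main(String[] args) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().parse(new File("customers.xml"));
            NodeList customers = doc.getElementsByTagName("customer");

            // Assumes the customer table already exists in the target database.
            try (Connection c = DriverManager.getConnection("jdbc:h2:mem:demo");
                 PreparedStatement insert = c.prepareStatement(
                         "INSERT INTO customer (name, city) VALUES (?, ?)")) {
                for (int i = 0; i < customers.getLength(); i++) {
                    Element e = (Element) customers.item(i);
                    insert.setString(1, text(e, "name"));
                    insert.setString(2, text(e, "city"));
                    insert.addBatch();
                }
                insert.executeBatch();   // one round trip for all rows
            }
        }

        static String text(Element parent, String tag) {
            return parent.getElementsByTagName(tag).item(0).getTextContent();
        }
    }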
|
Title: |
MANAGING
UNCERTAIN TRAJECTORIES OF MOVING OBJECTS WITH DOMINO |
Author(s): |
Goce
Trajcevski, Ouri Wolfson, Cao Hu, Hai Lin, Fengli Zhang and Naphtali Rishe |
Abstract: |
This work describes the features of the DOMINO (Database fOr
MovINg Objects) system, which brings several novelties to the problem of
managing moving objects databases. Our robust model of a trajectory captures the inherent uncertainty of the moving object's location, which impacts both the semantics of spatio-temporal queries and the algorithms for their processing. In DOMINO, we present a set of
novel operators which capture the spatial, temporal and uncertainty
aspects of a moving object. The operators are implemented as UDFs (User
Defined Functions) on top of existing ORDBMS and can be used for answering
queries and generating notification triggers. DOMINO’s implementation,
in which ORDBMS are coupled with other systems, seamlessly integrates
several technologies: 1. existing electronic maps are used to generate the
trajectory plan on behalf of a mobile user; 2. real-time traffic sources
are used to automatically update the moving object’s trajectories; 3.
powerful (web-browser) GUI enables users to monitor and pose queries about
objects. |
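The uncertain-trajectory model can be pictured with a small hypothetical sketch: location between update points is linearly interpolated, and an uncertainty radius r turns a point query into a "possibly at" query:

    // Hypothetical sketch of a trajectory with uncertainty; not DOMINO's
    // actual operators, which are implemented as UDFs on an ORDBMS.
    public class UncertainTrajectory {
        final double[][] points;   // (t, x, y) samples, sorted by t
        final double r;            // uncertainty threshold

        UncertainTrajectory(double[][] points, double r) {
            this.points = points; this.r = r;
        }

        // Expected location at time t by linear interpolation.
        double[] locationAt(double t) {
            for (int i = 1; i < points.length; i++) {
                if (t <= points[i][0]) {
                    double f = (t - points[i - 1][0]) / (points[i][0] - points[i - 1][0]);
                    return new double[]{
                        points[i - 1][1] + f * (points[i][1] - points[i - 1][1]),
                        points[i - 1][2] + f * (points[i][2] - points[i - 1][2])};
                }
            }
            return new double[]{points[points.length - 1][1], points[points.length - 1][2]};
        }

        // "Possibly at (px, py) at time t": the point lies within the
        // uncertainty disk around the expected location.
        boolean possiblyAt(double t, double px, double py) {
            double[] loc = locationAt(t);
            return Math.hypot(px - loc[0], py - loc[1]) <= r;
        }

        public static void main(String[] args) {
            UncertainTrajectory tr = new UncertainTrajectory(
                new double[][]{{0, 0, 0}, {10, 100, 0}}, 5.0);
            System.out.println(tr.possiblyAt(5, 52, 3));  // true: near (50, 0)
        }
    }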
|
Title: |
AN
INTEGRATED APPROACH FOR FINDING ENROUTE BEST ALTERNATE ROUTE |
Author(s): |
M.
A. Anwar and S. Hameed |
Abstract: |
Finding a good route for traveling has long been a necessity for human beings and is also one of the major problems faced by the transportation industry. The huge and complicated road network of a modern country makes it difficult to find the best route for traveling from one place to another, and in developing countries this problem becomes more complex and complicated due to the small number of inevitable links, road-track-crossing links, etc. The route found by a shortest path algorithm alone may be the shortest one, but it cannot be guaranteed to be the best route, because many irrelevant or unusable road segments may be part of the solution. Moreover, en-route emergencies may render an already decided route unusable, or may require more time than in normal situations. In this paper, we discuss and propose ad hoc database changes to find the best alternate route en route in case of an emergency. We also use knowledge-based techniques. |
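The re-routing step can be sketched as a shortest-path search that respects segments marked unusable by the ad hoc database changes (a hypothetical Java illustration using Dijkstra's algorithm, not the paper's knowledge-based method):

    import java.util.*;

    // Hypothetical sketch: the road network as a weighted graph, with some
    // segments flagged unusable by an emergency; the shortest path is then
    // recomputed over the remaining segments.
    public class AlternateRoute {
        record Edge(int to, double cost, boolean usable) {}

        static double shortest(List<List<Edge>> g, int src, int dst) {
            double[] dist = new double[g.size()];
            Arrays.fill(dist, Double.POSITIVE_INFINITY);
            dist[src] = 0;
            // entries are {distance, node}; stale entries are skipped below
            PriorityQueue<double[]> pq =
                new PriorityQueue<>(Comparator.comparingDouble(e -> e[0]));
            pq.add(new double[]{0, src});
            while (!pq.isEmpty()) {
                double[] cur = pq.poll();
                int u = (int) cur[1];
                if (cur[0] > dist[u]) continue;     // stale entry
                for (Edge e : g.get(u)) {
                    if (!e.usable()) continue;      // segment closed by the emergency
                    if (dist[u] + e.cost() < dist[e.to()]) {
                        dist[e.to()] = dist[u] + e.cost();
                        pq.add(new double[]{dist[e.to()], e.to()});
                    }
                }
            }
            return dist[dst];
        }

        public static void main(String[] args) {
            List<List<Edge>> g = List.of(
                List.of(new Edge(1, 2, true), new Edge(2, 5, true)),  // node 0
                List.of(new Edge(2, 1, false)),                       // node 1: 1->2 closed
                List.of());                                           // node 2
            System.out.println(shortest(g, 0, 2));  // 5.0: detour closed, direct road wins
        }
    }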
|
Title: |
DATA
MODELING FOR THE PURPOSE OF DATABASE DESIGN USING ENTITY-RELATIONSHIP MODEL
AND SEMANTIC ANALYSIS |
Author(s): |
Joseph
Barjis and Samuel Chong |
Abstract: |
The database is the core of most Information Systems. While developing a new information system or analyzing an existing one, the analyst has to deal with the analysis and design of the database as well. In order to design and develop a successful database application, it is very important to apply an appropriate modeling and formalization technique while building a conceptual model. In this paper the authors demonstrate the application of two modeling techniques for the conceptual modeling of a database application. The first one is semantic analysis, which is founded on semiotic principles, and the second is the
Entity-Relationship (ER) model, which is a popular high-level conceptual
data model. For illustration of these techniques in practice, the paper
introduces a ‘Car Dealership’ case study. By way of the case study,
this paper will demonstrate how the semantic analysis and its deliverable
can add value to the ER model. |
|
Title: |
TOOLKIT
FOR QOS MONITORING IN MIDDLEWARE |
Author(s): |
Peter
Bodorik, Shawn Best and Dawn Jutla |
Abstract: |
Problems associated with provisioning of Quality of Service
(QoS) include negotiation and renegotiation of QoS level contracts between
clients and servers, monitoring of services and system parameters,
estimating performance by modeling, storage and management of data
describing the system state, management of resources for QoS, and others.
This paper describes a toolkit, developed for the Java platform, that
facilitates monitoring of middleware components of e-business
applications, particularly when they are accessing DBs. The toolkit
provides for use of classes to measure delays of critical activities to
“probe” the state of the system. The toolkit provides agents that
collect and report data, and agents that initiate probes to obtain data on
the system performance. Also provided is an agent that controls these
monitoring activities. This approach is applicable to any QoS in which
delays of activities need to be measured and which require probing the
system to determine its state. |
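The probing of critical activities can be sketched in a few lines (hypothetical; the toolkit's own agent classes are not reproduced here): a critical activity is wrapped so its elapsed time can be handed to a collecting agent:

    import java.util.concurrent.Callable;

    // Hypothetical sketch of a delay probe around a critical activity.
    public class DelayProbe {
        static <T> T timed(String activity, Callable<T> action) throws Exception {
            long start = System.nanoTime();
            try {
                return action.call();
            } finally {
                long micros = (System.nanoTime() - start) / 1_000;
                // a real toolkit would hand this measurement to a collecting agent
                System.out.println(activity + " took " + micros + " us");
            }
        }

        public static void main(String[] args) throws Exception {
            String result = timed("db-query", () -> {
                Thread.sleep(20);            // stand-in for a database call
                return "42 rows";
            });
            System.out.println(result);
        }
    }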
|
Title: |
WEB
APPLICATION MAKER |
Author(s): |
Miguel
Calejo, Mário Araújo, Sónia Mota Araújo and Nuno Soares |
Abstract: |
Declarativa's Web Application Maker (WAM) is a software
development tool to build and maintain web interface front-ends to
relational database back-ends, using a model-based approach. To specify
interfaces it pragmatically extends a mainstream database application
model: the relational database schema itself. Interface generating
capabilities are available to the application programmer at runtime,
minimizing the traditional conflict between model-based and customized
code. The initial WAM prototype supports Microsoft SQL Server and Active
Server Pages, for Windows and Macintosh browsers, and is being used in
several customer projects. |
|
Title: |
USING
PERSISTENT JAVA TO CONSTRUCT A GIS |
Author(s): |
Mary
Garvey, Mike Jackson and Martin Roberts |
Abstract: |
Object-oriented databases (OODBs) have been portrayed as the solution for complex applications such as Geographical Information Systems (GIS). One problem with current GIS is that they concentrate on spatial data rather than aspatial data; ideally, both should be accessible within one system. This paper discusses the development of a GIS that integrates both environments, using an object-oriented database and persistent programming technology. |
|
Title: |
VIRTUAL
REALITY WEB-BASED ENVIRONMENT FOR WORKCELL PLANNING IN AN AUTOMOTIVE
ASSEMBLY |
Author(s): |
Oleg
Gusikhin, Erica Klampfl, Giuseppe Rossi, Celestine Aguwa, Gene Coffman and
Terry Marinak |
Abstract: |
This paper describes a new distributed, interactive software
system to plan and optimize the layout of workcells in an automotive
assembly line environment. The new system integrates a web-based client
server architecture, a Virtual Reality Modeling Language (VRML) interface,
and mathematical algorithms capable of computing the total time required
to complete a given sequence of tasks within a workcell. The system is
designed to facilitate collaboration between the different functions that
participate in the assembly line planning process. |
|
Title: |
PERSISTENCE
FRAMEWORK FOR MULTIPLE LEGACY DATABASES |
Author(s): |
Sai
Peck Lee and Chin Heong Khor |
Abstract: |
This paper describes the development of an object
persistence framework in the Java language to work with different storage
mechanisms, while concentrating on transparency and reusability aspects.
The persistence framework is made up of reusable and extendable sets of
classes that provide services for persistence objects such as for
translation of objects to records to be saved in a certain type of
relational database and translation of records to objects when retrieving
from the database. It supports storage in relational databases, flat
files, e-mail servers, and the ObjectStore object database. The framework
was found to be successful in providing basic persistence services while
maintaining transparency. |
|
Title: |
INTRODUCING
AN ENTERPRISE RESOURCE PLANNING (ERP) SYSTEM IN A HOSPITAL |
Author(s): |
Steve
C. A. Peters |
Abstract: |
The introduction of integrated systems like ERP systems in service organisations often leads to unforeseen problems. Even when all necessary conditions for good project management are fulfilled, the implementation project runs into problems. After our research with financial services companies, we studied a similar project in a hospital. Based on our findings, we developed a model explaining the reasons for the problems and suggesting another approach, using a multi-layer agent system to support the knowledge-intensive processes. |
|
Title: |
STATE-SENSITIVE
DESIGN OF DATABASE TRANSACTIONS |
Author(s): |
Yoshiyuki
Shinkawa and Masao J. Matsumoto |
Abstract: |
Many of the programs in enterprise information systems are
performed in the form of database transactions. Unlike ordinary programs or modules, which transform input data uniquely, programs in this form do not transform input data uniquely into output data. This non-deterministic property of database transactions causes the program semantics and correctness to be subtle, and makes the design of enterprise information systems difficult. In addition, most enterprise
business processes and operations are composed of those transactions, and
designing such processes and operations is also a hard task because of the
above non-determinism. This paper presents a formal approach to dealing with the non-deterministic property of database transactions from enterprise
information system and business process viewpoints. First we discuss the
environmental characteristics that affect database transactions and
business processes. Next we present a way to deal with concurrent
transaction processing and state transition in an enterprise information
system, which cause the non-determinism. Then we extend the discussion
from single transaction to a complex of partially ordered transactions,
which is referred to as a business process. Lastly, we consider
non-determinism in inter-enterprise business processes which are often
implemented as web-based collaboration systems. |
|
Title: |
DESIGNING
AN OBJECT AND QUERY CACHE MANAGEMENT SYSTEM FOR CODAR DATABASE ADAPTER |
Author(s): |
Zahir
Tari, Abdelkamel Tari and Vincent Dupin |
Abstract: |
CODAR is a CORBA-based adapter designed by the Distributed Object Research Group at RMIT University. It enables transparently making objects persistent across different databases, including relational and object-oriented databases. CODAR is an extension of the OMG’s Portable Object Adapter (POA) to deal with specific aspects of the life cycle of persistent distributed objects. The first version of CODAR (Tari et al., 2002) had all the required core functionalities; however, it failed to provide the performance required by most distributed applications. This paper presents an extension of CODAR that includes an appropriate caching technique so that better performance is obtained. Because CODAR also deals with (SQL) queries, object and query
obtained. Because CODAR also deals with (SQL) queries, object and query
caches were proposed. The former caches generic collections so they can be
re-used in later interactions, whereas the query cache deals with the
eviction of objects based on several parameters (e.g. number of
collections, frequency of access and update, cost of remote retrieval). A
multi-level queue is designed to efficiently deal with the eviction of
objects. |
|
Title: |
MODELING
RELATIONAL DATA BY THE ADJACENCY MODEL |
Author(s): |
Jari
Töyli, Matti Linna and Merja Wanne |
Abstract: |
The World Wide Web contains data that cannot be constrained by a schema. Another source of such data is heterogeneous corporate systems, which are integrated in order to provide better service for the users.
Such data is commonly called semistructured data. Semistructured data has
been under intensive investigation during the last few years. The main
focus of interest has been on the development of new data models and new
query languages. The most widely used data model for representing
semistructured data is a graph-like or tree-like structure. The problem is
to develop a model which could be all-embracing. In order to develop such
a model we have introduced a new model called the Adjacency Model (AM).
Our model is a general model which can be used to represent semistructured
data as well as relational data. |
|
Title: |
THE
MILLENNIUM INFORMATION SYSTEM (MIS) FOR EUROPEAN PUBLIC HEALTH AND
ENVIRONMENT NETWORK (EPHEN) |
Author(s): |
Frank
Wang, Ruby Sharma, Na Helian, Farhi Marir and Yau Jim Yip |
Abstract: |
The European Public Health and Environment Network (EPHEN)
had a pressing need to change the way their work activities were
conducted. The aim of this project is to create a multi-user network
information system to automate the daily activities carried out by the members of EPHEN. An integral part of the system will be the addition
of a personalised internal email system to facilitate the flow of
communication within the group. Also an innovative element will be
integrated into the system to promote health awareness, especially as
EPHEN’s primary concern is to encourage public health in society. |
|
Title: |
AN
ELECTRONIC SCIENTIFIC NOTEBOOK: METADATA AND DATABASE DESIGN FOR
MULTIDISCIPLINARY SCIENTIFIC LABORATORY DATA |
Author(s): |
Laura
Bartolo, Austin Melton, Monica Strah, Cathy Lowe, Louis Feng and Christopher
Woolverton |
Abstract: |
This work in progress defines a user-based approach in the
effective organization and management of data objects generated within a
scientific laboratory from data creation to long-term use. The project
combines a computer science approach of database systems with an
information science approach of metadata formatting to organize and tag
laboratory data. Long-term goals of this project include 1) learning how
to organize and store biotechnology information, from raw data to finished
research papers and electronic presentations, in formats which will
encourage multidisciplinary use of the information; 2) applying the
organizing knowledge gained and tools developed in storing biotechnology
information to the storage of other similar scientific information; 3)
developing an environment in which scientific information from different
disciplines can be made more easily accessible by and meaningful to
multidisciplinary research teams; and 4) constructing electronic
scientific notebooks for the storage, retrieval, and dissemination of
multidisciplinary scientific information. |
|
Title: |
THE
IMPACT OF CHANGE ON IT PROFESSIONALS |
Author(s): |
Matthew
C. F. Lau and Rebecca B. N. Tan |
Abstract: |
This paper presents the results of an online survey carried
out to establish the impact of change on Information Technology (IT)
professionals in Singapore. The online questionnaire survey covered two
major issues - the extent of the impact of change, and management
response. It was found that most of the organizations are in the advanced
stage of IT maturity, with a large majority having client/server
technology implemented in consultation with staff and providing
professional development for them. Though most of the respondents found
their new role more exciting after implementing change, and that their preferred career path is towards a management role with more varied skills, a significant percentage were interested in higher remuneration as well, not ruling out moving to better-paid positions even in a volatile market. These findings are of practical significance for organizations
involved in change management in general and in improving IT change
management policies and strategies in particular, in today’s
ever-changing business environment. |
|
Area 2 - ARTIFICIAL INTELLIGENCE AND DECISION SUPPORT SYSTEMS
Title: |
SEMIQUALITATIVE
REASONING FOR SOFTWARE DEVELOPMENT PROJECT BY CONSTRAINT PROGRAMMING |
Author(s): |
Pedro
J. Abad, Antonio J. Suárez, Sixto Romero and Juan A. Ortega |
Abstract: |
This paper presents a new approach to the problem of human effort estimation in software development projects (SDP). It is a variation on the work presented by the same authors at the Third International Conference on Enterprise Information Systems [Suarez&Abad’01]. The human resources subsystem of Abdel-Hamid’s dynamic system is simulated in a semiqualitative way. In this approach we mix purely qualitative information with quantitative information to offer more precise results than those obtained in the preceding work. We use a CSP (Constraint Satisfaction Problem) to model the human resources subsystem. In this way we generate a program under the constraint-programming paradigm that contains all the restrictions that should be fully satisfied. The results of the simulation give us a quantitative and qualitative idea of the human resources needed in a software project. |
|
Title: |
INSURANCE
MARKET RISK MODELING WITH HIERARCHICAL FUZZY RULE BASED SYSTEMS |
Author(s): |
R.
Alcalá, O. Cordón, F. Herrera and I. Zwir |
Abstract: |
The continued development of large, sophisticated,
repositories of knowledge and information has facilitated the
accessibility to vast amounts of data about complex objects and their
behavior. However, in spite of the recent renewed interest in
knowledge-discovery techniques (or data mining), the usefulness of these
databases is partially limited by the inability to understand the
system-related characteristics of the data. Some applications from the financial or insurance market –such as the ones concerned with risk analysis– require solutions that emphasize precision while helping to understand and validate their structure and relations. We present results from an ongoing project being carried out by the Argentinian State Insurance Agency for tracking the status of insurance companies, i.e., for screening and analyzing their condition through time. Specifically, in this paper we tackle the modeling of the mathematical reserves of the premiums, or risk reserves, of the
of the mathematical reserves of the premiums, or risk reserves, of the
insurance companies in the local insurance market. To do so, we propose
the use of Linguistic Modeling which is one of the most important
applications of Fuzzy Rule-Based Systems. Particularly, we apply
Hierarchical Linguistic Modeling with the aim of obtaining the desired
trade-off between accuracy and interpretability of the system modeled,
i.e., decomposing such nonlinear systems into a number of simpler
linguistically interpretable subproblems. The achieved results will be
also compared with global hierarchical methods and other system modeling
techniques, such as classical regressions and neural networks. |
|
Title: |
NEURAL
NETWORKS AND WAVELETS FOR FACE RECOGNITION |
Author(s): |
Li
Bai and Yihui Liu |
Abstract: |
In this paper we present two novel face recognition methods
based on wavelets and neural networks: one combines wavelets with
eigenfaces, the other uses wavelets only. We also discuss face recognition
methods based on orthogonal basis vectors such as the eigenface and
fisherface methods. Though in different shapes and forms, there is
something common in all the face recognition methods mentioned - they all
involve producing a new set of orthogonal basis vectors to re-represent
face images. We report the results of our extensive experiments on the new
methods. Though there have been many pattern recognition methods based on
wavelets and neural networks, our methods are novel in the sense that they
either combine wavelets and eigenfaces in a novel way, or apply wavelets
on 2D face images represented as 1D signals. Both methods have achieved
better recognition rates than the known methods in the literature. The
experiments are conducted on the ORL face database using a hierarchical
radial basis function neural network classifier. |
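For readers unfamiliar with the wavelet step: a single-level 1D Haar transform, one of the simplest wavelets, splits a signal (e.g. a face image read row by row as a 1D signal) into a low-frequency approximation and high-frequency detail. The sketch below is a generic illustration, not the paper's exact transform:

    // Generic sketch of one level of the 1D Haar wavelet transform.
    public class Haar1D {
        // Input length must be even; returns approximations followed by details.
        static double[] haarStep(double[] signal) {
            int half = signal.length / 2;
            double[] out = new double[signal.length];
            for (int i = 0; i < half; i++) {
                out[i]        = (signal[2 * i] + signal[2 * i + 1]) / Math.sqrt(2); // average
                out[half + i] = (signal[2 * i] - signal[2 * i + 1]) / Math.sqrt(2); // difference
            }
            return out;
        }

        public static void main(String[] args) {
            double[] signal = {9, 7, 3, 5};
            double[] t = haarStep(signal);
            System.out.println(java.util.Arrays.toString(t));
            // approx ~ [11.31, 5.66], detail ~ [1.41, -1.41]
        }
    }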
|
Title: |
SUPPORTING
ENGINEERING DESIGN PROCESS WITH AN INTELLIGENT COMPLIANCE AGENT: A WAY TO ENSURE A STANDARD COMPLIED PROCESS |
Author(s): |
Larry
Y. C. Cheung, Paul W. H. Chung and Ray J. Dawson |
Abstract: |
Current workflow management systems (WfMSs) lack the ability to ensure that a process is planned and performed in accordance with a particular standard. The current best practice for providing reliable systems is to embody the development process in recent industry safety standards and guidelines, such as IEC 61508. These standards are generic; however, every application of them is different because of differences in project details. Our Compliance Flow research project aims to provide support for handling standards-compliant, complex, ad-hoc, dynamically changing, and collaborative engineering design processes. This paper describes the use of an intelligent compliance agent, called Inspector, in Compliance Flow to ensure a standards-compliant process. The standard that the design process is intended to comply with is required to be modelled in advance using the Standard Modelling Language, in order to facilitate the compliance check performed by Inspector. The modelling is performed by means of a software tool in the system called Standard Modeller. Some examples drawing on a draft version of IEC 61508 are used to illustrate the mechanism of the modelling of standards and the compliance check. |
|
Title: |
APPLICABILITY
OF ESTIMATION OF DISTRIBUTION ALGORITHMS TO THE FUZZY RULE LEARNING
PROBLEM: A PRELIMINARY STUDY |
Author(s): |
M.
Julia Flores and José A. Gámez |
Abstract: |
Nowadays, machine learning is one of the most relevant problems in the computational scientific world. It is especially attractive to learn models showing both predictive and descriptive behaviour at the same time. It is also desirable for these models to be able to deal with uncertainty and vagueness, inherent in almost every real-world problem. Fuzzy Linguistic Rule-Based Systems represent one of the models that have all these features. Recently a methodology to learn such systems has been proposed: it treats the problem as a combinatorial optimization task. Several evolutionary algorithms have been used to guide the search, such as ant colony-based algorithms. In this paper, we propose to study the applicability of a family of evolutionary algorithms that has recently appeared: estimation of distribution algorithms. Since this is a first approach, we will focus on the simplest variants of this family, namely those based on univariate models. The experiments that have been carried out show them to be competitive with other evolutionary algorithms, e.g. genetic algorithms, with the advantage of requiring fewer input parameters and using fewer generations in one of the studied cases. |
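A univariate EDA of the kind studied here can be sketched generically (hypothetical Java with a OneMax stand-in fitness; the paper's fuzzy-rule encoding and fitness would replace it): sample a population from per-bit marginal probabilities, select the best individuals, and re-estimate the marginals from them:

    import java.util.*;

    // Generic sketch of a univariate EDA (UMDA) on bit strings.
    public class Umda {
        static final int LEN = 20, POP = 50, SELECT = 25, GENERATIONS = 30;
        static final Random RND = new Random(1);

        public static void main(String[] args) {
            double[] p = new double[LEN];
            Arrays.fill(p, 0.5);                       // initial marginal probabilities

            for (int g = 0; g < GENERATIONS; g++) {
                int[][] pop = new int[POP][LEN];
                Integer[] idx = new Integer[POP];
                for (int i = 0; i < POP; i++) {
                    idx[i] = i;
                    for (int j = 0; j < LEN; j++)
                        pop[i][j] = RND.nextDouble() < p[j] ? 1 : 0;   // sample the model
                }
                Arrays.sort(idx, (a, b) -> fitness(pop[b]) - fitness(pop[a]));

                for (int j = 0; j < LEN; j++) {        // re-estimate marginals from
                    int ones = 0;                      // the best SELECT individuals
                    for (int i = 0; i < SELECT; i++) ones += pop[idx[i]][j];
                    p[j] = (double) ones / SELECT;
                }
            }
            System.out.println(Arrays.toString(p));    // marginals converge toward 1
        }

        // OneMax stand-in fitness: count of set bits.
        static int fitness(int[] x) { return Arrays.stream(x).sum(); }
    }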
|
Title: |
GROUP
DECISION MAKING BASED ON THE LINGUISTIC 2-TUPLE MODEL IN HETEROGENEOUS
CONTEXTS |
Author(s): |
F.
Herrera and L. Martínez |
Abstract: |
Many activities carried out in the enterprise imply Group Decision Making processes. In Group Decision Making it is difficult for all experts to have exact knowledge about the problem. At the beginning, Group Decision Making problems managed uncertainty with real values within a predefined range; soon, interval-valued approaches were proposed, and more recently fuzzy-interval-valued and linguistic approaches have obtained successful results. In this paper, we deal with Group Decision Making problems in which the experts can express their knowledge over the alternatives using different types of information: numerical, interval-valued, fuzzy-interval-valued or linguistic, which is called Heterogeneous Information. The main problem in dealing with heterogeneous information is how to aggregate it. The aim of this contribution is to develop an aggregation method able to combine all the different types of information in the decision process. To do so, we use the linguistic 2-tuple representation model. |
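The 2-tuple representation can be sketched concretely (a hypothetical illustration with an invented label set): a numeric aggregation result beta is converted back to a linguistic label plus a symbolic translation alpha in [-0.5, 0.5), so no precision is lost:

    // Hypothetical sketch of the linguistic 2-tuple model: a value beta in
    // [0, g] (for g + 1 labels) becomes the pair (s_i, alpha) with
    // i = round(beta) and alpha = beta - i.
    public class TwoTuple {
        static final String[] LABELS = {"none", "low", "medium", "high", "perfect"};

        // Delta: numeric aggregation result -> (label index, symbolic translation)
        static double[] delta(double beta) {
            int i = (int) Math.round(beta);
            return new double[]{i, beta - i};
        }

        public static void main(String[] args) {
            // Three experts' assessments already unified onto [0, 4], e.g. from
            // numerical, interval-valued and linguistic inputs.
            double[] unified = {2.0, 3.0, 2.6};
            double beta = (unified[0] + unified[1] + unified[2]) / 3;   // 2.53
            double[] t = delta(beta);
            System.out.printf("(%s, %+.2f)%n", LABELS[(int) t[0]], t[1]);
            // (high, -0.47): between "medium" and "high", closer to "high"
        }
    }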
|
Title: |
USING
ARTIFICIAL NEURAL NETWORKS TO PROVE HYPOTHETIC CAUSE-AND-EFFECT RELATIONS:
A METAMODEL-BASED APPROACH TO SUPPORT STRATEGIC DECISIONS |
Author(s): |
Christian
Hillbrand and Dimitris Karagiannis |
Abstract: |
Decision models which are based on recent management
approaches often integrate cause-and-effect relations in order to identify
critical operational measures for a strategic goal. Designers of Decision
or Executive Support Systems implementing such a model face the problem
that many of the supporting indicators are of non-financial nature (e.g.:
customer satisfaction, efficiency of certain business processes, etc.) and
cannot be easily quantified as a consequence. Since
fuzzy-logic-applications provide numerous specific approaches in this
area, our interest focuses on another issue which arises in this context:
Due to this lack of numeric assessability of many lag indicators, the
interdependencies between those figures cannot be formally described like
between financial ratios. In this work, we propose an approach to overcome
some shortcomings of many DSS/ESS which force their users to make unproven
assumptions about existing interrelations: Because the accuracy of these
hypotheses is one of the key quality issues of a decision model we provide
a framework to evaluate and prove hypothetic cause-and-effect relations by
the use of Artificial Neural Networks. |
|
Title: |
SUPPORTING
THE OPTIMISATION OF DISTRIBUTED DATA MINING BY PREDICTING APPLICATION RUN
TIMES |
Author(s): |
Shonali
Krishnaswamy, Seng Wai Loke and Arkady Zaslavsky |
Abstract: |
There is an emerging interest in optimisation strategies for
distributed data mining in order to improve response time. Optimisation
techniques operate by first identifying factors that affect the
performance in distributed data mining, computing/assigning a “cost”
to those factors for alternate scenarios or strategies and then choosing a
strategy that involves the least cost. In this paper we propose the use of application run-time estimation as a solution for estimating the cost of performing a data mining task in different distributed locations. A priori
knowledge of the response time provides a sound basis for optimisation
strategies, particularly if there are accurate techniques to obtain such
knowledge. In this paper we present a novel rough sets based technique for
predicting the run times of applications. We also present experimental
validation of the prediction accuracy of this technique for estimating the
run times of data mining tasks. |
|
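To make the cost-based optimisation concrete, here is a deliberately naive analogue in Python: the run time at each candidate site is estimated from historical tasks with similar characteristics, and the cheapest site wins. This is not the paper's rough-sets predictor; the sites, task features and the 25% similarity window are invented for illustration.

```python
# Naive analogue of run-time-based optimisation for distributed data mining.
# NOT the paper's rough-sets technique: run time at each site is estimated
# as the mean of historical tasks with matching characteristics.
history = {
    # site -> list of (dataset_rows, algorithm, observed_seconds)
    "siteA": [(10_000, "apriori", 42.0), (12_000, "apriori", 55.0)],
    "siteB": [(10_000, "apriori", 30.0), (11_000, "apriori", 33.0)],
}

def estimate(site, rows, algorithm):
    """Estimate run time by averaging similar past tasks (within 25% rows)."""
    similar = [t for r, a, t in history[site]
               if a == algorithm and abs(r - rows) / rows <= 0.25]
    return sum(similar) / len(similar) if similar else float("inf")

def best_site(rows, algorithm):
    """Choose the site with the least estimated cost (run time)."""
    return min(history, key=lambda s: estimate(s, rows, algorithm))

print(best_site(11_000, "apriori"))   # -> "siteB"
```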
Title: |
STRATEGIC
POSITION OF FIRMS IN TERMS OF CLIENT’S NEEDS USING LINGUISTIC AND
NUMERICAL INFORMATION THROUGH A NEW MODEL OF SOFM |
Author(s): |
Raquel
Flórez López |
Abstract: |
The analysis of the strategic position of firms working in a
specific market is very useful to understand the strengths and weakness of
each company and to develop successful competitive positions [Porter,
1986]. In that way, there are many variables that influent the relative
situation of companies, more of them expressed in linguistic terms (‘strong’,
‘weak’, ‘leadership’, etc). Even when classical statistical
techniques, like Principal Component Analysis or Factorial Analysis, are
very robust in mathematical terms, they do not allow integrating this sort
of ‘fuzzy’ information in the model, reducing its efficiency.
Additionally, these methods consider very restrictive initial hypotheses
that used not to be fulfilled by data, not obtaining a global map over the
final situation of enterprises but partial representations based on
general combinations of them (factors). The employment of the Fuzzy Sets
Theory and specially the 2-tuple fuzzy linguistic method to combine both
numerical and linguistic information, together to the Artificial Neural
Net known as Self Organizing Feature Map [Kohonen, 1990] permits to
improve the whole positioning, obtaining an only final map that considers
all disposable data in an efficient way and lets observe the relative
distance among firms. |
|
Title: |
A
CASE-BASED EXPERT SYSTEM FOR ESTIMATING THE COST OF REFURBISHING
CONSTRUCTION BUILDINGS |
Author(s): |
Farhi
Marir, Frank Wang and Karim Ouazzane |
Abstract: |
CBRefurb is a case-based reasoning (CBR) system for strategic cost estimation
in building refurbishment. This domain is characterised by many uncertainties
and variations, and its cost estimation involves a large number of
interrelated factors whose impact is difficult to assess. This paper reports
on the problems faced by the Building Cost Information Service (BCIS)
databases and by several rule-based expert systems in tackling this complex
cost estimation problem, and on the design and evaluation of the CBRefurb
system implemented using the ReMind shell. CBRefurb imitates the domain
expert's approach of breaking down the whole building work into smaller pieces
of work (building items) by organising refurbishment cases in a hierarchical
structure composed of cases and subcases. The estimation process imitates the
expert by considering only those pieces of previous cases with a similar
situation (or context). For this purpose, CBRefurb defines some features of
the building and its components (or items) as global and local context
information, used to classify cases and subcases into context cases and
subcases and to decompose the cost estimation problem into adaptable
subproblems. This is complemented by two indexing schemes that suit the
hierarchical structure of the cases and the problem decomposition and allow
the classification and retrieval of contextual cases. These features serve the
aim of the project, namely allowing multiple retrieval of appropriate pieces
of refurbishment that are easier to adapt, reflecting the expert's method of
estimating the cost of complex refurbishment work. |
|
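As a toy illustration of the retrieval step in any such CBR estimator, the sketch below finds the stored case nearest to a query context and reuses its cost; the features, weights and flat (non-hierarchical) case base are invented, unlike CBRefurb's context-indexed hierarchy of cases and subcases.

```python
# Bare-bones CBR retrieval for cost estimation: pick the nearest past case.
# Features, weights and costs are invented placeholders.
cases = [
    {"age": 60, "floors": 3, "use": "office", "cost": 410_000},
    {"age": 25, "floors": 2, "use": "school", "cost": 150_000},
    {"age": 55, "floors": 4, "use": "office", "cost": 520_000},
]
WEIGHTS = {"age": 1.0, "floors": 5.0, "use": 20.0}

def distance(query, case):
    """Weighted distance between a query context and a stored case."""
    d = WEIGHTS["age"] * abs(query["age"] - case["age"])
    d += WEIGHTS["floors"] * abs(query["floors"] - case["floors"])
    d += WEIGHTS["use"] * (query["use"] != case["use"])   # categorical match
    return d

query = {"age": 58, "floors": 3, "use": "office"}
best = min(cases, key=lambda c: distance(query, c))
print("estimated cost from nearest case:", best["cost"])
```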
Title: |
DATA
MINING MECHANISMS IN KNOWLEDGE MANAGEMENT SYSTEM |
Author(s): |
I-Heng
Meng, Wei-Pang Yang, Wen-Chih Chen and Lu-Ping Chang |
Abstract: |
Data Mining and Knowledge Management have been hot topics in business and
academic domains in recent years. Data Mining means discovering interesting
knowledge and patterns from large amounts of data. There are different models
in Data Mining: association rules, sequential patterns, classification,
clustering, outlier mining, and collaborative filtering. In this paper, data
mining mechanisms are applied to a knowledge management system, resulting in a
better knowledge environment. An intelligent search engine, collaborative
prediction, a virtual bookshelf and a knowledge map are implemented using data
mining mechanisms. |
|
Title: |
CONTROLLING
AND TESTING A SPACE INSTRUMENT BY AN AI PLANNER |
Author(s): |
MD.
R-Moreno, M. Prieto, D. Meziat, J. Medina and C. Martin |
Abstract: |
The PESCA instrument has been designed and built for the purpose of studying
Solar Energetic Particles and Anomalous Cosmic Rays. It will be part of the
Russian PHOTON satellite payload that is scheduled to be launched in December
2002. The instrument comprises two blocks: the PESCA Instrument Amplification
and Shaping Electronics (PIASE), for amplification and analog-to-digital
conversion, and the PESCA Instrument Control and Acquisition System (PICAS),
which controls the whole instrument and manages communication with the
satellite. Electrical Ground Support Equipment (EGSE) software has been
implemented using AI planning techniques to control and test the PESCA
instrument and the communication process with the satellite. The tool allows
complete and autonomous control, verification, validation and calibration of
the PESCA instrument. |
|
Title: |
A
TRAINING ENVIRONMENT FOR AUTOMATED SALES AGENTS TO LEARN NEGOTIATION
STRATEGIES |
Author(s): |
Jim
R. Oliver |
Abstract: |
Automated negotiation by artificial adaptive agents (AAAs)
holds great promise for electronic commerce, but many practical issues
remain. Consider the case of a vendor that wishes to deploy a system of
AAAs for negotiating with customers, which could be either human or
machine. One disadvantage of earlier systems is that the agent learning
environment requires complete information about both sides involved in the
negotiation, but a vendor will not have such private information about
each customer’s preferences and negotiating strategies. We propose a
computerized training environment that minimizes the information
requirements about the opposing side. In our approach, customers are
grouped into market segments. General characteristics of the segment are
inputs to a simulation of multiple customers. The vendor’s agents learn
general negotiation strategies for customers in each segment under the
direction of a genetic algorithm. We describe a general system
architecture, develop a prototype, and report on a set of experiments. The
results provide preliminary evidence that this is a promising approach to
training AAAs. |
|
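A hedged sketch of the training loop described above: a genetic algorithm evolves simple two-parameter vendor strategies against customers sampled from a market segment. The strategy encoding (opening price, maximum concession), the profit simulation and all numbers are stand-ins, not the paper's design.

```python
# Toy GA training of negotiation strategies against a simulated segment.
import random

def simulate_profit(strategy, segment):
    """Stand-in for one negotiation with a sampled customer of the segment."""
    opening, concession = strategy
    reserve = random.gauss(segment["mean_reserve"], segment["spread"])
    price = max(reserve, opening - concession)    # crude deal outcome
    return price if price <= opening else 0.0     # no deal above opening bid

def fitness(strategy, segment, rounds=20):
    return sum(simulate_profit(strategy, segment) for _ in range(rounds))

def evolve(segment, pop_size=30, generations=40):
    pop = [(random.uniform(50, 100), random.uniform(0, 30))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda s: -fitness(s, segment))
        parents = pop[:pop_size // 2]              # truncation selection
        children = [(random.choice(parents)[0] + random.gauss(0, 2),
                     random.choice(parents)[1] + random.gauss(0, 2))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children                   # crossover + mutation
    return max(pop, key=lambda s: fitness(s, segment, rounds=100))

print(evolve({"mean_reserve": 70.0, "spread": 8.0}))
```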
Title: |
A
DENSITY-BASED APPROACH FOR CLUSTERING SPATIAL DATABASE |
Author(s): |
Abdel
Badee Salem, Taha ElAreef, Marwa F. Khater and Aboul Ella Hassanien |
Abstract: |
Many applications require the management of spatial data. Clustering large
spatial databases is an important problem, which tries to find the densely
populated regions in the feature space to be used in data mining, knowledge
discovery, or efficient information retrieval. In this paper, we present a
clustering algorithm based on a density-based approach that has proven its
ability to process very large spatial data sets. The density-based approach
requires only one input parameter and supports the user in determining an
appropriate value for it. The applied algorithm is designed to discover
clusters of arbitrary shape as well as noise. We evaluated the algorithm using
a sample of 452 points representing the latitude, longitude, depth and
magnitude of earthquakes. The algorithm works for k-dimensional data; we tried
2-, 3- and 4-dimensional data sets. Our objective was to cluster these data
points to study earthquake behaviour in each cluster. |
|
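For concreteness, a compact density-based clustering sketch in the DBSCAN style follows; the abstract does not name the exact algorithm, so the neighbourhood radius eps and the min_pts threshold here are assumptions (the paper reports a method with a single input parameter). The code works for k-dimensional point tuples.

```python
# DBSCAN-style density clustering sketch (O(n^2), fine for small samples).
import math

def region_query(points, i, eps):
    """Indices of all points within distance eps of points[i]."""
    return [j for j, q in enumerate(points)
            if math.dist(points[i], q) <= eps]

def dbscan(points, eps, min_pts):
    """Label each point with a cluster id >= 0, or -1 for noise."""
    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        neighbours = region_query(points, i, eps)
        if len(neighbours) < min_pts:
            labels[i] = -1                    # noise (may be claimed later)
            continue
        cluster += 1
        labels[i] = cluster
        seeds = list(neighbours)
        while seeds:                          # grow the cluster outwards
            j = seeds.pop()
            if labels[j] is None:
                if len(region_query(points, j, eps)) >= min_pts:
                    seeds.extend(region_query(points, j, eps))
                labels[j] = cluster
            elif labels[j] == -1:             # border point, claim it
                labels[j] = cluster
    return labels

pts = [(0.0, 0.0), (0.1, 0.1), (0.2, 0.0), (5.0, 5.0), (5.1, 5.0), (9.0, 0.0)]
print(dbscan(pts, eps=0.5, min_pts=2))   # two clusters and one noise point
```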
Title: |
SIMPLE
DECISION SUPPORT SYSTEM BASED ON FUZZY REPERTORY TABLE |
Author(s): |
J.J.
Castro-Schez, L. Jimenez, J. Moreno and L. Rodriguez |
Abstract: |
This paper shows how the fuzzy repertory table technique (Castro et al., 2001)
can be used as a simple decision support system for helping the manager of a
company faced with a choice in which the options are clear (for instance, the
choice of a supplier from among all existing suppliers, or the choice of which
product to sell from among all existing possibilities). The manager must
analyse each option using his knowledge, with the aim of highlighting its
characteristic qualities, which are admirable in themselves or useful for the
purpose at hand, as well as its defective qualities. Next, the manager chooses
the most advantageous option according to this information. When the possible
options are clear, the analysis implies making comparisons among the several
options; thus, the manager finds out the characteristic and defective
qualities associated with each option. With the method suggested in this
paper, we identify the relevant information (characteristic and defective
qualities) associated with each option and recommend one option according to
this information. |
|
Title: |
A
MULTI-CRITERIA DECISION AID AGENT APPLIED TO THE SELECTION OF THE BEST
RECEIVER IN A TRANSPLANT |
Author(s): |
Aïda
Valls, Antonio Moreno and David Sánchez |
Abstract: |
In this paper we describe an agent that applies a new
multi-criteria decision methodology to analyse and rank a list of possible
receivers for a particular organ. The ranking obtained is of great help
for the Hospital Transplant Co-ordinator who has to make the final
decision of which patient receives the organ. The agent that we have
designed and implemented can be used in any other similar problem in which
we have a list of alternatives that are evaluated with several qualitative
preference criteria. |
|
Title: |
NEURAL
NETWORKS FOR B2C E-COMMERCE ANALYSIS: SOME ELEMENTS OF BEST PRACTICE |
Author(s): |
Alfredo
Vellido |
Abstract: |
The proliferation of Business-to-Consumer (B2C) Internet
companies that characterised the late ‘90s seems now under threat. A
focus on customers’ needs and expectations seems more justified than
ever and, with it, the quantitative analysis of customer behavioural data.
Neural networks have been proposed as a leading methodology for data
mining. They can be especially useful for dealing with the vast amount of
information usually generated in the Internet context. In this brief
paper, a few guidelines for the application of neural networks to the
analysis of the on-line customer market are proposed. |
|
Title: |
PROOF
RUNNING TWO STATE-OF-THE-ART PATTERN RECOGNITION TECHNIQUES IN THE FIELD
OF DIRECT MARKETING |
Author(s): |
Stijn
Viaene, Bart Baesens, Guido Dedene, Jan Vanthienen and Dirk Van den Poel |
Abstract: |
In this paper, we synthesize the main findings of three
repeat purchase modelling case studies using real-life direct marketing
data. Historically, direct marketing — more recently, targeted web
marketing — has been one of the most popular domains for the exploration
of the feasibility and the viable use of novel business intelligence
techniques. Many a data mining technique has been field tested in the
direct marketing domain. This can be explained by the (relatively)
low-cost availability of recency, frequency, monetary (RFM) and several
other customer relationship data, the (relatively) well-developed
understanding of the task and the domain, the clearly identifiable costs
and benefits, and because the results can often be readily applied to
obtain a high return on investment. The purchase incidence modelling cases
reported on in this paper were in the first place undertaken to trial run
state-of-the-art supervised Bayesian learning multilayer perceptron (MLP)
and least squares support vector machine (LS-SVM) classifiers. For each of
the cases, we also aimed at exploring the explanatory power (relevance) of
the available RFM and other customer relationship related variable
operationalizations for predicting purchase incidence in the context of
direct marketing. |
|
Title: |
MEDICAL
DATA BASE EXPLORATION THROUGH ARTIFICIAL NEURAL NETWORKS |
Author(s): |
Lucimar
F. de Carvalho, Candice Abella S. Dani, Hugo T. de Carvalho, Diego Dozza,
Silvia M. Nassar and Fernando M. de Azevedo |
Abstract: |
The objective of this work is the consideration and implementation of some
basic premises used in the learning process of Artificial Neural Networks
(ANNs). Initially, the network is trained with a competitive learning
algorithm using the Kohonen Self-Organizing Map, and the result is then
compared with the ActiveX Neusciences simulator. The domain chosen for
implementing the learning algorithms was the clinical diagnosis of convulsive
crises, based on the classification of the International League Against
Epilepsy, ILAE/81 (COMMISSION, 1981). According to the results of the
simulator on the network's training base, the network showed satisfactory
performance in 77.7% of the neurons used in pattern classification; only 22.3%
of the network's neurons did not obtain a high convergence index. Through the
implementation of the standard Kohonen algorithm using the 2x2 configuration,
in other words four output neurons, the network obtained a classification rate
of 100% on the test set. |
|
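The competitive learning scheme this abstract refers to can be sketched as follows for a tiny 2x2 Kohonen map (four output neurons, matching the configuration reported); the input dimensionality and training patterns are placeholders, not the clinical data used by the authors.

```python
# Sketch of Kohonen competitive learning on a 2x2 self-organizing map.
import numpy as np

rng = np.random.default_rng(0)
n_features, grid = 6, (2, 2)                  # 2x2 map -> 4 output neurons
weights = rng.random((grid[0] * grid[1], n_features))
coords = np.array([(r, c) for r in range(grid[0]) for c in range(grid[1])])

def train(patterns, epochs=50, lr0=0.5, radius0=1.0):
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)        # decaying learning rate
        radius = radius0 * (1 - epoch / epochs)
        for x in patterns:
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))   # winner
            # neighbourhood function: update the winner and nearby neurons
            dist = np.linalg.norm(coords - coords[bmu], axis=1)
            h = np.exp(-(dist ** 2) / (2 * (radius + 1e-9) ** 2))
            weights[:] += lr * h[:, None] * (x - weights)

patterns = rng.random((40, n_features))        # stand-in training set
train(patterns)
bmu_of_first = np.argmin(((weights - patterns[0]) ** 2).sum(axis=1))
print("pattern 0 maps to neuron", bmu_of_first)
```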
Title: |
EVALUATING
EMS VALUE - THE CASE OF A SMALL ACCOUNTANCY FIRM |
Author(s): |
Carlos
J. Costa and Pedro Antunes |
Abstract: |
This paper discusses the evaluation of Electronic Meeting
Systems (EMS). More specifically, it tackles the problem of evaluating the
perceived organizational value of these systems. EMS constitute a research
subarea at the crossing of Computer Supported Cooperative Work (CSCW) and
Group Support Systems (GSS) in particular, and information systems in general.
Based on these multiple perspectives, we developed an evaluation grid for
EMS. The evaluation grid identifies several EMS components as well as
different levels of organizational impact. Our hypothesis is that with
this grid it is possible to analyse and evaluate the organisational, group
and individual impact of EMS. The paper also presents an application of
the grid to a real organization: an accountancy firm. |
|
Title: |
USING
CELLULAR AUTOMATA IN TRAFFIC MODELING |
Author(s): |
Monica
Dascalu, Sergiu Goschin and Eduard Franti |
Abstract: |
The paper presents a traffic simulator intended to be used in Bucharest,
Romania, in order to solve common traffic problems and obtain better traffic
management performance with the same basic route network. The simulator makes
short-term traffic predictions starting from data extracted from real traffic.
Usually, traffic predictors use statistical methods instead of simulation
techniques. The advantage of a high-performance simulation over statistical
prediction comes mainly from its ability to handle atypical situations,
exactly the ones that need a precise prediction. The traffic simulator is
based on the cellular automata model, a very simple and regular massively
parallel model, which is able to perform real-time computations in the complex
situations that traffic simulation implies. The cellular automata simulator
has been adapted to the topology given by the Bucharest city centre map, and
its performance was tested in various real situations. The simulation proved
very effective in cases such as intersections of two-lane streets and
narrowing due to accidents or street repairs. |
|
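A classic single-lane cellular automaton in the Nagel-Schreckenberg style illustrates the kind of update rules such a simulator computes in parallel; the Bucharest simulator is of course far richer, and the road length, speed limit and dawdling probability below are arbitrary.

```python
# Single-lane traffic cellular automaton (Nagel-Schreckenberg style).
import random

L, VMAX, P_SLOW = 100, 5, 0.3   # road cells, speed limit, dawdling probability
# road[i] = speed of the car occupying cell i, or None if the cell is empty
road = [random.randint(0, VMAX) if random.random() < 0.2 else None
        for _ in range(L)]

def gap_ahead(road, i):
    """Number of empty cells in front of the car at cell i (circular road)."""
    for d in range(1, len(road)):
        if road[(i + d) % len(road)] is not None:
            return d - 1
    return len(road) - 1          # no other car on the road

def step(road):
    """One parallel update: accelerate, brake for the gap, dawdle, move."""
    new = [None] * len(road)
    for i, v in enumerate(road):
        if v is None:
            continue
        v = min(v + 1, VMAX, gap_ahead(road, i))    # accelerate / keep distance
        if v > 0 and random.random() < P_SLOW:      # random slowdown
            v -= 1
        new[(i + v) % len(road)] = v
    return new

for _ in range(10):
    road = step(road)
cars = [v for v in road if v is not None]
print("mean speed:", sum(cars) / max(1, len(cars)))
```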
Title: |
THE
AEX METHOD AND ITS INSTRUMENTATION |
Author(s): |
Sabine
Delaitre, Alain Giboin and Sabine Moisan |
Abstract: |
We aim at elaborating a decision support system to manage concrete experience,
using Artificial Intelligence methods such as Case-Based Reasoning. We target
any organization that wishes to capture and exploit its employees’ experience.
This paper focuses on a key point: the method to obtain the system memory. We
present AEX, an experience feedback method that we developed and instrumented
for risk managers, to help them share their experience and to support their
critical tasks (e.g., intervention). The method is centred on a corporate
memory, regarded as an active organizational memory (Sorli, 1999) which
favours organizational learning for individuals and groups as in (VanHeijst,
1996). AEX aims at capturing and re-using the experience from a specific risk
management activity of an organization in order to learn lessons and improve
this activity (Lagadec, 1997), (Greenlee, 1998). Like (Xanthopoulos, 1994) and
(Avesani, 1993), our method focuses on the intervention part of risk
management, during which people (called managers in the following) have to
decide on the actions to undertake; however, it reuses the experience itself,
which is regarded as a new way to assist emergency management (Huet, 1999).
The elaboration of AEX was based on the analysis and modeling of the risk
managers’ real activity (especially their decision-making and knowledge
management processes), and its instrumentation resulted in a computer tool
based on the corporate memory. The paper reviews the AEX method, illustrates
and discusses its use through a scenario related to forest fire fighting
management, describes how the method was instrumented, focusing on the
feasibility of the instrumentation, and presents perspectives on the future of
the method and its instrumentation. |
|
Title: |
IMPROVING
ACCESS TO MULTILINGUAL ENTERPRISE INFORMATION SYSTEMS WITH USER MODELLING
CONTEXT ENRICHED CROSS-LANGUAGE IR |
Author(s): |
Alberto
Díaz, Pablo Gervás and Antonio García |
Abstract: |
The enterprise systems for the processing and retrieval of
textual information are usually based on the techniques used for the
Internet search engines, incorporating natural language techniques, graph
theory, as well as traditional information retrieval instruments. This
paper presents a simultaneous integration of two additional processes into
this task in order to improve the performance of this type of system:
user modeling and cross language information retrieval. The different
situations that can appear in a multilingual framework are presented, as
well as the techniques and resources to be applied to them. We describe
the user modeling task and defend its usefulness for a multilingual
information system through the presentation of a prototype that has been
applied to the electronic newspaper domain. |
|
Title: |
HIGH
DIMENSIONAL DATA CLUSTERING USING SOFM AND K-MEANS ALGORITHMS |
Author(s): |
Tarek
F. Gharib, Mostafa G. Mostafa and Mohammed F. Tolba |
Abstract: |
In this paper we present an algorithm that clusters multidimensional data in
two stages. In the first stage, the algorithm uses a self-organizing feature
map (SOFM) neural network to find prototypes of the class centroids. Then, in
the second stage, a partitive k-means clustering algorithm is used to cluster
the original data using the centroids obtained from the SOFM. The most
important benefit of this procedure is that the computational load decreases
considerably, making it possible to cluster large data sets and to consider
several different preprocessing strategies in a limited time. We present a
comparison between the performance of direct clustering of the data using the
k-means algorithm and clustering of the same data using the proposed
algorithm. The results show that using the SOFM as a preprocessing step for
k-means greatly increases accuracy and decreases computation. We also show the
effect of using different distance metrics on clustering accuracy. |
|
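The two-stage procedure can be sketched as follows; the SOM stage below is stripped to pure competitive learning (no neighbourhood function), and the synthetic data, prototype count and cluster count are assumptions for illustration only.

```python
# Two-stage clustering sketch: learn prototypes first, then k-means on them.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
data = np.vstack([rng.normal(m, 0.3, (200, 4)) for m in (0.0, 2.0, 4.0)])

# Stage 1: a bare-bones SOM (competitive learning only, no map topology)
n_protos = 12
protos = data[rng.choice(len(data), n_protos, replace=False)].copy()
for epoch in range(20):
    lr = 0.5 * (1 - epoch / 20)                   # decaying learning rate
    for x in data:
        w = np.argmin(((protos - x) ** 2).sum(axis=1))   # winning prototype
        protos[w] += lr * (x - protos[w])

# Stage 2: k-means over the handful of prototypes instead of all 600 points
km = KMeans(n_clusters=3, n_init=1).fit(protos)
labels = km.predict(data)          # map the original points to final clusters
print(np.bincount(labels))
```

The design point the paper makes is visible here: the expensive stage (k-means iterations) runs over 12 prototypes rather than 600 raw points.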
Title: |
NATURAL
LANGUAGE INTERFACE TO KNOWLEDGE MANAGEMENT SYSTEMS |
Author(s): |
Melanie
Gnasa and Jens Woch |
Abstract: |
In this paper, we present linguistic techniques required for
natural language driven knowledge management interfaces. We describe two
significant aspects of such an interface: First, how the user input is
handled to provide an unrestricted natural language user interface, and
second, how the gathered knowledge should be preprocessed, classified and
thus prepared for a natural language interactive retrieval. A framework of
grammatical structures (supertags) is associated with the elements of an
ontology (of the respective domain). This combination of ontologies and
supertagging provides a novel and very robust way of parsing
nontrivial user utterances and allows natural language feedback
generation. |
|
Title: |
LEARNING
TO TEACH DATABASE DESIGN BY TRIAL AND ERROR |
Author(s): |
Ana
Iglesias, Paloma Martínez, Dolores Cuadra, Elena Castro and Fernando
Fernández |
Abstract: |
The definition of effective pedagogical strategies for coaching and tutoring
students according to their needs at each moment is a major difficulty in ITS
design. In this paper we propose the use of a Reinforcement Learning (RL)
model that allows the system to learn how to teach each student individually,
based only on experience acquired with other learners with similar
characteristics, as a human tutor does. This technique avoids having to define
teaching strategies by hand, instead learning action policies that define
what, when and how to teach. The model is applied to a database design ITS,
used as an example to illustrate all the concepts managed in the model. |
|
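As an illustration of the RL idea (the abstract does not fix a particular algorithm), a tabular Q-learning sketch follows, with invented student states, pedagogical actions and reward signal.

```python
# Tabular Q-learning over pedagogical actions; all names are placeholders.
import random
from collections import defaultdict

ACTIONS = ["show_example", "give_exercise", "explain_theory"]
Q = defaultdict(float)                  # Q[(state, action)] -> value
alpha, gamma, eps = 0.1, 0.9, 0.2

def choose(state):
    """Epsilon-greedy selection of the next teaching action."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Standard one-step Q-learning backup."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next
                                   - Q[(state, action)])

# One simulated tutoring interaction (reward = test-score improvement):
s = ("ER_diagrams", "novice")
a = choose(s)
update(s, a, reward=0.7, next_state=("ER_diagrams", "intermediate"))
```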
Title: |
KNOWLEDGE
MANAGEMENT IN MANUFACTURING TECHNOLOGY: AN A.I. APPLICATION IN THE INDUSTRY |
Author(s): |
Michael
S.M. and Deepak Khemani |
Abstract: |
Traditional manufacturing plants rely on an engineering
department, which acts as an interface between the R&D experts, and
the shop floor managers to ensure that the best engineering solutions are
available for problems encountered in the shop floor. This paper focuses
on enhancing the effectiveness of the engineering department by the use of
knowledge management and information technology. This paper discusses the
processes introduced to facilitate knowledge management. This paper also
discusses the use of one discipline of artificial intelligence, case based
reasoning, in providing an information technology solution where domain
knowledge is weak and tends to be lost when experts leave the plant. |
|
Title: |
AUGMENTED
DATA MINING OVER CLINICAL DATABASES USING LEARNING CLASSIFIER SYSTEMS |
Author(s): |
Manuel
Filipe Santos, José Neves, António Abelha, Álvaro M. Silva and Fernando
Rua |
Abstract: |
The scheduling of Intensive Care Unit (ICU) resources is an
important issue in any health center and particularly at the ICU of the
Santo António’s Hospital (located in OPorto, the biggest Portuguese
city north of Lisbon), due the limitations of equipment and the extents of
the waiting lists for surgeries. This motivated the construction of a
Medical Decision Support System (MDSS) for simulation of predictive
scenarios and automatic configuration of the ICU equipment, based on
Knowledge Discovery from Data (KDD) and Data Mining (DM) techniques. A
Learning Classifier System (LCS) was applied for DM proposes, to predict
the length of stay (short or prolonged) and the outcome of patients with a
particular diagnostic. The obtained model was induced using 18 input
variables and a database of 487 patients, attaining a maximal accuracy of
72%. The interpretation of this model revealed to be very accessible and
it is programmed its deployment in the ICU. In this paper it will be
introduced the knowledge discovery overall process, the preliminary
results so for obtained and, finally, will be pointed out some scenarios
for future work and drawn some critics on the work that has been carried
out so far. |
|
Title: |
USING
MULTI-AGENT SYSTEM FOR DYNAMIC JOB SHOP SCHEDULING |
Author(s): |
Min-Jung
Yoo and Jean-Pierre Müller |
Abstract: |
Today’s industries need more flexible scheduling systems, able to produce new
valid schedules in response to modifications concerning orders, production
processes and deliveries of materials. This paper introduces a multi-agent
system applied to a dynamic job shop scheduling problem in which new
production orders or deliveries arrive continuously and affect the already
scheduled plan. We have solved the problem by: i) coupling reactive and
pro-active agent behaviour; and ii) implementing a stochastic method,
simulated annealing, into the agents’ behaviour. The job shop scheduling
system is implemented using various types of agents whose interactions make
the global state of the system move from one solution to another by
continuously adapting to changes in the environment. In this perspective, the
interactions between the agents representing the client job orders, the
production centres and the material stocks result in the assignment of
operations and the plan for stock movements. Our experimental results show
that, by modifying the classical agent-based message scheme, the integration
of a stochastic approach with multi-agent technology can improve dynamic
scheduling for small to medium-sized problem spaces. |
|
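The stochastic ingredient, simulated annealing, reduces to a short acceptance loop; the toy cost function (weighted completion time of a job sequence) and the swap neighbourhood below are stand-ins for a real job shop schedule evaluation.

```python
# Minimal simulated-annealing loop over candidate schedules.
import math, random

def anneal(schedule, cost, neighbour, t0=10.0, cooling=0.95, steps=500):
    """Accept worse schedules with probability exp(-delta/T), T decreasing."""
    current, t = schedule, t0
    for _ in range(steps):
        candidate = neighbour(current)
        delta = cost(candidate) - cost(current)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = candidate
        t *= cooling
    return current

# Toy problem: order jobs to minimise total weighted completion time.
jobs = [(3, 2.0), (1, 5.0), (4, 1.0), (2, 3.0)]       # (duration, weight)
cost = lambda order: sum(w * sum(d for d, _ in order[:i + 1])
                         for i, (d, w) in enumerate(order))

def swap_two(order):
    """Neighbour move: swap two randomly chosen jobs."""
    i, j = random.sample(range(len(order)), 2)
    order = list(order)
    order[i], order[j] = order[j], order[i]
    return order

print(anneal(jobs, cost, swap_two))
```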
Title: |
THE
APPLICATION OF ARTIFICIAL NEURAL NETWORKS FOR HEAT ENERGY USE PREDICTION |
Author(s): |
Leszek
Kieltyka and Robert Kucêba |
Abstract: |
This article presents a method of predicting heat network use based on the
Intelligent System of Prediction (ISP). The ISP uses neural networks generated
by the BrainMaker and Neuronix software. The measured effect of the applied
methods is the prediction results, as well as the accuracy of forecasts of
heat network use from a regional perspective. |
|
Title: |
KNOWLEDGE-BASED
IMAGE UNDERSTANDING: A RULE-BASED PRODUCTION SYSTEM FOR X-RAY SEGMENTATION |
Author(s): |
Linying
Su, Bernadette Sharp and Claude Chibelushi |
Abstract: |
Image Understanding (IU) concerns the issue of how to interpret images.
Knowledge-based image understanding studies the theory and techniques of
computational image understanding that use explicit, independent knowledge
about the images, such as their context and the objects in them, as well as
knowledge about the imaging system. Two related disciplines, Artificial
Intelligence (AI) and Image Processing (IP), can contribute significantly to
image understanding. A rule-based production system is a widely used knowledge
representation technique, which may be used to capture various kinds of
knowledge, such as perceptual, functional, and semantic knowledge, in an image
understanding system. This paper addresses some issues of the knowledge-based
approach to image understanding, presents a rule-based production system for
X-ray segmentation, and proposes its extension to incorporate multiple
knowledge sources. Here we present only the segmentation part of our research
project, which aims at applying a knowledge-based approach to interpret X-ray
images of bone and to identify fractured regions. |
|
Title: |
A
TAXONOMY FOR INTER-MODEL PARALLELISM IN HIGH PERFORMANCE DATA MINING |
Author(s): |
Ling
Tan, David Taniar and Kate A. Smith |
Abstract: |
This paper categorizes data mining inter-model parallelism into two types,
namely constructive inter-model parallelism and predictive inter-model
parallelism. Within constructive inter-model parallelism, we present a single
algorithm with parametric difference and with data difference, and multiple
algorithms with parametric difference. Within predictive inter-model
parallelism, we present multiple models of similar size and of different
sizes. |
|
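The first constructive type, a single algorithm with parametric difference, maps directly onto a parallel map over a parameter grid; the stand-in build_model function and its toy score below are assumptions, not a real mining algorithm.

```python
# "Single algorithm with parametric difference": the same learner is built
# in parallel under different parameter settings, then the best is kept.
from concurrent.futures import ProcessPoolExecutor

def build_model(params):
    """Stand-in for a mining algorithm; returns (params, model score)."""
    depth, min_samples = params
    score = 1.0 / (1 + abs(depth - 5)) + 0.01 * min_samples   # toy fitness
    return params, score

if __name__ == "__main__":
    grid = [(d, m) for d in (3, 5, 8) for m in (10, 50)]
    with ProcessPoolExecutor() as pool:
        models = list(pool.map(build_model, grid))   # one model per worker
    best = max(models, key=lambda pm: pm[1])
    print("best parameters:", best[0])
```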
Area 3 - INFORMATION SYSTEMS ANALYSIS AND SPECIFICATION
Title: |
SOME
REFLECTIONS ON IS DEVELOPMENT AS OPERATOR OF ORGANISATIONAL CHANGE |
Author(s): |
Ana
Almeida and Licinio Roque |
Abstract: |
We discuss the role of IS development within the context of organisational
learning. We present a theorisation of IS development based on the framework
provided by Activity Theory and the Expansive Learning model. We propose that
the activity of IS development can be understood as an operator of
organisational change through a process of expansive learning, and we reflect
on some of the consequences for IS research and practice: that there can be
three explicit perspectives on the context of IS development; that these
different perspectives can be combined through the use of explicit models of
context and mediators; and that IS as an operator of organisational change
could be managed through the co-evolution of models of context and mediators,
through a set of purposeful activities as methodological movements: diagnosis,
innovation, creation, evaluation, adaptation and generalisation. |
|
Title: |
ANALYSIS
OF THE RELATION BETWEEN THE PRESCRIPTIVE AND DESCRIPTIVE APPROACHES OF THE
INFORMATION SYSTEM PLANNING |
Author(s): |
Jorge
Luis Nicolas Audy |
Abstract: |
The information system planning area presents today several
challenges to managers and researchers. This has happened because modern
information technologies are quickly changing the way they affect and
impact the company’s competition. In this context, the Information
Systems Strategic Planning (ISSP) becomes a critical issue on the area of
information system management. For years, ISSP models have been oriented
towards a prescriptive approach. New approaches for ISSP, in a descriptive
line, may contribute to the search of solutions for current challenges.
This article’s goal is to analyze the relations between descriptive and
prescriptive approaches on ISSP elaboration, looking for an explanatory
model regarding the implementation of the plan created. The study is based
on a qualitative research, and cases studies the main research method. As
a contribution, it develops an analysis of cases studies results and
presents the proposed model. |
|
Title: |
ANALYSING
COMMUNICATION IN THE CONTEXT OF A SOFTWARE PRODUCTION ORGANISATION |
Author(s): |
M.
Cecilia C. Baranauskas, Juliana P. Salles and Kecheng Liu |
Abstract: |
While quality has been widely stressed in literature as a
goal of the software design methodologies, quality as a result of the
interaction among the actors involved in the design and development
processes has not received the same attention. This work aims to
investigate the software production process by addressing the
communication among work groups in the organisation. Our focus is on
understanding the communication process that takes place among the groups,
considering that the computational artefact emerges as a result of the
communicational acts issued between people with different roles in the
process. We base our understanding of communication on semiotic foundations
and propose a framework for analysing communication in the
whole process of system design and development. The design process of a
real organisation that produces commercial software illustrates our main
ideas. |
|
Title: |
BUSINESS
MODELLING WITH UML: DISTILLING DIRECTIONS FOR FUTURE RESEARCH |
Author(s): |
Sergio
de Cesare, Mark Lycett and Dilip Patel |
Abstract: |
The Unified Modelling Language (UML) was originally
conceived as a general-purpose language capable of modelling any type of
system and has been used in a wide range of domains. However, when
modelling systems, the adoption of domain-specific languages can enable
and enhance the clarity, readability and communicability amongst modellers
of the same domain. The UML provides support for extending the language by
defining domain-specific meta-elements. This paper approaches the UML
from a business perspective and analyses its potential as a business
modelling language. The analysis proceeds along two complementary paths: a
critical study of UML diagrams and a description of UML extensibility
mechanisms for the definition of a business profile. |
|
Title: |
THE
SEMANTICS OF REIFYING N-ARY RELATIONSHIPS AS CLASSES |
Author(s): |
Mohamed
Dahchour and Alain Pirotte |
Abstract: |
Many data models do not directly support n-ary relationships. In most cases,
they are either reduced to some of their binary projections or directly
translated into an n-ary “relationship relation” in the relational model. This
paper addresses the reification of an n-ary relationship into a new class with
n binary relationships and studies the preservation of semantics in the
translation. It shows that some semantics may be lost unless explicit
constraints are added to the binary schema. |
|
Title: |
UPDATING
DATA IN GIS: HOW TO MAINTAIN DATABASE CONSISTENCY? |
Author(s): |
H.
Kadri-Dahmani and A. Osmani |
Abstract: |
Preserving the consistency of a geographical database requires the development
of tools for integrating or removing data and making several versions of any
geographical object coexist without altering either the global consistency or
the informational power of the database. To construct such a tool, two
problems must be solved: how to organize the data for efficient updating, and
how to maintain database consistency during and after the integration of
updates. The latter may be automated in the form of an Integration Module, one
of whose main tasks is to detect conflicts which perturb the consistency of
the database during update integration. This paper describes the Integration
Module we have designed and its role in the general updating process we
propose for GIS. |
|
Title: |
A
PROPOSAL FOR THE INCORPORATION OF THE FEATURES MODEL INTO THE UML LANGUAGE |
Author(s): |
Ivan
Mathias Filho, Toacy C. de Oliveira and Carlos J.P. de Lucena |
Abstract: |
Feature modeling is one of the most successful techniques in
use to promote the reuse of software artifacts from the initial stages of
development. The purpose of this paper is to present a proposal for the
incorporation of feature modeling into the UML language, which has become
a standard for the modeling of object-oriented software. The precise and
careful specification of this proposal, which uses formal and informal
techniques such as graphic notation, natural language and formal language,
will permit the building of tools that provide support to the development
of feature models for families of applications, and the instantiation of
members of these families based on these models. Moreover, the use of the
XMI standard in the representation of the feature models will facilitate
their integration with the many CASE tools already existing on the market. |
|
Title: |
ONTOLOGIES
SUPPORTING BUSINESS PROCESS RE-ENGINEERING |
Author(s): |
Alexandra
Galatescu and Taisia Greceanu |
Abstract: |
The paper motivates the use of ontologies for the automation of a Business
Process Re-engineering (BPR) methodology (in particular, for the continuous
and incremental improvement of business processes using TQM - Total Quality
Management). It describes and exemplifies an upper-level ontology with
linguistic features and its application to three particular ontologies: a BPR,
a domain and a communication ontology. The main benefits of ontologies for BPR
are: (1) they facilitate (virtual) team work by providing a common vocabulary
and understanding; (2) the ontological axioms support the organization and
formalization of BPR-specific knowledge, as well as inferences upon it; (3) an
upper-level ontology helps with the integration of BPR and domain-specific
information and knowledge. |
|
Title: |
CONCEPTUAL
ARCHITECTURE FOR THE ASSESSMENT AND IMPROVEMENT OF SOFTWARE MAINTENANCE |
Author(s): |
Félix
García, Francisco Ruiz, Mario Piattini and Macario Polo |
Abstract: |
The management of software processes is a complex activity due to the great
number of different aspects to be considered. For this reason it is useful to
establish a conceptual architecture which includes all the aspects necessary
to manage this complexity. The fundamental element of any conceptual
architecture is its meta-data, which, organized in different modeling levels,
can be used to manage effectively the complexity of software processes and
especially the maintenance process (Pigosky, 1996). In this study we present a
four-level conceptual architecture to represent and manage the assessment and
improvement of software processes by means of the definition of the
appropriate models and meta-models. This architecture is based on the Meta
Object Facility (MOF) standard proposed by the Object Management Group (OMG,
2000). In particular, the architecture includes all the aspects necessary for
carrying out the assessment and improvement of the Software Maintenance
Process (SMP) and allows us to represent the different data and meta-data used
in its management by modeling concepts at different levels of abstraction:
meta-models of generic processes, models of software processes, concrete
software processes (in our case, that of the assessment of other processes)
and instances of carrying out a specific process. As a support to this
architecture we present MANTIS-Metamod, a tool for the modeling of software
processes based on the concepts discussed previously. MANTIS-Metamod is a
component of MANTIS, an integral environment for the management of the SMP,
including its assessment and improvement. |
|
Title: |
REUSABLE
COMPONENT EXTRACTION FROM INTELLIGENT NETWORK MANAGEMENT APPLICATIONS |
Author(s): |
Dániel
Hoványi |
Abstract: |
One of the most important issues targeted by the software
industry in the last few decades is the problem of reusability. In spite
of the efforts being made, the results are not satisfactory. This paper
discusses this issue within a limited scope, analysing the possibilities
for reuse within the telecommunication and intelligent network services
domain, and presenting a process for the extraction of reusable components
from existing applications. The first section of the paper gives an
overview of the context, the second section presents our process for
component extraction, whereas the third section shows some examples of the
results of the process. |
|
Title: |
SEMANTIC
AUGMENTATION THROUGH ONTOLOGY FOR XML INTEGRATION SERVER |
Author(s): |
Zaijun
Hu |
Abstract: |
XML is widely used in industry for information integration and exchange, due
to its flexible, open and extensible features as well as strong support from
the leading software firms. One purpose of this integration is to provide a
unified search service over different data sources. In this paper we present
an ontology-based semantic augmentation method that can be used to build a
semantic graph for effective and efficient search. We define the fundamental
structure of the ontology, the semantic augmentation process and the basic
software components that compose the system. |
|
Title: |
DESIGNING
BUSINESS PROCESSES AND COMMUNICATION STRUCTURES FOR E-BUSINESS USING
ONTOLOGY-BASED ENTERPRISE MODELS WITH MATHEMATICAL MODELS |
Author(s): |
Henry
M. Kim and K. Donald Tham |
Abstract: |
Organizations are apprehensive about developing e-business systems because the
endeavor is novel. If e-business is considered as the conduct of business
using the Internet (a network of networks), then e-business systems design can
be represented as a network design problem.
This paper outlines an approach for analysis and design of business
process and communications structure networks for e-business. Network
design alternatives are generated by applying best practices and design
principles to business requirements, using ontology-based enterprise
models. Alternatives then are modeled mathematically for analysis and
comparison. Domains relevant for e-business systems design are described,
formally and systematically, using this approach. These formal
descriptions are general axioms, used to logically and mathematically
infer prescriptions for specific design problems. These descriptions and
prescriptions are sharable and reusable. The mathematical models are
developed using known algorithms, heuristics, and formulae. Therefore,
fidelity of prescriptions based on these models can be objectively
justified. Due to these characteristics, models developed using this
technique are especially useful for developing novel e-business systems.
An example application of this technique is presented, and research
questions addressed using the approach are discussed. |
|
Title: |
USING
ATOM3 AS A META-CASE TOOL |
Author(s): |
Juan
de Lara and Hans Vangheluwe |
Abstract: |
In this paper we present AToM3, A Tool for Multi-formalism
and Meta-Modelling, and show how it can be used to generate CASE tools.
AToM3 has a meta-modelling layer which allows one to model formalisms
(simulation formalisms, software modelling notations, etc.) and is able to
generate custom tools to process (create, edit, simulate, optimize, etc.)
models expressed in these formalisms. AToM3 relies on graph rewriting
techniques and graph grammars to express such model processing. AToM3 has
been designed and used mostly for modelling and simulation of physical
systems. In this paper we show that it can also be used to describe tools
for analysis, design and synthesis of software. We demonstrate this by
creating tools for structured analysis and design, and by defining some
graph grammars to automatically transform Data Flow Diagrams into
Structure Charts and to ’optimize’ these models. |
|
Title: |
FRAMEWORKS
– A HIGH LEVEL INSTANTIATION APPROACH |
Author(s): |
Toacy
C. de Oliveira, Ivan Mathias Filho and Carlos J.P. de Lucena |
Abstract: |
Object-oriented frameworks are currently regarded as a
promising technology for reusing designs and implementations. However,
developers find there is still a steep learning curve when extracting the
design rationale and understanding the framework documentation during
framework instantiation. Thus, instantiation is a costly process in terms
of time, people and other resources. These problems raise a number of
questions including: “How can frameworks be instantiated more quickly
and with greater ease? How can the same high-level design abstractions
that were used to develop the framework be used during framework
instantiation instead of using source code as is done currently? How can
we capture the designers’ knowledge of the framework in order to expose
reuse points using high level abstractions?” In this paper we present a
systematic approach to the framework instantiation process that captures
and exposes high level abstractions using the Feature Model which is then
mapped to an extended UML notation and a domain specific language for
instantiation process control. On top of all these techniques, we have
developed an environment that uses XMI as its backend representation and
guides the reuser through the process. |
|
Title: |
AUTOMATING
THE CODE GENERATION OF ROLE CLASSES IN OO CONCEPTUAL SCHEMAS |
Author(s): |
Vicente
Pelechano, Manoli Albert, Eva Campos and Oscar Pastor |
Abstract: |
In this work, we present an automatic code generation
process from conceptual schemas. This process incorporates the use of
design patterns in OO-Method, an automatic software production method,
which is built on a formal object-oriented model called OASIS. Our
approach defines a precise mapping between conceptual patterns, design
patterns and their implementation. Design patterns make the code generation
process easier because they bridge the problem space and the solution space.
In order to understand these ideas, we introduce a complete code
generation of conceptual schemas that have player/role relationships. This
proposal can be incorporated into CASE tools, making the automation of the
software production process feasible. |
|
Title: |
A
FUNCTIONAL SIZE MEASUREMENT METHOD FOR EVENT-BASED OBJECT-ORIENTED
ENTERPRISE MODELS |
Author(s): |
Geert
Poels |
Abstract: |
The effective management of IS-related processes requires
measuring the functional size of information systems. Functional size
measurement is usually performed using the Function Points Analysis
method. Earlier attempts to apply Function Point counting rules to
object-oriented systems met with serious problems because the implicit
model of functional user requirements in Function Points Analysis is hard
to reconcile with the object-oriented paradigm. The emergence of a new
generation of functional size measurement methods has changed this
picture. The main implementation of this generation, COSMIC Full Function
Points, explicitly defines a generic model of functional user requirements
onto which artifacts belonging to any IS specification or engineering
methodology can be mapped. In this paper we present specific COSMIC-FFP
mapping rules for methodologies that take an event-based approach to
information system engineering. In particular we show that the
event-oriented nature of the COSMIC-FFP measurement rules provides for a
natural mapping of concepts. To illustrate the mapping rules we use
MERODE, a formal event-based object-oriented methodology for systems
development in information processing intensive domains. The mapping rules
presented are confined to the enterprise layer in a MERODE IS
architecture. |
|
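The flavour of such measurement can be shown with a toy count: in COSMIC FFP, each data movement (Entry, eXit, Read, Write) identified for a functional process contributes one unit of functional size. The events and movements below are invented for illustration, not drawn from MERODE or the paper.

```python
# Toy COSMIC-FFP style count: one size unit per identified data movement.
movements = {
    # business event -> its identified data movements (invented examples)
    "create_order": ["Entry: order data", "Read: customer", "Write: order",
                     "eXit: confirmation"],
    "cancel_order": ["Entry: order id", "Read: order", "Write: order",
                     "eXit: cancellation notice"],
}
size = {event: len(moves) for event, moves in movements.items()}
print(size, "- total functional size:", sum(size.values()), "units")
```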
Title: |
THE
CONTEXT ENGINEERING APPROACH |
Author(s): |
Licínio
Roque and Ana Almeida |
Abstract: |
We present a brief overview of the activities that
constitute our practice as IS developers and proceed to organise them in a
Context Engineering framework to accomplish the co-evolution of context
and mediators, as a way to proactively drive organisational change. This
approach proposes the explicit consideration of the context as an object of
design. This goal is illustrated with excerpts from a business modelling case
that proposes building trust through anonymity in the
context of online retailing. We use the value net tool for strategic
analysis and UML collaborations as possible models for context. |
|
Title: |
SEQUENCE
CONSTRAINTS IN BUSINESS MODELLING AND BUSINESS PROCESS MODELLING |
Author(s): |
Monique
Snoeck |
Abstract: |
Separation of concerns is one of the main principles for
achieving maintainability and adaptability of software systems. In
particular, when analysing business rules it is important to separate
business process aspects from essential business rules. Current
object-oriented analysis methods offer little support for this. In this
paper we explore the problem of sequence constraints on business events.
Some of these constraints are the result of the way the business is
organised whereas other are essential for the business. In addition we
investigate how to ensure the compatibility between business rules and
business processes. |
|
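Sequence constraints on business events can be made executable as a small finite state machine that accepts only legal event orders; the event set and transition table below are illustrative, not taken from the paper.

```python
# A sequence constraint on business events as a tiny finite state machine:
# an order must be created before it is shipped or invoiced, then closed.
TRANSITIONS = {
    ("start",    "create"):  "open",
    ("open",     "ship"):    "shipped",
    ("shipped",  "invoice"): "invoiced",
    ("invoiced", "close"):   "done",
}

def accepts(events):
    """Replay an event sequence; report the first violation, if any."""
    state = "start"
    for e in events:
        key = (state, e)
        if key not in TRANSITIONS:
            return False, f"event '{e}' is illegal in state '{state}'"
        state = TRANSITIONS[key]
    return state == "done", state

print(accepts(["create", "ship", "invoice", "close"]))   # (True, 'done')
print(accepts(["create", "invoice"]))                    # illegal order
```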
Title: |
A
TOOL FOR ASSESSING THE CONSISTENCY OF WEBSITES |
Author(s): |
Sibylle
Steinau, Oscar Díaz, Juan J. Rodríguez and Felipe Ibánez |
Abstract: |
Usability is becoming an increasingly important design factor for web sites.
However, time and budget constraints in web projects prevent the hiring of
usability professionals to conduct tests that are costly and time-consuming to
perform. A number of automatic usability assessment tools have been developed,
most of which offer reports on a per-page basis. However, they fail to provide
inter-page assessments to test, for example, the consistency of the site.
Consistency refers to the extent to which a set of pages share a common
layout. This work presents CAT, a Consistency Analysis Tool that, besides
providing static, page-based usability measures, strives to assess the
consistency of a website using Java and XSLT. The tool is based on a
consistency model which is updated every time a page has been processed.
Consistency testing involves collating each page with this model, reporting
mismatches with the consistency attributes and adapting the model as new
features are encountered for the first time. |
|
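A toy version of the inter-page idea: reduce each page to a layout signature (its sequence of structural tags) and score a new page against the model built from pages seen so far. The real tool works in Java and XSLT with a richer consistency model; this Python sketch only shows the collate step.

```python
# Compare a page's layout signature against a site model of tag sequences.
from difflib import SequenceMatcher
from html.parser import HTMLParser

class TagSequence(HTMLParser):
    """Collect the sequence of opening tags, i.e. the page's layout skeleton."""
    def __init__(self):
        super().__init__()
        self.tags = []
    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

def signature(html):
    parser = TagSequence()
    parser.feed(html)
    return parser.tags

model = signature("<html><body><h1></h1><ul><li></li></ul></body></html>")
page = "<html><body><h1></h1><p></p><ul><li></li></ul></body></html>"
score = SequenceMatcher(None, model, signature(page)).ratio()
print(f"consistency with site model: {score:.2f}")   # 1.0 = same layout
```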
Title: |
THE
GOLD MODEL CASE TOOL: AN ENVIRONMENT FOR DESIGNING OLAP APPLICATIONS |
Author(s): |
Juan
Trujillo, Sergio Luján-Mora and Enrique Medina |
Abstract: |
The number of On-Line Analytical Processing (OLAP) applications on the market
has increased dramatically in recent years. Most of these applications provide
their own multidimensional (MD) models to represent the main MD properties,
thereby making the design totally dependent on the target commercial OLAP
application. In this paper, we present a Computer-Aided Software-Engineering
(CASE) operational environment to accomplish the design of OLAP applications
totally independently of the target commercial OLAP tool. The designer uses a
Unified Modeling Language (UML) compliant approach to represent MD properties
at the conceptual level. Once the conceptual design is finished, the CASE tool
semi-automatically generates the corresponding implementation for the target
commercial OLAP tool. Therefore, our approach frees conceptual design from
implementation issues. |
|
Title: |
AN
INTERNATIONAL STUDY OF BENCHMARKING SPREAD AND MATURITY |
Author(s): |
Mohamed
Zairi and Majed Al-Mashari |
Abstract: |
Developing best practice through benchmarking features as a critical activity
in the business world, as it is a vital approach for sharing and transferring
knowledge. Companies across the globe have embraced these concepts but have
done so with varied levels of success. Some have managed to create huge
marketplace advantages whilst others have fared less favourably. The purpose
of this research is to establish the level of benchmarking activity and
application globally. The information gathered included both the hard and soft
issues associated with benchmarking and, following analysis, was used to
evaluate the level of benchmarking maturity reached across different industry
fields and sizes of operation. This global survey helps in understanding what
leads to effective benchmarking and the development of best practices. |
|
Title: |
TAMING
PROCESS DEVIATIONS BY LOGIC BASED MONITORING |
Author(s): |
Ilham
Alloui, Sorana Cîmpan and Flavio Oquendo |
Abstract: |
In a context of continuous change, monitoring processes and
their interactions is a key issue for computing and reacting to the
deviations that might occur with respect to a defined model that serves as
reference. We consider that deviations of real processes with respect to
their models have to be managed and that process centred logic-based
monitoring represents a suitable approach to handle such a problem. This
paper presents a logic-based process monitoring system (language and
execution mechanisms) that focuses on the detection of deviations between
an actual process execution and its expected behaviour. The approach
addresses monitoring of individual process activities as well as the
interactions among them. The work presented has been developed and
validated in the framework of an ESPRIT IV LTR project. |
|
Title: |
APPLYING
DOMAIN MODELING AND SECI THEORY IN KNOWLEDGE MANAGEMENT FOR INFORMATION
SYSTEMS ANALYSIS |
Author(s): |
Akihiro
Abe |
Abstract: |
Applications of knowledge management in the information systems field are
limited, except for trouble-shooting, project management, and software quality
improvement. This paper describes a knowledge management framework that draws
on the domain modeling method and the SECI theory to improve the quality of
the information systems analysis process. The proposed knowledge management
process is divided into four phases of knowledge conversion and is discussed
in terms of the IT and methodology needed to support each phase. Taking the
transportation and delivery scheduling system domain as an example, the basic
structure of a domain model and the effectiveness and characteristics of the
knowledge management framework are shown. The domain model stored in the
knowledge repository has been refined and enhanced repeatedly through trial
evaluations in actual information system analyses and has almost reached a
practical level. |
|
Title: |
IF
YOU WISH TO CHANGE THE WORLD, START WITH YOURSELF |
Author(s): |
Ilia
Bider and Maxim Khomyakov |
Abstract: |
During the past ten years, requirements on the functionality of business
applications have been slowly changing. This shift consists of moving from
traditional command-based applications to inherently interactive applications
of the workflow and groupware type. For modeling this new kind of business
application, the authors suggest an approach to defining interaction that is
not based on explicit communication. In this approach, interaction is realized
via active relationships that can propagate changes from one object to
another. Based on this idea, which comes from the authors' previous research,
the paper discusses the issues of introducing “harnesses” on interactive
behavior, finding the right place for end-users in the model, and modeling the
distribution of tasks between different users. |
|
Title: |
ON
THE USE OF JACKSON STRUCTURED PROGRAMMING (JSP) FOR THE STRUCTURED DESIGN
OF XSL TRANSFORMATIONS |
Author(s): |
Guido
Dedene |
Abstract: |
This paper develops a new and original application of Jackson Structured
Programming to the structured design of XSL Transformation sheets in XML
technology. The approach is illustrated by means of an XML version of
Jackson’s Sorted Movements File example, with several design variations. The
method proposed in this paper is an XSLT design technique which can be
implemented as an XSLT generator. |
|
Title: |
A
FRAMEWORK FOR THE DYNAMIC ALIGNMENT OF STRATEGIES |
Author(s): |
S.
Hanlon and L. Sun |
Abstract: |
As IT evolves and business changes, the role of information
systems and information technology becomes important in redefining the
boundaries within which businesses operate. Current literature on the
alignment of information technology strategy, information systems strategy
and business strategy, and the conceptual models available, provide a
structured, iterative approach for assisting businesses in rethinking their
strategic positions. A synergy is created between the organisation and its
management processes, corporate and business strategy, the IT platform and
IS architecture. This paper presents a holistic framework which consists
of two main models: Strategic Alignment Web (SAW), Strategic Alignment
Evaluation Web (SAEW), to assist a process of alignment of IT/IS strategy,
corporate/business strategy, the organisational structure, governance and
information management and people, culture and resources. A case study has
been used to demonstrate the use of the techniques to create these models
and is followed by the critical analysis of the results. |
|
Title: |
INFERRING
ASPECTS OF THE ORGANIZATIONAL STRUCTURE THROUGH WORKFLOW PROCESS ANALYSIS |
Author(s): |
Cirano
Iochpe and Lucinéia Heloisa Thom |
Abstract: |
Any organizational structure can be characterized by a set
of structural aspects or parameters. Organizations differ from one another
in the values their structural aspects assume, respectively. In addition,
business authors argue that every organization is structured according to
its main business processes. Since workflow processes represent in computer
systems both static and dynamic aspects of business processes, one can
identify the values of the main structural aspects of an organization through
the analysis of its workflow
processes. This paper reports partial results of an ongoing investigation
that aims at identifying workflow process subschemas that are dependent
upon structural aspects of organizations. The benefit of explicitly
representing the relationship between the organizational structure and its
workflow processes is twofold. On the one hand, it can provide business
professionals with a complementary tool for better understanding the
organization. On the other hand, it can provide workflow system designers
with an additional tool that can help them understand business processes
during requirements analysis, reducing information assessment errors that
may occur during interviews due to either language conflicts or cultural
resistance by professionals of the organization. |
|
Title: |
A
KNOWLEDGE OBJECT ORIENTED SYSTEM FOR HIGH THROUGHPUT COLLECTION AND
ANALYSIS OF DATA |
Author(s): |
Huiqing
Liu and Tecksin Lim |
Abstract: |
An approach, KOOP (Knowledge Object Oriented Programming)
that can integrate, personalize and automate a set of processes based on
the business logic has been developed. This technology processes high
throughput data in heterogeneous and distributed environment. In addition,
KOOP has the ability to trap the explicit, implicit and hidden knowledge
embedded in the business flows. |
|
Title: |
MANAGING
ENTERPRISE COMMUNICATION NETWORKS TO IMPROVE THE REQUIREMENTS ELICITATION
PROCESS |
Author(s): |
Juan
M. Luzuriaga, Rodolfo Martínez and Alejandra Cechich |
Abstract: |
Although researchers have noted the importance of effective communication
among stakeholders, it continues to be a challenge for requirements
engineering. Moreover, communication can be considered a management tool,
since it allows an organisation's personnel to produce a cohesive enterprise
view. Communication facilitates commitment by avoiding the definition of
conflicting goals, and it also contributes to making an organisation's
processes more flexible. Communication is present everywhere, and it also
constitutes a source of power. In this paper, we present a process for
managing communication during the requirements elicitation phase. Our process
helps obtain well-defined requirements by using the knowledge inside
organisations. This is a starting point for developing a requirements
elicitation strategy based on communication skills, organisation knowledge,
and quality attributes. |
|
Title: |
INTRODUCING
BUSINESS PROCESS AUTOMATION IN DYNAMIC BANKING ACTIVITIES |
Author(s): |
Maria
Nikolaidou and Dimosthenis Anagnostopoulos |
Abstract: |
In a competitive environment as the Banking Sector, there is
a constant need to monitor, evaluate and refine business activities.
Business Process Automation is an effective tool towards this direction,
facilitating the improved performance of business activities and
enterprise wide monitoring and coordination. Business Process Automation
is performed through the use of Workflow Management Systems. In this
paper, we present the Workflow Management System implemented to satisfy
the needs of the Loan Monitoring Department of a medium sized Bank. Loan
Monitoring is a typical banking procedure, which includes activities
concerning loan approval, collection of delinquent loan installments and
initiation of appropriate legal claims. The Loan Monitoring mechanism
employed is a significant factor determining profits. Relevant activities
are influenced by frequently altered and subjective criteria and are often
performed in cooperation with external business partners, as legal firms
and brokers. The Loan Management System, built to support the Loan
Monitoring Department, facilitates the specification of an organizational
structure, the description of business processes and the construction of
dynamic workflows. It provides flexibility during business process
description, allowing the modification of a workflow while it is running
and operates in a distributed environment over the Internet. In the paper,
we also discuss the experience obtained using the system during the last
two years and its impact on loan monitoring processes. |
|
Title: |
INCORPORATING
KNOWLEDGE ENGINEERING TECHNIQUES TO REQUIREMENTS CAPTURE IN THE MIDAS WEB
APPLICATIONS DEVELOPMENT PROCESS |
Author(s): |
A.
Sierra-Alonso, P. Cáceres, E. Marcos and J. E. Pérez-Martínez |
Abstract: |
Web technology has had a great impact in the last years. In
consequence, a lot of Web applications are being developed. At first,
these applications were developed without using any Software Engineering
method. However, because of the special characteristics of this type of
applications, a lot of research has focused on Web Software Engineering.
In most cases, Web Engineering is based on techniques and processes from
traditional Software Engineering. These inherited methods and processes
have been adapted to the new needs of Web environments. Nevertheless, the
Software Engineering techniques can be insufficient to solve some
problems. For example, the domain knowledge acquisition to acquire and
understand the requirements when those requirements and knowledge are only
in the “experts’ mind”. However, this problem has been widely
treated in the Knowledge Engineering field. Thus, Web Engineering would
incorporate and profit from Knowledge Engineering techniques. In this
paper, we propose to use knowledge acquisition techniques from Knowledge
Engineering to capture the requirements. In these cases, the mixture of
the use cases with knowledge acquisition techniques will give us not only
the correct requirements but also elements of the software architecture. |
|
Title: |
HYPERCLASSES |
Author(s): |
Slim
Turki and Michel Léonard |
Abstract: |
A hyperclass is a large class, formed from a subset of the conceptual
classes of the global schema of a database, that constitutes a unit with
precise semantics; it describes a particular field of competence over the
global schema. A hyperclass is defined by its set of member classes, its
root class and an associated acyclic, oriented navigation graph. The
objects of a hyperclass are called hyperobjects. On a hyperclass it is
possible to define hypermethods: a hypermethod is defined over several
classes of the hyperclass, can use different hyperclass attributes and can
invoke class methods. A priority of our work is to protect hypermethods
from dysfunctions due to evolution operations: hypermethods must remain
functional and must keep giving the same results. The hyperclass concept
provides a powerful kind of independence between the methods defined over
the hyperclass and the schema of the hyperclass. |
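To make the structure concrete, here is a minimal Java sketch of a
hyperclass as a root class plus member classes connected by a directed
navigation graph. It is our own illustration of the definitions above,
with hypothetical names, not the authors' implementation:

```java
import java.util.*;

// Minimal sketch of the hyperclass structure described above.
// All names are hypothetical illustrations, not the authors' code.
final class Hyperclass {
    private final String rootClass;
    private final Set<String> memberClasses = new HashSet<>();
    // Navigation graph: directed edges between member classes.
    private final Map<String, List<String>> navigation = new HashMap<>();

    Hyperclass(String rootClass) {
        this.rootClass = rootClass;
        memberClasses.add(rootClass);
    }

    void addMember(String cls) { memberClasses.add(cls); }

    // Add a directed navigation edge; a real implementation would also
    // reject edges that introduce a cycle, keeping the graph acyclic.
    void addEdge(String from, String to) {
        if (!memberClasses.contains(from) || !memberClasses.contains(to))
            throw new IllegalArgumentException("unknown member class");
        navigation.computeIfAbsent(from, k -> new ArrayList<>()).add(to);
    }

    // Classes reachable from the root: the scope a hypermethod may use.
    Set<String> reachableFromRoot() {
        Set<String> seen = new LinkedHashSet<>();
        Deque<String> stack = new ArrayDeque<>(List.of(rootClass));
        while (!stack.isEmpty()) {
            String c = stack.pop();
            if (seen.add(c)) stack.addAll(navigation.getOrDefault(c, List.of()));
        }
        return seen;
    }
}
```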
|
Title: |
LINKING
MOBILE NETWORK SERVICES TO INTERNET MAIL |
Author(s): |
Hans
Weghorn, Carolin Gaum and Daniel Wloczka |
Abstract: |
As state of technology, today business people typically are
connected to several heterogeneous message services, while the e-mail
system more and more establishes as the centre of all these. The approach
described here shows a method to integrate mobile telephony services with
the email system of a user. Incoming messages, which can be text-based
packets or voice calls, are accepted by an automatic desktop system, and
the received messages directly are transposed to the e-mail system of the
target user. The entire system is constructed on base of a software agent,
which communicates through a special hardware adapter with the mobile
phone device. |
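As a rough illustration of the message-transposition step, the following
Java sketch forwards a text message received from the phone adapter to the
user's mailbox with the standard JavaMail API. The hardware-adapter side
and all names are assumptions of ours, not the authors' agent code:

```java
import java.util.Properties;
import javax.mail.*;
import javax.mail.internet.*;

// Hypothetical sketch: forward a text message received from the phone
// adapter to the user's mailbox via SMTP (JavaMail). The hardware side
// that delivers 'sender' and 'body' is assumed and not shown.
public class SmsToMailAgent {
    private final Session session;

    public SmsToMailAgent(String smtpHost) {
        Properties props = new Properties();
        props.put("mail.smtp.host", smtpHost);
        session = Session.getInstance(props);
    }

    public void forward(String sender, String body, String userMailbox)
            throws MessagingException {
        Message msg = new MimeMessage(session);
        msg.setFrom(new InternetAddress("gateway@example.org"));
        msg.setRecipient(Message.RecipientType.TO, new InternetAddress(userMailbox));
        msg.setSubject("Mobile message from " + sender);
        msg.setText(body);
        Transport.send(msg); // hand the transposed message to the e-mail system
    }
}
```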
|
Title: |
INTER-ORGANIZATIONAL
WORKFLOW MANAGEMENT IN VIRTUAL HEALTHCARE ENTERPRISES |
Author(s): |
Tauqir
Amin and Pung Hung Keng |
Abstract: |
Virtual enterprise provides an attractive model for
autonomous healthcare organizations to synergize and leverage the
strengths of each others. We present in this paper the design concept and
working principle of an inter-organizational workflow process management
system for virtual healthcare enterprises. We propose an extended WFMC
process Meta model and a new concept of Partial View of Virtual Process
for representation and enactment of cross organizational processes. Using
this approach, local processes of participating organizations can be
integrated with other external processes quickly. The joining and
disjoining a virtual enterprise by an organization is also transparent to
all except the directly interacting organizations. |
|
Title: |
SURVEY,
ANALYSIS AND VALIDATION OF INFORMATION FOR BUSINESS PROCESS MODELING |
Author(s): |
Nuno
Castela, José Tribolet, Arminda Guerra and Eurico Lopes |
Abstract: |
Business processes modeling became a fundamental task for
organizations. To model business processes is necessary to know all the
activities as well as consumed and produced informational resources. From
this knowledge, abstractions are constructed, which allow elaborating a
high-level business process model. This modeling process, which goes from
the survey of the activities of the organizational units to the
construction of a business model, follows a bottom-up approach. However,
the majority of the existing business processes modeling tools follow a
top-down approach, more adjusted to the To Be modeling, what makes the
development of the As Is modeling more difficult. These tools start from
the high-level business processes models, which became detailed to a more
granular level through the decomposition in activities, the opposite of
the necessary for the As Is modeling. This document establishes a
methodology for the survey, analysis and validation of the information
necessary for As Is business processes modeling, conjugating top-down and
bottom-up approaches, in an iterative and articulated way. |
|
Title: |
FD3:
A FUNCTIONAL DEPENDENCIES DATA DICTIONARY |
Author(s): |
M.
Enciso and A. Mora |
Abstract: |
In this paper, we propose the use of a Functional Dependencies Data
Dictionary (FD3) to facilitate the integration of heterogeneous systems.
The heart of this approach is the notion of functional dependence (FD). We
present a formal language, the Extended Functional Dependencies (EFD)
logic, suitable for carrying out integration tasks. The set of FDs
contained in each data model is translated directly into an EFD logic
theory, and FD3 integrates the EFD logic sub-theories into a unique,
unified EFD logic theory. The EFD axiomatic system eases the design of an
automatic integration process. To communicate the information in the EFD
logic theory, we introduce a High Level Functional Dependencies (HLFD)
data model, which is used in a similar way to the Entity/Relationship
model; the HLFD data model can be deduced automatically from the EFD logic
theory. |
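As background to the kind of FD reasoning such a dictionary must support,
here is a minimal Java sketch of the classical attribute-closure
computation over a set of functional dependencies. It illustrates standard
theory only, not the EFD logic itself, and all names are ours:

```java
import java.util.*;

// Sketch of the classical attribute-closure computation that any
// FD-based dictionary needs when reasoning over functional dependencies.
// FDs are written X -> Y with X and Y as attribute sets; names are ours.
public class FdClosure {
    record Fd(Set<String> lhs, Set<String> rhs) {}

    // Returns all attributes functionally determined by 'seed' under 'fds'.
    static Set<String> closure(Set<String> seed, List<Fd> fds) {
        Set<String> result = new HashSet<>(seed);
        boolean changed = true;
        while (changed) {
            changed = false;
            for (Fd fd : fds)
                if (result.containsAll(fd.lhs()) && result.addAll(fd.rhs()))
                    changed = true;
        }
        return result;
    }

    public static void main(String[] args) {
        List<Fd> fds = List.of(
            new Fd(Set.of("A"), Set.of("B")),
            new Fd(Set.of("B"), Set.of("C")));
        // {A}+ = {A, B, C}: A determines B, and B determines C.
        System.out.println(closure(Set.of("A"), fds));
    }
}
```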
|
Title: |
BEYOND
OBJECT ORIENTED DESIGN PATTERNS |
Author(s): |
Javier
Garzás and Mario Piattini |
Abstract: |
Nowadays, due to experience acquired during years of
investigation and development of Object Oriented systems, numerous
techniques and methods that facilitate their design are available to us.
In this article we present a compilation, analysis and relationship of the
object oriented design knowledge, as well as this can facilitate a new
base for the study, so we will be able to learn how to apply the
knowledge. |
|
Title: |
MEDIATED
COMMUNICATION IN GROUPWARE SYSTEMS |
Author(s): |
Luis
A. Guerrero, Sergio Ochoa, Oriel Herrera and David A. Fuller |
Abstract: |
In the creation of groupware systems, a good design of the communication
mechanisms required by the group to carry out its work is essential, since
many other design aspects depend on it, and these aspects have a major
impact on the success or failure of the collaborative application. This
paper presents a computer-mediated communication taxonomy for groupware
systems, as well as an architectural pattern to support the design and
construction of the communication mechanisms required by groupware
systems, bearing in mind the application's other design aspects. |
|
Title: |
AN
EXECUTION MODEL FOR PRESERVING CARDINALITY CONSTRAINTS IN THE RELATIONAL
MODEL |
Author(s): |
Harith
T. Al-Jumaily, Dolores Cuadra and Paloma Martínez |
Abstract: |
An active database is an extension of a passive DBMS with triggers. For
this reason, the execution model of such a database is quite complicated,
because its performance depends on the active behaviour of the triggers.
Triggers are activated autonomously at the moment certain specified events
are produced, causing actions in the database. It is necessary to control
their activity, since at times they produce unwanted actions, and often a
confluence problem arises among them because of their disordered
activation. In this work we define an execution model for the triggers
involved in controlling the cardinality constraints of an ER schema,
seeking to avoid any loss of semantics. |
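To give a flavour of the triggers involved, the following sketch installs,
via JDBC, a trigger that rejects inserts violating a maximum cardinality.
The MySQL-style syntax, the schema and all names are our assumptions for
illustration, not the authors' execution model; other DBMS dialects differ
and may restrict what a trigger can read from its own table:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

// Illustrative sketch, not the authors' model: a trigger that rejects
// inserts violating a maximum cardinality of 3 employees per department,
// in the spirit of preserving ER cardinality constraints.
public class CardinalityTrigger {
    public static void main(String[] args) throws SQLException {
        try (Connection con = DriverManager.getConnection(
                 "jdbc:mysql://localhost/demo", "user", "pw");
             Statement st = con.createStatement()) {
            st.executeUpdate(
                "CREATE TRIGGER emp_card_check BEFORE INSERT ON employee " +
                "FOR EACH ROW " +
                "BEGIN " +
                "  IF (SELECT COUNT(*) FROM employee " +
                "      WHERE dept_id = NEW.dept_id) >= 3 THEN " +
                "    SIGNAL SQLSTATE '45000' " +
                "    SET MESSAGE_TEXT = 'cardinality limit exceeded'; " +
                "  END IF; " +
                "END");
        }
    }
}
```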
|
Title: |
TOWARDS
A NEW BUSINESS PROCESS ARCHITECTURE |
Author(s): |
Takaaki
Kamogawa and Masao J. Matsumoto |
Abstract: |
The authors present a new business process architecture.
Enterprises have improved so far their business processes mainly for
companies themselves, say, intra-company improvement. But only
intra-company improvement is no longer enough for survive themselves. They
will have to extensively focus on improving and innovating their business
processes across themselves and trading partners including a consortium.
This paper discusses several business process models from business process
evolution viewpoint based on the case study. If enterprises can use
information technology (in short, IT) such as B2B marketplace as the
enablers for their business processes, the business processes are reshaped
and redesigned more efficiently. As a result of business process redesign
towards the new business process architecture the authors propose,
enterprises can solve several issues like stock surplus level, poor
customer response, and long lead-time to customers. |
|
Title: |
REQUIREMENTS
SPECIFICATION MODEL IN A SOFTWARE DEVELOPMENT PROCESS INSIDE A PHYSICALLY
DISTRIBUTED ENVIRONMENT |
Author(s): |
Rafael
Prikladnicki, Fernando Peres, Jorge Audy, Michael da Costa Móra and
Antônio Perdigoto |
Abstract: |
The purpose of this paper is to propose a software
development model, centered in the requirements specification phase,
adapted to the research and development characteristics in the e-business
area, where the users and development teams are found in a physically
distributed environment (United States and Brazil). The results of a case
study are also presented here, in development of a specific software for
DELL Computers e-business website. This research is classified as an
explanatory study, where the main research method was the case study. As
result, the proposed model is presented and described, adding a planning
phase to the software development process, based on the concept proposed
by the Unified Process and the UML language. Some aspects are also
discussed, such as the partnership of a great worldwide company in
computing (DELL Computers) and a great Brazilian university (PUCRS), in
the research and development conjoined project context. |
|
Title: |
INTEGRATED
PLANNING OF INFORMATION SYSTEMS AND CONTINGENCY AND RECOVERY |
Author(s): |
Leonilde
Reis and Luís Amaral |
Abstract: |
This article emphasizes a group of concerns inherent to Contingency and
Recovery Planning when it is integrated with Information Systems Planning.
It begins by positioning Information Systems Planning and Contingency and
Recovery activities in the organizational context; it then proposes an
approach to developing the planning of these activities in an integrated
way, showing the outcome of this integration. Finally, it describes the
characteristics and advantages of this new approach, with considerations
on the concerns inherent in ensuring business continuity. |
|
Title: |
STEMMING
PROCESS IN SPANISH WORDS WITH THE SUCCESSOR VARIETY METHOD. METHODOLOGY
AND RESULT |
Author(s): |
Manuela
Rodríguez-Luna |
Abstract: |
A detailed study of the stemming process in the Spanish language with the
Successor Variety method is presented. This study represents an important
advance in information retrieval techniques, specifically for this
language. These techniques, which are still experimental, require a deep
study of the grammatical, morphological, syntactic and semantic
characteristics of the words that compose the language. Several techniques
and methods apply such processing to make information retrieval more
efficient, and they have obtained good results for the English language;
this and successive investigations will help obtain more suitable, expert
processes for information retrieval in Spanish. Accordingly, different
techniques have been used to extract the words from a document, analyse
them and extract their roots. In this way, the Successor Variety method
has been used to obtain specific results for the Spanish language and to
observe the behaviour of root-based information retrieval. The test text
is a literary one, since it contains a higher diversity of terms. |
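For readers unfamiliar with the method, the following Java sketch shows
the core of the Successor Variety computation: for each prefix of a word,
count how many distinct letters follow that prefix across a corpus, and
treat peaks in the counts as candidate stem boundaries. The tiny Spanish
corpus is an assumption of ours, for illustration only:

```java
import java.util.*;

// Minimal sketch of the Successor Variety idea: for each prefix of a
// word, count how many distinct letters follow that prefix in a corpus;
// a peak in the counts suggests a stem boundary.
public class SuccessorVariety {
    public static void main(String[] args) {
        List<String> corpus = List.of(
            "cantar", "cantaba", "cantamos", "canto", "casa");
        String word = "cantaba";

        // Map each prefix to the set of distinct successor letters.
        Map<String, Set<Character>> successors = new HashMap<>();
        for (String w : corpus)
            for (int i = 1; i < w.length(); i++)
                successors.computeIfAbsent(w.substring(0, i),
                        k -> new HashSet<>()).add(w.charAt(i));

        // Print the successor variety of each prefix of the test word.
        for (int i = 1; i < word.length(); i++) {
            String prefix = word.substring(0, i);
            int variety = successors.getOrDefault(prefix, Set.of()).size();
            System.out.println(prefix + " -> " + variety);
        }
        // The peak (here after "canta") marks a candidate segmentation point.
    }
}
```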
|
Title: |
MODELLING
AND PERFORMANCE ANALYSIS OF WORKFLOW MANAGEMENT SYSTEMS USING TIMED
HIERARCHICAL COLOURED PETRI NETS |
Author(s): |
Khodakaram
Salimifard and Mike Wright |
Abstract: |
In this paper a modelling methodology for workflow
management systems based on coloured Petri nets is proposed. Using an
integration method, processes and resources are modelled at the same
abstraction level. A process is decomposed into task structures, whilst
human resources are considered at role level. Activity based costing is
combined with classical temporal analysis of workflow. The suitability of
the method has been tested using an application example |
|
Title: |
USING
SEMANTIC ANALYSIS AND NORM ANALYSIS TO MODEL ORGANISATIONS |
Author(s): |
Andy
Salter and Kecheng Liu |
Abstract: |
Organisations can be represented in the form of human agents and their
patterns of behaviour, which can be modelled using the method of Semantic
Analysis. Norms establish how and when an instance of a pattern of
behaviour (or affordance) will occur. The norms are identified using the
method of Norm Analysis, which comprises the techniques of responsibility
analysis, information identification, trigger analysis and norm
specification. This paper addresses how the results of Semantic Analysis
and Norm Analysis may be used to produce a semantic model representing the
behaviour of the organisation. The methods are illustrated using an
example of borrowing books from a library. |
|
Title: |
DEVELOPING
QUICK ITERATIVE PROCESS PROTOTYPING FOR PROJECT MANAGEMENT: LINKING ERP
AND BPE |
Author(s): |
Ryo
Sato and Kentaro Hori |
Abstract: |
ERP (enterprise resource planning package) needs a
methodology that assures newly designed business process can be really
implemented with an ERP. Such methodology should consider and integrate
several issues. They are the model of business process, that of components
in an ERP, their mutual relationship, and implementation procedure to
integrate and to realize the business process effectively. The quick
iterative process prototyping (quick IPP, for short) is a methodology that
integrates such affluent information resources, aiming for the design of
business process with ERP and related tools. Based on the experience to
develop quick IPP for MRP (material requirements planning), this paper
shows one for project management. TOC (theory of constraints) based design
of a business process for project management will be focused on. It is
shown how a business process of project management with ERP can be
engraved from R/3's information resources, and what and how information
tools are used. |
|
Title: |
USING
HOT-SPOT-DRIVEN APPROACH IN THE DEVELOPMENT OF A FRAMEWORK FOR MULTIMEDIA
PRESENTATION ON THE WEB |
Author(s): |
Khalid
Suliman Al-Tahat, Sufian Bin Idris, T. Mohd. T. Sembok and Mohamed Yousof |
Abstract: |
Frameworks can be seen as generative, since they are intended and well
suited to serve as the foundation for developing a number of applications
in the domain captured by the framework. A framework defines a high-level
language with which applications within a domain are created through
specialization. Specialization takes place at predefined points of
refinement called hot spots: the generic, flexible aspects and parts of a
framework that can easily be adapted to a specific need. Specialization is
accomplished through inheritance or composition. A well-designed framework
offers the domain-specific hot spots and the flexibility needed to adapt
them. Hot spots are realized by hook methods and hook classes, and
metapatterns express how the required flexibility, represented by the hot
spots, is achieved in a particular framework. We have adopted the hot-spot
approach in the development of a framework for multimedia presentation on
the Web; its adoption has helped us enhance the flexibility and
extensibility of the framework. This paper describes the use of a
hot-spot-driven approach in the development of a framework for multimedia
presentation on the Web, as well as our experience in using hot spots,
design patterns and metapatterns. |
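A minimal Java illustration of a hot spot realized by a hook method (the
template-method metapattern) may help; the media-presenter names are ours,
not taken from the authors' framework:

```java
// Minimal illustration of a hot spot realized by a hook method;
// names are ours, not from the authors' multimedia framework.
abstract class MediaPresenter {
    // Frozen spot: the fixed presentation workflow of the framework.
    public final void present(String resource) {
        open(resource);
        render(resource);   // hot spot: each media type adapts this step
        close(resource);
    }

    private void open(String r)  { System.out.println("opening " + r); }
    private void close(String r) { System.out.println("closing " + r); }

    // Hook method: the predefined point of refinement.
    protected abstract void render(String resource);
}

class ImagePresenter extends MediaPresenter {
    @Override protected void render(String r) {
        System.out.println("decoding and displaying image " + r);
    }
}

class AudioPresenter extends MediaPresenter {
    @Override protected void render(String r) {
        System.out.println("streaming audio " + r);
    }
}
```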
|
Area 4 - INTERNET COMPUTING AND ELECTRONIC COMMERCE
Title: |
MANAGING
XML-LINK INTEGRITY FOR STRUCTURED TECHNICAL DOCUMENTS |
Author(s): |
Abraham
Alvarez, Youssef Amghar and Richard Chbeir |
Abstract: |
Structured and hypermedia documents in manufacturing enterprises are
interlinked by different kinds of links; these documents play a crucial
role in describing the successive steps of a product in an industrial
context. Nowadays, one of the problems is preserving the referential link
integrity of a document's fragments. In shared environments, users perform
creation, editing, storage and querying operations, and the challenge is
to keep link references in a coherent state after such manipulations; a
classical example of the impact is the infamous "Error 404 file not
found". The main objective of this paper is to provide a generic
relationship validation mechanism to remedy this shortcoming, focusing on
referential link integrity aspects. To this end, some standard features of
XML, specifically the XLL specification (XLink & XPointer), are used
in this work as a support for integrity management. To illustrate these
concepts, we have chosen technical documentation as the document type. |
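As a rough sketch of the kind of validation involved, the following Java
program scans a document for XLink-style href attributes and reports
targets that no longer resolve. Restricting the check to local files is a
simplifying assumption of ours, not the paper's mechanism:

```java
import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.*;

// Hedged sketch of a referential-integrity check in the spirit of the
// paper: scan a document for XLink href attributes and report dangling
// targets. Local-file targets only; a real system would handle URIs.
public class LinkChecker {
    public static void main(String[] args) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true); // required for xlink:href lookups
        Document doc = dbf.newDocumentBuilder().parse(new File(args[0]));

        NodeList all = doc.getElementsByTagName("*");
        for (int i = 0; i < all.getLength(); i++) {
            Element e = (Element) all.item(i);
            // XLink references use the xlink:href attribute.
            String href = e.getAttributeNS("http://www.w3.org/1999/xlink", "href");
            if (href != null && !href.isEmpty() && !new File(href).exists())
                System.out.println("dangling link in <" + e.getTagName() + ">: " + href);
        }
    }
}
```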
|
Title: |
USABILITY
AND ACCESSIBILITY IN THE SPECIFICATION OF WEB SITES |
Author(s): |
Marta
Fernández de Arriba and José A. López Brugos |
Abstract: |
Most web sites present a serious lack of usability and accessibility. The
tools used for their development mix the contents of the pages with their
presentation, making their maintenance difficult and costly. This paper
discusses a markup language for the specification of usable and accessible
web sites (WUAML), based on XML technology, detailing its structure and
syntax, the target requirements achieved and the perceived benefits. |
|
Title: |
STAGED
IMPLEMENTATION OF E-BUSINESS NETWORKS THROUGH ERP |
Author(s): |
Colin
G. Ash and Janice M. Burn |
Abstract: |
Internet technologies offer an ERP based organisation the
opportunity to build interactive relationships with its business partners.
The Siemens case of e-Business evolution is viewed in terms of buy-side
and sell-side solutions and services that inter-relate. e-Business
solutions were seen to evolve in six stages with increasing business value
and network complexity; from infrastructure to e-marketplaces. By viewing
the Siemens case as a staged implementation, it may easily be evaluated in
terms of the attributes of the “virtual organizing” model. |
|
Title: |
AN
ENTERPRISE IT SECURITY DATA MODEL |
Author(s): |
Meletis
A. Belsis, Anthony N. Godwin and Leon Smalov |
Abstract: |
The issues relating to security have wider scope than the
technical details of computer system structure. In larger organisations
the security problems become more complex since there are more
requirements for data access management. The information associated with
security management needs to be well organised. In this paper we present
the first step in creating an Enterprise IT Security Data Model. The aim
of this research is to create a model that is able to capture the
information needed by the security management in the provision of adequate
security for an Enterprise. |
|
Title: |
INTERNET
TECHNOLOGY AS A BUSINESS TOOL |
Author(s): |
Sebastián
Bruque |
Abstract: |
Since the beginnings of the computing era people have
suggested that the implementation of Information Technology (IT) would
have a positive effect on firm performance. Indicators like productivity,
profitability and market share could be improved by these tools. Among
these technologies, firms can now use TCP/IP or Internet technologies for
strategic purposes but, until now, their real effects have been unclear.
In this work we try to provide some evidence about the impact of the
Internet on competitive advantage in a Spanish Industry. Also, we give
some insights that can help to understand the role of the Internet as a
competitive weapon in modern firms. |
|
Title: |
ITHAKI:
FAIR N-TRANSFERABLE ANONYMOUS CASH |
Author(s): |
Magdalena
Payeras Capellà, Josep Lluís Ferrer Gomila and Llorenç Huguet Rotger |
Abstract: |
Transferability is one of the ideal features of electronic
cash systems. Transferability means that users can transfer a coin
multiple times without verification by a TTP. In anonymous transferable
systems, enough identifying information must be presented in payments to
allow the identification of double spenders. Digital transferable coins
grow in size after each transaction due to the accumulation of this
identifying information. We present an N-transferable system in where the
transferability is limited to N transfers. Ntransferability limits the
highest size of coins and avoids the existence of a great number of
illegal copies of coins being transferred among users. However,
N-transferability may suppose a discrimination of the Nth receiver of the
coin if he cannot choose between transferring anonymously the coin or
depositing it in an identified stage. We present an on-line subprotocol
for anonymous exchange of N-transferred coins. The system is anonymous and
users are identified only if they double spend a coin or in case of
illegal activities (anonymity revocation). The presented scheme fulfils
the specific requirements of transferable systems and avoids specific
attacks by coalitions of users. |
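The growth property is easy to see in a data-structure sketch: each
transfer appends identifying information to the coin, and the N bound caps
that growth. This toy Java illustration is entirely our own and omits all
cryptography:

```java
import java.util.ArrayList;
import java.util.List;

// Data-structure sketch of the growth property discussed above: each
// transfer appends identifying information to the coin, so its size
// grows, and an N-transfer bound caps that growth. This is a toy
// illustration, not the paper's cryptographic protocol.
final class Coin {
    private final int maxTransfers;                 // the N in N-transferable
    private final List<String> endorsements = new ArrayList<>();

    Coin(int maxTransfers) { this.maxTransfers = maxTransfers; }

    // Record a transfer; the endorsement stands in for the identifying
    // information that would let a bank trace a double spender.
    void transfer(String endorsement) {
        if (endorsements.size() >= maxTransfers)
            throw new IllegalStateException("coin must be deposited or exchanged");
        endorsements.add(endorsement);
    }

    int size() { return endorsements.size(); }      // grows with each transfer
}
```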
|
Title: |
VIRTUAL
MALL OF E-COMMERCE WEB SITES |
Author(s): |
M.
F. Chen and M. K. Shan |
Abstract: |
In recent years, a lot of E-commerce web sites appear
quickly. Through the Internet, we get a great quantity of product
information on the WWW. Some meets our needs, but most doesn’t.
Recommender systems help finding out what we really interest. Although
many existing E-commerce web sites suggest products to customers, few
suggest browsing paths. In this paper, we propose a virtual mall that
E-commerce web sites can join as a shop. Via techniques of data mining,
shops in the mall own the abilities of recommending products and browsing
paths. And the recommendations cross shops in the mall. |
|
Title: |
RETHINKING
THE STRATEGY OF AMAZON.COM |
Author(s): |
Michael
S. H. Heng |
Abstract: |
This is an opinion paper arguing for a strategic shift in the business
model of Amazon.com. The serious challenge facing Amazon.com is that it
has not been able to convince the investment community that it can
generate profits in the long run, and the investors' doubt is well
grounded. This paper argues that Amazon should make a strategic shift to
operate as a provider of technical services and business consulting in the
area of business-to-consumer e-commerce. At the same time it should reduce
the range of items sold on-line to, say, books and CDs, and treat this
part of its business as a kind of research and development activity. Its
avant-garde status as an e-commerce innovator and its track record in
customer satisfaction have tremendous market value and can serve as an
"open sesame" to the huge market of e-commerce consulting. Its
continuing survival and (hopefully) future profitability hold deeper
implications for other dot.com companies and B2C e-commerce. |
|
Title: |
THE
DESIGN OF AN XML E-BUSINESS APPLICATIONS FRAMEWORK |
Author(s): |
I.
Hoyle, L. Sun and S. J. Rees |
Abstract: |
The business-to-business e-market is expected to grow
considerably over the next few years. This growth will require on-line
transactions and collaborations involving small to medium sized
enterprises, which can take advantage of system integration previously
centred on the major organisations. The Internet is considered to be the
supporting technology, and its language XML (Extensible Markup Language)
will take an important part of e-business applications development. In
order to fully utilise the capabilities of XML and improve the efficiency
for business data transactions, a XML e-Business application framework
(XEBAF), which is based upon free XML technology such as XML Document
Model Application Programming Interface (DOM API), XML schema definition
language, Extensible Style Language (XSLT) and XPath, has been designed.
XML Schema and XSL stylesheet modelling techniques have been proposed. To
use this framework, the business data and associated rules can be defined
and structured in a flexible and reusable manner. |
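As an example of the standard building blocks such a framework rests on,
the following Java sketch applies an XSLT stylesheet to a business
document using the JAXP transformation API; the file names are
placeholders of ours, not artefacts of XEBAF:

```java
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

// Hedged sketch of a standard building block for XML-based e-business
// frameworks: applying an XSLT stylesheet to a business document with
// the Java XML APIs. File names are placeholders.
public class CatalogTransform {
    public static void main(String[] args) throws Exception {
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource("order-to-invoice.xsl"));
        // The stylesheet, not the application code, carries the mapping
        // rules, which keeps the business rules reusable and flexible.
        t.transform(new StreamSource("order.xml"),
                    new StreamResult("invoice.xml"));
    }
}
```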
|
Title: |
E-COMMERCE
BUSINESS PRACTICES IN THE EU |
Author(s): |
Hamid
Jahankhani and Solomon A. Alexis |
Abstract: |
This paper describes and evaluates the e-commerce issues addressed by the
EU legislative framework (the Directives), and examines the extent of
their influence on e-commerce business practices and the factors that
drive e-commerce in the EU. The countdown has started for the
implementation of, and compliance with, the various EU initiatives, and
although opinion is divided over the merits of regulation, the issues of
privacy, consumer protection and progress will continue to be in the
limelight. Various consumer groups have argued that the EU initiatives
provide greater protection for online shoppers, while critics claim that
they halt and hinder the growth of the e-economy in Europe by making it
complex and risky for smaller firms to trade online. Although many
European on-line businesses are aware of the EU Directives relating to
e-commerce, they have not put plans in place to act in accordance with
them. Today there are no fewer than fifteen different initiatives, and
although the Directives pose problems for businesses that operate in the
Euro-zone, there is an intense effort to minimise the perceived
disadvantages that they can cause. What the EU has done for e-commerce in
the last decade has not been done anywhere else in the world. |
|
Title: |
TOWARDS
EXTENDED PRICE MODELS IN XML STANDARDS FOR ELECTRONIC PRODUCT CATALOGS |
Author(s): |
Oliver
Kelkar, Joerg Leukel and Volker Schmitz |
Abstract: |
The expansion of interorganizational electronic business
(business-to-business) means system-to-system communication between
business partners without any manual interaction. This has led to the
development of new standards for the exchange of electronic product
catalogs (e-catalogs), which are the starting point for
business-to-business trading. E-catalogs contain various kinds of
information about products, and price information is essential: prices are
used for buying decisions and for the order transactions that follow.
While simple price models are often sufficient for describing indirect
goods (e.g. office supplies), other goods and lines of business make
higher demands. In this paper we examine what price information is
contained in commercial XML standards for the exchange of product catalog
data. For that purpose we bring together the different implicit price
models of the examined catalog standards and provide a generalized
model. |
|
Title: |
HIERARCHICAL
VISUALIZATION IN A SIMULATION-BASED EDUCATIONAL MULTIMEDIA WEB SYSTEM |
Author(s): |
Juan
de Lara and Manuel Alfonseca |
Abstract: |
This paper presents a system that generates web documents (courses,
presentations or articles) enriched with interactive simulations and other
hypermedia elements. Simulations are described using an object-oriented
continuous simulation language called OOCSMP. This language is
complemented by two higher language layers (SODA-1L and SODA-2L): SODA-1L
describes pages or slides, while SODA-2L builds courses, articles or
presentations. A compiler (C-OOL) has been programmed to generate Java
applets for the simulation models and HTML pages for the document pages.
The paper focuses on new capabilities added to OOCSMP to handle different
graphical detail levels of the system being simulated. Different views are
shown as cascaded windows, whose multimedia elements can be arranged and
synchronized with the simulation execution. The new capabilities have been
tested by extending a previously developed course on electronics. |
|
Title: |
NORMATIVE
SERVICES FOR SELF-ADAPTIVE SOFTWARE TO SUPPORT DEPENDABLE ENTERPRISE
INFORMATION SYSTEMS |
Author(s): |
A.
Laws, M. Allen and A. Taleb-Bendiab |
Abstract: |
The development and application of software engineering
practices over the last thirty years have undoubtedly resulted in the
production of significantly improved software, yet the majority of modern
software systems remain intrinsically fragile. Nowhere is this more
apparent than in those systems that attempt to model real world
situations. Here, the abstractions and assumptions made in attempting to
capture the unbounded, unspecifiable richness of the real world in the
finite and static medium of software, inevitably result in systems that
are deeply riven with uncertainty. Such systems remain highly vulnerable
to environmental change and consequently require continuing adaptation. In
this paper, the authors concede the inevitability of such uncertainty
and argue that the key to the problem lies in the adaptability of the
software system. Moreover, given the problems inherent in manual software
adaptation, it is further contended that imbuing the software system with
a degree of autonomy presents a seductive means to cope with adversity and
uncertainty. For theoretical support of this claim, the authors make
recourse to the field of cybernetics, an area well-versed in the problems
of adaptive systems. Further support is drawn from the emerging discipline
of self-adaptive software, which seeks to devolve some of the
responsibility for maintenance activity to the software itself through the
use of a federated normative systems approach and systems’
self-awareness. The paper presents a brief review of the recent work in
self-adaptive software and an overview of multi-agent systems. These
notions are then combined using the managerial cybernetics of Beer’s
Viable System Model (VSM) as a conceptual guide, to underpin the
development of normative services for the control and management of
complex adaptive software. The paper continues with the presentation of a
framework and design for intelligent adaptive systems and concludes by
providing support for this approach through examples drawn from an
on-going research project in evolving dependable services provision. |
|
Title: |
DIGITAL
TIMESTAMPS FOR DISPUTE SETTLEMENT IN ELECTRONIC COMMERCE: GENERATION,
VERIFICATION, AND RENEWAL |
Author(s): |
Kanta
Matsuura and Hideki Imai |
Abstract: |
Digital notary is receiving more and more attention as a
social infrastructure in the network society. Digital time-stamping is a
simple but very important form of the notary, and can be used to provide
long-term authenticity. Its function is essential for dispute settlement
in electronic commerce. This importance motivated us to have a clear
survey on how to generate/verify digital time-stamps. In order to make
longer use of digital time-stamping functions, we may have to update or
renew our timestamps. The renewal must be completed before the expiry of
the current time-stamp. However, the congestion problem of the renewal
request has not yet been studied well. So the survey is followed by an
investigation of renewal feasibility. The investigation uses a simple
model in which the service provider’s mind and the users’ mind are
independently represented by two different parameters. The condition for
accepting renewal requests without queueing is discussed by using the two
parameters. Another issue will occur when we submit high-dimensional
contents to be time-stamped and the components of the contents are updated
independently in later time; memory requirement on the client depends on
the submission strategy. A discussion on different strategies is given as
well. |
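A toy Java sketch of the generate/verify/renew life cycle may help: a
time-stamp binds a document hash to an issue time and an expiry, and
verification fails once the document changes or the token expires. A real
service would cryptographically sign the token; this sketch, entirely our
own, only hashes:

```java
import java.security.MessageDigest;
import java.time.Instant;
import java.util.HexFormat;

// Toy sketch of the generate/verify/renew life cycle discussed above:
// a time-stamp binds a document hash to a time; renewal must happen
// before expiry. A real service would sign the token; we only hash.
public class TimeStampDemo {
    record Token(String docHash, Instant issuedAt, Instant expiresAt) {}

    static String sha256(byte[] data) throws Exception {
        return HexFormat.of().formatHex(
                MessageDigest.getInstance("SHA-256").digest(data));
    }

    static Token issue(byte[] document, long validDays) throws Exception {
        Instant now = Instant.now();
        return new Token(sha256(document), now,
                now.plusSeconds(validDays * 86_400));
    }

    // Verification: the document still hashes to the stamped value and
    // the token has not expired; otherwise a renewal is due.
    static boolean verify(Token t, byte[] document) throws Exception {
        return t.docHash().equals(sha256(document))
                && Instant.now().isBefore(t.expiresAt());
    }

    public static void main(String[] args) throws Exception {
        Token t = issue("contract text".getBytes(), 365);
        System.out.println("valid: " + verify(t, "contract text".getBytes()));
    }
}
```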
|
Title: |
AUTOMATIC
VERIFICATION OF SECURITY IN PAYMENT PROTOCOLS FOR ELECTRONIC COMMERCE |
Author(s): |
M.
Panti, L. Spalazzi, S. Tacconi and S. Valenti |
Abstract: |
In order to make secure transactions over computer networks,
various cryptographic protocols have been proposed but, because of
subtleties involved in their design, many of them have been shown to have
flaws, even a long time after their publication. For this reason, several
automatic verification methods for analyzing these protocols have been
devised. The aim of this paper is to present a methodology for verifying
security requirements of electronic payment protocols by means of NuSMV, a
symbolic model checker. Our work principally focus on formal
representation of security requirements. Indeed, we propose an extension
of the correspondence property, so far used only for authentication, to
other requirements as confidentiality and integrity. These are the basic
security requirements of payment protocols for electronic commerce. We
illustrate as case study a variant of the SET protocol proposed by Lu
& Smolka. This variant has been formally verified by Ly & Smolka
and considered secure. Conversely, we have discovered two attacks that
allow a dishonest user to purchase a good debiting the amount to another
user. |
|
Title: |
A
METHOD FOR WIS CENTERED ON USERS GOALS |
Author(s): |
Nathalie
Petit |
Abstract: |
A Web information system (WIS) (Lowe, 1999) is an
information system (IS) which enables users, via the Web, to access
complex data and sophisticated interactive services. Examples include
large e-commerce sites and organizational IS based on an intranet. The
method here will be applied to an educational intranet. An intranet, like
the Internet, is a system used by different users with different levels of
knowledge and goals. Hence, there exists a diversity of goals, of
information formalisation, which leads to the difficulty of modeling the
total amount of information and processing accessible from the intranet.
The method described here will therefore concern itself with users’
goals in order to model the intranet. |
|
Title: |
A
SEMI-UNIVERSAL E-COMMERCE AGENT |
Author(s): |
Aleksander
Pivk and Matjaz Gams |
Abstract: |
A universal e-commerce agent should provide commerce
functions through the Internet by accessing arbitrary e-commerce sites and
on this basis offer intelligent information services on its own. Our
system ShinA (SHoppINg Assistant) is a semi-automatic e-commerce agent. It
can enter an arbitrary e-commerce site by observing a human user
performing the first query. By understanding key concepts of the first
query, ShinA performs following queries by other users. In a consequence,
a user enters ShinA by asking for a particular item, and ShinA provides
all relevant e-commerce options from various providers. The major
advantage of ShinA is a semi-automatic creation of a wrapper around a
particular e-commerce site demanding minimal human
interaction/corrections. ShinA is a successor of the EMA (EMployment
Agent) prototype, which performs similar functions in job queries over the
Internet. |
|
Title: |
QUOTES:
A NEGOTIATION TOOL FOR INDUSTRIAL E-PROCUREMENT |
Author(s): |
A.
Reyes-Moro, J.A. Rodríguez-Aguilar, M. López-Sánchez, J. Cerquides and
D. Gutierrez-Magallanes |
Abstract: |
The sourcing process of multiple goods or services usually
involves complex negotiation (via telephone, fax, etc) that includes
discussion of product features as well as quality, service and
availability issues. Currently, this is a high-cost process due to the
scarce use of tools that streamline this communication and assist
purchasing managers’ decision-making. With the advent of internet-based
technologies, it becomes feasible the idea of an affordable tool that
enables to maintain an assisted, fluid, on-line dialog at virtually no
cost and wherever your providers are. Consequently, several commercial
systems to support on-line negotiations become available. However there is
still a need that these systems incorporate effective decision support
techniques. This article presents Quotes as iSOCO’s e-solution for
strategic sourcing that incorporates Artificial Intelligence (AI) based
techniques that successfully address previous limitations within a single
and coherent framework |
|
Title: |
AN
AUTOMATED APPROACH TO QUALITY-AWARE WEB APPLICATIONS |
Author(s): |
Antonio
Ruiz, Rafael Corchuelo and Amador Durán |
Abstract: |
Multi–Organisational Web–Based Systems (MOWS) are
becoming very popular in the Internet world. It would be desirable for
such systems to be quality–aware, and this feature turns the problem of
selecting the web services of which they are composed into a complex task.
The main reason is that the catalog of available services that meet a set
of functional requirements may be quite large, and it is subject to
unexpected changes due to the inherent volatility of the Internet. Thus,
such systems need an infrastructure able to select the best services at
run time so that they can deliver their functionality at the best possible
quality level. In this article, we present a proposal that aims at being
the core of a run–time system able to support the construction of
quality–aware MOWS. |
|
Title: |
IM@GIX |
Author(s): |
Carlos
Serrão and Joaquim Marques |
Abstract: |
This paper describes the usage of a specific web application designed for
the electronic commerce of digital still images, using some
state-of-the-art technologies in the field of digital imaging, namely
copyright protection. Still images are not only traded on this platform;
value is also added to the images through cataloguing, metadata and
watermark insertion. This work is performed using some auxiliary tools,
which are also described in this paper. The paper also proposes and
discusses a methodology for streamlining the production of still digital
image content. The proposed methodology, called DIGIPIPE, encompasses
several steps ranging from image digitization to image trading, without
omitting the image copyright protection procedures. |
|
Title: |
PROFILE
NEGOTIATION REQUIREMENTS IN A MOBILE MIDDLEWARE SERVICE ENVIRONMENT |
Author(s): |
Markus
Sihvonen |
Abstract: |
The MExE service environment is a standardized execution environment for
downloadable applications in 3rd-generation mobile phones and networks.
The key feature that enables the downloading of services is profile
platform negotiation in the service environment. The profile platform
description is composed of a user profile together with capability and
content information; based on the profile platform description of a mobile
phone and network, downloadable services are tailored to mobile phones.
The research problem of this paper is the profile negotiation requirements
of a MExE service environment. The research is based on a constructive
study of the related publications and technologies, and the results are
derived from an abstractive analysis of the available material. The
primary conclusions of the study are that both the MExE user equipment and
a MExE application must have registers which describe, respectively, the
platform of the MExE user equipment and the requirements imposed on that
platform by a downloadable application. The registers must be accessible
via a URI, storable in multiple locations and retrievable whenever
needed. |
|
Title: |
INTELLIGENT
AGENT-BASED FRAMEWORK FOR MINING CUSTOMER BUYING HABITS IN E-COMMERCE |
Author(s): |
Qiubang
Li and Rajiv Khosla |
Abstract: |
Predicting and profiling customer buying habits in e-commerce is without
doubt a competitive advantage for an e-business sponsor. This paper
combines collaborative filtering and association rules to set up an
agent-based framework for predicting customers' buying habits.
Applications of the model range from customizing shopping malls to
customizing banking and finance products on the Internet. |
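As a minimal illustration of the collaborative-filtering side, the
following Java sketch recommends to a customer the items bought by the
most similar other customer, using cosine similarity over purchase counts.
The data and names are our assumptions, not the authors' agents:

```java
import java.util.*;

// Minimal user-based collaborative-filtering sketch: recommend items
// bought by the most similar customer. Toy data, our illustration only.
public class BuyingHabits {
    // Cosine similarity between two purchase-count vectors.
    static double cosine(Map<String, Integer> a, Map<String, Integer> b) {
        double dot = 0, na = 0, nb = 0;
        for (var e : a.entrySet())
            dot += e.getValue() * b.getOrDefault(e.getKey(), 0);
        for (int v : a.values()) na += (double) v * v;
        for (int v : b.values()) nb += (double) v * v;
        return (na == 0 || nb == 0) ? 0 : dot / Math.sqrt(na * nb);
    }

    public static void main(String[] args) {
        Map<String, Map<String, Integer>> purchases = Map.of(
            "alice", Map.of("book", 2, "cd", 1),
            "bob",   Map.of("book", 1, "dvd", 3),
            "carol", Map.of("book", 2, "cd", 2, "dvd", 1));

        // Find the customer most similar to alice and suggest the items
        // that customer bought but alice has not.
        var target = purchases.get("alice");
        purchases.entrySet().stream()
            .filter(e -> !e.getKey().equals("alice"))
            .max(Comparator.comparingDouble(e -> cosine(target, e.getValue())))
            .ifPresent(e -> e.getValue().keySet().stream()
                .filter(item -> !target.containsKey(item))
                .forEach(item -> System.out.println("recommend: " + item)));
    }
}
```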
|
Title: |
THE
"SHARED DATA APPROACH" TO THE INTEGRATION OF DYNAMIC BUSINESS
ENTERPRISES |
Author(s): |
Trevor
Burbridge, Jonathan Mitchener and Ben Strulo |
Abstract: |
The bright new world of e-Commerce requires the business
information systems of diverse enterprises to effectively inter-operate.
This inter-operation must be continuously maintained within highly dynamic
communities. New practices and products are required to span the gap
between the complex requirements of business integration and the desire to
implement systems that are low in development costs and evolve flexibly
with the business rather than hinder its development and strategy. We
present a novel approach to this inter-operation based on sharing state
rather than passing messages. |
|
Title: |
AN
E-SERVICE INFRASTRUCTURE FOR INNOVATION EXPLOITATION AND TECHNOLOGY
TRANSFER: THE DILEMMA PROJECT |
Author(s): |
Anastasia
Constantinou, Vassilios Tsakalos, Philippos Koutsakas, Dimitrios
Tektonidis and Adamantios Koumpis |
Abstract: |
Mediation services in the domains of innovation exploitation
and technology transfer (partially based on information supply) are very
often provided for free since they are subsidised by a public body. This
situation often influences the quality of the services provided and in the
longer term undermines the very existence of these activities (taking also
into account that public funding is not for ever). The main characteristic
of the DILEMMA system presented in the paper is its flexibility to combine
information entities and provision procedures in order to build compound
service (i.e. meta-service) and service packages (bundles), oriented to
serve clients with highly differentiated needs related to content,
functionality and costs. The facilitation of service management, based
upon user behavioural patterns, forms the basis for assessing the added
value of the service. A billing architecture enables the usage-driven
charging of the service. |
|
Title: |
AN
E-COMMERCE MODEL FOR SMALL AND MEDIUM ENTERPRISES |
Author(s): |
F.
J. García, I. Borrego, M. J. Hernández, A. B. Gil and M. A. Laguna |
Abstract: |
In this work, a solution for bringing Small and Medium Enterprises (SMEs)
onto the virtual commerce bandwagon is presented. The proposed e-commerce
model introduces new roles in the virtual commerce sphere, where an SME
plays an active part because it controls its own business instead of
yielding this responsibility to third-party enterprises specialised in
information systems. The e-commerce model consists of three main elements:
a) a trading area based on product catalogues (e-catalogues); b) SMEs,
represented by a catalogue-designer software tool that allows the
definition, publication and updating of a catalogue of products; and c) a
virtual commerce site, represented by a specialised e-commerce web server,
which provides end users with services such as searching for a product in
any published catalogue, shopping cart management, selling certificates,
navigation help and so on, through a uniform, intuitive interface. This
paper presents the main agents of this model, which defines a new
perspective in the e-commerce area that can be viewed as a hybrid B2B/B2C
model between the supplier and the e-commerce server (B2B dimension) and
between the server and the end users (B2C dimension). |
|
Title: |
DISTRIBUTED
ONLINE DOCTOR SURGERY |
Author(s): |
Hamid
Jahankhani and Pari Jahankhani |
Abstract: |
This paper reports on the redesign of the existing manual system of a
doctor's surgery into a computerised system that takes advantage of the
latest technologies and allows patients to interact better with the
system. The doctor's surgery plays a major role in human life; over the
years we have seen drastic changes in the treatment of patients in
surgeries, yet little change in the structure of the system as a whole,
and many surgeries still use a manual, paper-based system for their
transactions. The recent rapid development of web technology and the
growth of distributed processing seem to have been applied mainly to
commercial business; fields such as medical treatment appear to have
fallen behind technologically, with inefficient and ineffective services
provided to patients as a consequence. The new prototype system has been
designed using object-oriented methodology and implemented mainly in Java
(RMI, SQL, servlets and other Java packages) for the communication server
and the web site; for the end-user interface to the database in the
surgery, Oracle 7 and the Developer 2000 application were used. The
implemented system allows patients to carry out appointment transactions
(create, query, delete) and to communicate with the doctor via the web
site, which is connected to the Oracle server in the surgery. The web site
provides all the necessary details and information about the surgery and
the practice. The final prototype utilises distributed technology and
builds upon the research carried out. |
|
Title: |
ACCESSING
AND USING INTERNET SERVICES FROM JAVA-ENABLED HANDHELD WIRELESS DEVICES |
Author(s): |
Qusay
H. Mahmoud and Luminita Vasiu |
Abstract: |
The Java 2 Micro Edition does not support all the Java language and
virtual machine features, either because they are too expensive to
implement or because their presence would raise security issues. For
example, there is no support for TCP sockets or object serialization, and
consequently no support for Remote Method Invocation (RMI) either. The
benefit of wireless applications, however, becomes apparent and useful for
organizations when critical business data and Internet resources can be
accessed efficiently from anywhere. In this paper we give a brief overview
of the Java 2 Micro Edition (J2ME); we then discuss the network
programming model and explain how to invoke CGI scripts and servlets from
devices with limited resources; and finally we discuss our proposed
mediator-based approach for accessing Internet services and distributed
object systems from Java-enabled wireless handheld devices. |
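The HTTP-based model can be sketched briefly: lacking sockets and RMI, a
MIDlet reaches a servlet or CGI script through the Generic Connection
Framework. The code below is a minimal illustration of ours, not the
paper's mediator, and the URL is a placeholder:

```java
import java.io.InputStream;
import javax.microedition.io.Connector;
import javax.microedition.io.HttpConnection;

// Sketch of the MIDP network programming model mentioned above: since
// J2ME/CLDC offers no sockets or RMI, an application reaches a servlet
// or CGI script over HTTP via the Generic Connection Framework.
public class ServletClient {
    public static String fetch(String url) throws Exception {
        HttpConnection c = (HttpConnection) Connector.open(url);
        try {
            c.setRequestMethod(HttpConnection.GET);
            InputStream in = c.openInputStream();
            StringBuffer sb = new StringBuffer();
            int ch;
            while ((ch = in.read()) != -1) sb.append((char) ch);
            in.close();
            return sb.toString();   // response produced by the servlet
        } finally {
            c.close();
        }
    }
}
```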
|
Title: |
E-PROCUREMENT
IN A RURAL AREA |
Author(s): |
Mike
Rogers, Thomas Chesney and Scott Raeburn |
Abstract: |
E-procurement has great potential to become a major way of
transacting in rural areas such as the Scottish Borders where the variety
of shops and services offered is limited. The Scottish Borders Council has
selected e-procurement as a means to reduce costs, improve process and
service delivery, meet Government targets for electronic purchasing and,
not least, use e-procurement to assist local businesses in this
economically depressed region to become e-commerce capable, thus improving
their ability to grow and create wealth and employment in the region. This
paper reports the results of research conducted to establish whether
recruiting suppliers to e-commerce is likely to be problematic for the
Council. Findings show that in terms of connectivity the Scottish Borders
appear to be ahead of the rest of the UK, which is encouraging for the
Borders economy and for the Council's e-procurement strategy, although no
data is available on actual usage of the Internet, only on whether firms
have a connection. Fewer of the smaller firms use the Internet as a
marketing tool; that is, they do not even have a web site with
information-only content. The use of e-commerce among Council suppliers is
well below the UK average; a number of factors that appear to be holding
them back are discussed. |
|
Title: |
A
SYSTEM BASED ON PREFERENCES FOR AID TO THE PURCHASE DECISION |
Author(s): |
Irene
Luque Ruiz, Enrique López Espinosa, Gonzalo Cerruela García and Miguel
Ángel Gómez-Nieto |
Abstract: |
The systems of business through Internet have among its
objectives the sale of its products, developing for it Web sites more or
less complex in which is shown the products publicity and is permitted to
carry out the purchase of these products. The great apparition of these
systems does that the user have problems in deciding which is the best
offering from among all the purchase options found in the network. In this
paper a system of aid to the purchase decision is described based on the
purchase preferences of the user. This system facilitates to the user the
purchase process advising him the possible purchase sites of the desired
products based on a specific set of preferences selected dynamically by
the own user during the purchase process. |
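A minimal sketch of preference-based ranking in this spirit: each shop's
offer is scored against user-chosen weights and the offers are sorted. The
criteria, weights and names are our assumptions, not the system's actual
algorithm:

```java
import java.util.*;

// Toy sketch of preference-based ranking: each offer is scored against
// user-selected preference weights and offers are ranked. Our own
// illustration; criteria and weights are assumptions.
public class PurchaseAdvisor {
    record Offer(String shop, double price, double deliveryDays, double rating) {}

    // Higher is better; the weights are chosen dynamically by the user.
    static double score(Offer o, double wPrice, double wDelivery, double wRating) {
        return wPrice * (1.0 / o.price())
             + wDelivery * (1.0 / o.deliveryDays())
             + wRating * o.rating();
    }

    public static void main(String[] args) {
        List<Offer> offers = List.of(
            new Offer("shopA", 19.90, 2, 4.5),
            new Offer("shopB", 17.50, 7, 3.8));
        // This user cares mostly about price, a little about delivery.
        offers.stream()
              .sorted(Comparator.comparingDouble(
                      (Offer o) -> -score(o, 0.7, 0.2, 0.1)))
              .forEach(o -> System.out.println(o.shop()));
    }
}
```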
|
Title: |
DESIGN
AND IMPLEMENTATION OF A MESSAGE SERVICE HANDLER FOR EBXML |
Author(s): |
Eun-Jung
Song, Ho-Song Lee and Taeck-Geun Kwon |
Abstract: |
We have implemented a Message Service Handler (MSH) engine to support
Transport, Routing and Packaging (TRP) for the Electronic Business XML
(ebXML) message service, a communication protocol for the exchange of
eXtensible Markup Language (XML) based messages. Our Java-based MSH engine
has a simple, unified interface based on Remote Method Invocation (RMI)
and plays the role of a basic component for a new electronic commerce
infrastructure. Thanks to RMI, applications can be loosely coupled and run
remotely with the MSH, which means that e-commerce business applications
can be implemented efficiently without awareness of the messaging
infrastructure. Using our MSH engine, which is compliant with the ebXML
TRP Specification (version 1.0), we have also implemented an application
for the submission of papers as XML-based documents; the application
requires reliable message exchange. |
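To illustrate the loose coupling that RMI affords, here is a hedged sketch
of what an RMI-facing MSH interface could look like; the method names and
signatures are our assumptions, not the ebXML TRP specification or the
authors' engine:

```java
import java.rmi.Remote;
import java.rmi.RemoteException;

// Hedged sketch of an RMI-facing MSH interface; method names are our
// assumptions, not the ebXML TRP or the authors' actual signatures.
// A business application stays loosely coupled: it calls the handler
// remotely and never touches the packaging or transport details.
public interface MessageServiceHandler extends Remote {
    // Submit an XML payload for reliable delivery to a trading partner.
    String send(String toPartyId, String service, String action,
                byte[] xmlPayload) throws RemoteException;

    // Poll for a received message addressed to this party, if any.
    byte[] receive(String partyId) throws RemoteException;
}
```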
|
Title: |
DESIGN
REQUIREMENTS FOR MOBILE AGENT SYSTEMS |
Author(s): |
Luminita
Vasiu and Alan Murphy |
Abstract: |
Mobile agents is an emerging technology that is gaining in
the field of distributed computing. In the last few years, there has been
an enthusiastic interest in mobile agents and several platforms have been
developed. This paper discusses the design requirements for mobile agent
systems at two levels: systemlevel and language level. System-level issues
like the provision of agent mobility, global naming and security are
mainly encountered in the development of the runtime environments for
mobile agents. Language-level issues, such as agent programming models and
primitives arise in providing support for mobile agent programming, mainly
at the library level. The paper identifies the design requirements at both
levels and illustrates the different ways developers are addressing them. |
|
Title: |
THIN
SERVERS - AN ARCHITECTURE TO SUPPORT ARBITRARY PLACEMENT OF COMPUTATION IN
THE INTERNET |
Author(s): |
J.C.
Diaz y Carballo, A. Dearle and R. Connor |
Abstract: |
The Internet is experiencing an overwhelming growth that
will have a negative impact on its performance and quality of service. In
this paper we describe a new architecture that offers a better use of
Internet resources and help improve security at the server nodes. The Thin
Server Architecture aims to dynamically push code and data in arbitrary
locations throughout the Internet. We give implementation techniques
related to code mobility, the choices we have made for our architecture,
our on-going implementation work, and future directions. |
|
Title: |
MANAGING
SECURITY IN ELECTRONIC BUSINESS |
Author(s): |
Kaiyin
Huang and Kaidong Huang |
Abstract: |
Managing security is a complex social and technological
task. To develop an effective security policy, the distinction between
internal and external organisation is important since the exercise of
power is various; the distinction between technical and human functions is
necessary since they require different implementations. These distinctions
form a framework of inter-organisational systems (FIOS) and the security
issues can be organised into four sub-areas: safety in communication,
safety in IT resources, securing human resources, and protection from
business environment. |
|
Title: |
A
GLOBAL MODEL OF ELECTRONIC COMMERCE |
Author(s): |
Claudine
Toffolon and Salem Dakhli |
Abstract: |
Information technology is drastically revolutionizing the
global economy and society. In particular, all business activities are
nowadays information based. Electronic commerce emergence enabled by the
Web technology is among the most important transformations of economy
which reshape the ways people and firms shop and carry out business
value-adding activities. The term “new economy” has been coined by
academics and practitioners to stress the importance of potential changes
in economy induced by electronic commerce. Nevertheless, some authors
assert that the contribution of electronic commerce to economic prosperity
is negligible and little more than a bubble. We think that a model of
electronic commerce is needed in order to understand its scope, its
components and assess its real impacts on economy. In this paper, we
propose a global model of electronic commerce which provide instruments to
address three main aspects. On the one hand, its identifies actors of
electronic commerce, their roles and their relationships. On the other
hand, it explains the differences between electronic commerce and
traditional commerce. Finally, it permits analyzing the new economic rules
governing electronic commerce. |
|
Title: |
SRIUI
MODEL: A DESIGN-CENTRIC APPROACH TO USER INTERFACES FOR SHOPPING CARTS
WITH EMPHASIS ON INTELLIGENCE |
Author(s): |
C.
Chandramouli |
Abstract: |
The seamless application of information and communication technology from
its point of origin to its end is the most important aspect of e-commerce.
The transition from conventional marketing to online marketing is the most
difficult step taken by any company that intends to promote its business
online. With this giant step, most companies make a few fundamental mistakes
while designing user interfaces. There is a large difference between
retailing "off the shelf" and retailing online, both in terms of sales
achieved and sales lost. As the World Wide Web matures, it is very hard to
design user interfaces that satisfy the user in terms of functionality and
at the same time provide a pleasurable and satisfying experience when making
online purchases. The paper suggests a model for implementing user
interfaces, termed the SRIUI (static response intelligent user interface)
model. Revenue returns are in most cases seen only when the design of user
interfaces is intuitive to the user and, at the same time, innovative. These
demands make the design and realization of a shopping cart interface on the
web cumbersome and difficult. The SRIUI model uses the inherent principles
of browsers but goes one step further, allowing designers to apply the
principles used in designing commercial software. The intelligence of a
shopping cart interface is defined, and the basic factors governing ease of
use and design principles for shopping carts are discussed in the paper. The
paper compares the SRIUI model with present-day shopping carts and rates
them according to the intelligence of the interface. |
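The abstract does not give the rating scheme itself; as a sketch only, an
interface-intelligence rating could be a weighted score over ease-of-use
factors. The factor names and weights below are purely illustrative and not
taken from the paper.

    import java.util.Map;

    // Hypothetical weighted-score rating for a shopping-cart interface.
    public class CartRating {
        static double rate(Map<String, Double> scores, Map<String, Double> weights) {
            double total = 0.0, weightSum = 0.0;
            for (Map.Entry<String, Double> w : weights.entrySet()) {
                total += w.getValue() * scores.getOrDefault(w.getKey(), 0.0);
                weightSum += w.getValue();
            }
            return weightSum == 0.0 ? 0.0 : total / weightSum;  // normalised 0..1
        }

        public static void main(String[] args) {
            Map<String, Double> weights = Map.of("responsiveness", 0.4,
                                                 "feedback", 0.3,
                                                 "error-recovery", 0.3);
            Map<String, Double> scores  = Map.of("responsiveness", 0.9,
                                                 "feedback", 0.6,
                                                 "error-recovery", 0.7);
            System.out.printf("intelligence rating: %.2f%n", rate(scores, weights));
        }
    }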
|
Title: |
THE
CONCEPTS OF GRATITUDE, DELEGATION AND AGREEMENT IN EC ENVIRONMENTS |
Author(s): |
Paulo
Novais, Luís Brito and José Neves |
Abstract: |
Logic presents itself as a major tool in the development of formal
descriptions for agent-based systems. Indeed, Logic Programming, and
especially Extended Logic Programming, provides a powerful tool for the
development of such systems. Electronic Commerce (EC) poses new challenges
in the areas of Knowledge Representation and Reasoning and formal modelling,
where specific agent architectures are mandatory. Although logic has been
successfully used in the areas of argumentation (especially legal
argumentation), the preceding reasoning process (pre-argumentative
reasoning) is rarely stated. In EC scenarios, such a course of action takes
into account human-like features such as gratitude, delegation and agreement
(embedded with temporal considerations, clause precedence and incomplete
information), leading to feasible EC systems. |
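As a purely illustrative example of the pre-argumentative rules the abstract
alludes to, a gratitude relation might be written in Extended Logic
Programming style, with default negation and temporal arguments; the
predicate names and conditions are our own, not the paper's:

\[
\mathit{grateful}(A, B, T_2) \leftarrow \mathit{favour}(B, A, T_1) \;\wedge\; \mathit{not}\ \mathit{repaid}(A, B, T_2) \;\wedge\; T_1 < T_2
\]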
|
Title: |
INTEGRATING
MOBILE AGENT INFRASTRUCTURES IN OPERATIONAL ERP SYSTEMS |
Author(s): |
Apostolos
Vontas, Philippos Koutsakas, Christina Athanasopoulou, Adamantios Koumpis,
Panos Hatzaras, Yannis Manolopoulos and Michael Vassilakopoulos |
Abstract: |
In this paper we present our most recent work carried out in the wider
context of the IST-ADRENALIN project, to facilitate the formation and
lifecycle management of networked enterprises. The project's focus is on
designing and building an execution “kernel” for mobile agent applications
written in Java, using the Aglets platform, and integrating it with any
existing ERP system. For integrating mobile agent infrastructures in
operational ERP systems, we developed a generic model, which we call the
Mobile Agent Model, and which encompasses two parts, concerning the Agent
Logic and the Agent Proprietary Data, analysed in more detail in this paper.
This model is branch independent and builds on the Adrenalin Company
concept, where the fractal and information supply chain concepts are
combined, proposing that every process, activity and resource can be defined
by the “triangle” of executor, controller and co-ordinator tasks, supporting
characteristics of self-similarity, self-organisation, self-optimisation and
dynamic organisational behaviour. |
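A minimal sketch of the model's two-part split, assuming plain Java
interfaces: the abstract names the Agent Logic and Agent Proprietary Data
parts, but the types and method signatures below are our own illustration,
not the ADRENALIN kernel's actual API.

    import java.io.Serializable;

    // Hypothetical two-part mobile agent: behaviour kept separate from data.
    interface AgentProprietaryData extends Serializable {
        String erpEndpoint();                 // where the host ERP system is reached
    }

    interface AgentLogic extends Serializable {
        void run(AgentProprietaryData data);  // branch-independent behaviour
    }

    class MobileAgent implements Serializable {
        private final AgentLogic logic;
        private final AgentProprietaryData data;

        MobileAgent(AgentLogic logic, AgentProprietaryData data) {
            this.logic = logic;
            this.data = data;
        }

        // On arrival at a host (e.g. after an Aglets dispatch), the kernel
        // would invoke the carried logic against the carried data.
        void onArrival() {
            logic.run(data);
        }
    }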
|
Title: |
XEON
– AN ARCHITECTURE FOR AN XML ENABLED FIREWALL |
Author(s): |
Andrew
Blyth, Daniel Cunliffe and Iain Sutherland |
Abstract: |
This paper outlines a firewall architecture for the secure
exchange of information using the Extensible Markup Language (XML). The
architecture can be used to create a virtual private network suitable for
an e-commerce application, allowing secure communication over the Internet.
This paper identifies the elements required to build an XML enabled
firewall that will i) ensure the secure communication of data, and ii)
validate the data to ensure data integrity. The architecture addresses the
issue of information integrity using the Document Type Definition and
additional rules applied by a proxy. |
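As a minimal sketch of the DTD-validation step such a proxy could perform,
using the standard Java XML parser: the class name and file handling are
illustrative, and the paper's proxy additionally applies rules beyond plain
DTD validation.

    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.xml.sax.ErrorHandler;
    import org.xml.sax.SAXParseException;
    import java.io.File;

    // Illustrative gate: accept an XML message only if it is valid per its DTD.
    public class XmlGate {
        public static boolean accept(File message) {
            try {
                DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
                factory.setValidating(true);      // enforce the DTD during parsing
                DocumentBuilder builder = factory.newDocumentBuilder();
                builder.setErrorHandler(new ErrorHandler() {
                    public void warning(SAXParseException e) {}
                    public void error(SAXParseException e) throws SAXParseException { throw e; }
                    public void fatalError(SAXParseException e) throws SAXParseException { throw e; }
                });
                builder.parse(message);           // throws if the document is invalid
                return true;                      // forward to the protected server
            } catch (Exception e) {
                return false;                     // drop the message at the firewall
            }
        }
    }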
|
Title: |
THE
WEBOCRACY PROJECT |
Author(s): |
Peter
Burden |
Abstract: |
An account is presented of a European Union-funded project to provide
support for local democracy via the World Wide Web. The
proposed system, known as the Webocrat system, is designed to provide all
the necessary functionality to enhance the effectiveness of local
government both in terms of efficient delivery of services and as a
democratic institution. |
|
Title: |
INTRODUCTION
TO INFORMATION TECHNOLOGY AND ITS EFFECTS ON ORGANISATIONAL CONTROL |
Author(s): |
Rahim
Ghasemiyeh and Feng Li |
Abstract: |
The explosive growth in IT capabilities and the extensive use of IT systems
have satisfied the increasing desire of organisations to gain competitive
advantage. Advances in information technology and its associated devices
have led to profound changes in organisational rules. This study attempts to
build an introduction to evaluating the effects of IT on control as a
function of management. Organisations adopting modern technology have to
make major modifications across the entire organisation. The study begins by
examining concepts of control in organisations, following a short review of
past work on the effects of IT on the concept of hierarchy. It then presents
a comparison of conventional control and new attitudes in traditional and
modern organisations. Finally, it discusses the new conditions arising from
IT advances and their effects on trust and control. |
|
Title: |
LOGIC
AND PROBABILISTIC BASED APPROACH FOR DOCUMENT DATA MODELING |
Author(s): |
Mourad
Ouziri and Christine Verdier |
Abstract: |
We propose in this paper to jointly use description logic (DL) and
probabilistic approaches for document data modelling. Description logic is
used to generate a first database schema through reasoning over the
conceptual part of the documents. The resulting schema is normalised. To
achieve this, we add to the DL knowledge base representing the documents
some ontological assertions specified in the DL formalism. Then,
probabilistic calculations are used to optimise the database schema by
computing statistical measurements over the extensional part of the
documents. |
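As an illustration of the kind of ontological assertion meant (our own
example, not taken from the paper), a functionality constraint in DL lets
the generated schema fold a title attribute into the document relation
rather than a separate table:

\[
\mathit{Document} \sqsubseteq\ \leq 1\, \mathit{hasTitle}, \qquad \mathit{Document} \sqsubseteq \forall\, \mathit{hasTitle}.\mathit{String}
\]

Conversely, a low empirical frequency measured over the extensional part,
say \(P(\mathit{hasKeyword} \mid \mathit{Document}) \approx 0.1\), would
argue for storing keywords in a separate, optional table.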
|