Area 1 - Databases and Information Systems
Integration |
Title: |
RANDOM SAMPLING ALGORITHMS FOR LANDMARK WINDOWS OVER DATA STREAMS |
Author(s): |
Zhang Longbo, Li Zhanhuai, Yu Min, Wang Yong and Jiang Yun
|
Abstract: |
In many applications, including sensor networks,
telecommunications data management, network monitoring and financial
applications, data arrives in a stream, and interest in algorithms
over data streams has grown rapidly in recent years. This paper
introduces the problem of sampling from landmark windows of recent
data items in data streams and presents a random sampling algorithm for
this problem. The presented algorithm, called the SMS Algorithm, is a
stratified multistage sampling algorithm for landmark windows. It
applies different sampling fractions in different strata of the landmark
window, and works even when the number of data items in the landmark
window varies dramatically over time. Theoretical analysis and
experiments show that the algorithm is effective and efficient for
continuous data stream processing. |
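The abstract does not spell out the SMS algorithm itself, so the following is only a rough Python sketch of the general idea it names: keep an independent reservoir sample per stratum (sub-window) of a landmark window, sized by a per-stratum sampling fraction. The class name, the fixed stratum width and the reservoir design are assumptions for illustration, not the authors' construction.

    import random

    class StratifiedWindowSampler:
        """Toy stratified sampler over a landmark window: the window is
        cut into fixed-width strata, and each stratum keeps its own
        reservoir sized by a sampling fraction. Illustrative only."""

        def __init__(self, stratum_width, fraction):
            self.stratum_width = stratum_width   # items per stratum
            self.fraction = fraction             # target sampling fraction
            self.reservoirs = []                 # one list per stratum
            self.seen_in_stratum = 0

        def insert(self, item):
            if self.seen_in_stratum == 0:
                self.reservoirs.append([])       # open a new stratum
            reservoir = self.reservoirs[-1]
            self.seen_in_stratum += 1
            capacity = max(1, int(self.fraction * self.stratum_width))
            if len(reservoir) < capacity:
                reservoir.append(item)
            else:
                # classic reservoir replacement within the stratum
                j = random.randrange(self.seen_in_stratum)
                if j < capacity:
                    reservoir[j] = item
            if self.seen_in_stratum == self.stratum_width:
                self.seen_in_stratum = 0         # stratum is full

        def sample(self):
            return [x for r in self.reservoirs for x in r]

    sampler = StratifiedWindowSampler(stratum_width=1000, fraction=0.01)
    for i in range(10_000):
        sampler.insert(i)
    print(len(sampler.sample()))   # 100 items: 10 per stratum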
|
Title: |
A PROTOTYPE FOR TRANSLATING XSLT INTO XQUERY |
Author(s): |
Ralf Bettentrupp, Sven Groppe, Jinghua Groppe, Stefan Böttcher and
Le Gruenwald |
Abstract: |
XSLT and XQuery are the languages developed by
the W3C for transforming and querying XML data. XSLT and XQuery have the
same expressive power and can indeed be translated into each other. In
this paper, we show how to translate XSLT stylesheets into equivalent
XQuery expressions. We especially investigate how to simulate the match
test of XSLT templates by two different approaches, which use reverse
patterns or match node sets. We then present a performance analysis that
compares the execution times of the translation, the XSLT stylesheets and
their equivalent XQuery expressions using various current XSLT
processors and XQuery evaluators. |
|
Title: |
INVESTIGATING THE IMPROVEMENT SPACE OF SOFTWARE DEVELOPMENT
ORGANISATIONS |
Author(s): |
Joseph Trienekens, Rob Kusters, Frans van Veen, Dirk Kriek, Daniel
Maton and Paul Siemons |
Abstract: |
In practice, the actual results of software process improvement
projects are often quite disappointing. Although many
software development organisations have adopted improvement models such
as CMMI, it appears to be difficult to improve software development
processes in the right way, e.g. tuned to the actual needs of the
organisation and taking into account its environment (e.g. the market).
This paper presents a new approach to determine the
direction of improvement for an organisation. This approach is based on
literature research as well as an empirical investigation among eleven
software development organisations in The Netherlands. The results of
the research show that software development organisations can be
classified and positioned on the basis of their internal and
external entropy, i.e. the level of (dis)order in the business system
and its environment. Based on a possible imbalance between the internal
and external entropy, directions for software process improvement can be
determined. As such, the new approach can complement and improve the
application of current software process improvement methodologies such as
CMMI. |
|
Title: |
PERHAPS A RECIPE FOR CHANGE? - WILL E-VOTING HAVE THE DESIRED
EFFECT? |
Author(s): |
Mark Liptrott |
Abstract: |
This work is a progress report. It briefly
describes the main findings from the literature review of research
into electronic voting, and identifies factors which affect the
decision-making processes of the English local authorities that are
offered the opportunity to trial electronic voting. The analysis is
based on Rogers’ diffusion of innovations theory framework. A key result
is that in a voluntary situation, where one overarching
organization tries to introduce an innovation to an agency
organization, Rogers’ diffusion of innovations theory framework requires
modification. |
|
Title: |
SPLITTING FACTS USING WEIGHTS |
Author(s): |
Liga Grundmane and Laila Niedrite |
Abstract: |
A typical data warehouse report is a dynamic
representation of the behavior of some objects or of changes in objects’
properties. If this behavior itself changes, it is difficult to produce
such reports in an easy way. Fact splitting can be used to make
this task simpler and more comprehensible for users. In this
paper, two solutions for splitting facts by using weights are described.
One possible solution is proportional weighting
according to the size of the split record set. It is also possible to
take into account the length of the fact validity time period and the
validity time of each split fact record. |
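As a hedged illustration of the two weighting schemes the abstract mentions, the Python sketch below splits a fact's measure either equally over the split record set or proportionally to each record's validity period; the function and field names are invented for the example.

    from datetime import date

    def split_fact_equally(measure, n_records):
        """Proportional weighting by split record set size:
        each of the n_records receives weight 1/n."""
        return [measure / n_records] * n_records

    def split_fact_by_validity(measure, periods):
        """Weight each split record by the length of its validity
        period; `periods` is a list of (start, end) date pairs."""
        lengths = [(end - start).days for start, end in periods]
        total = sum(lengths)
        return [measure * d / total for d in lengths]

    # A yearly fact of 1200 split into periods of 3 and 9 months:
    parts = split_fact_by_validity(
        1200,
        [(date(2006, 1, 1), date(2006, 4, 1)),
         (date(2006, 4, 1), date(2007, 1, 1))])
    print([round(p) for p in parts])   # [296, 904]: 90 vs. 275 days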
|
Title: |
IMPLEMENTING A HIGH LEVEL PUB/SUB LAYER FOR ENTERPRISE INFORMATION
SYSTEMS |
Author(s): |
Mario Antollini, Mariano Cilia and Alejandro Buchmann |
Abstract: |
Enterprise application interaction based on
events has been receiving increasing attention. It is based on the
exchange of small pieces of data (called events), typically using the
publish/subscribe interaction paradigm. Most pub/sub notification
services assume a homogeneous namespace and do not support
interaction among heterogeneous event producers and consumers. In this
paper we briefly describe the concept-based approach as a high-level
dissemination mechanism for distributed and heterogeneous event-based
applications. We focus on the design and implementation issues of such a
mechanism and show how it can be integrated into research prototypes or
products and platforms. |
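To make the concept-based idea concrete, here is a minimal Python sketch of a broker that normalizes heterogeneous producer vocabularies to shared concepts before matching subscriptions. The mapping table and API are assumptions for illustration; the mechanism described in the paper is far richer.

    class ConceptBroker:
        """Minimal pub/sub broker that maps heterogeneous event
        vocabularies onto shared concepts before matching."""

        def __init__(self, concept_map):
            self.concept_map = concept_map   # local term -> shared concept
            self.subscribers = {}            # concept -> callbacks

        def subscribe(self, concept, callback):
            self.subscribers.setdefault(concept, []).append(callback)

        def publish(self, local_term, event):
            concept = self.concept_map.get(local_term, local_term)
            for callback in self.subscribers.get(concept, []):
                callback(event)

    # Producers use "temp_C" and "temperatura"; both map to one concept.
    broker = ConceptBroker({"temp_C": "temperature",
                            "temperatura": "temperature"})
    broker.subscribe("temperature", lambda e: print("got", e))
    broker.publish("temp_C", {"value": 21.5})
    broker.publish("temperatura", {"value": 22.0})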
|
Title: |
CREATING AND MANIPULATING CONTROL FLOW GRAPHS WITH MULTILEVEL
GROUPING AND CODE COVERAGE |
Author(s): |
Anastasis A. Sofokleous, Andreas S. Andreou and Gianna Ioakim |
Abstract: |
Various researchers and practitioners have
proposed the use of control flow graphs for investigating software
engineering aspects such as testing, slicing, program analysis and
debugging. However, the relevant software applications support only
low-level languages (e.g. C, C++), and most, if not all, of the research
papers do not provide any details about the implementation of the
control flow graph, leaving the reader to assume either that the
authors use third-party software for creating the graph or that the
graph is constructed by hand. The same holds for code coverage tools.
In this paper, we extend our previous work on a
dedicated program analysis architecture and describe a tool for
automatic production of the control flow graph that offers advanced
capabilities, such as vertex grouping, code coverage based on a given
set of inputs and enhanced user interaction. |
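As an illustration of the kind of data structure such a tool is built around, the sketch below is a toy control flow graph with vertex grouping and coverage marking. It is an assumed minimal design in Python, not the authors' implementation.

    from collections import defaultdict

    class ControlFlowGraph:
        """Tiny CFG with vertex grouping and coverage marking."""

        def __init__(self):
            self.edges = defaultdict(set)   # vertex -> successors
            self.groups = {}                # group name -> vertices
            self.covered = set()            # vertices hit by some input

        def add_edge(self, src, dst):
            self.edges[src].add(dst)

        def group(self, name, vertices):
            self.groups[name] = set(vertices)

        def mark_covered(self, trace):
            self.covered.update(trace)

        def group_coverage(self, name):
            members = self.groups[name]
            return len(members & self.covered) / len(members)

    cfg = ControlFlowGraph()
    for src, dst in [("entry", "if"), ("if", "then"), ("if", "else"),
                     ("then", "exit"), ("else", "exit")]:
        cfg.add_edge(src, dst)
    cfg.group("branch-bodies", ["then", "else"])
    cfg.mark_covered(["entry", "if", "then", "exit"])  # one executed trace
    print(cfg.group_coverage("branch-bodies"))         # -> 0.5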
|
Title: |
COMBINING BUSINESS ACTIVITY MONITORING WITH THE DATA WAREHOUSE FOR
EVENT-CONTEXT CORRELATION - EXAMINING THE PRACTICAL APPLICABILITY OF
THIS BAM APPROACH |
Author(s): |
Gabriel Cavalheiro, Ajantha Dahanayake and Richard Welke |
Abstract: |
Business Activity Monitoring (BAM) is a term
introduced by the Gartner Group for systems that provide
real-time access to critical business performance indicators in order to
improve the speed and effectiveness of business operations. Despite the
emphasis of BAM on providing low-latency views of enterprise performance,
the BAM literature also indicates the technical feasibility of a BAM
approach that adds context from historical information stored in a
data warehouse to real-time events detected by the BAM system, so as to
help enterprises improve their understanding of current monitoring
scenarios. However, at this point, there is a lack of studies that
discuss the use of this approach to tackle real-world business problems.
To improve practical understanding of the potential applicability of
this BAM approach, this paper presents a synthesis of existing research
on BAM and data warehousing to provide an objective basis for proposing
feasible business scenarios for applying the combination of both
technologies. This study reveals that the noted BAM approach empowers
operational managers to respond more precisely to the occurrence of
events by enabling a better understanding of the nature of the detected
event. |
|
Title: |
A GUI FOR DATA MART SCHEMA ALTERATION |
Author(s): |
Nouha Bouaziz, Faiez Gargouri and Jamel Feki |
Abstract: |
This paper is concerned with the graphical
manipulation of data mart schemas described in XML and issued from a
generation module of multidimensional models. This manipulation is
performed through a set of operations we have defined. These operations
consist of adding, deleting and renaming multidimensional elements.
|
|
Title: |
ELIMINATION OF TIME DEPENDENCE OF INFORMATION VALIDITY BY
APPLICATION OF RFID TECHNOLOGY |
Author(s): |
Vladimir Modrak and Viaceslav Moskvic |
Abstract: |
The following article deals with certain aspects of
shop-floor data acquisition for MRP, ERP and MES types of information
systems. Problems of the time dependence of data validity are
discussed, and a method for their elimination by the application of
radio frequency identification (RFID) technology is suggested. |
|
Title: |
WEB KNOWLEDGE MANAGEMENT FOR SMALL AND MEDIUM-SIZE ENTERPRISES -
WEBTOUR: A CASE STUDY FROM THE TOURISM SECTOR |
Author(s): |
María M. Abad-Grau, Francisco Araque, Rosana Montes, M. Visitación
Hurtado and Miguel J. Hornos |
Abstract: |
The current enterprise world has become global
and complex. Knowledge management is key to gaining a competitive
advantage, as it allows customer trends and market evolution to be
detected in advance. While knowledge management systems are usually
unaffordable for small or even medium-size enterprises, a tool shared
among them is a more realistic solution. Our system, based on a
client/server architecture with a web interface, is able to provide top
Information Technology (IT) solutions at low cost, so that small and
medium businesses can also use such systems to acquire competitive
advantage. We have developed a solution for an IT enterprise providing
an on-line reservation system for small tourist lodgings and travel
agencies. It consists of a Data Warehouse (DW) and a Decision Support
System (DSS), which is currently being offered as a value-added service
for providers and customers.
|
|
Title: |
FORMAL VERIFICATION OF AN ACCESS CONCURRENCY CONTROL ALGORITHM FOR
TRANSACTION TIME RELATIONS |
Author(s): |
Achraf Makni, Rafik Bouaziz and Faïez Gargouri |
Abstract: |
In this paper we propose to formally check the
access concurrency control algorithm proposed in (Bouaziz, 2005). This
algorithm is based on the optimistic approach and guarantees strong
consistency for transaction time relations. Specifying our model in the
PROMELA language allowed us to ensure the feasibility of the
validation. Using the SPIN model checker, we were then able to detect
blocking errors and to check safety properties specified by temporal
logic formulas. |
|
Title: |
PARALLEL QUERY PROCESSING USING WARP EDGED BUSHY TREES IN
MULTIMEDIA DATABASES |
Author(s): |
Lt. S. Santhosh Baboo, P. Subashini and K. S. Easwarakumar |
Abstract: |
The paper focuses on the parallelization of query
execution on a shared-memory parallel database system. A new data
structure, named warp edged bushy trees, is proposed for facilitating
compile-time optimization. The warp edged bushy tree is a modified
version of the bushy tree [1] that provides better response times than
bushy trees during query processing. |
|
Title: |
DESIGNING IMAGING SOLUTIONS FOR AN ORGANIZATION |
Author(s): |
Prasad N. Sivalanka |
Abstract: |
In a business climate where organizations are
looking for ways to cut costs and increase productivity, document
imaging systems are providing the most dramatic impact. In a business
community where 90% of corporate information resides in paper documents,
efficient management of that paper is crucial to the success of any
organization. A process-driven document management system that converts
paper documents into electronic documents for easy filing, retrieval and
storage is necessary. This paper addresses the above issue through a
solution implemented at one of our large financial clients. |
|
Title: |
SCALABLE UPDATE PROPAGATION IN PARTIALLY REPLICATED, DISCONNECTED
CLIENT SERVER DATABASES |
Author(s): |
Liton Chakraborty, Ajit Singh and Kshirasagar Naik |
Abstract: |
Modern databases allow mobile clients that
subscribe to replicated data to process their replicas without continuous
connectivity and to receive updates while connected to the server.
Based on the overlap in client interest patterns, the server can perform
update processing for a manageable number of data groups instead of on a
per-client basis, and hence decouple the update processing cost from the
client population. In this paper, we propose an efficient update
propagation method that can be applied to a relational database system
irrespective of its inherent data organization. We present
computationally efficient algorithms for group design and maintenance
based on a heuristic function. We provide experimental results
demonstrating that our approach achieves a significant increase in
overall scalability over the client-centric approach. |
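The paper's heuristic function is not given in the abstract; as a stand-in, the sketch below greedily merges clients into data groups when their interest sets overlap by a Jaccard threshold, which shows how grouping decouples update processing from the client population. All names and the threshold policy are assumptions.

    def jaccard(a, b):
        return len(a & b) / len(a | b)

    def greedy_group_clients(interests, threshold=0.5):
        """Greedily merge clients into data groups when their interest
        sets overlap enough (stand-in heuristic, not the paper's)."""
        groups = []   # each group: [member ids, union of interests]
        for client, items in interests.items():
            for members, union in groups:
                if jaccard(union, items) >= threshold:
                    members.append(client)
                    union.update(items)
                    break
            else:
                groups.append([[client], set(items)])
        return groups

    interests = {"c1": {"orders", "stock"},
                 "c2": {"orders", "stock", "prices"},
                 "c3": {"hr"}}
    for members, union in greedy_group_clients(interests):
        print(members, sorted(union))
    # c1 and c2 share one group, c3 gets its own: the server now
    # processes updates for 2 groups instead of 3 individual clients.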
|
Title: |
DATA MANAGEMENT SYSTEM EVALUATION FOR MOBILE MESSAGING SERVICES |
Author(s): |
David CC Ong, Rytis Sileika, Souheil Khaddaj and Radouane Oudrhiri |
Abstract: |
A mobile messaging revolution in the mobile
phone industry started with the introduction of the Short Messaging
Service (SMS), which is limited to 160 characters of conventional text.
This revolution has become more significant with further improvements in
mobile devices, which have become relatively powerful, with extra
resources such as additional memory capacity and innovative features
such as a colour screen and a photo camera. Now the Multimedia
Messaging Service (MMS) takes full advantage of these capabilities by
providing longer messages with embedded sound, images and video
streaming. This service presents a new challenge to mobile platform
architects, particularly in the data management area, where the size of
each MMS message can be up to 100,000 bytes. This, combined with the
high volume of requests managed by these platforms, which may well
exceed 250,000 requests per second, makes the evaluation of competing
data management systems essential. This paper presents an evaluation of
SMS and MMS platforms using different data management systems and
recommends the best data management strategies for these platforms. |
|
Title: |
COMBINING THE DATA WAREHOUSE AND OPERATIONAL DATA STORE |
Author(s): |
Ahmed Sharaf Eldin Ahmed, Yasser Ali Alhabibi and Abdel Badeeh M.
Salem |
Abstract: |
Many small business organizations tend to
combine the operational data store (ODS) and the data warehouse (DW) in
one structure in order to save the expense of building two separate
structures. The purpose of this paper is to investigate the obstacles
that may affect organizations that try to combine the ODS and DW in one
structure. Both analytical and comparative analyses are used to
investigate the obstacles and drawbacks that have been encountered in
combining the ODS and DW in one structure. |
|
Title: |
CONVERTING TIME SERIES DATA FOR INFORMATION SYSTEMS INTEGRATION |
Author(s): |
Li Peng |
Abstract: |
Most enterprises have autonomous and
heterogeneous information systems, and the same data may be represented
differently in different systems. The core of any solution for
integrating heterogeneous data sources is data conversion, and one of
its major issues is how to convert data that contains temporal
information. In this paper I propose a method to effectively convert
the time-series data appearing in enterprises. The concept of a
calendar is integrated into the proposed method, which is based on a
generalized representation form for data. The conversion operations and
processes are defined and presented. |
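As a rough sketch of calendar-driven conversion (the paper's actual method and representation form are not detailed in the abstract), the Python fragment below re-buckets a daily series to a coarser calendar granularity supplied as a mapping function; all names are invented.

    from collections import defaultdict
    from datetime import date

    def convert_granularity(series, to_period, aggregate=sum):
        """Convert a time series to a coarser calendar granularity.
        `series` maps dates to values; `to_period` maps a date to its
        calendar period, e.g. a (year, month) pair."""
        buckets = defaultdict(list)
        for day, value in series.items():
            buckets[to_period(day)].append(value)
        return {period: aggregate(vals) for period, vals in buckets.items()}

    daily = {date(2006, 1, 1): 10, date(2006, 1, 15): 5,
             date(2006, 2, 1): 7}
    monthly = convert_granularity(daily, lambda d: (d.year, d.month))
    print(monthly)   # {(2006, 1): 15, (2006, 2): 7}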
|
Title: |
ALGORITHMS FOR INTEGRATING TEMPORAL PROPERTIES OF DATA IN DATA
WAREHOUSING |
Author(s): |
Francisco Araque, Alberto Salguero, Cecilia Delgado, Eladio
Garví and José Samos |
Abstract: |
One of the most complex issues of the
integration and transformation interface is the case where there are
multiple sources for a single data element in the enterprise data
warehouse. While there are many facets to the large number of variables
involved in the integration phase, what we are interested in is the
temporal problem: it is necessary to handle situations such as data from
data source A being available while data from data source B is not.
This paper presents our work on data integration in the Data Warehouse
on the basis of the temporal properties of the data sources. Depending
on the extraction method and the data source, we can determine whether
it will be possible to incorporate the data into the Data Warehouse. We
also examine the temporal features of the data extraction methods and
propose algorithms for data integration that depend on the temporal
characteristics of the data sources and on the data extraction
method. |
|
Title: |
ANALYSIS-SENSITIVE CONVERSION OF ADMINISTRATIVE DATA INTO
STATISTICAL INFORMATION SYSTEMS |
Author(s): |
Mirko Cesarini, Mariagrazia Fugini and Mario Mezzanzanica |
Abstract: |
In this paper we present a methodological
approach to developing a Statistical Information System (SIS) out of
data coming from the administrative archives of Public Administrations.
Such archives are a rich source of information, but an attempt to use
them as sources for statistical analysis reveals errors and
incompatibilities among them that prevent their use as a basis for
statistics and decision support. These errors and incompatibilities
usually remain undetected during administrative use, since they do not
affect day-by-day operations in the Public Administrations; however,
they need to be fixed before performing any further aggregate analysis.
The proposed methodological approach encompasses the basic aspects
involved in building a SIS out of administrative data: the design of an
integration model for different and heterogeneous data sources, the
improvement of overall data quality, the removal of errors that might
impact the correctness of statistical analysis, the design of a data
warehouse for statistical analysis, and the design of a multidimensional
database to develop indicators for decision support. We present a case
study, the AMeRIcA Project, where the methodological approach has been
applied to the administrative data of a Municipality and of a Province
in Northern Italy. |
|
Title: |
SYNCHRONIZATION AND MULTIPLE GROUP SERVER SUPPORT FOR KEPLER |
Author(s): |
K. Maly, M. Zubair, H. Siripuram and S. Zunjarwad |
Abstract: |
In the last decade literally thousands of
digital libraries have emerged, but one of the biggest obstacles to the
dissemination of information to a user community is that many digital
libraries use different, proprietary technologies that inhibit
interoperability. The Kepler framework addresses interoperability and
gives publication control to individual publishers. In Kepler, OAI-PMH
is used to support "personal data providers" or "archivelets". In our
vision, individual publishers can be integrated with an institutional
repository like DSpace by means of a Kepler Group Digital Library (GDL).
The GDL aggregates metadata and full text from archivelets and can act
as an OAI-compliant data provider for institutional repositories. The
basic Kepler architecture and its working have been reported in earlier
papers. In this paper we discuss three main features that we have
recently added to the Kepler framework: mobility support for users to
switch transparently from traditional archivelets to on-server
archivelets, the ability of users to work with multiple GDLs, and the
flexibility for individual publishers to build an OAI-PMH compliant
repository without being attached to a GDL. |
|
Title: |
A MULTIDIMENSIONAL APPROACH TO THE REPRESENTATION OF THE
SPATIO-TEMPORAL MULTI-GRANULARITY |
Author(s): |
Concepción M. Gascueña, Dolores Cuadra and Paloma Martínez |
Abstract: |
Many efforts have been devoted to the treatment
of spatial data in databases, both in traditional database systems and
in decision support systems or On-Line Analytical Processing (OLAP)
technologies in data warehouses (DW). Nevertheless, many open questions
concerning this kind of data still remain. The work presented in this
paper focuses on dealing with spatial and temporal granularity within a
logical multidimensional model. Representing spatial data through a
multidimensional model clarifies the understanding of the data analysis
subject and allows the discovery of special behavior that is hardly
detectable without it. We propose an extension of the Snowflake model
to gather spatial data and show our proposal for representing spatial
evolution through time in an easy and intuitive way. We represent
temporal and spatial multi-granularity with different levels in the
hierarchies of dimensions, and we present a typology of hierarchies to
include more semantics in the Snowflake scheme. |
|
Title: |
A DISCRETE PARTICLE SWARM ALGORITHM FOR OLAP DATA CUBE SELECTION |
Author(s): |
Jorge Loureiro and Orlando Belo |
Abstract: |
Multidimensional analysis supported by Online
Analytical Processing (OLAP) systems demands many aggregation functions
over enormous data volumes. In order to achieve query answering times
acceptable to the users of OLAP systems, while allowing all the required
business analytical views, OLAP data is organized as a multidimensional
model, known as the data cube. Materializing all the data cubes
required by decision makers would allow fast and consistent answering
times for OLAP queries. However, this also implies intolerable costs in
storage space and time, even for a data warehouse of medium size and
dimensionality, and becomes critical during refresh operations. On the
other hand, given a query profile, only a subset of all subcubes is
really interesting. Thus, cube selection must aim to minimize query
(and maintenance) costs, keeping the materialization space as a
constraint. That is a complex problem: its solution is NP-hard. Many
algorithms and several heuristics, especially of a greedy nature, as
well as evolutionary approaches, have been used to provide approximate
solutions. In this paper a new algorithm is proposed for this problem:
particle swarm optimization (PSO). According to our experimental
results, the solutions achieved by the PSO algorithm showed an execution
speed, convergence capacity and consistency that make it a strong
candidate for use in data warehouse systems of medium
dimensionality. |
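The following is a generic binary (discrete) PSO in the style of Kennedy and Eberhart, applied to a toy cube-selection instance with a deliberately simplified cost model: each candidate subcube has a size and a query benefit, and selections over the space budget are penalized. It illustrates the technique named in the abstract, not the authors' algorithm or cost formulas.

    import math, random

    # Toy instance: candidate subcubes with (storage size, query benefit).
    SIZE    = [8, 5, 6, 3, 2, 4]
    BENEFIT = [10, 7, 9, 4, 3, 5]
    BUDGET  = 12   # materialization space constraint

    def fitness(bits):
        """Benefit of the selected subcubes; selections violating the
        space constraint are heavily penalized."""
        size = sum(s for s, b in zip(SIZE, bits) if b)
        gain = sum(g for g, b in zip(BENEFIT, bits) if b)
        return gain if size <= BUDGET else gain - 100

    def sigmoid(v):
        return 1.0 / (1.0 + math.exp(-v))

    def binary_pso(n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
        n = len(SIZE)
        xs = [[random.randint(0, 1) for _ in range(n)]
              for _ in range(n_particles)]
        vs = [[0.0] * n for _ in range(n_particles)]
        pbest = [x[:] for x in xs]
        gbest = max(pbest, key=fitness)[:]
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(n):
                    r1, r2 = random.random(), random.random()
                    vs[i][d] = (w * vs[i][d]
                                + c1 * r1 * (pbest[i][d] - xs[i][d])
                                + c2 * r2 * (gbest[d] - xs[i][d]))
                    # binary PSO: velocity drives a bit-flip probability
                    xs[i][d] = 1 if random.random() < sigmoid(vs[i][d]) else 0
                if fitness(xs[i]) > fitness(pbest[i]):
                    pbest[i] = xs[i][:]
                    if fitness(pbest[i]) > fitness(gbest):
                        gbest = pbest[i][:]
        return gbest, fitness(gbest)

    print(binary_pso())   # a selection of subcubes within the budget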
|
Title: |
A FRAMEWORK FOR ASSESSING ENTERPRISE RESOURCES PLANNING (ERP)
SYSTEMS SUCCESS: AN EXAMINATION OF ITS ASPECT FOCUSING ON CONTEXTUAL
INFLUENCES |
Author(s): |
Princely Ifinedo |
Abstract: |
Enterprise resource planning (ERP) systems are
diffusing globally. It is important for adopting firms to assess the
success of their software. However, in general, studies have shown that
often firms investing heavily in information systems (IS) sometimes do
not assess the success of their systems for a variety of reasons
including a lack of knowledge about what to assess. Similarly, research
in the area of IS success evaluations is varied, offering little succour
to practitioners. Specifically, ERP systems success assessment in the
literature is just beginning to surface. This paper discussing our
effort regarding extending an existing ERP systems success model.
Essentially, two new relevant success factors or dimensions not included
in the previous model were incorporated and tested empirically. We used
structural equation modeling for our study. The findings of the study
are discussed and implications for both practice and research are
highlighted. |
|
Title: |
A NEW APPROACH TO IMPLEMENT EXTENDED TRANSACTION MODELS IN J2EE |
Author(s): |
Xiaoning Ding, Xiangfeng Guo, Beihong Jin and Tao Huang |
Abstract: |
The extended transaction model (ETM) is a powerful
mechanism for ensuring the consistency and reliability of complicated
enterprise applications. However, there are few implementations of ETMs
in J2EE; existing work is limited in the range of models it supports and
requires special database support. This paper explores the obstacle
that prevents J2EE from supporting ETMs, and argues that it stems from
the limitations of the J2EE XAResource interface and the underlying
databases. To overcome the obstacle, we propose a new approach, which
performs concurrency control inside the J2EE application server instead
of in the database. Furthermore, to validate the approach, we implement
the TX/E service in JBoss, an enhanced J2EE transaction service
supporting extended transaction models. Compared to existing work, TX/E
supports user-defined transaction models and does not require any
special database support. |
|
Title: |
DATABASES AND INFORMATION SYSTEMS INTEGRATION USING CALOPUS: A
CASE STUDY |
Author(s): |
Prabin Kumar Patro, Pat Allen, Muthu Ramachandran, Robert
Morgan-Vane and Stuart Bolton |
Abstract: |
Effective, accurate and timely data integration
is fundamental to the successful operation of today’s organizations. The
success of every new business initiative relies heavily on data
integration between existing heterogeneous applications and databases.
For this reason, when companies look to improve productivity, reduce
overall costs, or streamline business processes, data integration should
be at the heart of their plans. Integration improves exposure and, by
extension, the value and quality of information, facilitating workflow
and reducing business risk. It is an important element of the way an
organization’s business processes operate. Data integration technology
is the key to pulling organization data together and delivering an
information infrastructure that will meet strategic business
intelligence initiatives. This information infrastructure consists of
data warehouses, interface definitions and operational data stores. Data
integration should include capabilities such as data warehousing,
metadata integration, ongoing updates and reduced maintenance, access to
a wider variety of data sources, and design and debugging options. In
this paper we discuss data integration and present a case study on
database integration at Leeds Metropolitan University using the Calopus
system. |
|
Title: |
SUPPORTING E-PLACEMENT: ACHIEVEMENTS IN THE ITALIAN WORKFARE
PROJECT |
Author(s): |
Mariagrazia Fugini, Piercarlo Maggiolini and Krysnaia Nanini |
Abstract: |
This paper presents the basic developments and
architectural issues of the Italian “Borsa Continua Nazionale del Lavoro”
(BCNL), an eGovernment project aimed at developing a Portal for Services
to Employment. It consists of a network of nodes structured at three
main levels: the National level, managed by the Ministry of Welfare; the
Regional level, in which Regions are grouped into local federations in
order to interoperate; and the Provincial level, again structured as a
federation of local domains. These federations are the structural tool
able to support both proactive and reactive policies aimed at enhancing
job placement. The paper describes each level and the cooperation
occurring both within the various domains and among levels. Advantages
and drawbacks of this architecture are discussed. Finally, the paper
describes the basic issues related to security and privacy in this
environment, in particular presenting cooperative federated
authentication.
|
|
Title: |
CURRENT TRENDS IN DATA WAREHOUSING METHODS AND TECHNOLOGIES |
Author(s): |
Vera Ivanova |
Abstract: |
Data Warehousing (DW) methods and technologies
are in a new stage of their evolution and of their amalgamation with the
enterprise businesses they serve. The main goals of this work are to
identify, review and analyze the latest trends in DW. A systematic
approach is followed to recognize, define and analyze the most important
trends, based on each trend’s role and value in business processes and
business intelligence (BI). For this purpose we start with updated
definitions of DW and BI and then consider the generalized architecture
of today’s DW. We then “drill down” to analyze the DW problems, and the
trends in solving them, in the areas of data quality provision,
regulatory compliance, infrastructure consolidation and standardization,
corporate performance optimization, and metadata management. This
in-depth logical analysis results in comprehensible conclusions to be
considered in the important early phases of DW projects, as it is well
known that early project decisions carry impacts for the whole DW system
life span. |
|
Title: |
EFFICIENT MECHANISM FOR HANDLING MATERIALIZED XML VIEWS |
Author(s): |
Jessica Zheng, Anthony Lo, Tansel Özyer and Reda Alhajj |
Abstract: |
Materialized views provide an efficient and
effective mechanism to improve query performance. The necessity of
keeping materialized views consistent with the underlying data raises
the problem of when and how to update views efficiently. This paper
addresses the issue of deferred incremental updates on materialized XML
views. The proposed approach mainly extends our previous work on
materialized object-oriented views; the overlap between XML and the
object-oriented paradigm has been the main motivation for the study
described in this paper. We modified and adapted the latter approach to
meet XML requirements. |
|
Title: |
A FORMAL TOOL THAT INTEGRATES RELATIONAL DATABASE SCHEMES AND
PRESERVES THE ORIGINAL INFORMATION |
Author(s): |
A. Mora and M. Enciso |
Abstract: |
In this work we address the main problem
of the database design process in a collaborative environment: the users
provide different models, each representing a part of the global model,
and we must integrate these database sub-schemas to render a unified
database. Problems arise when the users' specifications do not match
properly or, in the worst case, represent contradictory information. In
the literature, the different approaches use a selected canonical
language into which all the sub-schemas are translated and in which the
integration process is carried out. It seems to be widely accepted to
select the Entity/Relationship (E/R) model as the canonical language.
Nevertheless, it was not conceived as a formal language, and its use
produces several problems: it is not easy to identify equivalent
specifications; information is represented at several levels
(attributes, tables, constraints, etc.) that must be integrated as a
whole; and so on. All these problems appear because the E/R model was
conceived as a high-level specification language, not as a basis for
automated integration methods. In this work we propose an automated
method to integrate relational database sub-schemas based on a formal
language. The extraction, integration and generation tasks are carried
out efficiently using the SLfd logic (Substitution Logic for functional
dependencies). We have selected this logic because it is appropriate
for managing functional dependencies in an automatic way. Logic is
present in all the stages of our proposed architecture: analysis,
design, model transformation, integration, data preservation, etc. The
integration tool interacts automatically with the DBMS (we use Oracle
9i), uses the logic in a transparent mode to deduce the unified view,
and provides a web interface to facilitate user participation. The
collaborative tool infers the information system knowledge from local
Oracle schemas and renders an integrated Oracle database schema. The
integration process uses the information of the structural functional
dependencies (FDs inferred from all the database sub-schemas) and the
environment FDs (FDs provided by the designers), and it renders a unique
database model fulfilling all the FDs. The tool carries out an
integration of the schemas and an integration of the data itself,
providing a new database with a common structure and containing all the
information provided in the original sub-schemas. |
|
Title: |
A FRAMEWORK FOR SEMANTIC RECOVERY STRATEGIES IN CASE OF PROCESS
ACTIVITY FAILURES |
Author(s): |
Stefanie Rinderle, Sarita Bassil and Manfred Reichert |
Abstract: |
During automated process execution, semantic
activity failures may frequently occur, e.g., when a vehicle
transporting a container has a breakdown. So far there are no
applicable solutions for overcoming such exceptional situations. Often
the only possibility is to cancel and roll back the respective process
instances, which is not always possible and even more often not desired.
In this paper we contribute towards system-assisted support for finding
forward recovery solutions. Our framework is based on the facility to
(automatically) perform dynamic changes of single process instances in
order to deal with the exceptional situation. We identify and formalize
factors which influence the kind of applicable recovery solutions. Then
we show how to derive possible recovery solutions and how to evaluate
their quality with respect to different constraints. All statements are
illustrated by well-studied cases from different domains. |
|
Title: |
DEPLOYMENT OF ONTOLOGIES IN BUSINESS INTELLIGENCE SYSTEMS |
Author(s): |
Carsten Felden and Daniel Kilimann |
Abstract: |
The consideration of integrated structured and
unstructured data in management information systems requires a new kind
of metadata management. Ontologies constitute a possible solution to the
resulting problems. Process models which describe the development of
ontologies and can be utilised in the context of management information
systems are discussed. |
|
Title: |
ACTIVE MECHANISMS FOR CHECKING PARTICIPATION CONSTRAINTS IN UML |
Author(s): |
Djamel Berrabah, Charles-François Ducateau and Faouzi Boufarès |
Abstract: |
Among the multiple efforts devoted to facing the
problems of database modeling is the automation of the database
design process using CASE tools. Often, these tools do not take into
account all the information that is present in a conceptual schema.
The problem is that the relational elements obtained during these
processes do not coincide completely with the conceptual elements, which
produces some semantic losses. The idea is to enrich these tools and to
improve them in order to solve some modeling problems. The goal of
this work is to propose an efficient approach to generate mechanisms
that preserve the participation constraints defined in a conceptual
schema during its transformation into a relational schema. These
mechanisms are active during the maintenance of the database: if any
operation brings about an inconsistent database (DB) state, it will be
rejected and the data of the DB will not change. |
|
Title: |
INCLUSION OF TIME-VARYING MEASURES IN TEMPORAL DATA WAREHOUSES |
Author(s): |
Elzbieta Malinowski and E. Zimányi |
Abstract: |
Data Warehouses (DWs) integrate data from
different source systems that may have temporal support. However,
current DWs only allow changes to measures to be tracked by indicating
the time when a specific measure value is valid. In this way,
applications such as fraud detection cannot be easily implemented, since
they require knowledge of the time when changes in the source systems
occurred. In this work, based on research related to Temporal
Databases, we propose the inclusion of time-varying measures, changing
the current role of the time dimension. First, we analyze the
availability of temporal types in the different source systems feeding a
DW. Then, we study different scenarios that show the usefulness of
including different temporal types. Further, since measures can be
aggregated before being inserted into DWs, we discuss the issues related
to different time granularities between source systems and DWs and, in
addition, measure aggregation in the presence of valid time. |
|
Title: |
CONTINUOUS RANGE QUERY PROCESSING FOR NETWORK CONSTRAINED MOBILE
OBJECTS |
Author(s): |
Dragan Stojanovic, Slobodanka Djordjevic-Kajan, Apostolos N.
Papadopoulos and Alexandros Nanopoulos |
Abstract: |
In contrast to regular queries that are
evaluated only once, a continuous query remains active over a period of
time and has to be continuously evaluated to provide up-to-date results.
We propose a method for continuous range query processing for different
types of queries, characterized by the mobility of objects and/or
queries, which follow paths in an underlying spatial network. The method
assumes an available 2D indexing scheme for indexing the spatial network
data. An appropriately extended R*-tree provides matching of queries and
objects according to their locations on the network or their network
routes. The method introduces an additional pre-refinement step, which
generates main-memory data structures to support efficient, incremental
re-evaluation of continuous range queries in periodically performed
refinement steps. |
|
Title: |
INTEGRATION OF DATA SOURCES FOR PLANT GENOMICS |
Author(s): |
P. Larmande, C. Tranchant-Dubreuil, L. Regnier, I. Mougenot and T.
Libourel |
Abstract: |
The study of the function of genes, or
functional genomics, is today one of the most active disciplines in the
life sciences and requires effective integration and processing of
related information. Today's biologists have access to bioinformatics
resources to help them in their experimental research. In genomics,
several tens of public data sources can be of interest, each source
contributing a part of the useful information. The difficulty lies in
the integration of this information, which is often semantically
inconsistent, expresses differing viewpoints and, very often, is only
available in heterogeneous formats. In this context, informatics has a
role to play in the design of systems that are flexible and adaptable to
significant changes in biological data and formats. It is within this
framework that this paper presents the design and implementation of an
integrated environment strongly supported by knowledge-representation
and problem-solving tools. |
|
Title: |
USING RELATIONAL DATABASES IN THE ENGINEERING REPOSITORY SYSTEMS |
Author(s): |
Erki Eessaar |
Abstract: |
A repository system can be built on top of a
database management system (DBMS). DBMSs that use the relational data
model are usually not considered powerful enough for this purpose. In
this paper, we analyze these claims and conclude that they are caused by
the shortcomings of the SQL standard and the inadequate implementations
of the relational model in current DBMSs. The problems presented in the
paper make the usage of DBMSs in repository systems more difficult.
This paper also explains that a relational system that follows the rules
of the Third Manifesto is suitable for creating a repository system, and
presents possible design alternatives. |
|
Title: |
MISTEL - AN APPROACH TO ACCESS MULTIPLE RESOURCES |
Author(s): |
Thomas Bopp, Thorsten Hampel, Robert Hinn, Jan Pawlowski and
Christian Prpitsch |
Abstract: |
Digital documents are widely spread around the
web in information systems of all kinds. The approach described in this
paper is to unify access to documents and to connect applications so
that they can share, search and publish documents in a standardised way.
The sample implementation uses web services to integrate knowledge
management, a learning management system, and a digital library. |
|
Title: |
MINIMIZING THE COMPLEXITY OF DISTRIBUTED TRANSACTIONS IN CORPORATE
ARCHITECTURES WITH THE USE OF ASYNCHRONOUS REPLICATION |
Author(s): |
S. Poltronieri, S. de Paula and L. N. Rossi |
Abstract: |
In the software architectures usual in big
corporations, the use of the “two-phase commit” protocol for distributed
transactions presents inconveniences such as code complexity, long
response times for the final user, and the need for an environment that
allows complete simultaneity. We present here an alternative model,
based on asynchronous replication, implemented with success at the
University of São Paulo as the integration infrastructure for its
corporate systems, which provides short transactions in the context of
each database and lower response times, with no need for a complex
high-availability environment. |
|
Title: |
MERGING, REPAIRING AND QUERYING INCONSISTENT DATABASES WITH
FUNCTIONAL AND INCLUSION DEPENDENCIES |
Author(s): |
Luciano Caroprese, Sergio Greco and Ester Zumpano |
Abstract: |
In this paper a framework for merging,
repairing and querying inconsistent databases is presented. The
framework considers integrity constraints defining primary keys,
foreign keys and general functional dependencies. The approach consists
of three steps: i) merging the source databases by means of integration
operators or general SQL queries, to reduce the set of tuples coming
from the source databases that are inconsistent with respect to the
constraints defined by the primary keys; ii) repairing the integrated
database by completing and/or cleaning the set of tuples that are
inconsistent with respect to the inclusion dependencies (e.g. foreign
keys); and iii) computing consistent answers over the repaired
databases, which may still be inconsistent with respect to the
functional dependencies. The complexity of merging, repairing and
computing consistent answers is shown to be polynomial, and a prototype
of a system integrating databases and computing queries over possibly
inconsistent databases is presented. |
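To illustrate just one corner of step ii), the sketch below applies a naive repair policy for primary-key conflicts: tuples that agree on the key are merged, and attributes on which the sources disagree are set to unknown. The paper's integration operators and repair semantics are considerably richer than this assumed policy.

    def repair_key_conflicts(tuples, key):
        """Merge tuples that agree on the primary key; attributes on
        which the sources conflict become None (unknown)."""
        merged = {}
        for t in tuples:
            k = t[key]
            if k not in merged:
                merged[k] = dict(t)
            else:
                for attr, val in t.items():
                    if merged[k].get(attr) != val:
                        merged[k][attr] = None   # conflicting fact
        return list(merged.values())

    source_a = [{"id": 1, "name": "Rossi", "dept": "R&D"}]
    source_b = [{"id": 1, "name": "Rossi", "dept": "Sales"},
                {"id": 2, "name": "Verdi", "dept": "HR"}]
    print(repair_key_conflicts(source_a + source_b, key="id"))
    # id 1 keeps name "Rossi" but dept becomes None; id 2 is untouched.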
|
Title: |
USING GAZETTEERS TO ANNOTATE GEOGRAPHIC CATALOG ENTRIES |
Author(s): |
Daniela F. Brauner, Marco A. Casanova, Karin K. Breitman and Luiz
André P. Leme |
Abstract: |
A gazetteer is a geographical dictionary
containing a list of geographic names, together with their geographic
locations and other descriptive information. A geographic metadata
catalog holds metadata describing geographic information resources,
stored in a wide variety of sources, ranging from simple PCs to large
public databases. This paper argues that unique characteristics of
geographic objects can be explored to address the problem of automating
the generation of metadata for geographic information resources. The
paper considers federations of gazetteers and geographic metadata
catalogs and discusses two problems in detail, namely, how to use
gazetteers to automate the description of geographic information
resources and how to align the thesauri used by gazetteers. The paper
also argues why such problems are important in the context of the
proposed architecture. |
|
Title: |
DATA WAREHOUSES: AN ONTOLOGY APPROACH |
Author(s): |
Alexandra Pomares Quimbaya and José Abásolo |
Abstract: |
Although dimensional design for data
warehouses has been used in a considerable number of projects, it does
have limitations of expressiveness, particularly with respect to what
can be said about the properties and restrictions of relations and
attributes. We present a new way to design data warehouses, based on
ontologies, that overcomes many of these limitations. In the proposed
architecture, descriptive ontologies are used to build the data
warehouse and taxonomic ontologies are used during the data preparation
phase. We discuss the expressive power of the ontology approach through
a semantic comparison with the dimensional model, both applied to a case
study. |
|
Title: |
DATA COMPLIANCE IN PHARMACEUTICAL INDUSTRY - INTEROPERABILITY TO
ALIGN BUSINESS AND INFORMATION SYSTEMS |
Author(s): |
Néjib Moalla, Abdelaziz Bouras, Gilles Neubert, Yacine Ouzrout and
Nicolas Tricca |
Abstract: |
The ultimate goal in the pharmaceutical sector
is product quality. However, this quality can be compromised by the use
of a number of heterogeneous information systems with different business
structures and concepts along the lifecycle of the product.
Interoperability is then needed to guarantee correspondence and
compliance between different product data. In this paper we focus on a
particular compliance problem between production technical data,
represented in an ERP, and the corresponding regulatory directives and
specifications, represented by the Marketing Authorizations (MA). The MA
detail the process for manufacturing a medicine according to the
requirements imposed by health organisations such as the Food and Drug
Administration (FDA) and the Committee for Medicinal Products for Human
Use (CHMP). The proposed approach uses an interoperability framework
based on a multi-layer separation between the organisational aspects,
business trades and information technologies for each entity involved
in the communication between the systems used. |
|
Title: |
MULTIDIMENSIONAL SCHEMA EVOLUTION - INTEGRATING NEW OLAP
REQUIREMENTS |
Author(s): |
Mohamed Neji, Ahlem Nabli, Jamel Feki and Faiez Gargouri |
Abstract: |
Multidimensional databases cconstitute an
effective support for OLAP processes; in that sense they improve the
decision-making in enterprise information systems. These databases
evolve with the decision marker requirements and are sensitive to data
source changes. In this paper, we are interested in the evolution of the
datamart schema due to the raise of new requirements. Our approach
determines what functional datamarts will be able to cover a new
requirement, if any, and decides on a strategy of integration. This
leads either to the alteration of an existing data mart schema or, to
the creation of a new schema suitable for the new requirement. |
|
Title: |
ENABLING ROBUSTNESS IN EXISTING BPEL PROCESSES |
Author(s): |
Onyeka Ezenwoye and S. Masoud Sadjadi |
Abstract: |
To promote efficiency and the reuse of
software, Web services are being integrated both within enterprises and
across enterprises to create higher-function services. BPEL is a
workflow language that can be used to facilitate this integration.
Unfortunately, the autonomous nature of Web services leaves BPEL
processes susceptible to the failures of their constituent services. In
this paper, we present a systematic approach to making existing BPEL
processes more fault tolerant by monitoring the involved Web services at
runtime and by replacing delinquent Web services dynamically. To show
the feasibility of our approach, we developed a prototype implementation
that automatically generates more robust BPEL processes from existing
ones. The use of the prototype is demonstrated on an existing loan
approval BPEL process. |
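A rough analogue of the monitor-and-replace idea, outside BPEL: a proxy tries functionally equivalent endpoints in turn and fails over when one is delinquent. The URLs and the simple sequential policy are placeholders, not the prototype's actual generation scheme.

    import urllib.request

    def robust_invoke(endpoints, payload, timeout=5):
        """Try functionally equivalent endpoints in turn, failing over
        when one is unreachable (placeholder URLs, toy policy)."""
        last_error = None
        for url in endpoints:
            try:
                req = urllib.request.Request(
                    url, data=payload,
                    headers={"Content-Type": "text/xml"})
                with urllib.request.urlopen(req, timeout=timeout) as resp:
                    return resp.read()
            except OSError as err:      # refused, timed out, DNS failure
                last_error = err        # mark delinquent, try the next one
        raise RuntimeError(f"all replicas failed: {last_error}")

    # A generated process would call the proxy instead of a fixed partner:
    # robust_invoke(["http://primary.example/loan",
    #                "http://backup.example/loan"],
    #               b"<soap:Envelope>...</soap:Envelope>")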
|
Title: |
BUSINESS PROCESS EMBEDDED INFORMATION SYSTEMS - FOR FLEXIBILITY AND
ADAPTABILITY |
Author(s): |
Marc Rabaey, Eddy Vandijck, Koenraad Vandenborre, Herman Tromp and
Martin Timmerman |
Abstract: |
In this ever faster changing world,
organisations are faced with the need for flexible processes. This is
only possible if these processes have full control over their supporting
information systems, which we propose to embed into the business
processes. |
|
Title: |
RELIABLE PERFORMANCE DATA COLLECTION FOR STREAMING MEDIA SERVICES |
Author(s): |
Beomjoo Seo, Michelle Covell, Mirjana Spasojevic, Sumit Roy, Roger
Zimmermann, Leonidas Kontothanassis and Nina Bhatti |
Abstract: |
The recent proliferation of streaming media
systems in both wired and wireless networks challenges network
operators to provide cost-effective streaming solutions that maximize
the usage of their infrastructure while maintaining adequate service
quality. Some of these goals conflict and motivate the development of
precise and accurate models that predict system states under
extremely diverse workloads on-the-fly. However, many earlier studies
have derived models and subsequent simulations that are well suited only
to a controlled environment, and hence explain a limited set of
behavioral singularities observed from software component profiles. In
this study we propose a more general, procedural methodology that
characterizes a single system’s streaming capacity and derives a
prediction model that is applicable to any type of workload imposed on
the measured system. We describe a systematic performance evaluation
methodology for streaming media systems that starts with the reliable
collection of performance data, presents a mechanism to calibrate the
data for later use during the modeling phase, and finally examines the
prediction power and the limitations of the calibrated data itself. We
validate our method on two widely used streaming media systems, and the
results indicate an excellent match between the modelled data and the
actual system measurements. |
|
Title: |
FILTERING UNSATISFIABLE XPATH QUERIES |
Author(s): |
Jinghua Groppe and Sven Groppe |
Abstract: |
Empty query results are a hint of
semantic errors in users’ queries, and erroneous and unoptimized queries
can lead to highly inefficient query processing. For manual
optimization, which is prone to errors, a user needs to be familiar with
the schema of the queried data and with the implementation details of the
query engine used. Thus, automatic optimization techniques have been
developed and used for decades in database management systems
in the deductive and relational worlds. We focus on the satisfiability
problem for queries formulated in the XML query language XPath. We
propose a schema-based approach to check whether or not an XPath query
conforms to the constraints given in the schema, in order to detect
semantic errors and to avoid unnecessary evaluations of
unsatisfiable queries. We present experimental results of our prototype,
which show the optimization potential of avoiding the evaluation of
unsatisfiable queries. |
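As a tiny slice of the satisfiability problem, the sketch below checks a child-axis-only XPath against a parent-to-allowed-children map such as one derived from a DTD. The full approach in the paper handles far more of XPath; this is only an assumed illustration.

    def satisfiable(path, schema, root):
        """Check a child-axis-only XPath (e.g. "/a/b/c") against a
        parent -> allowed-children map derived from a schema."""
        steps = path.strip("/").split("/")
        if steps[0] != root:
            return False
        for parent, child in zip(steps, steps[1:]):
            if child not in schema.get(parent, set()):
                return False   # the schema forbids this parent/child pair
        return True

    schema = {"library": {"book"}, "book": {"title", "author"}}
    print(satisfiable("/library/book/title", schema, "library"))  # True
    print(satisfiable("/library/title", schema, "library"))       # False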
|
Title: |
FUZZY XML MODEL FOR REPRESENTING FUZZY RELATIONAL DATABASES IN
FUZZY XML FORMAT |
Author(s): |
Alnaar Jiwani, Yasin Alimohamed, Krista Spence, Tansel Özyer and
Reda Alhajj |
Abstract: |
The Extensible Markup Language (XML) is
emerging as the dominant data format for data exchange between
application systems. Many translation techniques have been devised to
publish large amounts of existing conventional relational data in this
new format. There is also a need to be able to represent imprecise
data in both relational databases and XML. This paper describes a fuzzy
XML schema model for representing a fuzzy relational database in XML
format. It outlines a simple translation algorithm to include fuzzy
relations and similarity matrices with their associated conventional
relations. |
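A minimal sketch of such a translation, assuming an invented XML layout: each tuple of a fuzzy relation is serialized with its membership degree as an attribute. The paper's schema model also covers similarity matrices, which are omitted here.

    import xml.etree.ElementTree as ET

    def fuzzy_relation_to_xml(name, rows):
        """Serialize a fuzzy relation as XML, keeping each tuple's
        membership degree as an attribute (layout is invented)."""
        rel = ET.Element("relation", name=name)
        for row in rows:
            row = dict(row)                   # do not mutate the input
            mu = str(row.pop("membership", 1.0))
            tup = ET.SubElement(rel, "tuple", membership=mu)
            for attr, value in row.items():
                ET.SubElement(tup, attr).text = str(value)
        return ET.tostring(rel, encoding="unicode")

    rows = [{"name": "Alice", "age": "young", "membership": 0.8},
            {"name": "Bob",   "age": "old",   "membership": 0.4}]
    print(fuzzy_relation_to_xml("person", rows))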
|
Title: |
INTEGRATED UNIVERSITY INFORMATION SYSTEMS |
Author(s): |
Thomas Kudrass |
Abstract: |
In this position paper, we discuss the
integration of heterogeneous databases using the example of a
university information system, based on previous experiences in the
implementation of some of its components. The paper argues for the new
opportunities for universities resulting from database integration. We
develop the target architecture for an integrated information system
whose principle is the coupling of existing systems and the definition
of global views on them. The services defined on those views can be used
for high-level information services in the intranet of the university,
for internet presentations, or for the definition of workflows in the
university administration. |
|
Title: |
THE BENEFITS OF ACCURATE AND TIMELY DATA IN LEAN PRODUCTION
ENVIRONMENTS - RFID IN SUPPLY CHAIN MANAGEMENT |
Author(s): |
Vijay K. Vemuri |
Abstract: |
The usefulness of information systems
critically depends on the accuracy of the data contained within them.
Errors in capturing data into information systems are particularly
vexing, since these errors permeate the entire information system(s),
affecting every aspect of information use. The direct and indirect
consequences of unreliable data did not attract much attention as long
as there were few alternatives for reducing them. Newer technologies,
especially Radio Frequency Identification (RFID), are enabling the
virtual elimination of data entry errors in inventory management. We
investigate the effect of accurate data on the performance of supply
chains utilizing lean production systems. Our simulation results
indicate that the time to fulfil a purchase order (cycle time) is
significantly reduced by improving the quality of the inventory data.
The simulation model we developed will enable us to examine other
performance characteristics of a supply chain. We will also investigate
the sensitivity of supply chain performance to changes in the parameters
of the model. |
|
Title: |
DESIGN OF A MEDICAL DATABASE TRANSFORMATION ALGORITHM |
Author(s): |
Karine Abbas, Christine Verdier and André Flory |
Abstract: |
The aim of this article is to create a unique
medical record structure from the metabase of any medical record in any
care place. The work is divided into two parts: the first step consists
in creating a reference medical record model based on a graph structure
in which the first level is fixed and the other levels are changeable.
The second step is to provide transformation algorithms to translate the
legacy relational database (RDB) into the reference model, giving a
unique medical record structure. This second step analyses the
correlation between the legacy RDB keys and classifies the keys into
four types of relations: base relation, dependent relation, inheritance
relation and composite relation. |
|
Title: |
KEY FACTORS IN LEGACY SYSTEMS MIGRATION - A REAL LIFE CASE |
Author(s): |
Teta Stamati, Konstantina Stamati and Drakoulis Martakos |
Abstract: |
Although legacy systems migration as a subject
area is often overlooked in favour of areas such as new technology
developments and strategic planning of information technology, most
migration projects are considered ill-fated initiatives: over 80% of
these projects run over budget, frequently with system functionality
falling short of contract. Many practitioners consider that the
proposed theoretical migration approaches are myopic and do not take
into account a number of key factors that make a migration project a
really complex initiative. Our position is that throughout the life
cycle of a migration process there are some critical factors that
initially play the role of “drivers” and afterwards become “hinderers”
of the migration process. We consider these key factors as Critical
Success Factors (CSF) that must be carefully considered. Furthermore,
these key factors can be either overt or covert. In each case, the
migration engineers should consider and analyse them very carefully
prior to the initiation of the migration process, and a well-defined
migration methodological plan should be developed. The work presented
is based on a real-life initiative, putting emphasis on the key success
factors while revealing the complexity of a migration process. Emphasis
is placed on the required management view and planning effort, rather
than on mere technological issues. |
|
Title: |
ENTERPRISE SYSTEMS MANAGEMENT AND INNOVATION - IMPACT ON THE
RESEARCH AGENDA |
Author(s): |
Charles Møller |
Abstract: |
This paper proposes that ERP implementation
leads to a new post-implementation management challenge: Enterprise
Systems Management and Innovation. Enterprise Systems Management and
Innovation is a new concept that deals with the management of the
enterprise system as a resource, and as a potential for transforming the
organization by enabling innovative supply chain processes. The argument
is rooted in seven case studies, a survey on ERP adoption and a
retrospective analysis of the development of ES. This paper discusses
the emerging issues and the implications for management, and concludes
by outlining the impact on the ERP research agenda. |
|
Area 2 - Artificial Intelligence and
Decision Support Systems |
Title: |
DEVELOPMENT OF SUMMARIES OF CERTAIN PATTERNS IN MULTI-BAND
SATELLITE IMAGES |
Author(s): |
Hema Nair |
Abstract: |
This paper describes a system that is designed
and implemented for the interpretation of some patterns in multi-band
(RGB) satellite images. Patterns such as land, islands, water bodies,
rivers, fires and urban area settlements in remote-sensed images are
extracted and summarised in linguistic terms using fuzzy sets. Some
elements of supervised classification are introduced to assist in the
development of the linguistic summaries. A few LANDSAT images are
analysed by the system and the resulting summaries of the image patterns
are explained. |
|
Title: |
UTILIZATION OF CASE-BASED REASONING IN AUDITING - DETERMINING THE
AUDIT FEE |
Author(s): |
Robert Zenzerović |
Abstract: |
Case-based reasoning is a method for problem
solving and decision-making support that is based on previous business
experience: it uses cases from the past to solve new problems. A case
can be defined as a conceptualized piece of knowledge representing an
experience that teaches a lesson fundamental to achieving the goals of
the decision maker. Cases usually incorporate input features and output
features, where the input features represent important attributes of a
case that affect decision making (the situation part of the case), and
the output feature is the outcome that depends on the input features
(the solution part of the case). Case-based reasoning works as follows:
once a target case with its input features is entered into the system,
the system retrieves the most similar cases from the case base. After
the most similar case has been retrieved, it can be used to find
interesting information. The reasoner can then adjust and send a new
probe with different features to retrieve a new case, or the system can
be designed to make automatic adjustments to the solution part of the
old case on the basis of differences in the situation parts of the
cases, providing the solution for the new case. Many studies have tried
to explain the types and impact of the different factors that determine
audit fees. Most authors concentrate their research on the impact of
the following determinants: auditee size, auditee complexity, auditee
profitability, ownership control, timing variables, auditor location
and auditor size. In this paper all the mentioned factors are described
except auditor size and location, since these factors are not
significant in the Croatian audit service market. All significant audit
fee determinants will be appropriately quantified in order to build a
case-based reasoning model for determining the audit fee for small and
mid-sized auditing firms in Croatia, as well as for similar firms in
other countries. |
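The retrieval step described above is easy to make concrete. The
following minimal Python sketch is illustrative only; the feature
names, weights and fee values are hypothetical, not the paper's model:

    # Minimal CBR retrieval: return the past case whose input features
    # are closest to the target case (weighted Euclidean distance).
    def distance(target, case, weights):
        return sum(w * (target[f] - case["input"][f]) ** 2
                   for f, w in weights.items()) ** 0.5

    def retrieve(target, case_base, weights):
        # The solution part of the winner (the audit fee) can then be
        # reused, or adjusted for differences in the situation parts.
        return min(case_base, key=lambda c: distance(target, c, weights))

    case_base = [
        {"input": {"size": 120.0, "complexity": 3.0, "profitability": 0.08},
         "solution": {"audit_fee": 15000.0}},
        {"input": {"size": 40.0, "complexity": 1.0, "profitability": 0.12},
         "solution": {"audit_fee": 6000.0}},
    ]
    weights = {"size": 1.0, "complexity": 2.0, "profitability": 5.0}
    target = {"size": 100.0, "complexity": 2.0, "profitability": 0.10}
    print(retrieve(target, case_base, weights)["solution"]["audit_fee"])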
|
Title: |
BUILDING COMPETITIVE ADVANTAGE VIA CRM BASED ON DATA WAREHOUSE AND
DATA MINING |
Author(s): |
Jiejun Huang, Wei Cui and Yanbin Yuan |
Abstract: |
Customer Relationship Management (CRM) is a new
business concept, providing a novel approach for managing the
relationships between a corporation and its customers towards maximal
profitability and sustainability. Data mining and data warehousing are
useful information technologies, providing powerful means for
extracting and utilizing business information from historical data
resources and runtime data flows. This paper reviews the objectives,
functionalities, and development trends of CRM, discusses the
architecture, data model and development methodologies of CRM systems
based on data warehouses and data mining, then outlines the applications
of integrated CRM systems in decision making, including business
administration, marketing, customer service, customer management, and
credit evaluation. Finally, it describes some problems and challenges
for further research. |
|
Title: |
VARIOUS PROCESS WIZARD FOR INFORMATION SYSTEMS - USING FUZZY PETRI
NETS FOR PROCESS DEFINITION |
Author(s): |
Jaroslav Prochazka, Jaroslav Knybel and Cyril Klimes |
Abstract: |
A new approach in information system
automation is process or workflow management. For unskilled users it is
important that the business processes of the company are described;
according to this description, users are then led correctly through
their work. The business (application) model can be captured in finite
state machines and their variations, and a Petri net can be used for
process definition in a process wizard. Sometimes an unclear state
occurs, and fuzzy logic IF-THEN rules can be used to describe it. We
explain what a process wizard is and what it should contain, and
outline how it could be implemented in IS QI. We also
introduce Petri nets with a fuzzy approach for process description. |
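As a rough illustration of how an unclear state can be handled with
fuzzy IF-THEN rules (a sketch with invented memberships and rules, not
the IS QI implementation):

    # Two fuzzy rules deciding the next wizard step; inputs are on a
    # 0..1 scale and the rule base is invented for illustration.
    def mu_low(x):
        return max(0.0, min(1.0, (0.5 - x) / 0.5))

    def mu_high(x):
        return max(0.0, min(1.0, (x - 0.5) / 0.5))

    def next_step(completeness, urgency):
        # IF completeness is low AND urgency is high THEN escalate
        escalate = min(mu_low(completeness), mu_high(urgency))
        # IF completeness is high THEN proceed
        proceed = mu_high(completeness)
        return "escalate" if escalate > proceed else "proceed"

    print(next_step(0.2, 0.9))  # prints "escalate"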
|
Title: |
DIAGNOSIS OF DEMENTIA AND ITS PATHOLOGIES USING BAYESIAN BELIEF
NETWORKS |
Author(s): |
Julie Cowie, Lloyd Oteniya and Richard Coles |
Abstract: |
The use of artificial intelligence techniques
in medical decision support systems is becoming more commonplace. By
incorporating a method to represent expert knowledge, such systems can
aid the user in aspects such as disease diagnosis and treatment
planning. This paper reports on the first part of a project addressing
the diagnosis of individuals with dementia. We discuss two systems,
DemNet and PathNet, developed to aid accurate diagnosis of both the
presence of dementia and the pathology of the disease. |
|
Title: |
DECISION SUPPORT ON THE MOVE - MOBILE DECISION MAKING FOR TRIAGE
MANAGEMENT |
Author(s): |
Julie Cowie and Paul Godley |
Abstract: |
This paper describes research investigating
ways in which a mobile decision support system might be implemented. Our
view is that the mobile decision maker will be better supported if
he/she is aware of the Quality of the Data (QoD) used in deriving a
decision, and how QoD improves or deteriorates while he/she is on the
move. We propose a QoD model that takes into account static and dynamic
properties of the mobile decision making environment, uses multicriteria
decision analysis to represent the user’s decision model and to derive a
single QoD parameter, and investigates the use of powerful graphics to
relay information to the user. |
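A common way to derive such a single parameter with multicriteria
decision analysis is weighted additive aggregation; the sketch below
uses hypothetical criteria and weights, not the authors' model:

    # Aggregate static and dynamic quality criteria (scored in [0, 1])
    # into one QoD value; criteria and weights are hypothetical.
    def qod(scores, weights):
        total = sum(weights.values())
        return sum(weights[c] * scores[c] for c in weights) / total

    scores = {"timeliness": 0.6, "accuracy": 0.9, "completeness": 0.7,
              "signal_strength": 0.4}  # the last varies as the user moves
    weights = {"timeliness": 2.0, "accuracy": 3.0, "completeness": 2.0,
               "signal_strength": 1.0}
    print(round(qod(scores, weights), 3))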
|
Title: |
FREQUENCY CALIBRATIONS WITH CONVENTIONAL TIME INTERVAL COUNTERS VIA
GPS TRACEABILITY |
Author(s): |
Juan José González de la Rosa, Isidro Lloret, Carlos García
Puntonet, Juan Manuel Górriz, Antonio Moreno, Matías Liñán and Victor
Pallarés |
Abstract: |
The calculation of uncertainty in traceable
frequency calibrations is detailed using low-cost, partially
characterized instruments. Contributions to the standard uncertainty
have been obtained under the assumption of a uniform probability density
function of errors. Short-term instability has been studied using
non-classical statistics. A thorough study of the characterization of
noise processes is made with simulated data by means of our variance
estimators. The experiment is designed for frequencies close to 1 Hz. |
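For reference, the standard result behind the uniform-distribution
assumption (general metrology practice, not specific to this paper): if
an error is only known to be bounded by \pm a, modelling it as uniform
on (-a, a) gives the standard-uncertainty contribution

    u(x) = \sqrt{\operatorname{Var}[U(-a,a)]}
         = \sqrt{\frac{(2a)^{2}}{12}} = \frac{a}{\sqrt{3}}.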
|
Title: |
SIMULATION MODELLING OF IRON ORE PRODUCTION SYSTEM AND QUALITY
MANAGEMENT |
Author(s): |
Jim E. Everett |
Abstract: |
Iron ore is railed several hundred kilometres
from open-cut mines inland, to port facilities, then processed to lump
and fines products, blended and the lump product re-screened ready for
shipment, mainly to Asia. Customers use the ore as principal feed in
steel production. Increasing demand and price, especially from China,
requires expansion of existing mines and planning of new operations.
Expansion planning of the operational logistics, maintaining acceptable
product quality, has been greatly helped by simulation modelling
described in this paper. |
|
Title: |
A DISTRIBUTED ALGORITHM FOR COALITION FORMATION IN LINEAR
PRODUCTION DOMAIN |
Author(s): |
Chattrakul Sombattheera and Aditya Ghose |
Abstract: |
This paper proposes a coalition formation
algorithm for cooperative agents in order to maximize the system's
profit. |
|
Title: |
METHOD FOR DRAWING UP A ROAD MAP THAT CONSIDERS THE SYNERGY EFFECT
AND RISK FOR IT INVESTMENT |
Author(s): |
Tadasuke Nakagawa, Shigeyuki Tani, Masaharu Akatsu and Norihisa
Komoda |
Abstract: |
IT governance lacks a comprehensive vision of
investment in two or more projects. It is necessary to decide the
priority levels that maximize the effects under constrained conditions.
It is a complex problem, because while sometimes a greater effect can be
obtained by introducing two or more measures at the same time, other
times the effect of two measures introduced at the same time might not
be significant. Although there is a synergy effect when two or more
measures are introduced, no method for drawing up an investmentdecision
road map has considered that effect. Therefore, we developed one. What a
decision-maker must think about when considering the introduction of two
or more measures, can be visualized by drawing up a comprehensive road
map that satisfies constraint conditions, such as the effectiveness of
the measure, budget, time, staff size, order of introduction, and the
synergy effect. Road map users can easily reach a consensus because the
map, by taking into account the constraint conditions and the investment
decisionmaking process, helps them logically explain the order in which
the measures should be introduced. |
|
Title: |
A SEMI-AUTOMATED QUALITY ASSURANCE TOOLBOX FOR DIAGNOSTIC
RADIOLOGICAL IMAGING |
Author(s): |
Christodoulos Constantinou, Andreas Grondoudis, Andreas
Christoforou, Christakis Constantinides and Andreas Lanitis |
Abstract: |
Magnetic Resonance (MRI), Computed Tomography
(CT) and Ultrasound (US) are three of the most commonly used clinical
imaging modalities. The aim of this study was to establish a Quality
Assurance program for MRI, CT and US scanners. A well-designed quality
assurance program is of utmost importance in the clinical setting,
because it indicates whether diagnostic imaging modalities meet the
minimum criteria of acceptable performance and because it helps
determine those scanner parameters that need adjustment in order to
ensure optimum performance. Quality assurance programs that rely on
manual data collection and analysis are tedious and time consuming and
are often abandoned due to the significant workload required for their
implementation. In this paper we describe an integrated software system
for automating the process of data collection and management in Quality
Assurance for diagnostic radiological imaging. The developed system
comprises two main units: the Image Processing Unit (IPU) and the
Data Management Unit (DMU). The IPU is used for analysing images from
different diagnostic modalities in order to extract measurements. The
IPU is dynamically linked to the DMU so that measurements are
transferred directly to the DMU. This process allows the generation of
quality assurance reports for all such modalities. Based on the proposed
system, it is possible to apply and monitor systematic quality assurance
programs for medical imaging equipment, ensuring compliance with
international standards. |
|
Title: |
THE USE OF THE NATURAL LANGUAGE UNDERSTANDING AGENTS WITH
CONCEPTUAL MODELS |
Author(s): |
Olegas Vasilecas and Algirdas Laukaitis |
Abstract: |
In this paper we present an AI agent architecture
for natural language user interfaces in the data exploration domain. We
present an evaluation of an intelligent interface where the user tries
to explore corporate databases by means of natural language. More
specifically, we describe an experiment that evaluates the IBM natural
language toolbox in the data exploration domain. Unsatisfactory
results from that experiment triggered our research to improve the
natural-language user interface at the architecture and algorithm
levels. We extend traditional natural language interfaces in the data
exploration domain in two directions: 1) data conceptual modelling is a
keystone of a successful intelligent interface, and we present our
results and arguments for one of the most successful conceptual data
models, the IBM financial services data model (FSDM); 2) we suggest
using feedforward neural networks as concept indexes in natural
language user interfaces. All presented concepts are realized as the
open source project JMining Dialog. |
|
Title: |
KNOWLEDGE ENGINEERING USING THE UML PROFILE - ADOPTING THE
MODEL-DRIVEN ARCHITECTURE FOR KNOWLEDGE-BASED SYSTEM DEVELOPMENT |
Author(s): |
Mohd Syazwan Abdullah, Richard Paige, Ian Benest and Chris Kimble |
Abstract: |
Knowledge engineering (KE) activities are
essential to the process of building intelligent systems; in KE,
conceptual modelling is exploited so that the problem-solving techniques used may
be understood. This paper discusses platform independent conceptual
modelling of a knowledge intensive application, focusing on
knowledge-based systems (KBS) in the context of a model-driven
architecture (MDA). An extension to the Unified Modeling Language (UML),
using its profile extension mechanism, is presented. The profile
discussed in this paper has been successfully implemented in the
eXecutable Modelling Framework (XMF) – a Meta-Object-Facility (MOF)
based UML tool. The Ulcer Clinical Practical Guideline Recommendations
demonstrate the use of this profile; the prototype is implemented in the
Java Expert System Shell (Jess). |
|
Title: |
INCREMENTAL PROCESSING OF TEMPORAL OBSERVATIONS IN SUPERVISION AND
DIAGNOSIS OF DISCRETE-EVENT SYSTEMS |
Author(s): |
Gianfranco Lamperti and Marina Zanella |
Abstract: |
Observations play a major role in supervision
and diagnosis of discrete-event systems (DESs). In a distributed,
large-scale setting, the observation of a DES over a time interval is
not perceived as a totally-ordered sequence of observable labels but,
rather, as a directed acyclic graph, under uncertainty conditions.
Problem solving, however, requires generating a surrogate of such a
graph, the index space. Furthermore, the observation hypothesized so far
has to be integrated at the reception of a new fragment of observation.
This translates to the need for computing a new index space every time.
Since such a computation is expensive, a naive generation of the index
space from scratch at the occurrence of each observation fragment
becomes prohibitive in real applications. To cope with this problem, the
paper introduces an incremental technique for efficiently modeling and
indexing temporal observations of DESs. |
|
Title: |
QUALITY LEARNING OBJECTS MANAGEMENT - A PROPOSAL FOR E-LEARNING
SYSTEMS |
Author(s): |
Erla Morales, Ángela Barrón and Francisco García |
Abstract: |
Web development is bringing important
advantages to the educational area, especially e-learning systems. On
the one hand, Learning Objects (LOs) offer the possibility to reuse
specific information; on the other hand, they can be interchanged
across different contexts and platforms according to the user's needs.
The possibility to access, reuse and interchange information is a great
advantage for our information society, but it is not enough: there is
an urgent need to guarantee the quality of LO content. There exists a
plethora of quality criteria for evaluating digital resources, but
there are only a few suggestions about how to evaluate LOs in order to
structure quality courses. This work proposes quality learning object
management for e-learning systems. Our proposal consists of a system to
evaluate LOs as a continuous process, taking into account quality
criteria related to metadata information, especially the educational
category, together with a strategy to ensure the continued quality of
LO contents. To achieve this, we first propose our own LO definition in
order to manage LOs in a uniform way. After that, we suggest relating
LO metadata information to quality criteria through an instrument which
contains different kinds of categories and evaluation criteria. To
promote more reliable results, we suggest an evaluation strategy which
considers the participation of experts and users. |
|
Title: |
INTRODUCING INTELLIGENT AGENTS TO OUTSOURCING |
Author(s): |
Hemal Kothari, Bernadette Sharp, Luke Ho and Anthony Atkins |
Abstract: |
In the last few years, agent technology has
emerged as a significant new paradigm for software developers to solve
complex problems. This paper extends the use of multi-agent systems into
the new domain of outsourcing. It highlights the various issues
associated with outsourcing decision making, i.e., the complexity and
the risks involved in outsourcing. The paper outlines the HABIO
framework, which proposes a tri-perspective approach focusing on the
organisational, information and business perspectives to facilitate
outsourcing decision-making and the formulation of an effective
outsourcing strategy. The main focus of this paper is to describe how
agents can assist the experts in their decision to support outsourcing.
A call-centre scenario illustrating a 3-layered agent architecture is
proposed, which aims to capture the strategic, tactical, and
communicational layers of outsourcing and supports the experts in their
outsourcing decision-making. |
|
Title: |
KNOWLEDGE MANAGEMENT FOR RAMP-UP - APPROACH FOR KNOWLEDGE
MANAGEMENT FOR RAMP-UP IN THE AUTOMOTIVE INDUSTRY |
Author(s): |
Sven Thiebus, Ulrich Berger and Ralf Kretzschmann |
Abstract: |
Enterprises in the automotive industry are
facing new challenges from increasing product diversification,
decreasing product life cycle times and a permanent need for cost
reduction. The ramp-up, as the linking phase between the development
phase and the production phase, plays a crucial role in the success of
a project. The performance of a ramp-up depends on the maturity of the
product and of the manufacturing processes. Knowledge management is an
extraordinary driver of maturity for both product and manufacturing
process, yet the existing solutions for knowledge management show
insufficient results. The new approach is based on the cycle of
organizational learning, which consists of four phases: socialization,
externalization, combination and internalization, and is also known as
the SECI cycle. It provides opportunities to improve ramp-up
performance in the automotive industry. Part of the new approach is a
sophisticated concept for a solution using Information Technology as an
enabler for knowledge management. |
|
Title: |
SADIM: AN AID SYSTEM FOR MANAGEMENT ENGINEERING DIAGNOSIS USING
KNOWLEDGE EXTRACTION AND MATCHING TECHNIQUES |
Author(s): |
Jamel Kolsi, Lamia Hadrich Belguith, Mansour Mrabet and Abdelmajid
Ben Hamadou |
Abstract: |
This paper describes an aid system for
management engineering diagnosis, "Système d’Aide au Diagnostic
d’Ingénierie de Management" (SADIM), the aim of which is to detect
dysfunctions related to enterprise management. This system allows the
acquisition of knowledge based on textual data (given in French)
related to the diagnosis, and the matching and assignment of witness
sentences to the key ideas that correspond to them. SADIM can also
serve as part of a decision aid system, as it includes carrying out
diagnoses that can help experts and socio-economic management
consultants take decisions that would make enterprises reach the
required standards through consulting interventions. |
|
Title: |
IMPLEMENTATION STRATEGIES FOR “EQUATION GURU” - A USER FRIENDLY
INTELLIGENT ALGEBRA TUTOR |
Author(s): |
Senay Kafkas, Zeki Bayram and Huseyin Yaratan |
Abstract: |
We describe the implementation strategies of an
intelligent algebra tutor, the “Equation Guru” (EG), which is designed
to help students learn the concepts of equation solving with one
unknown. EG provides a highly interactive and entertaining learning
environment through the use of Microsoft Agents. It consists of two main
parts. The first is the “Tutorial” part where students guided through
the steps of solving equations with one unknown. The second, “Drill and
Practice” part gives them a chance to practice their skills in equation
solving. In this part, equations are automatically geratated by EG, and
presented to the student. EG monitors the student’s performance and
adjusts the difficulty level of the equations accordingly. |
|
Title: |
DYNAMIC REPRESENTATION OF INFORMATION FOR A DECISION SUPPORT SYSTEM |
Author(s): |
Thierry Galinho, Michel Coletta, Patrick Person and Frédéric Serin |
Abstract: |
This paper presents a system designed to help
decision makers manage crisis situations. The system represents, characterises
and interprets the dynamic evolution of information describing a given
situation and displays the results of its analysis. The core of the
system is made up of three multiagent systems (MAS): one MAS for the
static and dynamic representation of the information (current
situation), the second MAS for dynamically regrouping sets of agents of
the former MAS and the upper MAS for matching results between the second
MAS and scenarios stored in the persistent memory of the system in order
to have a deeper analysis of the situation. The case based reasoning of
this last MAS sends its results to the user as a view of the current
situation linked to some views of similar situations. In this paper, we
focus on the information-representation MAS. This MAS is dynamic
in order to be able to take into account the changes in the description
of the information. Current information is represented by a layer of
factual agents which is fed by the composite semantic features
constituting the atomic data elements of information. The aim of the set
of factual agents is both to be a real snapshot of the situation at any
time and to model the evolution. |
|
Title: |
DECISION SUPPORT SYSTEM FOR BREAST CANCER DIAGNOSIS BY A
META-LEARNING APPROACH BASED ON GRAMMAR EVOLUTION |
Author(s): |
Albert Fornells-Herrera, Elisabet Golobardes-Ribé, Ester
Bernadó-Mansilla and Joan Martí-Bonmatí |
Abstract: |
The incidence of breast cancer varies greatly
among countries, but statistics show that every year 720,000 new cases
will be diagnosed world-wide. However, only a low percentage of the
women who suffer from it can be detected using mammography methods.
Therefore, it is necessary to develop new strategies to detect its
formation in early stages. Many machine learning techniques have been
applied in order to help doctors in the diagnosis decision process, but
their definition and application are complex, and the results obtained
are often not the desired ones. In this article we present an automatic
way to build decision support systems by means of the combination of
several machine learning techniques using a Meta-learning approach
based on Grammar Evolution (MGE). We study its application over
different mammographic datasets to assess the improvement of the
results. |
|
Title: |
DISCOVERING THE STABLE CLUSTERS BETWEEN INTERESTINGNESS MEASURES |
Author(s): |
Xuan-Hiep Huynh, Fabrice Guillet and Henri Briand |
Abstract: |
In this paper, we interested in finding the
different aspects existing in data sets via the evaluation of the
behavior of interestingness measures. This approach is an important step
in the process of post-processing the discovered knowledge in the form
of association rules. We used two data sets with different
characteristics of each and also investigated in the examination of the
two best rules data sets extracted from these two original data sets.
Our results are acceptable because of the high quantity of association
rules in the data sets, approximately $100000$ rules for
post-processing. Furthermore, this approach strongly participates in the
domain of knowledge quality research. |
|
Title: |
A FOUNDATION FOR INFORMED NEGOTIATION |
Author(s): |
John Debenham and Simeon Simoff |
Abstract: |
Approaches to the construction of agents that
are to engage in competitive negotiation are often founded on game
theory. In such an approach the agents are endowed with utility
functions and assumed to be utility optimisers. In practice the utility
function is derived in the context of massive uncertainties both in
terms of the agent's priorities and of the raw data or information. To
address this issue we propose an agent architecture that is founded on
information theory, and that manages uncertainty with entropy-based
inference. Our negotiating agent engages in multi-issue bilateral
negotiation in a dynamic information-rich environment. The agent strives
to make informed decisions. The agent may assume that the integrity of
some of its information decays with time, and that a negotiation may
break down under certain conditions. The agent makes no assumptions
about the internals of its opponent; it focuses only on the signals
that it receives. It constructs two probability distributions over the
set of all deals: first, the probability that its opponent will accept
a given deal, and second, the probability that a deal will in time
prove acceptable to the agent itself. |
|
Title: |
PREDICTING CARDIOVASCULAR RISKS - USING POSSUM, PPOSSUM AND NEURAL
NET TECHNIQUES |
Author(s): |
Thuy Nguyen Thi Thu and Darryl N. Davis |
Abstract: |
Neural Networks are broadly applied in a number
of fields such as cognitive science, diagnosis, and forecasting. Medical
decision support is one area of increasing research interest. Ongoing
collaborations between cardiovascular clinicians and computer scientists
are looking at the application of neural networks (and other data mining
techniques) to the area of individual patient diagnosis, based on
clinical records (from Hull and Dundee sites). The current research
looks to advance initial investigations in a number of ways. Firstly,
through a rigorous analysis of the clinical data, using data mining and
statistical tools, we hope to be able to extend the usefulness of much
of the clinical data set. Problems with the data include differences in
attribute presence and use across different sites, and missing values.
Secondly we look to advance the classification of referred patients with
different outcome through the rigorous use of POSSUM, PPOSSUM and both
supervised and unsupervised neural net techniques. Through the use of
different classifiers, a better clinical diagnostic support model may be
built. |
|
Title: |
PERSONALIZED INCENTIVE PLANS THROUGH EMPLOYEE PROFILING |
Author(s): |
Silverio Petruzzellis, Oriana Licchelli, Ignazio Palmisano,
Giovanni Semeraro, Valeria Bavaro and Cosimo Palmisano |
Abstract: |
Total reward management (TRM) is a holistic
practice that addresses the growing need in organizations for the
involvement and motivation of workers. It is oriented towards pushing
the use of Information Technology to support the improvement of both
organizational and individual performance, by understanding employee
needs and by designing customized incentives and rewards. Customization
is very common in the area of e-commerce, where the application of
profiling and recommendation techniques makes it possible to deliver
personalized recommendations to users who explicitly allow the site to
store personal information such as preferences or demographic data. Our
work focuses on the application of user profiling techniques in the
Total Reward Management context. In the Team Advisor project we
experimented with the analogies Customer/Employee, Product
Portfolio/Reward Library and Shop/Employer, in order to provide
personalized reward recommendations to line managers. We found that the
adoption of a collaborative software platform delivering a preliminary
reward plan to the managers fosters collaboration and actively supports
the delegation of decision-making. |
|
Title: |
BENEFICIAL SEQUENTIAL COMBINATION OF DATA MINING ALGORITHMS |
Author(s): |
Mathias Goller, Markus Humer and Michael Schrefl |
Abstract: |
Depending on the goal of an instance of the
Knowledge Discovery in Databases (KDD) process, some instances require
more than a single data mining algorithm to determine a solution.
Sequences of data mining algorithms offer room for improvement that is
as yet unexploited. If it is known that an algorithm is the first of a
sequence of algorithms and that there will be future runs of other
algorithms, the first algorithm can determine intermediate results that
the succeeding algorithms need. The anteceding algorithm can also
determine helpful statistics for succeeding algorithms. As the
anteceding algorithm has to scan the data anyway, computing intermediate
results happens as a by-product of computing the anteceding algorithm's
result. On the one hand, a succeeding algorithm can save time because
several steps of that algorithm have already been pre-computed. On the
other hand, additional information about the analysed data can improve
the quality of results, such as the accuracy of classification, as
demonstrated in experiments with synthetic and real data. |
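The idea in miniature, as a hedged Python sketch (the data and the
choice of precomputed statistics are illustrative): while the first
algorithm performs the scan it needs anyway, it accumulates statistics
a successor can reuse instead of rescanning.

    # One pass over the data: the first algorithm does its own work on
    # each row and, as a by-product, collects counts and per-attribute
    # sums that a succeeding algorithm can start from.
    def first_pass(rows):
        n, sums = 0, None
        for row in rows:
            n += 1
            sums = row[:] if sums is None else [s + x
                                                for s, x in zip(sums, row)]
            # ... the first algorithm's own processing of `row` here ...
        return {"count": n, "means": [s / n for s in sums]}

    print(first_pass([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]))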
|
Title: |
APPLICATION OF THE ROUGH SET METHOD FOR EVALUATION OF STRUCTURAL
FUNDS PROJECTS |
Author(s): |
Tadeusz A. Grzeszczyk |
Abstract: |
The main subject of the present paper is the
presentation of a concept for applying rough set theory to the
evaluation of structural funds projects. The author presents a scheme
of classification algorithms based on the rough set approach. This
algorithm can be used for the problem of classifying project
proposals. |
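The core rough-set constructions involved are standard and compact; a
minimal Python sketch with hypothetical project data (not the author's
algorithm):

    # Objects are grouped into indiscernibility classes by attribute
    # values; the lower approximation of a decision class collects
    # blocks fully inside it, the upper those that intersect it.
    from collections import defaultdict

    def approximations(objects, attributes, target):
        blocks = defaultdict(set)
        for name, values in objects.items():
            blocks[tuple(values[a] for a in attributes)].add(name)
        lower, upper = set(), set()
        for block in blocks.values():
            if block <= target:
                lower |= block
            if block & target:
                upper |= block
        return lower, upper

    projects = {"p1": {"budget": "ok", "impact": "high"},
                "p2": {"budget": "ok", "impact": "high"},
                "p3": {"budget": "bad", "impact": "low"}}
    accepted = {"p1", "p3"}
    print(approximations(projects, ["budget", "impact"], accepted))
    # lower = {p3}; upper = {p1, p2, p3}: p1 and p2 are indiscernible.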
|
Title: |
EFFICIENT MANAGEMENT OF NON REDUNDANT RULES IN LARGE PATTERN BASES:
A BITMAP APPROACH |
Author(s): |
François Jacquenet, Christine Largeron and Cédric Udréa |
Abstract: |
Knowledge Discovery from Databases has more and
more impact nowadays, and various tools are now available to extract
knowledge from huge databases efficiently (in time and memory space).
Nevertheless, those systems generally produce large pattern bases, and
the management of these bases rapidly becomes intractable. Few works
have focused on pattern base management systems, and research in that
domain is quite new. This paper comes within that context, dealing with
a particular class of patterns, namely association rules. More
precisely, we present the way we have efficiently implemented the
search for non-redundant rules thanks to a representation of rules in
the form of bitmap arrays. Experiments show that the use of this
technique yields dramatic gains in time and space, allowing us to
manage large pattern bases. |
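The appeal of bitmap arrays here is that subset tests between itemsets
reduce to single bitwise operations. A minimal sketch follows, using
Python integers as bitmaps; the rule data and the exact redundancy
condition used by the paper's system are assumptions:

    # Map items to bit positions so an itemset becomes an integer
    # bitmask; then "A is a subset of B" is just A & B == A.
    def to_mask(itemset, index):
        m = 0
        for item in itemset:
            m |= 1 << index[item]
        return m

    def subset(a, b):
        return a & b == a

    def covers(r1, r2):
        # r1 makes r2 redundant: smaller antecedent, larger consequent,
        # identical support and confidence (one common definition).
        return (subset(r1["ant"], r2["ant"])
                and subset(r2["cons"], r1["cons"])
                and r1["supp"] == r2["supp"] and r1["conf"] == r2["conf"])

    index = {item: i for i, item in enumerate("abcde")}
    r1 = {"ant": to_mask("a", index), "cons": to_mask("bc", index),
          "supp": 0.2, "conf": 0.9}
    r2 = {"ant": to_mask("ab", index), "cons": to_mask("c", index),
          "supp": 0.2, "conf": 0.9}
    print(covers(r1, r2))  # True: r2 is redundant given r1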
|
Title: |
INTEGRATING FUZZY LOGIC IN ONTOLOGIES |
Author(s): |
Silvia Calegari and Davide Ciucci |
Abstract: |
Ontologies have proved to be very useful for
sharing concepts across applications in an unambiguous way. Nowadays, in
ontology-based applications information is often vague and imprecise.
This is a well-known problem, especially for semantics-based
applications such as e-commerce, knowledge management, web portals,
etc. In computer-aided reasoning, the predominant paradigm for managing
vague knowledge is fuzzy set theory. This paper presents an enrichment
of classical computational ontologies with fuzzy logic to create fuzzy
ontologies. It is thus a step towards facing the nuances of natural
languages with ontologies. Our proposal is developed in the KAON
ontology editor, which allows ontology concepts to be handled in a
high-level environment. |
|
Title: |
A KNOWLEDGE-BASED REVERSE DESIGN SYSTEM FOR DECLARATIVE SCENE
MODELING |
Author(s): |
Vassilios Golfinopoulos, Vassilios Stathopoulos, George Miaoulis
and Dimitri Plemenos |
Abstract: |
Declarative modeling allows the designer to
describe a scene without the need to define the geometric properties.
The MultiCAD architecture implements the declarative forward design,
accepting a declarative description and generating a set of geometric
solutions that meet the description. The aim of the presented work is to
settle the reverse design process through the RS-MultiCAD component in
order to extend MultiCAD declarative conception cycle to an automated
iterative process. RS-MultiCAD receives a selected geometric
solution, which is semantically understood, permits the designer to
perform geometric and topological modifications on the scene, and
produces a declarative description that embodies the designer's
modifications. That declarative description leads to more promising
solutions by pruning the initial solution space. |
|
Title: |
AUTOMATIC IDENTIFICATION OF NEGATED CONCEPTS IN NARRATIVE CLINICAL
REPORTS |
Author(s): |
Lior Rokach, Roni Romano and Oded Maimon |
Abstract: |
Substantial amounts of medical data, such as
discharge summaries and operative reports, are stored in textual form.
Databases containing free-text clinical narrative reports often need to
be searched to find relevant information for clinical and research
purposes. Terms that appear in these documents tend to appear in
different contexts. The context of negation, a negative finding, is of
special importance, since many of the most frequently described findings
are those denied by the patient or subsequently “ruled out.” Hence, when
searching free-text narratives for patients with a certain medical
condition, if negation is not taken into account, many of the documents
retrieved will be irrelevant. In this paper we examine the
applicability of machine learning methods for the automatic
identification of negative context patterns in clinical narrative
reports. We suggest two new simple algorithms and compare their
performance with standard machine learning techniques such as neural
networks and decision trees. The proposed algorithms significantly
improve the performance of information retrieval done on medical
narratives. |
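For flavour, a toy negation-context detector in the spirit of
regular-expression baselines such as NegEx (the trigger list and word
window are invented; this is not one of the two algorithms proposed in
the paper):

    import re

    # Negation triggers and a fixed word window after each trigger.
    TRIGGERS = r"\b(no|denies|denied|without|ruled out|negative for)\b"

    def negated(concept, sentence, window=6):
        for m in re.finditer(TRIGGERS, sentence, re.IGNORECASE):
            following = " ".join(sentence[m.end():].split()[:window])
            if concept.lower() in following.lower():
                return True
        return False

    print(negated("chest pain",
                  "The patient denies chest pain on exertion."))
    # True: "chest pain" falls inside the window opened by "denies".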
|
Title: |
DESIGN AND IMPLEMENTATION OF A FUZZY EXPERT DECISION SUPPORT SYSTEM
FOR VENDOR SELECTION - CASE STUDY IN OIEC IRAN(OIL INDUSTERIAL
ENGINEERING AND CONSTRUCTION) |
Author(s): |
Maryam Ramezani and G. A. Montazer |
Abstract: |
Supplier selection and evaluation is a
complicated multi-objective process with many uncertain factors.
Sealed-bid evaluation is the most common approach for supplier
selection in Iran. In this paper, a fuzzy expert decision support
system is developed for solving the vendor selection problem with
multiple objectives, in which some of the parameters are fuzzy in
nature. The basic factors considered for supplier selection are price,
quality and delivery time. The system has been designed, implemented
and evaluated in a leading company, and the results are discussed. |
|
Title: |
COLLABORATIVE FILTERING BASED ON CONTENT ADDRESSING |
Author(s): |
Shlomo Berkovsky, Yaniv Eytani and Larry Manevitz |
Abstract: |
Collaborative Filtering (CF) is one of the most
popular recommendation techniques. It is based on the assumption that
people with similar tastes prefer similar items. One of the major
drawbacks of CF is its limited scalability, as its complexity grows
linearly with both the number of available users and the number of
items. This work proposes a new fast variant of CF employed over a
multi-dimensional content-addressable space. Our approach heuristically
decreases the computational effort required by the CF algorithm by
limiting the search process only to potentially relevant users.
Experimental results demonstrate that our approach is able to generate
predictions with high accuracy while significantly improving performance
in comparison with the traditional implementation of the CF. |
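For orientation, plain user-based CF looks as follows (a sketch over a
hypothetical ratings dictionary); the paper's speed-up amounts to
restricting `others` to near neighbours found through the
content-addressable space instead of scanning all users:

    from math import sqrt

    def cosine(u, v):
        common = set(u) & set(v)
        if not common:
            return 0.0
        num = sum(u[i] * v[i] for i in common)
        den = (sqrt(sum(x * x for x in u.values()))
               * sqrt(sum(x * x for x in v.values())))
        return num / den

    def predict(target, item, others):
        # Similarity-weighted average of neighbours' ratings for item.
        num = den = 0.0
        for ratings in others:
            if item in ratings:
                s = cosine(target, ratings)
                num, den = num + s * ratings[item], den + abs(s)
        return num / den if den else None

    target = {"i1": 5, "i2": 3}
    others = [{"i1": 4, "i2": 3, "i3": 4}, {"i1": 1, "i3": 2}]
    print(predict(target, "i3", others))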
|
Title: |
SEMI INTERACTIVE METHOD FOR DATA MINING |
Author(s): |
Lydia Boudjeloud and François Poulet |
Abstract: |
Usual visualization techniques for
multidimensional data sets, such as parallel coordinates and
scatter-plot matrices, do not scale well to high numbers of dimensions.
A common approach to solving this problem is dimensionality selection.
We present a new semi-interactive method for dimensionality selection
that selects pertinent dimension subsets without losing information.
Our cooperative approach uses automatic algorithms, interactive
algorithms and visualization methods: an evolutionary algorithm is used
to obtain optimal dimension subsets which represent the original data
set without losing information for unsupervised tasks (clustering or
outlier detection), using a new validity criterion. A visualization
method is used to present the interactive evolutionary algorithm's
results to the user and to let the user actively participate in the
evolutionary search, resulting in faster convergence. We have
implemented our approach and applied it to real data sets to confirm
that it is effective in supporting the user in the exploration of
high-dimensional data sets and to evaluate the visual data
representation. |
|
Title: |
KNOWLEDGE-BASED MODELING AND NATURAL COMPUTING FOR COORDINATION IN
PERVASIVE ENVIRONMENTS |
Author(s): |
Michael Cebulla |
Abstract: |
In this paper we start with the assumption that
coordination in complex systems can be understood in terms of presence
and location of information. We propose a modeling framework which
supports an integrated view of these two aspects of coordination (which
we call knowledge diffusion). To this end we employ methods from
ontological modeling, modal logics, fuzzy logic and membrane computing.
In particular, we treat two extreme cases of knowledge diffusion: knowledge
processing with extensive semantic support and the exchange of
uninterpreted messages. In the first case systems behavior is considered
as multi-model transformation where aspects of situations are described
by knowledge bases which are manipulated according to transformation
rules from membrane computing. In the second case however we exploit the
special features of our architecture in order to integrate bio-inspired
coordination mechanisms which rely on the exchange of molecules (i.e.
uninterpreted messages). |
|
Title: |
A LOAD BALANCING SCHEDULING APPROACH FOR DEDICATED MACHINE
CONSTRAINT |
Author(s): |
Arthur M. D. Shr, Alan Liu and Peter P. Chen |
Abstract: |
The dedicated photolithography machine
constraint in semiconductor manufacturing is one of the new issues in
photolithography machinery due to natural bias. Under this constraint,
the wafers passing through each photolithography process have to be
processed on the same machine. The purpose of the limitation is to
prevent the impact of natural bias. However, many scheduling policies
or modeling methods proposed by previous research on semiconductor
manufacturing production did not discuss the dedicated machine
constraint. In this paper, we propose a Load Balancing (LB) scheduling
approach based on a Resource Schedule and Execution Matrix (RSEM) to
tackle this constraint. LB schedules each wafer lot at the first
photolithography stage to a suitable machine based on load-balancing
factors among machines. We describe the algorithm of our proposed LB
scheduling approach and the RSEM in the paper. We also present an
example to demonstrate our approach and the results of simulations to
validate it. |
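The first-stage assignment rule can be pictured in a few lines of
Python (machine eligibility, loads and lot sizes are hypothetical; the
paper's RSEM-based algorithm is richer):

    # Dedicate a lot, at its first photolithography stage, to the
    # eligible machine with the lightest current load; the lot then
    # stays on that machine for all its photolithography steps.
    def assign(lot, machines, load):
        eligible = [m for m in machines if lot["recipe"] in machines[m]]
        chosen = min(eligible, key=lambda m: load[m])
        load[chosen] += lot["size"]
        return chosen

    machines = {"M1": {"r1", "r2"}, "M2": {"r1"}, "M3": {"r2"}}
    load = {"M1": 50, "M2": 20, "M3": 10}
    print(assign({"recipe": "r1", "size": 25}, machines, load))  # M2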
|
Title: |
ONTOLOGY-DRIVEN INFORMATION INTEGRATION - NETWORKED ORGANISATION
CONFIGURATION |
Author(s): |
Alexander Smirnov, Tatiana Levashova and Nikolay Shilov |
Abstract: |
Distributed networks of independent companies
(networked organisations) are currently of high interest. This new
organisational form provides for flexibility, tolerance, etc. that are
necessary in the current market situation characterised by increasing
competition and globalisation. Configuration of a networked organisation
is a strategic task that requires intelligent decision support and
integration of various tasks constituting the configuration problem.
Achieving efficient integration of tasks is possible when it is done
taking into account semantics. The paper proposes an approach to this
problem based on ontology-driven knowledge integration. The knowledge in
the approach is represented using the formalism of object-oriented
constraint networks. This formalism simplifies problem formulation and
interpretation since most of the tasks in the areas of configuration and
management are constraint satisfaction tasks. The paper describes the
developed approach and the ontological model that is the core of the
approach. Application of the developed approach is demonstrated at two
levels: (a) at the level of information integration within one company
and (b) at the level of information integration across a networked
organisation. |
|
Title: |
A LOGIC-BASED APPROACH TO SEMANTIC INFORMATION EXTRACTION |
Author(s): |
Massimo Ruffolo and Marco Manna |
Abstract: |
Recognizing and extracting meaningful
information from unstructured documents, taking into account their
semantics, is an important problem in the field of information and
knowledge management. In this paper we describe a novel logic-based
approach to semantic information extraction, from both HTML pages and
flat text documents, implemented in the HiLex system. The approach is
founded on a new two-dimensional representation of documents, and
heavily exploits DLP+ - an extension of disjunctive logic programming
for ontology representation and reasoning, which has been recently
implemented on top of the DLV system. Ontologies, representing the
semantics of information to be extracted, are encoded in DLP+, while the
extraction patterns are expressed using regular expressions and an ad
hoc two-dimensional grammar. The execution of DLP+ reasoning modules,
encoding the HiLex grammar expressions, yields the actual extraction of
information from the input document. Unlike previous systems, which are
merely syntactic, HiLex combines both semantic and syntactic knowledge
for powerful information extraction. |
|
Title: |
KNOWLEDGE MANAGEMENT NOVEL APPLICATIONS |
Author(s): |
Vasso Stylianou and Andreas Savva |
Abstract: |
Knowledge Management (KM) is a process through
which an enterprise gathers, organizes, shares, and analyzes the
knowledge of individuals and groups across the organization in ways that
directly affect performance. Numerous businesses have implemented KM
systems in an effort to achieve commercial effectiveness. This paper has
collected information about a number of KM systems developed and used by
modern businesses. It then presents the development steps leading to the
implementation of a Web Content Management System to be used as a
Research Management System. This will manage the acquisition, analysis,
preservation and utilization of knowledge regarding various research
projects - including proposed projects, ongoing projects and finalized
projects - and research-related emails. |
|
Title: |
COALITION FORMATION WITH UNCERTAIN TASK EXECUTION |
Author(s): |
Hosam Hanna |
Abstract: |
We address the problem of coalition formation
in environments where task execution is uncertain. While previous works
provide good solutions to the coalition formation problem, they take
into account neither uncertain task execution nor the effects of
forming a coalition on future possible formations. In environments
where task execution is uncertain, an agent cannot be sure whether he
will be able to execute all the subtasks allocated to him or will have
to ignore some of them. That is why forming coalitions to maximize the
real reward is an unrealizable operation. In this paper, we propose a
theoretical approach to coalition formation with uncertain task
execution. We view the formation of a coalition to execute a task as
(1) a decision to make and (2) an uncertain source of gain. We then
associate the allocation of a task to a coalition with an expected
reward that represents what the agents expect to gain by forming this
coalition to execute this task. The agents' aim is thus to form
coalitions that maximize the expected reward instead of the real
reward. To reach this objective, we formalize the coalition formation
problem as a Markov Decision Process (MDP). We consider the situation
where decisions are taken by one agent that develops and solves the
corresponding MDP. An optimal coalition formation which maximizes the
agents' expected reward is then obtained. |
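The central quantity is easy to state in code: allocate the task to
the coalition with the best expected (not realized) reward. A one-step
sketch with invented numbers follows; the paper's contribution is to
chain such decisions in an MDP.

    # Expected reward of allocating a task to a coalition: success
    # probability times reward, minus coalition cost. Numbers invented.
    def expected_reward(c):
        return c["p_success"] * c["reward"] - c["cost"]

    candidates = [
        {"members": ("a1", "a2"), "p_success": 0.90, "reward": 100,
         "cost": 30},
        {"members": ("a1", "a2", "a3"), "p_success": 0.99, "reward": 100,
         "cost": 45},
    ]
    best = max(candidates, key=expected_reward)
    print(best["members"], expected_reward(best))
    # ('a1', 'a2') 60.0 -- the extra member's cost outweighs the higher
    # success probability.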
|
Title: |
TOWARDS A COMPLETE DATA MANAGEMENT FRAMEWORK BASED ON INTELLIGENT
AGENTS |
Author(s): |
Iulian Alexandru Negroiu, Octavian Paul Rotaru and Mircea Petrescu |
Abstract: |
Applications are more and more complex, and the
volume of information is growing exponentially nowadays. Under these
conditions a huge amount of information needs to be processed while the
processing time and power should be kept to a minimum. The increasing
amount of data transferred over the Internet and other networks, which
are open to a large number of clients, is reflected in the growth of
distributed information systems. Also, there is a multitude of servers
distributed among remote locations that serve the same purposes. In
these cases, traditional models of distributed computing, caching,
concurrency control, etc. are becoming less appropriate for overcoming
the actual efficiency problems and for supporting the development of
complex applications. We believe that intelligent and autonomous agents
can solve this problem. As a solution to the above problems, this paper
opens research towards a complete intelligent-agent-based data
management framework. The possible areas that can be handled using
agents are identified and discussed, together with the required agents
and agencies, while attempting to provide a bird’s eye architectural
view of the proposed framework. |
|
Title: |
A MULTI-AGENT ARCHITECTURE FOR MOBILE SELF-TRAINING |
Author(s): |
Mourad Ennaji, Hadhoum Bouhachour and Patrick Gravé |
Abstract: |
This article is the result of an
interdisciplinary meeting between sociologists and didacticians on the
one hand and computer scientists on the other. The common action of
these two research laboratories within the framework of this
collaboration is to develop the theoretical and methodological
principles for the design of a training environment, putting the needs
and difficulties of the student at the center of the design process and
of the data-processing modeling. Designing a virtual tutor, called a
“teaching agent”, in a distance-learning system implies the
implementation of a flexible and adaptive system. We propose a
multi-agent, multi-layer architecture able to initiate training and to
manage teaching and individualized follow-up. |
|
Title: |
DATA MINING AS A NEW PARADIGM FOR BUSINESS INTELLIGENCE IN DATABASE
MARKETING PROJECTS |
Author(s): |
Filipe Pinto, Pedro Gago and Manuel Filipe Santos |
Abstract: |
One of the most interesting business management
challenges is to integrate automation and systematisation processes in
order to get insights or trends for decision support activities.
Today, information technologies provide not only the ability to
collect and register in databases many kinds of signals (relevant
segments of information) external to the organization, but also the
capacity to use them in different ways at different organizational
levels. Database Marketing (DBM) refers to the use of database
technology to support marketing activities in order to establish and
maintain a profitable interaction with clients. Currently DBM is usually
approached using classical statistical inference, which may fail when
complex, multi-dimensional, and incomplete data is available. An
alternative is to apply Data Mining (DM) techniques in a process called
Knowledge Discovery from Databases (KDD), which aims at automatic
pattern extraction. This will help marketers to address customer needs
based on what they know about customers, rather than a mass
generalization of their characteristics. This paper exploits a
systematic approach for the use of DM techniques as a new paradigm in
Business Intelligence (BI) in DBM projects, considering analytical and
marketing aspects. A cross-table is proposed to associate DBM activities
to the appropriate DM techniques. This framework guides the development
of DBM projects, contributing to improve their efficacy and efficiency. |
|
Title: |
MULTI-CRITERIA EVALUATION OF INFORMATION RETRIEVAL TOOLS |
Author(s): |
Nishant Kumar, Jan Vanthienen, Jan De Beer and Marie-Francine Moens |
Abstract: |
We propose a generic methodology for the
evaluation of Text Mining/Search and Information Retrieval tools based
on their functional conformity to a predefined set of functional
requirements prioritized by distinguishable user profiles. The
methodology is worked out and applied within the context of a research
project concerning the assessment of intelligent exploitation tools for
unstructured information sources in the police domain. We present the
general setting of our work, give an overview of our evaluation
approach, and discuss our methodology for testing in greater detail.
These kinds of evaluations are particularly useful for both
(potential) purchasers of exploitation tools, given the high investments
in time and money required in becoming proficient in their use, and
developers who aim at producing better quality software products. |
|
Title: |
AROUND THE EMPIRICAL AND INTENTIONAL REFERENCES OF AGENT-BASED
SIMULATION IN THE SOCIAL SCIENCES |
Author(s): |
Nuno David and Helder Coelho |
Abstract: |
The difficulties in constructing and analyzing
simulations of social theory and phenomena, even the most simplified,
have been underlined in the literature. The experimental reference of
simulation remains ambiguous, insofar as the logic of its method turns
computer programs into something more than a tool in the social
sciences, defining them as the experimental object itself. The goal of
this paper is to construct a methodological perspective that is able to
conciliate the formal and empirical logic of program verification in
computer science, with the interpretative and multiparadigmatic logic of
the social sciences. We demonstrate that the method of simulation
implies at least two distinct types of program verifications, which we
call empirical and intentional verification. By demonstrating that it is
the intentional verification of programs that is contingent upon both
the behaviors of the programs and the social phenomena, we clarify the
experimental reference of simulation. |
|
Title: |
LOGICRUNCHER - A LOGISTICS PLANNING AND SCHEDULING DECISION SUPPORT
SYSTEM FOR EMERGING EMS AND 3PL BUSINESS PRACTICES |
Author(s): |
Raymund J. Lin, Jack Huang, Norman Sadeh-Koniecpol and Benjamin
Tsai |
Abstract: |
LogiCruncher is a dynamic logistics planning
and scheduling module developed to support emerging third party
logistics practices. Given information about inventory profiles for
different product types at different locations, a set of transportation
assets as well as a variety of quotes and contractual arrangements with
logistics service providers, the system is capable of generating or
revising transportation plans and schedules that meet changing customer
requirements. These requirements are expressed in the form of demands
for delivering different types of SKUs in different quantities to
different locations. The system is capable of capturing a rich set of
domain constraints and costs. It can be used to support the development
and dynamic revision of solutions as well as to support requests for
quotes from prospective customers. This includes support for “what-if”
analysis through the creation and manipulation of solutions in different
contexts, each corresponding to possibly different sets of assumptions.
This paper provides an overview of LogiCruncher and summarizes results
of initial evaluation experiments. |
|
Title: |
AN EXTENDABLE JAVA FRAMEWORK FOR INSTANCE SIMILARITIES IN
ONTOLOGIES |
Author(s): |
Mark Hefke, Valentin Zacharias, Andreas Abecker, Qingli Wang, Ernst
Biesalski and Marco Breiter |
Abstract: |
Ontologies, Similarity, Knowledge Management,
Case-Based Reasoning |
|
Title: |
SIMILARITY MEASURES FOR SKILL-PROFILE MATCHING IN ENTERPRISE
KNOWLEDGE MANAGEMENT |
Author(s): |
Ernst Biesalski and Andreas Abecker |
Abstract: |
At DaimlerChrysler’s truck plant in Wörth /
Rhein, we are currently implementing a comprehensive IT solution for
integrated and synergistic processes in personnel development. In this
paper, we sketch some ontology-based software modules, as well as
their interdependencies and synergies, which support streamlined,
integrated and comprehensive personnel-development processes. A central
element in the software architecture is ontology-based similarity
assessment for skill-profile matching, which is discussed using the
example of software-supported project staffing. |
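Stripped of the ontology, profile matching reduces to comparing
required and offered skill levels; the sketch below (hypothetical
skills and levels) only matches identical skills, whereas an
ontology-based measure would also give partial credit to semantically
related ones:

    # Fraction of the required profile covered by the offered profile,
    # each skill capped at the required level. All data hypothetical.
    def profile_match(required, offered):
        score = sum(min(offered.get(skill, 0), level) / level
                    for skill, level in required.items())
        return score / len(required)

    required = {"java": 3, "sap": 2, "english": 2}
    offered = {"java": 2, "english": 3}
    print(round(profile_match(required, offered), 2))  # 0.56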
|
Title: |
SOURCE SENSITIVE ARGUMENTATION SYSTEM |
Author(s): |
Chee Fon Chang, Peter Harvey and Aditya Ghose |
Abstract: |
There exist many approaches to agent-based
conflict resolution. Of particular interest are approaches which adopt
argumentation as their underlying conflict resolution machinery. In most
argumentation systems, the argument source plays a minimal role. We feel
that ignoring this important attribute of the human argumentation
process reduces the capabilities of current argumentation systems. This
paper focuses on the importance of the information source in
argumentation, extending this to the notion of the credibility of
agents and its effect on agent decision making during argumentation. |
|
Title: |
SELECTING AND STRUCTURING SEMANTIC RESOURCES TO SUPPORT SMES
KNOWLEDGE COMMUNITIES |
Author(s): |
António Lucas Soares, Manuel Moreira da Silva and Dora Simões |
Abstract: |
Knowledge management intrinsically involves
communication and information sharing, which can be strongly affected by
the context in which they are viewed and interpreted. This situation
gets worse when complex domains are considered, as is the case of the
Construction Industry domains. The development of ontologies to unify
and put into context the different concepts and terms of the sometimes
rather traditional and locally coloured construction industry domains
is a necessary step to avoid misinterpretations and inefficient
communication. As an approach to this task, the KNOW-CONSTRUCT project
decided to re-use, as far as possible, existing ontologies,
classification systems and other semantic resources in order to develop
a system that may contribute to standards and to the integration,
management and reuse of area-specific knowledge via a common knowledge
base, consolidating and providing access to integrated knowledge and
making community emergent knowledge a significant added value. The
project aims at developing a methodology of common Construction
Industry Knowledge (CIK) representation applicable to large sets of
SMEs in the construction industry as a basis for the establishment of a
knowledge community. |
|
Title: |
FUZZY INTERVAL NUMBER (FIN) TECHNIQUES FOR INFORMATION RETRIEVAL |
Author(s): |
Catherine Marinagi, Theodoros Alevizos, Vassilis G. Kaburlasos and
Christos Skourlas |
Abstract: |
A new method to handle problems of Information
Retrieval (IR) and related applications is proposed. The method is
based on Fuzzy Interval Numbers (FINs), introduced in fuzzy system
applications. A definition, an interpretation and a computation
algorithm for FINs are presented, and the framework for using FINs in
IR is given. An experiment showing the anticipated importance of these
techniques in Cross Language Information Retrieval (CLIR) is
presented. |
|
Title: |
LOCATING KNOWLEDGE THROUGH AUTOMATED ORGANIZATIONAL CARTOGRAPHY
[AUTOCART] |
Author(s): |
Mounir Kehal, Sandrine Crener and Patrice Sargenti |
Abstract: |
The post-globalization era has placed
businesses everywhere in new and different competitive situations,
where knowledgeable, effective and efficient behaviour has come to
provide the competitive and comparative edge. Enterprises have turned
to explicit, and even to conceptualising tacit, Knowledge Management to
elaborate a systematic approach to developing and sustaining the
Intellectual Capital needed to succeed. To be able to do that, one has
to be able to visualize the organization as consisting of nothing but
knowledge and knowledge flows, presented in a graphical and visual
framework referred to as automated organizational cartography. This
creates the ability to actively classify existing organizational
content, evolving from and within data feeds, in an algorithmic manner,
potentially yielding insightful schemes and dynamics by which
organizational know-how is visualised. The most recent and applicable
definitions and classifications of knowledge management are discussed
and elaborated on, representing a wide range of views from the
mechanistic (systematic, data driven) to the more socially
(psychologically, cognitive/metadata driven) orientated. More elaborate
continuum models, for knowledge acquisition and reasoning purposes, are
used to effectively represent the domain of information that an end
user may draw on in their decision-making process when utilizing the
available organizational intellectual resources. |
|
Title: |
GEOSPATIAL SEMANTIC QUERY BASED ON CASE-BASED REASONING SYSTEM |
Author(s): |
Kay Khaing Win |
Abstract: |
In today’s fast-growing information age,
currently available methods for finding and using information on the
Web are often insufficient. Today’s retrieval methods are typically
limited to keyword searches or sub-string matches, so users may often
miss critical information when searching the web. After reviewing the
real-world Semantic Web, additional research is needed on the
Geospatial Semantic Web. We are rich in geospatial data but poor in
up-to-date geospatial information and knowledge that is ready to be
used by anyone who wants to use it. In this paper, we implement a
framework for geospatial semantic query based on a case-based reasoning
system that contributes to the development of the geospatial semantic
web. It is important to establish geospatial semantics that support
effective spatial reasoning for performing geospatial semantic queries.
Compared to earlier keyword-based and information retrieval techniques
that rely on syntax, we use semantic approaches in our spatial
queries. |
|
Title: |
SOME SPECIAL HEURISTICS FOR DISCRETE OPTIMIZATION PROBLEMS |
Author(s): |
Boris Melnikov, Alexey Radionov and Viktor Gumayunov |
Abstract: |
In a previous paper we considered some
heuristic methods of decision-making for various discrete optimization
problems; taken in combination, these heuristics form a common
multi-heuristic approach to such problems. In this paper, we begin to
consider local heuristics, which differ from problem to problem. First,
we consider two minimization problems: for nondeterministic finite
automata and for disjunctive normal forms. Our approach can be
considered an alternative to the methods of linear programming,
multi-agent optimization, and neural networks. |
|
Title: |
A METHOD BASED ON THE ONTOLOGY OF LANGUAGE TO SUPPORT CLUSTERS’
INTERPRETATION |
Author(s): |
Wagner Francisco Castilho, Gentil José de Lucena Filho, Hércules
Antonio do Prado and Edilson Ferneda |
Abstract: |
The cluster analysis process comprises two
broad activities: generating a set of clusters and extracting meaning
from these clusters. The first refers, typically, to the application of
algorithms that estimate high-density areas separated by lower-density
areas in the observed space. In the second, the analyst goes inside the
clusters trying to make some sense of them. The whole activity is
strongly dependent on prior knowledge and carries a considerable burden
of subjectivity. In previous works, some alternatives were proposed to
take background knowledge into account when creating the clusters.
However, the subjectivity of the interpretation activity continues to be
a challenge to creating knowledge from cluster analysis. Beyond the
sound domain knowledge demanded of the specialists, the consolidation of
a consensual interpretation depends on a conversational competence for
which no support has been provided. In this paper we propose a method
for cluster interpretation based on the categories existing in the
Ontology of Language, aiming to reduce the gap between a cluster
configuration and the effective extraction of meaning from them. |
|
Area 3 - Information Systems Analysis and
Specification |
Title: |
KEY-PROBLEM AND GOAL DRIVEN REQUIREMENTS ENGINEERING - WHICH
COMPLEMENTARITIES FOR MANUFACTURING INFORMATION SYSTEMS? |
Author(s): |
Virginie Goepp and François Kiefer |
Abstract: |
The development of manufacturing information
systems involves various stakeholders who are not specialists in
information systems. The challenge for methods supporting such projects
is therefore to provide models that are understandable to all people
involved, yet conceptual enough to support the alignment between the
business, information system and manufacturing strategies of the
company. The use of problem-based models, stemming from dialectical
approaches, is effective for understandability and coarse strategic
analysis, but it is limited by project size. In contrast, goal-driven
requirements engineering approaches make it possible to tackle large
projects and detailed strategic analyses, but they are limited by the
difficulty of dealing with the fuzzy concept of a goal. It would
therefore be valuable to combine the strengths of these two approaches.
This paper first presents a problem-driven approach for manufacturing
information systems. It consists of a key-problem framework and a set of
steps to exploit it. The assumption made is to base requirements
elicitation on the problems encountered by the stakeholders. Then its
matching with goal driven requirements engineering is shown and the
complementarities between these two approaches are drawn and further
discussed. |
|
Title: |
TOWARDS A CONTEXTUAL MODEL-DRIVEN DEVELOPMENT APPROACH FOR WEB
SERVICES |
Author(s): |
Zakaria Maamar, Karim Baïna, Djamal Benslimane, Nanjangud C.
Narendra and Mehdi Chelbabi |
Abstract: |
This paper discusses how we develop and apply a
contextual model-driven approach to Web services. A Web service is
defined using WSDL, published in a UDDI registry, and invoked through a
SOAP request. To deploy adaptable Web services, we consider the
environment wherein these Web services operate. This environment's
features are made available in a structure, which we refer to as
context. By adopting a contextual model-driven approach, we aim at
developing contextual specifications of Web services. To this end,
ContextUML, an extension of UML by means of a UML profile, permits the
development of such contextual specifications and is discussed in this
paper. |
|
Title: |
STRUCTURED APPROACH FOR THE INTRODUCTION OF INFORMATION SERVICES
INTO THE PRIVATE SOCIAL SOLIDARITY INSTITUTIONS |
Author(s): |
Alexandra Queirós and Nelson Rocha |
Abstract: |
The paper presents an overview of a methodology
for the introduction of information technologies into the social
solidarity institutions, helping to overcome a complex set of barriers.
The methodology is based on a set of good practices (intended to be
valid for different types of institutions while considering the specific
needs of each one) and an electronic social record with a generic
information model able to embrace all types of information necessary to
the institutions, yet adjustable to meet the requirements of each
institution, service or care provider. |
|
Title: |
A PRACTICAL EXPERIENCE WITH NDT - THE SYSTEM TO MEASURE THE GRADE
OF HANDICAP |
Author(s): |
Maria J. Escalona, Dario Villadiego, Javier J. Gutiérrez, Jesus
Torres and Manuel Mejías |
Abstract: |
The necessity of applying technological
advances in the medical environment is an unquestionable fact. In recent
years, important applications of new technologies in medical systems
have been presented to help doctors or to facilitate evaluation,
treatment or even the relationship between doctor and patient. However,
there is sometimes an important gap in the development of these new
systems. The specific and complex features of the medical environment
often complicate the communication between doctors, when they require a
new system, and experts in computer science. This work introduces a
methodological proposal to specify and analyze software systems. Its
main goal is to ease communication between end users, customers and the
development team. The report presents our practical experience in
applying this methodology to a real system that measures the grade of
handicap in patients in accordance with Spanish law. |
|
Title: |
SUPPORTING METHODS OF GENERATING ALTERNATIVE SCENARIOS FROM A
NORMAL SCENARIO |
Author(s): |
Atsushi Ohnishi |
Abstract: |
A method for generating alternative scenarios
from a normal scenario written in the scenario language SLAF is
proposed. This method includes (1) generation of alternative plans and
(2) generation of alternative scenarios through a user's selection of
these plans. Our method lessens the omission of possible alternative
scenarios in the early stages of development and contributes to
improving the correctness and effectiveness of software development. |
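A toy sketch of the two-step idea (the scenario content and alternative
plans are invented; the real method operates on SLAF scenarios): step
(1) attaches alternative plans to events of the normal scenario, and
step (2), the user's selection, is approximated here by enumerating the
candidate combinations.

```python
import itertools

normal = ["insert card", "enter PIN", "withdraw cash", "take card"]

# Step 1: alternative plans per event (hard-coded assumptions here).
alternatives = {
    "enter PIN": ["enter PIN", "PIN rejected; re-enter PIN"],
    "withdraw cash": ["withdraw cash", "insufficient funds; abort"],
}

def candidate_scenarios():
    # every event keeps its normal plan unless alternatives exist
    options = [alternatives.get(e, [e]) for e in normal]
    for combo in itertools.product(*options):
        yield list(combo)

# Step 2: in the real method a user selects among these candidates.
for s in candidate_scenarios():
    print(s)
```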
|
Title: |
A FEATURE COMPUTATION TREE MODEL TO SPECIFY REQUIREMENTS AND REUSE |
Author(s): |
Ella E. Roubtsova and Serguei A. Roubtsov |
Abstract: |
A large subset of the requirements for complex
systems, services and product lines is traditionally specified by
hierarchical structures of features. Features are usually gathered and
represented in the form of a feature tree. However, a feature tree is a
structural model: it mainly represents composition and specialization
relations between features and does not make it possible to specify
requirements in the form of ordering relations defined on functional
features. Ordering relations on features are traditionally specified
with use case scenarios. Because use cases comprise isolated scenarios
or sequences of features, they can be inconsistent, and may even
contradict each other and the feature tree. Moreover, some use cases
defining relations on features may be incomplete. To support consistent
specification of requirements, we suggest accompanying a feature tree
model with a feature computation tree model. This pair of related
feature tree models provides the basis for the method of consistency
checks of requirements that we propose. It introduces a unified view of
the system's behaviour at the requirements specification step and
facilitates the specification of forbidden sequences and the
construction of complete sequences from incomplete ones. It allows
designers to precisely specify the desired reuse and to find out when a
certain sort of reuse is not possible. Understanding already at the
requirements engineering step that a subsystem cannot be reused without
modification saves effort and money spent on development. The proposed
method and models are explained by a case study of the design of a
system for the production of electronic cards. |
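A hypothetical sketch of how such a pair of models could support the
consistency check described above (feature names and trees are
invented): the feature tree fixes which features exist, the computation
tree fixes the admissible orderings, and a use-case scenario is
consistent only if it follows a path of the computation tree over known
features.

```python
FEATURES = {"insert card", "enter PIN", "withdraw", "print receipt"}

# Computation tree: each feature maps to the features allowed next.
COMPUTATION_TREE = {
    "insert card": ["enter PIN"],
    "enter PIN": ["withdraw", "print receipt"],
    "withdraw": ["print receipt"],
    "print receipt": [],
}

def consistent(scenario, root="insert card"):
    if not scenario or scenario[0] != root:
        return False
    for current, nxt in zip(scenario, scenario[1:]):
        if nxt not in FEATURES or nxt not in COMPUTATION_TREE.get(current, []):
            return False          # unknown feature or forbidden ordering
    return True

print(consistent(["insert card", "enter PIN", "withdraw"]))   # True
print(consistent(["insert card", "withdraw", "enter PIN"]))   # False
```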
|
Title: |
A FORMAL ARCHITECTURE-CENTRIC MODEL-DRIVEN APPROACH FOR THE
AUTOMATIC GENERATION OF GRID APPLICATIONS |
Author(s): |
David Manset, Hervé Verjus, Richard McClatchey and Flavio Oquendo |
Abstract: |
This paper discusses the concept of
model-driven software engineering applied to the Grid application
domain. As an extension of this concept, the approach described here
attempts to combine the formal architecture-centric and model-driven
paradigms. It is commonly recognized that Grid systems have seldom been
designed using formal techniques, although past experience shows that
such techniques have advantages. This paper advocates a formal
engineering approach to Grid system development in an effort to
contribute to the rigorous development of Grid software architectures.
This approach addresses quality of service and cross-platform
development by applying the model-driven paradigm to a formal
architecture-centric engineering method. This combination benefits from
the descriptive power of formal semantics in addition to model-based
transformations. The resulting novel combined concept promotes the
re-use of design models and facilitates development in Grid
computing. |
|
Title: |
MEDIS – A WEB BASED HEALTH INFORMATION SYSTEM - IMPLEMENTING
INTEGRATED SECURE ELECTRONIC HEALTH RECORD |
Author(s): |
Snezana Sucurovic |
Abstract: |
In many countries there are initiatives for
building an integrated patient-centric electronic health record, as well
as initiatives for transnational integration. These growing demands for
integration result from the fact that integration can improve healthcare
treatment and reduce the cost of healthcare services. While in highly
developed European countries computerisation of the healthcare sector
began in the 70s and has reached a high level, some developing
countries, Serbia and Montenegro among them, have started
computerisation only recently. This is why MEDIS (MEDical Information
System) aims at integration from the very beginning, instead of
integrating heterogeneous information systems in a middle layer or via
the HL7 protocol. MEDIS has been implemented as a federated system where
the central server hosts basic EHCR information about a patient, and
clinical servers contain their own parts of patients' EHCRs. Clinical
servers are connected to the central server through the Internet, and
the system can be accessed through a browser from any place with an
Internet connection. A user also has to have a public key certificate to
be able to log in. As health data are highly sensitive, MEDIS implements
solutions from recent years, such as Public Key Infrastructure and
Privilege Management Infrastructure, SSL and Web Service security, as
well as pluggable, XML-based access control policies.
|
|
Title: |
A VIEWPOINTS MODELING FRAMEWORK BASED ON EPISTEMIC LOGIC |
Author(s): |
Min Jiang and Guoqin Wu |
Abstract: |
RE can be considered a process of knowledge
representation, knowledge acquisition and knowledge analysis. The
viewpoints approach expects the stakeholders in a complex system to
describe it from their own perspectives and then generate a more
complete requirements specification. Because of this very
characteristic, several stakeholders may describe the same problem, and
these overlapping requirements are a source of inconsistency. This paper
puts forward a requirements modeling framework based on problem domains
and viewpoints. We interpret it and reason about it with epistemic logic
in order to achieve the following goals: 1) to make requirements more
structured; 2) to help stakeholders formally discover inconsistent
overlapping requirements. |
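As a toy approximation of the epistemic reading (the paper's framework
is richer; the viewpoints and propositions below are invented),
overlapping requirements can be checked pairwise for contradiction: a
conflict arises when one viewpoint asserts a proposition that another
denies.

```python
# Each stakeholder "knows" a set of signed propositions.
viewpoints = {
    "clerk":   {("invoice_needs_approval", True)},
    "manager": {("invoice_needs_approval", False), ("audit_log", True)},
    "auditor": {("audit_log", True)},
}

def conflicts(vps):
    found = []
    items = list(vps.items())
    for i, (a, known_a) in enumerate(items):
        for b, known_b in items[i + 1:]:
            for prop, val in known_a:
                if (prop, not val) in known_b:   # p asserted and denied
                    found.append((a, b, prop))
    return found

print(conflicts(viewpoints))  # [('clerk', 'manager', 'invoice_needs_approval')]
```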
|
Title: |
TOWARDS A SUITE OF METRICS FOR BUSINESS PROCESS MODELS IN BPMN |
Author(s): |
Elvira Rolón, Francisco Ruiz, Félix García and Mario Piattini |
Abstract: |
In this paper we present a suite of metrics for
the evaluation of business process models using BPMN notation. Our
proposal is based on the FMESP framework, which was developed in order
to integrate the modeling and measurement of software processes. FMESP
includes a set of metrics to provide the quantitative basis necessary to
know the maintainability of software process models. This pre-existing
proposal has been used in this work as the starting point for defining a
set of metrics to evaluate the complexity of business process models
defined with BPMN. To achieve this goal, the first step has been to
adopt those FMESP metrics that can be directly used to measure business
process models; then, new metrics have been defined according to the
particular aspects of business processes and the BPMN notation.
|
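A minimal sketch of the flavour of such metrics (the model encoding and
the metric names here are illustrative stand-ins, not the FMESP or BPMN
metric definitions): simple counts over the elements and sequence flows
of a model.

```python
# A toy BPMN-like model: nodes plus sequence flows between them.
model = [
    {"id": "t1", "type": "task"},
    {"id": "g1", "type": "exclusive_gateway"},
    {"id": "t2", "type": "task"},
    {"id": "t3", "type": "task"},
    {"id": "e1", "type": "end_event"},
]
flows = [("t1", "g1"), ("g1", "t2"), ("g1", "t3"), ("t2", "e1"), ("t3", "e1")]

def count(kind):
    return sum(1 for n in model if n["type"] == kind)

metrics = {
    "tasks": count("task"),
    "decision_gateways": count("exclusive_gateway"),
    "sequence_flows": len(flows),
    # flows per node: a rough proxy for structural complexity
    "connectivity": len(flows) / len(model),
}
print(metrics)
```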
|
Title: |
FLEXIBLE REALIZATION OF BUSINESS PROCESSES USING EXISTING SERVICES |
Author(s): |
Jelena Zdravkovic and Martin Henkel |
Abstract: |
When realizing executable business process
models, the assumption of a transparent integration with existing
software services is highly unrealistic. In most situations, business
process specifications collide with specific properties of existing
services. In this paper we propose an approach for relaxation of the
business process specification to enable flexible integration between
the process and existing services. The approach is based on the notion
of visibility, which allows a categorized relaxation of the process
specification by not requiring every process state to be distinguished
after the process is realised with existing services. The categories of
visibility presented in this paper are applied by indicating flexible
elements in the process design phase. The presented approach stimulates
the alignment between business processes and existing services,
facilitating transparent process realisations on a larger scale. |
|
Title: |
REQUIREMENTS ELICITATION FOR DECISION SUPPORT SYSTEMS: A DATA
QUALITY APPROACH |
Author(s): |
Alejandro Vaisman |
Abstract: |
Today, information and timely decisions are
crucial for an organization’s success. A Decision Support System is a
software tool that provides information allowing its users to make
timely and cost-effective decisions. This is highly conditioned by the
quality of the data involved. In this paper we show that conventional
techniques for requirements elicitation cannot be used for Decision
Support Systems, and propose DSS-METRIQ, a methodology aimed at
providing a single data-quality-based procedure for the complete and
consistent elicitation of functional (queries) and non-functional (data
quality) requirements. In addition, we present a method based on QFD
(Quality Function Deployment) which, using the information collected
during requirements elicitation, ranks the operational data sources from
which data is obtained according to their degree of satisfaction of user
information requirements.
|
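A hypothetical sketch of the QFD-style ranking step (priorities, sources
and scores are invented): each operational data source is scored against
each quality requirement, and sources are ranked by the
priority-weighted sum.

```python
# User priorities per data quality requirement (higher = more important).
priorities = {"freshness": 5, "completeness": 3, "accuracy": 4}

# Satisfaction scores per source and requirement (0..9, as in QFD grids).
sources = {
    "ERP":         {"freshness": 3, "completeness": 9, "accuracy": 9},
    "spreadsheet": {"freshness": 9, "completeness": 3, "accuracy": 3},
}

def rank(sources, priorities):
    scored = {
        name: sum(priorities[req] * score for req, score in scores.items())
        for name, scores in sources.items()
    }
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

print(rank(sources, priorities))   # best-satisfying data source first
```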
|
Title: |
FLEXIBLE COMPLETION OF WORKFLOW ACTIVITIES |
Author(s): |
Georg Peters and Roger Tagg |
Abstract: |
Over the last twenty years business process
management has become a central approach to maintaining the
competitiveness of companies. However, the automation of business
processes utilizing workflow systems has often led to over-structured
solutions that lack the flexibility inherent in the underlying business
model. Therefore there is a need to develop flexible workflow management
systems that adapt easily and quickly to dynamically changing business
models and processes. Lin and Orlowska [2005] introduced partly
complete-able activities as one way to make workflow systems more
flexible. In our paper, we extend the concept of partly complete-able
activities by recognizing separate probability and fuzzy dimensions and
by introducing process memory. |
|
Title: |
A SUPPORTING TOOL TO IDENTIFY BOTH SATISFIED REQUIREMENTS AND
TOLERANT THREATS FOR A JAVA MOBILE CODE APPLICATION |
Author(s): |
Haruhiko Kaiya, Kouta Sasaki, Chikanobu Ogawa and Kenji Kaijiri |
Abstract: |
A mobile code application can be easily
integrated by using existing software components; it is thus one of the
promising ways to develop software efficiently. However, using a mobile
code application can have harmful effects on users' valuable resources,
because malicious code in such an application can be activated.
Therefore, users of mobile code applications have to identify both the
benefits and the risks of the applications and decide which benefits
should be obtained and which risks should be tolerated. However, there
has been no method or tool to support these tasks. In this paper, we
introduce a tool to support such users. By using this tool, users can
automatically identify the security-related functions embedded in each
mobile code. The users can also relate these functions to each benefit
or risk. By defining a security policy for mobile codes, some functions
are disabled, and thus some benefits and risks are also disabled. By
adjusting the security policy, the users can make decisions about the
benefits and the risks.
|
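A hypothetical sketch of the decision support the tool provides (the
function names and their links to benefits and risks are invented): each
security-related function is tied to benefits and risks, and disabling a
function through the policy removes both.

```python
# Security-related functions found in a mobile code, with their links.
functions = {
    "readFile":   {"benefits": {"open local documents"}, "risks": {"leak local data"}},
    "connectNet": {"benefits": {"online update"},        "risks": {"send data out"}},
    "writeFile":  {"benefits": {"save results"},         "risks": {"corrupt files"}},
}

policy_disabled = {"connectNet"}   # functions the security policy disables

def effective(functions, disabled):
    benefits, risks = set(), set()
    for name, links in functions.items():
        if name not in disabled:   # a disabled function drops both sides
            benefits |= links["benefits"]
            risks |= links["risks"]
    return benefits, risks

benefits, risks = effective(functions, policy_disabled)
print("kept benefits:", benefits)
print("tolerated risks:", risks)
```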
|
Title: |
DIFFERENT STRATEGIES FOR RESOLVING PRICE DISCOUNT COLLISIONS |
Author(s): |
Henrik Stormer |
Abstract: |
Managing discounts is an essential task for all
business applications. Discounts are used for turning prospects into new
customers and for customer retention. However, if more than one discount
type can be applied, a collision may arise; an example is a promotional
discount coinciding with a customer-specific discount. The system has to
decide which discount should be applied, and possibly whether
combinations are allowed. A bad collision strategy can lead to annoyed
customers who do not get the price that they think is correct. This
paper examines three different business applications designed for small
and medium-sized companies and shows the strategies they apply for
resolving discount collisions. Afterwards, a new way of defining
discounts, called the discount tree (dTree), is introduced. It is shown
that with dTrees, collisions can be detected directly when the discounts
are defined. A detected collision can then be resolved by the
administrator. |
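A toy sketch of the dTree idea (the tree content and collision rule are
illustrative assumptions, not the paper's exact definition): discounts
attach to nodes of a category tree, and a collision is flagged at
definition time when two discounts lie on the same root-to-leaf path,
since both would then apply to one sale.

```python
# Category tree, child -> parent.
PARENT = {
    "all customers": None,
    "new customers": "all customers",
    "promotion week": "all customers",
}
discounts = {}   # node -> discount percentage

def path_to_root(node):
    while node is not None:
        yield node
        node = PARENT[node]

def define_discount(node, pct):
    # collision check at definition time: ancestor/descendant overlap
    for other in discounts:
        if node in path_to_root(other) or other in path_to_root(node):
            print(f"collision: '{node}' vs existing '{other}' - resolve manually")
    discounts[node] = pct

define_discount("new customers", 10)
define_discount("all customers", 5)    # collides with 'new customers'
```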
|
Title: |
MANAGING THE KNOWLEDGE NEEDED TO SUPPORT AN ELECTRONIC PERSONAL
ASSISTANT - AN END-USER FRIENDLY GRAPHICAL ONTOLOGY EDITING TOOL |
Author(s): |
Matthias Einig, Roger Tagg and Georg Peters |
Abstract: |
Today’s administrative worker has to handle
huge amounts of data of different types, from many different sources and
using multiple software tools. The effort in organizing and retrieving
this data is often disproportionate to the actual benefit gained.
Ontology-based categorization of knowledge has been advocated to provide
a common infrastructure to the tools. However, most current software for
building and maintaining ontologies is too complicated for the average
end-user. This paper describes a prototype ontology editor application
that provides an easily understandable and usable interface. |
|
Title: |
A SYSTEMATIC ANALYSIS PATTERNS SPECIFICATION |
Author(s): |
Ricardo Raminhos, Marta Pantoquilho, João Araújo and Ana Moreira |
Abstract: |
Analysis Patterns are indicative analysis
solutions for recurrent problems. Many patterns have been proposed and
are successfully used. The writing of a pattern follows a specific
structure that can be tailored to each author’s needs. We have developed
an analysis pattern template that addresses some previously identified
gaps in other approaches. This paper focuses on the definition of a
systematic process to guide developers in filling in that analysis
pattern template. The definition of this process will contribute to the
unification of analysis pattern representation, and thus to their
understandability and completeness. |
|
Title: |
ONTOLOGY CONSTRUCTION IN AN ENTERPRISE CONTEXT: COMPARING AND
EVALUATING TWO APPROACHES |
Author(s): |
Eva Blomqvist, Annika Öhgren and Kurt Sandkuhl |
Abstract: |
Structuring enterprise information and
supporting knowledge management is a growing application field for
enterprise ontologies. Research work presented in this paper focuses on
construction of enterprise ontologies. In an experiment, two methods for
ontology construction were used in parallel when developing an ontology
for a company in the automotive supplier industry. One method is based
on automatic ontology construction exploiting ontology patterns; the
other is a manual approach based on cookbook-like instructions. The
paper compares and evaluates the methods and their results. For ontology
evaluation, selected approaches were combined, including comparison of
general characteristics, evaluation by ontology engineers, and
evaluation by domain experts. The main conclusion is that the compared
methods have different strengths, and an integration of both the
developed ontologies and the methods used should be investigated. |
|
Title: |
TOWARDS PRACTICAL TOOLS FOR MINING ABSTRACTIONS IN UML MODELS |
Author(s): |
Michel Dao, Marianne Huchard, Mohamed Rouane Hacène, Cyril Roume
and Petko Valtchev |
Abstract: |
We present an experience of applying an
extension of Formal Concept Analysis to UML class model restructuring.
Relational Concept Analysis (RCA) mines potentially useful abstractions
from UML classes, attributes, operations and associations, and therefore
outperforms competing restructuring techniques, which usually focus
exclusively on classes. Nevertheless, the complexity and the size of the
RCA output require interactive tools to assist the human designers in
comprehending the corresponding class model. We discuss the benefits of
using RCA-based techniques in the light of an initial set of tools that
were devised to ease the navigation and the visual analysis of the
results of the restructuring process. |
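As background intuition, here is a sketch of plain Formal Concept
Analysis, the basis that RCA extends with relational scaling (the
classes-by-members context is invented): formal concepts surface shared
member sets, i.e. candidate abstractions such as a common superclass.

```python
from itertools import combinations

# Context: UML classes and the members (attributes/operations) they own.
context = {
    "Circle":    {"area()", "center"},
    "Square":    {"area()", "side"},
    "Rectangle": {"area()", "side"},
}

def concepts(ctx):
    objects = list(ctx)
    intents = set()
    # close the object descriptions under intersection to get all intents
    for r in range(1, len(objects) + 1):
        for group in combinations(objects, r):
            intents.add(frozenset.intersection(*(frozenset(ctx[o]) for o in group)))
    result = []
    for intent in intents:
        extent = {o for o in objects if intent <= ctx[o]}
        result.append((sorted(extent), sorted(intent)))
    return result

for extent, intent in concepts(context):
    print(extent, intent)
# ['Rectangle', 'Square'] share {'area()', 'side'}: a candidate abstraction
```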
|
Title: |
A PROJECT MANAGEMENT MODEL TO A DISTRIBUTED SOFTWARE ENGINEERING
ENVIRONMENT |
Author(s): |
Lúcia Norie Matsueda Enami, Tania Fatima Calvi Tait and Elisa
Hatsue Moriya Huzita |
Abstract: |
This article presents a project management
model for a distributed environment that will be integrated into DiSEN
(Distributed Software Engineering Environment). The model's purpose is
to supply those interested in the software project with the information
pertinent to each of them, and also to address the physical distribution
of team members in a distributed environment. It is based on the PMBOK
(Project Management Body of Knowledge) model and CMMI (Capability
Maturity Model Integration), and the issues treated by the project
management model include cultural differences between members,
distribution of knowledge, the use of a tool to facilitate communication
between members, standardization of software project management
documents, and the motivation of geographically dispersed people. |
|
Title: |
VALIDATION OF INFORMATION SYSTEMS USING PETRI NETS |
Author(s): |
Asghar Bokhari and Skip Poehlman |
Abstract: |
Enterprise information systems are complex
software systems that are frequently required to adapt to rapid changes
in business environments. Although there have been some successes, the
research literature is full of horror stories due to failures of these
systems. Current software engineering practice requires
verification/validation of complex systems at the design stage. The
Unified Modeling Language (UML), which lacks formal semantics, is the de
facto standard for designing the majority of information systems, which
means that dynamic analysis techniques cannot be used to validate UML
models. Consequently, there has been considerable interest among
researchers in the formalization of UML models. Early proposals
translate UML state diagrams into some kind of mathematical language and
input this textual description to a model checker. In this paper we
present a rule-based technique to convert UML state diagrams to Object
Coloured Petri (OCP) nets. A strong mathematical foundation, more
amenable to verification and validation procedures, along with a
graphical representation, makes Petri nets ideally suited to the dynamic
analysis of UML-modelled information systems. |
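A minimal sketch of the rule-based flavour of such a translation (a
plain place/transition illustration; the paper targets Object Coloured
Petri nets and covers far more, e.g. events, guards and object
identity): every state becomes a place, every transition becomes a net
transition consuming from its source place and producing into its target
place, after which the token game supports dynamic analysis.

```python
uml_states = ["Idle", "Running", "Done"]
uml_transitions = [("Idle", "start", "Running"), ("Running", "finish", "Done")]

# Rule 1: state -> place; Rule 2: transition -> net transition.
places = set(uml_states)
net_transitions = [
    {"name": event, "inputs": {src}, "outputs": {dst}}
    for (src, event, dst) in uml_transitions
]
marking = {"Idle": 1, "Running": 0, "Done": 0}   # initial state marked

def fire(t, m):
    # a transition is enabled if all input places hold a token
    if all(m[p] > 0 for p in t["inputs"]):
        for p in t["inputs"]:
            m[p] -= 1
        for p in t["outputs"]:
            m[p] += 1
        return True
    return False

for t in net_transitions:        # a simple token game = dynamic analysis
    print(t["name"], fire(t, marking), marking)
```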
|
Title: |
APPLYING BLOCK ACTIVITY PATTERNS IN WORKFLOW MODELING |
Author(s): |
Lucinéia Heloisa Thom and Cirano Iochpe |
Abstract: |
In earlier work we identified a set of
organization-oriented workflow patterns based on organizational
structure aspects (e.g., centralization of decision-making and
coordination mechanisms). Relying on this work, we verified that the use
of workflow patterns based on structural aspects of the organization may
improve both the productivity and the accuracy of workflow design;
hence, the resulting workflow process will better represent the business
process of the real world as it is executed by the organization. In this
paper we discuss a set of business (sub-)process types that were
identified by different authors and result from a classification of
business process “pieces” (e.g., logistic, financial, decision,
information, material, notification and both unidirectional and
bi-directional communication). After integrating the classification work
found in the literature, the business process types that we call
“workflow block activity patterns” were described in a common language
(UML 2.0), and through some case studies we tried to find out whether
they are frequently reused during business as well as workflow process
modeling. The “matching exercise” was carried out not only to validate
the set of patterns but also to identify some new ones. The results
showed that the patterns are frequently identified not only in workflow
components but also in workflow applications. We believe they can be
reused to improve both the quality and the performance of the design
phase in a workflow project. Within this context we also present an
insight into how the block activity patterns can effectively be used in
workflow modeling. |
|
Title: |
THE CONCEPT OF ETHICS IN ELECTRONIC QUALITATIVE RESEARCH |
Author(s): |
Nouhad J. Rizk and Elias M. Choueiri |
Abstract: |
As a key form of communications technology, the
internet has created new methodological approaches for social science
research. This study focuses on moral issues created by information
technology for qualitative research environments. The primary concern is
with ethical analysis and legal issues and how both are applied to,
although not limited to, issues of privacy, intellectual property,
information access, interpersonal communication, moral and civil rights,
responsibility and liability, and professional codes as well as some
social implications of technology. The Internet is now exposed to a
growing number and a wider variety of threats and vulnerabilities.
Moreover, Internet-based research raises several ethical questions and
introduces new ethical challenges, especially pertaining to privacy,
informed consent, confidentiality and anonymity. This study aims to
highlight the main ethical issues in electronic qualitative research and
to provide some guidance for those doing or reviewing such research.
While recognizing the reservations held about strict ethical guidelines
for electronic qualitative research, this study opens the door for
further debate of these issues so that the social science research
community can move towards the adoption of agreed standards of good
practice. In addition, it suggests that empirical research is desirable
in order to quantify the actual risks to participants in electronic
qualitative studies. |
|
Title: |
PERFORMANCE EVALUATION FRAMEWORK FOR IT/IS BASED ASSET MANAGEMENT |
Author(s): |
Abrar Haider and Andy Koronios |
Abstract: |
Businesses that manage engineering assets use a
variety of information and communication technologies for process
efficiency, control, and management. Nevertheless, key to all of these
is the effective measurement of IT/IS utilisation for existing
processes, such that underperforming areas are highlighted and
corrective actions are taken to achieve optimal use of IS/IT. There is a
variety of performance measurement mechanisms available that stimulate
improvement efforts, thereby helping businesses to translate perceived
business strategy into action. However, these approaches are mostly
aimed at high-level evaluation of an organisation’s performance, whereas
the stochastic nature and ever-expanding scope of asset management
processes demand that asset managers have a comprehensive view of the
asset lifecycle and the interacting business areas. This paper proposes
an evaluation framework for IT/IS based asset management in an
engineering enterprise. The paper first presents a critique of the asset
management paradigm. It then discusses available performance measurement
mechanisms and develops a case for the constituents of an effective
asset management measurement framework that provides detailed indicators
for the control actions required to achieve optimal process efficiency
through the use of IT/IS. The paper then presents an integrated asset
performance measurement framework that is not only derived from business
strategy, but also informs strategy formulation through a closed-loop
learning cycle that encompasses the asset management lifecycle. |
|
Title: |
A REUSE-BASED REQUIREMENTS ELICITATION PROCESS |
Author(s): |
Sangim Ahn and Kiwon Chong |
Abstract: |
Establishing good requirements in the initial
phase of software development is important to avoid project time and
cost overruns and low-quality software products. In the context of
Requirements Engineering (RE), reuse is particularly effective because
it can help to define requirements explicitly and to anticipate
requirements change. We propose a reuse-based process approach to elicit
potential requirements from various stakeholders. To achieve our goal,
we present (1) an analysis of the gaps between the map of collected
requirements and the map of requirements reused from the repository, and
(2) a potential-requirements elicitation process based on these maps.
The former comprises classifying styles of requirements, a requirements
representation formalism based on use cases, and gap analysis using
generic gap types. The latter is a sequential procedure for finding
potential requirements, complemented by the Plus Minus Interests (PMI)
method. We illustrate our approach through a credit system case
study. |
|
Title: |
THE VOCABULARY ONTOLOGY ENGINEERING - FOR THE SEMANTIC MODELLING OF
HOME SERVICES |
Author(s): |
Jarmo Kalaoja, Julia Kantorovitch, Sara Carro, José María Miranda,
Álvaro Ramos and Jorge Parra |
Abstract: |
With the great advances in information
technology and broadband networks, interconnected networked home devices
are becoming increasingly popular. A number of heterogeneous networked
devices and services that belong to traditionally separate functional
islands, such as PC (i.e. Internet), mobile, CE broadcasting, and home
automation, can be found in today's homes, not working together. Merging
these devices and services would offer individual home residents
user-friendly, intelligent, and meaningful interfaces to handle home
information and services. Semantic, ontology-based modelling of home
services can enable the interoperability of heterogeneous services. The
ontology may facilitate a clear description of how far each device is
suitable for different kinds of information and different interaction
demands. This paper presents an analysis of the kinds of vocabulary
ontologies necessary in different functional domains to cope with the
heterogeneity of service descriptions. The ontology-based rich
representation of services will facilitate efficient service discovery,
integration and composition. |
|
Title: |
AN ALGORITHM FOR BUILDING INFORMATION SYSTEM’S ONTOLOGIES |
Author(s): |
Mohamed Mhiri, Sana Chabaane, Achraf Mtibaa and Faïez Gargouri |
Abstract: |
Modelling current applications is becoming
increasingly complex. Indeed, it requires hard work to study the
particular field in order to determine its main concepts and their
relationships. The conceptual representations (CR) resulting from the
modelling of such applications can contain structural and semantic
errors that are not detectable by current CASE tools. The solution we
propose is to associate an ontology with the studied field as an aid to
designers during IS modelling steps. Building such information system
ontologies requires an approach that determines the concepts and the
relationships between them. Using ontologies makes it possible to ensure
the semantic coherence of conceptual representations for a given field.
In this paper, we propose an algorithm for building an information
system's ontology based on the comparison of concepts and the use of a
set of semantic relationships. |
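A hypothetical sketch of the core loop of such an algorithm (the
similarity measure, threshold and relationship rules are illustrative
stand-ins): compare pairs of concepts and, when they are similar enough,
add a semantic relationship to the ontology.

```python
from difflib import SequenceMatcher

concepts = ["Client", "Customer", "Order", "PurchaseOrder"]
SYNONYMS = {frozenset({"client", "customer"})}   # assumed domain knowledge

def similar(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

ontology = []   # triples: (concept, relationship, concept)
for i, a in enumerate(concepts):
    for b in concepts[i + 1:]:
        if frozenset({a.lower(), b.lower()}) in SYNONYMS:
            ontology.append((a, "synonym-of", b))
        elif a.lower() in b.lower():
            ontology.append((b, "is-a", a))      # name containment hint
        elif b.lower() in a.lower():
            ontology.append((a, "is-a", b))
        elif similar(a, b) > 0.8:
            ontology.append((a, "related-to", b))

print(ontology)
# [('Client', 'synonym-of', 'Customer'), ('PurchaseOrder', 'is-a', 'Order')]
```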
|
Title: |
BRIDGING THE LANGUAGE-ACTION PERSPECTIVE AND ORGANIZATIONAL
SEMIOTICS IN SDBC |
Author(s): |
Boris Shishkov, Jan L. G. Dietz and Kecheng Liu |
Abstract: |
The SDBC approach addresses the actual problem
of business-software alignment through the identification of re-usable
business process models and their mapping to software specification
models. In such an alignment, it is crucial to adequately grasp all
essential business aspects and properly reflect them in modeling the
functionality of the software application-to-be. In achieving such a
business process modeling foundation, SDBC relies on the theories of LAP
and OS: OS allows for an adequate consideration of all essential
semantic aspects in conducting a business process modeling, while LAP is
capable of grasping pragmatics on top of that; therefore a LAP-OS-driven
business process modeling foundation is claimed to be useful. However,
combining LAP and OS is not a trivial task and needs to be based on an
adequate study. Such a study has been initiated during the development
of SDBC. In the current paper, we further elaborate on our (SDBC-driven)
views on how LAP and OS could be appropriately combined for the purpose
of a sound business process modeling that serves as the foundation for a
further specification of software. |
|
Title: |
INTEROPERABILITY REQUIREMENTS ELICITATION, VALIDATION AND SOLUTIONS
MODELLING |
Author(s): |
Sobah Abbas Petersen, Frank Lillehagen and Maria Anastasiou |
Abstract: |
This paper describes a methodology and a
model-based approach for supporting the requirements elicitation and
validation work in the ATHENA project. Numerous interoperability
requirements have been gathered by four industrial partners and these
requirements are validated against interoperability issues. The process
of obtaining requirements from industrial users and developing solutions
for them involves several communities such as the users, stakeholders
and developers. A model-based methodology and approach are proposed to
support the analysis of the requirements and for incorporating the
different perspectives and views that are desired by everyone. An
example from the telecommunications sector is used to illustrate the
methodology and a matrix-based validation approach is supported using a
model developed in the Metis modelling environment. |
|
Title: |
OBJECT NORMALIZATION AS THE CONTRIBUTION TO THE AREA OF FORMAL
METHODS OF OBJECT-ORIENTED DATABASE DESIGN |
Author(s): |
Jan Vraný, Zdenek Struska and Vojtech Merunka |
Abstract: |
This article gives an overview of the current
status of formal techniques for object database design. It discusses why
relational design techniques such as normalization, decomposition and
synthesis cannot easily be applied to object databases. The article
reviews various proposals for object normal forms and presents the
authors' own evaluation and an example of object normalization. |
|
Title: |
EVOLUTION MANAGEMENT FRAMEWORK FOR MULTI-DIMENSIONAL INFORMATION
SYSTEMS |
Author(s): |
Nesrine Yahiaoui, Bruno Traverson and Nicole Levy |
Abstract: |
Because Information Systems are today
tightly coupled with enterprise activities, the adaptability requirement
on software is becoming essential. The framework we have developed aims
to keep multiple descriptions of the same system synchronized as the
system evolves. Its foundations are based on RM-ODP viewpoints and
meta-modeling technology. A prototype tool to support the framework has
been developed as an EMF/Eclipse plug-in. |
|
Title: |
UNDERSTANDING B SPECIFICATIONS WITH UML CLASS DIAGRAM AND OCL
CONSTRAINTS |
Author(s): |
Bruno Tatibouët and Isabelle Jacques |
Abstract: |
B is a formal method (and a specification
language) which enables the automatic generation of executable code
through a succession of refinements stemming from an abstract
specification. There are two current industrial tools (Clearsy's Atelier
B, http://www.clearsy.com, and B-Core's B Toolkit,
http://www.b-core.com) which provide support for the whole development
process (type-checking facilities, automatic and interactive proof
support, ...). A B specification requires a certain knowledge of
mathematical notations (classical logic and sets) as well as specific
terminology (generalized substitutions, B keywords), which may in all
likelihood leave a non-specialist of the B notation in the dark. To
address this problem, we extract graphic elements from the B
specification in an effort to render it more understandable. In a
previous work, these visual elements were illustrated in a UML class
diagram. Since these visual elements are insufficient, they are
complemented by OCL constraints that present the invariant and the
operations of a B abstract machine. |
|
Title: |
ARGUMENT-BASED APPROACHES IN PRIORITIZED CONFLICTING SECURITY
POLICIES |
Author(s): |
Salem Benferhat and Rania El Baida |
Abstract: |
Information security is an important
problem in many domains. Therefore, it is very important to define
security policies that restrict access to pieces of information in order
to guarantee security properties, i.e. confidentiality, integrity and
availability requirements. The joint handling of confidentiality,
integrity and availability properties raises the problem of potential
conflicts. The objective of this paper is to propose tools, based on
argumentation reasoning, for handling conflicts in prioritized security
policies. |
|
Title: |
CODE OF ETHICS FOR PROFESSIONALS OF INFORMATION SYSTEMS – CEPIS
MODEL |
Author(s): |
Helena Dulce Campos and Luis Amaral |
Abstract: |
In the area of Information Systems Technology
(IST) there is a multiplicity of competences and knowledge. In order for
professionals to carry them out successfully and to great advantage, a
structured and standardized framework that can be used as a reference by
any organization is needed. In parallel with this issue, there is the
acknowledgement that a technological life without ethics is impossible.
Therefore, a Code of Ethics for Professionals of Information Systems
(CEPIS) is proposed. |
|
Title: |
ENABLING OR DISABLING WITH OLD SPECIFICATIONS - A NEW INFORMATION
SYSTEM BASED ON OLD SPECIFICATIONS |
Author(s): |
Raija Halonen |
Abstract: |
This research concentrates on the development
of an information system that was based on previously made
specifications. We study the influence of previously made specifications
and discuss the difficulties in adopting them. In our case we had several
universities involved in the development project and the aim was to
implement a joint information system to be used by student affairs
officials and students in universities. Implementing information systems
by several organisations is highly dependent on collaboration between
the organisations. We discuss how the collaboration was managed in our
case and show what the role of previous specifications was. We conclude
that despite the specifications, the information system was finalised. |
|
Title: |
DEONTIC PROTOCOL MODELLING - MODELLING BUSINESS RULES WITH STATE
MACHINES |
Author(s): |
Ashley McNeile and Nicholas Simons |
Abstract: |
State machines can be used as a means of
specifying the behaviour of objects in a system by describing their
event protocols, that is, the relationships between the states that the
object may adopt and the ability of the object to respond to events of
different types presented to it. A suitable choice of semantics for the
state machines used to describe protocols allows multiple machines to be
composed in parallel, in the manner of Hoare’s CSP, in the description
of the behaviour of a single object. We describe an extension to this
approach whereby different machines in the composition of a single
object have different deontic semantics; covering necessary behaviour,
encouraged behaviour and discouraged behaviour. This provides a language
that has the expressive power to model the way software interacts with
the domain in which it is embedded to encourage or discourage behaviours
of the domain. |
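A toy sketch of parallel composition with deontic modes (the machines
and rules are invented, and this is only an approximation of the
semantics, not the authors' definition): a machine marked necessary
blocks events it cannot accept, CSP-style, while a machine marked
discouraged never blocks but grades the event.

```python
class Machine:
    def __init__(self, name, mode, alphabet, transitions, state):
        self.name, self.mode, self.alphabet = name, mode, alphabet
        self.transitions, self.state = transitions, state  # {(state, event): next}

    def can(self, event):
        return (self.state, event) in self.transitions

    def step(self, event):
        self.state = self.transitions[(self.state, event)]

account = Machine("account", "necessary", {"open", "withdraw", "close"},
                  {("new", "open"): "active", ("active", "withdraw"): "active",
                   ("active", "close"): "closed"}, "new")
prudence = Machine("prudence", "discouraged", {"withdraw"},
                   {("ok", "withdraw"): "ok"}, "ok")

def offer(event, machines):
    involved = [m for m in machines if event in m.alphabet]
    # necessary behaviour blocks, CSP-style, if any involved machine refuses
    if any(m.mode == "necessary" and not m.can(event) for m in involved):
        return "refused"
    verdict = "accepted"
    for m in involved:
        if m.can(event):
            if m.mode == "discouraged":
                verdict = "accepted (discouraged)"
            m.step(event)
    return verdict

system = [account, prudence]
print(offer("open", system))       # accepted
print(offer("withdraw", system))   # accepted (discouraged)
```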
|
Title: |
USER AUTONOMY IN REQUIREMENTS CHANGING SUPPORTED BY ORGANIZATIONAL
SEMIOTICS AND TAILORING |
Author(s): |
Carlos Alberto Cocozza Simoni, Maria Cecilia Calani Baranauskas and
Rodrigo Bonacin |
Abstract: |
Nowadays, organizations are impacted by
changes from several sources, such as process reengineering, the search
for continuous quality improvement of products and processes,
globalization, and competitors. The literature points out that there is
still a gap between the dynamics of system maintenance and changes in
organizational processes. To cover this gap we consider the use of
practices from Organisational Semiotics, which allow a deep
understanding of the organizational context and the technical system
embedded in it, and from Tailoring, which suggests how to provide
autonomy to users in dealing with changes in computer systems. With this
theoretical grounding, we present in this paper a case study developed
at our University to explore and extend an existing approach to provide
more autonomy to end users in changing their computer applications,
according to the evolution of and changes in their business
requirements. |
|
Title: |
VERIFYING THE VALUE OF OBJECTIVE MEASURES - A PROPOSAL FOR A
SYSTEMATIC EVALUATION OF MEASURES |
Author(s): |
Harald Kjellin |
Abstract: |
The results of work in any section of an
enterprise should preferably be described in a way that makes the
results suited for benchmarking with other sections of the enterprise.
The same goes for individual work results. Results are easily compared
if they are measured according to some numerical standard. Numerical
measures can be generalized and standardized until they can be
considered as having a high degree of “reusability”. There are several
types of enterprise models that include the use of reusable “soft”
numerical values. With “soft” numerical values I refer to the type of
values that cannot be directly measured in relation to objective facts
but are artificially constructed measures that include some kind of
subjective estimation in calculating the value. Another requirement on
such measures is that it should be possible to use them for comparing
performance between individuals, or between units of an organization, or
between organizations. These measures can, for instance, be used to
capture customers' appreciation of their relationships with the
organization, as is often recommended in the “Balanced Scorecards”
method, or they can be used when giving students numerical values as
credits (points) for passing university courses. A summary of informal
evaluations is
presented. The evaluations concern how “soft” measures have been
implemented in organizations. The results of the evaluations show that
objective values based on facts can be combined with subjective
estimations in a way that makes them less vulnerable to people
manipulating the measures and less vulnerable to the subjectivity of
superiors when estimating the quality of the results. |
|
Title: |
CONFIGURING REFERENCE MODELS - AN INTEGRATED APPROACH FOR
TRANSACTION PROCESSING AND DECISION SUPPORT |
Author(s): |
Ralf Knackstedt, Christian Janiesch and Tobias Rieke |
Abstract: |
Reference models are of normative, universal
nature and provide a solution schema for specific problems by depicting
best or common-practice approaches. The configuration of these reference
models has been a field of research in the past. However, the
integrated configuration of different reference models, or of reference
models serving multiple purposes, lacks applicable methods. In practice
this is a common problem, as the simultaneous implementation of an
enterprise resource planning system and a management information system
shows. We provide a method that allows the integrated configuration of
conceptual models for transaction processing and decision support by
integrating meta models of the modeling languages. In addition, we show
its application by example, extending an existing reference model. |
|
Title: |
A NEW FRAMEWORK FOR THE SUPPORT OF SOFTWARE DEVELOPMENT COOPERATIVE
ACTIVITIES |
Author(s): |
Arnaud Lewandowski and Grégory Bourguin |
Abstract: |
Software development is a cooperative activity,
since it involves many actors. We focus on integrated global CSCW
environments. Many studies in this field have long shown that a ‘good’
cooperative environment should be able to take into account the emergent
needs of its users and should be adaptable. Of course, such properties
should also be found in environments supporting software development.
However, our study of some existing platforms supporting cooperative
software development activities points out their shortcomings in terms
of tailorability and cooperative support. Eclipse is one of these
broadly used platforms. Even though it presents some shortcomings, its
underlying framework offers some features that are particularly
interesting for our purpose. Building on results previously obtained in
the CSCW domain, we propose to extend the Eclipse platform in order to
offer new support for software development by creating a cooperative
context for the activities supported in Eclipse by each integrated
plug-in. |
|
Title: |
DOMAIN MODELING WITH OBJECT-PROCESS METHODOLOGY |
Author(s): |
Arnon Sturm, Dov Dori and Onn Shehory |
Abstract: |
Domain engineering can simplify the development
of software systems in specific domains. During domain analysis, the
first step of domain engineering, the domain is modeled in a reusable
manner. Most domain analysis approaches suffer from low accessibility,
limited expressiveness, and weak formality. In this paper we utilize the
application-based domain modelling (ADOM) approach and apply it to the
Object-Process Methodology (OPM) modelling language, extending OPM to
support domain analysis. We also performed an experiment to verify that
the proposed extension improves model quality compared to that achieved
without the extension. Our experimental results show that, when
presented with a set of requirements, subjects that used OPM with the
domain analysis extension arrived at better system models than subjects
that used OPM alone. |
|
Title: |
AN XML-BASED LANGUAGE FOR SPECIFICATION AND COMPOSITION OF
ASPECTUAL CONCERNS |
Author(s): |
Elisabete Soeiro, Isabel Sofia Brito and Ana Moreira |
Abstract: |
Separation of concerns refers to the ability to
identify, encapsulate and manipulate parts of software that are
crucial to a particular purpose (Dijkstra, 1976). Traditional software
development methods were developed with this principle in mind. However,
certain broadly-scoped properties are difficult to modularize and keep
separated during the lifecycle, producing tangled representations that
are difficult to understand and to evolve. Aspect-oriented software
development aims at addressing those crosscutting concerns, known as
aspects, by providing means for their systematic identification,
separation, representation and composition. This paper focuses on the
representation and composition activities, by proposing an XML-based
language to specify and compose concerns at the requirements level. An
illustration of the proposed approach on an example, supported by a
tool, is presented. |
|
Title: |
EB3TG: A TOOL SYNTHESIZING RELATIONAL DATABASE TRANSACTIONS FROM
EB3 ATTRIBUTE DEFINITIONS |
Author(s): |
Frédéric Gervais, Panawé Batanado, Marc Frappier and Régine Laleau |
Abstract: |
EB3 is a formal language for specifying
information systems (IS). In EB3, the sequences of events accepted by
the system are described with a process algebra; they represent the
valid traces of the IS. Entity type and association attributes are
computed by means of recursive functions defined on the valid traces of
the system. In this paper, we present EB3TG, a tool that synthesizes
Java programs that execute relational database transactions which
correspond to EB3 attribute definitions. |
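A small sketch of the underlying idea (the library example and the
attribute are illustrative assumptions, written in Python rather than
EB3): an attribute is a recursive function on the valid trace, and it
translates naturally into an incremental database transaction that
updates the stored value on each event instead of refolding the trace.

```python
def nb_loans(trace, member):
    # recursive definition on the trace, as in an EB3 attribute definition
    if not trace:
        return 0
    rest, (event, who) = trace[:-1], trace[-1]
    value = nb_loans(rest, member)
    if who != member:
        return value
    if event == "lend":
        return value + 1
    if event == "return":
        return value - 1
    return value

trace = [("register", "m1"), ("lend", "m1"), ("lend", "m1"), ("return", "m1")]
print(nb_loans(trace, "m1"))   # 1

# What a synthesized transaction would do instead (pseudo-SQL, for flavour):
#   on lend(m):   UPDATE member SET nb_loans = nb_loans + 1 WHERE id = m
#   on return(m): UPDATE member SET nb_loans = nb_loans - 1 WHERE id = m
```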
|
Title: |
AN ARCHITECTURE-CENTRIC APPROACH FOR MANAGING THE EVOLUTION OF EAI
SERVICES-ORIENTED ARCHITECTURE |
Author(s): |
Frédéric Pourraz, Hervé Verjus and Flavio Oquendo |
Abstract: |
The development of large software applications
(like EAI solutions) is oriented toward the interoperation of existing
software components (like COTS and legacy systems). This tendency is
accompanied by a certain number of drawbacks for which classical
approaches to software composition fail or cannot be applied.
COTS-based systems are built in an ad-hoc manner; it is not possible to
reason about them, nor to demonstrate whether such systems satisfy
important properties like quality of service and quality attributes.
Recent work in the Web field allows the definition and use of complex
Web service architectures. Languages such as WSFL, XLANG and BPEL4WS
support these architectures, called Service-Oriented Architectures.
However, these languages have no formal foundation. One cannot reason
about architectures expressed in such languages: properties cannot be
expressed, and dynamic system evolution is not supported. On the other
hand, the software architecture domain aims at providing formal
languages for the description of software systems, allowing properties
to be checked (formal analysis) and software architecture models to be
reasoned about. The paper proposes an approach that consists in
formalizing, deploying and evolving EAI architectures. For that purpose,
the ArchWare environment and engineering languages (especially the
ArchWare formal ADL, based on the π-calculus) and the accompanying tools
are used. The paper also presents our approach to refining an abstract
architecture into an executable, service-oriented one. |
|
Title: |
INFORMATION-CENTRIC VS. STORAGE/DATA-CENTRIC SYSTEMS |
Author(s): |
Charles Milligan, Steven Halladay and Deren Hansen |
Abstract: |
It is essential to recognise that information
(i.e., the meaning and value that must be extracted from data for a
business to run) is very different from the data itself. Information
must be managed using different processes and tools than those used in
data management. The current notion of Information Lifecycle Management
(ILM) is really only about making Systems Managed Storage work
universally and does not relate to information management at all.
However, recent developments of new technologies have potential to open
a new paradigm in extracting, organizing and managing the meaning and
value from data sources that can allow processes and decision systems to
take a quantum leap in effectiveness. The networked structure of a graph
database combined with concept modelling will foster this shift. |
|
Title: |
SECURITY THREATS TO DIGITAL TELEVISION PLATFORM AND SERVICE
DEVELOPMENT |
Author(s): |
Jarkko Holappa and Reijo Savola |
Abstract: |
Digital convergence is introducing more diverse
digital television services. The return channel, which enables
interactive television, is a key to this development and may be
considered the most vulnerable element of the terminal device in terms
of information security. Accordingly, its protection from threats
brought about by Internet use, such as malicious programs, is of the
essence. Multimedia Home Platform (MHP) is one of the most important
technologies enabling interactive television. The information security
threats related to it are examined from the viewpoint of the service
developer. Threat analysis presented in this paper is carried out in
Finnish companies that include digital-TV broadcasters, MHP-platform
developers, service developers and telecom operators. |
|
Title: |
SEMANTIC ALIGNMENT OF BUSINESS PROCESSES |
Author(s): |
Saartje Brockmans, Marc Ehrig, Agnes Koschmider, Andreas Oberweis
and Rudi Studer |
Abstract: |
This paper presents a method for semantically
aligning business processes. We provide a representation of Petri nets
in the standard ontology language OWL DL, to semantically enrich the
business processes. On top of this, we propose a technique for
semantically aligning business processes to support inter-organizational
business collaboration. This semantic alignment is improved by a
background ontology. Therefore, we propose a specific UML Profile that
allows this background ontology to be modelled visually. The different parts
of our proposal, which reduces communication efforts and solves
interconnectivity problems, are discussed in detail. |
|
Title: |
FORMALISATION OF A FUNCTIONAL RISK MANAGEMENT SYSTEM |
Author(s): |
Víctor M. Gulías, Carlos Abalde, Laura M. Castro and Carlos Varela |
Abstract: |
This work shows a first approximation to the
formalisation of a risk management information system. It is based on
our experience in the development of a large, scalable and reliable
client/server risk management information system. This system was
developed using the distributed functional language Erlang for
describing the domain logic. Using a functional language for this task
is very useful to face the challenge of formalising such a complex
system. This kind of formal work is the first step to applying powerful
software verification techniques to reinforce a real system's
reliability. |
|
Title: |
BUSINESS RULES ELICITATION IN THE PROCESS OF ENTERPRISE INFORMATION
SYSTEM DEVELOPMENT |
Author(s): |
Olegas Vasilecas and Diana Bugaite |
Abstract: |
The modelling of events in the process of
business-rule-based information systems development, and its importance,
are discussed in this paper. The concepts of a rule and an event are
defined at different levels of abstraction (business system, information
system and software system). Based on these definitions of business
events, information system events and software events, the modelling
abstraction levels are extended with rule and event modelling facilities
and their propagation into the lower levels of the enterprise system.
Since an ontology represents real-world domain knowledge, and events as
well as business rules make up a specific part of all domain knowledge,
we suggest using an ontology for business rules and events
elicitation. |
|
Title: |
IWISE: A FRAMEWORK FOR PROVIDING DISTRIBUTED PROCESS VISIBILITY
USING AN EVENT-BASED PROCESS MODELLING APPROACH |
Author(s): |
Claire Costello, Weston Fleming, Owen Molloy, Gerard Lyons and
James Duggan |
Abstract: |
Distributed business processes such as supply
chain processes execute across heterogeneous systems and company
boundaries. This research aims to provide an event-based process model
to describe business processes spanning disparate enterprise systems.
This model will be generic enough to support processes from multiple
industry domains. In addition, this paper introduces the iWISE framework
as a light-weight process diagnostic tool. The iWISE architecture uses
the process model described to provide business process performance
monitoring capabilities. |
|
Title: |
BUSINESS PROCESS DESIGN BASED ON COMMUNICATION AND INTERACTION |
Author(s): |
Joseph Barjis and Isaac Barjis |
Abstract: |
The easiest way that people describe their
roles in an organization or the way that members of an organization make
promises and commitments to fulfill a task is through communication and
interaction. In such a communication language is used as a tool or
facilitator of action when a customer requests a service and the
supplier promises to provide such a service. In this paper we introduce
a language-action based methodology for designing business processes for
the Department of University Housing at Georgia Southern University
planning to acquire a new information system for managing, supporting
and improving the “process of rooms assignment” to some 4000 students.
As stated, the methodology is based on language-action perspective and
therefore we have used the business transaction concept for mining
atomic business processes. Each business transaction identifies an
essential activity and reveals the actors and their roles as an
initiator or executor of the transaction. Since the transaction concept
is used as a conceptual basis, the methodology is complemented with
Petri net graphical notations in order to construct business process
models of the department of housing. |
|
Title: |
A NEW PERFORMANCE OPTIMIZATION STRATEGY FOR JAVA MESSAGE SERVICE
SYSTEM |
Author(s): |
Xiangfeng Guo, Xiaoning Ding, Hua Zhong and Jing Li |
Abstract: |
Well suited to the loosely coupled nature of
distributed interaction, message oriented middleware has been applied in
many distributed application fields. For most of these applications,
reliable message transmission is a necessity, and transmitting messages
efficiently is a key feature of message oriented middleware. Because of
the required persistence facilities, transmission performance depends
greatly on the persistence mechanism. The openness of the Java platform
has led to wide support for systems conforming to the Java Message Service
specification. In these applications, many consumers get messages
periodically. We bring forward a new, efficient strategy that uses
different persistence methods for different kinds of messages, which
improves system performance greatly. The strategy also utilizes daemon
threads to reduce its impact on the system. The strategy has been
implemented in our Java Message Service-conformant system, ONCEAS
MQ. |
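The abstract does not detail the strategy itself, so the following Python
sketch only illustrates the general idea as we read it: durable messages
are written through synchronously, while other messages are batched and
flushed by a daemon thread. All names are invented; this is not ONCEAS MQ
code.

# Sketch of differentiated persistence: synchronous writes for durable
# messages, lazy batched writes (via a daemon thread) for the rest.
import queue, threading, time

class MessageStore:
    def __init__(self, flush_interval=1.0):
        self.lazy_queue = queue.Queue()
        # Daemon thread: dies with the process, minimising its impact
        t = threading.Thread(target=self._flush_loop,
                             args=(flush_interval,), daemon=True)
        t.start()

    def save(self, message, durable):
        if durable:
            self._write_to_disk([message])   # synchronous, reliable path
        else:
            self.lazy_queue.put(message)     # deferred, high-throughput path

    def _flush_loop(self, interval):
        while True:
            time.sleep(interval)
            batch = []
            while not self.lazy_queue.empty():
                batch.append(self.lazy_queue.get())
            if batch:
                self._write_to_disk(batch)   # one I/O for the whole batch

    def _write_to_disk(self, batch):
        with open("journal.log", "a") as f:
            for m in batch:
                f.write(m + "\n")

store = MessageStore()
store.save("invoice-42", durable=True)    # persisted immediately
store.save("stock-tick", durable=False)   # persisted by the daemon thread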
|
Title: |
A NEW PUBLIC-KEY CRYPTOSYSTEM AND ITS APPLICATIONS |
Author(s): |
Akito Kiriyama, Yuji Nakagawa, Tadao Takaoka and Zhiqi Tu |
Abstract: |
We propose in this paper a new public-key
crypto-system, called the non-linear knapsack cryptosystem. The security
of this system is based on the NP-completeness of the non-linear
knapsack problem. We extend the system in two directions. The first is
secret sharing: an encrypted message can be decrypted only when all
members of a group agree to do so. The other is group
authentication/access control: when the verifier challenges the prover
with messages encrypted with the public keys of several groups, the
prover can prove he belongs to those groups using the corresponding
secret keys. In our system group authentication can be done in a
batch-processing manner, not one-by-one. Group authentication can be
used for access control as well. Some experimental results on group
authentication are given, which demonstrate the efficiency of our
system. |
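The non-linear scheme itself is not specified in the abstract; as
background, this Python toy shows the classic linear knapsack
(Merkle-Hellman style) construction that such systems generalise. It is
for illustration only and this linear variant is known to be insecure.

# Toy of the classic *linear* knapsack cryptosystem (illustration only).
from math import gcd

w = [2, 7, 11, 21, 42, 89, 180, 354]   # superincreasing private sequence
q = 881                                # modulus > sum(w)
r = 588                                # multiplier, coprime with q
assert q > sum(w) and gcd(r, q) == 1
public = [(r * wi) % q for wi in w]    # public key

def encrypt(bits):                     # bits: list of 0/1, len == len(w)
    return sum(b * k for b, k in zip(bits, public))

def decrypt(c):
    c = (c * pow(r, -1, q)) % q        # undo the modular multiplication
    bits = []
    for wi in reversed(w):             # greedy subset-sum works because
        bits.append(1 if c >= wi else 0)   # w is superincreasing
        if bits[-1]:
            c -= wi
    return list(reversed(bits))

msg = [1, 0, 1, 1, 0, 0, 1, 0]
assert decrypt(encrypt(msg)) == msg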
|
Title: |
DESIGN OF REAL-TIME SYSTEMS BY SYSTEMATIC TRANSFORMATION OF UML/RT
MODELS INTO SIMPLE TIMED PROCESS ALGEBRA SYSTEM SPECIFICATIONS |
Author(s): |
Kawtar Benghazi Akhlaki, Manuel Icidro Capel Tuñon and Juan Antonio
Holgado Terriza |
Abstract: |
The systematic translation from a UML/RT model
into CSP+T specifications, proposed in a previous paper, may give a way
to use UML and CSP jointly in a unified, practical and rigorous software
development method for real-time systems. We present here a systematic
transformation method to derive a correct system specification in terms
of CSP+T from a semi-formal system requirement specification (UML-RT),
by applying a set of transformation rules which give a formal semantics
to the semi-formal analysis entities of UML/RT, and thus open up the
possibility of verifying a software system design that also includes
real-time constraints. To show the applicability of the approach, a
correct design of a real-time system is obtained by following the
process of development proposed here. |
|
Title: |
USING ASPECT-ORIENTED SOFTWARE DEVELOPMENT IN REAL-TIME EMBEDDED
SYSTEMS SOFTWARE - A REVIEW OF SCHEDULING, RESOURCE ALLOCATION AND
SYNCHRONIZATION |
Author(s): |
Pericles Leng Cheng and George Angelos Papadopoulos |
Abstract: |
Timeliness and criticality of a process are the
two main concerns when designing real-time systems. In addition, embedded
systems are bound by limited resources. To achieve timeliness and conform
to the criticality requirements of various processes while using a
minimal amount of resources, real-time embedded systems use different
techniques such as task scheduling, resource management and task
synchronization. All of these techniques involve a number of the system's
modules, which makes the use of Aspect-Oriented Software Development
imperative. AOSD is a programming technique which uses the notion of join
points to capture specific locations in code execution and then uses
advice to insert new code.
This paper examines existing work in the development of schedulers,
resource allocation agents and synchronization techniques using
Aspect-Oriented Software Development in real-time systems and more
specifically in embedded systems. An analysis of the existing research
is used to describe the advantages of using AOSD over conventional OOP
methods and to identify areas where further research may be required. |
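As a loose illustration of join points and advice, the following Python
sketch weaves deadline-monitoring advice around a task function using a
decorator; real-time AOSD work typically uses AspectJ-like weavers, so
this is only an analogy and all names are invented.

# Loose Python analogy of AOSD advice: a decorator "weaves" monitoring
# code around a join point (a task's run function) without touching the
# task logic itself.
import functools, time

def deadline_advice(deadline_ms):
    def weave(task):
        @functools.wraps(task)
        def wrapper(*args, **kwargs):
            start = time.monotonic()              # "before" advice
            result = task(*args, **kwargs)        # the join point proper
            elapsed = (time.monotonic() - start) * 1000
            if elapsed > deadline_ms:             # "after" advice
                print(f"{task.__name__} missed deadline: {elapsed:.1f} ms")
            return result
        return wrapper
    return weave

@deadline_advice(deadline_ms=5)
def sensor_poll():
    time.sleep(0.002)   # simulated work

sensor_poll()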
|
Title: |
SYSTEM ANALYSIS AND DESIGN IN A LARGE-SCALE SOFTWARE PROJECT: THE
CASE OF TRANSITION TO AGILE DEVELOPMENT |
Author(s): |
Yael Dubinsky, Orit Hazzan, David Talby and Arie Keren |
Abstract: |
Agile software development methods mainly aim
at increasing software quality by fostering customer collaboration and
performing exhaustive testing. The introduction of Extreme Programming
(XP) – the most common agile software development method – into an
organization is accompanied by conceptual and organizational changes.
These changes range from daily-life changes (e.g., sitting together and
maintaining an informative project environment) to changes at the
management level (e.g., meeting and listening to the customer during the
whole process, and the whole-team concept, which means that all role
holders are part of the team). This paper
examines the process of transition to an agile development process in a
large-scale software project in the Israeli Air Force as it is perceived
from the system analysis and design perspective. Specifically, the
project specifications of the agile team are compared with those of a
team that continues working according to the previous heavyweight method
during the first half year of transition. Size and complexity measures
are used as the basis of the comparison. In addition to the inspection
of the specifications, the change in the role of the system analysts, as
the system analysts conceive of it, is examined. |
|
Title: |
BUSINESS PROCESS VISUALIZATION - USE CASES, CHALLENGES, SOLUTIONS |
Author(s): |
Stefanie Rinderle, Ralph Bobrik, Manfred Reichert and Thomas Bauer |
Abstract: |
The proper visualization and monitoring of
their (ongoing) business processes is crucial for any enterprise. Thus a
broad spectrum of processes has to be visualized ranging from simple,
short-running processes to complex long-running ones (consisting of up
to hundreds of activities). In any case, users should be able to quickly
understand the logic behind a process and to get a quick overview of
related tasks. One practical problem arises when different fragments of
a business process are scattered over several systems, where they are
often modeled using different process meta models (e.g., High-Level
Petri Nets). The challenge is to find an integrated and user-friendly
visualization for these business processes. In this paper we identify
use cases relevant in this context. Since existing graph layout
approaches have so far focused on general graph drawing, we further
develop a specific approach for laying out business process graphs. The
work presented in this paper is embedded within a larger project on the
visualization of automotive processes. |
|
Title: |
HYBRID MODELING USING I* AND AGENTSPEAK(L) AGENTS IN AGENT ORIENTED
SOFTWARE ENGINEERING |
Author(s): |
Aniruddha Dasgupta, Farzad Salim, Aneesh Krishna and Aditya K.
Ghose |
Abstract: |
In this paper we use i*, a semi-formal
modelling framework, to model agent-based applications. We then describe
how we map these models to executable AgentSpeak(L) agents that form the
essential components of a multi-agent system. We show that by making
changes to the i* model we can generate different executable multi-agent
systems. We also describe reverse mapping rules to show how changes to
agents in the multi-agent system are reflected in the i* model. This
co-evolution of two models offers a novel approach for configuring and
prototyping agent based systems. |
|
Title: |
ON IMPLEMENTING INTEROPERABLE AND FLEXIBLE SOFTWARE EVOLUTION
ACTIVITIES |
Author(s): |
Mourad Bouneffa, Henri Basson and Y. Maweed |
Abstract: |
In this paper we present an approach for
assisting software evolution, based on an integrated model of
representation of the various software artefacts. This model is founded
on typed, attributed graphs, together with a representation of these
graphs using GXL (eXtensible Graph Language), a language for structuring
hyperdocuments. The GXL hyperdocuments are used to facilitate the
interoperability between tools intended to represent and handle various
aspects of software evolution. We also use graph rewriting systems for a
simple and flexible implementation of the mechanisms required for
reasoning about software evolution management. Our approach has been
applied to several applications; it is illustrated here on the change
impact management of applications developed according to the multi-tiered
Java J2EE architecture, and on the architecture recovery of these
applications. |
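A minimal sketch of the change-impact reasoning, assuming a typed,
attributed dependency graph; networkx stands in here for the GXL-encoded
graphs, and the J2EE-flavoured node and edge labels are invented
examples.

# Minimal sketch: change-impact propagation over a typed, attributed
# artefact graph. Edges point from dependent artefact to dependency.
import networkx as nx

g = nx.DiGraph()
g.add_node("OrderDAO", kind="class", tier="persistence")
g.add_node("OrderService", kind="class", tier="business")
g.add_node("OrderServlet", kind="class", tier="web")
g.add_edge("OrderService", "OrderDAO", dep="calls")
g.add_edge("OrderServlet", "OrderService", dep="calls")

def impact_of_change(graph, changed):
    """Every artefact that (transitively) depends on the changed one."""
    reversed_deps = graph.reverse(copy=False)
    return nx.descendants(reversed_deps, changed)

print(impact_of_change(g, "OrderDAO"))   # {'OrderService', 'OrderServlet'}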
|
Title: |
A PRODUCT ORIENTED MODELLING CONCEPT - HOLONS FOR SYSTEMS
SYNCHRONISATION AND INTEROPERABILITY |
Author(s): |
Salah Baïna, Hervé Panetto and Khalid Benali |
Abstract: |
Throughout product lifecycle coordination needs
to be established between reality in the physical world (physical view)
and the virtual world handled by manufacturing information systems
(informational view). This paper presents the “Holon” modelling concept
as a means for the synchronisation of both the physical and
informational views. Afterwards, we show how the concept of holon can
play a major role in ensuring interoperability in the enterprise
context. |
|
Title: |
TOWARDS A RIGOROUS PROCESS MODELING WITH SPEM |
Author(s): |
Benoit Combemale, Xavier Crégut, Alain Caplain and Bernard Coulette |
Abstract: |
Modeling software processes is a good way to
improve development and thus the quality of resulting applications. The
OMG proposes the SPEM metamodel to describe software processes. SPEM is a
MOF instance and a UML profile. Its concepts are described through class
diagrams. Unfortunately, it lacks a formal description of its semantics,
which makes it hard to use. So, we propose a specialization of SPEM that
clarifies it and we use OCL to formally express constraints on the SPEM
metamodel and on the process model. This specialization has been used to
model a UML based process called MACAO that focuses on user/system
interactions. |
|
Title: |
METHOD FOR USER ORIENTED MODELLING OF DATA WAREHOUSE SYSTEMS |
Author(s): |
Lars Burmester and Matthias Goeken |
Abstract: |
The paper describes a method for data warehouse
development. One critical success factor of data warehouse development
is determining information requirements. Hence, the method first focuses
on gathering the requirements and information needs of the users. An
extended data warehouse architecture and a technique for decomposition
of the system serve as a developing framework. On the one hand this
framework is used to define releases (builds) of the system, which is
indispensable for an incremental development process. On the other hand
it defines intermediate and final work products (artifacts) that are
produced and used during further development stages. Starting with
information requirements elicitation, each increment is realized through
a series of data models which successively are transformed from
conceptual to logical level. These logical data models are then used for
implementation as well as for the modelling of ETL processes. |
|
Title: |
CPN BASED COMPONENT ADAPTATION |
Author(s): |
Yoshiyuki Shinkawa |
Abstract: |
One of the major activities in component based
software development is to identify the components adaptable to the
given requirements. We usually compare requirement specifications with
the component specifications, in order to evaluate the equality between
them. However, there could be several differences between those
specifications, e.g. granularity, expression forms, viewpoints, or the
level of detail, which make the component evaluation difficult. In
addition, recent object-oriented approaches require many kinds of models
to express software functionality, which makes the comparison of
specifications complicated. For rigorous component evaluation, it is
desirable to use concise and simple expression forms of specifications,
which can be used commonly between requirements and components. This
paper presents a formal evaluation technique for component adaptation.
To overcome the granularity difference, the concept of a virtual
component is introduced, which is the reusable unit of this approach. A
virtual component is a set of components that can act as a single
component. In order to express requirements and components commonly and
rigorously, algebraic specification and Colored Petri Nets (CPNs) are
used. Algebraic specification provides the theoretical foundation of
this technique, while CPNs help us to use it intuitively. |
|
Title: |
INFORMATION ASSURANCE ASSET MANAGEMENT ARCHITECTURE USING XML FOR
SYSTEM VULNERABILITY |
Author(s): |
Namho Yoo and Hyeong-Ah Choi |
Abstract: |
This paper suggests an XML-based information assurance (IA) asset
management architecture for system vulnerability. Once an information
assurance vulnerability notice is given for a system, it is important
to reduce the massive systems engineering effort required for IA asset
management. When systems are updated by security patches to mitigate
system vulnerabilities, asset management based on vulnerability updates
and requests is critical for increasing the accuracy, efficiency and
effectiveness of software processes. By employing XML technology, we can
achieve seamless and efficient asset management between heterogeneous
system formats as well as data formats when analysing and exchanging the
information pertinent to an information assurance vulnerability. Thus,
when a system is updated to reduce its vulnerability, the proposed
XML-based IA asset management architecture supports the update. An
executable architecture, implemented to verify the proposed scheme, and a
testing environment are then presented, to mitigate vulnerable systems in
sustained operation. |
|
Title: |
A SOA-BASED SYSTEM INTERFACE CONTROL FOR E-GOVERNMENT |
Author(s): |
Namho Yoo and Hyeong-Ah Choi |
Abstract: |
In this paper, a SOA-based system approach is
presented for system interface control in sustained systems. Once a
system is completely developed, it goes into a sustained phase supported
by many interfaces. As new technologies develop, updating and
maintaining such systems require non-trivial efforts. A clear
pre-requisite before the deployment of a new system is to clarify the
influence of changes on other systems connected through interfaces.
However, as each sustained system manages its own information
separately, integrating relevant information among the interfaced
systems is a major hurdle to building SOA in E-Gov. Therefore, XML
technology is applied to support system interface control toward SOA,
using a step-by-step approach in E-Government. In particular, we focus on
messaging interface issues in Health Level Seven, typically used in
medical information systems, and propose a SOA framework cube and a scheme
to represent message information that can be used for the decision
support of interface impact between sustained systems. |
|
Title: |
APPLYING SOFTWARE FACTORIES TO PERVASIVE SYSTEMS: A PLATFORM
SPECIFIC FRAMEWORK |
Author(s): |
Javier Muñoz and Vicente Pelechano |
Abstract: |
The rise in the number and complexity of
pervasive systems is a fact. This kind of system involves the
integration of physical devices and software components in order to
provide services to the inhabitants of an environment. Current
techniques for developing pervasive systems provide low-level
abstraction primitives, which makes the construction of large systems
difficult. Software Factories and the Model Driven Architecture (MDA) are
two important trends in the software engineering field that can provide
sensible benefits in the development of pervasive systems. In this
paper, we present an approach for building a Software Factory for
pervasive systems, focusing on the definition of a product line for this
kind of systems. We introduce a software architecture for pervasive
systems, which is supported by a software framework implemented using
the OSGi technology. Then, we integrate the framework into the MDA
standard defining the framework metamodel and providing tool support for
the automatic code generation. |
|
Title: |
GRADUAL MODELING OF INFORMATION SYSTEM - MODEL OF METHOD EXPRESSED
AS TRANSITIONS BETWEEN CONCEPTS |
Author(s): |
Marek Pícka and Robert Pergl |
Abstract: |
The objective of this paper is to show a new
way of depicting the models of information system design methods. New
terms of the method are created by sequential transformations from the
existing terms. The model of elements' transformation is an instance of
this model; it depicts the process of constructing a given information
system. |
|
Title: |
INCREASING THE VALUE OF PROCESS MODELLING |
Author(s): |
John Krogstie, Vibeke Dalberg and Siri Moe Jensen |
Abstract: |
This paper presents an approach supporting
efforts to increase the value gained from enterprise modelling
activities in an organisation, both on a project and on an
organisational level. The main objective of the approach is to
facilitate awareness of, communication about, and coordination of
modelling initiatives between stakeholders and within and across
projects, over time. The first version of the approach as a normative
process model is presented and discussed in the context of case projects
and activities, and we conclude that although work remains both on
sophistication of the approach and on validation of its general
applicability and value, our results so far show that it addresses
recognised challenges in a useful way. |
|
Title: |
INTRODUCING A UML PROFILE FOR DISTRIBUTED SYSTEM CONFIGURATION |
Author(s): |
Nancy Alexopoulou, A. Tsadimas, M. Nikolaidou, A. Dais and D.
Anagnostopoulos |
Abstract: |
Distributed system configuration consists of
distributed application component placement and underlying network
design, and is thus a complex process dealing with interrelated issues. A
four-stage methodology has been proposed in order to effectively explore
configuration problems. A common metamodel for distributed system
representation in all configuration stages is thus required, so that
unclear dependencies between discrete stages can be easily identified.
This model should also be easily adopted by autonomous software tools
used for the automation of discrete configuration stages and for the
efficient development of system specifications by designers. We propose
such a metamodel using UML 2.0. More specifically, we introduce a UML
2.0 profile facilitating the distributed system configuration process. In
this profile, different UML 2.0 diagrams are integrated and properly
extended, in order to model all aspects of the distributed system
configuration process. Stereotypes proved to provide an efficient
extension mechanism as no metamodel extensions were needed. This profile
can also be used within the Rational Modeler platform. |
|
Title: |
UML-BASED BUSINESS PROCESS REENGINEERING (BPR-UML) APPLIED TO IT
OUTSOURCING |
Author(s): |
Edumilis Maria Méndez, Luis Eduardo Mendoza, María A. Pérez and
Anna C. Grimán |
Abstract: |
Business Process Reengineering (BPR) is one of
the current trends used by organizations to face global market
pressures. BPR provides firms with an analysis of their internal
processes with the view to offering them customized solutions which are
focused on their goals. In addition, Business Process Outsourcing (BPO)
can be used: 1) as a tool for new processes defined by reengineering,
allowing Information Technology experts to perform the business process
involving that know-how; and 2) as part of a BPR, to reorient its
implementation regarding quality levels and client satisfaction. This
paper presents a methodological proposal merging the BPR methodology
proposed by Jacobson (1994) with Rational Unified Process (RUP). It also
describes how this proposal can be applied in a BPR for IT Outsourcing
in order to improve efficiency and quality levels in the corresponding
business processes, specifically, the Printing Outsourcing Service
(POS). This methodological proposal ensures traceability between the
models proposed for the business and the features that the technological
enablers should have for supporting them, thus reaching higher
effectiveness in the reengineering process. |
|
Title: |
A GENERATOR FRAMEWORK FOR DOMAIN-SPECIFIC MODEL TRANSFORMATION
LANGUAGES |
Author(s): |
Thomas Reiter, Elisabeth Kapsammer, Werner Retschitzegger, Wieland
Schwinger and Markus Stumptner |
Abstract: |
Domain-specific languages play an important
role in model-driven development, as they allow a system to be modeled
using modeling constructs carrying implicit semantics specific to a
domain. Consequently, many potentially reusable domain-specific languages
will emerge. Certain application areas, such as business process
engineering, can be jointly covered by a number of conceptually related
DSLs that are similar in the sense of sharing semantically equal
concepts. Although a crucial role in being able to use, manage and
integrate all these DSLs falls to model transformation languages, with
QVT as one of their most prominent representatives, existing approaches
have not aimed at reaping the benefit of these semantically overlapping
DSLs in terms of providing abstraction mechanisms for shared concepts.
Therefore, as opposed to the general-purpose model transformation language
sought after with the QVT-RFP, this work discusses the possibility of
employing domain-specific model transformation languages. These are
specifically tailored for defining transformations between metamodels
sharing certain characteristics. In this context, the paper introduces a
basic framework which allows generating the necessary tools to define
and execute transformations written in such a domain-specific
transformation language. To illustrate the approach, an example language
will be introduced and its realization within the framework is shown. |
|
Title: |
A FORMAL APPROACH TO DETECTING SHILLING BEHAVIORS IN CONCURRENT
ONLINE AUCTIONS |
Author(s): |
Yi-Tsung Cheng and Haiping Xu |
Abstract: |
Shilling behaviors are one of the most serious
fraudulent problems in online auctions; they force winning bidders to
pay more than they should for auctioned items. In concurrent
online auctions, shilling behaviors are even more severe because
detecting, predicting and preventing such fraudulent behaviors becomes
very difficult. In this paper, we propose a formal approach to detecting
shilling behaviors in concurrent online auctions using model checking
techniques. We first develop a model template that represents two
concurrent online auctions in Promela. Based on the model template, we
derive an auction model that simulates the bidding process of two
concurrent auctions. Then we use the SPIN model checker to formally
verify whether the auction model satisfies normal and questionable
behavioral properties written in LTL (Linear Temporal Logic) formulae.
Our approach reduces the problem of searching for shilling behavior in
concurrent online auctions to a model checking problem. Finally, we
provide a case study to illustrate how our approach can effectively
detect possible shill bidders. |
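The abstract does not quote its LTL formulae, so the following is only a
hypothetical example of the kind of questionable-behaviour property such
an approach might check:

\[
\square\bigl(\mathit{bids}(s) \rightarrow \lozenge\,\mathit{outbid}(s)\bigr)
\;\wedge\;
\square\bigl(\mathit{closed} \rightarrow \neg\,\mathit{winner}(s)\bigr)
\]

that is, a suspected shill s is always eventually outbid and never holds
the winning bid when an auction closes; the SPIN model checker can then
search the Promela model for runs matching or violating such a property.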
|
Title: |
ARCHITECTING SOA SOLUTIONS FROM ENTERPRISE MODELS - A MODEL DRIVEN
FRAMEWORK TO ARCHITECT SOA SOLUTIONS FROM ENTERPRISE MODELS |
Author(s): |
Xabier Larrucea and Gorka Benguria |
Abstract: |
The improvement of operational efficiency
is an important concern in several kinds of enterprises, but it
involves the management of a multitude of elements. To be able to cope
with such complexity, several enterprises are relying on the use of
enterprise modelling tools. This usually becomes a starting point for
business process automation initiatives towards the improvement of the
organisation. However, there is still a large gap from these enterprise
models to the infrastructure systems. The current paper presents an MDA
(Model Driven Architecture) framework over the Eclipse platform to
address this gap for SOA (Service Oriented Architecture) based solutions,
and, in more depth, the notation and transformation aspects of the
framework. The framework provides a systematic approach for deriving SOA
solutions from enterprise models, ensuring that the information systems
really implement the models developed by the business experts rather than
partial interpretations by IT experts. |
|
Title: |
TOWARDS A MAINTAINABILITY EVALUATION IN SOFTWARE ARCHITECTURES |
Author(s): |
Anna Grimán, Luisana Chávez, María Pérez, Luis Mendoza and Kenyer
Domínguez |
Abstract: |
Software quality is defined as user requirement
satisfaction, including those requirements that are visible (external
quality) and those that are exclusive to the product (internal quality).
Maintainability is an internal quality characteristic considered
important by many users and developers, and it is therefore deeply
related to software architecture: the architecture's organization of
components and relations promotes or obstructs attributes such as
testability, changeability, and analyzability. This relationship between
maintainability and software architecture determines the importance of
making appropriate architectural decisions. As part of research in
progress, this article analyzes and organizes a set of architectural
mechanisms that guarantee software maintainability. To propose the
architectural mechanisms, we decided first to construct an ontology,
which helps identify all concepts related to maintainability and their
relationships. We then focus on and specify mechanisms that promote
maintainability, and we also present a set of scenarios that explore the
presence in the architecture of those concepts previously identified,
including the architectural mechanisms analyzed. With the products
described in this article we have the basis for developing an
architectural evaluation method based on maintainability. |
|
Title: |
METHODOLOGICAL GUIDELINES FOR SQA IN DEVELOPMENT PROCESS - AN
APPROACH BASED ON THE SPICE MODEL |
Author(s): |
Anna Griman, Maria Perez and Luis Mendoza |
Abstract: |
As far as international standards for promoting
Software Process Quality are concerned, one of the most popular and
accepted is ISO 15504 (or SPICE model). On the other hand, since a
development methodology must guide the main activities in software
development, it is necessary that this one fulfils some Quality Base
Practices to guarantee a high-quality product. The purpose of this
research is to analyze a set of five methodologies widely used by
developers, to identify their alignment with the
aforementioned standard. This analysis allowed us to: (1) determine the
degree of alignment of these methodologies with respect to the SPICE
model, and (2) propose a synthesis of methodological guidelines, based
on the best practices obtained from these methodologies, that supports
the characteristics contained in the studied standard. |
|
Title: |
REFINEMENT OF SDBC BUSINESS PROCESS MODELS USING ISDL |
Author(s): |
Boris Shishkov and Dick Quartel |
Abstract: |
A reason for software failures today is the
limited capability of most of the currently used software development
methods to appropriately reflect the original business information in a
software model. The SDBC approach addresses this challenge by allowing
for an adequate mapping between a business process model and a software
specification model. Both models consist of corresponding aspect models
which are to be consistent with each other. They relate to particular
perspectives which consider statics, dynamics, and data. Nevertheless,
we acknowledge that real-life communicative and coordination actions
make a business system more complex than a well-structured and
rules-driven software system. Hence, SDBC also considers a business
perspective which concerns communication and coordination. Thus SDBC
should allow for capturing these issues (as complementing the dynamics)
on the business process modeling level and adequately mapping them to a
dynamic software specification aspect model. SDBC uses three modeling
techniques concerning this goal: two business process modeling
techniques grasping communication and dynamics, respectively, as well as
a software specification technique grasping dynamics. However, the
transformations among these techniques complicate the modeling process.
Further, different techniques use different modeling formalisms whose
reflection sometimes causes limitations. For this reason, we studied
potentials for combining SDBC with an integrated modeling facility based
on the language ISDL. In particular, we explore in this paper the value
which ISDL could bring to SDBC in aligning communication and dynamic
business process models as well as in mapping them towards software
specification. ISDL allows one to refine dynamic process models by
adding communication and coordination actions, and provides a method to
assess whether this refinement conforms to the original process model.
Furthermore, ISDL can be used to model software application services and
designs, thereby allowing one to relate business process modeling and
software specification within the context of the same language facility. |
|
Title: |
VISUAL CONTRACTS - A WAY TO REASON ABOUT STATES AND CARDINALITIES
IN IT SYSTEM SPECIFICATIONS |
Author(s): |
José Diego De la Cruz, Lam-Son Lê and Alain Wegmann |
Abstract: |
Visual modeling languages propose specialized
diagrams to represent behaviour and concepts necessary to specify IT
systems. As a result, to understand a specification, the modeller needs
to analyze these two types of diagrams and, often, additional statements
that make explicit the relationships between them. In this paper, we
define a visual contract notation that integrates behaviour and
concepts. Thanks to this notation, the modeler can specify, within one
diagram, an action and its effects on the specified IT system. The
notation semantics is illustrated by a mapping to Alloy, a lightweight
formalization language.
|
|
Title: |
AN ONTOLOGY FOR ARCHITECTURAL EVALUATION - CASE STUDY:
COLLABORATION SYSTEMS |
Author(s): |
Anna Grimán, María Pérez, José Garrido and María Rodriguez |
Abstract: |
Barbacci et al. (1995) state that the
development of systematic ways to relate the quality attributes of a
system to its architecture constitutes the basis for making objective
decisions on design agreements, and helps engineers make reasonably
accurate predictions about system attributes, free of prejudice and
non-trivial assumptions. The aim is to be able to evaluate an architecture
quantitatively to reach agreements among multiple quality attributes and
thus globally attain a better system. However, the elements required to
incorporate this evaluation into different types of development models
are not clear. This paper proposes an ontology to conceptualize the
issues inherent to architectural evaluation within a development
process, which will help identify the scope of the evaluation, as well
as the issues to be guaranteed to achieve effectiveness within different
development processes, both agile and rigorous. The main conclusion of
the research allowed us to identify the interaction elements between the
development process and an architectural evaluation method, establishing
the starting and end points as well as the inputs required for the
incorporation into different kinds of processes. This interaction was
validated through a case study, a Collaboration Systems Development
Methodology. |
|
Title: |
CEO FRAMEWORK ENTERPRISE MODELS CONFORMANCE WITH ISO14258 |
Author(s): |
Patrícia Macedo, Carla Ferreira and José Tribolet |
Abstract: |
Several international standards for Enterprise
Engineering were developed in order to promote the quality and
reliability of the communication between the partners involved in
business processes, and to upgrade the compatibility and alignment
between the systems which support business processes. In this area an
international standard was developed – ISO 14258 – which specifies rules
and concepts for enterprise modelling. The CEO Framework is an analysis
framework that provides a formal way of describing enterprises. This
article describes how to verify that an enterprise modelling framework
generates models in conformance with ISO 14258. This sequence of steps is
applied to verify the CEO Framework's compliance. |
|
Title: |
METHODOLOGY TO SUPPORT SEMANTIC RESOURCES INTEGRATION IN THE
CONSTRUCTION SECTOR |
Author(s): |
Simona Barresi, Yacine Rezgui, Farid Meziane and Celson Lima |
Abstract: |
Ontologies, taxonomies, and other semantic
resources, are used in a variety of sectors to facilitate knowledge
reuse and information exchange between people and applications. In
recent years, the need to access multiple semantic resources has led to
the development of a variety of projects and tools aiming at integrating
existing resources. This paper describes the methodology used during the
FUNSIEC project, to develop an open infrastructure for the European
Construction sector (OSIECS). This infrastructure aims to facilitate
integration among Construction-related semantic resources,
providing a base for the development of a new generation of e-services
for the domain. |
|
Title: |
BUSINESS PROCESSES: BEHAVIOR PREDICTION AND CAPTURING REASONS FOR
EVOLUTION |
Author(s): |
Sharmila Subramaniam, Vana Kalogeraki and Dimitrios Gunopulos |
Abstract: |
Workflow systems are being used by business
enterprises to improve the efficiency of their internal processes and
enhance the services provided to their customers. Workflow models are the
fundamental components of Workflow Management Systems, used to define
ordering, scheduling and other components of workflow tasks. Companies
can use these models to plan their resource usage, satisfy customer
needs and maximize profit and productivity. Companies increasingly follow
flexible workflow models in order to adapt to changes in business logic,
making it more challenging to predict resource demands. In such a
scenario, knowledge of what lies ahead, i.e., the set of tasks that are
going to be executed in the future, assists process administrators
in taking decisions pertaining to process management in advance. In this
work, we propose a method to predict the possible paths of a running
instance, by applying classification techniques to the history of past
executions of the corresponding process. For instances that deviate from
the workflow model graph, we propose methods to determine the
characteristics of the changes using classification rules. These rules
can be given as input to the process modeler to restructure the graph
accordingly. |
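A minimal sketch of the prediction step, assuming past executions have
been turned into feature vectors labelled with the branch actually taken;
the features, labels and the use of scikit-learn's decision tree are our
own illustrative choices, not necessarily the paper's setup.

# Sketch: learn from past executions which branch a running instance
# will take next, then predict for a new instance.
from sklearn.tree import DecisionTreeClassifier

# Each row: (order_amount, customer_is_new, items); label: next task taken
history_X = [[120, 0, 1], [999, 1, 4], [45, 0, 1], [870, 1, 3], [60, 1, 1]]
history_y = ["auto_approve", "manual_review", "auto_approve",
             "manual_review", "auto_approve"]

clf = DecisionTreeClassifier(max_depth=3).fit(history_X, history_y)

running_instance = [[640, 1, 2]]
print(clf.predict(running_instance))   # predicted next task, for planning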
|
Title: |
GENERATION AND USE OF ONE ONTOLOGY FOR INTELLIGENT INFORMATION
RETRIEVAL FROM ELECTRONIC RECORD HISTORIES |
Author(s): |
Miguel A. Prados de Reyes, Maria Carmen Peña Yañez, Maria Amparo
Vila Miranda and M. Belen Prados Suarez |
Abstract: |
This paper analyzes the terminology used in the
diagnosis, treatment, exploration, and operation descriptions entered by
doctors in electronic healthcare records. From this analysis, the
stability of expressions (and the use of a sufficiently limited and
controlled language) is shown, which makes the terminology reasonably
suitable for a conceptualization process. This conceptualization
process is performed by the generation of an ontology which proposes
semantic classes, according to the different medical concepts, to be used
in database query profiles. By way of summary, we propose a
semantic organizational method so that classes, attributes and
properties in the ontology may act as links between the database and the
users, both in information incorporation processes and in queries. It
offers a wide range of benefits by extending information
management possibilities and making them more flexible, and by enabling
the application of traditional data mining techniques. |
|
Title: |
SUPPORTING AUTHENTICATION REQUIREMENTS IN WORKFLOWS |
Author(s): |
Ricardo Martinho, Dulce Domingos and António Rito-Silva |
Abstract: |
Workflow technology nowadays represents
significant added value to organizations that use information systems to
support their business processes. By their nature, workflows support the
integration of different information systems. As organizations use
workflows increasingly, workflows manipulate more valuable and sensitive
data. Whether because of interoperability issues or because of the value
of the data manipulated, a workflow may present several distinct
authentication requirements. Typically, information systems deal with their
authentication requirements once, within their authentication process.
This strategy cannot be easily applied to workflows since each workflow
activity may present its own authentication requirements. In this paper
we identify authentication requirements that workflows present and we
propose to meet these requirements by incorporating authentication
constraints into workflow authorization definitions. With this purpose,
we extend the Role-Based Access Control (RBAC) model and we define an
access control algorithm that supports and enforces authorization
decisions constrained by authentication information. |
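A minimal Python sketch of the idea of authorization constrained by
authentication, assuming invented roles, activities and a simple ordering
of authentication methods; the paper's actual model is richer.

# Sketch: a role's permission is honoured only if the session's
# authentication method meets the activity's minimum requirement.
AUTH_LEVEL = {"password": 1, "otp": 2, "smartcard": 3}

ROLE_PERMISSIONS = {"clerk": {"enter_claim"},
                    "manager": {"approve_payment"}}
MIN_AUTH_FOR_ACTIVITY = {"enter_claim": "password",
                         "approve_payment": "smartcard"}

def can_execute(user_roles, auth_method, activity):
    # 1. classic RBAC check: some active role holds the permission
    if not any(activity in ROLE_PERMISSIONS.get(r, set())
               for r in user_roles):
        return False
    # 2. authentication constraint attached to the workflow activity
    required = MIN_AUTH_FOR_ACTIVITY[activity]
    return AUTH_LEVEL[auth_method] >= AUTH_LEVEL[required]

print(can_execute({"manager"}, "password", "approve_payment"))   # False
print(can_execute({"manager"}, "smartcard", "approve_payment"))  # True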
|
Title: |
TECHNOLOGY FOR LEAST-COST NETWORK ROUTING VIA BLUETOOTH AND ITS
PRACTICAL APPLICATION - REPLACING INTERNET ACCESS THROUGH WIRELESS PHONE
NETWORKS BY BT DATA LINKS |
Author(s): |
Hans Weghorn |
Abstract: |
Today, mobile devices are equipped with a
variety of wireless communication interfaces. While initially small
handheld devices could only use cellular telephony networks for
establishing data communication such as Internet downloads, nowadays
data contents can additionally be retrieved through communication
standards like wireless LAN or Bluetooth. For the latter there exists a
variety of technical and scientific papers that discuss how Bluetooth
communication can be established in principle – especially between two
mobile devices. On the other hand, a description of how data
communication between a mobile device and a desktop computer can be
implemented is not found in detail. Furthermore, the restrictions of
Bluetooth communication, such as extended search times, are not discussed
in these qualitative articles. In the technical description given here,
we show how to establish, with minimal effort, a streaming data link
between handheld devices and permanently installed computer systems, in
the form of a software implementation recipe. On the basis of concrete
application samples, it is shown that Bluetooth can be employed to
construct location-based information services with least-cost Internet
data routing; the constraints and efficiency of Bluetooth communication
technology are also investigated and discussed for the given
applications. |
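A sketch of the desktop (server) side of such a handheld-to-PC link,
assuming the PyBluez library and a placeholder service UUID; error
handling and the handheld side are omitted.

# Sketch: an RFCOMM server the handheld can discover and stream from.
import bluetooth

SERVICE_UUID = "94f39d29-7d6d-437d-973b-fba39e49d4ee"   # placeholder

server = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
server.bind(("", bluetooth.PORT_ANY))
server.listen(1)
bluetooth.advertise_service(server, "LeastCostGateway",
                            service_id=SERVICE_UUID,
                            service_classes=[SERVICE_UUID,
                                             bluetooth.SERIAL_PORT_CLASS],
                            profiles=[bluetooth.SERIAL_PORT_PROFILE])

client, addr = server.accept()      # blocks until the handheld connects
print("streaming to", addr)
try:
    while True:
        request = client.recv(1024) # handheld asks for a resource
        if not request:
            break
        client.send(b"payload fetched via the PC's cheap Internet link")
finally:
    client.close()
    server.close()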
|
Title: |
COMMONALITY VERSUS VARIABILITY - THE CONTRADICTORY NATURE OF
ENTERPRISE SYSTEMS |
Author(s): |
Stig Nordheim |
Abstract: |
This position paper argues that there is a
major contradiction inherent in Enterprise Systems (ES). The evidence
for this contradiction is seen in the meta-level concepts of commonality
and variability that characterize enterprise systems. The inherent
contradiction of commonality and variability is discussed in the light
of ES literature and interviews with three ES vendors. The inherent
contradiction of enterprise systems is then presented, with the
questions it raises. |
|
Title: |
TRANSFORMATION OF UML DESIGN MODEL INTO PERFORMANCE MODEL - A
MODEL-DRIVEN FRAMEWORK |
Author(s): |
Ramrao Wagh, Umesh Bellur and Bernard Menezes |
Abstract: |
Software Performance Engineering is receiving
increasing attention in today’s software dominated world. Compared to
research work in performance evaluation in hardware and networks, this
field is still in its nascent stage. Many methods have been proposed, but
the majority of them do not fit into a software development
life-cycle dominated by professionals without a substantial performance
engineering background. We propose a Model Driven Software Performance
Engineering Framework to facilitate performance engineering within
software development life cycle, based on OMG’s MDA initiative.
|
|
Title: |
MOLDING ARCHITECTURE AND INTEGRITY MECHANISMS EVOLUTION - AN
ARCHITECTURAL STABILITY EVALUATION MODEL FOR SOFTWARE SYSTEMS |
Author(s): |
Octavian-Paul Rotaru |
Abstract: |
The stability of architecture is a measure of
how well it accommodates the evolution of the system without requiring
changes to the architecture. The link between integrity mechanisms and
application’s architecture starts right from the moment the requirements
of the application are defined and evolves together with them. The
integrity mechanisms used will evolve whenever the application’s
requirements are modified. Apart from the possible architectural changes
required, adding a new requirement to an application can trigger
structural changes in the way data integrity is preserved. The paper
studies the architectural stability of a system through an
integrity-oriented case study and proposes a mathematical model for the
architectural evaluation of software systems inspired by perturbation
theory. The proposed mathematical model can be used to mold the evolution
of any software system affected by requirements changes; to find the
architectural states of the system for which a given set of requirements
is not a trigger (does not provoke an architectural change); and to find
the architectural configuration which is optimal for a given set of
requirements (evolves as little as possible). |
|
Title: |
APPLYING AGENT-ORIENTED MODELLING AND PROTOTYPING TO
SERVICE-ORIENTED SYSTEMS |
Author(s): |
Aneesh Krishna, Ying Guan and Aditya Ghose |
Abstract: |
A Service-Oriented Architecture (SOA) is a form
of distributed systems architecture, which is essentially a collection
of services. Web services are built in the distributed environment of
the Internet, enabling the integration of applications in a web
environment. In this paper, we show how agent-oriented conceptual
modelling techniques can be used to model service-oriented systems and
architectures and how these models can be executed. The resulting
executable specification environment permits us to support early rapid
prototyping of the service-oriented systems, at varying levels of
abstraction. |
|
Area 4 - Software Agents and Internet
Computing |
Title: |
MOBILE AGENT IN E-COMMERCE |
Author(s): |
Mohamed Elkobaisi |
Abstract: |
Among features often attributed to software
agents are autonomy and mobility. Autonomy of e-commerce agents involves
adaptability to engage in negotiations governed by mechanisms not known
in advance, while their mobility entails such negotiations taking place
at remote locations. This paper aims at combining adaptability with
mobility, by joining rule-based mechanism representation with modular
agent design, and at UML-formalizing selected aspects of the resulting
system. Furthermore, we discuss the issue of agent mobility and argue
why such agents have been proposed for the system under consideration. |
|
Title: |
CONTEXT-DRIVEN POLICY ENFORCEMENT AND RECONCILIATION FOR WEB
SERVICES |
Author(s): |
S. Sattanathan, N. C. Narendra, Z. Maamar and G. Kouadri Mostéfaoui |
Abstract: |
Security of Web services is a major factor in
their successful integration into critical IT applications. Extensive
research in this direction concentrates on low-level aspects of security
such as message secrecy, data integrity, and authentication. Thus,
proposed solutions are mainly built upon the assumption that security
mechanisms are static and predefined. However, the dynamic nature of the
Internet and the continuously changing environments where Web services
operate require innovative and adaptive security solutions. This paper
presents our solution for securing Web services based on adaptive
policies, where adaptability is satisfied using the contextual
information of the Web services. The proposed solution includes a
negotiation and reconciliation protocol for security policies. |
|
Title: |
OWL-BASED KNOWLEDGE DISCOVERY USING DESCRIPTION LOGICS REASONERS |
Author(s): |
Dimitrios A. Koutsomitropoulos, Dimitrios P. Meidanis, Anastasia N.
Kandili and Theodore S. Papatheodorou |
Abstract: |
The recent advent of the Semantic Web has given
rise to the need for efficient and sound methods that would provide
reasoning support over the knowledge scattered on the Internet.
Description Logics and DL-based inference engines in particular play a
significant role towards this goal, as they seem to have overlapping
expressivity with the Semantic Web de facto language, OWL. In this paper
we argue that DLs currently constitute one of the most tempting
available formalisms to support reasoning with OWL. Further, we present
and survey a number of DL based systems that could be used for this
task. Around one of them (Racer) we build our Knowledge Discovery
Interface, a web application that can be used to pose intelligent
queries to Semantic Web documents in an intuitive manner. As a proof of
concept, we then apply the KDI on the CIDOC-CRM reference ontology and
discuss our results. |
|
Title: |
MAINTAINING PROPERTY LIBRARIES IN PRODUCT CLASSIFICATION SCHEMES |
Author(s): |
Joerg Leukel |
Abstract: |
Semantic interoperability in B2B e-commerce can
be achieved by committing to a product ontology that establishes a
shared and common understanding of a product domain. This issue is
mainly the subject of standard product classification schemes. Recently,
considerable research and industry work has been carried out on
enhancing the formal precision of these schemes. Providing specific
property lists for each product class is seen as a first step towards
true product ontologies. However, horizontal classification schemes
often consist of more than 10,000 classes, several thousand properties,
and an even greater number of class-property relations. Given the new
requirement towards property-centric classification, maintaining these
business vocabularies is greatly influenced by strategies for managing
the property definitions and their relationships to classes. This paper
proposes and evaluates measures for coping with the problem of extensive
and steadily growing property libraries. It can be shown that
implementing these measures greatly influences both standards makers and
standards adopters. |
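One maintenance measure of the kind discussed can be sketched as follows:
attach property lists to classes high in the hierarchy and let subclasses
inherit them, so each property is defined and maintained in a single
place. The scheme content below is invented, not taken from any standard.

# Sketch: class-property relations with inheritance along the hierarchy.
class ProductClass:
    def __init__(self, name, parent=None, own_properties=()):
        self.name, self.parent = name, parent
        self.own_properties = set(own_properties)

    def properties(self):
        # Effective property list = inherited properties + own ones
        inherited = self.parent.properties() if self.parent else set()
        return inherited | self.own_properties

goods     = ProductClass("Goods", own_properties={"gtin", "net_weight"})
computers = ProductClass("Computers", goods, {"cpu_type"})
laptops   = ProductClass("Laptops", computers, {"battery_life"})

print(sorted(laptops.properties()))
# ['battery_life', 'cpu_type', 'gtin', 'net_weight']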
|
Title: |
A POLICY-BASED APPROACH TO SECURE CONTEXT IN A WEB SERVICES
ENVIRONMENT |
Author(s): |
Zakaria Maamar, Ghita Kouadri Mostéfaoui and Djamal Benslimane |
Abstract: |
This paper presents a policy-based approach for
securing the contexts associated with Web services, users, and computing
resources. Users interact with Web services for personalization needs,
and Web services interact with resources for performance needs. To
authorize any context change, a security context is developed. The
security context reports on the strategies that protect a context using
authorization and restriction policies. |
|
Title: |
INTEGRATING SEMANTIC WEB REASONING INTO LEARNING OBJECT METADATA |
Author(s): |
Shang-Juh Kao and I-Ching Hsu |
Abstract: |
One of the important functions of Learning Object
Metadata (LOM) is to associate XML-based metadata with learning objects.
The inherent problem of LOM is that it is XML-specified, emphasizing
syntax and format rather than semantics and knowledge. Hence, it lacks
the semantic metadata needed to provide reasoning and inference functions.
These functions are necessary for the computer-interpretable
descriptions that are critical to the reusability and interoperability
of distributed learning objects. This paper aims at addressing this
shortcoming, and proposes a multi-layered semantic framework that allows
reasoning and inference capabilities to be added to conventional
LOM. To illustrate how this framework works, we developed a
Semantic-based Learning Objects Annotations Repository (SLOAR) that
offers three different approaches to locate relevant learning objects
for an e-learning application - LOM-based metadata, ontology-based
reasoning, and rule-based inference. Finally, an experimental report for
performance evaluation of the various approaches is presented. |
|
Title: |
RECOVERY SERVICES FOR THE PLANNING LAYER OF AGENTS |
Author(s): |
Khaled Nagi and George Beskales |
Abstract: |
Software agents aim at automating
user tasks. A central task of an agent is planning to achieve its goals.
Unexpected disturbances occurring in the agent's execution environment
represent a serious challenge for agent planning. In this work, a
recovery model for the planning process of agents is proposed to cope
with these disturbances. The proposed recovery model supports
Hierarchical Task Network (HTN) planners, which represent a broad family
of planners widely used in agent systems. A prototype of the
proposed recovery services is implemented to demonstrate the feasibility
of the proposed approach. Furthermore, a simulation was built and many
simulation experiments were conducted to gain insight into the
performance of the proposed recovery model. |
|
Title: |
SAFETY OF CHECKPOINTING AND ROLLBACK-RECOVERY PROTOCOL FOR MOBILE
SYSTEMS WITH RYW SESSION GUARANTEE |
Author(s): |
Jerzy Brzeziński, Anna Kobusińska and Jacek Kobusiński |
Abstract: |
This paper presents rRYW checkpointing and
rollback-recovery protocol for mobile environment, where clients
accessing data are not bound to particular servers and can switch from
one server to another. The proposed protocol is integrated with the
underlying consistency VsSG protocol, which provides system consistency
from a client’s perspective. The rRYW protocol combines logging and
checkpointing of operations to preserve the Read Your Writes session
guarantee provided by VsSG, even in the case of server failures. The paper
includes proofs of safety and liveness properties of the presented
protocol. |
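Abstracting away the protocol's logs and checkpoints, the guarantee at
stake can be sketched as follows: the client carries the identifiers of
its writes, and a server may answer its read only once all of them are
visible locally. This is illustrative Python, not the rRYW protocol
itself.

# Sketch of the Read-Your-Writes session guarantee for a mobile client.
class Server:
    def __init__(self):
        self.applied = set()    # ids of writes this replica has applied
        self.data = {}

    def write(self, wid, key, value):
        self.data[key] = value
        self.applied.add(wid)

    def read(self, key, client_writes):
        # RYW condition: refuse (or first synchronise) until every write
        # issued in this client session is visible at this server.
        if not client_writes <= self.applied:
            raise RuntimeError("RYW violated: missing " +
                               str(client_writes - self.applied))
        return self.data.get(key)

s1, s2 = Server(), Server()
session_writes = set()

s1.write("w1", "x", 42); session_writes.add("w1")
print(s1.read("x", session_writes))    # 42: s1 has applied w1
try:
    s2.read("x", session_writes)       # client switched to another server
except RuntimeError as e:
    print(e)                           # RYW violated: missing {'w1'}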
|
Title: |
A MULTI-AGENT SYSTEM FOR INFORMATION SHARING |
Author(s): |
Marco Mari, Agostino Poggi and Michele Tomaiuolo |
Abstract: |
This paper presents RAIS, a peer-to-peer
multi-agent system supporting the sharing of information among a
community of users connected through the internet. RAIS has been
designed and implemented on top of well-known technologies and
software tools for realizing multi-agent and peer-to-peer systems, for
searching information and for the authentication and
authorization of users. RAIS offers search power similar to that of Web
search engines, but avoids the burden of publishing the information on
the Web and guarantees controlled and dynamic access to the information.
Moreover, the use of agent technologies simplifies the realization of
three of the main features of the system: i) the filtering of the
information coming from different users on the basis of the previous
experience of the local user, ii) the pushing of the new information
that can be of possible interest for a user, and iii) the delegation of
access capabilities on the basis of a network of reputation built by the
agents of the system on the community of its users.
|
|
Title: |
MIGRATING LEGACY VIDEO LECTURES TO MULTIMEDIA LEARNING OBJECTS |
Author(s): |
Andrea De Lucia, Rita Francese, Massimiliano Giordano, Ignazio
Passero and Genoveffa Tortora |
Abstract: |
Video lectures are an old distance learning
approach which does not offer the user any interaction or retrieval
features. Thus, to follow the new learning paradigms we need to
reengineer the e-learning processes while preserving the investments
made in the past. In this paper we present a methodology for
semi-automatically migrating traditional video lectures into multimedia
Learning Objects. The process identifies the frames where a slide
transition occurs and extracts from the PowerPoint Presentation
information for structuring the Learning Object metadata. Similarly to
scene detection approaches, we iteratively tune several parameters
starting from a small portion of the video to reach the best results.
Once a slide transition is correctly detected, the video sample is
successively enlarged until satisfactory results are reached. The
proposed approach has been validated in a case study. |
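A minimal sketch of the slide-transition detection step, assuming OpenCV:
consecutive sampled frames are compared and a transition is flagged when
the fraction of changed pixels exceeds a threshold. The sampling step and
thresholds below are exactly the kind of parameters the paper tunes
iteratively on a small portion of the video; the values here are invented.

# Sketch: flag candidate slide transitions by frame differencing.
import cv2

def slide_transitions(path, step=25, threshold=0.20):
    cap = cv2.VideoCapture(path)
    transitions, prev, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:                      # sample every `step` frames
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            if prev is not None:
                diff = cv2.absdiff(gray, prev)
                changed = (diff > 30).mean()     # fraction of changed pixels
                if changed > threshold:
                    transitions.append(idx)      # candidate slide change
            prev = gray
        idx += 1
    cap.release()
    return transitions

print(slide_transitions("lecture.avi"))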
|
Title: |
AN SMS-BASED E-GOVERNMENT MODEL |
Author(s): |
Tony Dwi Susanto and Robert Goodwin |
Abstract: |
The fact that more than one-third of
e-government initiatives in developing countries are total failures,
half are partial failures and roughly only one-seventh are successful,
shows that e-government development in developing countries has many
problems. According to an analysis by Heeks (2003), one of the failure
factors of e-government in developing countries is unrealistic design.
This paper will focus on this factor, particularly the mismatch between
the technological design for accessing e-government systems and citizens'
skills and access to the technology. Many countries,
particularly developing countries, still face problems of poor
internet infrastructure, low internet penetration, internet
illiteracy and high internet costs. When governments implement web-based
e-government models which require citizens to access the system by the
Internet/web medium, the failure rate is high as few citizens can
participate. There is a technology gap between design and reality. In the
same countries, mobile phones are widely used, are low in cost, and
citizens are more familiar with the short message service application
(SMS) than the Internet and Web. In order to address this situation, the
paper proposes an SMS-based e-government system as a first step toward a
future Internet-based e-government system in order to increase public
(citizens and businesses) participation in e-government systems. |
|
Title: |
SUPPORTING COMPLEX COLLABORATIVE LEARNING ACTIVITIES – THE
LIBRESOURCE APPROACH |
Author(s): |
Olivera Marjanovic, Hala Skaf-Molli , Pascal Molli , Fethi Rabhi1
and Claude Godart |
Abstract: |
The main objective of this paper is to describe
a collaborative technology called LibreSource and how it is used to
implement an innovative learning/teaching activity designed for software
engineering students. From the educational perspective, this
activity is based on the principles of problem-based learning and the
latest Learning Design theory. The main objective of this activity is to
offer students a real-life experience in collaborative software
development. Compared to the popular Learning Management Systems that
only offer collaborative tools and support individual collaborative
tasks, this technology enables design and implementation of complex
collaborative processes. |
|
Title: |
TOWARDS A WEB PORTAL DATA QUALITY MODEL |
Author(s): |
Angélica Caro, Coral Calero, Ismael Caballero and Mario Piattini |
Abstract: |
Technological advances and the internet have
favoured the appearance of a great diversity of web applications, one of
which is Web Portals. Through these, organizations develop their
businesses in a more and more competitive environment. A decisive factor
for this competitiveness is the assurance of data quality. In recent
years, several research works on Web Data Quality have been developed.
However, there is a lack of specific proposals for web portal data
quality. In this paper, we present a proposal for a data quality model
for web portals, based on existing web data quality work. |
|
Title: |
E-PROCUREMENT ADOPTION AMONG ITALIAN FIRMS BY USING DOMAIN NAMES |
Author(s): |
Maurizio Martinelli, Irma Serrecchia and Michela Serrecchia |
Abstract: |
The digital divide can occur either as a “local” (within a given country) or a “global” (between developing and industrialized countries) phenomenon. Our study intends to offer an important contribution by analyzing the digital divide in Italy and the factors contributing to this situation at the territorial level (i.e., macro-areas: North, Center, South, and at the provincial level). To do this, we used the registration of Internet domains under the “.it” ccTLD as a proxy. In particular, we analyzed domain names registered by firms.
The analysis produced interesting results: the distribution of domains
registered by firms in Italian provinces is more concentrated than the
distribution according to income and number of firms, suggesting a
diffusive effect. Furthermore, when analyzing the factors that
contribute to the presence of a digital divide at the regional level,
regression analysis was performed using demographic, social, economic
and infrastructure indicators. Results show that Italian regions that
have good productive efficiency, measured by the added value per employee, and a high educational level, measured by the number of firms specialized in ICT service sales (providers/maintainers) and by the number of employees devoted to research and development, are the best candidates for utilization of the Internet. |
|
Title: |
DYNAMIC SERVICE COMPOSITION : A PETRI-NET BASED APPROACH |
Author(s): |
Michael Köhler, Daniel Moldt and Jan Ortmann |
Abstract: |
Dynamic service composition requires a formal
description of the services such that an agent can process these
descriptions and reason about them. The amount of detail needed for an agent to grasp the meaning of a service results in clumsy specifications. Petri nets offer a visual modeling technique for processes that provides a refinement mechanism. Through this, a specification can be inspected at the level of detail needed for a given objective. In this paper we introduce a Petri net based approach to capture the semantics of services by combining Petri nets with ideas from the description logic area, focusing on ontologies. The resulting framework can then be used by agents to plan activities involving services. |
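To make the token-game semantics concrete, the following minimal sketch (not the paper's framework; the net, place and transition names are illustrative) implements the basic Petri net firing rule that such refinable process descriptions build on:

```python
# A minimal Petri net: a transition is enabled when all of its input places
# hold a token; firing moves tokens from inputs to outputs.
from dataclasses import dataclass, field

@dataclass
class PetriNet:
    marking: dict                                     # place -> token count
    transitions: dict = field(default_factory=dict)   # name -> (inputs, outputs)

    def enabled(self, t):
        ins, _ = self.transitions[t]
        return all(self.marking.get(p, 0) >= 1 for p in ins)

    def fire(self, t):
        if not self.enabled(t):
            raise ValueError(f"transition {t!r} is not enabled")
        ins, outs = self.transitions[t]
        for p in ins:
            self.marking[p] -= 1
        for p in outs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Toy service: a request is received, then either served or rejected.
net = PetriNet(marking={"request": 1},
               transitions={"serve":  (["request"], ["response"]),
                            "reject": (["request"], ["error"])})
net.fire("serve")
print(net.marking)   # {'request': 0, 'response': 1}
```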
|
Title: |
A SUCCINCT ANALYSIS OF WEB SERVICE COMPOSITION |
Author(s): |
Wassam Zahreddine and Qusay H. Mahmoud |
Abstract: |
Numerous standards are being proposed by
industry and academia to find ways to best compose web services
together. Such standards have produced semi-automatic compositions which
can only be applied in a limited number of scenarios. Indeed, the future
is moving towards a semantic web and fully automatic compositions will
only occur when semantics are involved with web services. This paper
discusses recent research and future challenges in the field of service
composition. The paper classifies service composition into two streams:
semi-automatic and automatic, then it compares and contrasts the
available composition techniques. |
|
Title: |
WSRF-BASED VIRTUALIZATION FOR MANUFACTURING RESOURCES |
Author(s): |
Lei Wu, Xiangxu Meng, Shijun Liu, Chenlei Yang and Xueqin Li |
Abstract: |
The essence of a Grid is the virtualization of resources and of the concept of a user. The term manufacturing grid (MG) refers to the application of grid technologies to manufacturing. To share manufacturing resources in a Manufacturing Grid, they can be virtualized and exposed as web services. The paper presents a new WSRF-based way to virtualize resources, introducing resource encapsulation templates and a resource container to virtualize resources conveniently. The design principles and the implementation of the resource encapsulation template and the resource container are described in detail. The resource container is a WSRF service deployed in the GT Java core that can virtualize resources as WS-Resources. Resource providers can encapsulate their resources with the encapsulation template and add them to the resource container. The resources are then virtualized, exposed as web services and shared in the grid. We have developed a prototype platform to verify the validity of the resource virtualization method. The portal (www.mgrid.cn) can be visited now. |
|
Title: |
A MULTI-AGENT BASED FRAMEWORK FOR SUPPORTING LEARNING IN ADAPTIVE
AUTOMATED NEGOTIATION |
Author(s): |
Rômulo Oliveira, Herman Gomes, Alan Silva, Ig Bittencourt and
Evandro Costa |
Abstract: |
Automated negotiation is one of the hottest
research topics in AI applied to e-commerce. Lately this topic has been receiving more attention from the scientific community, with challenges related to providing more realistic and feasible solutions.
Following this track, we propose a multi-agent based framework for
supporting adaptive bilateral automated negotiation during buyer-seller
agent interactions. In this work, these interactions are viewed as a
cooperative game (in the sense of two-person, non-zero-sum game theory), where the players try to reach an agreement about a certain
negotiation object that is offered by one player to another. The final
agreement is assumed to be satisfactory to both parties. To achieve this goal effectively, we modelled each player as a multi-agent system with its respective environment. In doing so, we aim at providing an effective means to collect relevant information to help agents make good decisions, that is, how to choose the “best way to play” among a
set of alternatives. Then we define a mechanism to model the opponent
player and other mechanisms for monitoring relevant variables from the
player's environment. Also, we maintain the context of the current game
and keep the most relevant information of previous games. Additionally,
we integrate all the information to be used in the refinement of the
game strategies governing the multi-agent system. |
|
Title: |
AOPOA - ORGANIZATIONAL APPROACH FOR AGENT ORIENTED PROGRAMMING |
Author(s): |
Enrique González and Miguel Torres |
Abstract: |
This paper presents AOPOA, an agent oriented programming methodology based on an organizational approach. The resulting multiagent system is composed of a set of active entities that aim to accomplish a well-defined set of goals. This approach makes it possible to build complex systems by decomposing them into simpler ones. The organizational approach makes it easier to perform an iterative and recursive decomposition based on the concept of goal, and at the same time to identify the interactions between the entities composing the system. At each iteration an organization level is developed. During the analysis phase, tasks and roles are detected. During the design phase, the interactions are specified and managed by cooperation links. At the final iteration, the role parameterization is performed, which allows specifying the events and actions associated with each agent. |
|
Title: |
THE STATE OF E-BUSINESS ON THE GERMAN ELECTRONIC CONSUMER GOODS
INDUSTRY |
Author(s): |
Eulalio G. Campelo F. and Wolffried Stucky |
Abstract: |
B2B electronic commerce is an increasingly important component of a company’s strategy as it provides key support for business processes and transactions. Therefore, e-business applications were expected to have high cumulative growth and be widely applied by companies in different sectors of the global economy. This paper outlines the state of e-business in one of the most dynamic sectors in the area of B2B electronic commerce, the electronic consumer goods industry in the highly competitive German market. The intention is to develop a better understanding of the level of information technology utilisation to support businesses, as well as the reasons for and the course of e-business initiatives in this sector. |
|
Title: |
POWERING RSS AGGREGATORS WITH ONTOLOGIES - A CASE FOR THE RSSOWL
AGGREGATOR |
Author(s): |
Felipe M. Villoria, Oscar Díaz and Sergio F. Anzuola |
Abstract: |
Content syndication through RSS is gaining wide
acceptance, and it is envisaged that feed aggregators will be provided
as a commodity in future browsers. As we consume more of our information
by way of RSS feeds, search mechanisms other than simple keyword search
will be required. To this end, advances in semantic tooling can
effectively improve the current state of the art in feed aggregators.
This work reports on the benefits of making a popular RSS aggregator,
RSSOwl, ontology-aware. The paper uses three common functions, namely,
semantic view, semantic navigation and semantic query, to illustrate how
RSS aggregators can be ontology powered. The outcome is that location,
browsing and rendering of RSS feeds are customised to the conceptual
model of the reader, making RSS aggregators a powerful companion to face
the RSSosphere. The system has been fully implemented, and successfully
tested by distinct users. |
|
Title: |
THE ATTITUDE TOWARDS E-DEMOCRACY - EMPIRICAL EVIDENCE FROM THE
VIENNESE POPULATION |
Author(s): |
Alexander Prosser, Yan Guo and Jasmin Lenhart |
Abstract: |
Systems for citizen participation have become
technically feasible and are currently being developed. But what are the
preferences of the citizens and which factors determine their attitude
towards e-democracy? This paper reports the results of a representative
survey in the Viennese population investigating the attitude towards
e-democracy, the relationship to the respondents’ current Internet usage
and possible motives for e-democracy. |
|
Title: |
AN OPEN ARCHITECTURE FOR COLLABORATIVE VISUALIZATION IN RICH MEDIA
ENVIRONMENTS |
Author(s): |
Bernd Eßmann, Thorsten Hampel and Frank Goetz |
Abstract: |
Mobile cooperation systems are a focus of current CSCW research. Many challenges of supporting users' mobility arise from the limited computational power of mobile devices. Especially when visualization applications need to compute complex three-dimensional scenes from huge data-sets, even more powerful mobile devices reach their limits. In this paper we present a solution for providing complex visualization techniques embedded in a fully-fledged CSCW system for a broad spectrum of (portable) computer devices. Our
approach combines two sophisticated technologies. On the visualization
part it deploys remote render farms to produce the representation as
video streams, separately for every cooperation partner. Thus the mobile
devices only need to be able to play MPEG-4 compliant video streams,
commonly provided by off-the-shelf PDAs and laptops. For collaboration support, we use a full-featured CSCW system. This allows embedding visualizations as active objects into cooperative knowledge spaces. In a document-centered environment the visualizations are
displayed as active pictures on a cooperative shared whiteboard. Users
may manipulate the visualization scene as well as the whiteboard
representation which is a view on the persistent knowledge space saved
in the CSCW system. |
|
Title: |
TOWARDS AN INTEGRATED IS FRAMEWORK FOR THE DESIGN AND MANAGEMENT OF
LEAN SUPPLY CHAINS |
Author(s): |
Emmanuel Adamides, Nikos Karacapilidis, Hara Pylarinou and
Dimitrios Koumanakos |
Abstract: |
In this paper we present Co-LEAN, an integrated
suite of software tools suitable for the design and management of lean
supply chains. In addition to providing full operational support in the
planning and execution of the lean supply chain, Co-LEAN supports
internet-based collaboration in the innovation and product design,
manufacturing strategy, and supply-chain improvement tasks. The paper
discusses the information system support requirements of a lean supply
chain, describes the main components and the integration mechanisms of
Co-LEAN and concludes with a brief description of its pilot use in a
major supermarket chain. |
|
Title: |
THE CASCOM ABSTRACT ARCHITECTURE FOR SEMANTIC SERVICE DISCOVERY AND
COORDINATION IN IP2P ENVIRONMENTS |
Author(s): |
Cesar Caceres, Alberto Fernandez, Sascha Ossowski and Matteo
Vasirani |
Abstract: |
Intelligent agent-based peer-to-peer (IP2P)
environments provide a means for pervasively providing and flexibly
co-ordinating ubiquitous business application services to the mobile
users and workers in the dynamically changing contexts of open,
large-scale, and pervasive settings. In this paper, we present an
abstract architecture for service delivery and coordination in IP2P
environments that has been developed within the CASCOM project.
Furthermore, we outline the potential benefits of a role-based
interaction modelling approach for a concrete application of this
abstract architecture based on a real-world scenario for emergency
assistance in the healthcare domain. |
|
Title: |
E-LEARNING USE IN THE TERTIARY EDUCATION IN CYPRUS |
Author(s): |
Vasso Stylianou and Angelika Kokkinak |
Abstract: |
E-Learning may be defined as any training
activity that utilizes electronic technology to provide instructional
content or various learning experiences through an electronic network,
which can be customized for specific business needs or even individual
needs. The geographic position and size of Cyprus make it ideal for
businesses to grow by capitalizing on the benefits of e-learning and
e-training. Opportunities exist in all areas of e-learning, in academic education and industrial training, for expanding education in Cyprus to other regions, creating regional hubs for international students and industries online worldwide. Any strategic move in this direction would initially require an appreciation of the current state of affairs with respect to e-learning availability in Cyprus. Thus, the aim of this study has been to investigate the e-learning facilities provided by some of the main tertiary education institutions on the island, namely Intercollege, the University of Cyprus and the Higher Technical Institute. The findings of this investigation, formed as brief case studies, are presented herein and certain conclusions are drawn. |
|
Title: |
CLIENT SYNTHESIS FOR WEB SERVICES BY WAY OF A TIMED SEMANTICS |
Author(s): |
Serge Haddad, Patrice Moreaux and Sylvain Rampacek |
Abstract: |
A complex Web service, described with languages like BPEL4WS, consists of an executable process and its observable behaviour (called an abstract process) based on the messages exchanged with the client. The abstract process behaviour is non-deterministic due to the internal choices during the service execution. Furthermore, the specification often includes timing constraints which must be taken into account by the client. Thus, given a service specification, we identify
the synthesis of a client as a key issue for the development of Web
services. To this end, we propose an approach based on (dense) timed
automata to first describe the observable service behaviour and then to
build correct interacting clients when possible. The present work
extends a previous discrete time approach and overcomes its limitations. |
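As a rough illustration of the kind of timing constraints a synthesized client must respect, the sketch below (illustrative only; the states, messages and clock bounds are assumptions, not the paper's construction) checks each message against clock guards in a timed-automaton-style transition table:

```python
# A timed-automaton-style step function: a message is accepted only when the
# clock value satisfies the guard of a matching transition.
from dataclasses import dataclass

@dataclass(frozen=True)
class Transition:
    source: str
    message: str          # message exchanged with the client
    lower: float          # guard: lower clock bound (seconds)
    upper: float          # guard: upper clock bound (seconds)
    target: str

TRANSITIONS = [
    Transition("idle",    "order",   0.0,  float("inf"), "pending"),
    Transition("pending", "confirm", 0.0,  30.0,         "done"),
    Transition("pending", "timeout", 30.0, float("inf"), "aborted"),
]

def step(state, message, clock):
    """Return the next state if `message` is allowed at time `clock`."""
    for t in TRANSITIONS:
        if t.source == state and t.message == message and t.lower <= clock <= t.upper:
            return t.target
    raise ValueError(f"{message!r} not allowed in {state!r} at t={clock}")

print(step("idle", "order", 0.0))        # pending
print(step("pending", "confirm", 12.5))  # done
```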
|
Title: |
DISTRIBUTED BUSINESS PROCESSES IN OPEN AGENT ENVIRONMENTS |
Author(s): |
Christine Reese, Kolja Markwardt, Sven Offermann and Daniel Moldt |
Abstract: |
In the context of multi-agent systems, one general aim is the interoperability of agents. One remaining problem is the control of processes between agents. The need for workflow technology at the agent level to support business processes becomes obvious. We provide concepts for a distributed WFMS where the distribution is realised within the architecture. This work is innovative regarding the interplay of these technologies against the formal background of Petri nets. |
|
Title: |
USING SHADOW PRICES FOR RESOURCE ALLOCATION IN A COMBINATORIAL GRID
WITH PROXY-BIDDING AGENTS |
Author(s): |
Michael Schwind and Oleg Gujo |
Abstract: |
Our paper presents an agent-based simulation environment for task scheduling in a distributed computer system (grid). The scheduler enables the simultaneous allocation of resources such as CPU time, communication bandwidth, and volatile and non-volatile memory, employing a combinatorial resource allocation mechanism. The
resource allocation is performed by an iterative combinatorial auction
in which proxy-bidding agents try to acquire their desired resource
allocation profiles with respect to limited monetary budget endowments.
To achieve an efficient bidding process, the auctioneer provides
resource price information to the bidding agents. The calculation of the
resource prices in a combinatorial auction is not trivial, especially if the bid bundles exhibit complementarities or substitutabilities. We
propose an approximate pricing mechanism using shadow prices from a
linear programming formulation for this purpose. The efficiency of the
shadow price-based allocation mechanism is tested in the context of a
closed loop grid system in which the agents can use monetary units
rewarded for the resources they provide to the system for the
acquisition of complementary capacity. Two types of proxy-bidding agents are compared in terms of efficiency (received units of resources, time until bid acceptance) within this scenario: an aggressive bidding agent with steeply rising bids and a smooth bidding agent with slowly increasing bids. |
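For a concrete sense of how such shadow prices arise, the following sketch (a toy example, not the authors' simulation environment; the bundle data and use of SciPy are assumptions) reads the dual values of the capacity constraints from the LP relaxation of a small winner-determination problem:

```python
# Shadow prices from the LP relaxation of a combinatorial allocation problem.
import numpy as np
from scipy.optimize import linprog

# Hypothetical setup: 3 bundles bidding for 2 resources (CPU time, bandwidth).
prices = np.array([8.0, 6.0, 5.0])          # bid price per bundle
usage = np.array([[4.0, 2.0, 3.0],           # CPU units each bundle consumes
                  [1.0, 3.0, 2.0]])          # bandwidth units each bundle consumes
capacity = np.array([5.0, 4.0])              # available CPU / bandwidth

# LP relaxation: maximize prices @ x  s.t.  usage @ x <= capacity, 0 <= x <= 1.
# linprog minimizes, so the objective is negated.
res = linprog(-prices, A_ub=usage, b_ub=capacity,
              bounds=[(0, 1)] * len(prices), method="highs")

# The marginals (duals) of the capacity constraints give the sensitivity of
# the objective to capacity; negating undoes the minimization sign flip, so
# the result approximates a per-unit price for each resource.
shadow_prices = -res.ineqlin.marginals
print("relaxed allocation:", res.x)
print("approximate resource prices:", shadow_prices)
```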
|
Title: |
PROVIDING RECOMMENDATIONS IN AN AGENT-BASED TRANSPORTATION
TRANSACTIONS MANAGEMENT PLATFORM |
Author(s): |
Alexis Lazanas, Nikos Karacapilidis and Yiannis Pirovolakis |
Abstract: |
Diverse recommendation techniques have already been proposed and encapsulated into several e-business systems aiming
to perform a more accurate evaluation of the existing alternatives and
accordingly augment the assistance provided to the users involved.
Extending previous work, this paper focuses on the development of a
recommendation module for transportation transactions purposes and its
integration in a web-based platform. The module is built according to a
hybrid recommendation technique, which combines the advantages of
collaborative filtering and knowledge-based recommendations. The
proposed technique and supporting module enable customers to consider in
detail alternative transportation transactions satisfying their
requests, as well as to evaluate such transactions after their
completion. |
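A weighted hybrid of the two techniques can be sketched as follows (illustrative only; the platform's actual scoring, data and weighting are not specified in the abstract):

```python
# A weighted hybrid recommender: collaborative-filtering evidence combined
# with a knowledge-based requirements match. All data and the weight alpha
# are illustrative assumptions.
def collaborative_score(similar_users_ratings, item, scale=5.0):
    """Normalized mean rating of `item` among similar users (0 if unrated)."""
    ratings = [r[item] for r in similar_users_ratings if item in r]
    return (sum(ratings) / len(ratings) / scale) if ratings else 0.0

def knowledge_score(request, offer):
    """Fraction of the customer's requirements the transportation offer meets."""
    met = sum(1 for key, wanted in request.items() if offer.get(key) == wanted)
    return met / len(request)

def hybrid_score(cf, kb, alpha=0.6):
    # alpha balances collaborative evidence against knowledge-based matching
    return alpha * cf + (1 - alpha) * kb

offer = {"mode": "rail", "refrigerated": True, "max_days": 3}
request = {"mode": "rail", "refrigerated": True}
cf = collaborative_score([{"offer-42": 4.0}, {"offer-42": 5.0}], "offer-42")
print(hybrid_score(cf, knowledge_score(request, offer)))  # 0.94
```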
|
Title: |
A NARRATIVE APPROACH TO COLLABORATIVE WRITING - A BUSINESS PROCESS
MODEL |
Author(s): |
Peter Henderson and Nishadi De Silva |
Abstract: |
Narratives have been used in the past to enhance technical documents such as research proposals, through the implementation of a single-user writing tool called CANS (Computer-Aided Narrative Support). This study has now been extended to collaborative writing (CW), another area that can greatly benefit from a narrative-based writing tool. Before implementing such an asynchronous, multi-user system, however, it was imperative to produce a concrete design for it. Therefore, after studying
existing CW tools and strategies, a concise business process (BP) model
was designed to describe the process of narrative-based CW. This paper
introduces narrative-based CW for technical authors, the BP model for it
and discusses the benefits of such an implementation on particular areas
of research, such as the development of Grid applications. |
|
Title: |
ADDING MEANING TO QOS NEGOTIATION |
Author(s): |
Cláudia M. F. A. Ribeiro, Nelson Souto Rosa and Paulo Roberto
Freire Cunha |
Abstract: |
Using quality of service (QoS) to discover Web Services that better meet users’ needs has become a key factor in differentiating similar services. Handling QoS includes negotiating QoS
capabilities, since there is a potential conflict between service
provider and service requestor requirements. This paper addresses this
problem by using an ontological approach for QoS negotiation that aims
to improve user participation. For this purpose, a Service Level
Agreement (SLA) ontology that explicitly considers subjective user QoS
specification was conceived. Its internal components and the role it
plays during service discovery are detailed. |
|
Title: |
A GRID SERVICE COMPUTING ENVIRONMENT FOR SUPPLY CHAIN MANAGEMENT |
Author(s): |
Sam Chung and George A. Orriss |
Abstract: |
This paper proposes to develop a Grid Service
Computing Environment for Supply Chain Management. Current research into
Grid Services for distributed systems has resulted in interesting
questions being raised as to whether or not the Open Grid Service
Architecture can be applied to developing a Supply Chain Management
system. If so, how will it affect development of SCM systems as a
typical example of Business-to-Business (B2B) application integration?
As much recent development has been focused on resource allocation in a
Grid environment, this approach to Grid computing is still relatively
unexplored. By developing a Supply Chain Management system using the
Open Grid Service Architecture and the Globus toolkit, this research
will provide an infrastructure for composing existing services into a
system that can be utilized for Supply Chain Management. The result of
this project is a Grid environment that provides efficient and effective
service management of available Supply Chain Management services. Also,
we address some of the inherent issues of dynamic binding and automation
associated with B2B transactions, such as those surrounding security
protocols, service lifecycle, and instance creation. |
|
Title: |
THE CONCEPT AND TECHNOLOGY OF PLUG AND PLAY BUSINESS |
Author(s): |
Paul Davidsson, Anders Hederstierna, Andreas Jacobsson, Jan A.
Persson, Bengt Carlsson, Stefan J. Johansson, Anders Nilsson, Gunnar
Ågren, and Stefan Östholm |
Abstract: |
Several barriers to turning innovative ideas into growth-oriented businesses with a global outlook are identified. The Plug and Play Business concept is suggested to lower these barriers by making it possible for the innovator to plug into a network of actors or potential collaborators with automated entrepreneurial functions. A P2P
paradigm with intelligent agents is proposed to realize an environment
that manages a dynamic network of roles and business relations. It is
suggested that a critical characteristic of the Plug and Play Business
software is to facilitate trust between the actors. |
|
Area 5 - Human-Computer Interaction |
Title: |
LIBRARY IN VIRTUAL REALITY: AN INNOVATIVE WAY FOR ACCESSING,
DISSEMINATING, AND SHARING INFORMATION |
Author(s): |
Tereza G. Kirner, Andréa T. Matos and Plácida L. Costa |
Abstract: |
This paper focuses on Virtual Reality (VR) as a very useful resource to be applied to libraries available on the web, aiming at contributing to the process of accessing, disseminating, and sharing information. The paper gives an overview of the use of technology in libraries, stressing some forecasts related to the so-called “library of the future”. Then, it describes VR in terms of three essential characteristics, that is, immersion, interaction, and involvement. After that, it presents some virtual reality libraries in use in different countries. Finally, the paper presents final considerations, pointing out the use of virtual reality technology to develop libraries as collaborative virtual environments. |
|
Title: |
CONSTRUCTIVIST INSTRUCTIONAL PRINCIPLES, LEARNER PSYCHOLOGY AND
TECHNOLOGICAL ENABLERS OF LEARNING |
Author(s): |
Erkki Patokorpi |
Abstract: |
Constructivists generally assume that the
central principles and objectives of the constructivist pedagogy are
realized by information and communication technology (ICT) enhanced
learning. This paper critically examines the grounds for this assumption
in the light of available empirical and theoretical research literature.
The general methodological thrust comes from Alavi and Leidner (2001),
who have called for research on the interconnections of instructional
method, psychological processes and technology. Hermeneutic psychology
and philosophical argumentation are applied to identify some potential
or actual weaknesses in the chain of connections between constructivist
pedagogical principles, psychological processes, supporting technologies
and the actual application of ICT in a learning environment. One example
of a weak link is personalisation technologies whose immaturity hampers
the constructivists’ attempts at enabling learners to create personal
knowledge. Pragmatism enters the picture as a ready source of criticism,
bringing out a certain one-sidedness of the constructivist view of man
and learning. |
|
Title: |
MULTI-MODAL HANDS-FREE HUMAN COMPUTER INTERACTION: A PROTOTYPE
SYSTEM |
Author(s): |
Frangiskos Frangeskides and Andreas Lanitis |
Abstract: |
Conventional Human Computer Interaction
requires the use of hands for moving the mouse and pressing keys on the
keyboard. As a result, paraplegics are not able to use computer systems
unless they acquire special Human Computer Interaction (HCI) equipment.
In this paper we describe a prototype system that aims to provide
paraplegics with the opportunity to use computer systems without the need for
additional invasive hardware. The proposed system is a multi-modal
system combining both visual and speech input. Visual input is provided
through a standard web camera used for capturing face images showing the
user of the computer. Image processing techniques are used for tracking
head movements, making it possible to use head motion in order to
interact with a computer. Speech input is used for activating commonly
used tasks that are normally activated using the mouse or the keyboard.
Speech input improves the speed and ease of executing various HCI tasks
in a hands-free fashion. The performance of the proposed system was
evaluated using a number of specially designed test applications.
According to the quantitative results, it is possible to perform most
HCI tasks with the same ease and accuracy as when using the touch pad of a portable computer. Currently our system is being used
by a number of paraplegics. |
|
Title: |
AN INCLUSIVE APPROACH TO COOPERATIVE EVALUATION OF WEB USER
INTERFACES |
Author(s): |
Amanda Meincke Melo and M. Cecília C. Baranauskas |
Abstract: |
Accessibility has been one of the major
challenges for interface design of Web applications nowadays, especially
those involving e-government and e-learning. In this paper we present an
Inclusive and Participatory approach to the Cooperative Evaluation of
user interfaces. It was carried out with an interdisciplinary research
group that aims to include students with disabilities on the university campus and in academic life. HCI specialists and non-specialists, with and
without visual disabilities, participated as users and observers during
the evaluation of a web site designed to be one of the communication
channels between the group and the University community. This paper
shows the benefits and the challenges of considering the differences
among stakeholders in an inclusive and participatory approach, when
designing for accessibility within the Universal Design paradigm. |
|
Title: |
MEDICAL INFORMATION PORTALS: AN EMPIRICAL STUDY OF PERSONALIZED
SEARCH MECHANISMS AND SEARCH INTERFACES |
Author(s): |
Andrea Andrenucci |
Abstract: |
The World Wide Web has become, since its creation, one of the most popular tools for accessing and distributing medical information. The purpose of this paper is to provide indications about how users search for health-related information and how medical portals should be implemented to fit users’ needs. The results are mainly based on the evaluation of a prototype that tailors the retrieval of documents from the Web4health portal to users’ characteristics and information needs with the help of a user model. The evaluation is conducted through an empirical user study based on user observation and in-depth interviews. |
|
Title: |
MODELING THE TASK - LEVERAGING KNOWLEDGE-IN-THE-HEAD AT DESIGN TIME |
Author(s): |
George Christou, Robert J. K. Jacob and Pericles Leng Cheng |
Abstract: |
A key problem in Human Computer Interaction is
the evaluation and comparison of tasks that are designed in different
interaction styles. A closely related problem is how to create a model
of the task that allows this comparison. This paper tries to tackle
these two questions. It initially presents a structure (Specific User
Knowledge Representation) that allows the creation of task models which
allow direct comparisons between different interaction styles. It then
presents an evaluation measure based on the structures. The measure is a
predictive quantitative one named Learnability. The combination of the
two allows the researcher or the designer to evaluate an interaction
design very early in the design process. Then a case study is presented
to show how the measure may be used to create performance predictions
for novices. |
|
Title: |
ENWIC: VISUALIZING WIKI SEMANTICS AS TOPIC MAPS - AN AUTOMATED
TOPIC DISCOVERY AND VISUALIZATION TOOL |
Author(s): |
Cleo Espiritu, Eleni Stroulia and Tapanee Tirapat |
Abstract: |
In this paper, we present ENWiC (EduNuggets
Wiki Crawler), a framework for intelligent visualization of Wikis. In
recent years, e-learning has emerged as an appealing alternative to traditional teaching. The effectiveness of e-learning depends upon the sharing of information on the web, which makes the web a vast library of information that students and instructors can utilize for educational purposes. The Wiki's collaborative authoring nature makes it a very attractive tool for e-learning purposes; however, its text-based navigational structure becomes insufficient as the Wiki grows in size, and this drawback can hinder students from taking full advantage of the information available. ENWiC's goal is to provide students with an intelligent interface for navigating Wikis and other similar large-scale websites. ENWiC makes use of graphic organizers to visualize the relationships between content pages so that students can gain a cognitive understanding of the content as they navigate through the Wiki pages. We describe ENWiC's automated visualization process, and its user interfaces for students to view and navigate the Wiki in a meaningful manner, and for instructors to further enhance the visualization. We also discuss our usability study for evaluating ENWiC's effectiveness as a Wiki interface. |
|
Title: |
E-INSTRUMENTATION: A COLLABORATIVE AND ADAPTATIVE INTELLIGENT HCI
FOR REMOTE CONTROL OF DEVICES |
Author(s): |
Christophe Gravier and Jacques Fayolle |
Abstract: |
What is being discussed here is a Computer Supported Collaborative Learning system. The learning system aims at providing distant, collaborative and adaptive access to high-technology devices. Some strong points in the field of HCI are to be stressed: the user is expected to access the device while being in a group of users, that is to say, "group awareness" must be supported by some mechanisms. In order to obtain a generic, "instrument-independent" platform, tools are expected to be provided so that the graphic user interface can be built easily (and without requiring much user skill). Moreover, the notion of a sequence of utilization of a device could possibly become a tool enabling new models of evaluation of graphic user interfaces (and consequently HCI) and/or user behaviors. |
|
Title: |
GDIT - TOOL FOR THE DESIGN, SPECIFICATION AND GENERATION OF GOALS
DRIVEN USER INTERFACES |
Author(s): |
Antonio Carrillo, Juan Falgueras and Antonio Guevara |
Abstract: |
This paper presents a style of interaction,
called Goals Driven Interaction, a user interface design methology (also
called GDI) and a software tool (GDIT) that brings support for it. GDI
is a way of interaction for those applications in which a zero learning
time is necessary. We also present GDIT, a software tool for the design
and specification of any type of user interface regardless of its
interaction paradigm. Moreover, this tool is specially appropiated for
GDI, and generates user interface prototypes that can be tested early in
the developing process. |
|
Title: |
PERSONALIZATION IN INTERACTIVE SYSTEMS - THE GENERATION OF
COMPLIANCE? |
Author(s): |
Stephen Sobol |
Abstract: |
This paper examines applications of
personalization in interactive systems and seeks to map success in this
area into a general theory of database and narrative. It is argued that
conventions of human communication established well before the digital
age play a large part in determining user responses to personalization.
The analysis offers a logic to help determine when to apply
personalization. The results of an experiment to detect personalization
effects are reported which provide evidence of the value of
personalization. |
|
Title: |
UBIQUITOUS KNOWLEDGE MODELING FOR DIALOGUE SYSTEMS |
Author(s): |
Porfírio Filipe and Nuno Mamede |
Abstract: |
The main general problem that we want to
address is the re-configuration of dialogue systems to work with a
generic plug-and-play device. This paper describes our research in
designing knowledge-based everyday devices that can be dynamically
adapted to spoken dialogue systems. We propose a model for ubiquitous
knowledge representation that enables the spoken dialogue system to be
aware of the devices belonging to the domain and of the tasks they
provide. We consider that each device can be augmented with
computational capabilities in order to support its own knowledge model.
A knowledge-based broker adapts the spoken dialogue system to deal
with an arbitrary set of devices. The knowledge integration process
between the knowledge models of the devices and the knowledge model of
the broker is depicted. This process was tested in the home environment
domain. |
|
Title: |
A SIMULATION ENVIRONMENT TO EVALUATE DRIVER PERFORMANCES WHILE
INTERACTING WITH TELEMATICS SYSTEMS |
Author(s): |
Gennaro Costagliola, Sergio Di Martino and Filomena Ferrucci |
Abstract: |
The evaluation of user interfaces for vehicular
telematics systems is a challenging task, since, due to safety issues,
it is necessary to particularly take into account the effects of
interaction on user mental workload and thus on driving performances. To
this aim, in 2005 we developed a simulation environment specifically
conceived for the indoor evaluation of these systems. In this paper we
present some significant improvements of that proposal. Among others, we greatly enhanced the ability to assess the navigation module indoors, which is a distinguishing feature. Indeed, the automatic generation of virtual test tracks starting from a cartographical database was improved, giving rise to more realistic scenarios, which increase the sense of presence in the virtual environment. Moreover, to support data
analysts in understanding subjects’ driving performances, we developed a
graphical analysis tool, able to provide clear and deep insight into the large amount of logged data generated by the simulator. Finally, we
validated the effectiveness of the framework in measuring on-road
driving performances, by employing a set of sixteen subjects, gathering
positive results. |
|
Title: |
DESIGN AND IMPLEMENTATION OF ADAPTIVE WEB APPLICATION |
Author(s): |
Raoudha Ben Djemaa, Ikram Amous and Abdelmajid Ben Hamadou |
Abstract: |
Engineering adaptive Web applications implies the development of content that can be automatically adjusted to varying classes of users and their presentation preferences. To meet this requirement, we present in this paper a generator for adaptive web applications called GIWA. GIWA's target is to facilitate the automatic execution of the design and the automatic generation of adaptable web interfaces. The GIWA methodology is based on three levels: the semantic level, the adaptation level and the presentation level. The implementation of GIWA is based on a Java Swing interface to instantiate the models, which are translated into XML files. GIWA then uses XSL files to generate the HTML pages corresponding to each user. |
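The final generation step described here, applying an XSL stylesheet to an XML model to obtain user-specific HTML, can be sketched as follows (illustrative only; the XML/XSL content is invented, and lxml stands in for whatever XSLT engine GIWA actually uses):

```python
# Applying an XSL stylesheet to an XML user model to produce HTML.
from lxml import etree

model = etree.XML('<user name="alice"><pref layout="compact"/></user>')
xslt = etree.XML('''
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/user">
    <html><body>
      <h1>Welcome <xsl:value-of select="@name"/></h1>
      <p>Layout: <xsl:value-of select="pref/@layout"/></p>
    </body></html>
  </xsl:template>
</xsl:stylesheet>''')

transform = etree.XSLT(xslt)        # compile the stylesheet
print(str(transform(model)))        # HTML page tailored to this user model
```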
|
Title: |
PROACTIVE PSYCHOTHERAPY WITH HANDHELD DEVICES |
Author(s): |
Luís Carriço, Marco de Sá and Pedro Antunes |
Abstract: |
This paper presents a set of components that
support psychotherapy processes in mobile and office settings. One provides patients with the required access to psychotherapy artefacts,
enabling an adequate and tailored aid and motivation for fulfilment of
common therapy tasks. Another offers therapists the ability to define
and refine the artefacts, in order to present, help and react to the
patient according to his/her specific needs and therapy progress. Two
other components allow the analysis and annotation of the aforementioned
artefacts. All these components run on a PDA platform. Evaluation results
validated some of the design choices, and indicate future directions and
improvements. |
|
Title: |
A MULTIMODAL INTERFACE FOR PERSONALISING SPATIAL DATA IN MOBILE GIS |
Author(s): |
Julie Doyle, Joe Weakliam, Michela Bertolotto and David Wilson |
Abstract: |
Recently the availability and usage of more
advanced mobile devices has significantly increased, with many users
accessing information and applications while on the go. However, for
users to truly accept and adopt such technologies it is necessary to
address human-computer interaction challenges associated with such
devices. We are interested in exploring these issues within the context
of mobile GIS applications. Current mobile GIS interfaces suffer from
two major problems: interface complexity and information overload. We
have developed a novel system that addresses both of these issues.
Firstly, our system allows GIS users to interact multimodally, providing
them with the flexibility of choosing their preferred mode of
interaction for specific tasks in specific contexts. Secondly, it
records all such interactions, analyses them and uses them to build
individual user profiles. Based on these, our system returns
personalised spatial data to users, and hence eliminates superfluous
information that might be otherwise presented to them. In this paper we
describe the system we have developed that combines multimodal
interaction with personalised services, and that can be used by mobile
users, whether they are novices or professionals within the field of
GIS. The advantages of our multimodal GIS interface approach are
demonstrated through a user interaction study. |
|
Title: |
EVALUATION OF USER INTERFACES FOR GEOGRAPHIC INFORMATION SYSTEMS: A
CASE STUDY |
Author(s): |
Lucia Peixe Maziero, Cláudia Robbi Slutter, Laura Sanchéz García
and Cássio da Pieva Ehlers |
Abstract: |
This paper presents an evaluation of user
interfaces of two Geographic Information Systems (GIS) tools, ArcView
and Spring. This work is related to a specific characteristic of those
interfaces, which is to allow the user to view a map. User interfaces
were evaluated based on the main cognitive difficulties related to the
execution and evaluation bridges employing a cognitive engineering
approach and, complementarily, in accordance with the semiotics
engineering parameter called "system communicability". Both evaluated
systems are generally considered to be quite complex, not only in terms
of understanding the interaction elements present in the interface, but
also the knowledge embedded in the tasks that can be accomplished by
these systems. Although the study described in this work was focused on
a single task it confirmed the general opinion about these kinds of
applications: a novice user cannot explore them without some assistance
from an expert user or by studying books and manuals. Even an expert
user usually faces significant difficulties using those GIS tools. |
|
Title: |
CONTEXT OF USE ANALYSIS - ACTIVITY CHECKLIST FOR VISUAL DATA MINING |
Author(s): |
Edwige Fangseu Badjio and François Poulet |
Abstract: |
In this paper, emphasis is placed on
understanding how human behaviour interacts with visual data mining
tools in order to improve their design and usefulness. Computer tools
that are more useful assist users in achieving desired goals. Our
objective is to highlight quality-in-context-of-use problems with existing VDM systems that need to be addressed in the design of new VDM systems. For this purpose, we defined a checklist based on activity
theory. The responses provided by 15 potential users are summarised as
design insights. The users respond to questions selected from the
activity checklist. This paper describes the evaluation method and
shares lessons learned from its application. |
|
Title: |
INTELLIGENT TUTORING SYSTEM: AN ASSESSMENT STRATEGY FOR TUTORING
ON-LINE |
Author(s): |
Francesco Colace, Massimo De Santo and Mario Vento |
Abstract: |
In this paper we introduce a tutoring approach for the e-learning formative process. This approach is strictly related to the assessment phase. Assessment in the context of education is the process of characterizing what a student knows. The reasons to perform evaluation are quite varied, ranging from a need to informally understand student learning progress in a course to a need to characterize student expertise in a subject. Moreover, finding an appropriate assessment tool is a central challenge in designing a tutoring approach. In this paper we propose an assessment method based on the use of ontologies and their representation through Bayesian Networks. The aim of our approach is the generation of adapted questionnaires in order to test the student’s knowledge in every phase of the learning process. By analyzing the results of the evaluation, an intelligent tutoring system can help students, offering effective support to the learning process and adapting their learning paths. |
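The core inference such a Bayesian-network-based assessment relies on can be sketched for a single concept node as follows (a minimal illustration with assumed probabilities, not the paper's actual network):

```python
# Bayesian updating of the belief that a student has mastered one ontology
# concept, given questionnaire answers. All probabilities are illustrative.
def posterior_mastery(prior, correct, p_correct_given_mastery=0.9,
                      p_correct_given_no_mastery=0.2):
    """P(mastery | observed answer) by Bayes' rule."""
    if correct:
        like_m, like_n = p_correct_given_mastery, p_correct_given_no_mastery
    else:
        like_m, like_n = 1 - p_correct_given_mastery, 1 - p_correct_given_no_mastery
    evidence = like_m * prior + like_n * (1 - prior)
    return like_m * prior / evidence

belief = 0.5                        # initial belief about the concept
for answer in [True, True, False]:  # sequence of questionnaire outcomes
    belief = posterior_mastery(belief, answer)
    print(f"updated belief: {belief:.3f}")
```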
|
Title: |
INTELLIGENT TUTORING SYSTEM: A MODEL FOR STUDENT TRACKING |
Author(s): |
Francesco Colace, Massimo De Santo and Marcello Iacone |
Abstract: |
Thanks to the technological improvements of recent years, distance education today represents a real and effective tool for integrating and supporting traditional formative processes. In the literature it is widely recognized that an important component of this success is related to the ability “to customize” the learning process for the specific needs of a given learner. This ability is still far from being reached and there is a lot of interest in investigating new approaches and tools to adapt the formative process to specific individual needs. In this paper we present and discuss a model to capture information about the learning style and capabilities of students; this information is subsequently used to select the most suitable learning objects and to arrange them in “adapted” learning paths. |
|
Title: |
FACE RECOGNITION FROM SKETCHES USING ADVANCED CORRELATION FILTERS
USING HYBRID EIGENANALYSIS FOR FACE SYNTHESIS |
Author(s): |
Yung-hui Li, Marios Savvides and Vijayakumar Bhagavatula |
Abstract: |
Most face recognition systems focus on photo-based (or video) face recognition, but there are many law-enforcement applications where a police sketch artist composes a face sketch of a criminal that officers then use to look for the criminal. The current state-of-the-art approach transforms all test face images into sketches and then performs recognition in the sketch domain using the sketch composite. However, there is one flaw in this approach which hinders it from being deployed fully automatically in the field: sketch images generated from surveillance footage will vary greatly due to illumination variations of the face in the footage under different lighting conditions. This results in imprecise sketches for real-time recognition. We propose the opposite, which is a better approach: generating a realistic face image from the composite sketch using a hybrid subspace method and then building an illumination-tolerant correlation filter which can recognize the person under different illumination variations. We show experimental results on the CMU PIE (Pose, Illumination and Expression) database demonstrating the effectiveness of our novel approach. |
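The basic operation behind correlation-filter recognition can be sketched as follows (illustrative only; the paper's advanced filter design and face synthesis are not reproduced, and the images are random stand-ins):

```python
# Frequency-domain correlation of a test image with a template: a sharp
# correlation peak indicates a match, and its location gives the shift.
import numpy as np

rng = np.random.default_rng(0)
template = rng.random((64, 64))           # stand-in for a synthesized face
test = np.roll(template, (3, 5), (0, 1))  # same face, slightly shifted

# Cross-correlate via the FFT: corr = IFFT( FFT(test) * conj(FFT(template)) )
corr = np.fft.ifft2(np.fft.fft2(test) * np.conj(np.fft.fft2(template))).real
peak = np.unravel_index(np.argmax(corr), corr.shape)
print("correlation peak at shift:", peak)  # (3, 5): recovered displacement
```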
|
Title: |
TOWARDS AN ONTOLOGY OF LMS - A CONCEPTUAL FRAMEWORK |
Author(s): |
Gabriela Díaz-Antón and María A. Pérez |
Abstract: |
Learning Management Systems (LMS) are widely used to support training in organizations. Selecting and implementing an LMS can have an impact on cost, time and customer satisfaction in the organization. Due to the existence of a variety of definitions on the subject of e-learning and LMS, a conceptual framework using an ontology is necessary. This article presents research in progress whose final objective is to develop a method to select, implement and integrate an LMS into an organization with a systemic quality approach. As a first step, this article presents an ontology to conceptualize the terms associated with LMS, unifying them through their relations. |
|
Title: |
AN EVALUATION OF A VISUAL QUERY LANGUAGE FOR INFORMATION SYSTEMS |
Author(s): |
Haifa Elsidani Elariss, Souheil Khaddaj and Ramzi Haraty |
Abstract: |
In recent years, many non-expert user applications have been developed to query Geographic Information Systems (GIS). GIS are also used to browse and view data about space and time, hence the term spatio-temporal databases. Many approaches to querying spatio-temporal databases have recently been proposed. Location Based Services, which are considered part of the spatio-temporal field, concern users who ask questions related to their current position using a mobile phone. Our research aims at
designing and developing an Intelligent Visual Query Language (IVQL)
that allows users to query databases based on their location. The
databases are installed on a GIS server computer. The queries are sent
to the server from a mobile phone through the Short Message Service (SMS). With the emerging globalization of user interfaces, IVQL is meant
to have a global and international user interface that could be
understood by all users worldwide who are from different countries with
different cultures and languages. We propose a user interface consisting
of smiley icons that are used to represent and build an international
query language. Smiley icons enable the users to access data and build
queries easily and in a user-friendly way. A query is formulated by
means of selecting the smiley icon that represents the operation to be
executed then selecting the theme or object to be found. IVQL is an
expandable language. It can include as many icons as needed whether they
represent themes or objects. Hence, IVQL is considered to be an
intelligent query language. The visual query language and its user
interface are explained. The IVQL model is described. The query
formulation is illustrated using a sample GIS system for tourists. The
IVQL user interface and query formulation can be applied to other fields
such as Management Information Systems specifically in Customer
Relationship Management, air traffic and bioinformatics. We then
conclude about our future work. |
|
Title: |
A FUZZY-BASED DISTANCE TO IMPROVE EMPIRICAL METHODS FOR MENU
CLUSTERING |
Author(s): |
Cristina Coppola, Gennaro Costagliola, Sergio Di Martino, Filomena
Ferrucci and Tiziana Pacelli |
Abstract: |
An effective menu organization is fundamental
to obtain usable applications. A common practice to achieve this is to
adopt empirical methods in the menu design phase, by requesting a number
of intended final users to provide their ideal tasks arrangements.
However, to improve the effectiveness of this approach, it is necessary
to filter results, by identifying and discarding data coming from
subjects whose mental models are too weak in the considered domain. To this aim, in this paper we propose a formal tool suited to supporting menu designers, based on a fuzzy distance we defined. This measure can be easily calculated on the empirical datasets, thanks to a
specifically conceived supporting application we developed. As a result,
by exploiting the proposed solution, menu designers can rely on a formal
tool to evaluate the significance of empirical data, thus leading towards
more effective menu clustering. |
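As a rough non-fuzzy analogue of such a distance (the paper defines its own fuzzy-based measure; this sketch only illustrates the general idea of comparing two subjects' task arrangements pairwise):

```python
# Pairwise-disagreement distance between two menu arrangements: the fraction
# of task pairs that one subject groups together and the other does not.
from itertools import combinations

def pair_distance(grouping_a, grouping_b, tasks):
    """Fraction of task pairs on which the two groupings disagree."""
    def together(grouping, x, y):
        return any(x in menu and y in menu for menu in grouping)
    pairs = list(combinations(tasks, 2))
    disagreements = sum(together(grouping_a, x, y) != together(grouping_b, x, y)
                        for x, y in pairs)
    return disagreements / len(pairs)

tasks = ["open", "save", "print", "zoom"]
subject1 = [{"open", "save", "print"}, {"zoom"}]
subject2 = [{"open", "save"}, {"print", "zoom"}]
print(pair_distance(subject1, subject2, tasks))  # 0.5
```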
|
Title: |
AUTOMATIC FEEDBACK GENERATION - USING ONTOLOGY IN AN INTELLIGENT
TUTORING SYSTEM FOR BOTH LEARNER AND AUTHOR BASED ON STUDENT MODEL |
Author(s): |
Pooya Bakhtyari |
Abstract: |
One of the essential elements needed for effective learning is feedback. Feedback can be given to learners during learning, but also to authors during course development. However, producing valuable feedback is often time-consuming and causes delays. For this reason, and because human-generated feedback can be incomplete and inaccurate, we believe it would be valuable to generate feedback automatically for both learner and author in an intelligent tutoring system (ITS), where feedback is one of the main tutoring and guidance tools. For the development of these mechanisms, we used an ontology to create a rich supply of feedback. We designed all components of the ITS, such as the course materials and the learner model, based on the ontology, to share a common understanding of the structure of information among software agents and to make it easier to analyze the domain knowledge. With ontologies we specify (1) the knowledge to be learned (domain and task knowledge) and (2) how the knowledge should be learned (education). We include in the learner model all the factors needed for good feedback generation. We will develop algorithms with which we automatically create valuable feedback for learners during learning, and for authors during course development. In this paper we also show a mechanism for reasoning over the resources and the learner model to produce feedback tailored to the learner. |
|
ICEIS Doctoral Consortium |
Title: |
A FRAMEWORK FOR ASSESSING ENTERPRISE RESOURCES PLANNING (ERP)
SYSTEMS SUCCESS: AN EXAMINATION OF ITS ASPECT FOCUSING ON EXTERNAL
CONTEXTUAL INFLUENCES |
Author(s): |
Princely Ifinedo |
Abstract: |
Doctoral Consortium: Assessing the success of information systems (IS) is a critical issue for researchers and practitioners. IS evaluation is a nightmare for some practitioners because of a lack of knowledge regarding such issues. This is because differing organizational and external environmental contexts may exist for firms vis-à-vis the adopted technologies or software. Among
the fastest growing IS globally are enterprise resources planning (ERP)
systems. ERP systems are complex, making it even harder for
practitioners with limited skills to assess the success of such systems
in their contexts. Our notion of contexts encompasses the external and
internal (organizational) levels. This paper is motivated by the paucity of research in this area. In general, we propose an integrative framework
for assessing ERP systems success that benefits from other theoretical
models. In particular, this paper presents the findings regarding the
influence of some contextual factors on ERP systems success measurement.
Our contributions in this area of research are discussed. |
|
Title: |
A MULTIPLE CLASSIFIER SYSTEM FOR INTRUSION DETECTION USING BEHAVIOR
KNOWLEDGE SPACES AND TEMPORAL INFORMATION |
Author(s): |
Claudio Mazzariello and Carlo Sansone |
Abstract: |
As modern enterprises rely more and more on the services provided by means of complex and pervasive network infrastructures, the need to cope with network security emerges ever more clearly; the reliability, dependability and trustworthiness of the provided services, and the privacy of the exchanged information, are issues an enterprise cannot avoid considering. Due to the huge amount of data to deal with, network traffic analysis is a task which requires the aid of automated tools; the detection of network intrusions might be regarded, to some extent, as the practice of telling attack packets apart from legal and authorized traffic, hence it ends up being a canonical classification problem, where network packets are the data to be classified, and the outcome assesses whether each packet belongs to an attack pattern or not, and eventually associates it with one of K known attack classes. As network attacks are usually spread over long packet sequences, we propose to analyze time series of events rather than single packets, in order to take into account the intrinsic nature of the problem under examination. Furthermore, observing that carefully combining several base classifiers makes it possible to achieve better detection results, we propose an approach to intrusion detection based on multiple classifiers detecting attacks by observing temporal sequences of events. |
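The Behavior Knowledge Space combination named in the title can be sketched as follows (a minimal illustration, not the authors' system; the detector outputs and labels are invented):

```python
# Behavior Knowledge Space (BKS) combination: the joint decisions of the
# base classifiers index a table whose cells record the true-class counts
# seen in training; prediction returns the most frequent class in the cell.
from collections import Counter, defaultdict

class BKSCombiner:
    def __init__(self):
        self.table = defaultdict(Counter)  # (label_1,...,label_L) -> class counts

    def fit(self, base_decisions, true_labels):
        # base_decisions: one tuple of base-classifier outputs per sample
        for decisions, truth in zip(base_decisions, true_labels):
            self.table[tuple(decisions)][truth] += 1

    def predict(self, decisions, default="unknown"):
        cell = self.table.get(tuple(decisions))
        return cell.most_common(1)[0][0] if cell else default

# Toy training set: two base detectors labelling packet-sequence events.
train = [(("normal", "normal"), "normal"),
         (("attack", "normal"), "attack"),
         (("attack", "attack"), "attack")]
bks = BKSCombiner()
bks.fit([d for d, _ in train], [t for _, t in train])
print(bks.predict(("attack", "normal")))  # attack
```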
|
Title: |
A METHOD FOR INTEGRATING AND MIGRATING ENTERPRISE INFORMATION
SYSTEMS BASED ON MIDDLEWARE STYLES |
Author(s): |
Simon Giesecke |
Abstract: |
The PhD project described here devises an
architecture development method called MIDARCH for selecting a
middleware platform in Enterprise Application Integration (EAI) and
migration projects. The method uses a taxonomy of middleware platforms
based on the architectural styles that are induced by middleware
platforms (MINT Styles). The choice of a style is based on
extra-functional properties of the resulting systems. The method builds
upon the ANSI/IEEE Standard 1471. The major goal of the method is to
improve development process productivity and predictability through
increasing reuse of design knowledge between projects. The key feature
of the proposed method is that it enables binding of design knowledge to
MINT styles. |
|
Title: |
TOWARDS A COMPLETE APPROACH TO GENERATE SYSTEM TEST CASES |
Author(s): |
Javier J. Gutiérrez |
Abstract: |
System testing allows verifying the behaviour of the system under test and guaranteeing the satisfaction of its requirements. The expected behaviour of a software system is recorded in its functional requirements, so those requirements are the most valuable starting point for generating test cases. We need approaches, processes and tools to drive the generation of test cases from software requirements. This paper summarizes the objectives and results of our thesis project, focused on defining a process to derive test cases from functional requirements expressed as use cases. |
|
Title: |
MODEL ANALYSIS OF ARCHITECTURES FOR E-COMMERCE SYSTEMS ON THE
ELECTRICITY RETAILER MARKET |
Author(s): |
Victor Manuel Oliveira Cruz dos Santos |
Abstract: |
A UML model of the electricity retailer has been achieved; the next step is to test the interface model against limit conditions and analyse its balance between the business-to-business (B2B) and business-to-client (B2C) sides of the model. Then follows analysis of the internal units' communication and processes to identify the compensation limits of internal units under external conditions. |
|
Title: |
AN ARCHITECTURE FOR MOBILE-SUPPORTED CUSTOMER LOYALTY SCHEMES -
ARCHITECTURAL FRAMEWORK AND ASSOCIATED BUSINESS MODELS |
Author(s): |
Christian Zeidler |
Abstract: |
This paper presents a proposed research work in
the area of mobile information system applications for marketing
purposes, in particular the application to business-to-consumer loyalty
schemes. The introductory section gives an overview of the application
domain and introduces the different aspects of relationship marketing
and customer loyalty as well as the potentials offered by using mobile
technology in supporting such schemes. The second section presents the
current stage and proposed methodology for the research that consists of
designing and evaluating a framework for mobile loyalty schemes
encompassing an architectural design based on the 4+1 view model of
software architectures and the development of related business models,
both with respect to the particularities of mobile information systems concerning technical as well as user-specific issues. Mobile information
systems in this context are to be seen as client/server platforms
supporting multi-channel user interfaces with an emphasis on the mobile
phone as a client platform. In the third section, based on the presented
methodology, a conclusion and overview of the expected artefacts and
results is given. |
|
Title: |
ONTOLOGY-SUPPORTED WEB SEARCHING |
Author(s): |
Vicky Dritsou |
Abstract: |
In this paper we define our doctoral research
problem of using ontologies to support indexing and information
retrieval on the Web. Previous works have shown that we can benefit from
this approach. We specifically intend to focus on taxonomies and address
issues of indexing and retrieval effectiveness in view of the invalid
compound term problem, as well as explore the combined use of
logic-based and distance-based approaches to ontology mapping. |
|
Title: |
TASK SWITCHING IN DETAIL - TASK IDENTIFICATION, SWITCHING
STRATEGIES AND INFLUENCING FACTORS |
Author(s): |
Olha Bondarenko |
Abstract: |
In this paper the first stages of a project
that aims at supporting the document management of information workers
in complex multitask environments are presented. A rich sample of
qualitative data was obtained from an observational study. In the first
stage of data analysis we identified task switching at the level at
which it was perceived by the subjects. The factors surrounding the task
switching process were the focus of the second stage. Various aspects of
task switching, such as environment, reasons, switching strategies and
subjects' activities, were categorized. We then compared how the use of
various task switching strategies differed depending on these factors.
Although the data analysis has not yet been completed, it is clear that
subjects applied different strategies depending on whether they switched
tasks of their own will or were forced to do so. Moreover, the use of
strategies differed between the physical and digital domains. The
outline of the expected outcome and the research plans are discussed. |
|
Title: |
MANAGEMENT CHALLENGES IN SOFTWARE PROJECTS: IMPROVING SOFTWARE
PROJECT MANAGEMENT AND REPRESENTATION IN MULTIPROJECT ENVIRONMENTS |
Author(s): |
Fran J. Ruiz-Bertol |
Abstract: |
Management is an important issue in software
project development. Unfortunately, too many management problems still
arise in current software projects. The most notorious consequences of
these drawbacks are overruns in schedule and budget, and an outcome (the
final product or service) that does not fulfil the initially specified
features and/or quality. The resolution of these problems usually
focuses on critical success factors, such as adequate communication
channels or a realistic budget and schedule. However, these factors do
not provide specific solutions to overruns or insufficient features;
they are only the key factors for solving management problems in
software projects. The dissertation presented here aims to provide
particular solutions to management problems. Specifically, we focus on
two aspects of software management: the representation of project data,
and management issues in multiproject software development. |
|
Title: |
GENRE AND ONTOLOGY BASED BUSINESS INFORMATION ARCHITECTURE
FRAMEWORK (GOBIAF) |
Author(s): |
Turo Kilpeläinen |
Abstract: |
This thesis proposal introduces a plan for
developing a genre and ontology based business information architecture
(GOBIA) framework and its development method. The framework builds upon
the well-reported high cohesion between the business (processes) and the
information needed to operate the business, to form the basis for total
Enterprise Architecture (EA) development. As most present architectural
models are systems architecture oriented, there seems to be a clear need
for an efficient yet comprehensive tool to support domain analysis and
the representation of results from the viewpoint of business-critical
information in architecture development. In this way, organisations may
have better control over their architectural descriptions instead of
being forced to adopt the information and process models embedded in
enterprise system packages such as ERP systems. The fundamental premise
is thus to support the development of holistic information management
principles in geographically dispersed environments where business
processes cross the boundaries of a number of business units. Thereby,
the GOBIA framework will contribute a shared mechanism to support
BIA-based strategic and operational thinking that forces dispersed
business units to conceive, understand, structure, and present local
information (at a conceptual level), their interconnections, and the
level of management, through a well-defined development method and
principles. |
|
Title: |
TEMPORAL INFORMATION IN NATURAL LANGUAGES - IS TIME IN ROMANIAN THE
SAME? |
Author(s): |
Corina Forăscu |
Abstract: |
Many Natural Language Processing applications
rely on the temporal information in texts. As, to our knowledge, there
are currently no software tools to deal with and further use the
temporal information in Romanian texts, we decided to study, encode,
(automatically) detect and use in other NLP applications the temporal
information in natural language, as used in Romanian or in conjunction
with other languages. The paper presents work in progress towards the
final PhD dissertation. |
|
Title: |
SEMANTIC COHERENCE IN SOFTWARE ENGINEERING |
Author(s): |
Michael Skusa |
Abstract: |
During software engineering processes many
artifacts are produced to document the development of a concrete
software system. For pairs of artifacts that are related with respect to
their meaning for the development process but differ in their formal
foundation, formal associations often do not exist or are not
documented. This leads to gaps in the documentation of the software
engineering process and to a decrease of traceability. The ideas behind
the software become harder to understand, and because of this missing
understanding a consistent extension, modification and usage of the
software becomes hard as well. In this paper a novel approach is
presented to close gaps in the documentation of software engineering
processes. Formal representations from various sources, which are used
during the development process, are integrated on a semi-formal level.
In a first step, artifacts with different formal foundations are
classified according to their role in the current development situation.
These classified artifacts, which will be called assets, share a common
representation. Using this representation, simple asset descriptions can
be linked together in a second step; they become part of complex asset
descriptions. The approach is semi-formal because the kinds of
associations that can be defined within this asset representation can
vary in their degree of formal foundation. The simultaneous usage of
different formalisms during software development is supported, even if a
formal integration is not possible or has not been attempted before.
Closing gaps in the documentation of software engineering processes
using this approach improves the traceability of development decisions,
simplifies round-trip engineering and enables new kinds of decision
support for software developers. The approach allows support functions
of common software development tools, e.g. suggestions for code
completion, to be turned into a generalized support for “model
completion” or “design completion”, which will improve development
productivity and documentation quality substantially. |
|
Title: |
ALIGNING NETWORK SECURITY TO CORPORATE GOALS: INVESTING ON NETWORK
SECURITY |
Author(s): |
Mathews Zanda Nkhoma |
Abstract: |
In recent years, information and
telecommunications technology and services have expanded at an
astonishing rate. The public and private sectors increasingly depend on
information and telecommunications systems capabilities and services. In
the face of rapid technological change, public and private organizations
are undergoing significant changes in the way they conduct their
business activities, including the use of wide area networking via
public networks. These changes include mandates to reduce expenses,
increase revenue, and, at the same time, compete in a global
marketplace. Even during prosperous economic times, security has not
been easy to sell to senior management unless the organization has
recognized that it has been the victim of a major security incident. In
today’s business environment it is difficult to obtain senior management
approval for the expenditure of valuable resources to “guarantee” that a
potentially disastrous event, one that could affect the ultimate
survivability of the organization, will not occur. |
|
Workshop on Wireless Information Systems
(WIS-2006) |
Title: |
ANALYSIS OF DISTRIBUTED RESOURCE MANAGEMENT IN WIRELESS LANS THAT
SUPPORT FAULT TOLERANCE |
Author(s): |
Ghassan Kbar and Wathiq Mansoor |
Abstract: |
Deploying wireless LANs (WLANs) at large scale
is mainly affected by reliability, availability, and performance. These
parameters are a concern for most managers who want to deploy WLANs. In
order to address these concerns, new radio resource management
techniques with fault tolerance can be used in a new generation of
wireless LAN equipment. These techniques include distributed dynamic
channel assignment, sharing load among Access Points (APs), and
supporting fault tolerance. Changing from the relatively static radio
resource management techniques generally in use today to dynamic methods
has been addressed in previous research using centralized management,
but that approach suffers from network availability problems. In [10] a
new distributed management scheme for dynamic channel assignment was
suggested. In this paper the idea is extended to support fault
tolerance, which improves network availability and reliability compared
to centralized management techniques. In addition, it helps to increase
network capacity and improve performance, especially in large-scale
WLANs. The new system has been analyzed, and results based on the
binomial distribution show an improvement in network performance
compared to static load distribution. |
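
The availability gain from tolerating access-point failures can be
illustrated with a simple binomial computation. The sketch below is
illustrative only, assuming independent AP failures; the function name,
parameters and figures are ours, not the paper's.

    from math import comb

    def availability(n_aps: int, p_up: float, k_required: int) -> float:
        """Probability that at least k_required of n_aps access points
        are up, each independently operational with probability p_up
        (a binomial reliability model)."""
        return sum(comb(n_aps, k) * p_up**k * (1 - p_up)**(n_aps - k)
                   for k in range(k_required, n_aps + 1))

    # With distributed management the WLAN survives as long as enough APs
    # remain to carry the load; a centralized controller is, by contrast,
    # a single point of failure.
    print(availability(n_aps=8, p_up=0.95, k_required=6))  # ~0.994

Allowing the surviving APs to absorb a failed neighbour's load lowers
k_required and so raises this probability, which is the intuition behind
the reported improvement.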
|
Title: |
ULTRA-WIDEBAND INTERFERENCE MITIGATION USING CROSS-LAYER COGNITIVE
RADIO |
Author(s): |
Rashid A. Saeed, Sabira Khatun, Borhanuddin Mohd. Ali and Mohd.
Khazani Abdullah |
Abstract: |
Cognitive Radio (CR) is an emerging approach
for a more efficient usage of the precious radio spectrum resources,
which takes an expanded view of the wireless channel by managing and
adapting various dimensions of time, frequency, space, power, and
coding. In this paper, we define the system requirements for cognitive
radio, as well as the general architecture and basic physical and link
layer functions, in order to self-adapt the UWB pulse shape parameters
and maximize system capacity while coexisting with in-band legacy NB
systems (WiFi and FWA) in the surrounding environment. |
|
Title: |
A DISTRIBUTED BROADCAST ALGORITHM FOR AD HOC NETWORKS |
Author(s): |
Li Layuan, Li Chunlin and Sun Qiang |
Abstract: |
In mobile ad hoc networks, many unicast and
multicast protocols depend on a broadcast mechanism for control and
route establishment functionality. In straightforward broadcast by
flooding, each node retransmits a message to all its neighbors until the
message has been propagated to the entire network. This becomes very
inefficient and easily results in the broadcast storm problem. Thus an
efficient broadcast algorithm should be used to lessen the broadcast
storm. Due to the dynamic nature of ad hoc networks, global information
about the network is difficult to obtain, so the algorithm should be
distributed. In this paper, an efficient distributed heuristic-based
algorithm is presented. The algorithm is based on a joint
distance-counter threshold (JDCT) scheme. It runs in a distributed
manner at each node in the network without needing any global
information. Each node in an ad hoc network hears the message from its
neighbors and decides whether to retransmit or not according to the
signal strength and the number of received messages. By using the JDCT
algorithm, it is easy to find the nodes that constitute the vertices of
a hexagonal lattice covering the whole network. The algorithm is very
simple, easy to operate, and performs well in mobile wireless
communication environments. A comparison with several existing
algorithms is conducted. Simulation results show that the new algorithm
is efficient and robust. |
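
The per-node retransmission decision described above might look roughly
as follows; the thresholds and the distance proxy are invented for
illustration, since the abstract does not give the paper's actual
values.

    def should_retransmit(signal_strengths: list[float],
                          counter_threshold: int = 3,
                          distance_threshold: float = 0.8) -> bool:
        """Joint distance-counter test for one broadcast message.

        signal_strengths holds the strengths (normalized to [0, 1]) of
        all copies of the message heard so far. A node retransmits only
        if (a) few duplicates were heard (counter test) and (b) even the
        strongest copy was weak, i.e. every known sender is far away, so
        this node adds coverage (distance test)."""
        if len(signal_strengths) > counter_threshold:
            return False                      # area is already well covered
        estimated_distance = 1.0 - max(signal_strengths)  # toy proxy
        return estimated_distance >= distance_threshold

A node that fails either test stays silent, which is what suppresses the
broadcast storm.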
|
Title: |
HIGH PERFORMANCE IN A WIRED-AND-WIRELESS INTEGRATED IC CARD TICKET
SYSTEM |
Author(s): |
Shiibashi Akio, Tadahiko Kadoya and Kinji Mori |
Abstract: |
Automatic Fare Collection Systems (AFCSs)
require high performance and high reliability. The automatic fare
collection gates (AFCGs) must let passengers pass as quickly as
possible, because crowds of passengers rush through them during rush
hours. High performance in wireless communications, however, lowers
reliability. This was a problem when a company was developing a wireless
IC card ticket system. "Autonomous Decentralized Processing Technology"
and a "Decentralized Algorithm for Fare Calculations with an IC Card and
AFCGs" were designed as the solutions. This paper introduces two models,
the decentralized process and the centralized one, simulates their
performance, and shows the satisfactory results of the decentralized
one. These technologies have been implemented in the practical system
and have proven effective. |
|
Title: |
IN&OUT: A CONTEXT AWARE APPLICATION BASED ON RFID LOCALIZATION
TECHNOLOGY |
Author(s): |
Alessandro Andreadis, Fabio Burroni and Pasquale Fedele |
Abstract: |
The wide adoption of wireless technologies has
modified the typology of services that can be offered by information
systems. Mobile users need ad-hoc information based on location and
usage context, in order to minimize network usage and achieve service
efficiency and effectiveness. This paper presents an information
architecture providing context-aware and location-based contents to
users exploring a museum and/or a related archaeological excavation.
While moving around, the user is equipped with a client device and his
position is precisely detected through RFID technology. The system is
thus able to suggest specific multimedia contents to the user, letting
him select the ones to be downloaded through wireless connections. The
system offers the user a constant association between objects of
interest and the place in the excavation from which they were recovered.
Thus the visitor inside a museum room can view a visualization or a
hypothetical reconstruction of the place of recovery. |
|
Title: |
EXPERIENCES WITH THE TINYOS COMMUNICATION LIBRARY |
Author(s): |
Paolo Corsini, Paolo Masci and Alessio Vecchio |
Abstract: |
TinyOS is a useful resource for developers of
sensor networks. The operating system includes ready-made software
components that enable the rapid creation of complex software
architectures. In this paper we describe the lessons learned from
programming with the TinyOS communication library. In particular, we try
to rationalize the existing functionalities, and we present our
solutions in the form of a communication library called TComm-Lib. |
|
Title: |
A REFLECTIVE MIDDLEWARE ARCHITECTURE FOR ADAPTIVE MOBILE COMPUTING
APPLICATIONS |
Author(s): |
Celso Maciel da Costa, Marcelo da Silva Strzykalski and Guy Bernard
|
Abstract: |
Mobile computing applications are required to
operate in environments in which the availability of resources and
services may change significantly during system operation. As a result,
mobile computing applications need to be capable of adapting to these
changes to offer the best possible level of service to their users.
However, traditional middleware is limited in its capability to adapt to
environment changes and to different users' requirements. The
Computational Reflection paradigm has been used in the design and
implementation of adaptive middleware architectures. In this paper, we
propose an adaptive middleware architecture based on reflection, which
can be used to develop adaptive mobile applications. The
reflection-based architecture is compared to a component-based
architecture from a quantitative perspective. The results suggest that
middleware based on Computational Reflection can be used to build mobile
adaptive applications that require only a very small overhead in terms
of running time as well as memory space. |
|
Title: |
MIDDLEWARE SUPPORT FOR TUNABLE ENCRYPTION |
Author(s): |
Stefan Lindskog, Reine Lundin and Anna Brunstrom |
Abstract: |
To achieve an appropriate tradeoff between
security and performance for wireless applications, a tunable and
differential treatment of security is required. In this paper, we
present a tunable encryption service designed as a middleware that is
based on a selective encryption paradigm. The core component of the
middleware provides block-based selective encryption. Although the
selection of which data to encrypt is made by the sending application
and is typically content-dependent, the representation used by the core
component is application and content-independent. This frees the
selective decryption module at the receiver from the need for
application or content-specific knowledge. The sending application
specifies the data to encrypt either directly or through a set of
high-level application interfaces. A prototype implementation of the
middleware is described along with an initial performance evaluation.
The experimental results demonstrate that the proposed generic
middleware service offers a high degree of security adaptiveness at a
low cost. |
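
A minimal sketch of the block-based selective encryption idea follows;
the XOR stand-in cipher and all names are ours, chosen only to keep the
example self-contained (a real middleware would use a proper cipher such
as AES).

    from dataclasses import dataclass

    BLOCK_SIZE = 16  # bytes; illustrative

    def toy_encrypt(block: bytes, key: int) -> bytes:
        """Stand-in cipher (XOR with a one-byte key), NOT real security."""
        return bytes(b ^ key for b in block)

    toy_decrypt = toy_encrypt  # XOR is its own inverse

    @dataclass
    class Message:
        """Application- and content-independent representation: the
        payload split into blocks, plus a bitmap saying which blocks
        are encrypted."""
        blocks: list[bytes]
        encrypted: list[bool]

    def selective_encrypt(data: bytes, select, key: int) -> Message:
        """The sending application supplies select(index, block) -> bool
        to make the content-dependent choice of which blocks to protect."""
        blocks, flags = [], []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            flag = bool(select(i // BLOCK_SIZE, block))
            blocks.append(toy_encrypt(block, key) if flag else block)
            flags.append(flag)
        return Message(blocks, flags)

    def selective_decrypt(msg: Message, key: int) -> bytes:
        """The receiver needs only the bitmap, no application knowledge."""
        return b"".join(toy_decrypt(b, key) if f else b
                        for b, f in zip(msg.blocks, msg.encrypted))

Because the bitmap rather than the content drives decryption, the
receiver-side module stays application independent, which is the point
made in the abstract.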
|
Title: |
APPLYING MUPE CONTEXT PRODUCERS IN DEVELOPING LOCATION AND CONTEXT
AWARE APPLICATIONS |
Author(s): |
Kimmo Koskinen, Kari Heikkinen and Jouni Ikonen |
Abstract: |
Location based services (LBS) and applications
have recently emerged as a significant application area. However,
location based services and applications could also benefit from the
dimensions of contextual data. The Multi-User Publishing Environment
(MUPE) has a built-in context mediation capability that allows the
application developer to concentrate on using contextual data, thus
enabling rapid prototyping of location and context aware applications.
In this paper MUPE context producers are applied so that applications
can exploit the different available contexts in a manner suitable to
their logic. This paper aims to demonstrate that context mediation is a
powerful tool for speeding up the prototyping process and enabling
efficient application development. |
|
Workshop on Modelling,
Simulation, Verification and Validation of Enterprise Information Systems
(MSVVEIS-2006) |
Title: |
AN ONTOLOGY BASED ARCHITECTURE FOR INTEGRATING ENTERPRISE
APPLICATIONS |
Author(s): |
Razika Driouche, Zizette Boufaïda and Fabrice Kordon |
Abstract: |
Today, companies investigate various domains of
collaborative business-to-business e-commerce. They have to look inward
at their applications and processes, which must be able to cooperate
dynamically. This leads to a rise in cooperative business processes,
which require the integration of autonomous and heterogeneous
applications. However, currently existing approaches to application
integration lack an adequate specification of the semantics of the
terminology, which leads to inconsistent interpretations. In this paper,
we propose a formal solution to the problem of application integration
and we reflect upon the suitability of ontologies as a candidate for
solving the problem of heterogeneity and ensuring greater
interoperability between applications. |
|
Title: |
CSPJADE: ARCHITECTURE-DRIVEN DEVELOPMENT OF COMPLEX EMBEDDED
SYSTEM SOFTWARE USING A CSP PARADIGM BASED CODE GENERATION TOOL |
Author(s): |
Agustín A. Escámez, Kawthar Bengazhi, Juan A. Holgado, Manuel I.
Capel |
Abstract: |
A code generation tool for the development of
control system software, based on an architectural view of distributed
real-time systems as a set of communicating parallel processes, is
presented. The design of the entire system under modelling is first
carried out by obtaining a complete system specification using the CSP+T
process algebra; an architectural design of the system is then produced
using the CSPJade graphical tool. The implementation of the concurrent
set of processes is finally tackled by using the JCSP library. CSPJade
defines a graphical abstract model of processes similar to the semantics
of CSP+T, so a translation of modelling entities into CSP+T processes
and vice versa is easy to achieve and becomes less error prone,
especially when there are multiple levels of nested processes. CSPJade
allows the designer to deploy a set of reusable software components
together with a well-defined interface whose connectors are mapped onto
fixed ports. In the architectural software design model that supports
the proposed method, processes communicate over channels and execute in
parallel using the “parallel” construct provided by the CSP programming
paradigm, which is subsequently mapped to the equivalent construct of
the JCSP library (one of the target CSP libraries that can be used with
the tool). A project designed with CSPJade can be saved and reused
thanks to the newly proposed Java project model (JPM) structures, these
being composed of the smallest possible units of information, called
Java container model (JCM) structures, which implement the desired
functionality of any given component of the methodology. Both the JPM
and JCM structures are written in the well-known and widespread XML
format. |
|
Title: |
THE DEVELOPMENT OF THE PRECEDENT MODEL FOR THE LATVIA FOREST
MANAGEMENT PLANNING PROCESSES |
Author(s): |
Inita Sile and Sergejs Arhipovs |
Abstract: |
A key question nowadays is the application of
information technologies in every sector where they can improve system
performance. One such sector is forestry, in which it is essential to
manage forest territories appropriately. It is therefore possible to
develop, by means of information technologies, a system that would help
forest experts manage forest territories so that there is no lack of
timber resources. To develop such a system it is necessary, first of
all, to perform an analysis of forestry, as a result of which the
precedent (use case) models are developed. The specification and
notation of the Unified Modeling Language (UML) is used in the
development of the precedent models. Consequently, the system
requirements are defined, according to which it is possible to design
and develop the system. |
|
Title: |
THE STATIC MODEL OF LATVIAN FOREST MANAGEMENT PLANNING AND CAPITAL
VALUE ESTIMATION |
Author(s): |
Salvis Dagis and Sergejs Arhipovs |
Abstract: |
Latvia, where forests cover up to 45% of the
territory, can be proud of its forests. Forestry is the most significant
export sector in Latvia, and in total forestry provides up to 14% of
GDP. Despite the significant felling areas, more increment can be
observed than trees cut. The main aim of forestry policy is to ensure
the sustainable management of forests and forest lands; it is therefore
necessary to evaluate the current situation and think ahead in order to
plan the cutting of forests. |
|
Title: |
SIMULATION MODELLING PERFORMANCE DYNAMICS OF SHIP GAS TURBINE AT
THE LOAD OF THE SHIP’S SYNCHRONOUS GENERATOR |
Author(s): |
Josko Dvornik and Eno Tireli |
Abstract: |
Simulation modelling, performed by the System
Dynamics modelling approach (MIT) with intensive use of nowadays
inexpensive and powerful personal computers (PCs), is one of the most
convenient and most successful scientific methods for analysing the
performance dynamics of nonlinear and very complex natural, technical
and organizational systems [1]. The purpose of this work is to
demonstrate the successful application of system dynamics simulation
modelling to analysing the performance dynamics of a complex system, a
ship's propulsion system. A ship turbine generator is a complex
nonlinear system which needs to be analysed systematically, i.e. as an
entirety composed of a number of sub-systems and elements which are
connected through cause-consequence links (UVP) and feedback loops
(KPD), both within the propulsion system and with the corresponding
environment. The indirect procedures for analysing the performance
dynamics of turbine generator systems used so far, based on standard,
usually linear, methods such as the Laplace transform, transfer
functions and stability criteria, do not meet the current needs for
information about the performance dynamics of nonlinear turbine
generator systems. Since ship turbine generator systems are complex, the
efficient application of the qualitative and quantitative simulation
methodology of System Dynamics will be presented in this work. It will
enable the production and application of more and varied kinds of
simulation models of the observed situations, and enable continuous
computer simulation using high-speed and precise digital computers,
which will significantly contribute to the acquisition of new
information about the nonlinear characteristics of the performance
dynamics of turbine generator systems in the process of design and
education. The successful realization of this work, i.e. the qualitative
and quantitative scientific determination of the complex phenomenon of
the performance dynamics of the load of the ship electric network, or
ship turbine generator system, will make a significant scientific
contribution to the fundamental and applied technical scientific fields,
and to the interdisciplinary sub-directions of maritime transport,
exploitation of ship drive systems, mariners' education, automatics,
theory of management and regulation, expert systems, intelligent
systems, computerization and information systems. The contribution may
also be significant in the education of present and future university
mechanical and electrical engineers in the field of simulation modelling
of complex organisational, natural and technical systems.
|
|
Title: |
THE USEFULNESS OF A GENERIC PROCESS MODEL STRUCTURE |
Author(s): |
Alta van der Merwe, Paula Kotzé and Johannes Cronjé |
Abstract: |
Defining process model structures for reuse in
different activities, such as re-engineering, may seem an innovative
idea. There is, however, a danger that these models are created with no
proof that they are useful in practice. In this paper, we give an
overview of a re-engineering procedure developed from existing
re-engineering procedures combined with Goldratt's theory of
constraints, in order to investigate the usefulness of process model
structures in such an activity. The usefulness is measured against a
defined ordinal measurement. |
|
Title: |
HOW STYLE CHECKING CAN IMPROVE BUSINESS PROCESS MODELS |
Author(s): |
Volker Gruhn and Ralf Laue |
Abstract: |
Business process analysts prefer to build
business process models (BPMs) using graphical languages like BPMN or
UML Activity Diagrams. Several researchers have presented validation
methodologies for such BPMs. In order to use these verification
techniques for BPMs written in graphical languages, the models must be
translated into the input language of a model checker or simulation
tool. By analyzing 285 BPMs (modelled as Event-driven Process Chains
(EPCs)), we found that checking restrictions for "good modeling style"
before starting the translation process has three positive effects: it
can make the translation algorithm much easier, it can improve the
quality of the BPM by substituting "bad constructs" automatically, and
it can help to identify erroneous models. |
|
Title: |
DESIGN AND EVALUATION CRITERIA FOR LAYERED ARCHITECTURES |
Author(s): |
Aurona Gerber, Andries Barnard and Alta van der Merwe |
Abstract: |
The architecture of a system is an
indispensable mechanism required to map business processes to
information systems. The terms architecture, layered architecture and
system architecture are often used inconsistently by researchers, as
well as by system architects and business process analysts. Furthermore,
the concept of architecture is commonplace in discussions of software
engineering topics such as business process management and system
engineering, but agreed-upon design and evaluation criteria are lacking
in the literature. Such criteria are on the one hand valuable for the
determination of system architectures during the design phase, and on
the other hand provide a valuable tool for the evaluation of already
existing architectures. The goal of this paper is thus to extract such a
list of criteria from the literature and from best practices. We applied
these findings to two prominent examples of layered architectures,
notably the ISO/OSI network model and the Semantic Web language
architecture. |
|
Title: |
MODEL CHECKING SUSPENDIBLE BUSINESS PROCESSES VIA STATECHART
DIAGRAMS AND CSP |
Author(s): |
W. L. Yeung, K. R. P. H. Leung, Ji Wang and Wei Dong |
Abstract: |
When modelling business processes in statechart
diagrams, history states can be useful for handling suspension and
resumption, as illustrated by the examples in this paper. However,
previous approaches to model checking statechart diagrams often ignore
history states. We enhanced such a previous approach based on
Communicating Sequential Processes (CSP) and developed a support tool
for it. |
|
Title: |
A SYSTEM DYNAMICS APPROACH FOR AIRPORT TERMINAL PERFORMANCE
EVALUATION |
Author(s): |
Ioanna E. Manataki and Konstantinos G. Zografos |
Abstract: |
Performance modelling of highly complex
large-scale systems constitutes a challenging task. The airport terminal
is a highly dynamic and stochastic system with a large number of
entities and activities involved. In this context, developing
models/tools for assessing and monitoring airport terminal performance
with respect to various measures of effectiveness is critical for
effective decision-making in the field of airport operations planning,
design and management. The objective of this paper is to present the
conceptual framework for the development of a generic, yet flexible tool
for the analysis and evaluation of airport terminal performance. The
tool provides the capability of being easily customizable to the
specific needs and characteristics of any airport terminal. For the
development of the tool, a hierarchical model structure is adopted,
which enables a module-based modelling approach. The underlying
theoretical basis used to model the airport terminal domain is System
Dynamics. |
|
Title: |
PIXL: APPLYING XML STANDARDS TO SUPPORT THE INTEGRATION OF ANALYSIS
TOOLS FOR PROTOCOLS |
Author(s): |
María del Mar Gallardo, Jesús Martínez, Pedro Merino, Pablo Nuñez
and Ernesto Pimentel |
Abstract: |
This paper presents our experiences of using
XML technologies and standards for the integration of analysis tools for
protocols. The core proposal consists in the design of a new XML-based
language named PiXL (Protocol Interchange using XML Languages),
responsible for interchanging the whole specification of a protocol
(data and control) among different existing tools. The structure and
flexibility of XML have proven to be very useful when implementing new
tools such as abstract model checkers. In addition, the proposal has
been applied to achieve a new kind of analysis, where PiXL and new MDA
methodologies are used to build integrated environments for the
reliability and performance analysis of Active Network protocols. |
|
Title: |
SPECIFICATION OF DETERMINISTIC REGULAR LIVENESS PROPERTIES |
Author(s): |
Frank Nießner |
Abstract: |
A great many systems can be formally described
by nondeterministic Büchi automata. The complexity of model checking
then essentially depends on deciding subset conditions on languages
which are accepted by these automata and which represent the system
behavior and the desired properties of the system. The complementation
process involved may lead to an exponential blow-up in the size of the
automata. Therefore, we investigate a rich subclass of properties,
called deterministic regular liveness properties, for which the
above-mentioned blow-up can be avoided. In this paper we present a
characterization that describes the structure of this language class and
its automata. |
|
Title: |
HOW TO DETECT RISKS WITH A FORMAL APPROACH? FROM PROPERTY
SPECIFICATION TO RISK EMERGENCE |
Author(s): |
Vincent Chapurlat and Saber Aloui |
Abstract: |
The research work at the origin of this paper
has two goals. The first is to define a modelling framework that allows
a system to be represented using multi-view and multi-language paradigms
in a unified way, including knowledge and model enrichment by defining
properties. The second consists in defining formal property verification
mechanisms in order to help a modeller detect dangerous situations and
inherent risks which can affect the system. The same mechanisms are then
used to improve the quality of the representation, which is the
classical verification goal. This paper then focuses on the proposed set
of formal property modelling concepts and analysis mechanisms, mainly
based on Conceptual Graphs. In order to illustrate these concepts, the
approach is currently being applied to healthcare organisations. |
|
Title: |
TESTING OF SEMANTIC PROPERTIES IN XML DOCUMENTS |
Author(s): |
Dominik Jungo, David Buchmann and Ulrich Ultes-Nitsche |
Abstract: |
XML is a markup language with a clear
hierarchical structure. Validating an XML document against a schema
document is an important part of the workflow incorporating XML
documents. Most approaches use grammar-based schema languages.
Grammar-based schemas are well suited for the syntax definition of an
XML document, but reach their limits when semantic properties are to be
defined. This paper presents a rule-based, first-order schema language,
complementary to grammar-based schema languages, and demonstrates its
strength in defining semantic properties for an XML document. |
|
Title: |
TOWARDS MODEL CHECKING C CODE WITH OPEN/CÆSAR |
Author(s): |
María del Mar Gallardo, Pedro Merino and David Sanán |
Abstract: |
Verification technologies, like model checking,
have obtained great success in the context of formal description
techniques (FDTs); however, there is still a lack of tools for applying
the same approach to real programming languages. One promising approach
in this second scenario is the use of well-known and stable software
architectures originally designed for FDTs, like OPEN/CÆSAR. OPEN/CÆSAR
is based on a core notation for Labeled Transition Systems (LTSs) and
contains several modules that help to implement tasks such as
reachability analysis, bisimulation, and test generation. All these
functions are accessible through a standard API that makes possible the
generation of specific model checkers for new languages. In this paper,
we discuss how to construct a model checker for distributed C
applications using OPEN/CÆSAR. |
|
Title: |
MODELLING HISTORY-DEPENDENT BUSINESS PROCESSES |
Author(s): |
Kees van Hee, Olivia Oanea, Alexander Serebrenik, Natalia Sidorova
and Marc Voorhoeve |
Abstract: |
Choices in business processes are often based
on the process history, saved as a log file listing events and their
time stamps. In this paper we introduce a finite-path variant of timed
propositional logics with past for specifying guards in business process
models. The novelty is due to the introduction of the boundary points
start and now, corresponding to the starting and current observation
points. Reasoning in the presence of boundary points requires a
three-valued logic, as one needs to distinguish between temporal
formulas that hold, those that do not hold, and ``unknown'' ones
corresponding to ``open cases''. Finally, we extend a sub-language of
the logics to take uncertainty into account. |
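
The role of the start and now boundary points can be sketched with a
tiny three-valued evaluator for a single past operator; the log
representation and the convention that start == 0 means "the log covers
the complete history" are our assumptions, not the paper's.

    from enum import Enum

    class Truth(Enum):
        TRUE = "T"
        FALSE = "F"
        UNKNOWN = "?"

    def once(log: list[tuple[float, str]], event: str,
             start: float, now: float) -> Truth:
        """Three-valued 'event has occurred at some time <= now'.

        The log records only the window [start, now]; anything earlier
        is unobserved, so absence from the log proves falsity only when
        the window covers the whole past (start == 0 by convention)."""
        if any(start <= t <= now and e == event for t, e in log):
            return Truth.TRUE
        return Truth.FALSE if start == 0 else Truth.UNKNOWN

An "open case" is exactly a guard that evaluates to Truth.UNKNOWN
because the deciding events may predate start.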
|
Title: |
EFFICIENT INTERPRETATION OF LARGE QUANTIFICATIONS IN A PROCESS
ALGEBRA |
Author(s): |
Benoît Fraikin and Marc Frappier |
Abstract: |
This paper describes three optimization
techniques for a process algebra interpreter called EB3PAI. This
interpreter supports the EB3 method, which was developed for the purpose
of automating the development of information systems through code
generation and efficient interpretation of abstract specifications.
EB3PAI supports non-deterministic process expressions, automatic
internal action execution and quantified operators in order to allow
efficient execution of large process expressions involving thousands of
persistent entities in an information system. For general information
system patterns, EB3PAI executes in linear time with respect to the
number of terms and operators in the process expression and in
logarithmic time with respect to the number of entities in the system. |
|
Title: |
ARCHITECTURAL HANDLING OF MANAGEMENT CONCERNS IN SERVICE-DRIVEN
BUSINESS PROCESSES |
Author(s): |
Ahmed Al-Ghamdi and José Luiz Fiadeiro |
Abstract: |
To be effective and meet organisational goals,
service-driven applications require a clear specification of the
management concerns that establish business level agreements among the
parties involved in given business processes. In this paper, we show how
such concerns can be modelled explicitly and separately from other
concerns through a set of new semantic primitives that we call
management laws. These primitives support a methodological approach that
consists in extracting management concerns from business rules and
representing them explicitly as connectors in the conceptual
architecture of the application. |
|
Title: |
AN OBSERVATION-BASED ALGORITHM FOR WORKFLOW MATCHING |
Author(s): |
Kais Klai, Samir Tata and Issam Chebbi |
Abstract: |
The work we present here is in line with the
CoopFlow approach dedicated to inter-organizational workflow
cooperation, which consists of workflow advertisement, workflow
interconnection, and workflow cooperation. This approach is inspired by
the Service-Oriented Architecture and allows for partial visibility of
workflows and their resources. Varying degrees of visibility of
workflows enable organizations to retain the required levels of privacy
and security of internal workflows. Degrees of visibility are described
in terms of an abstraction of the workflow's behavior using a symbolic
observation graph. The building of such a graph uses the Ordered Binary
Decision Diagram technique in order to represent and manage workflow
abstractions efficiently within a registry. Advertised abstractions are
then matched for workflow interconnection. |
|
Title: |
SIMULATOR FOR REAL-TIME ABSTRACT STATE MACHINES |
Author(s): |
Pavel Vasilyev |
Abstract: |
We describe the concept and design of a
simulator of Real-Time Abstract State Machines. Time can be continuous
or discrete. Time constraints are defined by linear inequalities. Two
semantics are considered: with and without non-deterministic bounded
delays between actions. The simulator is easily configurable. Simulation
tasks can be generated according to descriptions in a special language.
The simulator will be used for on-the-fly verification of formulas in an
expressive timed predicate logic. Several features that facilitate the
simulation are described: external function definitions, delay settings,
constraint specifications, and others. |
|
Title: |
VALIDATION OF VISUAL CONTRACTS FOR SERVICES |
Author(s): |
José D. de la Cruz, Lam-Son Lê and Alain Wegmann |
Abstract: |
Visual modeling languages have specialized
diagrams to represent behavior and concepts. This diagram specialization
has drawbacks, such as the difficulty of representing the effects of
actions. We claim that visual contracts can describe actions in a more
complete and integrated way. In this paper, we propose a visual contract
notation. Its semantics is illustrated by a mapping to Alloy. Thanks to
this notation, the modeler can specify, within one diagram, an action
and its effects. The modeler can also simulate the contract. These
visual contracts can be used to specify IT services and check their
specifications. As such they contribute to business/IT alignment. Our
visual contracts take elements from several UML diagrams and are based
on set theory and on RM-ODP. |
|
Title: |
FORMAL SPECIFICATION OF REAL-TIME SYSTEMS BY TRANSFORMATION OF
UML-RT DESIGN MODELS |
Author(s): |
Kawtar Benghazi Akhlaki, Manuel I. Capel Tuñón, Juan A. Holgado
Terriza and Luis E. Mendoza Morales |
Abstract: |
We are motivated to complement our methodology
by integrating collaboration diagrams to facilitate the specification of
capsules in UML-RT design models. An improved systematic transformation
method to derive a correct and complete formal system specification of
real-time systems is established. This article aims at integrating
temporal requirements into the design stage of the life cycle of a
real-time system, so that scheduling and dependability analysis can be
performed at this stage. The application of the CSP+T process algebra to
carry out a systematic transformation from a UML-RT model of a
well-known paradigmatic manufacturing-industry case, the
“Production-Cell”, is also presented. |
|
Title: |
A PETRI NET BASED METHODOLOGY FOR BUSINESS PROCESS MODELING AND
SIMULATION |
Author(s): |
Joseph Barjis and Han Reichgelt |
Abstract: |
Research into business processes has recently
seen a re-emergence, as evidenced by the increasing number of
publications and corporate research initiatives. In this paper we
introduce a modeling methodology for the study of business processes,
including their design, redesign, modeling, simulation and analysis. The
methodology, called TOP, is based on the theoretical concept of a
transaction, which we derive from the Language Action Perspective (LAP).
In particular, we regard business processes in an organization as
patterns of communication between different actors that represent the
underlying actions. TOP is supported by a set of extensions of basic
Petri net notation, resulting in a tool that can be used to build models
of business processes and to analyze these models. The result is a more
practical methodology. Through a small case study, this paper not only
introduces TOP, but demonstrates how the concept of a transaction can be
used to capture business processes in an organization. Each transaction
captures an atomic process (activity), which is part of a larger
business process, and identifies the relevant actors for this process.
The paper demonstrates how transactions can be used as building blocks
for modeling a set of larger business processes, and how Petri nets can
then be used to assemble the identified transactions into a complete
model with respect to their time order. The proposed graphical extension
to Petri nets aims to make the resulting models more natural and
readable. |
|
Title: |
ANIMATED SIMULATION FOR BUSINESS PROCESS IMPROVEMENT
|
Author(s): |
Joseph Barjis and Bryan D. MacDonald |
Abstract: |
This paper reports research in progress
conducted in the framework of an undergraduate research program. In this
paper we demonstrate the simulation of the business processes of a drug
store that is planning IT innovations. In this practical project we
apply the TOP methodology previously introduced to the community of this
workshop. The methodology is based on a type of Petri net adapted for
business process modeling. The result of our study in this paper is a
gradual improvement of the business processes of the drug store through
a series of “what-if” scenarios. We run an animated simulation of each
“what-if” scenario and compare the simulation results for business
process improvement. The purpose of using animated simulation is to
visualize the dynamic behavior of each scenario and demonstrate it to
the business owner. |
|
Title: |
TEST PURPOSE OF DURATION SYSTEMS |
Author(s): |
Lotfi Majdoub and Riadh Robbana |
Abstract: |
The aim of conformance testing is to check
whether an implementation conforms to its specification. We are
interested in duration systems, and we consider a specification of a
duration system described by a duration graph. Duration graphs are an
extension of timed systems and are suitable for modeling the accumulated
times spent by computations in duration systems. In this paper, we
propose a framework to generate test cases automatically according to a
test purpose for duration graphs. First, we define the synchronous
product of the specification and the test purpose of an implementation
under test. Second, we demonstrate that the timed words recognized by
the synchronous product are also recognized by both the specification
and the test purpose. This result allows us to generate tests according
to a test purpose from the synchronous product. |
|
Workshop on Natural Language Understanding
and Cognitive Science (NLUCS-2006) |
Title: |
ONTOLOGY, TYPES AND SEMANTICS |
Author(s): |
Walid S. Saba |
Abstract: |
In this paper we argue that many problems in
the semantics of natural language are due to a large gap between
semantics (which is an attempt at understanding what we say in language
about the world) and the way the world is. This seemingly monumental
effort can be grossly simplified if one assumes, as Hobbs (1985)
correctly observed some time ago, a theory of the world that reflects
the way we talk about it. We demonstrate here that assuming such a
strongly-typed ontology of commonsense knowledge reduces certain
problems to near triviality. |
|
Title: |
AN ALGORITHM FOR ARABIC LEXICON GENERATOR USING MORPHOLOGICAL
ANALYSIS |
Author(s): |
Samer Nofal |
Abstract: |
Several natural language processing systems
(NLPS) use a lexicon. Lexicon is the file that stores information about
words such as: word category, word gender, word number and word tense.
Arabic language words are divided into frozen words and derived words.
Frozen words have no or little morphological analysis so it must be
manually written with full information to NLPS. On the other hand,
derived words can be analyzed morphologically. This is possible because
in derivation a word we follow predefined schemes, which is called
templates. These templates give morphological information about derived
words. In addition, the prefix and the suffix of the derived word gives
indications about the lexicon information of the word. Consequently, by
analyzing the derived words we are free from manually writing any
derived word in the lexicon. This work designs, implements and examines
an algorithm for morphological analyzer and lexicon generator. The
algorithm is based on segmenting the word into prefix, template and
suffix. Then the algorithm tries to decide the fillers of the lexicon
entries from the information contained in these segments. The
segmentation must be correct so this algorithm makes several tests on
the compatibility between the word components: prefix, suffix and
template. This algorithm consults three lists for assertion purposes:
prefixes list, suffixes list and templates list. This algorithm was
tested on three social and political articles, these articles contain
nearly 1300 words. Evaluation shows that we can depend on computational
morphological analysis at least an 80 percent, the 20 percent failure is
due to language exceptions and the hidden diacritics of Arabic words. |
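
The segmentation-and-compatibility loop that the abstract describes
might look roughly as follows. The transliterated affixes, the template
table and the compatible stub are all invented placeholders; real
resources and tests would replace them.

    # Illustrative lists; a real system loads full Arabic resources.
    PREFIXES = {"", "al", "wa", "fa"}
    SUFFIXES = {"", "at", "un", "in"}
    TEMPLATES = {"faa'il": {"category": "noun", "role": "agent"},
                 "maf'uul": {"category": "noun", "role": "patient"}}

    def compatible(prefix: str, template: str, suffix: str) -> bool:
        """Stand-in for the paper's compatibility tests between the three
        segments (e.g. a verbal template rejecting nominal suffixes)."""
        return True  # placeholder: accept every combination

    def analyze(word: str, match_template) -> list[dict]:
        """Try every prefix/suffix split of `word`; keep splits whose stem
        matches a known template and whose parts are mutually compatible.
        match_template(stem) -> template name or None abstracts the
        root-and-pattern matching the abstract does not spell out."""
        analyses = []
        for p in PREFIXES:
            for s in SUFFIXES:
                if not (word.startswith(p) and word.endswith(s)):
                    continue
                stem = word[len(p):len(word) - len(s)] if s else word[len(p):]
                t = match_template(stem)
                if t in TEMPLATES and compatible(p, t, s):
                    analyses.append({"prefix": p, "suffix": s,
                                     "template": t, **TEMPLATES[t]})
        return analyses

Each surviving analysis supplies the fillers (category, gender, number,
tense) for the generated lexicon entry.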
|
Title: |
USING SEQUENCE PACKAGE ANALYSIS AS A NEW NATURAL LANGUAGE
UNDERSTANDING METHOD FOR MINING GOVERNMENT RECORDINGS OF TERROR SUSPECTS |
Author(s): |
Amy Neustein |
Abstract: |
Three years after 9/11, the Justice Department
made the astounding revelation that more than 120,000 hours of
potentially valuable terrorism-related recordings had yet to be
transcribed. Clearly, the government’s efforts to obtain such recordings
have continued. Yet there is no evidence that the contents of the
recorded calls have been analyzed any more efficiently. Perhaps analysis
by conventional means would be of limited value in any event. After all,
terror suspects tend to avoid words that might alarm intelligence
agents, thus “outsmarting” conventional mining programs, which rely
heavily on word-spotting techniques. One solution is the application of
a new natural language understanding method, known as Sequence Package
Analysis, which can transcend the limitations of basic parsing methods
by mapping out the generic conversational sequence patterns found in the
dialog. The purpose of this paper is to show how this new method can
efficiently mine a large volume of government recordings of the
conversations of terror suspects, with the goal of reducing the backlog
of unanalyzed calls. |
|
Title: |
A GOOD INDEX, PREREQUISITE FOR EASY ACCESS OF INFORMATION STORED IN
A DICTIONARY |
Author(s): |
Michael Zock |
Abstract: |
A dictionary is a vital component of any
natural language processing system, be it a human being or a machine.
Yet what seems to an outsider to be one and the same object turns out to
be something very different when viewed by an insider. Hence, the
conclusions reached may not only be different, but also irreconcilable,
which makes knowledge transfer difficult. The goal of this paper is to
present three views, discuss their respective qualities and
shortcomings, and offer some suggestions as to how to move on from
here. |
|
Title: |
AN N-GRAM BASED DISTRIBUTIONAL TEST FOR AUTHORSHIP IDENTIFICATION |
Author(s): |
Kostas Fragos and Christos Skourlas |
Abstract: |
In this paper, a novel method for the
authorship identification problem is presented. Based on character-level
text segmentation, we study the disputed text’s N-gram distributions
within the authors’ text collections. The distribution that behaves most
abnormally is identified using the Kolmogorov-Smirnov test, and the
corresponding author is selected as the correct one. Our method is
evaluated using the test sets of the 2004 ALLC/ACH Ad-hoc Authorship
Attribution Competition, and its performance is comparable with the best
performances of the participants in the competition. The main advantage
of our method is that it is a simple, non-parametric approach to
authorship attribution without the necessity of building authors’
profiles from training data. Moreover, the method is language
independent and does not require segmentation for languages such as
Chinese or Thai. There is also no need for any text pre-processing or
higher-level processing, thus avoiding the use of taggers, parsers,
feature selection strategies, or other language-dependent NLP tools. |
|
Title: |
QUESTION ANSWERING USING SYNTAX-BASED |
Author(s): |
Demetrios G. Glinos and Fernando Gomez |
Abstract: |
This paper presents a syntax-based formalism
for representing atomic propositions extracted from textual documents.
We describe a method for constructing a hierarchy of concept nodes for
indexing such logical forms based on the discourse entities they
contain. We show how meaningful factoid and list questions can be
decomposed into boolean expressions of question patterns using the same
formalism, with free variables representing the desired answers. We also
show how this formalism can be used for robust question answering using
the concept hierarchy and WordNet synonym, hypernym, and antonym
relationships. Finally, we describe the encouraging performance of an
implementation of this formalism for the factoid questions from TREC
2005, which operated upon the AQUAINT document corpus. |
|
Title: |
REQUIREMENTS-DRIVEN AUTOMATIC CONFIGURATION OF NATURAL LANGUAGE
APPLICATIONS |
Author(s): |
Dan Cristea, Corina Forăscu and Ionuţ Pistol |
Abstract: |
The paper proposes a model for the dynamic
building of architectures intended to process natural language. The
representation that lies at the base of the model is a hierarchy of XML
annotation schemas in which the parent-child links are defined by
subsumption relations. We show how the hierarchy may be augmented with
processing power by marking the edges with the names of processors, each
realising an elementary NL processing step that transforms the
annotation corresponding to the parent node into that corresponding to
the child node. The paper describes a navigation algorithm over the
hierarchy, which computes paths linking a start node to a destination
node and automatically configures architectures of serial and parallel
combinations of processors. |
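
The navigation algorithm can be approximated by a breadth-first search
over the schema hierarchy, with processor names attached to edges. The
schemas and processors below are invented for illustration; the paper's
hierarchy also supports parallel combinations, which this serial-only
sketch omits.

    from collections import deque

    # (parent schema, child schema) -> processor realising the step.
    EDGES = {
        ("raw", "tokenized"): "tokenizer",
        ("tokenized", "sentence-split"): "splitter",
        ("tokenized", "pos-tagged"): "pos-tagger",
        ("pos-tagged", "parsed"): "parser",
    }

    def configure(start: str, goal: str) -> list[str] | None:
        """Return a serial pipeline of processors transforming a `start`
        annotation into a `goal` annotation, or None if no path exists."""
        queue = deque([(start, [])])
        seen = {start}
        while queue:
            schema, pipeline = queue.popleft()
            if schema == goal:
                return pipeline
            for (src, dst), proc in EDGES.items():
                if src == schema and dst not in seen:
                    seen.add(dst)
                    queue.append((dst, pipeline + [proc]))
        return None

    print(configure("raw", "parsed"))  # ['tokenizer', 'pos-tagger', 'parser']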
|
Title: |
BUILDING DOMAIN ONTOLOGIES FROM TEXT ANALYSIS: AN APPLICATION FOR
QUESTION ANSWERING |
Author(s): |
Rodolfo Delmonte |
Abstract: |
In the field of information extraction and
automatic question answering, access to a domain ontology may be of
great help. But the main problem is building such an ontology, a
difficult and time-consuming task. We propose an approach in which the
domain ontology is learned from the linguistic analysis of a number of
texts which represent the domain itself. We have used the GETARUNS
system to make an NLP analysis of the texts. GETARUNS can build a
Discourse Model and is able to assign a relevance score to each entity.
The Discourse Model is then used to extract the best candidates to
become concepts in the domain ontology. To arrange concepts in the
correct hierarchy we use the WordNet taxonomy. Once the domain ontology
is built, we reconsider the texts to extract information. In this phase
the entities recognized at the discourse level are used to create
instances of the concepts. The predicate-argument structure of the verb
is used to construct instance slots for concepts. Eventually, the
question answering task is performed by translating the natural language
question into a suitable form and using that to query the Discourse
Model enriched by the ontology. |
|
Title: |
A DIVERGENCE FROM RANDOMNESS FRAMEWORK OF WORDNET SYNSETS’
DISTRIBUTION FOR WORD SENSE DISAMBIGUATION |
Author(s): |
Kostas Fragos and Christos Skourlas |
Abstract: |
We describe and experimentally evaluate a
method for word sense disambiguation based on measuring the divergence
from randomness of the distribution of WordNet synsets in the context of
a word that is to be disambiguated (the target word). Firstly, for each
word appearing in the context we collect its related synsets from
WordNet using WordNet relations, thus creating the bag of related
synsets for the context. Secondly, for each of the senses of the target
word we study the distribution of its related synsets in the context
bag. Assigning a theoretical random process to those distributions and
measuring the divergence from that random process, we conclude the
correct sense of the target word. The method is evaluated on English
lexical sample data from the Senseval-2 word sense disambiguation
competition, exhibiting remarkable performance and outperforming most
known WordNet-relation-based measures for word sense disambiguation.
Moreover, the method is general and can perform disambiguation assigning
any random process for the distribution of the related synsets, using
any measure to quantify the divergence from randomness. |
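
One way to read the divergence-from-randomness test is as a binomial
tail probability: how surprising is the observed overlap between a
sense's related synsets and the context bag if hits occurred at random?
The base rate and all names below are our assumptions; the abstract does
not state which random process the paper assigns.

    from math import comb

    def binom_tail(k: int, n: int, p: float) -> float:
        """P[X >= k] for X ~ Binomial(n, p): the chance of at least k
        related-synset hits in a context bag of size n at random."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i)
                   for i in range(k, n + 1))

    def disambiguate(context_bag: set[str],
                     sense_synsets: dict[str, set[str]],
                     base_rate: float = 0.01) -> str:
        """Choose the sense whose overlap with the context bag is least
        explainable by chance, i.e. diverges most from randomness."""
        def p_value(sense: str) -> float:
            hits = len(sense_synsets[sense] & context_bag)
            return binom_tail(hits, len(context_bag), base_rate)
        return min(sense_synsets, key=p_value)

The smaller the tail probability, the stronger the evidence that the
context is genuinely related to that sense rather than overlapping with
it by accident.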
|
Title: |
INFORMATION EXTRACTION FROM MEDICAL REPORTS |
Author(s): |
Liliana Ferreira, António Teixeira and João Paulo da Silva Cunha |
Abstract: |
Information extraction technology, as defined
and developed through the U.S. DARPA Message Understanding Conferences
(MUCs), has proved successful at extracting information primarily from
newswire texts and in domains concerned with human activity. This paper
presents an Information Extraction (IE) system, intended to extract
structured information from medical reports written in Portuguese. A
first evaluation is performed and the results are discussed. |
|
Title: |
CLUSTERING BY TREE DISTANCE FOR PARSE TREE NORMALISATION |
Author(s): |
Martin Emms |
Abstract: |
The application of tree distance to clustering
is considered. Factors that have been found to favourably affect the use
of tree distance in question answering are found also to favourably
affect cluster quality. A potential application is in systems that
transform interrogative into indicative sentences, and it is argued that
clustering provides a means to navigate the space of parses assigned to
a large question set. A tree-distance analogue of the vector-space
notion of centroid is proposed, which derives from a cluster a kind of
pattern tree summarising the cluster. |
|
Title: |
A COGNITIVE-BASED APPROACH TO LEARNING INTEGRATED LANGUAGE
COMPONENTS |
Author(s): |
Charles Hannon and Jonathan Clark |
Abstract: |
The learning component of a cognitive-based
language model (LEAP) designed to easily integrate into agent systems is
presented. Building on the Interlaced Micro-Patterns (IMP) theory and
the Alchemy/Goal Mind environment, the LEAP research improves
agent-to-human and agent-to-agent communication by incorporating aspects
of human language development within the framework of general cognition.
Using a corpus of child through youth fiction, we provide evidence that
micro-patterns can be used to simultaneously learn a lexicon, syntax,
thematic roles and concepts. |
|
Title: |
AN APPROACH TO QUERY-BASED ADAPTATION OF SEMI-STRUCTURED DOCUMENTS |
Author(s): |
Corinne Amel Zayani and Florence Sèdes |
Abstract: |
Generally, when semi-structured documents are
queried by a user, the results are relevant but not always adapted.
Indeed, research in the area of semi-structured documents always tries
to improve the relevance of the results delivered for a user’s query.
On the other hand, research in the area of adaptation does not take
into account how semi-structured documents are queried. So, in this
paper we propose an approach to adapt query results according to a
user model that is initialized and updated by the user’s queries. This
adaptation approach is made up of two steps: (i) upstream of the
querying step, enriching the user’s query from the user’s profile;
(ii) downstream of the querying step, adapting document units
according to the user’s characteristics (interests, preferences,
etc.). These two steps are included in our proposal of an architecture
for Adaptive Hypermedia Systems extended from a previous one in
[Zayani et al., 05]. |
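A minimal sketch of the upstream step (i), assuming a hypothetical
profile format of weighted interest terms (not the authors’ data
model):

    def enrich_query(query_terms, profile, top=3):
        # profile: {term: weight}; append the user's strongest interests
        ranked = sorted(profile, key=profile.get, reverse=True)
        extra = [t for t in ranked if t not in query_terms][:top]
        return list(query_terms) + extra

    # enrich_query(["opera"], {"baroque": 0.9, "tickets": 0.4})
    # -> ["opera", "baroque", "tickets"]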
|
Title: |
CHATTERBOX CHALLENGE 2005: GEOGRAPHY OF THE MODERN ELIZA |
Author(s): |
Huma Shah |
Abstract: |
The geography of a modern Eliza provides an
illusion of natural language understanding, as can be seen in the best
of the hundred-plus programmes entered into Chatterbox Challenge 2005
(CBC 2005), an alternative to Loebner’s Contest for artificial
intelligence, Turing’s measure for intelligence through textual
dialogue. These artificial conversational entities (ACE) are able to
maintain lengthy textual dialogues. This paper presents the experience
of the author as one of the Judges in CBC 2005. Not ‘bathed in language
experience’ like their human counterparts, Eliza’s descendants respond
at times humorously and with knowledge but they lack metaphor use, the
very feature of everyday human discourse. However, ACE find success as
virtual e-assistants in single-topic domains. Swedish furniture company
IKEA’s animated avatar Anna, a virtual customer service agent, engages
in twenty thousand conversations daily across eight country sites in
six languages, including English. It provides IKEA’s customers with an
alternative and more natural query system than keyword search to find
products and prices. The author’s findings show that modern Elizas
appear to have come a long way from their ancestor, but understanding
remains in the head of the human user. Until metaphor design is
included, ACE will remain as machine-like as Weizenbaum’s original. |
|
Title: |
ON LEXICAL COHESIVE BEHAVIOR OF HEADS OF DEFINITE DESCRIPTIONS: A
CASE STUDY |
Author(s): |
Beata Beigman Klebanov and Eli Shamir |
Abstract: |
This paper uses materials from annotation
studies of lexical cohesion (Beigman Klebanov and Shamir, 2005) and of
definite reference (Poesio and Vieira, 1998; Vieira, 1998) to discuss
the complementary nature of the two processes. Juxtaposing the two kinds
of annotation provides a unique perspective for observing the workings
of the reader's common-sense knowledge at two levels of text
organization: in patterns of lexis and in realization of discourse
entities.
|
|
Title: |
UNDERSTANDING LONG SENTENCES |
Author(s): |
Svetlana Sheremetyeva |
Abstract: |
This paper describes a natural language
understanding component for parsing long sentences. The NLU component
includes a generation module so that the results of understanding can be
displayed to the user in a natural language and interactively corrected
before the final parse is sent to a subsequent module of a particular
application. Parsing proper is divided into a phrase level and a level
of individual clauses included in a sentence. The output of the parser
is an interlingual representation that captures the content of a whole
sentence. The load of detecting the sentence clause hierarchy level is
shifted to the generator. The methodology is universal in the sense that
it could be used for different domains, languages and applications. We
illustrate it with the example of parsing a patent claim, an extreme
case of a long sentence. |
|
Workshop on Ubiquitous Computing (IWUC-2006) |
Title: |
NOMADIC SHARING OF MEDIA: PROXIMITY DELIVERY OF MASS CONTENT WITHIN
P2P SOCIAL NETWORKS |
Author(s): |
Balázs Bakos and Lóránt Farkas |
Abstract: |
P2P file sharing systems are primarily designed
for personal computers with broadband connections to the Internet.
Despite the fact that mobile phones are increasingly starting to
resemble computers, they are still different in many ways. In our paper
we introduce a novel P2P file sharing system optimized for mobile
phones. We discuss issues that we have found to be important when
proximity technology and mobile phone specific context data is used to
deliver mass content within P2P social networks. These issues include
social group management, enhanced peer discovery, and efficient
multicasting that ensures reliable end-to-end delivery over multiple
hops and across social interactions. An experiment with the
proof-of-concept implementation on the Series 60 Symbian platform shows
that content sharing in social proximity can happen in a cost- and
resource-effective way and that it leads to new social interactions and
mobile communities. |
|
Title: |
DISCOVERING RELEVANT SERVICES IN PERVASIVE ENVIRONMENTS USING
SEMANTICS AND CONTEXT |
Author(s): |
Luke Steller, Shonali Krishnaswamy and Jan Newmarch |
Abstract: |
Recent advances have enabled provision and
consumption of mobile services by small handheld devices. These devices
have limited capability in terms of processing ability, storage space,
battery life, network connectivity and bandwidth, which presents new
challenges for service discovery architectures. As a result, there is an
increased imperative to provide service requestors with services which
are the most relevant to their needs, to mitigate wastage of precious
device capacity and bandwidth. Service semantics must be captured to
match services with requests, on meaning not syntax. Furthermore,
requestor and service context must be utilized during the discovery
process. Thus, there is a need for a service discovery model that brings
together ‘semantics’ and ‘context’. We present a case for bringing
together semantics and context for pervasive service discovery by
illustrating improved levels of precision and recall, or in other words
increased relevance. We also present our model for integrating semantics
and context for pervasive service discovery. |
|
Title: |
IMPLEMENTING A PERVASIVE MEETING ROOM: A MODEL DRIVEN APPROACH |
Author(s): |
Javier Muñoz, Vicente Pelechano and Carlos Cetina |
Abstract: |
Current pervasive systems are developed ad hoc
or using implementation frameworks. These approaches may not be enough
when dealing with large and complex pervasive systems. This paper
introduces an implementation of a pervasive system for managing a
meeting room. This system has been developed using a model driven
method proposed by the authors. The system is specified using PervML, a
UML-like modeling language. Then, a set of templates are applied to the
specification in order to automatically produce Java code that uses an
OSGi-based framework. The final application integrates several
technologies like EIB and Web Services. Three different user interfaces
are provided for interacting with the system. |
|
Title: |
POSITION ESTIMATION ON A GRID, BASED ON INFRARED PATTERN RECEPTION
FEATURES |
Author(s): |
Nikos Petrellis, Nikos Konofaos and George Alexiou |
Abstract: |
The estimation of the position of a moving
target on a grid plane is studied in this paper. The estimation method
is based on the rate at which infrared patterns, transmitted from two
fixed positions, are successfully received by the moving target.
Several aspects of pattern reception, such as the success rates of the
expected and of the scrambled patterns, play an important role in
determining the target coordinates. Our system requires only
ultra-low-cost commercial components. Since the position of the target
is determined by a success rate instead of an analog signal intensity,
no high-precision sensors or measurements are required, and the whole
coordinate estimation can be carried out by a simple microcontroller
on the moving target. The speed
of the estimation is adjustable according to the desired accuracy. An
error of less than 5% could be reached in most of the covered area. The
presented system can be used in a number of automation, robotics and
virtual reality applications where position estimation in an indoor
area of several meters must be performed at regular intervals. |
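One way such an estimator could work, sketched here under the
assumption of a hypothetical calibration table mapping grid cells to
the success rates observed for the two emitters (a nearest-neighbour
lookup, not necessarily the paper’s exact method):

    def estimate_position(measured, calibration):
        # measured: (rate_a, rate_b) reception success rates for the two
        # emitters; calibration: {(x, y): (rate_a, rate_b)} gathered offline
        def err(item):
            ra, rb = item[1]
            return (ra - measured[0]) ** 2 + (rb - measured[1]) ** 2
        return min(calibration.items(), key=err)[0]

    # estimate_position((0.82, 0.40),
    #                   {(0, 0): (0.90, 0.30), (1, 2): (0.80, 0.45)})
    # -> (1, 2)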
|
Title: |
VISUALISATION OF FUZZY CLASSIFICATION OF DATA ELEMENTS IN
UBIQUITOUS DATA STREAM MINING |
Author(s): |
Brett Gillick, Shonali Krishnaswamy, Mohamed Medhat Gaber and
Arkady Zaslavsky |
Abstract: |
Ubiquitous data mining (UDM) allows data mining
operations to be performed on continuous data streams using resource
limited devices. Visualisation is an essential tool to assist users in
understanding and interpreting data mining results and to aid the user
in directing further mining operations. However, there are currently no
on-line real-time visualisation tools to complement the UDM algorithms.
In this paper we investigate the use of visualisation techniques, within
an on-line real-time visualisation framework, in order to enhance UDM
result interpretation on handheld devices. We demonstrate a proof of
concept implementation for visualising degree of membership of data
elements to clusters produced using fuzzy logic algorithms. |
|
Title: |
DESIGN GUIDELINES FOR ANALYSIS AND SAFEGUARDING OF PRIVACY THREATS
IN UBICOMP APPLICATIONS |
Author(s): |
Elena Vildjiounaite, Petteri Alahuhta, Pasi Ahonen, David Wright
and Michael Friedewald |
Abstract: |
Realization of the Ubiquitous Computing vision
in the real world creates serious threats to personal privacy due to
constant information collection by numerous tiny sensors, active
information exchange over short and long distances, long-term storage
of large quantities of data, and reasoning on collected and stored
data. However, an analysis of over 100 Ubicomp scenarios shows that
even today applications are often developed without considering
privacy problems. This work suggests guidelines for estimating threats
to privacy, depending on real-world application settings and on the
choice of technology, and guidelines for developing technological
safeguards against privacy threats. |
|
Title: |
M-TRAFFIC - A TRAFFIC INFORMATION AND MONITORING SYSTEM FOR MOBILE
DEVICES |
Author(s): |
Teresa Romão, Luís Rato, Pedro Fernandes, Nuno Alexandre, Antão
Almada and Nuno Capeta |
Abstract: |
Traffic information is crucial in metropolitan
areas, where a high concentration of moving vehicles causes traffic
congestion and blockage. Appropriate traffic information received at the
proper time helps users to avoid unnecessary delays, choosing the
fastest route that serves their purposes. This paper presents Mobile
Traffic (M-Traffic), a multiplatform online traffic information system,
which provides real time traffic information based on image processing,
sensor data and traveller behaviour models. In order to estimate route
delays and feed the optimal routing algorithm, a microscopic traffic
simulation model is developed and simulation results are presented. This
mobile information service ubiquitously provides users with traffic
information regarding their needs and preferences, according to an alert
system, which allows a personalised pre-definition of warning messages. |
|
Title: |
ON-DEMAND LOADING OF PERVASIVE-ORIENTED APPLICATIONS USING
MASS-MARKET CAMERA PHONES |
Author(s): |
Marco Avvenuti and Alessio Vecchio |
Abstract: |
Camera phones are the first realistic platform
for the development of pervasive computing applications: they are
personal, ubiquitous, and the built-in camera can be used as
context-sensing equipment. Unfortunately, currently available systems
for pervasive computing, which emerged from both academic and
industrial research, can be adopted only on a small fraction of the
devices already deployed or to be produced in the near future. In this
paper we present an extensible programming infrastructure that turns
mass-market camera phones into a platform for pervasive computing. |
|
Title: |
A DESIGN THEORY FOR PERVASIVE INFORMATION SYSTEMS |
Author(s): |
Panos E. Kourouthanassis and George M. Giaglis |
Abstract: |
Pervasive Information Systems (PIS) constitute
an emerging class of Information Systems where Information Technology is
gradually embedded in the physical environment, capable of accommodating
user needs and wants when desired. PIS differ from Desktop Information
Systems (DIS) in that they encompass a complex, dynamic environment
composed of multiple artefacts instead of Personal Computers only,
capable of perceiving contextual information instead of simple user
input, and supporting mobility instead of stationary services. This
paper aims at proposing a design theory for PIS. In particular, we have
employed (Walls et al. 1992)’s framework of Information Systems Design
Theories (ISDT) to develop a set of prescriptions that guide the design
of PIS instances. The design theory addresses both the design product
and the design process by specifying four meta-requirements, nine
meta-design considerations, and five design method considerations. The
paper focuses mainly on the design theory itself and does not address
issues concerning its validation. However, in the concluding remarks we
briefly discuss the activities we undertook to validate our theoretical
suggestions. |
|
Title: |
AN APPROACH FOR APPLICATIONS SUITABILITY ON PERVASIVE ENVIRONMENTS |
Author(s): |
Andres Flores and Macario Polo |
Abstract: |
This work is related to the area of
Component-based Software Development, particularly to large-scale
distributed systems such as Pervasive Computing Environments. We focus
on the automation of a Component Integration Process as a support for
run-time adjustment of applications when the environment involves
highly dynamic changes of requirements. Such integration implies
evaluating whether or not components satisfy a given model. The
assessment procedure is based on syntactic and semantic aspects, where
the latter involves assertions and usage protocols. We have implemented
the current state of our approach on the .NET platform to gain
understanding of its complexity and effectiveness. |
|
Workshop on Security In Information Systems
(WOSIS-2006) |
Title: |
GRID AUTHORIZATION BASED ON EXISTING AAA ARCHITECTURES |
Author(s): |
Manuel Sánchez, Gabriel López, Óscar Cánovas and Antonio F.
Gómez-Skarmeta |
Abstract: |
Grid computing has appeared as a new paradigm
to cover the needs of modern scientific applications. A lot of research
has been done in this field, but several issues are still open. One of
them, Grid authorization, is probably one of the most important topics
for resource providers, because they need to control the users
accessing their resources. Several authorization architectures
have been proposed, including in some cases new elements which introduce
redundant components to the system. In this paper, we propose a new
scheme which takes advantage of a previously existing underlying
authorization infrastructure among the involved organizations, the
NAS-SAML system, to build a Grid environment with an advanced and
extensible authorization mechanism. |
|
Title: |
A SECURE UNIVERSAL LOYALTY CARD |
Author(s): |
Sébastien Canard, Fabrice Clerc and Benjamin Morin |
Abstract: |
In this paper, we propose a generic loyalty
system based on smart cards which may be implemented in existing devices
like cell phones or PDAs. Our loyalty system is secure and offers some
desirable features both to customers and vendors, and may further the
adoption of such win-win marketing operations. In particular, the system
is universal in the sense that there is a one-to-many relationship
between a customer's loyalty card and the vendors and the system is
reliable for both parties. |
|
Title: |
A DISTRIBUTED KERBERIZED ACCESS ARCHITECTURE FOR REAL TIME GRIDS |
Author(s): |
A. Moralis, A. Lenis, M. Grammatikou, S. Papavassiliou and B.
Maglaris |
Abstract: |
Authentication, authorization and encryption in
large scale distributed Grids are usually based on a Public Key
Infrastructure (PKI) with asymmetric encryption and X.509 – Proxy
certificates for user single sign-on to resources. This approach,
however, introduces processing overhead that may be undesirable in near
real time Grid applications (e.g. Grids used for time critical
instrument monitoring and control). To alleviate this we introduce in
this paper a Symmetric Key – Kerberos based approach that scales in
large Grid environments. We present a Use Case Scenario to test and
validate the proposed Architecture, in case of numerous time-critical
requests running in parallel. |
|
Title: |
A MODEL DRIVEN APPROACH FOR SECURE XML DATABASE DEVELOPMENT |
Author(s): |
Belén Vela, Eduardo Fernández-Medina, Esperanza Marcos and Mario
Piattini |
Abstract: |
In this paper, we propose a methodological
approach for the model driven development of secure XML Databases (DB).
This proposal is under the framework of MIDAS, a model driven
methodology for the development of Web Information Systems (WIS) based
on the Model Driven Architecture (MDA) proposed by the Object Management
Group (OMG). The XML DB development process in MIDAS proposes to use as
Platform Independent Model (PIM) the data conceptual model and as
Platform Specific Model (PSM) the XML Schema model, both of them
represented in UML. In this work, such models will be modified to be
able to add security aspects if the stored information is considered as
critical. On the one hand, we propose the use of a UML extension to
incorporate security aspects at the conceptual secure DB development
(PIM) level; on the other hand, the previously defined XML Schema
profile will be modified so as to incorporate security aspects at the
logical secure XML DB development (PSM) level. In addition, the
semi-automatic mappings to pass from PIM to PSM for secure
XML DB will be defined. The development process of a secure XML DB will
be shown through a case study: a WIS for the management of hospital
information in an XML DB. |
|
Title: |
SECTET – AN EXTENSIBLE FRAMEWORK FOR THE REALIZATION OF SECURE
INTER-ORGANIZATIONAL WORKFLOWS |
Author(s): |
Michael Hafner, Ruth Breu, Berthold Agreiter and Andrea Nowak |
Abstract: |
SECTET is an extensible framework for the
model-driven realization of security-critical, inter-organizational
workflows. The framework is based on a methodology that focuses on the
correct implementation of security-requirements and consists of a suite
of tools that facilitates the cost-efficient realization and management
of decentralized, security-critical workflows. After giving a
description of the framework, we show how it can be adapted to
incorporate advanced security patterns like the Qualified Signature,
which implements a legal requirement specific to e-government. It
extends the concept of digital signature by requiring that the signatory
be a natural person.
|
|
Title: |
ROBUST-AUDIO-HASH SYNCHRONIZED AUDIO WATERMARKING |
Author(s): |
Martin Steinebach, Sascha Zmudzinski and Sergey Neichtadt
|
Abstract: |
Digital audio watermarking has become an
accepted technology, for example for the protection of music downloads.
While common challenges to robustness, like lossy compression or
analogue transmission, have been solved in the past, loss of
synchronization due to time stretching is still an issue. We present a
novel approach to audio watermarking synchronization in which a robust
audio hash is applied to identify watermarking positions. |
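For illustration only, in the spirit of well-known energy-based robust
audio hashes rather than the authors’ algorithm: hash bits can be
derived from the relative energies of successive frames, and a marked
position re-found by minimising the Hamming distance to a stored
reference hash:

    def robust_hash(samples, frame=1024):
        # one bit per frame boundary: does the energy rise or fall?
        e = [sum(s * s for s in samples[i:i + frame])
             for i in range(0, len(samples) - frame + 1, frame)]
        return [1 if a > b else 0 for a, b in zip(e, e[1:])]

    def hamming(h1, h2):
        return sum(a != b for a, b in zip(h1, h2))

Synchronization then amounts to sliding a window over the received
audio and keeping the offset whose hash is closest to the reference.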
|
Title: |
SECURING MOBILE HEALTHCARE SYSTEMS BASED ON INFORMATION
CLASSIFICATION: DITIS CASE STUDY |
Author(s): |
Eliana Stavrou and Andreas Pitsillides |
Abstract: |
Healthcare applications require special
attention regarding security issues, since healthcare is associated
with mission-critical services that are connected with people’s
well-being and lives. Security raises special considerations when
mobility is introduced into the healthcare environment. This research
work proposes a security framework for mobile healthcare systems based
on the classification of information into security levels. By
categorizing the information used in mobile healthcare systems and
linking it with security objectives and security technologies, we aim
at balancing the trade-off between security complexity and
performance. Furthermore, this paper
discusses a number of issues that are raised in the healthcare
environment: privacy, confidentiality, integrity, legal and ethical
considerations. |
|
Title: |
INFORMATION SECURITY AND BUSINESS CONTINUITY IN SMES |
Author(s): |
Antti Tuomisto and Mikko Savela |
Abstract: |
How can SMEs allocate their scarce resources
to information and business security in a manner that keeps prevention
and recovery activities in balance from the business continuity
perspective? This study investigates the role of information and
communication technology (ICT) in the evolving businesses of small and
medium-sized enterprises (SMEs). The use of IT in SMEs is generally
reported mainly in technical terms. However, the multifaceted
development of the global information society puts more pressure on
all lines of business, whether knowledge-work intensive and dependent
on higher education or not. Our framework is independent of the line
of business, and we attempt to reach all types and sizes of SMEs, and
even entrepreneurs, in order to describe the many characteristics of
the current information security situation. The empirical research in
this study builds on the premise that the key unit of analysis is
line-of-business knowledge and the persons possessing that knowledge.
The role of ICT is more or less normalized to a few areas of core
business data. The trends of globalization and digitalization of the
networked and mobile information society are under way, but the
question is what everyday business and everyday work look like
meanwhile. Our framework has three levels: i) the current business
situation from the traditional information security perspective, ii)
the tensions between business continuity and the upcoming
technological visions of the near future, and iii) the knowledge of
the workers. This article emphasises the first two levels, while the
third level justifies the concepts. Information-intensive work in SMEs
is yet to come, but we must still ensure awareness of the importance
of existing systems to business continuity. Securing the resources for
practical and effective information security, and concretizing this
appropriately for the relevant work roles and persons, is one of the
key findings. Further, based on the results of the empirical study, we
found that the scarce resources should be used for actions which are
in line with the technological infrastructure and the business itself
(the information and knowledge requirements of the actors). These
findings suggest that SMEs’ comprehension of current and forthcoming
challenges in business security is only partly ICT-related, although
at the same time the very continuity of business (and efficient
performance and high customer satisfaction) involves more and more
ICT. The research results on the current and near-future state of
affairs were used to construct a collection of guidelines for a
practical and inexpensive information security policy in SMEs. After
all, competition in business is heavy and increasingly international,
and thus there is no room for giving any advantage to competitors. |
|
Title: |
THE PLACE AND ROLE OF SECURITY PATTERNS IN SOFTWARE DEVELOPMENT
PROCESS |
Author(s): |
Oleksiy Mazhelis and Anton Naumenko |
Abstract: |
Security is one of the key quality attributes
for many contemporary software products. Designing, developing, and
maintaining such software necessitates the use of a secure-software
development process which specifies how achieving this quality goal can
be supported throughout the development lifecycle. In addition to
satisfying the explicitly stated functional security requirements, such
a process is aimed at minimising the number of vulnerabilities in the
design and the implementation of the software. The secure software
development is a challenging task spanning various stages of the
development process. This inherent difficulty may be to some extent
alleviated by the use of the so-called security patterns, which
encapsulate the knowledge about successful solutions to recurring
security problems. In this paper, the state of the art in secure
software development processes is reviewed, and the role and place of
security patterns in these processes is described. The current usage of
patterns in secure software development is analysed, taking into account
both the role of these patterns in the development processes, and the
limitations of the security patterns available. |
|
Title: |
THE SOFTWARE INFRASTRUCTURE OF A JAVA CARD BASED SECURITY PLATFORM
FOR DISTRIBUTED APPLICATIONS |
Author(s): |
Serge Chaumette, Achraf Karray and Damien Sauveron |
Abstract: |
The work presented in this paper is part of the
Java Card Grid project carried out at LaBRI, Laboratoire Bordelais de
Recherche en Informatique. The aim of this project is to build a
hardware platform and the associated software components to experiment
on the security features of distributed applications. To achieve this
goal we use the hardware components that offer the highest security
level: smart cards. We do not pretend that the resulting platform can
compare to a real grid in terms of computational power, but it serves as
a proof of concept for what a grid with secure processors could be and
could do. As of writing, the hardware platform comprises 32 card readers
and two PCs to manage them. The applications that we run on our platform
are applications that require a high level of confidentiality regarding
their own binary code, the input data that they handle, and the results
that they produce. Even though we know that we cannot expect our grid to
achieve high speed computation, we believe that it is a good testbed to
experiment on the security features that one would require in a real
grid environment. This paper focuses on the software infrastructure that
we have set up to manage the platform and on the framework that we have
designed and implemented to develop real applications on it. |
|
Title: |
A NEW METHOD FOR EMBEDDING SECRET DATA TO THE CONTAINER IMAGE USING
‘CHAOTIC’ DISCRETE ORTHOGONAL TRANSFORMS |
Author(s): |
Vladimir Chernov and Oleg Bespolitov |
Abstract: |
In this paper a method for embedding a secret
image into a container image is considered. The method is based on the
specific spectral properties of a particular two-dimensional discrete
orthogonal transform. The values of the functions forming the basis of
these transforms are ‘chaotically’ distributed. The synthesis of these
bases rests on two ideas: firstly, on 1D M-transforms, which were
introduced and investigated in certain particular cases by
H.-J. Grallert; and secondly, on the application of the canonical
number systems in finite fields introduced by I. Kàtai to numbering
the input image pixels. |
|
Title: |
MODELING DECEPTIVE ACTION IN VIRTUAL COMMUNITIES |
Author(s): |
Yi Hu and Brajendra Panda |
Abstract: |
Trust and shared interest are the building
blocks for most relationships in human society. A deceptive action and
the associated risks can affect many people. Although trust
relationships in virtual communities can be built up more quickly and
easily, they are more fragile. This research concentrates on analyzing
the Information
Quality in the open rating systems; especially studying the way
deceptive data spread in virtual communities. In this paper, we have
proposed several novel ideas on assessing deceptive actions and how the
structure of the virtual community affects the information flow among
subjects in the web of trust. Furthermore, our experiments illustrate
how deceptive data would spread and to what extent the deceptive data
would affect subjects in virtual communities. |
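As a purely illustrative sketch of how deceptive data might spread
through a web of trust (a simple damped propagation over a weighted
graph, not the paper’s model):

    def spread_deception(graph, source, damping=0.5, cutoff=1e-3):
        # graph: {subject: [(neighbour, trust weight in [0, 1])]}
        impact = {source: 1.0}
        frontier = [source]
        while frontier:
            nxt = []
            for n in frontier:
                for m, w in graph.get(n, []):
                    v = impact[n] * w * damping
                    if v > impact.get(m, 0.0) and v > cutoff:
                        impact[m] = v
                        nxt.append(m)
            frontier = nxt
        return impact  # how strongly each subject is affected

    # spread_deception({"a": [("b", 0.8)], "b": [("c", 0.5)]}, "a")
    # -> {"a": 1.0, "b": 0.4, "c": 0.1}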
|
Title: |
SREP: A PROPOSAL FOR ESTABLISHING SECURITY REQUIREMENTS FOR THE
DEVELOPMENT OF SECURE INFORMATION SYSTEMS |
Author(s): |
Daniel Mellado, Eduardo Fernández-Medina and Mario Piattini |
Abstract: |
Nowadays, security solutions are mainly focused
on providing security defences, instead of addressing one of the main
causes of security problems: the appropriate design of Information
Systems (IS). In this paper a proposal for establishing security
requirements for the development of secure IS is presented. Our
approach is an asset-based and risk-driven method built on the reuse
of security requirements: it provides a security resources repository
together with the integration of the Common Criteria into the
traditional software lifecycle model, so that it conforms to ISO/IEC
15408. Starting from the concept of iterative software construction,
we propose a micro-process for security requirements analysis that is
repeatedly performed at each level of abstraction throughout the
incremental development. In brief, we present an approach which deals
with security requirements at the first stages of software development
in a systematic and intuitive way, and which conforms to ISO/IEC
17799:2005. |
|
Title: |
HONEYNETS IN 3G – A GAME THEORETIC ANALYSIS |
Author(s): |
Christos K. Dimitriadis |
Abstract: |
Although security improvements were implemented
in the air interface of Third Generation (3G) mobile systems, important
security vulnerabilities remain in the mobile core network, threatening
the whole service provision path. This paper presents an overview of
the results of a security assessment of the Packet Switched domain of
a mobile operator’s core network and demonstrates, by means of game
theory, the benefits of implementing a Honeynet in 3G infrastructures. |
|
Title: |
PROTECTING NOTIFICATION OF EVENTS IN MULTIMEDIA SYSTEMS |
Author(s): |
Eva Rodríguez, Silvia Llorente and Jaime Delgado |
Abstract: |
Protection of multimedia information is an
important issue for the current actors in the multimedia distribution
value chain. Security techniques exist for protecting the multimedia
content itself, like encryption, watermarking, fingerprinting and so
on. Nevertheless, at a higher level, other mechanisms can be used for
the protection and management of multimedia information. One of these
mechanisms is the notification of events describing actions done by
the different actors of the multimedia value chain (from content
creator to final user) over the different delivery channels. It is
possible to describe event notifications in a standard way by using
MPEG-21 Event Reporting. However, the current standard initiative does
not take into account the security of the events being notified. In
this paper we present a possible solution to this problem by combining
two different parts of the MPEG-21 standard: Event Reporting (ER) and
Intellectual Property Management and Protection (IPMP). |
|
Title: |
SECURITY PATTERNS RELATED TO SECURITY REQUIREMENTS |
Author(s): |
David G. Rosado, Carlos Gutiérrez, Eduardo Fernández-Medina and
Mario Piattini |
Abstract: |
In the information technology environment,
patterns give information system architects a method for defining
reusable solutions to design problems. The purpose of using patterns is
to create a reusable design element. We can obtain, in a systematic way,
a security software architecture that contains a set of security design
patterns from the security requirements found. Several important aspects
of building software systems with patterns are not addressed yet by
today’s pattern descriptions. Examples include the integration of a
pattern into a partially existing design, and the combination of
patterns into larger designs. Now, we want to use these patterns in our
architectures, designs, and implementations. |
|
Title: |
TOWARDS A UML 2.0 PROFILE FOR RBAC MODELING IN ACTIVITY DIAGRAMS |
Author(s): |
Alfonso Rodríguez, Eduardo Fernández-Medina and Mario Piattini |
Abstract: |
Business Processes are a crucial issue for many
companies because they are the key to maintaining competitiveness.
Moreover, business processes are important for software developers,
since from them they can capture the necessary requirements for software
design and creation. Besides, business process modeling is the center
for conducting and improving how the business is operated. Security is
important for business performance, but traditionally, it is considered
after the business processes definition. Empirical studies show that, at
the business process level, customers, end users, and business analysts
are able to express their security needs. In this work, we will present
a proposal aimed at integrating security requirements and role
identification for RBAC, through business process modeling. We will
summarize our UML 2.0 profile for modeling secure business process
through activity diagrams, and we will apply this approach to a typical
health-care business process. |
|
Title: |
NAMES IN CRYPTOGRAPHIC PROTOCOLS |
Author(s): |
Simone Lupetti, Feike W. Dillema and Tage Stabell-Kulø |
Abstract: |
Messages in cryptographic protocols are made up
of a small set of elements: keys, nonces, timestamps, and names,
amongst others. These elements must possess specific properties to be
useful for their intended purpose. Some of these properties are
prescribed as part
of the protocol specification, while others are assumed to be inherited
from the execution environment. We focus on this latter category by
analyzing the security properties of names. We argue that to fulfill
their role in cryptographic protocols, names must be unique across
parallel sessions of the same protocol and that uniqueness must be
guaranteed to hold for each participant of these runs. We discuss how
uniqueness can be provided and verified by the interested parties. To do
so, two different mechanisms are shown possible, namely local and global
verification. In both cases we discuss the implications of uniqueness on
the execution environment of a cryptographic protocol, pointing out the
inescapable issues related to each of the two mechanisms. Finally, we
argue that such implications should be given careful consideration as
they represent important elements in the evaluation of a cryptographic
protocol itself. |
|
Title: |
SECURE DEPLOYMENT OF APPLICATIONS TO FIELDED DEVICES AND SMART
CARDS |
Author(s): |
William G. Sirett, John A. MacDonald, Keith Mayes and Konstantinos
Markantonakis |
Abstract: |
This work presents a process of securely
deploying applications to fielded devices and smart cards whilst taking
into consideration the possibility that the fielded device could be
malicious. Advantages of the proposed process are caching functionality
upon the device, optimal use of resources, and the employment of nested
security contexts, whilst addressing fielded infrastructures with a
homogeneous solution. This work outlines a targeted scenario, details
existing malicious device activity and defines an attacker profile.
Assumptions and requirements are drawn up, and an analysis of the
proposal and attack scenarios is conducted. Advantages and deployment
scenarios are presented, together with an implementation of the process
using Java and existing standards. |
|
Title: |
IMPROVING INTRUSION DETECTION THROUGH ALERT VERIFICATION |
Author(s): |
Thomas Heyman, Bart De Win, Christophe Huygens and Wouter Joosen |
Abstract: |
Intrusion detection systems (IDS) suffer from a
lack of scalability. Alert correlation has been introduced to address
this challenge and is generally considered to be the major part of the
solution. One of the steps in the correlation process is the
verification of alerts. We present a generic intrusion detection
architecture. We have identified the relationships and interactions
between correlation and verification. An overview of verification tests
proposed in the literature is presented and refined. Our contribution is to
integrate these tests in a generic framework for verification. A
proof-of-concept implementation is presented and a first evaluation is
made. We conclude that verification is a viable extension to the
intrusion detection process. Its effectiveness is highly dependent on
contextual information. |
|
Title: |
AN AUDIT METHOD OF PERSONAL DATA BASED ON REQUIREMENTS ENGINEERING |
Author(s): |
Miguel A. Martínez, Joaquín Lasheras, Ambrosio Toval and Mario
Piattini |
Abstract: |
Security analysis of computer systems studies
the vulnerabilities that affect an organization from various points of
view. In recent years, a growing interest in guaranteeing that the
organization makes a suitable use of personal data has been identified.
Furthermore, the privacy of personal data is regulated by the Law and is
considered important in a number of Quality Standards. This paper
presents a practical proposal to make a systematic audit of personal
data protection - within the framework of CobiT audit - based on SIREN.
SIREN is a method of Requirements Engineering based on standards of this
discipline and requirements reuse. The requirements predefined in the
SIREN catalog of Personal Data Protection (PDP), along with a method of
data protection audit, based on the use of this catalog, can provide
organizations with a guarantee of ensuring the privacy and the good use
of personal data. The audit method proposed in this paper has been
validated following the Action Research method, in a case study of a
medical center, which has a high level of protection in the personal
data that it handles. |
|
Title: |
AN ONTOLOGY-BASED DISTRIBUTED WHITEBOARD TO DETERMINE LEGAL
RESPONSES TO ONLINE CYBER ATTACKS |
Author(s): |
Leisheng Peng, Duminda Wijesekera, Thomas C. Wingfield and James B.
Michael |
Abstract: |
Today’s cyber attacks come from many Internet
and legal domains, requiring a coordinated, swift and legitimate
response. Consequently, determining the legality of a response requires
a coordinated, consensual legal argument that weaves together legal
sub-arguments from all participating domains. Doing so as a precursor
to forensic analysis provides legitimacy to the process. We describe a
tool that can be used to weave such a legal argument securely over the
WWW. Our tool is a legal whiteboard that allows a participating group
of attorneys to meet in Cyberspace in real time and construct a legal
argument graphically by using a decision tree. A tree constructed this
way and verified to withstand anticipated legal challenges can then be
used to guide forensic experts and law enforcement personnel during
their active responses and off-line examinations. In our tool, the
group of attorneys constructing the legal argument elects a leader
(the master builder) who permits (through access control) the group to
construct a decision tree that, when populated with the actual
parameters of a cyber incident, will output a decision. During the
course of the construction, all participating attorneys can construct
sub-parts of the argument that can be substantiated with relevant
legal documents from their own legal domains. Because diverse legal
domains use different nomenclatures, we provide the capability to
index and search legal documents using a complex international legal
ontology that goes beyond traditional LexisNexis-like legal databases.
This ontology itself can be created using the tool from remote
locations. Once the sub-arguments are made, they are submitted through
a ticketing mechanism to the master builder, who has the final
authority to approve and synchronize the sub-trees into the final
decision tree with all its attached legal documents. Our tool has been
fine-tuned through numerous interviews with practicing attorneys in
the subject area of cyber crime.
|
|
Title: |
AN ELECTRONIC VOTING SYSTEM SUPPORTING VOTEWEIGHTS |
Author(s): |
Charlott Eliasson and André Zúquete |
Abstract: |
Typically each voter contributes with one vote
for an election. But there are some elections where voters can have
different weights associated with their vote. In this paper we provide a
solution for allowing an arbitrary number of weights and weight values
to be used in an electronic voting system. We chose REVS (Robust
Electronic Voting System), a voting system designed to support Internet
voting processes, as the starting point for studying the introduction
of vote weights. To the best of our knowledge, our modified version of REVS
is the first electronic voting system supporting vote weights. Another
novelty of the work presented in this paper is the use of sets of RSA
key pairs with a common modulus per entity, for saving both generation
time and space. |
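A toy sketch of that common-modulus idea: several RSA exponent pairs
generated over a single modulus held by one entity. The tiny primes are
for illustration only; a real system would use cryptographically sized
primes and a hardened generation procedure:

    from math import gcd

    def keypairs_common_modulus(p, q, count):
        # one modulus n, several (e, d) pairs: saves generation time and space
        n, phi = p * q, (p - 1) * (q - 1)
        pairs, e = [], 3
        while len(pairs) < count:
            if gcd(e, phi) == 1:
                pairs.append((e, pow(e, -1, phi)))  # d = e^-1 mod phi
            e += 2
        return n, pairs

    # n, pairs = keypairs_common_modulus(61, 53, 3)
    # every (e, d) satisfies pow(pow(m, e, n), d, n) == m for m < n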
|
Title: |
DEVELOPING A MATURITY MODEL FOR INFORMATION SYSTEM SECURITY
MANAGEMENT WITHIN SMALL AND MEDIUM SIZE ENTERPRISES |
Author(s): |
Luis Enrique Sánchez, Daniel Villafranca, Eduardo Fernández-Medina
and Mario Piattini |
Abstract: |
For enterprises to be able to use information
and communication technologies with guarantees, adequate security
management must be available. This requires that enterprises always
know their current maturity level and to what extent their security
must evolve. Current maturity models have shown themselves to be
inefficient in small and medium size enterprises, since these
enterprises face a series of additional problems when implementing
security management systems. In this paper, we analyse the
security-oriented maturity models existing in the market, examining
their main disadvantages with regard to small and medium size
enterprises, using ISO/IEC 17799 as a reference framework. This
approach is being directly applied to real cases, thus obtaining
constant improvement in its application. |
|
Title: |
ANALYZING PREAUTHENTICATION TIMESTAMPS TO CRACK KERBEROS V
PASSWORDS |
Author(s): |
Ahmed Alazzawe, Anis Alazzawe, Asad Nawaz and Duminda Wijesekera |
Abstract: |
Kerberos V is widely deployed to provide
authentication services and is included in the popular Microsoft
Windows 2000/2003 servers. Kerberos V introduced several improvements
over its previous version. One of these improvements is a
pre-authentication scheme that makes an offline password attack more
difficult. When pre-authentication is used with a pass phrase, though,
it too becomes susceptible to a similar type of offline attack. By
capturing and using the timestamp information from the
pre-authentication data, we can decrease the time needed to obtain the
password. This paper examines the computation saved by using this
knowledge of the timestamp in attacking Kerberos 5 pre-authentication
data to obtain the password. We wrote a program, Kerb_Cruncher, which
breaks apart the pre-authentication data in an attempt to recover the
client’s password. It uses a well-known cryptographic library and
operates in two modes to perform the decryption of the data. One mode
performs the attack without using the timestamp. The other mode skips
the last HMAC computation used in the verification process and instead
looks for a timestamp to determine whether the decryption process
succeeded. Our findings show that by performing the timestamp check
rather than the final HMAC computation we save a noticeable amount of
time, as well as processing cycles. |
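A heavily simplified sketch of the timestamp shortcut described above.
The string-to-key and cipher functions are hypothetical stand-ins (real
Kerberos encryption types define their own), but the core idea is
visible: accept a candidate password as soon as the decrypted
pre-authentication data contains a plausible KerberosTime, skipping the
final HMAC verification:

    import hashlib
    import re

    KERBEROS_TIME = re.compile(rb"\d{14}Z")  # e.g. b"20060101120000Z"

    def crack_preauth(preauth_blob, wordlist, decrypt):
        # decrypt(key, blob) -> bytes stands in for the realm's cipher
        for password in wordlist:
            key = hashlib.sha1(password.encode()).digest()  # stand-in s2k
            plaintext = decrypt(key, preauth_blob)
            if plaintext and KERBEROS_TIME.search(plaintext):
                return password  # plausible timestamp: skip the HMAC check
        return None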
|
Workshop on Computer Supported Activity
Coordination (CSAC-2006) |
Title: |
ELECTRONIC DATA INTERCHANGE SYSTEM FOR SAFETY CASE MANAGEMENT |
Author(s): |
Alan Eardley, Oleksy Shelest and Saeed Fararooy |
Abstract: |
In this paper the theory of Electronic Data
Interchange (EDI) is applied to the safety case management domain in
order to evaluate its benefits and to assess the potential issues
relating to data exchange within certain industry sectors. The work is
undertaken to establish best practice, to examine successful
techniques identified by other researchers in the area of XML/EDI, and
to implement them in data transmission from a safety case management
tool based on MS Visio to the Integrated Safety Case Development
Environment (ISCaDE) residing on a Dynamic Object Oriented
Requirements System (DOORS) database. Goal Structuring Notation (GSN)
and its XML dialect, Goal Structuring Mark-up Language (GSML),
developed at the University of York, are used to produce messaging
specifications to represent the safety case model. Furthermore, these
specifications are used to build EDI software to transport data across
different safety packages. |
|
Title: |
MINING SELF-SIMILARITY IN TIME SERIES |
Author(s): |
Song Meina, Zhan Xiaosu and Song Junde |
Abstract: |
Self-similarity can successfully characterize
and forecast intricate, non-periodic and chaotic time series, avoiding
the limitations of traditional methods with respect to LRD (Long-Range
Dependence). The underlying principles can be found and the future,
unknown part of a time series can be forecast through prior training.
It is therefore important to mine the LRD by self-similarity analysis.
In this paper, mining the self-similarity of time series is
introduced, and its practical value is demonstrated in two case
studies: season-variable trend forecasting and network traffic. |
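Long-range dependence is commonly quantified by the Hurst exponent H,
with H > 0.5 indicating LRD. A minimal rescaled-range (R/S) estimator,
sketched for illustration and not taken from the paper (it assumes a
reasonably long series):

    import math

    def hurst_rs(series, sizes=(8, 16, 32, 64)):
        def rescaled_range(chunk):
            m = sum(chunk) / len(chunk)
            cum = lo = hi = dev = 0.0
            for x in chunk:
                cum += x - m
                lo, hi = min(lo, cum), max(hi, cum)
                dev += (x - m) ** 2
            s = math.sqrt(dev / len(chunk))
            return (hi - lo) / s if s else 0.0
        pts = []
        for n in sizes:
            vals = [rescaled_range(series[i:i + n])
                    for i in range(0, len(series) - n + 1, n)]
            vals = [v for v in vals if v > 0]
            if vals:
                pts.append((math.log(n), math.log(sum(vals) / len(vals))))
        # H is the least-squares slope of log(R/S) against log(n)
        xb = sum(x for x, _ in pts) / len(pts)
        yb = sum(y for _, y in pts) / len(pts)
        return (sum((x - xb) * (y - yb) for x, y in pts)
                / sum((x - xb) ** 2 for x, _ in pts))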
|
Title: |
SYSTEM ARCHITECTURE DESIGN FOR WAP SERVICES BASED ON MISC PLATFORM |
Author(s): |
Qun Yu, Meina Song, Junde Song and Xiaosu Zhan |
Abstract: |
WAP services have become an available method
for subscribers to access the mobile Internet through a mobile
terminal anywhere and anytime. In this paper, a logical architecture
for WAP [1] (Wireless Application Protocol) service systems based on
the MISC [2] (Mobile Information Service Centre) platform is
discussed. The whole system is designed and developed on the J2EE [3]
(Java 2 Enterprise Edition) architecture and deployed on the BEA
WebLogic Platform [4]. The WAP services MM (Maintenance and
Management) system is implemented with JavaBeans [3] and JSP [3] (Java
Server Pages), while the service logic interfaces are implemented
through EJB [3][12] (Enterprise Java Beans), which are more flexible,
portable and scalable, and allow WAP services to interface with the
designed system in many more styles. Besides, the client-side WAP page
display is developed with Java Servlets [3][13]. Furthermore, several
WAP services based on this WAP system platform have been developed,
which validates the practical value of the system. To conclude, an
optimal scheme, which reduces development complexity, deployment risk
and so on, is presented in this paper.
|
|
Title: |
MANAGING NETWORK TROUBLES WHILE INTERACTING WITHIN COLLABORATIVE
VIRTUAL ENVIRONMENTS |
Author(s): |
Thierry Duval and Chadi El Zammar |
Abstract: |
We are interested in real time collaborative
interactions within CVEs. This domain relies on the low latency offered
by high speed networks. Participants of networked collaborative virtual
environments can suffer from misunderstanding the weird behavior of
some objects of the virtual universe, especially when low-level network
troubles occur during a collaborative session. Our aim is to make such a
virtual world easier to understand by using some graphic visualizations
(dedicated 3D metaphors) in such a way that the users become aware of
these problems. In this paper we present how two independent mechanisms
may be coupled for better management and awareness of network troubles
while interacting within a networked collaborative virtual environment.
The first mechanism is an awareness system that visualizes, through
special metaphors, the existence of a network trouble such as a strong
delay or a disconnection. The second mechanism is a virtual object
migration system that allows the migration of an object from one site
to another to ensure uninterrupted manipulation in case of network
troubles. We detail only the awareness system and show how it uses the
migration system to allow users to keep interacting when network
breakdowns occur. |
|
Title: |
AN ACTIVE DATABASE APPROACH TO COMPUTERISED CLINICAL GUIDELINE
MANAGEMENT |
Author(s): |
Kudakwashe Dube and Bing Wu |
Abstract: |
This paper presents a generic approach, and a
case study practising the approach, based on a unified framework
harnessing the event-condition-action (ECA) rule paradigm and the active
database for the management of computer-based clinical practice
guidelines and protocols (CPGs). The CPG management is cast into three
perspectives: specification, execution and manipulation, making up three
management planes of our framework. The ECA rule paradigm is the core
CPG representational formalism while the active database serves as the
kernel within the CPG management environment facilitating integration
with electronic healthcare records and clinical workflow. The benefits
of the approach are: flexibility of CPG management; integration of CPGs
with electronic patient records and clinical workflows; and
incorporation of the CPG management system into computerised healthcare
systems.
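As a purely illustrative sketch of the ECA idea behind such a system
(hypothetical guideline step and field names; the paper’s engine lives
inside an active database rather than application code):

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class ECARule:
        event: str                          # ON: the triggering event
        condition: Callable[[dict], bool]   # IF: guard over the context
        action: Callable[[dict], None]      # THEN: guideline step

    class ActiveStore:
        def __init__(self):
            self.rules = []
        def register(self, rule):
            self.rules.append(rule)
        def notify(self, event, ctx):
            for r in self.rules:
                if r.event == event and r.condition(ctx):
                    r.action(ctx)

    # hypothetical guideline fragment: react to a critical potassium value
    store = ActiveStore()
    store.register(ECARule(
        "lab_result",
        lambda c: c["test"] == "K+" and c["value"] > 6.0,
        lambda c: print("ALERT: hyperkalaemia protocol for", c["patient"])))
    store.notify("lab_result", {"patient": "p-17", "test": "K+", "value": 6.3})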
|
|
Title: |
TRUST AND VIRTUAL ORGANISATIONS - EMERGENT CONSIDERATIONS FOR
VIRTUAL INTERORGANISATIONAL WORK IN THE GLOBAL CHEMICALS INDUSTRY |
Author(s): |
Paul Lewis and Maria Katsorchi-Hayes |
Abstract: |
The development of Grid computing technologies
has stimulated additional interest in the concept of the virtual
organization, with the promise of ‘always available’ processing power
seeming sufficient to overcome any technical obstacles to transparent
global inter-organizational working. However, whilst the academic
literature has given much attention to the theory of virtual
organization, there have been few viable real-life examples. This
virtual organization there have been few viable real-life examples. This
paper reports on research undertaken in the UK Chemicals industry where
the technical design of Grid middleware was supported by an interpretive
investigation of the ‘fit’ between the needs of industry and the forms
of interorganisational working that the middleware was intended to
support. The research suggests that this discrepancy between interest
in, and implementation of, virtual organizations may arise from a
misunderstanding of the role trust plays in existing business practices
and the consequent requirements for supporting trust in a virtual
organization. Business relationships emerge as deeply rooted in
personal contact, and popular but elusive views of virtual organizing
need to be reconsidered in favor of a more context-bounded approach. |
|
Title: |
MULTIAGENT BASED SIMULATION TOOL FOR TRANSPORTATION AND LOGISTICS
DECISION SUPPORT |
Author(s): |
Janis Grundspenkis and Egons Lavendelis |
Abstract: |
The transportation and logistics domain is a
complex problem domain, because many geographically distributed
companies may enter or leave the system at any time. Analysis of the
large number of publications reveals that although traditional
mathematical modelling and simulation techniques still dominate, new
approaches are starting to appear. Agent technologies and multiagent
systems have emerged in the transportation and logistics domain only
recently. The paper proposes a multiagent based simulation tool for
decision support in the transportation and logistics domain. The
multiagent system consists of client agents and logistics company
agents, which may participate in four types of auctions, namely
English auction, Dutch auction, first-price sealed-bid auction and
Vickrey auction. A client is an auctioneer who makes the decision
about the best offer for delivering goods. The simulation tool is
implemented using Borland C++ Builder and an MS Access database. |
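For illustration, a minimal procurement-style Vickrey (second-price)
auction as a client agent might run it over delivery offers; the data
shapes here are assumptions, and the tool itself is of course the C++
implementation described above:

    def vickrey_procurement(offers):
        # offers: {company: price asked for the delivery job};
        # the lowest offer wins, paid the second-lowest price
        ranked = sorted(offers.items(), key=lambda kv: kv[1])
        winner = ranked[0][0]
        price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
        return winner, price

    # vickrey_procurement({"A": 120, "B": 100, "C": 150}) -> ("B", 120)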
|
Title: |
AYLLU: AGENT-INSPIRED COOPERATIVE SERVICES FOR HUMAN INTERACTION |
Author(s): |
Oskar Cantor, Leonardo Mancilla and Enrique González |
Abstract: |
Nowadays, people not only need to communicate,
but also need to cooperate in order to achieve common goals. Thus,
groupware applications are becoming more complex; they have to satisfy
new requirements for cooperation between people in many areas, such as
organizations, industry and entertainment. Besides, due to the
remarkable increase in the use of mobile devices and other
technologies, groupware has found new environments in which to provide
solutions and better services to end users. The Ayllu human
cooperation model states that people must interact using well-defined
and structured protocols, as rational cooperative agents do. This
paper presents the Ayllu architecture, which is intended for
developing groupware applications with a cooperative approach, called
the 5C paradigm, based on software agents that act as mediators
between users working cooperatively. Cooperative services are
constructed and executed by a mechanism called the volatile group.
Ayllu supports the execution of cooperative services: people can
cooperate immersed in a pervasive environment, interact in an
organized fashion, form communities and pursue common objectives
without worrying about their individual context. |
|
Title: |
ORGANIZATIONAL STRUCTURE AND RESPONSIBILITY |
Author(s): |
Lambèr Royakkers, Davide Grossi and Frank Dignum |
Abstract: |
We analyze the organizational structure of
multi-agent systems and explain the precise added value and the effects
of such organizational structure on the involved agents. To pursue this
aim, contributions from social and organization theory are considered
which provide a solid theoretical foundation to this analysis. We argue
that organizational structures should be seen along at least three
dimensions, instead of just one: power, coordination, and control. In
order to systematize the approach, formal tools are used to describe the
organizational structure as well as the effect of such structures on the
activities in multi-agent systems, and especially the responsibilities
within organizations of agents. The main aim of the research is to
provide a formal analysis of the connections between collective
obligations and individual responsibilities. Which individual agent in a
group should be held responsible if an obligation directed to the whole
group is not fulfilled? We will show how the three dimensions of an
organizational structure together with a specific task decomposition
determine the responsibilities within a (norm-governed) organization. |
|
Title: |
CROCODIAL: CROSSLINGUAL COMPUTER-MEDIATED DIALOGUE |
Author(s): |
Paul Piwek and Richard Power |
Abstract: |
We describe a novel approach to crosslingual
dialogue which allows for highly accurate communication of semantically
complex content. The approach is introduced through an application in a
B2B scenario. We are currently building a browser-based prototype for
this scenario. The core technology underlying the approach is natural
language generation. We also discuss how the proposed approach can
complement Machine Translation-based solutions to crosslingual dialogue. |
|
Title: |
A DEFEASIBLE DEONTIC MODEL FOR INTELLIGENT SIMULATION |
Author(s): |
Kazumi Nakamatsu |
Abstract: |
We introduce an intelligent drivers' model for
traffic simulation in a small area including some intersections, which
is formalized in a paraconsistent annotated logic program EVALPSN. The
intelligent drivers' model can infer drivers' speed control actions such
as ``slow down" based on EVALPSN defeasible deontic reasoning and deal
with minute speed changes of cars in the simulation system.
|
|
Title: |
A DESIGN METHOD FOR INTER-ORGANIZATIONAL SERVICE PROCESSES
|
Author(s): |
Rainer Schmidt |
Abstract: |
Service processes play an increasingly
important part in modern economies. However, their design does not
achieve the flexibility and efficiency known from ordinary business
processes. Furthermore, their double identity of being process and
product at the same time is not properly represented in present design
methods. Therefore, a new method for the design and support of
inter-organizational service processes is introduced. It is based on
so-called perspectives for separating independently evolving parts of
the service processes. On this basis, a component-oriented approach to
process design is developed. |
|
Title: |
AN ABSTRACT ARCHITECTURE FOR SERVICE COORDINATION IN IP2P
ENVIRONMENTS |
Author(s): |
Cesar Caceres, Alberto Fernandez, Sascha Ossowski and Matteo
Vasirani |
Abstract: |
Intelligent agent-based peer-to-peer (IP2P)
environments provide a means for pervasively providing and flexibly
co-ordinating ubiquitous business application services to the mobile
users and workers in the dynamically changing contexts of open,
large-scale, and pervasive settings. In this paper, we present an
abstract architecture for service delivery and coordination in IP2P
environments that has been developed within the CASCOM project.
Furthermore, we outline the potential benefits of a role-based
interaction modelling approach for a concrete application of this
abstract architecture based on a real-world scenario for emergency
assistance in the healthcare domain. |
|
Title: |
TOWARD A PI-CALCULUS BASED VERIFICATION TOOL FOR WEB SERVICES
ORCHESTRATIONS |
Author(s): |
Faisal Abouzaid |
Abstract: |
Web services constitute a dynamic field of
research on Internet technologies. Web services orchestration and
choreography are related to Web services composition and are a way of
defining a complex service out of simpler ones. Several languages for
describing the composition of business processes have been presented in
recent years, and WS-BPEL 2.0 is on its way to becoming a standard. To
verify the behaviour of the produced compositions, and also to check
equivalence between services, formalization is necessary. This paper
presents a contribution to the field of formal verification of web
services composition, using a pi-calculus-based approach to verify
composite web services through model checking. We exploit the benefits
of existing work by translating a business process language such as
BPEL into a pi-calculus model for which analysis and verification
techniques are well established and model-checking tools already exist.
We therefore present the basis of a framework for the specification and
verification of web services composition with respect to a temporal
logic. |
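To give a flavour of such a translation, here is a minimal illustrative
sketch (our own, not the paper's actual mapping): a BPEL sequence of an
invoke followed by a receive can be encoded as pi-calculus output and
input prefixes, and a flow as parallel composition, in LaTeX notation:

    $[\![\,\mathtt{invoke}(a,v);\ \mathtt{receive}(b,y);\ S\,]\!]
        \;=\; \bar{a}\langle v\rangle.\;b(y).\;[\![\,S\,]\!]$
    $[\![\,\mathtt{flow}(S_1,S_2)\,]\!]
        \;=\; [\![\,S_1\,]\!] \mid [\![\,S_2\,]\!]$

Equivalence between two compositions can then be checked as
bisimilarity between their encodings.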
|
Workshop on Pattern Recognition in
Information Systems (PRIS-2006) |
Title: |
ARTIFICIAL INTELLIGENCE METHODS APPLICATION IN LIVER DISEASES
CLASSIFICATION FROM CT IMAGES |
Author(s): |
Daniel Smutek, Akinobu Shimizu, Ludvik Tesar, Hidefumi Kobatake and
Shigeru Nawano |
Abstract: |
An application of artificial intelligence to
automation in medicine is described. A computer-aided diagnostic (CAD)
system for the automatic classification of focal liver lesions in CT
images is being developed. Texture analysis methods are used for the
classification of hepatocellular cancer and liver cysts.
Contrast-enhanced CT images of 20 adult subjects with hepatocellular
carcinoma or with non-parasitic solitary liver cysts were used as entry
data. A total of 130 spatial and second-order probabilistic texture
features were computed from the images. An ensemble of Bayes
classifiers was used for tissue classification. The classification
success rate was as high as 100% when estimated by the leave-one-out
method. This high success rate was achieved with as few as one optimal
descriptive feature, representing the average deviation of horizontal
curvature computed from original pixel gray levels. This promising
result encourages extension of this approach to distinguishing more
types of liver diseases from CT images and its further integration into
PACS and hospital information systems. |
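The evaluation protocol is the standard leave-one-out scheme. A minimal
Python sketch of that protocol with a single Gaussian naive Bayes
classifier (the 130 texture features are assumed precomputed; random
placeholder data stands in for them):

    import numpy as np
    from sklearn.naive_bayes import GaussianNB
    from sklearn.model_selection import LeaveOneOut

    rng = np.random.default_rng(0)
    X = rng.normal(size=(20, 130))   # 20 subjects x 130 features (placeholder)
    y = rng.integers(0, 2, size=20)  # 0 = cyst, 1 = hepatocellular carcinoma

    correct = 0
    for train_idx, test_idx in LeaveOneOut().split(X):
        clf = GaussianNB().fit(X[train_idx], y[train_idx])
        correct += int(clf.predict(X[test_idx])[0] == y[test_idx][0])
    print(f"leave-one-out success rate: {correct / len(X):.0%}")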
|
Title: |
LARGE SCALE FACE RECOGNITION WITH KERNEL CORRELATION FEATURE
ANALYSIS WITH SUPPORT VECTOR MACHINES |
Author(s): |
Jingu Heo, Marios Savvides and B. V. K. Vijayakumar |
Abstract: |
Recently, Direct Linear Discriminant Analysis
(LDA) and Gram-Schmidt LDA methods have been proposed for face
recognition. By utilizing the smallest eigenvalues in the within-class
scatter matrix they exhibit better performance compared to Eigenfaces
and Fisherfaces. However, these linear subspace methods may not
discriminate faces well due to large nonlinear distortions in the face
images. The redundant class dependence feature analysis (CFA) method
exhibits superior performance compared to other methods by representing
nonlinear features well. We show that with a proper choice of nonlinear
features in the CFA, the performance is significantly improved.
Evaluation is performed with PCA, KPCA, KDA, and KCFA using different
distance measures on a large-scale database from the Face Recognition
Grand Challenge (FRGC). By incorporating an SVM as a new distance
measure, the performance gain is dramatic across all algorithms. |
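The contrast between linear and kernel subspaces can be sketched
generically; the following Python toy (not the authors' KCFA, and with
synthetic stand-in data for FRGC) compares linear PCA against kernel
PCA features fed to an SVM:

    import numpy as np
    from sklearn.decomposition import PCA, KernelPCA
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 50))   # placeholder face-image vectors
    y = rng.integers(0, 5, 300)      # 5 identities

    for name, proj in [("linear PCA", PCA(n_components=15)),
                       ("kernel PCA", KernelPCA(n_components=15,
                                                kernel="rbf", gamma=0.02))]:
        pipe = make_pipeline(proj, SVC(kernel="linear"))
        print(name, cross_val_score(pipe, X, y, cv=5).mean())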
|
Title: |
THE WAY OF ADJUSTING PARAMETERS OF THE EXPERT SYSTEM SHELL MCESE:
NEW APPROACH |
Author(s): |
I. Bruha and F. Franek |
Abstract: |
We have designed and developed a general
knowledge representation tool, an expert system shell called McESE
(McMaster Expert System Environment); it derives a set of production
(decision) rules of a very general form. Such a production set can be
equivalently symbolized as a decision tree. McESE exhibits several
parameters such as the weights, thresholds, and the certainty
propagation functions that have to be adjusted (designed) according to a
given problem, for instance, by a given set of training examples.
Traditional machine learning (ML) or data mining (DM) algorithms can be
used to induce the above parameters. In this
methodological case study, we discuss an application of genetic
algorithms (GAs) to adjust (generate) parameters of the given tree that
can be then used in the rule-based expert system shell McESE. The only
requirement is that a set of McESE decision rules (or more precisely,
the topology of a decision tree) be given. |
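A minimal Python sketch of the GA idea (the fitness function below is a
hypothetical stand-in for the accuracy of a McESE rule set on training
examples; the parameter count and GA settings are assumed):

    import random

    random.seed(0)
    N_PARAMS = 6  # e.g. rule weights and thresholds of a small tree (assumed)

    def fitness(params):
        # placeholder objective; in McESE this would score the rule set
        # on the training examples
        return -sum((p - 0.5) ** 2 for p in params)

    def mutate(ind, rate=0.2):
        return [min(1.0, max(0.0, p + random.gauss(0, 0.1)))
                if random.random() < rate else p for p in ind]

    def crossover(a, b):
        cut = random.randrange(1, N_PARAMS)
        return a[:cut] + b[cut:]

    pop = [[random.random() for _ in range(N_PARAMS)] for _ in range(30)]
    for generation in range(50):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:10]
        pop = elite + [mutate(crossover(random.choice(elite),
                                        random.choice(elite)))
                       for _ in range(20)]
    print([round(p, 2) for p in max(pop, key=fitness)])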
|
Title: |
FACIAL FEATURE TRACKING AND OCCLUSION RECOVERY IN AMERICAN SIGN
LANGUAGE |
Author(s): |
Thomas J. Castelli, Margrit Betke and Carol Neidle |
Abstract: |
Facial features play an important role in
expressing grammatical information in signed languages, including
American Sign Language (ASL). Gestures such as raising or furrowing the
eyebrows are key indicators of constructions such as yes-no questions.
Periodic head movements (nods and shakes) are also an essential part of
the expression of syntactic information, such as negation (associated
with a side-to-side headshake). Therefore, identification of these
facial gestures is essential to sign language recognition. One problem
in detecting such grammatical indicators is occlusion. If
the signer's hand blocks his/her eyebrows during production of a sign,
it becomes difficult to track the eyebrows. We have developed a system
to detect such grammatical markers in ASL that recovers promptly from
occlusion. Our system detects and tracks evolving templates of facial
features, which are based on an anthropometric face model, and
interprets the geometric relationships of these templates to identify
grammatical markers. It was tested on a variety of ASL sentences signed
by various Deaf native signers and detected facial gestures used to
express grammatical information, such as raised and furrowed eyebrows as
well as headshakes. |
|
Title: |
TRACKING AND PREDICTION OF TUMOR MOVEMENT IN THE ABDOMEN |
Author(s): |
Margrit Betke, Jason Ruel, Gregory C. Sharp, Steve B. Jiang, David
P. Gierga and George T. Y. Chen |
Abstract: |
Methods for tracking and prediction of
abdominal tumor movement under free breathing conditions are proposed.
Tumor position is estimated by tracking surgically implanted clips
surrounding the tumor. The clips are segmented from fluoroscopy videos
taken during pre-radiotherapy simulation sessions. After the clips have
been tracked during an initial observation phase, motion models are
computed and used to predict tumor position in subsequent frames. Two
methods are proposed and compared that use Fourier analysis to evaluate
the quasi-periodic tumor movements due to breathing. Results indicate
that the methods have the potential to estimate mobile tumor position to
within a couple of millimeters for precise delivery of radiation. |
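A minimal Python sketch of Fourier-based prediction of a quasi-periodic
breathing trace (the clip-tracking pipeline is not shown; the trace,
frame rate and breathing frequency below are synthetic assumptions):

    import numpy as np

    fps = 10.0                                 # assumed fluoroscopy frame rate
    t = np.arange(0, 20, 1 / fps)              # 20 s observation phase
    trace = 5 * np.sin(2 * np.pi * 0.25 * t) \
            + np.random.default_rng(1).normal(0, 0.3, t.size)

    spectrum = np.fft.rfft(trace - trace.mean())
    freqs = np.fft.rfftfreq(trace.size, 1 / fps)
    k = np.argmax(np.abs(spectrum))            # dominant breathing component
    amp = 2 * np.abs(spectrum[k]) / trace.size
    phase = np.angle(spectrum[k])

    t_future = t[-1] + np.arange(1, 11) / fps  # predict the next 10 frames
    pred = trace.mean() + amp * np.cos(2 * np.pi * freqs[k] * t_future + phase)
    print(np.round(pred, 2))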
|
Title: |
IMPROVED SINGULAR VALUE DECOMPOSITION FOR SUPERVISED LEARNING IN A
HIGH DIMENSIONAL DATASET |
Author(s): |
Ricco Rakotomalala and Faouzi Mhamdi |
Abstract: |
Singular Value Decomposition (SVD) is a useful
technique for dimensionality reduction with a controlled loss of
information. This paper makes the simple but worthwhile observation
that many attributes contain no information about the class label and
may thus be selected erroneously for a supervised learning task. We
propose to first use a very tolerant filter to select, on a univariate
basis, which attributes to include in the subsequent SVD. The features,
"the latent variables", extracted from relevant descriptors allow us to
build a better classifier, with a significant improvement in the
generalization error rate and less CPU time. We show the efficiency of
this combination of feature selection and construction approaches in a
protein classification context. |
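A minimal Python sketch of the filter-then-SVD combination (the filter
size, component count and the synthetic descriptor matrix are assumed
placeholders, not the paper's settings):

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.feature_selection import SelectKBest, chi2
    from sklearn.decomposition import TruncatedSVD
    from sklearn.naive_bayes import GaussianNB
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.random((200, 5000))   # many non-negative descriptors (placeholder)
    y = rng.integers(0, 2, 200)

    clf = make_pipeline(
        SelectKBest(chi2, k=500),       # very tolerant univariate filter
        TruncatedSVD(n_components=20),  # "latent variables" from kept columns
        GaussianNB(),
    )
    print(cross_val_score(clf, X, y, cv=5).mean())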
|
Title: |
A HYBRID APPROACH USING SET THEORY (HAST) FOR MAGNETIC RESONANCE
(MR) IMAGE SEGMENTATION |
Author(s): |
Liu Jiang, Chee Kin Ban, Tan Boon Pin, Shuter Borys and Wang
Shih-Chang |
Abstract: |
This paper describes a new Hybrid Approach
using Set Theory (HAST) for Magnetic Resonance (MR) image segmentation
based on two existing techniques: region-based and level set methods.
In our approach, instead of using the typical pipeline methodology to
integrate the two techniques, a hybrid set-based methodology is
proposed. To evaluate the effectiveness of HAST, MR images taken from a
national hospital, which reflect the quality of real-world medical
images, are used. A comparison between the two individual techniques
and HAST is also made to demonstrate the effectiveness of the latter. |
|
Title: |
FACE SEGREGATION AND RECOGNITION BY CORTICAL MULTI-SCALE LINE AND
EDGE CODING |
Author(s): |
João Rodrigues and J. M. Hans du Buf |
Abstract: |
Models of visual perception are based on image
representations in cortical area V1 and higher areas which contain many
cell layers for feature extraction. Basic simple, complex and
end-stopped cells provide input for line, edge and keypoint detection.
In this paper we present an improved method for multi-scale line/edge
detection based on simple and complex cells. We illustrate the line/edge
representation for object reconstruction, and we present models for
multi-scale face (object) segregation and recognition that can be
embedded into feedforward dorsal and ventral data streams (the "what"
and "where" subsystems) with feedback streams from higher areas for
obtaining translation, rotation and scale invariance. |
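The standard computational reading of simple and complex cells is a
quadrature pair of Gabor filters whose energy gives the complex-cell
response; here is a minimal generic Python sketch at several scales (an
illustration of this standard model, not the authors' exact operator):

    import numpy as np
    from scipy.signal import fftconvolve

    def gabor_pair(sigma, theta=0.0):
        freq = 1.0 / (2.0 * sigma)                 # assumed spatial frequency
        size = int(6 * sigma) | 1                  # odd kernel size
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        xr = xx * np.cos(theta) + yy * np.sin(theta)
        env = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
        return (env * np.cos(2 * np.pi * freq * xr),   # even simple cell
                env * np.sin(2 * np.pi * freq * xr))   # odd simple cell

    img = np.random.default_rng(0).random((128, 128))  # placeholder image
    for sigma in (2.0, 4.0, 8.0):                      # multi-scale analysis
        even, odd = gabor_pair(sigma)
        e = fftconvolve(img, even, mode="same")
        o = fftconvolve(img, odd, mode="same")
        energy = np.hypot(e, o)                        # complex-cell response
        print(sigma, round(float(energy.mean()), 3))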
|
Title: |
EAR BIOMETRICS IN PASSIVE HUMAN IDENTIFICATION SYSTEMS |
Author(s): |
Michal Choras |
Abstract: |
The article discusses various issues concerning
ear biometrics in identification systems. The major advantage of the
ear as a source of data for human identification is the ease of image
acquisition, which can be performed even without the examined person's
knowledge. Moreover, user acceptability and easy interaction with the
system make ear biometrics a perfect solution for secure
authentication, for example in access-control applications. The article
focuses on the motivation for ear biometrics, the design of ear
identification systems, and user interaction. Feature extraction
methods for ear images are also discussed. |
|
Title: |
IMAGE RETRIEVAL USING MULTISCALAR TEXTURE CO-OCCURRENCE MATRIX |
Author(s): |
Sanjoy Kumar Saha, Amit Kumar Das and Bhabatosh Chanda |
Abstract: |
We have designed and implemented a
texture-based image retrieval system that uses a multiscalar texture
co-occurrence
matrix. The pixel array corresponding to an image is divided into a
number of blocks of size 2 x 2 and a scheme is proposed to compute
texture value for each of these blocks and then the texture
co-occurrence matrix is formed. Image texture features are determined
based on this matrix. Finally, a multiscalar version of the method is
presented to cope with texture patterns of various scales. Experiments
using the Brodatz texture database show that the retrieval performance
of the proposed features is better than that of gray-level
co-occurrence matrix and wavelet-based features. |
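A minimal Python sketch of the block-texture co-occurrence idea (the
per-block texture value below is simply the gray-level range, an
assumed choice; the quantization levels and the horizontal
neighbourhood are likewise assumptions):

    import numpy as np

    def block_texture(img):
        h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
        blocks = img[:h, :w].reshape(h // 2, 2, w // 2, 2).swapaxes(1, 2)
        return blocks.max(axis=(2, 3)) - blocks.min(axis=(2, 3))  # per block

    def cooccurrence(tex, levels=8):
        q = np.minimum(tex * levels // 256, levels - 1)  # quantize textures
        m = np.zeros((levels, levels), dtype=int)
        for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):  # horiz. pairs
            m[a, b] += 1
        return m

    img = np.random.default_rng(0).integers(0, 256, (64, 64))
    print(cooccurrence(block_texture(img)))

A multiscalar version would repeat this at coarser block sizes and
concatenate the resulting features.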
|
Title: |
MULTINOMIAL MIXTURE MODELLING FOR BILINGUAL TEXT CLASSIFICATION |
Author(s): |
Jorge Civera and Alfons Juan |
Abstract: |
Mixture modelling of class-conditional
densities is a standard pattern classification technique. In text
classification, the use of class-conditional multinomial mixtures can be
seen as a generalisation of the Naive Bayes text classifier relaxing its
(class-conditional feature) independence assumption. In this paper, we
describe and compare several extensions of the class-conditional
multinomial mixture-based text classifier for bilingual texts. |
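A minimal Python sketch of a class-conditional multinomial mixture
fitted by EM, one mixture per class, generalising Naive Bayes (which
uses a single component); the term counts below are a synthetic
stand-in for bilingual documents:

    import numpy as np

    def fit_mixture(X, n_comp=2, n_iter=50, eps=1e-9):
        n, v = X.shape
        rng = np.random.default_rng(0)
        pi = np.full(n_comp, 1 / n_comp)
        theta = rng.dirichlet(np.ones(v), n_comp)  # component term distribs.
        for _ in range(n_iter):
            log_r = np.log(pi + eps) + X @ np.log(theta + eps).T  # E-step
            r = np.exp(log_r - log_r.max(axis=1, keepdims=True))
            r /= r.sum(axis=1, keepdims=True)
            pi = r.mean(axis=0)                                   # M-step
            theta = r.T @ X + eps
            theta /= theta.sum(axis=1, keepdims=True)
        return pi, theta

    def log_lik(x, pi, theta, eps=1e-9):
        log_r = np.log(pi + eps) + x @ np.log(theta + eps).T
        m = log_r.max()
        return m + np.log(np.exp(log_r - m).sum())

    rng = np.random.default_rng(1)
    X0, X1 = rng.poisson(1.0, (50, 30)), rng.poisson(2.0, (50, 30))
    models = [fit_mixture(X0), fit_mixture(X1)]    # one mixture per class
    x = rng.poisson(2.0, 30)
    print(int(np.argmax([log_lik(x, *m) for m in models])))  # predicted class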
|
Title: |
PREDICTION OF PROTEIN TERTIARY STRUCTURE CLASS FROM SYNCHROTRON
RADIATION CIRCULAR DICHROISM SPECTRA |
Author(s): |
Andreas Procopiou, Nigel M. Allinson, Gareth R. Jones and David T.
Clarke |
Abstract: |
A new approach to predict the tertiary
structure class of proteins from synchrotron radiation circular
dichroism (SRCD) spectra is presented. A protein’s SRCD spectrum is
first approximated using a Radial Basis Function Network (RBFN) and the
resulting set is used to train different varieties of Support Vector
Machine (SVM). The performance of three well-known multi-class SVM
schemes is evaluated, and a method is presented that takes into account
the properties of the spectra for each of the structure classes. |
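A minimal Python sketch of the two-stage idea: each spectrum is fitted
onto a fixed RBF basis (a ridge-regularized stand-in for the paper's
RBFN) and the fitted weights become the feature vector for a
multi-class SVM; the wavelength range, centre count and all data are
assumed placeholders:

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    wavelengths = np.linspace(170, 260, 90)     # nm (assumed SRCD range)
    centres = np.linspace(170, 260, 12)
    Phi = np.exp(-((wavelengths[:, None] - centres[None, :]) / 10.0) ** 2)

    def rbf_weights(spectrum, lam=1e-3):
        # ridge-regularized least-squares fit onto the RBF basis
        A = Phi.T @ Phi + lam * np.eye(len(centres))
        return np.linalg.solve(A, Phi.T @ spectrum)

    spectra = rng.normal(size=(60, 90)).cumsum(axis=1)  # smooth fake spectra
    y = rng.integers(0, 4, 60)                  # 4 tertiary structure classes
    W = np.array([rbf_weights(s) for s in spectra])

    clf = SVC(decision_function_shape="ovo").fit(W[:45], y[:45])  # one-vs-one
    print("held-out accuracy:", clf.score(W[45:], y[45:]))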
|
Title: |
SEMANTIC-BASED SIMILARITY OF MUSIC |
Author(s): |
Michael Rentzsch and Frank Seifert |
Abstract: |
Existing approaches to music identification
such as audio fingerprinting are generally data-driven and based on
statistical information. They require a particular pattern for each
individual instance of the same song. Hence, these approaches are not
capable of dealing with the vast amount of music that is composed via
methods of improvisation and variation. Furthermore, they are unable to
measure the similarity of two pieces of music. This paper presents a
different, semantic-based view on the identification and structuring of
symbolic music patterns. This new method takes advantage of a conceptual
model for music perception. Thus, it allows us to detect different
instances of the same song and acquire their degree of similarity. |
|
Title: |
USER SPECIFIC PARAMETERS IN ONE-CLASS PROBLEMS: THE CASE OF
KEYSTROKE DYNAMICS |
Author(s): |
Sylvain Hocquet, Jean-Yves Ramel and Hubert Cardot |
Abstract: |
In this paper, we propose a method to find and
use user-dependent parameters to increase the performance of a
keystroke dynamics system. These parameters include the security
threshold and the fusion weights of different classifiers. We have
determined a set of global parameters which increase the performance of
some keystroke dynamics methods. Our experiments show that parameter
personalization greatly increases the performance of keystroke dynamics
systems. The main problem is how to estimate the parameters from a
small user training set containing only ten login sequences. This
problem is a promising way to increase performance in biometrics but
remains open today. |
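A minimal Python sketch of deriving one such user-specific parameter,
the security threshold, from a ten-sequence enrolment set (the
distance-to-template score and the two-sigma margin are assumed
illustrative choices):

    import numpy as np

    rng = np.random.default_rng(0)
    enrolment = rng.normal(0.15, 0.02, (10, 8))  # 10 logins x 8 timings (s)

    template = enrolment.mean(axis=0)
    dists = np.linalg.norm(enrolment - template, axis=1)
    threshold = dists.mean() + 2 * dists.std()   # personalized threshold

    def accept(attempt):
        return np.linalg.norm(attempt - template) <= threshold

    print(accept(rng.normal(0.15, 0.02, 8)),     # genuine-like attempt
          accept(rng.normal(0.30, 0.05, 8)))     # impostor-like attempt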
|
Title: |
FACE RECOGNITION IN DIFFERENT SUBSPACES: A COMPARATIVE STUDY |
Author(s): |
Borut Batagelj and Franc Solina |
Abstract: |
Face recognition is one of the most successful
applications of image analysis and understanding and has gained much
attention in recent years. Among the many approaches to the problem of
face recognition, appearance-based subspace analysis still gives the
most promising results. In this paper we study the three most popular
appearance-based face recognition projection methods (PCA, LDA and
ICA). All methods are tested under equal working conditions regarding
preprocessing and algorithm implementation on the FERET data set with
its standard tests. We also compare the ICA method with its whitening
preprocess alone and find that there is no significant difference
between them. When comparing different projections with different
metrics, we found that the choice of the appropriate projection-metric
combination depends on the nature of the recognition task. Our results
are compared to other studies and some discrepancies are pointed out. |
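A minimal Python sketch of such a projection-metric comparison
(synthetic vectors stand in for FERET images; the component counts and
metrics are assumed choices):

    import numpy as np
    from sklearn.decomposition import PCA, FastICA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 100))   # placeholder image vectors
    y = rng.integers(0, 10, 200)      # 10 identities
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0, stratify=y)

    projections = [("PCA", PCA(n_components=20)),
                   ("ICA", FastICA(n_components=20, random_state=0)),
                   ("LDA", LinearDiscriminantAnalysis(n_components=9))]
    for name, proj in projections:
        Ztr = (proj.fit_transform(Xtr, ytr) if name == "LDA"
               else proj.fit_transform(Xtr))
        Zte = proj.transform(Xte)
        for metric in ("euclidean", "manhattan", "cosine"):
            knn = KNeighborsClassifier(n_neighbors=1, metric=metric)
            print(name, metric, knn.fit(Ztr, ytr).score(Zte, yte))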
|
Title: |
MULTI-MODAL CATEGORIZATION OF MEDICAL IMAGES FOR AUTOMATIC INDEXING
OF ON-LINE HEALTH-RESOURCES |
Author(s): |
Filip Florea, Eugen Barbu, Alexandrina Rogozan and Abdelaziz
Bensrhair |
Abstract: |
Our work is focused on the automatic
categorization of medical images according to their visual content, for
indexing and retrieval purposes in the context of the CISMeF
health-catalogue. The aim of this study is to assess the performance of
our medical image categorization algorithm with respect to the image's
modality, anatomic region and view angle. For this purpose we
represented the medical images using texture and statistical features.
The high dimensionality led us to transform this representation into a
symbolic description, using block labels obtained by a clustering
procedure. A medical image database of 10322 images, representing 33
classes, was selected by an experienced radiologist. The classes are
defined by the images' medical modality, anatomical region and
acquisition view angle. An average precision of approximately 83% was
obtained using KNN classifiers, and a top performance of 91.19% was
attained with 1NN when categorizing the images with respect to the 33
defined classes. The performance rises to 93.62% classification
accuracy when only the modality is needed. The experiments presented in
this paper show that the considered image representation achieves high
recognition rates, despite the difficult context of medical imaging. |
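A minimal Python sketch of the symbolic-description step: block-level
features are clustered, each image becomes a histogram of block labels,
and a 1-NN classifier assigns the category (all sizes and data below
are synthetic stand-ins):

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)
    n_images, blocks_per_image, n_labels = 100, 64, 16
    block_feats = rng.normal(size=(n_images * blocks_per_image, 12))
    labels = KMeans(n_clusters=n_labels, n_init=10,
                    random_state=0).fit_predict(block_feats)

    # one histogram of block labels per image = its symbolic description
    hists = np.array([np.bincount(chunk, minlength=n_labels)
                      for chunk in labels.reshape(n_images, blocks_per_image)])
    y = rng.integers(0, 33, n_images)   # 33 classes, as in the paper

    knn = KNeighborsClassifier(n_neighbors=1).fit(hists[:80], y[:80])
    print("held-out accuracy:", knn.score(hists[80:], y[80:]))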
|
Title: |
WEIGHTED EVIDENCE ACCUMULATION CLUSTERING USING SUBSAMPLING |
Author(s): |
F. Jorge F. Duarte, Ana L. N. Fred, Fátima Rodrigues, João M. M.
Duarte and João Lourenço |
Abstract: |
We introduce an approach based on evidence
accumulation (EAC) for combining partitions in a clustering ensemble.
EAC uses a voting mechanism to produce a co-association matrix based on
the pairwise associations obtained from N partitions, where each
partition has equal weight in the combination process. By applying a
clustering algorithm to this co-association matrix we obtain the final
data partition. In this paper we propose a clustering ensemble
combination approach that uses subsampling and weights the partitions
differently (WEACS). We use two ways of weighting each partition:
SWEACS, using a single validation index, and JWEACS, using a committee
of indices. We compare combination results with the EAC technique and
the HGPA, MCLA and CSPA methods of Strehl and Ghosh using subsampling,
and conclude that the WEACS approaches generally obtain better results.
As a complementary step to the WEACS approach, we combine all the final
data partitions produced by the different variations of the method and
use the Ward Link algorithm to obtain the final data partition. |
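A minimal Python sketch of weighted evidence accumulation (the
silhouette index is an assumed single-index weight in the spirit of
SWEACS, and average-link extraction stands in here, whereas the paper's
complementary step uses Ward link):

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs
    from sklearn.metrics import silhouette_score

    X, _ = make_blobs(n_samples=120, centers=3, random_state=0)
    coassoc = np.zeros((len(X), len(X)))

    for run in range(15):                       # ensemble of base partitions
        k = int(np.random.default_rng(run).integers(2, 8))
        part = KMeans(n_clusters=k, n_init=5, random_state=run).fit_predict(X)
        w = silhouette_score(X, part)           # partition weight
        coassoc += w * (part[:, None] == part[None, :])

    dist = coassoc.max() - coassoc              # high co-association = close
    np.fill_diagonal(dist, 0.0)
    final = fcluster(linkage(squareform(dist), method="average"),
                     t=3, criterion="maxclust")
    print(np.bincount(final))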
|
Workshop on Model-Driven Enterprise
Information Systems (MDEIS-2006) |
Title: |
MODEL-BASED DEVELOPMENT WITH VALIDATED MODEL TRANSFORMATION |
Author(s): |
László Lengyel, Tihamér Levendovszky, Gergely Mezei and Hassan
Charaf |
Abstract: |
Model-driven software engineering is one of the
most active research fields. Domain-specific modeling enables systems
to be specified at a higher level of abstraction. Model processors
automatically generate the lower level artifacts. Graph transformation
is a widely used technique for model transformations. Especially visual
model transformations can be expressed by graph transformations, since
graphs are well-suited to describe the underlying structures of models.
Model transformations often need to follow an algorithm that requires a
strict control over the execution sequence of the rewriting rules, with
the additional benefit of making the implementation more efficient.
Using a rather complex but illustrative case study from the field of
model-based development, this paper presents the visual control flow
support of the Visual Modeling and Transformation System (VMTS). The
VMTS Visual Control Flow Language (VCFL) uses stereotyped activity
diagrams to specify model-driven control flow structures and OCL
constraints to choose between different control flow branches. The
presented approach facilitates composing complex model transformations
from simple transformation steps and executing them. |
|
Title: |
ATC: A LOW-LEVEL MODEL TRANSFORMATION LANGUAGE |
Author(s): |
Antonio Estévez, Javier Padrón, E. Victor Sánchez and José Luis
Roda |
Abstract: |
Model Transformations constitute a key
component in the evolution of Model Driven Software Development (MDSD).
MDSD tools base their full potential on transformation specifications
between models. Several languages and tools are already in production,
and the OMG is currently standardizing these specifications within MDA.
In this paper, we present Atomic Transformation Code (ATC), an
imperative low-level model transformation language meant to be placed
as an intermediate layer between the user transformation languages and
the underlying transformation engine, to effectively decouple their
dependencies. Work invested in this engine is therefore protected
against variations in the high-level transformation languages
supported. This approach can ease the adoption of QVT and other
language initiatives. It also provides MDA modeling tools with a
valuable benefit by supporting the seamless integration of a variety of
transformation languages simultaneously. |
|
Title: |
MDA APPROACH FOR THE DEVELOPMENT OF EMBEDDABLE APPLICATIONS ON
COMMUNICATING DEVICES |
Author(s): |
Eyob Alemu, Dawit Bekele and Jean-Philippe Babau |
Abstract: |
One of the major sources of software complexity
is the heterogeneity and evolution of platforms. Platform variation
strongly impacts the lifetime support of software products. As a
solution, a new software development methodology called MDA (Model
Driven Architecture) has recently been introduced by the OMG. MDA is a
strategy of separating the specification of a software system from the
specification of its implementation on platforms, treating them as two
different development concerns. These two concerns are described as the
Platform Independent Model (PIM) and the Platform Specific Model (PSM).
MDA is now being used successfully as a promising solution for
enterprise-level software systems. This success has made MDA a viable
choice for other domains that face similar or even higher levels of
complexity, such as the domain of embedded systems. However, recent
efforts have focused on extending the modeling capability of the core
standards of MDA, particularly UML, towards concepts in embedded
systems such as Resource and Quality of Service (QoS). Since there is
no abstraction or middleware layer that can encapsulate all the
variation in this domain, all platforms appear as different
implementation choices. Therefore, adapting MDA to this domain requires
a new approach that recognizes such peculiarities. Focusing on the
communications subsystem of embedded platforms, this paper introduces
an MDA-based approach for the development of embeddable communicating
applications. A QoS-aware and resource-oriented approach, which
captures the runtime interaction between applications and platforms, is
proposed. Reservation-based (typically connection-oriented) networks
are considered. The applicability of the approach is also presented for
Bluetooth and IrDA, showing the separation of application-level
reservation requests from the actual network-level reservation,
provided through a mapping layer. |
|
Title: |
MODEL-DRIVEN ERP IMPLEMENTATION |
Author(s): |
Philippe Dugerdil and Gil Gaillard |
Abstract: |
Nowadays, ERP systems provide an efficient
solution to a company’s standard IT needs. However, when faced with the
decision to implement an ERP system, managers must often trade IT
system control for IT system efficiency and standardization. In fact,
ERP systems are very complex, and it is often the case that the
internal IT team of a company will not be able to master the full
system. To obtain a fair level of understanding, it is necessary to
model the system at a higher level of abstraction: the business
processes. However, another problem is the accuracy of the mapping
between this view and the actual implementation. A solution is to make
use of the OMG’s MDA framework. In fact, this framework lets the
developer model his system at a high abstraction level and allows the
MDA tool to generate the implementation details. We therefore decided
to investigate this idea by building a prototype that provides a
semi-automatic way to customize an ERP system from a high-level model
of the business processes. This paper presents our results in applying
the MDA framework to ERP implementation. We also show how our prototype
is structured and implemented in the IBM/Rational® XDE® environment. |
|
Title: |
A PRACTICAL EXPERIENCE ON MODEL-DRIVEN HETEROGENEOUS SYSTEMS
INTEGRATION |
Author(s): |
Antonio Estévez, José D. García, Javier Padrón, Carlos López, Marko
Txopitea, Beatriz Alustiza and José L. Roda |
Abstract: |
The integration of heterogeneous systems is
usually a complex task. In this study we present a strategy that can be
followed for the integration of a framework based on Struts and J2EE,
the transactional system CICS, and the document manager FileNet. The
principal aim of the project was to redefine the work methodology of
the developers in order to improve productivity. Following model-based
development strategies, especially MDA, a single framework for the
three environments has been developed. Independent metamodels were
created for each of the environments, which finally led to a flexible,
open and unified metamodel. The developer can then increase his
productivity by abstracting from the particular implementation details
of each environment and putting his efforts into creating a business
model that is able to represent the new system. |
|
Title: |
MODELING ODP CORRESPONDENCES USING QVT |
Author(s): |
José Raúl Romero, Nathalie Moreno and Antonio Vallecillo |
Abstract: |
Viewpoint modeling is currently seen as an
effective technique for specifying complex software systems. However,
having a set of independent viewpoints on a system is not enough. These
viewpoints should be related, and these relationships made explicit, in
order to obtain a complete and consistent set of specifications of the
system. RM-ODP defines five complementary viewpoints for the
specification of open distributed systems, and establishes
correspondences between viewpoint elements. ODP correspondences provide
statements that relate the various viewpoint specifications, expressing
their semantic relationships. However, ODP neither provides an
exhaustive set of correspondences between viewpoints, nor defines any
language or notation for representing correspondences. In this paper we
explore the use of MOF QVT for representing ODP correspondences in the
context of ISO/IEC 19793, i.e., when the ODP viewpoint specifications
of a system are represented as UML models. We show that QVT is
expressive enough to represent them, and discuss some of the issues
that we have found when modeling ODP correspondences with QVT
relations. |
|
Title: |
MODEL QUALITY IN THE CONTEXT OF MODEL-DRIVEN DEVELOPMENT |
Author(s): |
Ida Solheim and Tor Neple |
Abstract: |
Model-Driven Development (MDD) poses new
quality requirements on models. This paper presents these requirements
by specializing a generic framework for model quality. Of particular
interest are transformability and maintainability, two main quality
criteria for models to be used in MDD. These two are decomposed into
quality criteria that can be evaluated or measured. Another pertinent
discussion item is the positive influence of MDD-related tools, both on
the models themselves and on the success of the MDD process. |
|
Title: |
TOWARDS RIGOROUS METAMODELING |
Author(s): |
Benoît Combemale, Sylvain Rougemaille, Xavier Crégut, Frédéric
Migeon, Marc Pantel, Christine Maurel and Bernard Coulette |
Abstract: |
MDE has provided several significant
improvements in the development of complex systems by focusing on
concerns more abstract than programming. However, some further steps
are needed on the semantic side in order to reach the high level of
certification currently required for critical embedded systems (and
probably required in the near future for Information Systems as well,
as a consequence of agreements such as Basel II). This paper presents
different means of specifying model semantics at the metamodel level.
We focus on the definition of executable SPEM-based development process
models (workflow-related models) using an approach defined for the
TOPCASED project. |
|
Title: |
ABSTRACT PLATFORM AND TRANSFORMATIONS FOR MODEL-DRIVEN
SERVICE-ORIENTED DEVELOPMENT |
Author(s): |
Joao Paulo A. Almeida, Luis Ferreira Pires and Marten van Sinderen |
Abstract: |
In this paper, we discuss the use of abstract
platforms and transformations for designing applications according to
the principles of the service-oriented architecture. We illustrate our
approach by discussing the use of the service discovery pattern at a
platform-independent design level. We show how a trader service can be
specified at a high level of abstraction and incorporated in an
abstract platform for service-oriented development. Designers can then
build platform-independent models of applications by composing
application parts with this abstract platform. Application parts can
use the trader service to publish and discover service offers. We
discuss how the abstract platform can be realized on two target
platforms, namely Web Services (with UDDI) and CORBA (with the OMG
trader). |
|
Workshop on Technologies for Collaborative
Business Processes (TCoB-2006) |
Title: |
TOWARDS A COORDINATION MODEL FOR WEB SERVICES |
Author(s): |
Zakaria Maamar, Nanjangud C. Narendra and Philippe Thiran |
Abstract: |
The increasing popularity of Web services for
application integration has strengthened the need for automated Web
services composition. In order for this to succeed, however, the joint
execution of Web services requires a coordination model. Coordination’s
main use is to resolve conflicts between Web services. Conflicts may
concern sharable resources, order dependencies, or communication
delays. The proposed coordination model tackles these conflicts with
three inter-connected blocks defined as conflict, exception, and
management. Conflicts among Web services raise exceptions that are
handled using appropriate mechanisms as part of the coordination model.
The deployment of this model is illustrated using a simple yet
realistic example. |
|
Title: |
EXTRACTING AND MAINTAINING PROJECT KNOWLEDGE USING ONTOLOGIES |
Author(s): |
Panos Fitsilis, Vassilios Gerogiannis and Achilles Kameas |
Abstract: |
One of the most valuable resources for
organizations today is knowledge developed and held within their teams
during the execution of their projects. Although the need for maximal
reuse of lessons learned and knowledge accumulated for keeping companies
at the leading edge is evident, this knowledge is often lost because it
can be difficult or impossible to articulate. The k.PrOnto framework
infuses the process of project management with knowledge management
technology and provides project managers with concepts and tools to
support them in decision making and project control. The tools operate
in a stand-alone mode but, in the context of the k.PrOnto architecture,
can also be used as components of a distributed system operating at a
higher organizational level. Thus the k.PrOnto framework assists large
organizations in
identifying best practices, metrics and guidelines, starting from
individual projects and in amplifying their efforts to achieve
organizational maturity and build corporate culture and memory. |
|
Title: |
COLLABORATIVE BUSINESS PROCESS LIFECYCLES |
Author(s): |
Philipp Walter and Dirk Werth |
Abstract: |
Business process lifecycle management is well
established for the continuous improvement of internal business
processes that do not cross company borders. The concept could,
however, also be applied to enhance collaborative business processes
spanning multiple enterprises. In contrast to the intra-organizational
case, lifecycle management of cross-organizational collaborative
processes imposes several organizational and technological challenges
that result from the multiple-independent-actors environment of
collaborations. In this article, we address these challenges and
present a conceptual solution for the different phases of this
lifecycle. Finally, we propose a technical architecture that
prototypically implements these concepts. |
|
Title: |
A MULTI-LEVEL MODELING FRAMEWORK FOR DESIGNING AND IMPLEMENTING
CROSS-ORGANIZATIONAL BUSINESS PROCESSES |
Author(s): |
Ulrike Greiner, Sonia Lippe, Timo Kahl, Jörg Ziemann and
Frank-Walter Jäkel |
Abstract: |
Increasing cooperation of organizations leads
to the need for efficient modeling and implementation of
cross-organizational business processes (CBPs). Various stakeholders
that pursue different perspectives on processes are involved in the
design of CBPs. Enterprise modeling supports a common understanding of
business processes among different stakeholders across organizations
and serves as a basis for generating executable models. Models include
knowledge of internal processes as well as requirements for CBPs. The
paper presents concepts and a first prototype of a modeling framework
that supports process designers in reaching a common agreement on their
processes across different companies at different levels of
abstraction. |
|
Title: |
ON COLLABORATIONS AND CHOREOGRAPHIES |
Author(s): |
Giorgio Bruno, Giulia Bruno and Marcello La Rosa |
Abstract: |
This paper analyzes binary collaborations and
multi-party collaborations in the context of business processes and
proposes a lifecycle in which collaborations are first represented with
abstract models called collaboration processes, then embodied in
business processes and finally implemented in BPEL. In particular this
paper discusses how to represent multi-party collaborations and presents
two approaches: one is based on binary collaborations complemented with
choreographies, and the other draws upon the notion of extended binary
collaborations. |
|
Title: |
SEMANTIC WEB SERVICES COMPOSITION FOR THE MASS CUSTOMIZATION
PARADIGM |
Author(s): |
Yacine Sam, Omar Boucelma and Mohand-Saïd Hacid |
Abstract: |
In order to fulfill current customers’
requirements, companies and service providers need to supply a wide
range of products and services. This situation has recently led to the
Mass Customization Paradigm, meaning that products and services should
be designed in such a way that it is possible to deliver and adapt
different configurations. The increasing number of services available
on the Web, together with the heterogeneity of Web audiences, are among
the main reasons that motivate the adoption of this paradigm for Web
services technology. In this paper we describe a solution that allows
automatic customization of Web services: a supplier configuration,
published in a services repository, is automatically translated into
another configuration that is better suited to fulfilling customers'
needs. |
|
Title: |
WORKFLOW SEMANTIC DESCRIPTION FOR INTER-ORGANIZATIONAL COOPERATION |
Author(s): |
Nomane Ould Ahmed M'Bareck and Samir Tata |
Abstract: |
The work we present here is in line with a
novel approach for inter-organizational workflow cooperation that
consists of workflow advertisement, workflow interconnection, and
workflow cooperation. For advertisement, workflows must be described;
however, a description language like XPDL can solve only syntactic
problems. In this paper, we propose a three-step method for the
semantic description of workflows based on XPDL and OWL. First,
workflows described in XPDL are annotated to distinguish cooperative
activities from non-cooperative ones. Second, to preserve privacy, a
view, which we call a cooperative interface, is generated for each
partner. Third, cooperative interfaces are described using OWL
according to an ontology we have defined for cooperative workflows. |
|
Title: |
MIGRATING BDIFS FROM A PEER-TO-PEER DESIGN TO A GRID SERVICE
ARCHITECTURE |
Author(s): |
Tom Kirkham and Thomas Varsamidis |
Abstract: |
This paper documents the transition of a
distributed Peer-to-Peer-based business-to-business enterprise
application integration framework to one using Grid Services. In the
context of an E-Business environment, we examine the practical
strengths of Grid Service development and implementation as opposed to
a Peer-to-Peer implementation. By exploring the weaknesses in the BDIFS
Peer-to-Peer architecture and workflow, we illustrate how we have
improved the system using Grid Services. The final part of the paper
documents the new Grid Service design and workflow, in particular the
creation of the new automated trading mechanism within BDIFS. |
|
Title: |
USING MULTI-AGENT SYSTEMS FOR CHANGE MANAGEMENT PROCESSES IN THE
CONTEXT OF DISTRIBUTED SOFTWARE DEVELOPMENT PROCESSES |
Author(s): |
Kolja Markwardt, Daniel Moldt, Sven Offermann and Christine Reese |
Abstract: |
Today software engineering faces the problem
of developing distributed software systems. Due to distribution, these
systems inherit specific problems that need to be tackled during
development. Our approach to handling this problem is to provide an
integrated development environment that is based on clear and powerful
concepts and allows a system to be structured in a domain-oriented way.
As the conceptual basis we apply agent-oriented Petri nets. For the
practical control of such applications we use agent-based workflow
management system (WFMS) technology. In this paper we illustrate how to
apply our approach to change management, an important part of the
development process. The complex application scenarios illustrate the
expressive power of agent-oriented Petri nets and of WFMS technology. |
|