Area 1 - Databases and
Information Systems Integration
|
Title: |
ARCHITECTURE FOR A SME-READY
ERP-SOLUTION BASED ON WEB-SERVICES AND PEER-TO-PEER-NETWORKS |
Author(s): |
Jorge Marx Gómez and Claus
Rautenstrauch |
Abstract: |
Although the requirements of small-
to medium-sized enterprises (SMEs) for enterprise resource planning
(ERP) systems are very similar to those of big corporations, there
is still a lack of solutions for SMEs, because the roll-out as well
as the maintenance is very expensive. It has become clear that EDP
branch solutions, application service providing and stripped-down
software versions do not offer satisfying solutions. To solve
these problems we propose an architecture for a distributed
ERP system based on web services and peer-to-peer network technology
whose roll-out and maintenance are more affordable for SMEs than
those of traditional systems. |
|
Title: |
USING CORRESPONDENCE ASSERTIONS TO
SPECIFY THE SEMANTICS OF VIEWS IN AN OBJECT-RELATIONAL DATA
WAREHOUSE |
Author(s): |
Valéria Magalhães Pequeno and Joaquim
Nunes Aparício |
Abstract: |
An information integration system
provides a uniform query interface to a collection of distributed and
heterogeneous, possibly autonomous, information sources, giving
users the illusion that they interrogate a centralized and
homogeneous information system. One approach that has been used for
integrating data from multiple databases consists in creating
integrated views \cite{BLN86,ZHK96,GM97,CEMW01}, against which
queries can then be made. In this paper we propose the use of
correspondence assertions to formally specify the relationship
between the integrated view schema and the source database schemas.
In this way, correspondence assertions are used to assert that the
semantics of some components of one schema are related to the semantics of
some components of another schema. Our formalism has the advantages
of providing a better understanding of the semantics of the integrated
view, and of helping to automate some aspects of data integration. |
|
Title: |
THE DESIGN AND IMPLEMENTATION OF
DATABASE INTERFACE FOR LOGIC LANGUAGE BASED MOBILE AGENT SYSTEM |
Author(s): |
JingBo Ni, Xining Li and Lei Song |
Abstract: |
Mobile agent systems create a new way
of sharing distributed resources and providing multi-located
services. By moving computation towards the resources, they
generally generate less network traffic than the traditional
client/server model and achieve more flexibility than the Remote
Procedure Call (RPC) architecture. In order to endow agents with the
ability to access remote data resources, in this paper we discuss
the design strategies of a database interface between a mobile agent
system based on a logic programming language (such as Prolog) and
a remote DBMS. A multi-threaded database connection management
architecture is introduced especially for heavy-duty database
operations. Moreover, three levels of physical database connection
assignment (predicate level, agent level and module level) are
presented and compared. Different strategies for temporarily holding
database search results are also given in the paper, where
the result memory pool can be built locally, remotely or both.
Finally, two compatible methods, manual and automatic, are adopted
for releasing the system resources acquired during database
operations. |
|
Title: |
UNDERSTANDING THE PROBLEMS OF
ENTERPRISE SYSTEM IMPLEMENTATIONS: BEYOND CRITICAL SUCCESS FACTORS |
Author(s): |
Sue Newell, Gary David, Traci Logan,
Linda Edelman and Jay Cooprider |
Abstract: |
Many companies continue to implement
Enterprise Systems (ES) in order to take advantage of the
integrating potential of having a single common system across the
organization that can replace a multitude of independent legacy
systems. While such systems are increasingly popular, research
continues to show that they are difficult to implement successfully. A number of
studies have identified the critical success factors for such
implementations. However, in practice, it is often difficult to
ensure that these critical factors are in place and are maintained
in place across the lifespan of the implementation project. In this
paper we identify the socio-political and cultural issues that
explain why this is difficult and suggest some meta-level processes
(induction, informality and improvisation) that can help to offset
the problems with maintaining the critical success factors.
|
|
Title: |
A FORMAL DEFINITION FOR
OBJECT-RELATIONAL DATABASE METRICS |
Author(s): |
Aline Baroni, Coral Calero, Mario
Piattini and Fernando Brito e Abreu |
Abstract: |
Relational databases are the most
important in the database world and are evolving into
object-relational databases in order to allow working with new and
complex data and applications. One widely accepted mechanism for
assuring the quality of an object-relational database is the use of
formally and empirically validated metrics. It is also important to
formalize the metrics in order to gain a better understanding of
their definitions. Metrics formalization ensures the reliable
repetition of their computation and facilitates the automation of
metrics collection. In this paper we present the formalization of a
set of metrics defined for object-relational databases described
using SQL:2003. To carry out the formalization we have produced an
ontology of SQL:2003 as a framework for representing the SQL schema
definitions. The ontology has been represented using UML, and the
metrics have been defined using OCL (Object Constraint Language),
which is part of the UML 2.0 standard. |
|
Title: |
MAPPING TEMPORAL DATA WAREHOUSE
CONCEPTS |
Author(s): |
Ahmed Hezzah and A. Min Tjoa |
Abstract: |
SAP Business Information Warehouse
(BW) today is a suitable and viable option for enterprise data
warehousing and one of the few data warehouse products that offer an
integrated user interface for administering and monitoring data. In
previous work we introduced design and modeling techniques for
representing time and temporal information in enterprise data
warehouses and discussed generic problems linked to the design and
implementation of the Time dimension, which have to be considered
for global business processes, such as handling different time zones
and representing holidays and daylight saving time (DST). This paper
investigates supporting the global exchange of time-dependent
business information by mapping those temporal data warehouse
concepts to SAP BW components, such as InfoCubes and master data
tables. |
|
Title: |
QUANTITATIVE EVALUATION OF ENTERPRISE
INTEGRATION PATTERNS |
Author(s): |
Tariq Al-Naeem, Feras Dabous, Fethi
Rabhi and Boualem Benatallah |
Abstract: |
The implementation of e-business
applications is becoming a widespread practice among competitive
organizations. The primary advantage of these applications is in
supporting the core organizational Business Processes (BPs), which
may span different departments and sometimes different
organizations. We refer to such applications as Business
Process-Intensive Applications (BPIAs), in that they implement the
organization's strategic BPs. A cornerstone activity in implementing
a BPIA is the architectural design task, which embodies many
architectural design decisions, e.g. functionality exposure, access
method, new functionality implementation, etc. What makes this task
quite complex is the presence of several design approaches that vary
considerably in their consequences on various quality attributes. In
addition, since BPIAs often embody BPs that are scattered among
different departments and organizations, it is natural that more
than one stakeholder will be involved in the design process, with
different, often conflicting, quality goals. To aid in the design
process, this paper discusses a number of alternative architectural
patterns that can be reused during the architectural design of
BPIAs. It also proposes a systematic method for selecting among
these patterns according to how well they satisfy the quality
preferences of the different stakeholders. To support making
informed decisions, we leverage rigorous Multiple-Attribute Decision
Making (MADM) methods, particularly the AHP method. We validate the
applicability of this approach using a real capital markets system
from the domain of e-finance. |
|
Title: |
DWG2XML: GENERATING XML NESTED TREE
STRUCTURE FROM DIRECTED WEIGHTED GRAPH |
Author(s): |
Kate Y. Yang, Anthony Lo, Tansel
Özyer and Reda Alhajj |
Abstract: |
The overall XML file length is one of
the critical factors when we need to transfer a large amount of data
from a relational database into XML. Especially in the nested tree
structure of an XML file, redundant data can add cost to database
access, network traffic and XML query processing. Most previous
automated relational-to-XML conversion research efforts use directed
graphs to represent relations in the database and nested trees in
the XML structure. However, they all ignore that different
combinations of tree structures in a graph can have a big impact on
the XML data file size. This paper addresses this nested-structure
data file size problem. It proposes a module that can find the most
economical tree structure for the automated relational-to-XML
conversion process. It provides a plan generator algorithm to list
all the possible tree structures in a given directed weighted graph.
It also analyzes the data size of each plan and presents the most
economical tree structure to the user. It can finally create the
targeted XML documents for the user. |
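The size impact of choosing a nested tree structure can be illustrated with a small sketch (illustrative Python only; the relation and element names are invented, and the paper's plan generator algorithm is not reproduced here):

```python
import xml.etree.ElementTree as ET

# Hypothetical flat relational result: (dept, emp) pairs.
rows = [("Sales", "Ana"), ("Sales", "Bob"), ("Sales", "Eve")]

def flat_plan(rows):
    # One <row> element per tuple: the shared dept value is repeated.
    root = ET.Element("rows")
    for dept, emp in rows:
        r = ET.SubElement(root, "row")
        ET.SubElement(r, "dept").text = dept
        ET.SubElement(r, "emp").text = emp
    return ET.tostring(root)

def nested_plan(rows):
    # Nest employees under their department: shared values appear once.
    root = ET.Element("depts")
    by_dept = {}
    for dept, emp in rows:
        by_dept.setdefault(dept, []).append(emp)
    for dept, emps in by_dept.items():
        d = ET.SubElement(root, "dept", name=dept)
        for emp in emps:
            ET.SubElement(d, "emp").text = emp
    return ET.tostring(root)

# A plan generator would enumerate such structures and pick the smallest.
assert len(nested_plan(rows)) < len(flat_plan(rows))
```

Nesting under the shared parent removes the repeated `Sales` values, which is exactly the redundancy the paper's size analysis targets.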
|
Title: |
SIMULTANEOUS QUERYING OF XML AND
RELATIONAL CONTEXTS |
Author(s): |
Madani Kenab and Tayeb Ould Braham |
Abstract: |
The presentation of the results of
relational queries is flat. The prime objective of this work is to
query an XML view of relational data in order to obtain nested
results from data stored in flat form. The second objective is to
combine, in query results, structured data from a relational
database and semi-structured data from an XML database. A FLWR
expression (For Let Where Return) of the XQuery language can be
nested at various levels within another FLWR expression. In our
work, we are especially interested in nesting a FLWR expression in
the Return clause of another FLWR expression in order to nest data
in the result. In this paper, we describe all the stages necessary
to achieve these two objectives. |
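The effect of nesting one FLWR expression in the Return clause of another can be mimicked in a short sketch (a hypothetical Python analogue, not the authors' XQuery machinery; the column names are invented):

```python
from itertools import groupby
from operator import itemgetter

# Flat relational result: one (dept, emp) row per employee.
rows = [("Sales", "Ana"), ("Sales", "Bob"), ("R&D", "Carol")]

def nest(rows):
    # The outer "FLWR" iterates over departments; the inner one,
    # conceptually nested in its Return clause, collects that
    # department's employees into a nested result.
    rows = sorted(rows, key=itemgetter(0))
    return [
        {"dept": dept, "employees": [emp for _, emp in grp]}
        for dept, grp in groupby(rows, key=itemgetter(0))
    ]

print(nest(rows))
```

The flat rows come back as one nested record per department, which is the shape a Return-clause-nested FLWR produces in XQuery.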
|
Title: |
SECURE CONCURRENCY CONTROL ALGORITHM
FOR MULTILEVEL SECURE DISTRIBUTED DATABASE SYSTEMS |
Author(s): |
Navdeep Kaur, Rajwinder Singh and
Hardeep Kaur Sidhu |
Abstract: |
The majority of the research in
multilevel secure database management systems (MLS/DBMS) focuses
primarily on centralized database systems. However, with the demand
for higher performance and higher availability, database systems
have moved from centralized to distributed architectures, and
research in multilevel secure distributed database management
systems (MLS/DDBMS) is gaining more and more prominence. Concurrency
control is an integral part of database systems. The secure
concurrency control algorithms [18], [29], [15], [17] proposed in
the literature achieve correctness and security at the cost of
degraded performance for high-security-level transactions. These
algorithms infringe on fairness in processing transactions at
different security levels. Though the performance of different
concurrency control algorithms has been explored extensively for
centralized multilevel secure database management systems [11],
[31], to the best of the authors' knowledge the relative performance
of transactions at different security levels under different secure
concurrency control algorithms for MLS/DDBMSs has not yet been
reported. To fill this gap, this paper presents a detailed
simulation model of a distributed database system and investigates
the performance price paid for maintaining security with concurrency
control in a distributed database system. The paper investigates the
relative performance of transactions at different security levels. |
|
Title: |
ON THE TREE INCLUSION AND QUERY
EVALUATION IN DOCUMENT DATABASES |
Author(s): |
Yangjun Chen and Yibin Chen |
Abstract: |
In this paper, a method to evaluate
queries in document databases is proposed. The main idea of this
method is a new top-down algorithm for tree inclusion. In fact, a
path-oriented query can be considered as a pattern tree while an XML
document can be considered as a target tree. To evaluate a query S
against a document T, we check whether S is included in T. For
a query S, our algorithm needs O(|T|·|leaves(S)|) time and no extra
space to check the containment of S in document T, where |T| stands
for the number of nodes in T and leaves(S) for the leaf nodes of S.
In particular, the signature technique can be integrated into
top-down tree inclusion to cut off useless subtree checks as
early as possible. |
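The tree-inclusion relation itself (S is included in T if S can be obtained from T by deleting nodes) can be checked with a naive recursive sketch; this is an illustrative exhaustive search, not the paper's top-down algorithm, which achieves the O(|T|·|leaves(S)|) bound:

```python
def forest_included(ss, ts):
    """Check whether the ordered forest ss can be obtained from the
    forest ts by deleting nodes (the tree-inclusion relation).
    Naive exhaustive check, exponential in the worst case."""
    if not ss:
        return True
    if not ts:
        return False
    (t_label, t_children), rest = ts[0], ts[1:]
    # Option 1: delete the root of ts[0], splicing its children in.
    if forest_included(ss, t_children + rest):
        return True
    # Option 2: map the root of ss[0] onto the root of ts[0].
    s_label, s_children = ss[0]
    return (s_label == t_label
            and forest_included(s_children, t_children)
            and forest_included(ss[1:], rest))

def included(pattern, target):
    return forest_included([pattern], [target])

# Pattern tree S (a path-oriented query) and target tree T (a document).
S = ("book", [("title", []), ("author", [])])
T = ("book", [("meta", [("title", []), ("year", [])]), ("author", [])])
assert included(S, T)
```

Here deleting T's `meta` and `year` nodes leaves exactly S, so the check succeeds.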
|
Title: |
SCENARIO-BASED EVALUATION OF
ENTERPRISE ARCHITECTURE - A TOP-DOWN APPROACH FOR CHIEF INFORMATION
OFFICER DECISION MAKING |
Author(s): |
Mårten Simonsson, Åsa Lindström,
Pontus Johnson, Lars Nordström, John Grundbäck and Olof Wijnbladh |
Abstract: |
As the primary stakeholder for the
Enterprise Architecture, the Chief Information Officer (CIO) is
responsible for the evolution of the enterprise IT system. An
important part of the CIO role is therefore to make decisions about
strategic and complex IT matters. This paper presents a cost
effective and scenario-based approach for providing the CIO with an
accurate basis for decision making. Scenarios are analyzed and
compared against each other using a number of problem-specific,
easily measured system properties identified in the literature. In order
to test the usefulness of the approach, a case study has been
carried out. One CIO needed guidance on how to assign functionality
and data within four overlapping systems. The results are
quantifiable and can be presented graphically, thus providing a
cost-efficient and easily understood basis for decision making. The
study shows that the scenario-based approach can make complex
Enterprise Architecture decisions understandable for CIOs and other
business-oriented stakeholders. |
|
Title: |
NONPARAMETRIC ANALYSIS OF SOFTWARE
RELIABILITY: REVEALING THE NATURE OF SOFTWARE FAILURE DATA SERIES |
Author(s): |
Andreas S. Andreou and Constantinos
Leonidou |
Abstract: |
Software reliability is directly
related to the number and time of occurrence of software failures.
Thus, if we were able to reveal and characterize the behavior of the
evolution of actual software failures over time then we could
possibly build more accurate models for estimating and predicting
software reliability. This paper focuses on the study of the nature
of empirical software failure data via a nonparametric statistical
framework. Six different time series expressing times between
successive software failures were investigated, and random behavior
was detected, with evidence favoring a pink-noise explanation. |
|
Title: |
A PRACTICAL IMPLEMENTATION OF
TRANSPARENT ENCRYPTION AND SEPARATION OF DUTIES IN ENTERPRISE
DATABASES - PROTECTION AGAINST EXTERNAL AND INTERNAL ATTACKS ON
DATABASES |
Author(s): |
Ulf Mattsson |
Abstract: |
Security is becoming one of the most
urgent challenges in database research and industry, and there has
also been increasing interest in the problem of building accurate
data mining models over aggregate data, while protecting privacy at
the level of individual records. Instead of building walls around
servers or hard drives, a protective layer of encryption is provided
around specific sensitive data items or objects. This prevents
outside attacks as well as infiltration from within the server
itself. This also allows the security administrator to define which
data stored in databases are sensitive and thereby focusing the
protection only on the sensitive data, which in turn minimizes the
delays or burdens on the system that may occur from other bulk
encryption methods. Encryption can provide strong security for data
at rest, but developing a database encryption strategy must take
many factors into consideration. We present column-level database
encryption as the only solution that is capable of protecting
against external and internal threats, and at the same time meeting
all regulatory requirements. We use the key concepts of a security
dictionary and type-transparent cryptography, and propose solutions
for transparently storing and searching encrypted database fields.
Different stored data encryption strategies are outlined, so you can
decide the best practice for each situation, and each individual
field in your database, to handle different security and operating
requirements. Application code and database schemas are sensitive to
changes in data type and data length. The paper presents a
policy-driven solution that allows transparent data-level encryption
that does not change the data field type or length. |
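The idea of searching an encrypted field without exposing plaintext can be sketched as follows. This is a toy illustration only: the key handling, the HMAC-based equality tag and the column names are invented here, and a real deployment would store proper ciphertext alongside the tag with keys managed in the security dictionary:

```python
import hashlib
import hmac

KEY = b"column-master-key"  # illustrative only; real keys live in a security dictionary/HSM

def search_tag(value: str) -> str:
    # Deterministic keyed tag: equal plaintexts yield equal tags,
    # so equality search works without revealing the value itself.
    return hmac.new(KEY, value.encode(), hashlib.sha256).hexdigest()

# Protected column: only tags occupy the searchable field; the actual
# ciphertext (omitted here) would sit alongside and be decrypted in the
# application layer.
table = [
    {"id": 1, "ssn_tag": search_tag("123-45-6789")},
    {"id": 2, "ssn_tag": search_tag("987-65-4321")},
]

def find_by_ssn(ssn: str):
    tag = search_tag(ssn)
    return [row["id"] for row in table if row["ssn_tag"] == tag]

print(find_by_ssn("123-45-6789"))
```

Because the tag has the same fixed textual shape regardless of the input, a scheme like this keeps the searchable field's type and length stable, which is the transparency property the abstract emphasizes.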
|
Title: |
BENCHMARKING AN XML MEDIATOR |
Author(s): |
Florin Dragan and Georges Gardarin |
Abstract: |
In recent years, XML has become
the universal interchange format. Many investigations have been made
into storing, querying and integrating XML with existing applications.
Many XML-based commercial DBMSs have appeared lately. This paper
reports on the analysis of an XML mediator federating several
existing XML DBMSs. We measure their storage and querying
capabilities directly through their Java APIs and indirectly through
the XLive mediation tool. For this purpose we have created a simple
benchmark consisting of a set of queries and a variable test
database. The main scope is to reveal the weaknesses and the
strengths of the implemented indexing and federating techniques. We
analyze two commercial native XML DBMSs and an open-source
relational-to-XML mapping middleware. We first pass the queries
directly to the DBMSs and then go through the XLive XML mediator.
The results suggest that textual XML is not the best format for
exchanging data between a mediator and a wrapper, and also show some
possible improvements to XQuery support in mediation architectures. |
|
Title: |
THE HYBRID DIGITAL TREE: A NEW
INDEXING TECHNIQUE FOR LARGE STRING DATABASES |
Author(s): |
Qiang Xue, Sakti Pramanik, Gang Qian
and Qiang Zhu |
Abstract: |
There is an increasing demand for
efficient indexing techniques to support queries on large string
databases. In this paper, a hybrid RAM/disk-based index structure,
called the Hybrid Digital tree (HD-tree), is proposed. The HD-tree
keeps internal nodes in the RAM to minimize the number of disk I/Os,
while maintaining leaf nodes on the disk to maximize the capability
of the tree for indexing large databases. Experimental results using
real data have shown that the HD-tree outperformed the Prefix B-tree
for prefix and substring searches. In particular, for distinctive
random queries in the experiments, the average number of disk I/Os
was reduced by a factor of two to three, while the running time was
reduced by an order of magnitude. |
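The RAM/disk split behind the HD-tree can be caricatured in a few lines (a toy model, not the actual HD-tree layout: a fixed-length prefix map stands in for the in-RAM internal nodes, and plain lists stand in for on-disk leaf pages):

```python
class HybridIndexSketch:
    """Toy hybrid index: an in-RAM map over the first PREFIX_LEN
    characters (standing in for internal trie nodes) routes each
    query to simulated on-disk leaf pages."""
    PREFIX_LEN = 2

    def __init__(self):
        self.ram_index = {}   # prefix -> simulated disk page (list of strings)
        self.page_reads = 0   # count of simulated disk I/Os

    def insert(self, s):
        self.ram_index.setdefault(s[:self.PREFIX_LEN], []).append(s)

    def prefix_search(self, p):
        if len(p) >= self.PREFIX_LEN:
            # The in-RAM level resolves the query to a single page.
            pages = [self.ram_index.get(p[:self.PREFIX_LEN], [])]
        else:
            pages = [pg for k, pg in self.ram_index.items() if k.startswith(p)]
        self.page_reads += len(pages)   # only leaf accesses touch "disk"
        return sorted(s for page in pages for s in page if s.startswith(p))

idx = HybridIndexSketch()
for w in ("data", "database", "datum", "tree"):
    idx.insert(w)
print(idx.prefix_search("data"))
```

Keeping the routing level in RAM is what lets a prefix query cost a single simulated page read here, mirroring the I/O savings reported for the HD-tree.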
|
Title: |
JDSI: A SOFTWARE INTEGRATION STYLE
FOR INTEGRATING MS-WINDOWS SOFTWARE APPLICATIONS IN A JAVA-BASED
DISTRIBUTED SYSTEM |
Author(s): |
Jim-Min Lin, Zeng-Wei Hong and
Guo-Ming Fang |
Abstract: |
Developing software systems by
integrating existing applications/systems over the network is
becoming mature and practical. Microsoft Windows operating systems
today support a huge number of software applications, and the
construction of components could be accelerated if these commercial
software applications were transformed into software components.
This paper proposes an architectural style to support a three-phase
process for migrating MS-Windows applications towards a distributed
system using Java technologies. This style aims to provide a
solution with clear documentation and sufficient information to help
a software developer rapidly integrate MS-Windows applications.
Finally, an example parking-lot management system that assembles two
MS-Windows applications was developed in this work to demonstrate
the usage of this style. |
|
Title: |
TOWARDS PROCESS-AWARE ENTERPRISE
SOFTWARE ENVIRONMENTS |
Author(s): |
Bela Mutschler, Johannes Bumiller and
Manfred Reichert |
Abstract: |
To stay competitive in the market,
companies must tightly interlink their software systems with their
business processes. While the business process paradigm has been
widely accepted in practice, the majority of current software
applications are still not implemented in a process-oriented way.
Even when they are, process logic is often “hard-wired” in the
application code, leading to inflexible and rigid software systems
that do not reflect business needs. In such a scenario, quick
adaptation of the software systems to changed business processes is
almost impossible. Therefore, many software systems are already out
of date by the time they are introduced into practice, and they
generate high maintenance costs afterwards. Due to this
unsatisfactory business process support, a software system’s return
on investment is often low. By contrast, technologies that enable
the realization of process-aware enterprise environments will
significantly improve the added value of IT to a company’s business.
In this paper we characterize process-aware enterprise environments.
Additionally, we identify promising technologies that particularly
enable process-awareness and lead to lower development and
maintenance costs as well as higher benefits. We present a
conceptual framework, which describes process-aware enterprise
environments, and discuss relevant topics. |
|
Title: |
A FRAMEWORK FOR ERP INTEGRATION |
Author(s): |
Delvin Grant and Qiang Tu |
Abstract: |
A conceptual framework for better
understanding of ERP integration issues is proposed based on
existing literature. Its implications for practice and future
research are discussed. |
|
Title: |
CRITICAL SUCCESS FACTORS IN ERP
PROJECTS: CASE STUDIES IN TWO INDUSTRIAL ORGANIZATIONS IN THE
NETHERLANDS |
Author(s): |
Jos J.M. Trienekens, Wouter Kuijpers
and Ruud Hendriks |
Abstract: |
Over the past decade,
organizations have become increasingly concerned with the
implementation of Enterprise Resource Planning (ERP) systems.
Implementation can be considered a process of organizational change
influenced by organizational, technological and human factors.
This paper reports on critical success factors (CSFs) in two actual
ERP implementation projects in industry. Critical success factors
are recognized and used in these projects and serve as a
reference base for monitoring and controlling the implementation
projects. The paper identifies both the (dis)advantages of CSFs and
shortcomings of ERP implementation project management. |
|
Title: |
USING CRITICAL SUCCESS FACTORS FOR
ASSESSING CRITICAL ACTIVITIES IN ERP IMPLEMENTATION WITHIN SMES |
Author(s): |
Paolo Faverio, Donatella Sciuto and
Giacomo Buonanno |
Abstract: |
The aim of this research is the
investigation and analysis of the critical success factors (CSFs) in
the implementation of ERP systems within SMEs. Papers in the ERP
research field have focused on successes and failures of
implementing such systems in large organizations. Within the highly
differentiated set of computer-based systems available, ERP
systems represent the most common solution adopted by large
companies to pursue their strategies. By contrast, until now
small and medium enterprises (SMEs) have shown little interest in
ERP systems due to the lack of internal competence and resources
that characterizes those companies. Nevertheless, now that ERP
vendors’ offerings show a noteworthy adjustment to SMEs’
organizational and business characteristics, it is of interest to
study and analyze in depth the reasons that can inhibit or foster
ERP adoption within SMEs. This approach cannot leave out the
analysis of the critical success factors in ERP implementation:
despite their wide coverage in the literature, these research
efforts have seldom addressed SMEs. This paper proposes a
methodology to support the small or medium entrepreneur in
identifying the critical factors to be monitored along the whole ERP
adoption process. |
|
Title: |
MUSICAL RETRIEVAL IN P2P NETWORKS
UNDER THE WARPING DISTANCE |
Author(s): |
Ioannis Karydis, Alexandros
Nanopoulos, Apostolos N. Papadopoulos and Yannis Manolopoulos |
Abstract: |
Peer-to-peer (P2P) networks present
the advantages of an increased overall database size offered by
the network nodes, fault tolerance in the face of peer failure, and
workload distribution. Music file storage and exchange have long
abandoned the traditional centralised client/server approach for the
advantages of P2P networks. In this paper, we examine the problem of
searching for similar acoustic data over unstructured decentralised
P2P networks. As the distance measure, we utilise the time warping
distance. We propose a novel algorithm which efficiently retrieves
similar audio data. The proposed algorithm takes advantage of the
absence of overhead in unstructured P2P networks and minimises the
traffic required for all operations through an intelligent sampling
scheme. Detailed experimental results show the efficiency of the
proposed algorithm compared to an existing baseline algorithm. |
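The time warping distance used as the similarity measure follows the classic dynamic-programming recurrence; a minimal sketch on numeric sequences (e.g. feature sequences extracted from audio; the audio processing itself is out of scope here):

```python
def dtw(a, b):
    """Dynamic time warping distance between two numeric sequences,
    with absolute difference as the local cost."""
    INF = float("inf")
    n, m = len(a), len(b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three allowed warping steps.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# A repeated sample costs nothing under warping, unlike Euclidean distance.
print(dtw([1, 2, 3], [1, 2, 2, 3]))  # → 0.0
```

This tolerance to tempo variation is why warping distances suit melodic similarity search better than lock-step measures.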
|
Title: |
A VIDEO DELIVERY METHOD USING
AVAILABLE BANDWIDTH OF LINKS WITH BUFFERS AND DISKS |
Author(s): |
Hideaki Ito and Teruo Fukumura |
Abstract: |
Designing networked multimedia systems
requires scheduling policies and methods for delivering videos
through the network, since videos are key content and are continuous
media. Such systems allocate resources before video clips leave
their servers, in order to guarantee continuous play of the videos.
The policies for achieving video delivery play an important role in
effective delivery. How to utilize the links is a significant
problem, since their capacity is restricted and extending that
capacity is difficult. The policy presented in this paper is that
the available network bandwidth is used for delivering one video
clip at a time: the bandwidth of a link is exclusively used to
deliver only one video clip. On the other hand, buffers and disks
can be provisioned more easily than links. The policy uses these
resources to deliver videos in a complementary way, in the sense
that they store the delivered video and are used to prevent link
overflow. Moreover, some simulation results are shown, in which the
amount of buffer space is restricted and disks are used to store the
video temporarily. |
|
Title: |
AN INTEGRATIVE FRAMEWORK TO ASSESS
AND IMPROVE INFORMATION QUALITY MANAGEMENT IN ORGANIZATIONS |
Author(s): |
Ismael Caballero, Jesús Rodríguez and
Mario Piattini |
Abstract: |
Information quality has become a
decisive factor in organizations, since it is the basis for
strategic decisions. Thus, many research lines over the last decade
have looked at specific data and information quality issues from
different standpoints. Taking care of data and information
quality goes beyond the definition of data quality dimensions, and
today there is still a lack of an integrative framework that can
guide organizations in the assessment and improvement of data and
information quality in a coordinated and global way. This paper
tries to fill this gap by proposing a framework based on the
Information Management Process (IMP) concept. It consists of two
main components: an Information Quality Management Model structured
in maturity levels (CALDEA) and an Assessment and Improvement
Methodology (EVAMECAL). The methodology allows the assessment of an
IMP in terms of the maturity levels given by CALDEA, which is used
as guidance for improvements. |
|
Title: |
DYNAMIC DATABASE INTEGRATION IN A
JDBC DRIVER |
Author(s): |
Terrence Mason and Ramon Lawrence |
Abstract: |
Current integration techniques are
unsuitable for large-scale integrations involving numerous
heterogeneous data sources. Existing methods either require the user
to know the semantics of all data sources or they impose a static
global view that is not tolerant of schema evolution. These
assumptions are not valid in many environments. We present a
different approach to integration based on annotation. The
contribution is the elimination of the bottleneck of global view
construction by moving the complicated task of identifying semantics
to local annotators instead of global integrators. This allows the
integration to be more automated, scalable, and rapidly deployable.
The algorithms are packaged in an embedded database engine contained
in a JDBC driver capable of dynamically integrating data sources.
Experimental results demonstrate that the Unity JDBC driver
efficiently integrates data located in separate data sources with
minimal overhead. |
|
Title: |
AN INTERNET ACCOUNTING SYSTEM: A
LARGE SCALE SOFTWARE SYSTEM DEVELOPMENT USING MODEL DRIVEN
ARCHITECTURE |
Author(s): |
Kenji Ohmori |
Abstract: |
Software development should change
from a handcraft industry to industrialized production, like
manufacturing, in order to obtain high productivity. In the
knowledge-creating industry of software development, engineers have
to concentrate on core work; peripheral work should be avoided as
much as possible. Model-driven architecture helps programmers work
mainly on analysis and design without much concern for
implementation. The Internet Accounting System, which is a standard
model of enterprise systems, has been developed with model-driven
architecture with high productivity. |
|
Title: |
ESTIMATING PATTERNS CONSEQUENCES FOR
THE ARCHITECTURAL DESIGN OF E-BUSINESS APPLICATIONS |
Author(s): |
Feras T. Dabous, Fethi A. Rabhi,
Hairong Yu and Tariq Al-Naeem |
Abstract: |
Quality requirements play an
important role in the success of enterprise e-business applications
that support the automation of essential Business Processes (BPs).
The functionality of each application may correspond to specific
parts of the functionalities in a number of quality-proven
monolithic and heterogeneous legacy systems. We refer to the
development of such applications as BP Automation. In previous work,
we have identified a range of patterns that capture best practices
for the architectural design of such applications with the presence
of legacy functionality. In this paper, we present and discuss
quantitative models of patterns' consequences to systematically
estimate a number of quality attributes, mainly development
effort and maintenance effort. The estimations for these qualities
and the preferences provided by the stakeholders would affect the
nomination of the architectural approach. A real life case study in
the domain of e-finance and in particular capital markets trading is
used in this paper to validate these models. |
|
Title: |
BUILDING APPLICATIONS ABLE TO COPE
WITH PROBLEMATIC DATA USING A DATAWARP APPROACH |
Author(s): |
Stephen Crouch, Peter Henderson and
Robert John Walters |
Abstract: |
As enterprise systems develop and
become ever more interconnected, they have to work with and store
ever-increasing quantities of data. Inevitably, some proportion of
this data is incorrect or contains inconsistencies. In general,
today’s systems struggle to cope when they encounter such situations,
as their logic and operation are based on the implicit assumption
that the data they use is consistent, if not actually correct. The
naïve solution is to strive to eliminate errors and inconsistencies
from the data. However, it is clear that no matter how tough we make
our procedures and mechanisms for data collection and maintenance
activities, we cannot hope to eliminate them entirely. Instead, we
need to build tolerance into our applications to permit them to
operate notwithstanding shortcomings they may encounter in the data
they use. In a series of experiments, we have shown that an
application using our “DataWarp” approach to data enjoys a real
advantage in one specific environment. This paper describes applying
the approach more widely. |
|
Title: |
A FRAMEWORK FOR PARALLEL QUERY
PROCESSING ON GRID-BASED ARCHITECTURE |
Author(s): |
Khin Mar Soe, Than Nwe Aung, Aye Aye
Nwe, Thinn Thu Naing and Nilar Thein |
Abstract: |
With relations growing larger and
more distributed, and queries becoming more complex, parallel query
processing is an increasingly attractive option for improving the
performance of database systems. Distributed and parallel query
processing has been widely used in data-intensive applications where
data of relevance to users are stored at multiple locations, and it
is becoming a reality. It can also be important in the Grid, since
grid technologies have enabled sophisticated interaction and data
sharing between resources that may belong to different departments or
organizations. In this paper, we propose a three-tier middleware
system for optimizing and processing distributed queries in parallel
on a Cluster Grid architecture. The main contributions of this paper
are providing transparent and integrated access to distributed
heterogeneous data resources and obtaining the performance
improvements of implicit parallelism by extending technologies from
parallel databases. We also propose a dynamic programming algorithm
for query optimization and a site selection algorithm for resource
balancing. An example query on employee databases is used throughout
the paper to show the benefits of the system. |
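As an illustration of the dynamic-programming idea behind such query optimizers, the sketch below enumerates left-deep join orders bottom-up over subsets of relations. The cost model (pairwise cardinality products with a fixed selectivity) is a toy assumption for illustration, not the algorithm from the paper:

```python
from itertools import combinations

def best_join_order(cards, selectivity=0.01):
    """Left-deep dynamic-programming join ordering.

    `cards` maps relation name -> estimated cardinality. Joining a partial
    plan with one more relation costs the product of their sizes (a toy
    model); the result size is scaled by a fixed selectivity.
    """
    rels = list(cards)
    # best[S] = (result size, total cost, plan) for each subset S of relations
    best = {frozenset([r]): (cards[r], 0.0, r) for r in rels}
    for k in range(2, len(rels) + 1):
        for subset in map(frozenset, combinations(rels, k)):
            for r in subset:                      # r is joined last (left-deep)
                lsize, lcost, lplan = best[subset - {r}]
                cost = lcost + lsize * cards[r]
                size = lsize * cards[r] * selectivity
                if subset not in best or cost < best[subset][1]:
                    best[subset] = (size, cost, (lplan, r))
    return best[frozenset(rels)]
```

For cardinalities such as `{"emp": 1000, "dept": 10, "proj": 100}`, the search joins the two small relations first and adds the large one last, which is the cheapest order under this toy model.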
|
Title: |
ONTOLOGY BASED EXTRACTION AND
INTEGRATION OF INFORMATION FROM UNSTRUCTURED DOCUMENTS |
Author(s): |
Naychi Lai Lai Thein, Khin Haymar Saw
Hla and Ni Lar Thein |
Abstract: |
The Semantic Web is an extension of
the current Web in which information is given well-defined meaning,
better enabling computers and people to work in cooperation. One of
the basic problems in the development of the Semantic Web is
information integration. Indeed, the web is composed of a variety of
information sources, and in order to integrate information from such
sources, their semantic integration and reconciliation is required.
Moreover, web pages are formatted with HTML, which is only a
human-readable format whose meaning agents cannot understand. In this
paper, we present an approach that extracts information from
unstructured documents (e.g. HTML) and converts it to a standard
format (XML) by using a source ontology. Then, we translate the XML
output to a local ontology. This paper also describes a key
technology for mapping between ontologies that computes similarity
measures to express complex relationships among concepts. To address
this problem, we apply a machine learning approach for semantic
interoperability in the real commercial and governmental world. |
|
Title: |
AN APPLICATION TO INTEGRATE
RELATIONAL AND XML DATA SOURCES |
Author(s): |
Ana Mª Fermoso García, Roberto Berjón
Gallinas and Mª José Gil Larrea |
Abstract: |
Nowadays, especially with the
Internet explosion, enterprises have to work with data from
heterogeneous sources, such as data from conventional databases or
from new sources of the Internet world like XML or HTML documents.
Organizations have to work with these different data sources at the
same time, so it is necessary to find some way to integrate this
heterogeneous information. In this paper we focus on two main types
of data: conventional data from relational databases, and the new
web data format, XML. The traditional relational database continues
to be the main data store, while XML has become the main format for
exchanging and representing data on the web. Ultimately, our goal is
for the data needed at any moment to be available in a single common
format, XML, because this is the most widely used format on the web.
This paper proposes an efficient environment for accessing
relational databases from a web perspective using XML. Our
environment defines a query system based on XML for relational
databases, called XBD. XBD has a full XML appearance: the query
language and query results are in XML format, so for the end user it
is similar to querying an XML document. This system includes a model
to adapt any relational database so that it can be queried in two
new query languages, derived from the XSL and XQuery languages, and
a software tool that implements the functionality of the XBD
environment. |
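A generic illustration of giving relational query results an XML appearance (a minimal sketch using SQLite and ElementTree; the `employee` table and its columns are hypothetical, and this is not the XBD system itself):

```python
import sqlite3
import xml.etree.ElementTree as ET

def query_as_xml(con, sql, params=()):
    """Run a relational query and wrap each result row in XML elements."""
    cur = con.execute(sql, params)
    cols = [d[0] for d in cur.description]      # column names become tags
    root = ET.Element("results")
    for row in cur:
        row_el = ET.SubElement(root, "row")
        for col, value in zip(cols, row):
            ET.SubElement(row_el, col).text = str(value)
    return ET.tostring(root, encoding="unicode")

# Hypothetical example data
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employee(name TEXT, dept TEXT)")
con.execute("INSERT INTO employee VALUES ('Ana', 'sales')")
xml = query_as_xml(con, "SELECT name, dept FROM employee")
```

Here the result of the SQL query is serialized as `<results><row><name>…</name>…</row></results>`, so a client can consume it like an XML document.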
|
Title: |
CHANGE IMPACT ANALYSIS APPROACH IN A
CLASS HIERARCHY |
Author(s): |
Khine Khine Oo |
Abstract: |
Change impact analysis is a technique
for determining the potential effects of changes on a software
system. As software systems evolve, changes made to those systems
can have unintended impacts elsewhere. Although object-oriented
features such as encapsulation, inheritance, polymorphism, and
dynamic binding contribute to the reusability and extensibility of
systems, it is more difficult to identify the components affected by
changes because complex dependencies exist between classes and
attributes. In this paper, we propose a change impact analysis
approach for a class hierarchy. Our approach is based on program
slicing techniques to extract the impacted program fragment with
respect to a slicing criterion of change information, while aiming
to minimize unexpected side effects of change. We believe that our
impact analysis approach supports software developers in their
maintenance process as well as in debugging and testing. |
|
Title: |
CHANGE DETECTION AND MAINTENANCE OF
AN XML WEB WAREHOUSE |
Author(s): |
Ching-Ming Chao |
Abstract: |
The World Wide Web is a popular
broadcast medium that contains a huge amount of information. The web
warehouse is an efficient and effective means to facilitate
utilization of information on the Web. XML has become the new
standard for semi-structured data exchange over the Web. In this
paper, therefore, we study the XML web warehouse and propose an
approach to the problems of change detection and warehouse
maintenance in an XML web warehouse system. This paper has three
major contributions. First, we propose an object-oriented data model
for XML web pages in the web warehouse as well as system
architecture for change detection and warehouse maintenance. Second,
we propose a change detection method based on mobile agent
technology to actively detect changes of data sources of the web
warehouse. Third, we propose an incremental and deferred maintenance
method to maintain XML web pages in the web warehouse. We compared
our approach with a rewriting approach to storage and maintenance of
the XML web warehouse by experiments. Performance evaluation shows
that our approach is more efficient than the rewriting approach in
terms of the response time and storage space of the web warehouse. |
|
Title: |
TOWARDS DATA WAREHOUSES FOR NATURAL
HAZARDS |
Author(s): |
Hicham Hajji, Mohand-Said Hacid and
Hassan Badir |
Abstract: |
Data warehousing has emerged as an
effective technique for converting data into useful information. It
is an improved approach to integrate data from multiple, often very
large, distributed, heterogeneous databases and other information
sources. This paper examines the possibility of using the data
warehousing technique in the natural hazards management framework to
integrate various functional and operational data which are usually
scattered across multiple, dispersed and fragmented systems. We
present a conceptual data model for the data warehouse in the
presence of various data formats such as geographic and multimedia
data. We propose OLAP operations for browsing information in the
data warehouse. |
|
Title: |
XML-BASED SEMANTIC DATABASE
DEFINITION LANGUAGE |
Author(s): |
Naphtali Rishe, Malek Adjouadi, Maxim
Chekmasov, Dmitry Vasilevsky, Scott Graham, Dayanara Hernandez and
Ouri Wolfson |
Abstract: |
This paper analyzes different
options for semantic database presentation and describes a
presentation format, XSDL. Presentation of a semantic database in a
certain format implies that the format fully preserves the database
content: if the database is exported to this format and then
imported back into the database engine, the resulting database
should be equivalent to the one that was exported. XSDL is used for
information exchange, reviewing data from databases, debugging
database applications and for recovery purposes. Among other
requirements that XSDL meets are support of both schema and data,
readability by the user (therefore XSDL is a text format), full
preservation of database content, support for simple and fast
export/import algorithms, portability across platforms, and
facilitation of data exchange. |
|
Title: |
TOWARDS AN AUTOMATIC DATA MART DESIGN |
Author(s): |
Ahlem Nabli, Ahlem Soussi, Jamel
Feki, Hanène Ben Abdallah and Faïez Gargouri |
Abstract: |
The manual design of data warehouse
and data mart schemes can be a tedious, error-prone, and
time-consuming task. In fact, it is a highly complex engineering
task that calls for a methodological support. This paper lays the
grounds for an automatic, stepwise approach for the generation of
data warehouse and data mart schemes. For this, it first proposes a
standard format for OLAP requirement acquisition. Secondly, it
defines an algorithm that transforms automatically the OLAP
requirements into data marts modelled either as star or
constellation schemes. Thirdly, it defines a set of unification
rules that merge the generated data mart schemes to construct the
data warehouse schema. Finally, it outlines the mapping rules
between the data sources and the data mart schemes. |
|
Title: |
AN EFFICIENT APPROACH FOR WEB-SITE
ADAPTATION |
Author(s): |
Seema Jani, Sam Makki and Xiaohua Jia |
Abstract: |
This paper presents a novel
approach to web-site adaptation, the Preference-function Algorithm
(PFA). The algorithm extracts future preferences from the users'
past web navigational activities. Server web logs are used to
identify users' navigation behaviors by examining the traversals of
various web pages. In this approach, the sessions are modeled as a
finite state graph, where each visited web page is defined as a
state. Traversing among the various states then provides the
framework for determining the users' interests. |
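The session-as-state-graph idea can be sketched by counting page-to-page transitions in logged sessions and ranking likely next pages by frequency (a minimal illustration, not the authors' preference function):

```python
from collections import defaultdict

def build_graph(sessions):
    """Count transitions between consecutively visited pages (states)."""
    counts = defaultdict(lambda: defaultdict(int))
    for session in sessions:
        for cur, nxt in zip(session, session[1:]):
            counts[cur][nxt] += 1
    return counts

def next_page_preferences(counts, page):
    """Rank candidate next states by observed transition frequency."""
    successors = counts.get(page, {})
    total = sum(successors.values())
    if not total:
        return []
    return sorted(((p, c / total) for p, c in successors.items()),
                  key=lambda item: -item[1])
```

Given a few sessions starting at a home page, the ranking for the `home` state directly reflects how often each page followed it in the logs.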
|
Title: |
INTEGRATING WORKFLOW EXTENSIONS INTO
A PROCESS-INTEGRATED ENVIRONMENT FOR CHEMICAL ENGINEERING |
Author(s): |
Michalis Miatidis and Matthias Jarke |
Abstract: |
Design is one of the most complex and
creative tasks undertaken by chemical engineers. The early
production stages of chemical design require an adequate support
because of their critical impact on the competitiveness of the final
products, as well as their environmental impact. In cooperation with
researchers and industries from the chemical engineering domain, we
have created an integrated flowsheet-centered environment for the
support of the early stages of design. This environment has been
built on top of the PRIME (Process-Integrated Modelling
Environments) framework which empowers the delivery of direct
fine-grained method guidance to the engineers through
process-integrated tools. In order to address the global need for
enterprise integration observed in today's highly competitive global
economy, we had to make our system more aware of further
organizational aspects of the executed processes. As a solution to
this challenge, we integrated a number of workflow extensions inside
our system. These extensions enabled PRIME to provide its method
guidance further across the inter- and intra-enterprise environment
of our enacted processes, with the future goal of seamlessly
interoperating with other external systems of the overall enterprise
environment. In this paper, after capturing the rationale behind the
need for this integration, we successively describe the integrated
environment support built upon PRIME and detail the extensions
employed. Finally, we illustrate our approach on a small case study
from our experience. |
|
Title: |
AN INTEGRATED DECISION SUPPORT TOOL
FOR EU POLICIES ON HEALTH, TRANSPORT AND ARTISTIC HERITAGE RECOVERY
|
Author(s): |
Kanana Ezekiel and Farhi Marir |
Abstract: |
In this paper, we describe an ongoing
EU-funded project (ISHTAR) that develops an advanced integrated
decision tool (the ISHTAR suite) for analyzing the effects of
long-term and short-term policies to improve the quality of the
environment, citizens' health and the preservation of heritage
monuments. From the background of the project, the paper goes on to
explain the integration of a large number of tools aimed at
knowledge management and knowledge sharing to allow European cities
to make balanced decisions on a wide range of issues such as health,
noise, pollution, transport, and monumental heritage. We also
identify solutions to various problems and difficulties when
attempting to represent and share knowledge. |
|
Title: |
A UNIFIED FRAMEWORK FOR APPLICATION
INTEGRATION - AN ONTOLOGY-DRIVEN SERVICE-ORIENTED APPROACH |
Author(s): |
Saïd Izza, Lucien Vincent and Patrick
Burlat |
Abstract: |
A crucial problem in enterprise
application integration (EAI) is semantic integration. This problem
is not correctly addressed by today's EAI solutions, which focus
mainly on technical and syntactical integration. Addressing the
semantic aspect will promote EAI by giving it more consistency and
robustness. Some efforts have been made to solve the semantic
problem, but they are still not mature. This article proposes an
approach that combines both ontologies and web services
in order to overcome the integration problem. |
|
Title: |
CHOOSING GROUPWARE TOOLS AND
ELICITATION TECHNIQUES ACCORDING TO STAKEHOLDERS' FEATURES |
Author(s): |
Gabriela N. Aranda, Aurora Vizcaíno,
Alejandra Cechich and Mario Piattini |
Abstract: |
The set of groupware tools used
during a distributed development process is usually chosen on the
basis of predetermined business policies or the personal preferences
of managers or of the people in charge of the project. However, the
chosen groupware tools may not be the most appropriate for all the
group members, and it is possible that some of them will not be
completely comfortable with the tools. To avoid this situation we have
built a model and its supporting prototype tool which, based on
techniques from psychology, suggests an appropriate set of groupware
tools and elicitation techniques according to stakeholders’
preferences. |
|
Title: |
CWM-BASED INTEGRATION OF XML
DOCUMENTS AND OBJECT-RELATIONAL DATA |
Author(s): |
Iryna Kozlova, Martin Husemann,
Norbert Ritter, Stefan Witt and Natalia Haenikel |
Abstract: |
In today’s networked world, a
plenitude of data is spread across a variety of data sources with
different data models and structures. In order to leverage the
potential of distributed data, effective methods for the integrated
utilization of heterogeneous data sources are required. In this
paper, we propose a model for the integration of the two predominant
types of data sources, (object-)relational and XML databases. It
employs the Object Management Group’s Common Warehouse Metamodel to
resolve structural heterogeneity and aims at an extensively
automatic integration process. Users are presented with an SQL view
and an XML view on the global schema and can thus access the
integrated data sources via both native query languages, SQL and
XQuery. |
|
Title: |
QL-RTDB: QUERY LANGUAGE FOR REAL-TIME
DATABASES |
Author(s): |
Cicília R. M. Leite, Yáskara Y. M. P.
Fernandes, Angelo Perkusich, Pedro F. R. Neto and Maria L. B.
Perkusich |
Abstract: |
Although some research has been
directed at real-time databases, some of the functionalities they
require, such as concurrency control, scheduling and query
languages, are still under investigation. To address these problems,
we propose extending the structured query language (SQL) for use in
real-time databases, producing what we call the query language for
real-time databases (QL-RTDB). This article presents the
implementation of QL-RTDB. As a result, the best execution sequence
of the transactions' operations is produced, in which the maximum
number of transactions meets its deadlines using valid data. |
|
Title: |
THE INDEX UPDATE PROBLEM FOR XML DATA
IN XDBMS |
Author(s): |
Beda Christoph Hammerschmidt, Martin
Kempa and Volker Linnemann |
Abstract: |
Database Management Systems are a
major component of almost every information system. In relational
Database Management Systems (RDBMS), indexes are well known and
essential for the efficient execution of frequent queries. For XML
Database Management Systems (XDBMS), no index standards are
established yet, although they are no less necessary. An inevitable
side effect of any index is that modifications of the indexed data
have to be reflected in the index structure itself. This leads to
two problems: first, it has to be determined whether a modifying
operation affects an index or not. Second, if an index is affected,
the index has to be updated efficiently, at best without rebuilding
the whole index. In recent years many approaches were introduced for
indexing XML data in an XDBMS; all of them fall short, to varying
degrees, with respect to updates. In this paper we give an
algorithm, based on finite automaton theory, that determines whether
an XPath-based database operation affects an index defined
universally upon keys, qualifiers and a return value of an XPath
expression. In addition, we give algorithms showing how we update
our KeyX indexes efficiently if they are affected by a modification.
The Index Update Problem is relevant for all applications that use a
secondary XML data representation (e.g. indexes, caches, XML
replication/synchronization services) where updates must be
identified and realized. |
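The affected-index test can be illustrated with a much-simplified path matcher; the paper's algorithm uses a finite automaton over full XPath, whereas this sketch handles only child steps and a `*` wildcard:

```python
def _steps(path):
    """Split an absolute child-step path like /lib/book/title into steps."""
    return [s for s in path.split("/") if s]

def _step_match(a, b):
    return a == "*" or b == "*" or a == b

def update_affects_index(index_path, update_path):
    """Return True if an update at `update_path` may touch data covered by
    an index defined on `index_path`.

    Steps are compared pairwise; whichever path ends first covers the
    whole subtree below it, so remaining steps need not match.
    """
    pairs = zip(_steps(index_path), _steps(update_path))
    return all(_step_match(a, b) for a, b in pairs)
```

For example, modifying the subtree at `/lib/book` affects an index on `/lib/book/title`, while an update under `/lib/journal` does not.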
|
Title: |
AN ARCHITECTURE FOR
LOCATION-DEPENDENT SEMANTIC CACHE MANAGEMENT |
Author(s): |
Heloise Manica, Murilo S. de Camargo
and M.A.R. Dantas |
Abstract: |
Advances in mobile computing and
wireless communications are enabling approaches that consider the
geographical position of a mobile user in order to access data that
depends on it. Location-Dependent Information Services are an
emerging class of application that allows new types of queries, such
as location-dependent queries and continuous queries. In these
systems, data caching plays an important role in data management due
to its ability to improve system performance and availability in
case of disconnection. In a mobile environment, cached data can
become obsolete when the client moves from one location to another.
Therefore, cache management requires more than traditional
solutions, owing to mobility and location. This paper presents a new
semantic cache scheme for location-dependent systems based on
spatial properties. The proposed architecture is called
Location-Dependent Semantic Cache Management (LDSCM). In addition,
we examine location-dependent query processing issues and propose a
solution for the reorganization of the cached semantic segments. |
|
Title: |
COCO: COMPOSITION MODEL AND
COMPOSITION MODEL IMPLEMENTATION |
Author(s): |
Naiyana Tansalarak and Kajal T.
Claypool |
Abstract: |
Component-based software engineering
attempts to address the ever increasing demand for new software
applications by enabling a compositional approach to software
construction in which applications are built from pre-fabricated
components, rather than developed from scratch. However, the success
of component-based development has been impeded by interoperability
concerns that often come into play when composing two or more
independently developed components. These concerns encompass five
incompatibility dimensions: component model, semantic, syntactic,
design and platform. In this paper we propose CoCo, a composition
model that elevates compositions to first-class citizenship and
defines a standard for describing the composition of components
transparently to any underlying incompatibilities between the
collaborating components, together with a CoCo composition model
implementation that provides the required support to describe and
subsequently execute the composition to produce a composed
application. In particular, we advocate the use of XML Schemas as a
mechanism to support the composition model. To support the
composition model implementation we provide (1) a taxonomy of
primitive composition operators to describe the {\em connection}
between components; (2) XML documents as a description {\em
language} for the compositions; and (3) the development of a set of
deployment plugins that address any incompatibilities and enable the
generation of the composed application (or composite component) in
different languages and component models as well as on different
platforms. |
|
Title: |
SEFAGI: SIMPLE ENVIRONMENT FOR
ADAPTABLE GRAPHICAL INTERFACES - GENERATING USER INTERFACES FOR
DIFFERENT KINDS OF TERMINALS |
Author(s): |
Tarak Chaari and Frédérique Laforest |
Abstract: |
The SEFAGI project takes place in
domains where many different user interfaces are needed in the same
application. Instead of manually developing all the required
windows, we propose a platform that automatically generates the
needed code from high level descriptions of these windows. Code
generation is done for standard screens and for small screens on
mobile terminals. New windows are automatically handled by an
execution layer on the terminal. Data adaptation to the different
terminals is also provided. A platform-independent window
description language has been defined. |
|
Title: |
TABLE-DRIVEN PROGRAMMING IN SQL FOR
ENTERPRISE INFORMATION SYSTEMS |
Author(s): |
Hung-chih Yang and D. Stott Parker |
Abstract: |
In database systems, business logic
is usually implemented in the forms of external processes, stored
procedures, user-defined functions, components, objects,
constraints, triggers, etc. In this paper, we propose storing
business process logic in the attributes of tuples as functions
defined by SQL expressions (or user-defined functions). The idea is
to treat functions as data, and to extend the type system of a
relational database to include function datatypes. In short, data
and functions are integrated in a relational manner. The
introduction of these \emph{lightweight functions} to relational
databases gives a basis for applying the software-engineering
methodology of \emph{table-driven programming} in SQL. This
methodology advocates storing functions and data in tables. The
query evaluation process then only needs to be extended with the
mechanical evaluation of ``joined'' data and functions. This
approach can make stored business logic as transparent to understand
and maintain as relational data. |
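The functions-as-data idea can be sketched in an ordinary SQL engine by storing an SQL expression in an attribute and letting the engine evaluate it when the row is "joined" with its arguments. A minimal SQLite sketch (the `rules` table, its expression column, and the pricing rules are hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Each rule row stores a pricing *function* as an SQL expression over `qty`.
con.execute("CREATE TABLE rules(item TEXT, price_fn TEXT)")
con.executemany("INSERT INTO rules VALUES (?, ?)",
                [("widget", "qty * 2.5"), ("gadget", "qty * 4.0 + 1.0")])

def apply_rule(con, item, qty):
    """Fetch the stored expression and let the engine evaluate it for qty."""
    (expr,) = con.execute(
        "SELECT price_fn FROM rules WHERE item = ?", (item,)).fetchone()
    # Bind the argument as a one-row derived table, then evaluate the
    # stored expression against it.
    (value,) = con.execute(
        f"SELECT {expr} FROM (SELECT ? AS qty)", (qty,)).fetchone()
    return value
```

The stored expression behaves like a lightweight function: changing a rule is a data update, not a code change.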
|
Title: |
ASPECT-ORIENTED DOMAIN SPECIFIC
LANGUAGES FOR ADVANCED TRANSACTION MANAGEMENT |
Author(s): |
Johan Fabry and Thomas Cleenewerck |
Abstract: |
Transaction management is a widely
used concurrency management technique in distributed systems,
although it has some known drawbacks. These have been researched in
the past, and many solutions in the form of advanced transaction
models have been proposed. However, none of these models is
currently in use. An important reason for this is that they are too
difficult for application programmers to use, because of their
complexity. In this paper we show how this can be solved by letting
the application programmer specify these advanced transactions at a
much higher abstraction level. To achieve this, we marry the
software engineering techniques of Aspect Oriented Programming and
Domain-Specific Languages. This allows the programmer to declare
advanced transactions separately in one concise specification. |
|
Title: |
ANALYTICAL AND EXPERIMENTAL
EVALUATION OF STREAM-BASED JOIN |
Author(s): |
Henry Kostowski and Kajal T. Claypool |
Abstract: |
Continuous queries over data streams
have gained popularity as the breadth of possible applications,
ranging from network monitoring to online pattern discovery, has
increased. Joining of streams is a fundamental issue that must be
resolved to enable complex queries over multiple streams. However,
as streams can represent potentially infinite data, it is infeasible
to have full join evaluations as is the case with traditional
databases. Joins in a stream environment are thus evaluated not over
entire streams, but on specific windows defined on the streams. In
this paper, we present windowed implementations of the traditional
nested loops and hash join algorithms. In our work we analytically
and experimentally evaluate the performance of these algorithms for
different parameters. We find that, in general, a hash join provides
better performance. We also investigate invalidation strategies to
remove stale data from the window buffers, and propose an optimal
strategy that balances processing time versus buffer size. |
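The windowed join itself can be sketched as a symmetric hash join whose per-stream hash tables are pruned by a time-based window (an illustration of the technique, not the paper's implementation or its measured variants):

```python
from collections import deque, defaultdict

class WindowedHashJoin:
    """Symmetric hash join over two streams with time-based invalidation."""

    def __init__(self, window):
        self.window = window
        self.tables = (defaultdict(list), defaultdict(list))  # key -> tuples
        self.buffers = (deque(), deque())                     # arrival order

    def _invalidate(self, now):
        # Drop tuples older than the window from both sides.
        for table, buf in zip(self.tables, self.buffers):
            while buf and now - buf[0][0] > self.window:
                ts, key, value = buf.popleft()
                table[key].remove((ts, value))

    def insert(self, side, ts, key, value):
        """Insert a tuple from stream `side` (0 or 1); return join matches."""
        self._invalidate(ts)
        self.tables[side][key].append((ts, value))
        self.buffers[side].append((ts, key, value))
        other = self.tables[1 - side]
        return [(value, v) for (_, v) in other.get(key, [])]
```

Each arriving tuple probes the opposite side's hash table, so results are produced incrementally; stale tuples never match because they are evicted before the probe.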
|
Title: |
WRAPPING AND INTEGRATING
HETEROGENEOUS RELATIONAL DATA WITH OWL |
Author(s): |
Seksun Suwanmanee, Djamal Benslimane,
Pierre-Antoine Champin and Philippe Thiran |
Abstract: |
The number of web-based information
systems has been increasing since the Internet became a global open
network accessible to all. The Semantic Web vision aims at providing
supplementary meaningful information (meta-data) about Web resources
in order to facilitate automatic processing by machines and
interoperability between different systems. In this paper, we focus
on the integration of heterogeneous data sources in the Semantic Web
context using a semantic mediation approach based on ontologies. We
use the ontology description language OWL to formalize ontologies of
the different resources and to describe their relations and
correspondences in order to allow semantic interoperability between
them. We propose an architecture adopting the mediator-wrapper
approach for a mediator based on OWL. Some
illustrations of semantic mediation using OWL are also presented in
the paper. |
|
Title: |
A PROTOTYPE FOR INTEGRATION OF WEB
SERVICES INTO THE IRULES APPROACH TO COMPONENT INTEGRATION |
Author(s): |
Susan D. Urban, Vikram V. Kumar and
Suzanne W. Dietrich |
Abstract: |
The ANON environment provides a
framework for using events and rules in the integration of EJB
components. This research has investigated the extensions required
to integrate Web Services into the ANON architecture and execution
environment. The ANON language framework and metadata have been
extended for Web Services, with enhancements to Web Service
interfaces for describing services that represent object
manipulation operations as well as component enhancements such as
event generation, stored attributes, and externalized relationships
between distributed components. Web service wrappers provide the
additional ANON functionality for the enhanced Web service
interfaces, with a state management facility in the ANON environment
providing persistent storage of stored attributes and externalized
relationships. The ANON Web service wrappers are client-side,
component-independent wrappers for Web Services, thus providing a
more dynamic approach to the modification of service interfaces as
well as the dynamic entry and exit of participants in the
integration process. |
|
Title: |
VALUE ADDED WEB SERVICES FOR
INDUSTRIAL OPERATIONS AND MAINTENANCE |
Author(s): |
Mika Viinikkala, Veli-Pekka Jaakkola
and Seppo Kuikka |
Abstract: |
Efficient information management is
needed at industrial manufacturing plants that compete in the
present demanding business environment. Requirements to enhance
operation and maintenance (O&M) information management emerge from
problems within the internal information flows of a plant, from
supporting the networked organization of O&M, and from accomplishing
the new demand-driven business model. O&M information management of an
industrial process plant is here proposed to be enhanced by value
added web services. A service framework will work as a supporting
architectural context for the value added services. Information from
existing systems, such as automation, maintenance, production
control, and condition monitoring systems, is analyzed, refined and
used in control activities by the value added services. |
|
Title: |
REAL-TIME SALES & OPERATIONS PLANNING
WITH CORBA: LINKING DEMAND MANAGEMENT WITH PRODUCTION PLANNING |
Author(s): |
Elias Kirche, Janusz Zalewski and
Teresa Tharp |
Abstract: |
Several existing mechanisms for order
processing, such as Available-to-Promise (ATP), Materials
Requirements Planning (MRP), or Capable-to-Promise (CTP), do not
really include simultaneous capacity and profitability
considerations. One of the major issues in the incorporation of
profitability analysis into the order management system is the
determination of relevant costs in the order cycle, and the
real-time access to production parameters (i.e., target quantities
based on current cycle time) to be included in the computation of
planning and profitability. Our study attempts to provide insights
into this novel area by developing a Decision Support System (DSS)
for demand management that integrates real-time information
generated by process control and monitoring systems into an
optimization system for profitability analysis in a distributed
environment via CORBA (Common Object Request Broker Architecture).
The model can be incorporated into current enterprise resource
planning (ERP) systems and enables dynamic use of real-time data
from various functional support technologies. |
|
Title: |
A TREE BASED ALGEBRA FRAMEWORK FOR
XML DATA SYSTEMS |
Author(s): |
Ali El bekai and Nick Rossiter |
Abstract: |
This paper introduces an algebraic
framework for processing XML data. We develop a simple algebra,
called TA (Tree Algebra), for processing, storing and manipulating
XML data, modelled as trees. We present the assumptions of the
framework, describe the input and the output of the algebraic
operators, and define the syntax of these operators and their
semantics in terms of algorithms. Furthermore, we define the
relational operators and their semantics in terms of algorithms.
Examples show that this framework is flexible enough to capture
queries expressed in a domain-specific XML query language. The input
and output of our algebra are trees; that is, the input and output
are XML documents, and an XML document is defined as a tree. We also
present algorithms for many of the algebra operators; these
algorithms show how operators such as join, union, complement,
project, select, expose and vertex work on the nodes, elements and
attributes of an XML document. Detailed examples show how the
algebraic operators work. |
|
Title: |
DYNAMIC PRE-FETCHING OF VIEWS BASED
ON USER-ACCESS PATTERNS IN AN OLAP SYSTEM |
Author(s): |
Karthik Ramachandran, Biren Shah and
Vijay Raghavan |
Abstract: |
Materialized view selection plays an
important role in improving the efficiency of an OLAP system. To
meet the changing user needs, many dynamic approaches have been
proposed for solving the view selection problem. Most of these
approaches use some form of caching to store frequent queries and a
replacement policy to replace the infrequent ones. While some of
these approaches use demand fetching, where the query is computed
only when it is asked, a few others have used a pre-fetching
strategy, where certain additional information is used to pre-fetch
queries that are likely to be asked in the near future. In this
paper, we propose a global pre-fetching scheme that uses user access
pattern information to pre-fetch certain candidate views that could
be used for efficient query processing within the specified user
context. For specific kinds of query patterns, called drill-down
analysis, which is typical of an OLAP system, our approach
significantly improves the query performance by pre-fetching
drill-down candidates that otherwise would have to be computed from
the base fact table. We compare our approach against DynaMat, a
demand-fetching-based dynamic view management system that is known
to outperform optimal static view selection. The comparison is based
on the detailed cost savings ratio, used for quantifying the
benefits of view selection against incoming queries. The
experimental results show that our approach outperforms DynaMat and
thus also the optimal static view selection. |
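The pre-fetching idea can be sketched as follows. The view lattice, the cache policy and all names below are illustrative assumptions, not the paper's algorithm: when a query hits a view, its drill-down children are pre-fetched so the next query of a drill-down analysis is answered from the cache rather than the base fact table.

```python
# Toy sketch of drill-down pre-fetching in an OLAP view cache.
# DRILL_DOWN maps a view (a grouping of dimensions) to its finer children.
DRILL_DOWN = {
    ("year",): [("year", "month")],
    ("year", "month"): [("year", "month", "day")],
}

class PrefetchCache:
    def __init__(self):
        self.views = {}                 # view key -> materialized result

    def query(self, view, compute):
        if view not in self.views:      # miss: compute from base data
            self.views[view] = compute(view)
        # pre-fetch the likely next steps of a drill-down analysis
        for child in DRILL_DOWN.get(view, []):
            self.views.setdefault(child, compute(child))
        return self.views[view]

cache = PrefetchCache()
cache.query(("year",), lambda v: f"agg@{'/'.join(v)}")
```

After the first query at the `year` level, the `(year, month)` view is already materialized, so a subsequent drill-down hits the cache.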
|
Title: |
SEMANTIC QUERY TRANSFORMATION FOR
INTEGRATING WEB INFORMATION SOURCES |
Author(s): |
Mao Chen, Rakesh Mohan and Richard T.
Goodwin |
Abstract: |
The heterogeneity and dynamics of
web sources are the major challenges to Internet-scale information
integration. The information sources differ in contents and
query interfaces. In addition, the sources can be highly dynamic in
the sense that they can be added, removed, or updated with time.
This paper introduces a novel information integration framework that
leverages the industry standards on web services (WSDL/SOAP),
ontology description language (RDF/OWL), and a commercial database
(IBM DB2 Information Integrator [DB2 II]). Taking advantage
of the data integration and query optimization capability of DB2 II,
this paper focuses on the methodologies to transform a user query to
the queries on different sources and to combine the transformation
results into a query to DB2 II. Information sources are wrapped
using web services and annotated with regard to their contents,
query capabilities and the logical relations between concepts; our
query transformation engine is rooted in ontology-based reasoning.
To the best of our knowledge, this is the first framework that uses
web services as the interface of information sources and combines
ontology-based reasoning, web services, semantic annotation on web
services, as well as DB2 II to support Internet-scale information
integration. |
|
Title: |
A HYBRID CLUSTERING CRITERION FOR
R*-TREE ON BUSINESS DATA |
Author(s): |
Yaokai Feng, Zhibin Wang and Akifumi
Makinouchi |
Abstract: |
It is well known that
multidimensional indices can greatly improve query
performance on OLAP data. The R*-tree, a famous member of the R-tree
family, is a very popular and successful multidimensional index
structure. The clustering pattern of the objects (i.e., tuples in
relational tables) among the R*-tree leaf nodes is one of the decisive
factors in the performance of range queries (a popular kind of
query on business data). How, then, is the clustering pattern
formed? In this paper, we point out that the insert algorithm of the
R*-tree, especially its criterion for choosing subtrees for newly
arriving objects, determines the clustering pattern of the tuples among
the leaf nodes. According to our discussion and observations, it becomes
clear that the present insert algorithm of the R*-tree cannot produce a
good clustering pattern of tuples when the R*-tree is applied to
business data, which greatly degrades query performance. We then
introduce a hybrid clustering criterion for the insert algorithm of the
R*-tree. Our discussion and experiments indicate that the query
performance of the R*-tree on business data is clearly improved by the
new criterion. |
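As an illustration of what a subtree-choice criterion looks like, the sketch below mixes area enlargement with node area in a weighted score. The weight and the exact formula are assumptions for illustration only; the abstract does not state the paper's actual hybrid criterion.

```python
# Hypothetical hybrid ChooseSubtree score for an R*-tree-like insert:
# trade off area enlargement (the classic criterion) against node area.
# Weight w and the formula itself are illustrative assumptions.

def area(r):                       # r = (xmin, ymin, xmax, ymax)
    return max(0, r[2] - r[0]) * max(0, r[3] - r[1])

def enlarge(r, p):                 # bounding box of rectangle r and point p
    return (min(r[0], p[0]), min(r[1], p[1]),
            max(r[2], p[0]), max(r[3], p[1]))

def hybrid_score(r, p, w=0.5):
    growth = area(enlarge(r, p)) - area(r)     # area enlargement
    return w * growth + (1 - w) * area(r)      # also penalize large nodes

def choose_subtree(rects, p):
    return min(range(len(rects)), key=lambda i: hybrid_score(rects[i], p))

nodes = [(0, 0, 2, 2), (10, 10, 11, 11)]
best = choose_subtree(nodes, (1, 1))   # point lies inside the first node
```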
|
Title: |
SECURING THE ENTERPRISE DATABASE |
Author(s): |
V. Radha, Ved P. Gulati and N.
Hemanth Kumar |
Abstract: |
Security has been gaining importance ever since
computers became indispensable in every organization. As new
concepts like e-governance in government and e-commerce in business
circles become reality, security issues have penetrated
even into the legal framework of every country. Database security
acts as the last line of defence, withstanding insider attacks and
attacks from outside even if all the other security controls, such as
perimeter and OS controls, have been compromised. Data protection laws
such as HIPAA (the Health Insurance Portability and Accountability Act),
the Gramm-Leach-Bliley Act of 1999, the Data Protection Act and the
Sarbanes-Oxley Act demand privacy and integrity of data to the
extent that critical information should be seen only by
authorized users, which means the integrity of the database must be
properly protected. Hence, we aim at providing an interface
service between enterprise applications and the enterprise database
that ensures the integrity of the data. This service acts as a
security wrapper around any enterprise database. |
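One plausible shape for such an integrity wrapper, sketched here purely as an assumption (the abstract does not describe the authors' mechanism), is to seal every stored row with an HMAC over its contents and verify it on read.

```python
# Minimal sketch of an integrity wrapper between application and database:
# each row carries an HMAC over its fields, checked when the row is read.
# Key handling and row layout are illustrative assumptions.
import hashlib
import hmac

KEY = b"demo-secret"       # in practice, managed outside the database

def seal(row: dict) -> dict:
    payload = "|".join(f"{k}={row[k]}" for k in sorted(row)).encode()
    mac = hmac.new(KEY, payload, hashlib.sha256).hexdigest()
    return {**row, "_mac": mac}

def verify(sealed: dict) -> bool:
    row = {k: v for k, v in sealed.items() if k != "_mac"}
    return hmac.compare_digest(seal(row)["_mac"], sealed["_mac"])

rec = seal({"id": 7, "balance": "100.00"})
tampered = {**rec, "balance": "900.00"}   # modified behind the wrapper
```

Any modification that bypasses the wrapper invalidates the MAC, so tampering is detected on the next read.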
|
Title: |
CONDITIONS FOR INTEROPERABILITY |
Author(s): |
Nick Rossiter and Michael Heather |
Abstract: |
Interoperability remains a
challenging area, both at the semantic and organisational levels.
The original three-level architecture for databases is replaced by a
categorical four-level one, based on concepts, constructions, schema
types and data and the mappings between them. Such an architecture
provides natural closure as further levels are superfluous. The
manipulation of the architecture is done through the Godement
calculus which enables arrows at any level to be composed with each
other. Two conditions have been identified for interoperability to
be achieved. First, there must be no breakdown of
commutativity, as exhibited by punctured diagrams. Type forcing may
be needed to alleviate such problems. Second, semantic annotation
needs to be at a high enough level. Heyting logic may assist in this
task. |
|
Title: |
EXTENDING OBJECT ORIENTED DATABASES
TO SUPPORT THE VIEWPOINT MECHANISM |
Author(s): |
Fouzia Benchikha and Mahmoud Boufaida |
Abstract: |
An important dimension in the
evolution of database technology is the development of
advanced, sophisticated database models. In particular, the viewpoint
concept has received widespread attention. Its integration into a data
model adds flexibility to the conventional object-oriented data
model and improves the modeling power of objects. On
the other hand, the viewpoint concept can be used as a means to
master the complexity of current systems by permitting their
distributed development. In this paper we propose a data
model, MVDB (Multi-Viewpoint DataBase model), that extends the object
database model with the viewpoint mechanism. The viewpoint notion is
used as an approach to the distributed development of a database
schema, as a means for multiple description of objects, and as a
mechanism for dealing with the integrity constraint problems
commonly met in distributed environments. |
|
Title: |
DATA INTEGRATION AND USER MODELLING:
AN APPROACH BASED ON TOPIC MAPS AND DESCRIPTION LOGICS |
Author(s): |
Mourad Ouziri, Christine Verdier and
André Flory |
Abstract: |
In this paper we present a new approach
to semantic data integration that couples Topic Maps
with Description Logics. We propose a Web-based query interface
based on Topic Maps, completed by a specification of user profiles.
This interface adapts the data and the display to
each user and guarantees the security and confidentiality of the
data. The user profiles are built on description logics concepts to
enhance the consistency of the profile access rights and of the
assignment of users to profiles. |
|
Title: |
ARCO: A LONG-TERM DIGITAL LIBRARY
STORAGE SYSTEM BASED ON GRID COMPUTATIONAL INFRASTRUCTURE |
Author(s): |
Han Fei, Paulo Trezentos, Nuno
Almeida, Miguel Lourenço, José Borbinha and Joăo Neves |
Abstract: |
Over the past several years, large-scale
digital library services have gained enormous popularity.
ARCO is a digital library storage project at the Portuguese
National Library. A digital library storage system like
ARCO faces several challenges, such as the availability of
peta-scale storage, seamless spanning of storage clusters,
administration and utilization of distributed storage and computing
resources, safety and stability of data transfer, scalability of the
whole system, and automatic discovery and monitoring of metadata.
Grid computing appears as an effective technology for coupling
geographically distributed resources to solve large-scale
problems over wide or local area networks. The ARCO system has
been developed on a Grid computational infrastructure, and on the
basis of various other toolkits, such as PostgreSQL, LDAP, and the
Apache HTTP server. The main development languages are C, PHP, and Perl.
In this paper, we discuss the logical structure of the ARCO
digital library system, its resource organization, metadata
discovery and usage, the system's operational details and some
example operations, as well as the solution to the large-file-transfer
problem in the Globus grid toolkit. |
|
Title: |
ADAPTING ERP SYSTEMS FOR SUPPORTING
DEFENSE MAINTENANCE PROCESSES |
Author(s): |
Robert Pellerin |
Abstract: |
The defense sector represents one of
the largest potential areas for new ERP sales. Many defense
organizations have already implemented ERP solutions to manage and
integrate the acquisition, maintenance, and support processes. This
paper addresses specifically the defense maintenance management
functions that need to be integrated into an ERP solution by
adopting the view of a defense repair and overhaul facility. We
first discuss the specific nature of the defense maintenance
activities, and then we present the difficulties of integrating a
maintenance strategy into an ERP solution. We finally conclude by
proposing a coherent and integrated ERP structure model for the
management of the defense repair and overhaul processes. The model
has been partly applied in a Canadian repair and overhaul facility
and adapted into the SAP R/3 software. |
|
Title: |
SEMANTIC DATABASE ENGINE DESIGN |
Author(s): |
Naphtali Rishe, Armando Barreto,
Maxim Chekmasov, Dmitry Vasilevsky, Scott Graham, Sonal Sood and
Ouri Wolfson |
Abstract: |
New types of data processing
applications are no longer satisfied with the capabilities offered
by the relational data model. One example of this phenomenon is the
growing use of the Internet as a source of data. The data on the
Internet is inherently non-relational. As a result, demand developed
for database management systems natively built on advanced data
models. The semantic binary data model (Rishe, 1992) satisfies the
criteria for the models required for today’s applications by
providing the ability to build rich schemas with arbitrarily
flexible relationships between objects. In this paper, we discuss a
new design for a semantic database management system which is based
on the semantic binary data model. Our challenge was to design and
implement a database engine which, while being native to the model,
is reasonably efficient on a wide variety of industrial
applications, and which surpasses relational systems in performance
and flexibility on those applications that require non-relational
modelling. Special attention is given to multi-platform support by
the semantic database engine. |
|
Title: |
OBJECT ID DISTRIBUTION AND ENCODING
IN THE SEMANTIC BINARY ENGINE |
Author(s): |
Naphtali Rishe, Armando Barreto,
Maxim Chekmasov, Dmitry Vasilevsky, Scott Graham, Sonal Sood and
Ouri Wolfson |
Abstract: |
The semantic binary engine is a
database management system built on the principles of the semantic
binary data model (Rishe, 1992). A semantic binary database is a set
of facts about objects. Objects belong to categories, are connected
by relations, and may have attributes. Since the concept of an
object is at the core of the data model, upon implementation it is
crucial to design efficient algorithms that allow the semantic
binary engine to store, retrieve, modify and delete information
about objects in the semantic database. In this paper, we discuss
the concept of object IDs for object identification and methods for
object ID distribution and encoding in the database. Several
encoding schemes and their respective efficiencies are discussed:
Truncated Identical encoding, End Flag encoding, and Length First
encoding. |
|
Title: |
STORAGE TYPES IN THE SEMANTIC BINARY
DATABASE ENGINE |
Author(s): |
Naphtali Rishe, Malek Adjouadi, Maxim
Chekmasov, Dmitry Vasilevsky, Scott Graham, Dayanara Hernandez and
Ouri Wolfson |
Abstract: |
Modern database engines support a
wide variety of data types. Native support for all of the types is
desirable and convenient for the database application developer, as
it allows application data to be stored in the database without
further conversion. However, support for each data type adds
complexity to the database engine code. To achieve a compromise
between convenience and complexity, the semantic binary database
engine is designed to support only the binary data type in its
kernel. Other data types are supported in the user-level environment
by add-on modules. This solution allows us to keep the database
kernel small and ensures the stability and robustness of the
database engine as a whole. By providing extra database tools, it
also allows application designers to get database-wide support for
additional data types. |
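The design idea of a binary-only kernel with user-level type modules can be sketched as follows; the registry, codec choices and names are illustrative assumptions, not the engine's API.

```python
# Sketch: the kernel stores only raw bytes; add-on modules register codecs
# that map richer types to and from binary at the user level.
import struct

CODECS = {
    "int":  (lambda v: struct.pack(">q", v),
             lambda b: struct.unpack(">q", b)[0]),
    "text": (lambda v: v.encode("utf-8"),
             lambda b: b.decode("utf-8")),
}

class BinaryKernel:
    def __init__(self):
        self._store = {}           # key -> raw bytes (the only kernel type)

    def put(self, key, typ, value):
        self._store[key] = CODECS[typ][0](value)   # encode on the way in

    def get(self, key, typ):
        return CODECS[typ][1](self._store[key])    # decode on the way out

db = BinaryKernel()
db.put("age", "int", 42)
db.put("name", "text", "Ada")
```

New types can be supported by registering another codec pair, without touching the kernel, which is the stability argument the abstract makes.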
|
Title: |
MODELING AND EXECUTING SOFTWARE
PROCESSES BASED ON INTELLIGENT AGENTS |
Author(s): |
M. Ahmed Nacer and F. Aoussat |
Abstract: |
This paper presents a new approach
for modeling and executing software processes based on the concept
of a multi-agent system. We introduce the modeling process as one of
the most important goals of the agent, and we use the concept of an
“intelligent agent” to give more flexibility when adapting software
processes to unexpected changes. This is possible thanks to the
agent’s multiple capacities, such as autonomy and reactivity.
|
|
Title: |
DATA INTEGRATION ISSUES FOR BUSINESS
INTELLIGENCE INTEGRATED ENTERPRISE INFORMATION SYSTEMS |
Author(s): |
Pierre F. Tiako |
Abstract: |
Business Intelligence (BI) provides
the ability to access any type of data inside or across enterprises
and to analyze and present them as usable information. To work on
business intelligence, an enterprise has to deal with important
problems relating to both (1) data integration and (2) analysis and
presentation of data for strategic decision-making. No matter what
the application, the need for business intelligence applies
universally. This position paper focuses on Data Integration Issues
for Business Intelligence Integrated Enterprise Information Systems. |
|
Title: |
ASSESSING THE IMPACT OF INTEGRATING A
MES TO AN ERP SYSTEM |
Author(s): |
Young B. Moon and Varun Bahl |
Abstract: |
Despite claims by software vendors
about the positive value of an integrated MES and ERP system, no
systematic study has been conducted to assess and evaluate the impact
of such an integrated system on shop floor operations. This paper
presents a simulation study to evaluate the impact of the MES
integration with the ERP system on production lead times. First, we
describe a methodology of using a discrete event computer simulation
to address an inherent problem of the Enterprise Resource Planning
(ERP) system of handling uncertainties and unexpected events. Then,
simulation study results comparing the performances of a
manufacturing system with MES and a manufacturing system without MES
are presented. The evaluation metric used in this simulation is the
production lead time. However, the results obtained in this study
can be expanded to more general situations with different evaluation
metrics. |
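The effect being measured can be sketched in a toy model; the planning-cycle mechanics and all numbers below are illustrative assumptions, not the paper's simulation. The intuition: without an MES, a shop-floor disturbance is noticed only at the next ERP planning run, which inflates lead time.

```python
# Toy lead-time model: with an MES, a breakdown is reported to planning
# immediately; without it, reaction waits for the next planning cycle.
PLANNING_CYCLE = 8.0               # hours between ERP planning runs (assumed)

def lead_time(process_hours, breakdown_at, repair_hours, with_mes):
    # reaction delay: zero with MES, else time until the next planning run
    if with_mes:
        detect_delay = 0.0
    else:
        detect_delay = PLANNING_CYCLE - (breakdown_at % PLANNING_CYCLE)
    return process_hours + detect_delay + repair_hours

base = lead_time(20.0, breakdown_at=3.0, repair_hours=2.0, with_mes=False)
mes  = lead_time(20.0, breakdown_at=3.0, repair_hours=2.0, with_mes=True)
```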
|
Title: |
AN ARCHITECTURE FRAMEWORK FOR COMPLEX
DATA WAREHOUSES |
Author(s): |
Jérôme Darmont, Omar Boussaďd,
Jean-Christian Ralaivao and Kamel Aouiche |
Abstract: |
Nowadays, many decision support
applications need to exploit data that are not only numerical or
symbolic, but also multimedia, multistructure, multisource,
multimodal, and/or multiversion. We term such data complex data.
Managing and analyzing complex data involves a lot of different
issues regarding their structure, storage and processing, and
metadata are a key element in all these processes. Such problems
have been addressed by classical data warehousing (i.e., applied to
"simple" data). However, data warehousing approaches need to be
adapted for complex data. In this paper, we first propose a precise,
though open, definition of complex data. Then we present a general
architecture framework for warehousing complex data. This
architecture heavily relies on metadata and rests on XML, which
helps store data, metadata and domain-specific
knowledge, and facilitates communication between the various
warehousing processes. Finally, we enumerate the main issues in
complex data warehousing. |
|
Title: |
CONTEXT ANALYSIS FOR SEMANTIC MAPPING
OF DATA SOURCES USING A MULTI-STRATEGY MACHINE LEARNING APPROACH |
Author(s): |
Youssef Bououlid Idrissi and Julie
Vachon |
Abstract: |
Whether on a Web-wide or
inter-enterprise scale, data integration has become a major
necessity, urged on by the expansion of the Internet and its
widespread use for communication between business actors. However,
since data sources are often heterogeneous, their integration
remains an expensive procedure. Indeed, this task requires prior
semantic alignment of all the data sources' concepts. Doing this
alignment manually is quite laborious, especially when there is a large
number of concepts to be matched. Various solutions have been
proposed attempting to automate this step. This paper introduces a
new framework for data source alignment which integrates context
analysis with multi-strategy machine learning. Although their
adaptability and extensibility are appreciated, current machine
learning systems often suffer from the low quality and the lack of
diversity of training data sets. To overcome this limitation, we
introduce a new notion called the ``informational context'' of data
sources. We then briefly explain the architecture of a context
analyser to be integrated into a learning system combining multiple
strategies to achieve data source mapping. |
|
Title: |
METADATA PARADIGM FOR EFFECTIVE
GLOBAL INFORMATION TECHNOLOGY IN THE MNCS |
Author(s): |
Longy O. Anyanwu, Gladys A. Arome and
Jared Keengwe |
Abstract: |
Multinational business expansion and
competition have escalated in recent years, particularly in
Eastern Europe and the third world. Tremendous opportunities have
therefore been created for many companies, and formidable
hindrances have been amassed against others. Business failure rates
among these multinational enterprises have increased alarmingly
beyond expectation, and so have their IT implementation failures. The
increasing popularity and use of the Internet, over which businesses
have little control, is an added complication. This study identifies a
matrix of mitigating factors, as well as an information-base
distribution mechanism, critical to successful GIT implementation in
today’s multinational enterprises. The relevance and impact of these
factors on multinational businesses are discussed, and
appropriate solutions for each problem are suggested. |
|
Area 2 - Artificial
Intelligence and Decision Support Systems
|
Title: |
CLUSTERING INTERESTINGNESS MEASURES
WITH POSITIVE CORRELATION |
Author(s): |
Xuan-Hiep Huynh, Fabrice Guillet and
Henri Briand |
Abstract: |
Selecting interestingness measures
has been an important problem in knowledge discovery in
databases research. Many measures have been proposed to extract
knowledge from large databases, and many authors have introduced
interestingness properties for selecting a good measure for an
application. Some measures are good for some applications but not
for others, and it is difficult to determine the best
measures for a given data set. In this paper, we present a new
approach for selecting groups, or clusters, of objective
interestingness measures that are highly correlated in an application,
giving the user a small set of measures that differ naturally
in interestingness. |
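The core idea can be sketched as follows: evaluate several objective interestingness measures over a rule set, then group measures whose pairwise Pearson correlation exceeds a threshold. The measures, toy data and threshold below are illustrative assumptions, not the paper's experimental setup.

```python
# Group interestingness measures by positive correlation of their values
# over a set of association rules (toy data; greedy single-pass grouping).
from math import sqrt

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sqrt(sum((x - mx) ** 2 for x in xs))
    vy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (vx * vy)

# measure values per rule (one row per measure, one column per rule)
measures = {
    "confidence": [0.9, 0.8, 0.4, 0.3],
    "lift":       [1.8, 1.6, 0.9, 0.7],   # tracks confidence closely here
    "novelty":    [0.1, 0.7, 0.2, 0.9],
}

def correlated_groups(measures, threshold=0.85):
    groups = []
    for name in sorted(measures):
        for g in groups:                   # join the first compatible group
            if all(pearson(measures[name], measures[m]) >= threshold
                   for m in g):
                g.append(name)
                break
        else:
            groups.append([name])
    return groups
```

The user can then pick one representative per group, obtaining a small set of genuinely different measures.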
|
Title: |
A SYSTEM TO INTERPRET AND SUMMARISE
SOME PATTERNS IN IMAGES |
Author(s): |
Hema Nair and Ian Chai |
Abstract: |
A system that is designed and
implemented for automatic interpretation of some patterns in images
is described in this paper. The application domain being considered
for this system is remote-sensed images. Some patterns such as land,
island, water body, river, fire in remote-sensed images are
extracted and summarised in linguistic terms using fuzzy sets. A new
graphical tool (Multimedia University’s RSIMANA, a Remote-Sensing Image
Analyser), developed for image analysis and forming part of the system,
is also described in this paper. The objectives of this
user-friendly graphical tool include calculation of some feature
descriptors such as area, length, perimeter of irregular-shaped
objects/patterns, calculation of centroid of irregular objects, and
automatic classification of some of the patterns in remote-sensed
images such as land, island, water body, river, fire. |
|
Title: |
SYNTHESISE WEB QUERIES: SEARCH THE
WEB BY EXAMPLES |
Author(s): |
Vishv Malhotra, Sunanda Patro and
David Johnson |
Abstract: |
An algorithm to synthesise a web
search query from example documents is described. A user searching
for information on the Web can use a rudimentary query to locate a
set of potentially relevant documents. The user classifies the
retrieved documents as being relevant or irrelevant to his or her
needs. A query can be synthesised from these categorised documents
to perform a definitive search with good recall and precision
characteristics. |
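A minimal sketch of the idea: score terms by how much more often they occur in documents the user marked relevant than in those marked irrelevant, and build a conjunctive query from the top scorers. The scoring function and query form are illustrative assumptions, not the paper's synthesis algorithm.

```python
# Synthesize a conjunctive query from user-classified example documents.
from collections import Counter

def synthesise_query(relevant, irrelevant, k=2):
    def df(docs):                  # document frequency of each term
        c = Counter()
        for d in docs:
            c.update(set(d.lower().split()))
        return c
    pos, neg = df(relevant), df(irrelevant)
    # reward terms frequent in relevant docs, penalize those in irrelevant
    score = {t: pos[t] / len(relevant) - neg.get(t, 0) / max(len(irrelevant), 1)
             for t in pos}
    terms = sorted(score, key=score.get, reverse=True)[:k]
    return " AND ".join(sorted(terms))

q = synthesise_query(
    relevant=["haskell monad tutorial", "monad tutorial haskell"],
    irrelevant=["monad biology cell"],
)
```

Here the shared but non-discriminating term ("monad") is dropped, while the terms peculiar to the relevant documents survive into the query.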
|
Title: |
FUZZY PATTERN RECOGNITION BASED FAULT
DIAGNOSIS |
Author(s): |
Rafik Bensaadi and Hayet Mouss |
Abstract: |
In order to avoid catastrophic
situations when the dynamics of a physical system (an entity in a MAS
architecture) are evolving towards an undesirable operating mode,
particular and quick safety actions have to be programmed into the
control design. Classic control (PID, and even state-model-based
methods) becomes powerless for complex plants (nonlinear, MIMO and
ill-defined systems). More efficient diagnosis requires an
artificial intelligence approach. We propose in this paper the
design of a Fuzzy Pattern Recognition System (FPRS) that solves, in
real time, the following main problems: (1) identification of the
actual state, (2) identification of a possible evolution towards a
failure state, and (3) diagnosis and decision-making. |
|
Title: |
IMPROVEMENT ON THE INDIVIDUAL
RECOGNITION SYSTEM WITH WRITING PRESSURE BASED ON RBF |
Author(s): |
Lina Mi and Fumiaki Takeda |
Abstract: |
In our previous research, an
individual recognition system with writing pressure, using a
neuro-template of a multilayer feedforward network with a sigmoid
activation function, was developed. Although this system is effective
at recognizing known registrants, its ability to reject
counterfeit signatures is not sufficient for commercial use. In this
paper, a new activation function is proposed to improve the
counterfeit rejection performance of the system while preserving the
recognition performance for known signatures. The
experimental results show that, compared with the original
sigmoid-based system, the proposed activation function effectively
improves the counterfeit rejection capability of the system while
keeping its recognition capability for known signatures satisfactory. |
|
Title: |
KNOWLEDGE ACQUISITION MODELING
THROUGH DIALOGUE BETWEEN COGNITIVE AGENTS |
Author(s): |
Mehdi Yousfi-Monod and Violaine
Prince |
Abstract: |
The work described in this paper
tackles learning and communication between cognitive artificial
agents. The focus is on dialogue as the only way for agents to acquire
knowledge, as often happens in natural situations. Since this
restriction has until now scarcely been studied as such in artificial
intelligence (AI), this research aims at providing a
dialogue model devoted to knowledge acquisition. It allows two
agents, in a ’teacher’ - ’student’ relationship, to exchange
information with a learning incentive (on behalf of the ’student’).
The article first defines the nature of the addressed agents, the
types of relation they maintain, and the structure and contents of
their knowledge base. It continues by describing the different aims
of learning, their realization, and the solutions provided for
problems encountered by agents. A general architecture is then
established, and part of the theory’s implementation is commented on.
The conclusion covers the achievements so far and the
potential improvements of this work. |
|
Title: |
HOW TO VALUE AND TRANSMIT NUCLEAR
INDUSTRY LONG TERM KNOWLEDGE |
Author(s): |
Anne Dourgnon-Hanoune, Eunika
Mercier-Laurent and Christophe Roche |
Abstract: |
The French nuclear industry deals
with technologies which will soon be thirty years old. If such
technologies are not renewed, they must last for another ten years,
or more if the decision is taken to keep them working. There is a
risk of technological obsolescence, something which is allowed for
in other national and international projects. There is also the
question of constant commercial demand, something also considered
elsewhere in establishing contracts. Another problem is now
beginning to emerge: the continuity and transmission of knowledge
and experience concerning these plants. Personnel in the energy
sector are being renewed, and most current employees are due to retire
in the course of this decade. How is knowledge (both of maintenance
and planning) to be transmitted to the new generations? This
knowledge includes written information, but also know-how and
implicit working assumptions: expertise, experience,
self-learning. In the United States, EPRI produced a technical
dossier, “Capturing High Value Undocumented Knowledge in the Nuclear
Industry: Guidelines and Methods” (1002896, final report, December
2002). The problem of knowledge of old technologies is therefore
recent, but almost universal. As far as EDF knows, nobody is
considering this subject in its entirety. Instead, each technology
puts the emphasis on operation (and thus safety) according to a
fixed timetable (ten-year visits, end of use). In this perspective,
the initial knowledge of requirements can be lost. It can happen,
for example, that the need for renewal obliges the agency to
carry out a costly or difficult retro-engineering project so as to
recover the original knowledge and technology. Looking ahead, the
policy of long-term development (notably extending the life of
plants) requires us to consider the life-span of the different
skills and knowledge required by each environment, so it is
necessary to take into account the entire life cycle of a nuclear
installation. We are working on organizing all this knowledge and
building an innovative solution for easy acquisition, access and
sharing of knowledge and experience. First, we are creating an
ontology-based common language for all involved and defining some
applications on the Intranet. An ontology, understood as an agreed
vocabulary of common terms and meanings shared by a group of people,
is a means of representing craft concepts upon which knowledge can
be organised and classified. We shall present one of the first
applications, based on the Logic Diagram Designer's ontology, whose
main goals are to keep in memory the craft knowledge about relay
circuit schemas and to allow access to and retrieval of information.
This choice of ontology as a basis provides easy and relevant
navigation, indexing and search of documents. |
|
Title: |
AN INFORMATION SYSTEM TO PERFORM
SERVICES REMOTELY FROM A WEB BROWSER |
Author(s): |
M.P. Cuellar, M. Delgado, W. Fajardo
and R. Pérez-Pérez |
Abstract: |
This paper presents the development
of BioMen (Biological Management Executed over Network), an
Internet-managed system. By using service ontologies, the user is
able to perform services remotely from a web browser. In addition,
artificial intelligence techniques have been incorporated so that
the necessary information may be obtained for the study of
biodiversity. We have built a tool which will be of particular use
to botanists and which can be accessed from anywhere in the world
thanks to Internet technology. In this paper, we shall present the
results and how we developed the tool. |
|
Title: |
COMBINING NEURAL NETWORK AND SUPPORT
VECTOR MACHINE INTO INTEGRATED APPROACH FOR BIODATA MINING |
Author(s): |
Keivan Kianmehr, Hongchao Zhang,
Konstantin Nikolov, Tansel Özyer and Reda Alhajj |
Abstract: |
Bioinformatics is the science of
managing, mining, and interpreting information from biological
sequences and structures. In this paper, we discuss two data mining
techniques that can be applied in bioinformatics: namely, Neural
Networks (NN) and Support Vector Machines (SVM), and their
application in gene expression classification. First, we provide
a description of the two techniques. Then we propose a new method that
combines both SVM and NN. Finally, we present the results obtained
from our method and the results obtained from SVM alone on a sample
dataset. |
|
Title: |
CONSTRUCTION OF DECISION TREES USING
DATA CUBE |
Author(s): |
Lixin Fu |
Abstract: |
Data classification is an important
problem in data mining. The traditional classification algorithms
based on decision trees have been widely used due to their fast
model construction and good model understandability. However, the
existing decision tree algorithms need to recursively partition
dataset into subsets according to some splitting criteria, i.e., they
still have to repeatedly compute the records belonging to a node
(called F-sets) and then compute the splits for the node. For large
data sets, this requires multiple passes of original dataset and
therefore is often infeasible in many applications. In this paper we
present a new approach to constructing decision trees using
pre-computed data cube. We use statistics trees to compute the data
cube and then build a decision tree on top of it. Mining on
aggregated data stored in data cube will be much more efficient than
directly mining on flat data files or relational databases. Since
data cube server is usually a required component in an analytical
system for answering OLAP queries, we essentially provide “free”
classification by eliminating the dominant I/O overhead of scanning
the massive original data set. Our new algorithm generates trees of
the same prediction accuracy as existing decision tree algorithms
such as SPRINT and RainForest but improves performance
significantly. In this paper we also give a system architecture that
integrates DBMS, OLAP, and data mining seamlessly. |
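The core of the idea can be sketched as choosing a split from pre-aggregated cube counts instead of scanning raw records. The Gini-based criterion and the toy cube below are illustrative assumptions; the paper builds on statistics trees and compares against SPRINT and RainForest.

```python
# Choose a decision-tree split attribute from aggregated cube counts
# (attribute value x class -> count), with no scan of the base data.

def gini(counts):
    total = sum(counts)
    if total == 0:
        return 0.0
    return 1 - sum((c / total) ** 2 for c in counts)

def split_gini(cube_slice):
    """cube_slice: {attr_value: [count_class0, count_class1, ...]}"""
    total = sum(sum(c) for c in cube_slice.values())
    return sum(sum(c) / total * gini(c) for c in cube_slice.values())

# aggregated counts per candidate attribute, as a cube query returns them
cube = {
    "age_band": {"young": [40, 10], "old": [15, 35]},   # good separator
    "region":   {"north": [28, 22], "south": [27, 23]}, # poor separator
}
best_attr = min(cube, key=lambda a: split_gini(cube[a]))
```

Since every node's counts come from the cube, the dominant I/O cost of repeatedly scanning the fact table disappears, which is the "free classification" argument of the abstract.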
|
Title: |
A RECURRENT NEURAL NETWORK RECOGNISER
FOR ONLINE RECOGNITION OF HANDWRITTEN SYMBOLS |
Author(s): |
Bing Quan Huang and Tahar Kechadi |
Abstract: |
This paper presents an innovative
hybrid approach for online recognition of handwritten symbols. This
approach is composed of two main techniques. The first technique,
based on fuzzy logic, deals with feature extraction from a
handwritten stroke and the second technique, a recurrent neural
network, uses the features as an input to recognise the symbol. In
this paper we mainly focus our study on the second technique. We
propose a new recurrent neural network architecture associated with
an efficient learning algorithm. We describe the network and explain
the relationship between the network and the Markov chains. Finally,
we implemented the approach and tested it using benchmark datasets
extracted from the Unipen database. |
|
Title: |
AN APPLICATION OF NON-LINEAR
PROGRAMMING TO TRAIN RECURRENT NEURAL NETWORKS IN TIME SERIES
PREDICTION PROBLEMS |
Author(s): |
M. P. Cuéllar, M. Delgado and M. C.
Pegalajar |
Abstract: |
Artificial Neural Networks are
bioinspired mathematical models that have been widely used to solve
many complex problems. However, the training of a Neural Network is
a difficult task since the traditional training algorithms may get
trapped in local optima easily. This problem is greater
in Recurrent Neural Networks, where the traditional training
algorithms sometimes provide unsuitable solutions. Some evolutionary
techniques have also been used to improve the training stage and to
overcome such local optima, but they have the
disadvantage that the time taken to train the network is high. The
objective of this work is to show that the use of some non-linear
programming techniques is a good choice to train a Neural Network,
since they may provide suitable solutions quickly. In the
experimental section, we apply the models proposed to train an Elman
Recurrent Neural Network in real Time Series Prediction problems. |
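As an illustrative sketch of the idea only (the abstract does not give the authors' algorithm), a tiny Elman network can be trained with a basic non-linear programming technique — here plain gradient descent on finite-difference gradients — for one-step-ahead prediction of a sine series; the network size and all hyper-parameters below are arbitrary choices:

```python
import math
import random

def elman_forward(params, seq):
    """1-input, 2-hidden, 1-output Elman network (11 parameters):
    the hidden state is fed back as context at the next time step."""
    wx, wh, bh = params[0:2], params[2:6], params[6:8]
    wo, bo = params[8:10], params[10]
    h, outputs = [0.0, 0.0], []
    for x in seq:
        h = [math.tanh(wx[i] * x + wh[2 * i] * h[0] + wh[2 * i + 1] * h[1] + bh[i])
             for i in range(2)]
        outputs.append(wo[0] * h[0] + wo[1] * h[1] + bo)
    return outputs

def mse(params, seq, target):
    preds = elman_forward(params, seq)
    return sum((p - t) ** 2 for p, t in zip(preds, target)) / len(target)

def train(seq, target, epochs=150, lr=0.05, eps=1e-5, seed=0):
    """Gradient descent on finite-difference gradients; returns the
    best parameter vector seen and its loss."""
    rnd = random.Random(seed)
    params = [rnd.uniform(-0.5, 0.5) for _ in range(11)]
    best, best_loss = params[:], mse(params, seq, target)
    for _ in range(epochs):
        base = mse(params, seq, target)
        grad = [(mse(params[:i] + [params[i] + eps] + params[i + 1:], seq, target)
                 - base) / eps for i in range(len(params))]
        params = [w - lr * g for w, g in zip(params, grad)]
        loss = mse(params, seq, target)
        if loss < best_loss:
            best, best_loss = params[:], loss
    return best, best_loss

# one-step-ahead prediction of a sine series
seq = [math.sin(0.3 * t) for t in range(30)]
target = [math.sin(0.3 * (t + 1)) for t in range(30)]
trained, final_loss = train(seq, target)
```

In practice the hand-rolled descent would be replaced by a quasi-Newton or SQP routine from an optimization library, which is closer to the non-linear programming techniques the paper has in mind.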
|
Title: |
AGENT-BASED INTRUSION DETECTION
SYSTEM FOR INTEGRATION |
Author(s): |
Jianping Zeng and Donghui Guo |
Abstract: |
More and more software applications
are built on the Internet because of its wide distribution and low
deployment cost. However, owing to the open nature of the Internet,
anyone may access the resources placed on it. As a result, there are
many attacks, such as Denial of Service and illegal intrusion, and
application security becomes a serious problem. Because firewall
systems of all kinds fall short of ensuring security, intrusion
detection systems (IDS) have become popular. Many IDS exist, but
they concentrate mainly on network-based and host-based detection,
so they cannot be applied to application-based detection: their
ability to integrate with actual applications is too limited. We
propose an agent-based intrusion detection system that can be
integrated well into the applications of enterprise information
systems, and we discuss its architecture, agent structure and
integration mechanism. In this IDS we focus on three kinds of
agents, i.e. client agents, server agents and communication agents,
and we explain how to integrate the agents with an access control
model to achieve better security performance. By introducing
standard protocols such as KQML and IDMEF into the agent design, a
more flexible and integrable agent-based IDS is built. |
|
Title: |
A PROPERTY SPECIFICATION LANGUAGE FOR
WORKFLOW DIAGNOSTICS |
Author(s): |
E. E. Roubtsova |
Abstract: |
The paper presents a declarative
language for workflow property specification. The language has been
developed to help analysts in formulating workflow-log properties in
such a way that the properties can be checked automatically. The
language is based on the Propositional Linear Temporal Logics and
the structure of logs. The standard structure of logs is used when
building algorithms for property checks. Our tool for
property-driven workflow mining combines a tool-wizard for property
construction, property parsers for syntax checking and a verifier
for property verification. The tool is implemented as an independent
component that can extend any process management system or any
process mining tool. |
|
Title: |
A WEB-BASED ARCHITECTURE FOR
INDUCTIVE LOGIC PROGRAMMING IN BIOLOGY |
Author(s): |
Andrei Doncescu, Katsumi Inoue,
Muhammad Farmer and Gilles Richard |
Abstract: |
In this paper, we present a current
cooperative work involving different institutes around the world.
Our aim is to provide an online Inductive Logic Programming tool.
This is the first step in a more complete structure for enabling
e-technology for machine learning and bio-informatics. We describe
the main architecture of the project and how the data will be
formatted for being sent to the ILP machinery. We focus on a
biological application (yeast fermentation process) due to its
importance for high added value end products. |
|
Title: |
MULTI-AGENT SYSTEM FORMAL MODEL BASED
ON NEGOTIATION AXIOM SYSTEM OF TEMPORAL LOGIC |
Author(s): |
Xia Youming, Yin Hongli and Zhao
Lihong |
Abstract: |
In this paper we describe the formal
semantic frame and introduce the formal language LTN to express
time, an agent's ability and right to select actions, the
negotiation process in a multi-agent system, the change of rights
over time, the free actions of an agent, and the time needed by an
agent to complete an action. On this basis, the independent
negotiation system is further completed. We also show that the axiom
system is rational and valid, and that the negotiation reasoning
logic is sound, complete and consistent. Keywords: negotiation
axiom, semantic frame, multi-agent system, negotiation reasoning
logic, temporal logic |
|
Title: |
HANDLING MULTIPLE EVENTS IN HYBRID
BDI AGENTS WITH REINFORCEMENT LEARNING: A CONTAINER APPLICATION |
Author(s): |
Prasanna Lokuge and Damminda
Alahakoon |
Abstract: |
Vessel berthing in a container port
is considered as one of the most important application systems in
the shipping industry. The objective of the vessel planning
application system is to determine a suitable berth guaranteeing
high vessel productivity. This is regarded as a very complex dynamic
application, which can vastly benefit from autonomous decision
making capabilities. On the other hand, BDI agent systems have been
implemented in many business applications and found to have some
limitations in observing environmental changes, adaptation and
learning. We propose a new hybrid BDI architecture with learning
capabilities to overcome some of the limitations in the generic BDI
model. A new “Knowledge Acquisition Module” (KAM) is proposed to
improve the learning ability of the generic BDI model. Further, the
generic BDI execution cycle has been extended to capture multiple
events for a committed intention in achieving the set desires. This
would essentially improve the autonomous behavior of the BDI agents,
especially, in the intention reconsideration process. Changes in the
environment are captured as events and the reinforcement learning
techniques have been used to evaluate the effect of the
environmental changes to the committed intentions in the proposed
system. Finally, the Adaptive Neuro Fuzzy Inference (ANFIS) system
is used to determine the validity of the committed intentions with
the environmental changes. |
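The reinforcement-learning evaluation of committed intentions can be sketched — hypothetically, since the abstract gives neither the update rule nor the reward scheme — as a one-step Q-learning update over a toy berth-assignment choice:

```python
def q_update(q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """One-step Q-learning update used to re-score a committed
    intention after an observed environmental event."""
    best_next = max(q[next_state].values()) if next_state in q else 0.0
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])

# toy scenario: two candidate berths; 'B1' repeatedly yields higher
# vessel productivity (states, actions and rewards are invented)
q = {"vessel_waiting": {"B1": 0.0, "B2": 0.0}, "berthed": {"stay": 0.0}}
for _ in range(20):
    for action, reward in (("B1", 1.0), ("B2", 0.2)):
        q_update(q, "vessel_waiting", action, reward, "berthed")
```

After repeated events the Q-values separate the two intentions, which is the signal an intention-reconsideration step could act on.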
|
Title: |
DEFENDING AGAINST BUSINESS CRISES
WITH THE HELP OF INTELLIGENT AGENT BASED EARLY WARNING SOLUTIONS |
Author(s): |
Shuhua Liu |
Abstract: |
In the practice of business
management, there is a pressing need for good information management
instruments that can constantly acquire, monitor and analyze the
early warning signals of business crises, thus effectively
supporting decision makers in the early detection of crisis
situations. Advances in computing methods and information technology
bring new opportunities for the construction of such instruments. In
this paper, we propose the use of the business life cycle model as a
larger guiding framework for an early warning system for business
crises. We also develop a framework for an intelligent agent based
early warning system, and discuss the application of soft computing
methods in the intelligent analysis of early warning information.
This provides a starting point for the development of intelligent
agent based early warning solutions. |
|
Title: |
USER MODELLING FOR DIARY MANAGEMENT
BASED ON INDUCTIVE LOGIC PROGRAMMING |
Author(s): |
Behrad Assadian and Heather Maclaren |
Abstract: |
Software agents are being produced in
many different forms to carry out different tasks, with personal
assistants designed to reduce the amount of effort it takes for the
user to go about their daily tasks. Most personal assistants work
with user preferences when working out what actions to perform on
behalf of their user. This paper describes a novel approach for
modelling user behaviour in the application area of Diary Management
with the use of Inductive Logic Programming. |
|
Title: |
A CONCEPTION OF NEURAL NETWORKS
IMPLEMENTATION IN THE MODEL OF A SELF-LEARNING VIRTUAL POWER PLANT |
Author(s): |
Robert Kucęba and Leszek Kiełtyka |
Abstract: |
The present article focuses on the
learning methods of a self-learning organization (using the example
of the virtual power plant) that employ artificial intelligence. A
multi-module structure of the virtual power plant model is
presented, in which selected organizational learning processes and
decision-making processes are automated. |
|
Title: |
KNOWLEDGE DISCOVERY FROM THE WEB |
Author(s): |
Maryam Hazman, Samhaa R. El-Beltagy,
Ahmed Rafea and Salwa El-Gamal |
Abstract: |
The World Wide Web is a rich resource
of information and knowledge. Within this resource, finding relevant
answers to some given question is often a time consuming activity
for a user. In the presented work we construct a web mining
technique that can extract information from the web and create
knowledge from it. The extracted knowledge can be used to respond
more intelligently to user requests within the diagnosis domain. Our
system has three main phases, namely a categorization phase, an
indexing phase, and a search phase. The categorization phase is
concerned with extracting important words/phrases from web pages and
then generating the categories included in them. The indexing phase
is concerned with indexing web page sections, while the search phase
interacts with the user in order to find relevant answers to their
questions. The system was tested using a training set of web pages
for the categorization phase. Work on the indexing and search phases
is still ongoing. |
|
Title: |
MULTIDIMENSIONAL SELECTION MODEL FOR
CLASSIFICATION |
Author(s): |
Dymitr Ruta |
Abstract: |
Recent research efforts dedicated to
classifier fusion have made it clear that combining performance
strongly depends on careful selection of classifiers. Classifier
performance depends, in turn, on careful selection of features,
which on top of that could be applied to different subsets of the
data. On the other hand, there is already a number of classifier
fusion techniques available and the choice of the most suitable
method relates back to the selection in the classifier, feature and
data spaces. Despite this apparent selection multidimensionality,
typical classification systems either ignore the selection
altogether or perform selection along only a single dimension, usually
choosing the optimal subset of classifiers. The presented
multidimensional selection sketches the general framework for the
optimised selection carried out simultaneously on many dimensions of
the classification model. The selection process is controlled by the
specifically designed genetic algorithm, guided directly by the
final recognition rate of the composite classifier. The prototype of
the 3-dimensional fusion-classifier-feature selection model is
developed and tested on some typical benchmark datasets. |
|
Title: |
MINING VERY LARGE DATASETS WITH SVM
AND VISUALIZATION |
Author(s): |
Thanh-Nghi Do and François Poulet |
Abstract: |
We present a new support vector
machine (SVM) algorithm and graphical methods for mining very large
datasets. We develop the active selection of training data points
that can significantly reduce the training set in the SVM
classification. We summarize the massive datasets into interval
data. We adapt the RBF kernel used by the SVM algorithm to deal with
this interval data. We only keep the data points corresponding to
support vectors and the representative data points of non support
vectors. Thus the SVM algorithm uses this subset to construct the
non-linear model. We also use interactive graphical methods to help
explain the SVM results. The graphical representation of
IF-THEN rules extracted from the SVM models can be easily
interpreted by humans. The user deeply understands the SVM models’
behaviour towards data. The numerical test results are obtained on
real and artificial datasets. |
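The abstract does not spell out how the RBF kernel is adapted to interval data; one plausible sketch — the distance choice and all names below are assumptions, not the authors' formulation — replaces the usual squared Euclidean distance with a distance between interval vectors that compares lower and upper bounds per dimension:

```python
import math

def interval_sq_dist(a, b):
    """Squared distance between interval vectors: per dimension,
    compare lower bounds and upper bounds (one Hausdorff-like choice
    among several possible)."""
    return sum((al - bl) ** 2 + (au - bu) ** 2
               for (al, au), (bl, bu) in zip(a, b))

def rbf_interval(a, b, gamma=0.5):
    """RBF kernel lifted to interval data via the distance above."""
    return math.exp(-gamma * interval_sq_dist(a, b))

# each sample is a vector of (low, high) intervals summarising raw points
x = [(0.0, 1.0), (2.0, 3.0)]
z = [(5.0, 6.0), (7.0, 8.0)]
```

Such a kernel lets a standard SVM solver run unchanged on interval summaries of the massive dataset, which is the point of the reduction.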
|
Title: |
USING FUZZY LOGIC FOR PRICING |
Author(s): |
Acácio Magno Ribeiro, Luiz Biondi
Neto, Pedro Henrique Gouvêa Coelho, João Carlos C. B. Soares de
Mello and Lidia Angulo Meza |
Abstract: |
This paper deals with traditional
pricing models under uncertainties. A fuzzy model is applied to the
classical economical approach in order to calculate the
possibilities of economical indices such as profits and losses. A
realistic case study is included to illustrate a typical application
of the fuzzy model to the pricing issue. |
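A minimal sketch of fuzzy pricing arithmetic, assuming triangular fuzzy numbers (l, m, u) for price and cost (the paper's actual model is not reproduced here): profit is obtained by fuzzy subtraction, and the membership function gives the possibility of any crisp profit value.

```python
def tri_sub(a, b):
    """Subtraction of triangular fuzzy numbers (l, m, u): a - b."""
    return (a[0] - b[2], a[1] - b[1], a[2] - b[0])

def tri_mu(x, t):
    """Membership degree of crisp value x in triangular number t."""
    l, m, u = t
    if x <= l or x >= u:
        return 0.0
    return (x - l) / (m - l) if x <= m else (u - x) / (u - m)

price = (90.0, 100.0, 110.0)   # pessimistic / most possible / optimistic
cost = (60.0, 70.0, 85.0)
profit = tri_sub(price, cost)  # (5.0, 30.0, 50.0)
```

Since the lower bound of the profit is positive here, a loss has possibility zero under these (invented) figures.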
|
Title: |
FREE SOFTWARE FOR DECISION ANALYSIS:
A SOFTWARE PACKAGE FOR DATA ENVELOPMENT MODELS |
Author(s): |
Lidia Angulo Meza, Luiz Biondi Neto,
João Carlos Correia Baptista Soares de Mello, Eliane Gonçalves Gomes
and Pedro Henrique Gouvêa Coelho |
Abstract: |
Data Envelopment Analysis is based on
linear programming problems (LPP) in order to find the efficiency of
Decision Making Units (DMUs). This process can be computationally
intense, as a LPP has to be run for each unit. Besides, a typical
DEA LPP has a large number of redundant constraints concerning the
inefficient DMUs. That results in degenerate LPPs and in some cases,
multiple efficient solutions. The developed work intends to fill a
gap in current DEA software packages, i.e. the lack of a piece
of software capable of producing full results in classic DEA models
as well as the capability of using more advanced DEA models. The
software package interface as well as the models and solution
algorithms were implemented in Delphi. Both basic and advanced DEA
models are allowed in the package. Besides the main module that
includes the DEA models, there is an additional module containing
some models for decision support such as the multicriteria model
called the Analytic Hierarchy Process (AHP). The developed piece of
software was coined FSDA – Free Software for Decision Analysis. |
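For intuition only: in the special single-input, single-output case, CCR efficiency reduces to each DMU's output/input ratio normalised by the best ratio, with no LP needed. The package above solves the general multi-input/multi-output case, which does require one LPP per DMU; the figures below are invented.

```python
def ccr_efficiency(dmus):
    """CCR efficiency in the single-input, single-output special
    case: each DMU's output/input ratio divided by the best ratio."""
    ratios = {name: out / inp for name, (inp, out) in dmus.items()}
    best = max(ratios.values())
    return {name: r / best for name, r in ratios.items()}

# (input, output) per DMU
dmus = {"A": (2.0, 4.0), "B": (3.0, 3.0), "C": (5.0, 10.0)}
eff = ccr_efficiency(dmus)   # A and C are efficient, B is not
```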
|
Title: |
KNOWLEDGE NEEDS ANALYSIS FOR
E-COMMERCE IMPLEMENTATION: PEOPLE-CENTRED KNOWLEDGE MANAGEMENT IN AN
AUTOMOTIVE CASE STUDY |
Author(s): |
John Perkins, Sharon Cox and
Ann-Karin Jorgensen |
Abstract: |
A UK car manufacturer case study
provides a focus upon the problem of aligning transactional
information systems used in e-commerce with the necessary human
skills and knowledge to make them work effectively. Conventional
systematic approaches to analysing learning needs are identified in
the case study, which reveals some shortcomings when these are
applied to electronically mediated business processes. A programme
of evaluation and review undertaken in the case study is used to
propose alternative ways of implementing processes of developing and
sharing knowledge and skills as part of the facilitation of networks
of knowledge workers working with intra- and inter-organisational
systems. The paper concludes with a discussion on the implications
of these local outcomes alongside some relevant literature in the
area of knowledge management systems. This suggests that the
cultural context constitutes a significant determinant of
initiatives to manage, or at least influence, knowledge based skills
in e-commerce applications. |
|
Title: |
EXTRACTING MOST FREQUENT CROATIAN
ROOT WORDS USING DIGRAM COMPARISON AND LATENT SEMANTIC ANALYSIS |
Author(s): |
Zvonimir Rados, Franjo Jovic and
Josip Job |
Abstract: |
A method for extracting root words
from Croatian language text is presented. The described method is
knowledge-free and can be applied to any language. Morphological and
semantic aspects of the language were used. The algorithm creates
morpho-semantic groups of words and extracts a common root for every
group. For morphological grouping we use digram comparison to group
words depending on their morphological similarity. Latent semantic
analysis is applied to split morphological groups into semantic
subgroups of words. Root words are extracted from every
morpho-semantic group. When applied to Croatian language text, among
the hundred most frequent root words produced by this algorithm, there
were 57 grammatically correct and 32 FAP (for all practical
purposes) correct root words. |
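The digram comparison step can be illustrated with the Dice coefficient over character digrams (a common formulation; the authors' exact similarity measure is not given in the abstract, and the example words are merely illustrative):

```python
def digrams(word):
    """Set of adjacent character pairs in a word."""
    return {word[i:i + 2] for i in range(len(word) - 1)}

def dice(w1, w2):
    """Dice coefficient over character digrams: 2|A∩B| / (|A| + |B|)."""
    a, b = digrams(w1), digrams(w2)
    if not a or not b:
        return 0.0
    return 2 * len(a & b) / (len(a) + len(b))

# morphologically related forms score higher than unrelated words
sim_related = dice("raditi", "radimo")
sim_unrelated = dice("raditi", "kupiti")
```

Words whose pairwise similarity exceeds a threshold would be placed in the same morphological group before the semantic split.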
|
Title: |
IMPROVED OFF-LINE INTRUSION DETECTION
USING A GENETIC ALGORITHM |
Author(s): |
Pedro A. Diaz-Gomez and Dean F.
Hougen |
Abstract: |
One of the primary approaches to the
increasingly important problem of computer security is the Intrusion
Detection System. Various architectures and approaches have been
proposed including: Statistical, rule-based approaches; Neural
Networks; Immune Systems; Genetic Algorithms; and Genetic
Programming. This paper focuses on the development of an off-line
Intrusion Detection System to analyze a Sun audit trail file.
Off-line intrusion detection can be accomplished by searching audit
trail logs of user activities for matches to patterns of events
required for known attacks. Because such a search is NP-complete,
heuristic methods will need to be employed as databases of events
and attacks grow. Genetic Algorithms can provide appropriate
heuristic search methods. However, balancing the need to detect all
possible attacks found in an audit trail with the need to avoid
false positives (warnings of attacks that do not exist) is a
challenge, given the scalar fitness values required by GAs. This
study discusses a fitness function independent of variable
parameters to overcome this problem. It also describes extending the
system to account for the possibility that intrusions are either
mutually exclusive or not mutually exclusive. |
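A sketch of the kind of fitness function at issue — rewarding matched attack patterns while penalising false positives. Note the explicit weight below is exactly the sort of tunable parameter the paper seeks to eliminate; this is background illustration, not the authors' function:

```python
def fitness(hypothesis, truth, w_fp=1.0):
    """Reward detected attacks, penalise false positives.
    `hypothesis` and `truth` are 0/1 vectors over candidate
    attack patterns in the audit trail."""
    tp = sum(1 for h, t in zip(hypothesis, truth) if h and t)
    fp = sum(1 for h, t in zip(hypothesis, truth) if h and not t)
    total_attacks = sum(truth) or 1
    return tp / total_attacks - w_fp * fp / len(truth)

truth = [1, 0, 1, 0]                   # which patterns are really present
perfect = fitness([1, 0, 1, 0], truth)
greedy = fitness([1, 1, 1, 1], truth)  # flags everything: two false positives
```

A GA maximising this scalar would prefer the perfect chromosome over the greedy one, but the balance shifts with `w_fp` — the dependence the paper's parameter-free fitness avoids.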
|
Title: |
AN INTEGRATED FRAMEWORK FOR RESEARCH
IN ORGANIZATIONAL KNOWLEDGE MANAGEMENT |
Author(s): |
Sabrina S. S. Fu and Matthew K. O.
Lee |
Abstract: |
Knowledge is a key asset for many
organizations. Organizations that can manage knowledge effectively
are expected to gain competitive advantage. Information technologies
have been widely employed to facilitate Knowledge Management (KM).
This paper reviews and synthesizes the main prior conceptual and
empirical literature, resulting in a comprehensive framework for
research in IT-enabled KM at the organizational level. The framework
aids the understanding and classification of KM-related research and
the generation of potential hypotheses for future research. |
|
Title: |
A CRYPTOGRAPHIC APPROACH TO LANGUAGE
IDENTIFICATION: PPM |
Author(s): |
Ebru Celikel |
Abstract: |
In this study, the adaptive
statistical modeling technique called Prediction by Partial Matching
(PPM) is used for written language discrimination. PPM can well
serve as a cryptographic tool in that, as long as the algorithm
itself is unknown to the third parties, it represents the plaintext
in a hard-to-recover form by encoding it. Furthermore, the PPM
algorithm yields lossless compression at far better rates (in bits
per character, bpc) than conventional compression tools. A trained
version of PPM is employed for the implementation. Language
identification experiment results obtained on sample texts from
English, French and Turkish corpora are given. The success rates
show that the performance of the system depends strongly on the
diversity of the corpora, as well as on the target text and training
text file sizes. In practice, if the training text itself is kept
secret, the system would provide a promising degree of cryptographic
security.
|
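As a rough stand-in for PPM (which uses adaptive, variable-order contexts), language identification by compression rate can be sketched with a fixed-order, add-one-smoothed character bigram model: each language's trained model scores the target text in bits per character, and the lowest bpc wins. The toy corpora below are invented:

```python
import math
from collections import Counter

def bigram_model(text):
    """Character bigram and unigram counts plus smoothing vocabulary size."""
    pairs = Counter(text[i:i + 2] for i in range(len(text) - 1))
    singles = Counter(text)
    return pairs, singles, len(set(text)) + 1

def bpc(model, text):
    """Bits per character of `text` under an add-one-smoothed bigram model."""
    pairs, singles, v = model
    bits = 0.0
    for i in range(len(text) - 1):
        p = (pairs[text[i:i + 2]] + 1) / (singles[text[i]] + v)
        bits -= math.log2(p)
    return bits / max(len(text) - 1, 1)

corpora = {  # invented toy training texts
    "en": "the cat sat on the mat and the dog ran to the man in the hat ",
    "fr": "le chat est sur le tapis et le chien court vers le monsieur ",
}
models = {lang: bigram_model(text * 3) for lang, text in corpora.items()}
guess = min(models, key=lambda lang: bpc(models[lang], "the dog sat on the hat"))
```

The same train-then-score pattern applies with a real PPM coder; only the probability model changes.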
|
Title: |
SYSTEMATIC GENERATION IN DCR
EVALUATION PARADIGM : APPLICATION TO THE PROTOTYPE CLIPS SYSTEM |
Author(s): |
Mohamed Ahafhaf |
Abstract: |
In this paper we present an extension
of the DCR evaluation method, tested on a spoken language
understanding and dialog system. It should allow a deep evaluation
of spoken language understanding and dialog systems. The key point
of our method is the use of a linguistic typology to generate an
evaluation corpus that covers a significant number of the linguistic
phenomena on which we want to evaluate our system. This allows a
more objective and deeper evaluation of spoken language
understanding and dialog systems. |
|
Title: |
KNOWLEDGE MANAGEMENT SUPPORT FOR
SYSTEM ENGINEERING COMMUNITY |
Author(s): |
Olfa Chourabi, Mohamed Ben Ahmed and
Yann Pollet |
Abstract: |
Knowledge is recognized as a crucial
resource in today’s knowledge intensive organizations. Creating
effective Knowledge Management structures is one of the key success
factors in system process improvement initiatives (like the
Capability Maturity Model, SPICE, Trillium, etc.). This contribution
aims to provide a starting point for discussions on how to design a
Knowledge Management system that supports systems engineering
organizations. After motivating the problem domain, we introduce a
conceptual architecture supporting continuous learning and reuse of
all kinds of experiences from the System Engineering (SE) domain and
present the underlying methodology. |
|
Title: |
VISUAL SVM |
Author(s): |
François Poulet |
Abstract: |
We present a cooperative approach
using both Support Vector Machine (SVM) algorithms and visualization
methods. SVM are widely used today and often give high quality
results, but they are used as a "black box" (it is very difficult to
explain the obtained results) and cannot easily handle very large
datasets. We have developed graphical methods to help the user to
evaluate and explain the SVM results. The first method is a
graphical representation of the separating frontier quality (it is
presented for the SVM case, but can be used for any other boundary
like decision tree cuts, regression lines, etc.). It is then linked
with other graphical methods to help the user explain SVM
results. The information provided by these graphical methods can
also be used in the SVM parameter tuning stage. These graphical
methods are then used together with automatic algorithms to deal
with very large datasets on standard personal computers. We present
an evaluation of our approach with the UCI and the Kent Ridge
Bio-medical data sets. |
|
Title: |
SYMBOLIC KNOWLEDGE REPRESENTATION IN
TRANSCRIPT BASED TAXONOMIES |
Author(s): |
Philip Windridge, Bernadette Sharp
and Geoff Thompson |
Abstract: |
The aim of this paper is to introduce
a design for the taxonomical representation of participants’
instantial meaning-making, as the basis for providing a measure of
ambiguity and contestation, during a social activity from which a
transcript has been produced. We use hyponymy and meronymy as the
basis for our taxonomies and adopt the System Network formalism as
the basis for their representation. We achieve an integration of
transcript and taxonomy using an XML based ‘satellite’ system of
data storage which allows for the addition of an unlimited number of
analyses stored using the same system. This is possible because of
the separation of transcript content data from metadata. Content
data forms a ‘Root’ document which can then be ‘mapped’ to by an
arbitrary number of ‘Descriptor’ documents. As a minimum
configuration, Transcript Based Taxonomies require a Root document,
a Taxonomy Descriptor and a document containing transcript specific
data called an SLA Descriptor. This system automatically confers
instantial meanings by mapping Descriptor document elements to
elements in the Root. Subsequent references to Root elements
automatically include all other mappings to that Root element. Part
of this mapping also includes the sequence of Root elements,
accommodating the diachronic representation of meaning-making.
Together with a number of methods that identify specific areas of
ambiguity and contestation, which use attributes in the Taxonomy
Descriptor XML elements, this diachronic representation provides the
basis for measuring ambiguity and contestation. |
|
Title: |
ENTERPRISE ANTI-SPAM SOLUTION BASED
ON MACHINE LEARNING APPROACH |
Author(s): |
Igor Mashechkin, Mikhail Petrovskiy
and Andrey Rozinkin |
Abstract: |
Spam-detection systems based on
traditional methods have several obvious disadvantages, such as a
low detection rate, the necessity to regularly update knowledge
bases, and the absence of personalization. New intelligent methods
for spam detection that use statistical and machine-learning
algorithms solve these problems successfully. But these methods are
not widely used in spam classification for enterprise-level mail
servers because of their high resource consumption and insufficient
accuracy in terms of false-positive errors. In this paper we present
a solution based on a precise and fast algorithm whose
classification quality is better than that of the Naïve Bayes
method, currently the most widespread. The problem of time
efficiency, typical of learning algorithms, is solved using a
multi-agent architecture that allows the system to scale easily and
a uniform corporate spam-detection system to be built on a
heterogeneous enterprise mail system. A pilot implementation and its
experimental evaluation on standard data sets and on real mail flows
have demonstrated that our approach outperforms existing learning
and traditional methods of spam filtering. This allows us to
consider it a promising platform for the construction of enterprise
spam filtering systems. |
|
Title: |
A SURVEY OF CASE-BASED DIAGNOSTIC
SYSTEMS FOR MACHINES |
Author(s): |
Erik Olsson |
Abstract: |
Electrical and mechanical equipment
such as gearboxes in an industrial robot or electronic circuits in
an industrial printer sometimes fail to operate as intended. The
faulty component can be hard to locate and replace, and it might
take a long time to get a sufficiently experienced technician to the
spot. In the meantime, thousands of dollars may be lost due to
delayed production. Systems based on case-based reasoning are well
suited to preventing this kind of halt in production. Their ability
to reason from past cases and to learn from new ones is a powerful
method to use when a failure in a machine occurs. The system is able
to automatically search its library of past cases and propose a
solution to the problem. A less experienced technician can use this
solution and quickly repair the machine. The use of case-based
reasoning systems for the diagnosis of machines is a young field of
research, and it shows promising results for the future. |
|
Title: |
A BAYESIAN NETWORKS STRUCTURAL
LEARNING ALGORITHM BASED ON A MULTIEXPERT APPROACH |
Author(s): |
Francesco Colace, Massimo De Santo,
Mario Vento and Pasquale Foggia |
Abstract: |
The determination of Bayesian network
structure, especially in the case of large domains, can be complex,
time consuming and imprecise. Therefore, in the last years, the
interest of the scientific community in learning Bayesian network
structure from data is increasing. This interest is motivated by the
fact that many techniques or disciplines, as data mining, text
categorization, ontology building, can take advantage from
structural learning. In literature we can find many structural
learning algorithms but none of them provides good results in every
case or dataset. In this paper we introduce a method for structural
learning of Bayesian networks based on a multiexpert approach. Our
method combines the outputs of five structural learning algorithms
according to a majority vote combining rule. The combined approach
shows a performance that is better than any single algorithm. We
present an experimental validation of our algorithm on a set of “de
facto” standard networks, measuring performance both in terms of the
network topological reconstruction and of the correct orientation of
the obtained arcs. |
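The majority-vote combination step can be sketched as follows: each of the five structural-learning experts proposes a set of edges, and an edge is kept when a strict majority proposes it (the handling of arc orientation, which the paper also measures, is omitted here, and the example edge sets are invented):

```python
from collections import Counter

def majority_vote_edges(expert_edge_sets, threshold=None):
    """Keep an edge when at least `threshold` experts propose it
    (default: strict majority of the experts)."""
    if threshold is None:
        threshold = len(expert_edge_sets) // 2 + 1
    votes = Counter(e for edges in expert_edge_sets for e in set(edges))
    return {e for e, n in votes.items() if n >= threshold}

experts = [                      # edges proposed by 5 algorithms
    {("A", "B"), ("B", "C")},
    {("A", "B"), ("C", "D")},
    {("A", "B"), ("B", "C")},
    {("B", "C"), ("C", "D")},
    {("A", "B")},
]
combined = majority_vote_edges(experts)
```

Here ("A","B") receives four votes and ("B","C") three, so both survive, while ("C","D") with two votes is dropped.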
|
Title: |
A BAYESIAN APPROACH FOR AUTOMATIC
BUILDING LIGHTWEIGHT ONTOLOGIES FOR E-LEARNING ENVIRONMENT |
Author(s): |
Francesco Colace, Massimo De Santo,
Mario Vento and Pasquale Foggia |
Abstract: |
In the last decade the term
“Ontology” has become a fashionable word inside the Knowledge
Engineering Community. Although there are several methodologies and
methods for building ontologies they are not fully mature if we
compare them with software and knowledge engineering techniques. In
literature the main approaches to solve this problem aim to
facilitate manual ontology engineering by providing natural language
processing tools or skeleton methods. Other approaches rely on
machine learning and automated language processing techniques in
order to extract concepts and relations from structured or
unstructured data such as databases and text. This second approach
is more interesting and fashionable but shows very poor results. On
the other hand the concept of ontology is not unique. In this paper
we propose a novel approach for building university curricula
ontology through analysis of real data: answers of students to final
course tests. In this paper the term ontology means Lightweight
Ontology: a taxonomy with added semantic value. In fact, teachers
design these tests keeping in mind the main topics of the course
knowledge domain and their semantic relations. The ontology building
is accomplished by means of Bayesian networks. The proposed method
is composed of two steps: the first uses a structural learning
multi-expert system to build a Bayesian network from data analysis;
in the second, the obtained Bayesian network is translated into the
course ontology. This approach can be useful for subsequent
inference and knowledge extraction tasks, such as updating lesson
sequencing in an e-learning environment or improving the performance
of intelligent tutoring systems. |
|
Title: |
A CLUSTER FRAMEWORK FOR DATA MINING
MODELS - AN APPLICATION TO INTENSIVE MEDICINE |
Author(s): |
Manuel Santos, João Pereira and
Álvaro Silva |
Abstract: |
Clustering is a technique widely
applied in Data Mining problems due to the granularity, accuracy and
adjustment of the models induced. Despite these results, this
approach generates a considerably large set of models, which makes
application to new cases difficult. This paper presents a framework
to deal with this problem, supported by a three-dimensional matrix
structure. The usability and benefits of this instrument are
demonstrated through a case study in the area of intensive
medicine. |
|
Title: |
QUALITY CONTENT MANAGEMENT FOR
E-LEARNING: GENERAL ISSUES FOR A DECISION SUPPORT SYSTEM |
Author(s): |
Erla Morales and Francisco García |
Abstract: |
In today’s world, reusable learning
object concepts and standards for their treatment represent an
advantage for the knowledge management systems of any kind of
business that supports an on-line system. Users are able to manage
and reuse content according to their needs without interoperability
problems. Importing learning objects for e-learning aims to increase
an information repository, but the quality of the learning objects
is not guaranteed. Given the great importance of knowledge and its
suitable management for e-learning, this work proposes a system to
manage quality learning objects, or units of learning, to help
teachers select the best content for structuring their courses. To
achieve this we suggest two subsystems: first, an importation,
normalization and evaluation subsystem; and second, a selection,
delivery and post-evaluation subsystem. |
|
Title: |
INTELLIGENT SOLUTION EVALUATION BASED
ON ALTERNATIVE USER PROFILES |
Author(s): |
Georgios Bardis, Georgios Miaoulis
and Dimitri Plemenos |
Abstract: |
The MultiCAD platform is a system
that accepts the declarative description of a scene (e.g. a
building) as input and generates the geometric descriptions that
comply with that description. Its goal is to facilitate the
transition from the intuitive hierarchical decomposition of the
scene to its concrete geometric representation. The aim of the
present work is to provide the existing system with an intelligent
module that will capture, store and apply user preferences in order
to eventually automate the task of solution selection. A combination
of two components, based on decision support and artificial
intelligence methodologies respectively, is currently being
implemented. A method is also proposed for the fair and efficient
comparison of the results. |
|
Title: |
IMPLEMENTATION OF A HYBRID INTRUSION
DETECTION SYSTEM USING FUZZYJESS |
Author(s): |
Aly El–Semary, Janica Edmonds, Jesús
González and Mauricio Papa |
Abstract: |
This paper describes an
implementation of a fuzzy logic inference engine that is used as a
part of a Hybrid Fuzzy Logic Intrusion Detection System. A
data-mining algorithm is used off-line to produce fuzzy-logic rules
and capture features of interest in network traffic. Using an
inference engine, the intrusion detection system evaluates these
rules and gives network administrators indications of the firing
strength of the ruleset. The inference engine implementation is
based on the Java Expert System Shell (Jess) from Sandia National
Laboratories and FuzzyJess available from the National Research
Council of Canada. Examples and experimental results using data sets
from MIT Lincoln Laboratory demonstrate the potential of the
approach. |
|
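The rule-evaluation idea in the abstract above (fuzzy rules over traffic features whose firing strength is reported to administrators) can be sketched outside Jess. A minimal Python illustration, with invented feature names, membership shapes and thresholds; it is not the authors' FuzzyJess implementation:

```python
# Minimal sketch of fuzzy-rule firing-strength evaluation.
# Feature names and membership parameters are invented for illustration.

def tri(x, a, b, c):
    """Triangular membership function rising on [a, b], falling on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Fuzzy sets over two hypothetical traffic features
def high_syn_rate(x):
    return tri(x, 50.0, 200.0, 400.0)   # SYN packets per second

def many_dst_ports(x):
    return tri(x, 10.0, 60.0, 120.0)    # distinct destination ports

def firing_strength(features):
    """Firing strength of one rule: AND of antecedents taken as min."""
    return min(high_syn_rate(features["syn_rate"]),
               many_dst_ports(features["dst_ports"]))

sample = {"syn_rate": 180.0, "dst_ports": 55.0}
print(round(firing_strength(sample), 3))  # degree to which the rule fires
```

Taking min for conjunction mirrors common Mamdani-style practice; in the paper the rule base itself is produced off-line by a data-mining algorithm.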
Title: |
A DECISION SUPPORT SYSTEM BASED ON
NEURO-FUZZY SYSTEM FOR RAILROAD MAINTENANCE PLANNING |
Author(s): |
Michele Ottomanelli, Mauro Dell’Orco
and Domenico Sassanelli |
Abstract: |
Optimization of Life Cycle Cost
(LCC), related to the railroad maintenance, is one of the main goals
of the railways managers. To obtain the best possible balance
between safety and operating costs, “on condition” maintenance is
more and more used; that is, a maintenance intervention is planned
only when and where necessary. Nowadays, the conditions of railways
are monitored by special diagnostic trains: such trains, like
Archimede, the diagnostic train of the Italian National Railways,
simultaneously measure dozens of characteristic quantities every
50 cm. They therefore provide a vast amount of data, to be analyzed
through an appropriate Decision
Support System (DSS), in order to plan an efficient on condition
maintenance. However, even the most up-to-date DSSs have some
drawbacks: first of all, they are based on a binary logic with rigid
thresholds, restricting their flexibility in use; additionally, they
adopt considerable simplifications in the rail track deterioration
model. In this paper, we present a DSS able to overcome these
drawbacks: based on fuzzy logic, it is able to handle thresholds
expressed as a range, an approximate number or even a verbal value;
moreover, through artificial neural networks it is possible to
obtain more realistic rail track deterioration models. The proposed
model can analyze the data available for a given portion of
rail-track and then plan the maintenance, optimizing the
available resources. |
|
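The abstract's point about thresholds expressed as a range rather than a crisp value can be illustrated with a trapezoidal membership function. A hedged sketch: the measured quantity and the break-points below are assumptions, not values from the paper:

```python
# Sketch: an "on condition" maintenance threshold expressed as a range.
# The measured quantity and the break-points below are assumptions.

def trapezoid(x, a, b, c, d):
    """Membership rising on [a, b], flat at 1 on [b, c], falling on [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def needs_maintenance(gauge_deviation_mm):
    """Degree to which a measured gauge deviation calls for intervention."""
    return trapezoid(gauge_deviation_mm, 4.0, 7.0, 25.0, 30.0)

for deviation in (3.0, 5.5, 10.0):
    print(deviation, needs_maintenance(deviation))
```

A crisp threshold would flip from 0 to 1 at a single value; the sloped region of the trapezoid is what lets the DSS treat borderline measurements gradually.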
Title: |
SCENARIO MANAGEMENT: PROCESS AND
SUPPORT |
Author(s): |
M. Daud Ahmed and David Sundaram |
Abstract: |
Scenario planning is a widely
accepted management tool for decision support activities. Scenario
planning, development, organisation, analysis, and evaluation are
generally quite complex processes. Systems that purport to support
these processes are complex and difficult to use and do not fully
support all phases of scenario management. Though traditional
Decision Support Systems (DSS) provide strong database, modelling
and visualisation capabilities for the decision maker they do not
explicitly support scenario management well. This paper presents an
integrated life cycle approach for scenario driven flexible decision
support. The proposed processes help the decision maker with idea
generation, scenario planning, development, organisation, analysis,
and execution. We also propose a generalised scenario evaluation
process that allows homogeneous and heterogeneous scenario
comparisons. This research develops a domain independent,
component-based, modular framework and architecture that support the
proposed scenario management process. The framework and architecture
have been validated through a concrete prototype. |
|
Title: |
TRANSFERRING PROBLEM SOLVING
STRATEGIES FROM THE EXPERT TO THE END USERS - SUPPORTING
UNDERSTANDING |
Author(s): |
Anne Håkansson |
Abstract: |
To support sharing knowledge between
people in an organisation, new types of systems are needed to
transfer domain knowledge and problem solving strategies from an
expert to end users, thereby making the knowledge available and
applicable in a specific domain. However, to make the knowledge
available, these systems usually use only a small number of views for
displaying the contents of the system, whereas the end users may need
several different views. Moreover, to apply the knowledge in the
organisation, the systems need a way of illustrating the reasoning
strategies involved in an interpretation of the knowledge to reach
conclusions. One solution is to incorporate different diagrams into
knowledge management systems to facilitate the user’s grasping of
the knowledge and the strategies. This paper describes the ways in
which knowledge management systems can facilitate the transfer of
problem solving strategies from a domain expert to different kinds of end
users. To this objective, we suggest using visualisation and
graphical diagrams together with simulation to support transferring
problem solving strategies from a domain expert to end users.
Visualisation can support end users to follow the reasoning strategy
of the system more easily (Håkansson 2003a; Håkansson 2003b). This
visualisation includes static presentation and dynamic presentation
of rules and facts in the knowledge base, which are used during
execution of the system. The static presentation illustrates how
different rules are related in a sequence diagram of the Unified
Modelling Language (UML). The dynamic presentation visualises the
rules used and the facts relevant to a specific consultation, i.e.,
the presentation depends on the input inserted by the users. This is
illustrated in a collaboration diagram of the UML. The dynamic
presentation can also be used to simulate the reasoning strategy for a
particular session. |
|
Title: |
CLINICAL DECISION SUPPORT BY TIME
SERIES CLASSIFICATION USING WAVELETS |
Author(s): |
Markus Nilsson, Peter Funk and Ning
Xiong |
Abstract: |
Clinicians sometimes need help
with diagnoses, or simply need reassurance that they are making the
right decision. This could be provided to the clinician in the form of a
decision support system. We have designed and implemented a decision
support system for the classification of time series. The system is
called HR3Modul and is designed to assist clinicians in the
diagnosis of respiratory sinus arrhythmia. Two parallel streams of
physiological time series are analysed for the classification task.
Patterns are retrieved from one of the time series by the support of
the other time series. These patterns are transformed with wavelets
and matched for similarity by Case-Based Reasoning. Pre-classified
patterns are stored and are used as knowledge in the system. The
amount of patterns that have to be matched for similarity is reduced
by a clustering technique. In this paper, we show that
classification of physiological time series by wavelets is a viable
option for clinical decision support. |
|
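The pipeline sketched in the abstract above (wavelet-transform a pattern, then match it against stored pre-classified patterns) can be illustrated with a one-level Haar transform and nearest-neighbour retrieval. This is a toy sketch with invented data, not the HR3Modul system:

```python
# Toy sketch: Haar wavelet features + nearest-neighbour case retrieval.
# The series values and labels are invented for illustration.
import math

def haar_step(signal):
    """One level of the Haar transform: pairwise averages then differences."""
    avgs = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    diffs = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avgs + diffs

def classify(pattern, case_base):
    """Label of the stored case closest to the pattern in wavelet space."""
    feats = haar_step(pattern)
    return min(case_base,
               key=lambda case: math.dist(feats, haar_step(case["series"])))["label"]

cases = [{"series": [1.0, 1.0, 1.0, 1.0], "label": "normal"},
         {"series": [4.0, 0.0, 4.0, 0.0], "label": "arrhythmic"}]
print(classify([3.5, 0.5, 3.9, 0.1], cases))
```

Matching in the transformed space rather than on raw samples is what makes the retrieval tolerant of small shifts in amplitude, the property the abstract relies on.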
Title: |
SOFTWARE MAINTENANCE EXPERT SYSTEM
(SMXPERT) - A DECISION SUPPORT INSTRUMENT |
Author(s): |
Alain April and Jean-Marc Desharnais |
Abstract: |
Maintaining and supporting the
software of an organization is not an easy task, and software
maintainers do not currently have access to tools to evaluate
strategies for improving the specific activities of software
maintenance. This article presents a knowledge-based system which
helps in locating best practices in a software maintenance
capability maturity model (SMmm). The contributions of this paper
are: 1) to instrument the maturity model with a support tool to aid
software maintenance practitioners in locating specific best
practices; and 2) to describe the knowledge-based approach and
system overview used by the research team. |
|
Title: |
STRATEGIC INFORMATION SYSTEMS
ALIGNMENT - A DECISION SUPPORT APPLICATION FOR THE INTERNET ERA |
Author(s): |
David Lanc and Lachlan MacKinnon |
Abstract: |
Strategic information systems
planning, SISP, methods have proven organisationally complex to
utilise, despite 40 years of research and evolution of Information
Systems, IS, in the organisational context. The diverse nature of
organisational strategy and environmental factors have been mooted
as primary causes. On one hand, confusion exists in the literature
due to divergent, deficient definitions of SISP. On the other, a
lack of distinction exists between SISP as a planning process, and
the broader alignment of organisational direction with the IS
capability that provides the context for sustainable IS intellectual
and cultural integration. Consequently, no methods or models for
alignment of IS and organisational activities exist that have both
validity in the literature and sustainability in practice. HISSOM
(Holistic Information Systems Strategy for Organisational
Management) is a practical, holistic model that co-ordinates and
facilitates cohesive alignment of organisational needs and the IS
capability required to meet those needs, at (1) stakeholder; (2)
feedback metrics; (3) strategy and change management; and (4)
organisational culture and capability levels. HISSOM was initially
developed as a logical extension of the IS-alignment literature, and
has been validated by action research in several significant studies
in different industries, markets and organisational settings. The
HISSOM model has been revised in the light of these studies, and a
practical, Web-based decision support application, the HISSOM
Decision Support Advisor, HDSA, is now under development, to promote
wider use of the model and obtain evolutionary feedback from the
user community. A synthesis of the development of HISSOM and work on
designing the HDSA architecture is described, together with the
impact of this research on extending the field of SISP and
IS-alignment. |
|
Title: |
USING DMFSQL FOR FINANCIAL CLUSTERING |
Author(s): |
Ramón Alberto Carrasco, María Amparo
Vila and José Galindo |
Abstract: |
At present we have a dmFSQL server
available for Oracle© Databases, programmed in PL/SQL. This server
allows us to query a Fuzzy or Classical Database with the dmFSQL
language (Data Mining Fuzzy SQL) for any data type. The dmFSQL
language is an extension of the SQL language, which permits us to
write flexible (or fuzzy) conditions in our queries to a fuzzy or
traditional database. In this paper we propose the use of the dmFSQL
language for fuzzy queries as one of the techniques of Data Mining
which can be used to obtain the clustering results in real time.
This enables us to evaluate the process of extraction of information
(Data Mining) at both a practical and a theoretical level
(applications in some Spanish savings banks). We present a new version
of the prototype, called DAPHNE, for clustering which uses dmFSQL. We
consider that this model satisfies the requirements of Data Mining
systems (handling of different types of data, high-level language,
efficiency, certainty, interactivity, etc.) and this new level of
personal configuration makes the system very useful and flexible. |
|
Title: |
EXECUTION OF IMPERATIVE NATURAL
LANGUAGE REQUISITIONS BASED ON UNL INTERLINGUA AND SOFTWARE
COMPONENTS |
Author(s): |
Flávia Linhalis and Dilvan de Abreu
Moreira |
Abstract: |
This paper describes the use of an
Interlingua as a new approach to the execution of imperative natural
language (NL) requisitions. Our goal is to embed a natural language
interface into applications to allow the execution of users'
requisitions, described in natural language, through the activation
of specific software components. The advantage of our approach is
that natural language requisitions are first converted to an
interlingua, UNL (Universal Networking Language), before the
suitable components, methods and arguments are retrieved to execute
each requisition. The interlingua allows the use of different human
languages in the requisition (other systems are restricted to
English). The NL-UNL conversion is performed by the HERMETO system.
In this paper, we also describe SeMaComp (Semantic Mapping between
UNL relations and Components), a module that extracts semantically
relevant information from UNL sentences and uses this information to
retrieve the appropriate software components. |
|
Title: |
WEB USAGE MINING USING ROUGH
AGGLOMERATIVE CLUSTERING |
Author(s): |
Pradeep Kumar, P. Radha Krishna,
Supriya Kumar De and S. Bapi Raju |
Abstract: |
The tremendous growth of the web
has encouraged the application of data mining techniques to web logs.
Data mining on the World Wide Web is an important and active
area of research. Web log mining is the analysis of web log files with
their web page sequences. Web mining is broadly classified into web
content mining, web usage mining and web structure mining. Web usage
mining is a technique to discover usage patterns from Web data, in
order to understand and better serve the needs of Web-based
applications. This paper demonstrates a rough set based upper
similarity approximation method to cluster web usage patterns. Results
are presented using clickstream data to illustrate our technique. |
|
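The agglomerative step in the abstract above can be illustrated with a set-based similarity over visited pages. The paper's measure is a rough-set upper-approximation similarity; this toy Jaccard-style sketch only stands in for it:

```python
# Toy agglomerative clustering of web sessions by page-set overlap.
# A Jaccard-style similarity stands in for the paper's rough-set measure.

def similarity(a, b):
    """Overlap between two sets of visited pages (Jaccard index)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def agglomerate(sessions, min_sim):
    """Repeatedly merge the most similar pair of clusters (single link)."""
    clusters = [[s] for s in sessions]
    while True:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                s = max(similarity(x, y)
                        for x in clusters[i] for y in clusters[j])
                if s >= min_sim and (best is None or s > best[0]):
                    best = (s, i, j)
        if best is None:
            return clusters
        _, i, j = best
        clusters[i] += clusters.pop(j)

sessions = [["home", "cart"], ["home", "cart", "pay"], ["blog", "about"]]
print(len(agglomerate(sessions, 0.5)))  # the two shopping sessions merge
```

Stopping when no pair exceeds the similarity threshold yields a variable number of clusters, which suits usage data where the number of behaviour groups is unknown in advance.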
Title: |
A LINGUISTIC FUZZY METHOD TO STUDY
ELECTRICITY MARKET AGENTS |
Author(s): |
Santiago Garcia-Talegon and Juan
Moreno-Garcia |
Abstract: |
The aim of this paper is to study the
behavior of the agents that participate in the Spanish electricity
market. For this purpose, we analyze the data that the Market Operator
makes available after the period of confidentiality. The objective is
to understand the market operation in order to simulate the block
offerings of some of the agents. Market participants are companies
authorized to participate in the electricity production market as
electricity buyers and sellers. The economic management of the
electricity market is entrusted to the Iberico Market Operator of
Energy (MO). A fuzzy method has been created. It is based on the hour
and on the matches obtained on the previous day at that hour, and it
is capable of modelling the behavior of an electricity-market agent in
each hour. |
|
Title: |
A METHODOLOGY FOR INTELLIGENT E-MAIL
MANAGEMENT |
Author(s): |
Francisco P. Romero, Jose A. Olivas
and Pablo Garcés |
Abstract: |
We present, in the context of
intelligent Information Retrieval, a soft-computing-based
methodology that enables efficient e-mail management. We use
fuzzy logic technologies and a data mining process for the automatic
classification of large amounts of e-mail into a folder organization.
A process to deal with incoming messages, keeping the achieved
structure, is also presented. The aim is to make possible an optimum
exploitation of the information contained in these messages.
To this end, we apply Fuzzy Deformable Prototypes for the knowledge
representation. The effectiveness of the method has been proved by
applying these techniques in an IR system. The documents considered
are composed of a set of e-mail messages produced by several
distribution lists with different subjects and languages. |
|
Title: |
ANATOMY OF A SECURE AND SCALABLE
MULTIAGENT SYSTEM FOR EVENT CAPTURE AND CORRELATION |
Author(s): |
Timothy Nix, Kenneth Fritzsche and
Fernando Maymi |
Abstract: |
Event monitoring and correlation
across a large network is inherently difficult given limitations in
processing with regard to the huge quantity of generated data.
Multiple agent systems allow local processing of events, with
certain events or aggregate statistics being reported to centralized
data stores for further processing and correlation by other agents.
This paper presents a framework for a secure and scalable multiagent
system for distributed event capture and correlation. We will look
at what requirements are necessary to implement a generic multiagent
system from the abstract view of the framework itself. We will
propose an architecture that meets these requirements. Then, we
provide some possible applications of the multiagent network within
the described framework. |
|
Title: |
PERFORMANCE MEASUREMENT AND CONTROL
IN LOGISTICS SERVICE PROVIDING |
Author(s): |
Elfriede Krauth, Hans Moonen, Viara
Popova and Martijn Schut |
Abstract: |
Planning is the process of assigning
individual tasks to resources at certain points in time. Initially
a manual job, it has in the past decades largely been taken over by
information systems, especially in industries such as road
logistics. This paper focuses on the performance parameters
and objectives that play a role in the planning process, in order to
gain insight into the factors that should play a role when designing
new software systems for Logistical Service Providers (LSPs).
To this end we study the area of Key Performance Indicators (KPIs).
Typically, KPIs are used in an ex-post context: to evaluate the
past performance of a company. We reason that KPIs could be utilized
in the planning phase as well. The paper describes the extended
literature survey that we performed, and introduces a novel
framework that captures the dynamics of competing KPIs, by
positioning them in the practical context of an LSP. This framework
could be valuable input in the design of agent-based information
systems, capable of incorporating the business dynamics of today’s
LSPs. |
|
Title: |
DECISION SUPPORT SYSTEM FOR
AFFORDABLE HOUSING |
Author(s): |
Deidre E. Paris |
Abstract: |
This research used neural networks to
develop a decision support system and to model the relationship
between one’s living environment and residential satisfaction.
Residential satisfaction was investigated at two affordable housing
multifamily rental properties located in Atlanta, Georgia. The
neural network was trained using data from Defoors Ferry Manor and
the network was validated using data from Moores Mill. The neural
network accurately categorized ninety-eight percent of the cases in
the training set and ninety-three percent of the cases in the
validation test set. This research represents a first attempt to use
neural networking to model the relationship between one’s living
environment and residential satisfaction. |
|
Title: |
KNOWLEDGE MANAGEMENT IN
NON-GOVERNMENTAL ORGANISATIONS - A PARTNERSHIP FOR THE FUTURE |
Author(s): |
José Braga de Vasconcelos, Paulo
Castro Seixas, Paulo Gens Lemos and Chris Kimble |
Abstract: |
This paper explores Knowledge
Management (KM) practices for use with portal technologies in
Non-Governmental Organizations (NGOs). The aim is to help NGOs
become true CSOs (Civil Society Organizations). In order to deal
with (at the top) more donors and (at the bottom) more
beneficiaries, NGOs working in Humanitarian Aid and Social
Development will increasingly require a system to manage the
creation, accessing and deployment of information: within the NGOs
themselves, between different NGOs that work together and,
ultimately, between NGOs and Civil Society as a whole. Put simply,
NGOs are organizations that need an effective KM solution to tackle
the problems that arise from both their local-global nature and from
the difficulties of ensuring effective communication between and
within NGOs and Civil Society. To address these problems, the
underlying objectives, entities, activities, workflow and processes
of the NGO will be considered from a KM framework. Thus, this paper
presents the needs of a responsible, cooperative and participative
NGO from a KM perspective, in order to promote the growth of
Communities of Practice in local as well as in global networks.
Viewed in this way we believe that KM will become an engine to turn
NGOs into CSOs. |
|
Title: |
DISTRIBUTED COMMUNITY COOPERATION IN
MULTI AGENT FILTERING FRAMEWORK |
Author(s): |
Sahin Albayrak and Dragan Milosevic |
Abstract: |
In today’s information society, where
it is easy to produce and publish information, filtering services must
be able to search many potentially relevant distributed sources
simultaneously, and to autonomously combine only the best results
found. Ignoring the necessity of addressing information retrieval
tasks in a distributed manner is a major drawback of many existing
search engines trying to survive the ongoing information explosion.
The essence of the proposed solution for performing distributed
filtering is in both installing filtering communities around
information sources and setting up a comprehensive cooperation
mechanism, which both takes into account how promising each
particular source is and tries to improve itself at runtime. The applicability of the
presented cooperation among communities is illustrated in a system
serving as intelligent personal information assistant (PIA).
Experimental results show that integrated cooperation mechanisms
successfully eliminate long-lasting filtering jobs with durations
over 1000 seconds, and they do that within an acceptable decrease in
feedback and precision values of only 3% and 6%, respectively. |
|
Title: |
USING ENSEMBLE AND LEARNING
TECHNIQUES TOWARDS EXTENDING THE KNOWLEDGE DISCOVERY PIPELINE |
Author(s): |
Sakthiaseelan Karthigasoo, Yu-N Cheah
and Selvakumar Manickam |
Abstract: |
Knowledge discovery presents itself
as a very useful technique for transforming enterprise data into
actionable knowledge. However, its effectiveness is limited because
it is difficult to develop a knowledge discovery pipeline
that is suited to all types of datasets. Moreover, it is difficult
to select the best possible algorithm for each stage of the
pipeline. In this paper, we define (a) a novel clustering ensemble
algorithm based on self-organizing maps to automate the annotation
of un-annotated medical datasets; (b) a data discretization
algorithm based on Boolean Reasoning to discretize continuous data
values; (c) a rule filtering mechanism; and (d) an extension of the
regular knowledge discovery process that includes a learning
mechanism based on neural network ensembles to produce a neural
knowledge base for decision support. We believe that this would
result in a decision support system that is tolerant of ambiguous
queries, e.g. with incomplete inputs. We also believe that the
boosting and aggregating features of ensemble techniques would help
to compensate for any shortcomings in some stages of the pipeline.
Ultimately, we combine these efforts to produce an extended
knowledge discovery pipeline. |
|
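The "boosting and aggregating" benefit claimed for ensembles in the abstract above comes down to combining several imperfect models. A minimal majority-vote sketch with hypothetical toy classifiers (the paper's ensemble members are neural networks and self-organizing maps, not these lambdas):

```python
# Minimal sketch of ensemble aggregation by majority vote.
# The three toy classifiers below are hypothetical stand-ins.
from collections import Counter

def majority_vote(classifiers, x):
    """Return the label most of the classifiers assign to x."""
    votes = Counter(clf(x) for clf in classifiers)
    return votes.most_common(1)[0][0]

# Toy members disagreeing near their decision boundaries
members = [lambda x: "ill" if x > 0.4 else "healthy",
           lambda x: "ill" if x > 0.6 else "healthy",
           lambda x: "ill" if x > 0.5 else "healthy"]

print(majority_vote(members, 0.55))  # two of the three members vote "ill"
```

The vote smooths out the disagreement of individual members near their thresholds, which is the compensation effect the abstract appeals to.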
Title: |
SITUATION ASSESSMENT WITH OBJECT
ORIENTED PROBABILISTIC RELATIONAL MODELS |
Author(s): |
Catherine Howard and Markus Stumptner |
Abstract: |
This paper presents a new Object
Oriented Probabilistic Relational language which is built upon the
Bangsø Object Oriented Bayesian Network framework. We are currently
studying the application of this language for situation assessment
in complex military and business domains. |
|
Title: |
FACIAL POLYGONAL PROJECTION - A NEW
FEATURE EXTRACTING METHOD TO HELP IN NEURAL FACE DETECTION |
Author(s): |
Adriano Martins Moutinho, Antonio
Carlos Gay Thomé and Pedro Henrique Gouvêa Coelho |
Abstract: |
Locating the position of a human face
in a photograph is likely to be a very complex task, requiring
several image and signal processing methods. This paper proposes a
new technique called polygonal facial projection that is able, by
measuring specific distances on the image, to extract relevant
features and improve efficiency of neural face identification
systems (Rowley, 1999) (xxx and yyy, 2004), facilitating the
separation of facial patterns from other objects present in the
image. |
|
Title: |
USING A GAME THEORETICAL APPROACH FOR
EXPERIMENTAL SIMULATION OF BROOD REDUCTION - CONFLICT AND
CO-OPERATION, EFFECT ON BROOD SIZE WITH LIMITED RESOURCES |
Author(s): |
Fredrik Åhman and Lars Hillström |
Abstract: |
A number of hypotheses have been
presented to explain the complex interactions occurring during brood
reduction, but few simulation models successfully combine the
hypotheses necessary to describe an evolutionarily stable strategy
(ESS). In our solution we present a simple experimental simulation of
brood reduction in which each sibling acts as an autonomous agent
that has the ability to initiate actions for co-operation and
competition against other chicks within the same brood. Agents have a
limited set of actions which can be activated at the onset of some
environmental condition. Parameters for the optimization of inclusive
fitness are based on Mock's [5] earlier theory of maximizing
inclusive fitness. During the experimental simulations we have
studied brood sizes and fitness measures with varying degrees of
asynchrony, prey intensity and aggressiveness of siblings within the
artificial brood. All siblings were assumed
to be full sibs with relatedness 0.5. Results from the experimental
simulation show some interesting similarities with brood reduction
in a real world setting. Agents within the artificial brood respond
with competitiveness whenever resources are limited. Simulated later
hatching also showed a lower rate of survival because of conflicts
with older siblings. |
|
Title: |
TOWARDS A CHANGE-BASED CHANCE
DISCOVERY |
Author(s): |
Zhiwen Wu and Ahmed Y. Tawfik |
Abstract: |
This paper argues that chances (risks
or opportunities) can be discovered from our daily observations and
background knowledge. A person can easily identify chances in a news
article. In doing so, the person combines the new information in the
article with some background knowledge. Hence, we develop a
deductive system to discover chances relevant to particular chance
seekers. This paper proposes a chance discovery system that uses a
general-purpose knowledge base and specialized reasoning algorithms. |
|
Title: |
REDUCING RISK IN THE ENTERPRISE:
PROPOSAL FOR A HYBRID AUDIT EXPERT SYSTEM |
Author(s): |
Susan Clemmons and Kenneth Henry |
Abstract: |
This paper theorizes the use of a
hybrid expert system to support a complete audit of financial
statements for an enterprise. The expert system proposed would
support the audit process by using two types of artificial
intelligence technologies: case-based reasoning and fuzzy logic
technologies. The case base and automated reasoning recommendations
would give the auditing firm additional insight into the audit. Unlike
previous audit expert systems, this system is intended to focus
broadly on an enterprise’s entire financial statement audit process;
it combines a case based knowledge representation with fuzzy logic
processing. The attempt at capturing a wide domain is necessary to
support organizational decision-making. Focusing on narrow decision
points within an audit process limits the users and usefulness of
the system. |
|
Area 3 - Information
Systems Analysis and Specification
|
Title: |
PILOTING SOFTWARE ENGINEERING
INSTITUTE’S SOFTWARE PROCESS IMPROVEMENT IN INFORMATION SYSTEMS
GROUPS |
Author(s): |
Donald R. Chand |
Abstract: |
Although the Software Engineering
Institute’s (SEI) software process improvement has been successfully
used to improve the software development capabilities of software
groups in commercial, aerospace, and DOD subcontractor
organizations, the systems/applications development groups in
Information Systems (IS) organizations have been slow in embracing
the SEI approach. This paper describes the experience of piloting
the SEI process improvement with six different IS groups in the
Information Management and Technology (IM&T) division of XYZ
Corporation. The lessons learned provide an understanding of
potential barriers to adopting the SEI approach in IS organizations. |
|
Title: |
EARLY DETECTION OF COTS FUNCTIONAL
SUITABILITY FOR AN E-PAYMENT CASE STUDY |
Author(s): |
Alejandra Cechich and Mario Piattini |
Abstract: |
The adoption of COTS-based
development brings with it many challenges about the identification
and finding of candidate components for reuse. Particularly, the
first stage in the identification of COTS candidates is currently
carried out dealing with unstructured information on the Web, which
makes the evaluation process highly costly when applying complex
evaluation criteria. To facilitate the process, in this paper we
introduce an early measurement procedure for functional suitability
of COTS candidates, and we illustrate the proposal by evaluating
components for an e-payment case study. |
|
Title: |
BRAIL – SAFETY REQUIREMENT ANALYSIS |
Author(s): |
Jean-Louis Boulanger |
Abstract: |
In the European railways standards
(CENELEC EN 50126, (1999); EN 50128, (2001); EN 50129, (2000)), it
is required to obtain evidence of safety in system requirements
specifications. In the railway domain, safety requirements are
obviously severe. It is very important to maintain requirements
traceability during the software development process, even if the
models used are informal, semi-formal or formal. This
study is integrated into a larger one that aims at linking an
informal approach (UML notation) to a formal (B method) one. |
|
Title: |
TOWARDS A META MODEL FOR BUSINESS
PROCESS CONCEPTS |
Author(s): |
Boriana Rukanova, Mehmet N. Aydin,
Kees van Slooten and Robert A. Stegwee |
Abstract: |
Although there have been attempts to
identify essential business process concepts and to create a meta
model of business process concepts, the current studies do not
include an explicit approach on how to identify these concepts.
Further, how to construct such a meta model and how to add new
elements to it remains implicit. This paper presents an approach on
how to construct a meta model for business process concepts. The
approach defines how to capture and define business process
concepts, how to construct a meta model using these concepts and how
to extend the meta model. The paper also illustrates how to apply
the approach. The actual construction of the meta model for business
process concepts is a subject of further research. |
|
Title: |
BUILDING CLASS DIAGRAMS
SYSTEMATICALLY |
Author(s): |
M. J. Escalona and J. L. Cavarero |
Abstract: |
The class diagram has become more
important as the object-oriented paradigm has gained
acceptance. This importance has also carried over into the new
field of web engineering. However, in many cases, it is not easy
to obtain the best class diagram for a problem. For this reason, it is
necessary to offer systematic processes (as cheap and easy as
possible) to give a suitable reference to the development team. This
work presents two different processes developed in the University of
Nice and in the University of Seville and applies them to the same
problem, comparing the results and drawing some important
conclusions. |
|
Title: |
DESIGN OF A STANDOFF OBJECT-ORIENTED
MARKUP LANGUAGE (SOOML) FOR ANNOTATING BIOMEDICAL LITERATURE |
Author(s): |
Jing Ding and Daniel Berleant |
Abstract: |
With the rapid growth of
electronically available scientific literature, text mining is
attracting increasing attention. While numerous algorithms, tools,
and systems have been developed for extracting information from
text, little effort has been focused on how to mark up the
information. We present the design of a standoff, object-oriented
markup language (called SOOML), which is simple, expressive,
flexible, and extensible, satisfying the demanding needs of
biomedical text mining. |
|
Title: |
SPECIFICATION OF E-COMMERCE SYSTEMS
USING THE UMM MODELLING METHODOLOGY |
Author(s): |
Ioannis Ignatiadis and Konstantinos
Tarabanis |
Abstract: |
UN/CEFACT (United Nations / Centre
for Trade Facilitation and Electronic Business) Modelling
Methodology – in short UMM – has been developed by the TMWG
(Technical Modelling Working Group) within UN/CEFACT, in order to
support the development of e-business applications in a
technology-neutral, implementation-independent manner. The purpose
of this paper is to provide the results from an EU co-funded
project, entitled “LAURA”, where UMM was used for the analysis and
design of the e-commerce system to be developed. The goal of the
“LAURA” project is to set up adaptive zones of B2B electronic
commerce for Small and Medium Enterprises (SMEs) from the Less
Favoured Regions of Europe. In particular, an analysis of the
strengths and weaknesses of UMM will be carried out, as those were
evidenced from a practical perspective in the “LAURA” project. |
|
Title: |
WHAT CAN ORGANIZATIONAL ANALYSIS GIVE
TO REQUIREMENT ANALYSIS? DEVELOPING AN INFORMATION SYSTEM IN HOSPITAL
EMERGENCY DEPARTMENTS |
Author(s): |
Anne De Vos, Claire Lobet-Maris and
Anne Rousseau |
Abstract: |
This paper presents an overview of
the analytical framework we apply to organizational change in regard
to the development of information systems. A 3-dimensional way of
thinking is proposed, based on theory and methods taken from the
literature on organizations, especially the organized action
political theory developed by Crozier and Friedberg (1977, 1993) and
the theory of the “Economics of Worth” as presented in Boltanski
and Thevenot (1991). The first part of this paper will present the
conceptual framework of our approach to the question: which
organizational changes are inherent in the development of new
information systems? In the second part, we will put the framework
into operation. The method raises a question regarding the role
social sciences should play in the design of information systems. |
|
Title: |
PRESERVING THE CONTEXT OF INTERRUPTED
BUSINESS PROCESS ACTIVITIES |
Author(s): |
Sarita Bassil, Stefanie Rinderle,
Rudolf Keller, Peter Kropf and Manfred Reichert |
Abstract: |
The capability to safely interrupt
business process activities is an important requirement for advanced
process-aware information systems. Indeed, exceptions stemming from
the application environment often appear while one or more
application-related process activities are running. Safely
interrupting an activity consists of preserving its context, i.e.,
saving the data associated with this activity. This is important
since possible solutions for an exceptional situation are often
based on the current data context of the interrupted activity. In
this paper, a data classification scheme based on data relevance and
on data update frequency is proposed and discussed with respect to
two different real-world applications. Taking into account this
classification, a correctness criterion for interrupting running
activities while preserving their context is proposed and analyzed. |
|
Title: |
APPLYING SDBC IN THE
CULTURAL-HERITAGE SECTOR |
Author(s): |
Boris Shishkov and Jan L.G. Dietz |
Abstract: |
Among the actual
cultural-heritage-related problems is the one of effectively
managing and globally distributing digitized cultural (and
scientific) information. The only feasible way to realize this goal
is via the Internet. Hence, a significant issue to be considered is
the adequate design of software applications that realize brokerage
tasks within the global space. However, due to the great
complexity of this cultural-heritage-related task (compared to other
brokerage tasks successfully realized by software systems), the
usage of the existing popular modeling instrumentarium seems
inadequate. Hence, in this paper, an approach is presented and it is
briefly discussed how the approach could be useful for building
cultural heritage sector brokers. |
|
Title: |
RESEARCH ON SUPPORT TOOLS FOR
OBJECT-ORIENTED SOFTWARE REENGINEERING |
Author(s): |
Xin Peng, Wenyun Zhao, Yijian Wu and
Yunjiao Xue |
Abstract: |
Reengineering presents a practical
and feasible approach to transform legacy systems into evolvable
systems. Component-based systems are evolvable and can be easily
reengineered. The Internet and component-based software development
also point to a new orientation for reengineering. Object-oriented
software reengineering should be based on a component library and
focus on seamless cooperation with the component library and assembly
tool to construct a whole reengineering system. So the reengineering
discussed here concentrates on reconstructing the system into a more
feasible one via comprehension and modification of the legacy
system, extracting components from the system and submitting them to
the component library. In this paper, we present an object-oriented
software reengineering model and propose a component extraction
algorithm. Our tool prototype FDReengineer is also discussed. |
|
Title: |
ASPECT IPM: TOWARDS AN INCREMENTAL
PROCESS MODEL BASED ON AOP FOR COMPONENT-BASED SYSTEMS |
Author(s): |
Alexandre Alvaro, Eduardo Santana de
Almeida, Daniel Lucrédio, Antonio Franscisco do Prado, Vinicius
Cardoso Garcia and Silvio Romero de Lemos Meira |
Abstract: |
In spite of recent and constant
research in the Component-Based Development area, there is still a
lack of patterns, processes and methodologies that effectively
support both development “for reuse” and “with reuse”. This
paper presents Aspect IPM, a process model that integrates the
concepts of component-based software engineering, frameworks,
patterns, non-functional requirements and aspect-oriented
programming. This process model is divided into two activities: Domain
Engineering and Component-Based Development. An aspect-oriented
non-functional requirements framework was built to aid the software
engineer in these two activities. A preliminary evaluation
analyzing the results of using Aspect IPM is also presented. |
|
Title: |
A SECURITY ARCHITECTURE FOR
INTER-ORGANIZATIONAL WORKFLOWS: PUTTING SECURITY STANDARDS FOR WEB
SERVICES TOGETHER |
Author(s): |
Michael Hafner, Ruth Breu and Michael
Breu |
Abstract: |
Modern eBusiness processes span
a set of public authorities and private corporations.
Those processes require high security principles, rooted in open
standards. The SECTINO project follows the paradigm of model driven
security architecture: High level business-oriented security
requirements for inter-organizational workflows are translated into
a configuration for a standards based target architecture. The
target architecture encapsulates a set of core web services, links
them via a workflow engine, and guards them by imposing specified
security policies. |
|
Title: |
THE “RIGHT TO BE LET ALONE” AND
PRIVATE INFORMATION |
Author(s): |
Sabah S. Al-Fedaghi |
Abstract: |
The definition of privacy given by
Warren and Brandeis as the “right to be let alone” is described as
the most comprehensive of rights and the right most valued by
civilized men. Nevertheless, the formulation of privacy as the right
to be let alone has been criticized as a “broad” and “vague”
conception of privacy. In this paper we show that the concept of the
“right to be let alone” is an extraordinary, multifaceted notion that
coalesces practical and idealistic features of privacy. It embeds
three types of privacy depending on the activities associated with
them: active, passive and active/passive. Active privacy is a
“freedom-to” claim where the individual is an active agent when
dealing with private affairs, claiming he/she has the right to
control the “extendibility of others’ involvement” in these affairs
without interference. This is a right/contractual-based notion of
privacy. Accordingly, Justice Rehnquist's declaration of no privacy
interest in a political rally refers to active privacy. Passive
privacy is a “freedom-from” notion where the individual is a passive
agent when dealing with his/her private affairs and he/she has
privacy not due to control – as in active privacy – but through
others letting him/her alone. This privacy has duty/moral
implications. In this sense Warren and Brandeis advocated that even
truthful reporting leads to “a lowering of social standards and
morality.” Active/passive privacy is when the individual is both the
actor and the one acted on. These three-faceted interpretations of
the “right to be let alone” encompass most – if not all – definitions
of privacy and give the concept the required narrowness and precision. |
|
Title: |
USING A WORKLOAD INFORMATION
REPOSITORY - MAPPING BUSINESSES AND APPLICATIONS TO SERVERS AND
PROCESSES |
Author(s): |
Tim R. Norton |
Abstract: |
Workloads are often defined
differently within an organization, depending on the purpose of the
analysis, making it very difficult to compare analyses from
different points of view. WIRAM (Workload Information Repository
for Analysis and Modeling) is a preliminary implementation of a
database repository to collect application and system information
about workload groupings and their relationships. This information
can then be used to define consistent workloads from business
products to computer systems, regardless of the analysis or
modeling tools used or the objectives of the analysis. |
|
Title: |
SERVICE BROKERAGE IN PROLOG |
Author(s): |
Cheun Ngen Chong, Sandro Etalle,
Pieter Hartel, Rieks Joosten and Geert Kleinhuis |
Abstract: |
Service brokerage is a complex
problem. At the design stage the semantic gap between user, device
and system requirements must be bridged, and at the operational
stage the conflicting objectives of many parties in the value chain
must be reconciled. For example, why should a user who wants to watch
a film need to understand that due to limited battery power the film
can only be shown in low resolution? Why should the user have to
understand the business model of a content provider? To solve these
problems we present (1) the concept of a packager who acts as a
service broker, (2) a design derived systematically from a
semi-formal specification (the CC-model), and (3) an implementation
using our Prolog based LicenseScript language. |
|
Title: |
PATTERNS IN ONTOLOGY ENGINEERING:
CLASSIFICATION OF ONTOLOGY PATTERNS |
Author(s): |
Eva Blomqvist and Kurt Sandkuhl |
Abstract: |
In Software Engineering, patterns are
an accepted way to facilitate and support reuse. This paper focuses
on patterns in the field of Ontology Engineering and proposes a
classification scheme for ontology patterns. The scheme divides
ontology patterns into five levels: Application Patterns,
Architecture Patterns, Design Patterns, Semantic Patterns, and
Syntactic Patterns. Semantic and Syntactic Patterns are quite
well-researched but the higher levels of pattern abstraction are so
far almost unexplored. To illustrate the possibilities of patterns
on these levels some examples are discussed, together with ideas of
future work. Application of the pattern classification would require
defined patterns for all different kinds of ontologies, and both
manual and automatic pattern implementation. Our research is
focusing on the Design Pattern level, using existing patterns from
other areas to create Ontology Design Patterns for use in
semi-automatic ontology creation. |
|
Title: |
APPLYING COMPONENT-BASED UML-DRIVEN
CONCEPTUAL MODELING IN SDBC |
Author(s): |
Boris Shishkov and Jan L.G. Dietz |
Abstract: |
With the great role of ICT in many
areas, the importance of software applications (in utilizing ICT)
increases. However, we often observe in software projects: low user
satisfaction, increasing budgets, unrealized goals. It is claimed
that one frequent cause of software project failure is the mismatch
between (business) requirements and the actual functionality of the
delivered software application. In order to overcome this, it is
necessary to soundly align business process modeling and software
specification. A possible and promising way to realize this is using
components. In this paper, we report further results concerning the
proposition of a new approach, namely SDBC. What distinguishes SDBC
from the currently popular business/software modeling methods is the
component-based business-software alignment, the thorough
(multi-aspect) business process modeling perspective, and the
consistency with the UML. |
|
Title: |
MODEL DRIVEN ARCHITECTURE BASED
REAL-TIME ENTERPRISE INFORMATION INTEGRATION - AN APPROACH AND
IMPACT ON BUSINESSES |
Author(s): |
Vikas S. Shah |
Abstract: |
The rapid advancements of enterprise
applications urge organizations to access and process information in
multiple incompatible systems, accumulated as massive complex data in
diversified formats due to the lack of an accepted common base in the
development community. EII solutions must provide interoperability
across various software platforms with an ability to react and adapt
enterprise operations in the face of continuous internal and external
environmental alterations dealing with time-sensitive information.
The concept of RTE is based upon the premise of getting the right
information to the right people at the right time in “real time”.
MDA specifications lead the industry towards interoperable,
reusable, and portable software components as well as information
models based on standard models. Recently, MDA is considered as
another evolutionary step introducing an engineering discipline to
practice pattern-based software development. In this paper, we
present an innovative approach to achieve real-time intensive EII
through combining the respective strengths of MDA and RTE. The
purpose is to discuss issues arising during architectural choices and
trade-offs, introducing the notion of intelligent enterprise integration.
Preliminary observation reveals that the strategy provides a
consistent architectural framework and significantly reduces
integration cost. The paper also reports potential advantages and
implications of real-time EII over existing business models. |
|
Title: |
PERSPECTIVES ON PROCESS DOCUMENTATION
- A CASE STUDY |
Author(s): |
Jörg Becker, Christian Janiesch,
Patrick Delfmann and Wolfgang Fuhr |
Abstract: |
The documentation of IT projects is
of paramount importance for the lasting benefit of a project’s
outcome. However, different forms of documentation are needed to
comply with the diverse needs of users. In order to avoid the
maintenance of numerous versions of the same documentation, an
integrated method from the field of reference modeling creating
perspectives on configurable models is presented and evaluated
against a case in the field of health care. The proposal of a
holistic to-be model for process documentation provides useful hints
towards the need of presenting a model that relates to a specific
user’s perspective. Moreover, it helped to evaluate the applicability
of configurable, company-specific models concerning the relative
operating efficiency. |
|
Title: |
AUTOMATING THE CONFIGURATION OF IT
ASSET MANAGEMENT IN INDUSTRIAL AUTOMATION SYSTEMS |
Author(s): |
Thomas Koch, Esther Gelle and Patrick
Sager |
Abstract: |
The installation and administration
of large heterogeneous IT infrastructures, for enterprises as well as
industrial automation systems, are becoming more and more complex
and time-consuming. Industrial automation systems, such as those
delivered by ABB, present an additional challenge, in that they
control and supervise mission-critical production sites.
Nevertheless, it is common practice to manually install and maintain
industrial networks and the process control software running on
them, which can be both expensive and error prone. In order to
address these challenges, we believe that in the long term such
systems must behave autonomously. As preliminary steps to the
realization of this vision, automated IT asset management tools and
practices will be highlighted in this contribution. We will point
out the advantages of combining process control and network
management in the domain of industrial automation technology.
Furthermore we will introduce a new component model for Autonomic
Computing for network management and will apply this to industrial
automation systems. |
|
Title: |
VERIFICATION AND VALIDATION OF THE
REAL TIME SYSTEM IN THE RADAR SENSOR |
Author(s): |
Naibin Li |
Abstract: |
This paper presents the modeling,
simulation and verification of the embedded real-time system for the
memory interface system based on the tool UPPAAL [1,2,4]. The
real-time system of the memory interface in the radar sensor is an
arbiter that, as the kernel of a non-preemptive, fixed-cycle,
round-robin schedule, controls and schedules four input buffers, five
output buffers and two integrators working synchronously to share the
system resources. We construct an accurate dynamic model as a network
of timed automata, with a rigorous logical and real-timed abstraction
of this real-time system; this hybrid system with discrete and
continuous state changes consists of six process templates and 20
concurrent processes. We simulate and verify the entire system to
detect potential faults in order to guarantee the reliability of the
design of the real-time system. |
|
Title: |
A NEW PUBLIC-KEY ENCRYPTION SCHEME
BASED ON NEURAL NETWORKS AND ITS SECURITY ANALYSIS |
Author(s): |
Niansheng Liu and Donghui Guo |
Abstract: |
A new public-key encryption scheme
based on chaotic attractors of neural networks is described in this
paper. There is a one-way function relationship between the chaotic
attractors and their initial states in an Overstoraged Hopfield
Neural Network (OHNN), and each attractor and its corresponding
domain of attraction are changed by permutation operations on the
neural synaptic matrix. If the neural synaptic matrix is changed by a
commutative random permutation matrix, we propose a new cryptography
technique following the Diffie-Hellman public-key cryptosystem. By
keeping the random permutation operation of the neural synaptic
matrix as the secret key, and the neural synaptic matrix after
permutation as the public key, we introduce a new encryption scheme
for a public-key cryptosystem. Security of the new scheme is discussed. |
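For readers unfamiliar with the classical Diffie-Hellman exchange that the abstract names as its basis, a minimal toy sketch (deliberately tiny parameters, and not the OHNN construction itself) illustrates how two parties derive the same shared secret from public values:

```python
# Toy Diffie-Hellman key exchange: illustrative only, with a tiny
# prime; real deployments use primes of 2048 bits or more.
p = 23   # public prime modulus
g = 5    # public generator

a = 6             # Alice's private exponent
b = 15            # Bob's private exponent
A = pow(g, a, p)  # Alice's public value g^a mod p
B = pow(g, b, p)  # Bob's public value g^b mod p

# Each side raises the other's public value to its own secret;
# both compute g^(a*b) mod p and so agree on the same secret.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob
```

In the scheme described above, the analogous roles are played by the secret permutation operation (private key) and the permuted synaptic matrix (public key).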
|
Title: |
A FORMAL LANGUAGE FOR MODEL
TRANSFORMATION SPECIFICATION |
Author(s): |
Dan Song, Keqing He, Peng Liang and
Wudong Liu |
Abstract: |
Model transformation and its
automation have been the core and major challenge of the MDA;
consequently OMG issued a QVT RFP to standardize its process. Though
many approaches have been proposed, their efficiency cannot be
validated and their application scope is still limited. Meanwhile,
UML, as a well-established standard for modelling, is undergoing a
major update. The task of providing a reliable solution to
model transformation is critical. This paper proposes an
aspect-driven transformation approach combined with a formal language
to implement model transformation. The aspect-driven approach is
convenient for customizing transformation rules, and the formal
language is easy to automate. The foundation of our work is explained and
a concrete transformation example from UML 1.4 to UML 2.0 is
presented using the combined mechanism. |
|
Title: |
FUNCTIONAL AND NON-FUNCTIONAL
APPLICATION SOFTWARE REQUIREMENTS: EARLY CONFLICT DETECTION |
Author(s): |
Paulo Sérgio Muniz Silva and Leonardo
Chwif |
Abstract: |
Usually, standard practices of
application software development are only focused on functional
requirements. However, IS managers know that when they have an
experienced development team, typically systems break not because
they do not meet functional requirements, but because some system
attributes, also known as non-functional requirements, such as
performance, reliability, etc., are not satisfied. One of the root
causes of this failure is that non-functional requirements do not
receive an adequate attention, are not well understood and are not
appropriately modeled. Furthermore, non-functional requirements may
present critical conflicts among them. This paper proposes a
pragmatic method to help the early understanding of the
relationships between the functional and the non-functional
requirements of application software. The method has two main goals:
to help the early traceability analysis between functional and
non-functional requirements, and to analyze the potential conflicts
between them. |
|
Title: |
MEASURING REQUIREMENTS COMPLEXITY TO
INCREASE THE PROBABILITY OF PROJECT SUCCESS |
Author(s): |
Holly Parsons-Hann and Kecheng Liu |
Abstract: |
The widespread adoption of
Information Technology has helped reduce market problems due to
geographical separation and allowed collaboration between
organisations that are physically distributed around the globe.
However, despite the successful strategic benefits brought by the
evolution of the internet and other web-based services, this has not
led to a higher project success rate within companies. The biggest
reason for project failure is cited as ‘incomplete requirements’,
which suggests that research must be done into requirements
analysis to solve this recurring problem. This paper aims to
highlight and analyse the current work done in the software
complexity and requirements engineering fields and demonstrate how
measuring requirements complexity will lead to fewer project
failures. |
|
Title: |
ACKNOWLEDGING THE IMPLICATIONS OF
REQUIREMENTS |
Author(s): |
Ken Boness, Rachel Harrison and
Kecheng Liu |
Abstract: |
The traditional software requirements
specification (SRS) used as the principal instrument for management
and planning and as the foundation for design can play a pivotal
role in the successful outcome of a project. However, this can be
compromised by uncertainty and time-to-market pressures. In this
paper we recognise that the SRS must be kept in a practical and
useful state. We recognise three prerequisites to this end and
introduce a programme of research aimed at developing a Requirements
Profile that changes the emphasis of requirements engineering from
defining the requirements to defining what is known about the
requirements. The former (being a subset of the latter) leaves the
traditional idea of a SRS unaffected whereas the latter adds much to
the avoidance of misunderstanding. |
|
Title: |
EVOLUTIONARY SOFTWARE LIFE CYCLE FOR
SELF-ADAPTING SOFTWARE SYSTEMS |
Author(s): |
Ahmed Ghoneim, Sven Apel and Gunter
Saake |
Abstract: |
Robot software systems perform tasks
continually in the face of environmental changes. These changes in the
environment require adapting the strategies of the set of behaviors
or adding new ones according to the robot's hardware
capabilities. We present an evolutionary life cycle for
self-evolving robot software systems. The life cycle applies within
a reflective architecture that provides the ability to
automatically trap the design information, in the form of UML/XMI
documents, of the base-level systems. The life cycle is composed of
two cooperating cycles: the base-cycle, which includes the running
application and a base-engine for getting the internal representation;
and the meta-cycle, which provides the adaptation engine for the base
application. The main features of the evolutionary life cycle are
highlighted as follows: First, it allows extracting the robot's
design information from UML models. Second, by using MOP capabilities
the extracted data are trapped to constitute the meta-data. Third,
incremental meta-cycles are applied to evolve and validate runtime
changes. Finally, the modified meta-data are reflected to the base
application, leaving it consistent with these changes. The
practicability of the proposed life cycle is illustrated through a
case study. |
|
Title: |
SUSTAINABLE DEVELOPMENT AND
INVESTMENT IN INFORMATION TECHNOLOGIES: A SOCIO-ECONOMIC ANALYSIS |
Author(s): |
Manuel Joăo Pereira, Luís Valadares
Tavares and Raquel Soares |
Abstract: |
The output of investments in
Information Systems and Technologies (IST) has been a topic of
debate among the IST research community. The “Productivity Paradox
of IST Investments” sustains that the investment in IST does not
increase productivity. Some researchers showed that developed
countries have been having a rather stable and sometimes declining
economic growth despite their efforts in Research and Development
(R&D). Other researchers argue that there is sound evidence that
investments in IST are having impacts on the productivity and
competitiveness of countries. This paper analyses the relationship
between IST and R&D investments and the global development of
countries (not only productivity of countries) using economic,
demographic and literacy independent variables that explain global
development. The objective is to research whether R&D and IST
investments are critical to the productivity and to global
development of the countries. Working at a country level, the
research used sixteen socio-economic variables during a period of
five years (1995-1999). The research methodology included causal
forecast, cluster analysis, factor analysis, discriminant analysis
and regression analysis. The conclusion confirms the correlation
between the Gross National Product (GNP) and R&D and IST
investments. The variables illiteracy rate, life expectancy at
birth, Software investment as percentage of GNP and number of
patents per 1000 inhabitants can explain the development of a
country. |
|
Title: |
DESCRIPTION OF WORKFLOW PATTERNS
BASED ON P/T NETS |
Author(s): |
Guofu Zhou, Yanxiang He and Zhuomin
Du |
Abstract: |
Through comparing and analyzing
Aalst's workflow patterns, we model these patterns with P/T systems
without additional elements. Based on these models, the number of
patterns can be reduced significantly. Moreover, synchronic
distance is presented to specify workflow patterns. |
|
Title: |
INTEGRATED PERFORMANCE MANAGEMENT
|
Author(s): |
Faribors Ronaghi |
Abstract: |
Recently the performance of companies
has gained significant importance due to globalization and new
conditions in the markets and the competition arena. To
be successful, the objectives derived from the strategy at different
levels must be controlled, and an approach must be chosen
that integrates the three parts: performance management concept, IT
and organisation. This article depicts the basic
requirements for integrated performance management and shows as a
result a meta model in which all the basic objects and their relations
are considered. |
|
Title: |
COLLABORATIVE ONTOLOGIES AND THEIR
VISUALISATION IN CSCW SYSTEMS |
Author(s): |
Michael Vonrueden and Thorsten Hampel |
Abstract: |
The goal of semantic structures and
especially the semantic web is to simplify knowledge retrieval in
computer-based systems. The Visual Cooperative Ontology Environment
- short visCOntE - aims to support the process of collaborative
creation of ontologies and the mapping of an individual's mental map
into a digital system. Due to the collaborative and graphical
approach, many requirements have to be considered to establish such a
project. Besides an in-depth description of visCOntE and possible usage
scenarios, the questions of which requirements successful
collaborative ontology creation should fulfil and which functions a
system should make available will be examined in detail.
|
|
Title: |
MODEL SHARING IN THE SIMULATION AND
CONTROL OF DISTRIBUTED DISCRETE-EVENT SYSTEMS |
Author(s): |
Fernando Gonzalez |
Abstract: |
Today, sophisticated discrete-event
systems are being designed whose complexity necessitates the
employment of distributed planning and control. While using a
distributed control architecture results in the overall system model
consisting of a collection of independent models, today's
commercially available simulation languages can only accommodate a
single model. As a result, in order to use these simulation
languages one must create a new system model that consists of a
single model but yet models a collection of models. Typically the
communication among the distributed models is ignored, causing
inaccurate results. In this paper we use our simulation concept,
also presented in this paper, to create a simulation tool that
enables the simulation of distributed systems by using a collection
of models rather than a single model. With our concept we create a
methodology that accomplishes this by simulating the communications
among the distributed models. Besides the benefit of not having to
create a new model for simulation, this methodology produces an
increase in accuracy since the communication among the models is
taken into consideration. Furthermore this tool has the capability
to control the system using the same collection of models.
|
|
Title: |
THREAT-DRIVEN ARCHITECTURAL DESIGN OF
SECURE INFORMATION SYSTEMS |
Author(s): |
Dianxiang Xu and Josh Pauli |
Abstract: |
To deal with software security issues
in the early stages of system development, this paper presents a
threat-driven approach to the architectural design and analysis of
secure information systems. We model security threats to systems
with misuse cases and mitigation requirements with mitigation use
cases at the requirements analysis phase, and drive system
architecture design (including the identification of architectural
components and their connections) by use cases, misuse cases, and
mitigation use cases. According to the misuse case-based threat
model, we analyze whether or not a candidate architecture is
resistant to the identified security threats and what constraints
must be imposed on the choices of system implementation. This
provides a smooth transition from requirements specification to
high-level design and greatly improves the traceability of security
concerns in high assurance information systems. We demonstrate our
approach through a case study on a security-intensive payroll
information system. |
|
Title: |
CONCEPTUAL OPTIMISATION IN BUSINESS
PROCESS MANAGEMENT |
Author(s): |
Yves Callejas, Jean Louis Cavarero
and Martine Collard |
Abstract: |
To optimise business processes is a
very complex task. The goal is twofold: to improve productivity and
quality. The method developed in this paper is composed of four
steps: the first one is the modelling step (to describe the business
process in a very rigorous way); then a conceptual optimisation
(supported by evaluation and simulation tools) to improve the
business process structure (to make it more consistent, to normalise
it); then an operational optimisation to improve the business
process performance (to make it more efficient) by providing to each
operation the necessary resources; and at last a global optimisation
(to take into account all the business processes of the company
under study). The conceptual optimisation is, in fact, a static
optimisation (achieved independently of resources) while the
operational optimisation is dynamic. The main difference between
these two steps is the fact that the first one is totally manual
(we want to build, from the set of indicators provided by evaluation
and simulation, the best business process possible), in
opposition to the second, which is totally automatic (since it
requires linear and non-linear programming tools). This method is
the result of three years of research carried out for the French
organisation “Caisses d’Allocations Familiales: CAF”. It was
validated on the business processes of the CAF, which deal with
information (files and documents), but it can also be applied to
industrial business processes (dealing with products and materials). |
|
Title: |
ADAPTIVE BUSINESS OBJECTS - A NEW
COMPONENT MODEL FOR BUSINESS INTEGRATION |
Author(s): |
Prabir Nandi and Santhosh Kumaran |
Abstract: |
We present a new component model for
creating next generation e-Business applications. These applications
have two overriding requirements: (1) Ability to change the
application behavior quickly and easily in line with the
fast-changing business conditions and (2) Seamless integration of
people, process, information, and systems. Our new component model
is built around the concept of Adaptive Business Objects, and
fulfills both the above requirements. This paper describes this
component model and demonstrates its use in real business solutions. |
|
Title: |
A METHODOLOGY FOR ROLE-BASED MODELING
OF OPEN MULTI-AGENT SOFTWARE SYSTEMS |
Author(s): |
Haiping Xu and Xiaoqin Zhang |
Abstract: |
Multi-agent systems (MAS) are rapidly
emerging as a powerful paradigm for modeling and developing
distributed information systems. In an open multi-agent system,
agents can not only join or leave an agent society at will, but also
take or release roles dynamically. Most existing work on MAS uses
role modeling for system analysis; however, role models are only
used at the conceptual level, with no realization in the implemented
system. In this paper, we propose a methodology for role-based
modeling of open multi-agent software systems. We specify role
organization and role space as containers of conceptual roles and
role instances, respectively. Agents in an agent society can take or
release roles from a role space dynamically. The relationships
between agents are deduced through a mechanism called A-R mapping.
As a potential solution for automated MAS development, we summarize
the procedure to generate a role-based design of open MAS. Finally,
we give a case study of organizing a conference to illustrate the
feasibility of our approach. |
|
Title: |
SEMANTIC-BASED SIMILARITY DECISIONS
FOR ONTOLOGIES |
Author(s): |
Anne Yun-An Chen and Dennis McLeod |
Abstract: |
Many data representation structures,
such as web site categories and domain ontologies, have been
established for semantic-based information search and retrieval on
the web. These structures consist of concepts and their
interrelationships. Approaches to determine the similarity in
semantics among concepts in data representation structures have been
developed in order to facilitate information retrieval and
recommendation processes. Some approaches are only suitable for
similarity computations in pure tree structures. Other approaches,
designed for directed acyclic graph structures, yield high
computational complexity for online similarity decisions. Another
approach, the cosine-similarity measure, requires manual edits to
the data similarity matrix. In order to provide efficient similarity
computations for data representation structures, we propose a
geometry-based solution. Structures are first automatically adapted
into a 3-dimensional geometric space, and similarity computations
are based on geometric properties. The similarity model is built on
this geometry-based solution, and the online similarity computation
is performed in constant time. An application of the proposed
similarity model to an earthquake ontology is presented. |
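The abstract gives no concrete embedding or distance function; purely as an illustration, a constant-time similarity over concepts pre-embedded as 3-D points (the coordinates and the distance-to-similarity mapping below are our assumptions, not the authors' model) could look like:

```python
import math

# Hypothetical 3-D coordinates for ontology concepts; in the paper the
# structure itself is adapted into the 3-D space by a procedure not
# reproduced here.
COORDS = {
    "earthquake": (0.0, 0.0, 0.0),
    "aftershock": (0.2, 0.1, 0.0),
    "tsunami": (0.9, 0.4, 0.3),
}

def similarity(a: str, b: str) -> float:
    """O(1) online similarity: inverse of the Euclidean distance between
    pre-computed 3-D embeddings, scaled into (0, 1]."""
    d = math.dist(COORDS[a], COORDS[b])
    return 1.0 / (1.0 + d)

print(round(similarity("earthquake", "aftershock"), 3))  # close concepts
print(round(similarity("earthquake", "tsunami"), 3))     # more distant pair
```

Because the coordinates are precomputed, each online similarity query touches only two points, which is what makes the constant-time claim plausible.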
|
Title: |
MODELING STRATEGIC ACTOR
RELATIONSHIPS TO SUPPORT RISK ANALYSIS AND CONTROL IN SOFTWARE
PROJECTS |
Author(s): |
Subhas C. Misra, Vinod Kumar and Uma
Kumar |
Abstract: |
In this paper, we present an approach
project managers could use to model and control risks in software
projects. There are no similar approaches to modeling software
project risks in the existing literature. The approach is, thus,
novel to the area of software risk management. It helps project
managers perform means-end analysis, thereby uncovering the
structural origin of risks in a project, and how the root causes of
such risks can be controlled from the early stages of the project.
We have illustrated this approach with a simple example typical of
software development projects. Though some attempt has been made to
model risk management in enterprise information systems using
conventional modeling techniques, such as data flow diagrams and
UML, previous works have analyzed and modeled it only by addressing
“what” a process is like; they do not address “why” the process is
the way it is. The
approach addresses this limitation of the existing software project
risk management models by exploring the strategic dependencies
between the actors of a project, and analyzing the motivations,
intents, and rationales behind the different entities and activities
in a project. However, the intention of our work is not to provide a
new risk management framework. Our work is restricted to providing a
methodology that one can use in the existing risk management
lifecycle models to analyze and uncover the structural origin of the
risks, and control the risks from the early phases of a project. |
|
Title: |
A STRATEGIC MODELING TECHNIQUE FOR
CHANGE MANAGEMENT IN ORGANIZATIONS UNDERGOING BPR |
Author(s): |
Subhas C. Misra, Vinod Kumar and Uma
Kumar |
Abstract: |
Because of the competitive economy,
organizations today seek to rationalize, innovate, and adapt to
changing environments and circumstances as part of Business Process
Reengineering (BPR) efforts. Irrespective of the process
reengineering program selected and the technique used to model it,
BPR brings with it issues of organizational and process change,
which involve managing organizational change (also called
“change management”). Change management is non-trivial, as
organizational changes are difficult to accomplish. Though some
attempt has been made to model change management in enterprise
information systems using conventional conceptual modeling
techniques, they have addressed only “what” a change process is
like, not “why” the process is the way it is. Our approach is novel
in the sense that it presents an actor-dependency-based, 5-phased
technique for analysing and modeling early-phase requirements of
organizational change management that provides the motivations,
intents, and rationales behind the entities and activities. We have
considered a case study to illustrate this approach. Finally, we
provide concluding remarks describing the importance and the
limitations of this approach. |
|
Title: |
A MODEL FOR POLICY BASED SERVICE
COMMUNITY |
Author(s): |
Hironobu Kuruma and Shinichi Honiden |
Abstract: |
Since the World Wide Web is an open
system, it is difficult to maintain the information about services
on the Web in a centralized server. Therefore, the service mediation
system could be constructed as a federation of service communities,
in which each community provides and mediates a limited number of
services according to its own policy. The federation should preserve
the policy of each community. Furthermore, (1) scalability, (2)
verifiability of policy compliance, and (3) flexibility to the
change of federation relation should be considered in implementing
the federation. In this paper, we introduce a notion of community
policy based on access control among players, and show a community
model that is aimed at specifying communications between players
compliant with the policy. The community model provides a functional
specification of the service mediation system. Since a
meta-architecture-based language is used to describe the community
model, communications for the cooperation of communities can be
represented separately from the communications for service request
and provision. As a result, our community model (1) represents
communications between players in a modular way, (2) provides a
basis for verification of policy compliance, and (3) encapsulates
the dependencies on partner communities. |
|
Title: |
A COST-ORIENTED TOOL TO SUPPORT
SERVER CONSOLIDATION |
Author(s): |
Danilo Ardagna, Chiara Francalanci,
Gianfranco Bazzigaluppi, Mauro Gatti, Francesco Silveri and Marco
Trubian |
Abstract: |
Nowadays, companies perceive the IT
infrastructure as a commodity that does not deliver any competitive
advantage and, usually, as the first candidate for budget squeezing
and cost reductions. Server consolidation is a broad term which
encompasses all the projects put in place in order to rationalize
the IT infrastructure and reduce operating costs. This paper
presents a design methodology and a software tool to support server
consolidation projects. The aim is to identify a minimum-cost
solution which satisfies user requirements. The tool has been tested
on four real test cases, taken from different geographical areas and
encompassing multiple application types. Preliminary results from
the empirical verification indicate that the tool identifies a
realistic solution to be refined by technology experts, which
reduces consolidation projects' costs, time, and effort. |
|
Title: |
ENTERPRISE INFRASTRUCTURE PLANNING -
MODELLING AND SIMULATION USING THE PROBLEM ARTICULATION METHOD |
Author(s): |
Simon Tan and Kecheng Liu |
Abstract: |
Current systems development costs
rise almost exponentially as development time increases,
underscoring the importance of effective enterprise planning and
project management. Enterprise infrastructure planning offers an
avenue to effectively improve and shorten design and development
time, and to develop a system of high quality with significantly
lower operating and development costs. The Problem Articulation
Method (PAM) is a method for articulating business and technical
requirements in an organisation. It is capable of assimilating
internal systems changes in response to the dynamics and
uncertainties of the business environment. The requirements and
specifications from this analysis constitute a baseline for
managing changes, and provide the mechanism by which the reality of
the enterprise and its systems can be aligned with planned
enterprise objectives. An illustration of planning the development
of a procurement system will be used to demonstrate the enterprise
infrastructure requirements with a discrete-event enterprise
simulation package “Enterprise Dynamic”. This paper will examine the
capability of PAM in the articulation and simulation of complex
enterprise requirements. |
|
Title: |
METRIC SUITE DIRECTING THE FAILURE
MODE ANALYSIS OF EMBEDDED SOFTWARE SYSTEMS |
Author(s): |
Guido Menkhaus and Brigitte Andrich |
Abstract: |
Studies have found that reworking
defective requirements, design, and code typically consumes up to 50
percent of the total cost of software development. A defect has a
high impact when it has been inserted in the design and is only
detected in a later phase of a project. This increases project cost
and time, and may even jeopardize the success of a project. More time
needs to be spent on analysis of the design of the project. When
analysis techniques are applied on the design of a software system,
the primary objective is to anticipate potential scenarios of
failure in the system. The detection of defects that may cause
failures and the correction is more cost effective in the early
phases of the software lifecycle, whereas testing starts late and
defects found during testing may require massive rework. In this
article, we present a metric suite that guides the analysis during
the risk assessment of failure modes. The computation of the metric
suite is based on Simulink models. We provide tool support for this
activity. |
|
Title: |
TYPE AND SCOPE OF TRUST RELATIONSHIPS
IN COLLABORATIVE INTERACTIONS IN DISTRIBUTED ENVIRONMENTS |
Author(s): |
Weiliang Zhao, Vijay Varadharajan and
George Bryan |
Abstract: |
In this paper, we consider the
modelling of trust relationships in distributed systems based on a
formal mathematical structure. We discuss different forms of trust.
In particular, we address the base level authentication trust at the
lower layer with a hierarchy of trust relationships at a higher
level. Then we define and discuss trust direction and symmetric
characteristics of trust for collaborative interactions in
distributed environments. We define the trust scope label in order
to describe the scope and diversity of trust relationships under our
taxonomy framework. We illustrate the proposed definitions and
properties of the trust relationships using example scenarios. The
discussed trust types and properties will form part of an overall
trust taxonomy framework, and they can be used in the overall
methodology for the life cycle of trust relationships in distributed
information systems that is currently under development. |
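As a hedged illustration of the kind of structure discussed (our own encoding, not the paper's formal mathematical structure), a directed trust relationship with a scope label, plus a symmetry check, might be sketched as:

```python
from dataclasses import dataclass

# Illustrative encoding (ours, not the authors') of a directed trust
# relationship carrying a trust scope label.
@dataclass(frozen=True)
class TrustRelationship:
    truster: str
    trustee: str
    scope: frozenset  # the trust scope label, e.g. the actions covered

def is_symmetric(r1: TrustRelationship, r2: TrustRelationship) -> bool:
    """Two relationships form a symmetric (two-way) trust if each party
    trusts the other over the same scope."""
    return (r1.truster == r2.trustee and r1.trustee == r2.truster
            and r1.scope == r2.scope)

a_to_b = TrustRelationship("alice", "bob", frozenset({"payment"}))
b_to_a = TrustRelationship("bob", "alice", frozenset({"payment"}))
print(is_symmetric(a_to_b, b_to_a))  # True
```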
|
Title: |
TOWARDS AN APPROACH FOR
ASPECT-ORIENTED SOFTWARE REENGINEERING |
Author(s): |
Vinicius Garcia, Daniel Lucrédio,
Antonio Francisco do Prado, Eduardo Santana de Almeida, Alexandre
Alvaro and Silvio Romero de Lemos Meira |
Abstract: |
This paper presents a reengineering
approach to help in migrating pure object-oriented code to a
mixture of objects and aspects. The approach focuses on aspect
mining, to identify potential crosscutting concerns to be modeled
and implemented as aspects, and on refactoring techniques, to
reorganize the code according to the aspect-oriented paradigm. By
using code transformations, it is possible to recover the
aspect-oriented design with a transformational system. With the
recovered design it is possible to add or modify the system
requirements in a CASE tool, and to generate the code in an
executable language, in this case AspectJ. |
|
Title: |
A NON PROPRIETARY FRAMEWORK FOR
POLICY CONTROLLED MANAGEMENT OF THE MODEL IN THE MVC DESIGN PARADIGM |
Author(s): |
Aaron Jackson and John G. Keating |
Abstract: |
There are a variety of systems
available to help automate and control the Web Content Management
(WCM) process. Most of these systems are modelled using the
Model-View-Controller (MVC) design paradigm. This is a design
technique frequently adopted by software developers to assist in
modularity, flexibility, and re-use of object oriented web
developments. This design paradigm involves separating the objects
in a particular interaction into 3 categories for the purpose of
providing a natural set of encapsulating boundaries, encouraging
many-to-many relationships along the separate component boundaries,
and segregating presentation and content. These MVC based systems
control what is known as static content. In this paper we propose a
new framework for controlling the software tools used in MVC-based
systems; more precisely, the automatic deployment of model software
tools based on XML-defined policies. This framework incorporates a
non-proprietary, component-based architecture and well-structured
representations of policies. The policies are not embedded in the
system but are generated; therefore each component is self-contained
and can be independently maintained. Our framework will work in a
centralized or distributed environment, and we believe that its use
makes it easier to deploy MVC-based systems. |
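As a rough sketch of how XML-defined policies might drive automatic deployment of model-layer tools (the policy schema, element names, and tool names below are hypothetical, not taken from the paper):

```python
import xml.etree.ElementTree as ET

# Hypothetical policy document; the framework's actual XML policy
# schema is not reproduced here, so all names are illustrative.
POLICY_XML = """
<policies>
  <policy tool="image-resizer" action="deploy" trigger="content-upload"/>
  <policy tool="link-checker" action="skip" trigger="content-upload"/>
</policies>
"""

def tools_to_deploy(xml_text: str, trigger: str) -> list:
    """Return the model-layer tools whose policy says 'deploy' for a trigger."""
    root = ET.fromstring(xml_text)
    return [
        p.get("tool")
        for p in root.iter("policy")
        if p.get("trigger") == trigger and p.get("action") == "deploy"
    ]

print(tools_to_deploy(POLICY_XML, "content-upload"))  # ['image-resizer']
```

Because the policies live in a separate, generated document rather than in code, swapping a tool in or out is a data change, which matches the paper's claim that each component stays independently maintainable.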
|
Title: |
TOWARDS A SELF-FORMING BUSINESS
NETWORKING ENVIRONMENT |
Author(s): |
Claudia-Melania Chituc and Americo
Lopes Azevedo |
Abstract: |
The rapid evolution of the markets
and changing client demands have led enterprises to adapt their
business from traditional business practices to e-business, and new
forms of collaboration (such as supply chain enterprises, extended
enterprises, or virtual enterprises) were created. In this
context, emerging technologies (such as Peer-to-Peer, Web services,
Intelligent agents, Workflow) become core technologies supporting
enterprise integration. They address business integration needs,
streamlining transactions while supporting process coordination and
consistency. The aim of this paper is to analyse business
integration concepts and solutions, and to propose a new
inter-operability paradigm: Plug-and-Do-Business that represents the
basis of a conceptual framework for a self-forming business
networking environment. The paper is organized in four sections.
After a brief introduction to the topic, issues related to
enterprise integration are presented, such as enterprise integration
needs, reference models, technologies and architectures. Two
comparisons of business-to-business (B2B) standards are then
presented. The third section presents the emergence of the novel
Plug-and-Do-Business paradigm that models the natural integration of
an enterprise in a networked environment. The methodology developed
for the research project is then described. The fourth and last
section contains the conclusions of the paper. |
|
Title: |
AN MDA-EDOC BASED DEVELOPMENT PROCESS
FOR DISTRIBUTED APPLICATIONS |
Author(s): |
Rita Suzana Pitangueira Maciel, Bruno
Carreiro da Silva, Carlos André Guimarăes Ferraz and Nelson Souto
Rosa |
Abstract: |
With the proposal of MDA by the OMG,
the modelling of systems has become a central point in the
development process of distributed applications, since software
models go beyond system documentation. EDOC, the MDA profile for
modelling distributed applications, uses RM-ODP as its conceptual
framework. These elements, although very useful, are insufficient
for a software development process, since they are not accompanied
by development methodologies. This article presents an MDA-based
development process for distributed applications that utilizes EDOC
and the RM-ODP. The process is described as a sequence of steps and
a set of diagrams that should be specified to provide an MDA-based
system description. |
|
Title: |
BRINGING SOCIAL CONSTRUCTS TO THE
INFORMATION SYSTEM DEVELOPMENT PROCESS: CONTRIBUTIONS OF
ORGANIZATIONAL SEMIOTICS |
Author(s): |
Carlos Alberto Cocozza Simoni, M.
Cecília C. Baranaukas and Rodrigo Bonacin |
Abstract: |
Literature has shown the influence of
the social, cultural and organizational aspects involved in the
process of developing information systems. The Unified Process (UP)
has been widely used in the software industry, but literature has
shown its drawbacks when applied to the modelling of human actions
in the social and organizational contexts. Our research investigates
the use of Organizational Semiotics (OS) methods combined with the
UP to compose a complete cycle of system development, aiming at
bringing social constructs to the development process of information
systems. |
|
Title: |
TRANSFORMING SA/RT GRAPHICAL
SPECIFICATIONS INTO CSP+T FORMALISM - OBTAINING A FORMAL
SPECIFICATION FROM SEMI-FORMAL SA/RT ESSENTIAL MODELS |
Author(s): |
Manuel I. Capel and Juan A. Holgado |
Abstract: |
A correct system specification is
systematically obtained from the essential user requirements model
by applying a set of rules, which give a formal semantics to the
graphical analysis entities of SA/RT. The aim of the systematic
procedure is to set the methodological infrastructure necessary for
deriving a complete system specification of a given real-time system
in terms of CSP+T processes. A detailed, complete solution to the
Production Cell problem has been discussed so as to show how the
method can be applied to solve a real-world industrial problem. |
|
Title: |
DECOUPLING MVC: J2EE DESIGN PATTERNS
INTEGRATION |
Author(s): |
Francisco Maciá-Pérez, Virgilio
Gilart-Iglesias, Diego Marcos-Jorquera, Juan Manuel García-Chamizo
and Antonio Hernández-Sáez |
Abstract: |
Nowadays the Internet has become a
suitable environment for new business models, by means of which
companies can reach the new open market worldwide. However,
adapting traditional application architectures is not enough to
take advantage of this environment in an effective way. For this
reason, it is necessary to develop new approaches so as to reach
the environment’s full potential, as in the case of the model of
distributed software components on n-tier architectures. Due
to its complexity, this model requires technological platforms, like
J2EE, in order to support the development of such applications. In
spite of the power that the J2EE platform provides, some
organizations refuse to develop applications under this platform
because it requires a deep knowledge of the J2EE technology and its
design patterns. In this article we propose a model based on the
Model-View-Controller paradigm and built over the integration of
open source frameworks (StrutsEJB-Cocoon-Struts) which are used by a
wide community but have not been managed as a global solution. This
model and its underlying integrated framework offer a powerful
environment that reduces the complexity associated with the
development of J2EE applications. |
|
Title: |
THE SEMIOTIC LEARNING FRAMEWORK – HOW
TO FACILITATE ORGANISATIONAL LEARNING |
Author(s): |
Angela Nobre |
Abstract: |
The complexity of current
organisational contexts implies the need for innovative theorisation
of learning at organisational level. Organisational learning
represents a critical aspect of each organisation’s capacity to
innovate, and to nurture and maintain its inner dynamism. The
Semiotic Learning Framework is presented as a theoretical approach
to organisational learning and as a working methodology to be
applied within organisational contexts. It derives its rationale
from social semiotics and from social philosophy and it focuses on
critical organisational key issues. This framework is to be applied
as an organisational learning initiative at organisational level, as
the content of a post-graduate programme, and as a methodology for
interdisciplinary teamwork. |
|
Title: |
EVALUATION AND COMPARISON OF ADL
BASED APPROACHES FOR THE DESCRIPTION OF DYNAMIC OF SOFTWARE
ARCHITECTURES |
Author(s): |
Mohamed Hadj Kacem, Mohamed Jmaiel,
Ahmed Hadj Kacem and Khalil Drira |
Abstract: |
This paper presents an evaluation
study of Architecture Description Languages (ADLs) which compares
the expressive power of these languages for specifying the
dynamicity of software architectures. Our investigation enabled us
to identify two categories of ADLs: configuration languages and
description languages. Here, we address both categories, and we
focus on two aspects: the behaviour of software components and
the evolution of the architecture during execution. In addition, we
explain how each ADL handles these aspects and demonstrate that they
are generally not addressed, or not sufficiently addressed, by most
of the ADLs. This
motivates future extensions to be undertaken in this domain.
Throughout this paper, we illustrate the comparison of these two
aspects by describing an example of a distributed application for
collaborative authoring support. |
|
Title: |
SEMANTIC WEB SUPPORT FOR BUSINESS
PROCESSES |
Author(s): |
Airi Salminen and Maiju Virtanen |
Abstract: |
Development of semantic web
technologies has been initiated to improve the utilization of web
resources particularly by software applications. Limitations in the
capabilities of applications to process data accessible on the web
as well as limitations in the interconnectivity of software
applications cause a large amount of extra human work in business
processes. The semantic web is intended to extend the current web
with metadata adding meaning to web resources. In an
interorganizational business
process context, semantic web could be an extension of the current
intranet, extranet, and internet resources better enabling computers
and people in business processes to work in cooperation. In the
paper we will explore the possibilities of the semantic web
technologies to support business processes. Particularly we will
evaluate the possibilities and problems related to the utilization
of RDF (Resource Description Framework), which enables the formal
representation of metadata and metadata schemas. The possibilities of
RDF metadata are discussed in describing various types of metadata,
such as contextual and contentual metadata of a process. The
challenges in RDF schema design are analyzed in defining the most
important concepts for a schema. We will use the Finnish legislative
process as a case to demonstrate the issues discussed. It is an
example of a complex interorganizational process involving many
organizations. In the end we will draw implications of our
analysis to the development of RDF schemas and other semantic web
solutions for business processes. |
|
Title: |
PROCESS ORIENTED DISCOVERY OF
BUSINESS PARTNERS |
Author(s): |
Axel Martens |
Abstract: |
Emerging technologies and industrial
standards in the field of Web services enable a much faster and
easier cooperation of distributed partners. With the increasing
number of enterprises that offer specific functionality in terms of
Web services, discovery of matching partners becomes a serious
issue. At the moment, discovery of Web services generally is based
on meta-information (e.g. name, business category) and some
technical aspects (e.g. interface, protocols). But this selection
might be too coarse-grained for dynamic application integration, and
there is much more information available which can be used to
increase precision. This paper describes an approach to discover
business partners based on the comparison of their published Web
service process models. |
|
Title: |
SYSTEM ENGINEERING PROCESSES
ACTIVITIES FOR AGENT SYSTEM DESIGN: COMPONENT BASED DEVELOPMENT FOR
RAPID PROTOTYPING |
Author(s): |
Jaesuk Ahn, Dung Lam, Thomas Graser
and K. Suzanne Barber |
Abstract: |
Agent Technology is becoming a new
means of designing and building complex, distributed software
systems. Agent technology is now being applied to the development of
large open software systems; such development requires methodologies
to construct software systems that select and assemble highly
flexible agent technology components written at different times by
various developers. However, the lack of mature agent software
development methodologies, the diversity of agent technologies, and
the lack of a common framework for describing these technologies
challenges designers attempting to evaluate, compare, select, and
potentially reuse agent technology. This paper proposes (1)
categorization and comparison of agent technologies under a common
ontology, (2) a repository of agent technologies which will assist
the agent designer in browsing and evaluating agent technologies in
the context of a given high level reference architecture and
associated requirements, (3) an architecting process to rapidly
prototype by selecting agent technology components that fulfill the
designer’s requirements, and (4) toolkit support to build a
technology repository and agent systems. |
|
Title: |
TOWARDS A GLOBAL SOFTWARE DEVELOPMENT
MATURITY MODEL |
Author(s): |
Leonardo Pilatti and Jorge Audy |
Abstract: |
Building software has always been a
challenge. Shaping and implementing a viable computational solution
involves many technical and social questions (referring to the
interaction between stakeholders). This complexity increases
significantly when dispersed global teams are used. The need for a
set of processes to better organize the development strategy appears
as one of the main challenges to be explored. The objective of this
article is to present a proposed structure for a maturity model for
global software development. The study is based on an extensive
theoretical review of the structures of the main maturity and
governance models of information technology. The empirical base of
this study will involve a multinational software development
organization with branch offices in Brazil, Russia, and India. |
|
Title: |
SOFTWARE PROJECT DRIVEN ANALYSIS AND
DEVELOPMENT OF PROCESS ACTIVITIES SUPPORTING WEB BASED SOFTWARE
ENGINEERING TOOLS |
Author(s): |
Shriram Sankaran and Joseph E. Urban |
Abstract: |
The field of software engineering has
seen the development of software engineering tools that allow for
distributed development of software systems over the web. This paper
covers the development of a web based software design tool that
served as the basis for software requirements formulation of a
software process tracking tool. These software tools are an
outgrowth of a software engineering capstone project. The discussion
focuses on those development activities that assisted the front end
of the development through needs determination and software
requirements formulation. This paper describes the background for
the software engineering projects, software tool development
processes, and the developed software tools. |
|
Title: |
A METHODOLOGY OF FORECASTING DEMANDS
OF THE COMMUNICATION TRAFFIC |
Author(s): |
Masayuki Higuma and Masao J.
Matsumoto |
Abstract: |
Communication traffic demand is
strongly related to gross domestic product (GDP). However, the
linear regression model (LM) cannot be applied to analyze traffic
demand, because the relationship is non-linear. The autoregression
model (AR), in turn, cannot reflect trends in social and economic
issues and produces large forecasting errors when traffic demand has
a trend component. Therefore, this paper proposes a new forecasting
methodology for traffic demands which achieves high quality by
resolving the above problems through the modeling and indexing of
social and economic issues. |
|
Title: |
QUALITY OF SERVICE IN FLEXIBLE
WORKFLOWS THROUGH PROCESS CONSTRAINTS |
Author(s): |
Shazia Sadiq, Maria Orlowska, Joe Lin
and Wasim Sadiq |
Abstract: |
Workflow technology has delivered
effectively for a large class of business processes, providing the
requisite control and monitoring functions. At the same time, this
technology has been the target of much criticism due to its limited
ability to cope with dynamically changing business conditions which
require business processes to be adapted frequently, and/or its
limited ability to model business processes which cannot be entirely
predefined. Requirements indicate the need for generic solutions
where a balance between process control and flexibility may be
achieved. In this paper we present a framework that allows the
workflow to execute on the basis of a partially specified model
where the full specification of the model is made at runtime, and
may be unique to each instance. This framework is based on the
notion of process constraints. Whereas process constraints may be
specified for any aspect of the workflow (structural, temporal,
etc.), our focus in this paper is on a constraint which allows
dynamic selection of activities for inclusion in a given instance.
We call these cardinality constraints, and this paper will discuss
their specification and validation requirements. |
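A minimal sketch of validating one such cardinality constraint (our own formulation; the paper's actual specification language is not reproduced here):

```python
# A cardinality constraint, as sketched here, bounds how many activities
# from a candidate set may be dynamically selected into a workflow instance.

def satisfies_cardinality(selected: set, candidates: set,
                          minimum: int, maximum: int) -> bool:
    """Check that the instance selects only known candidate activities
    and that the number selected lies within [minimum, maximum]."""
    if not selected <= candidates:
        return False
    return minimum <= len(selected) <= maximum

# Hypothetical activity names for a loan-approval instance.
CANDIDATES = {"credit-check", "manual-review", "fraud-scan", "auto-approve"}

print(satisfies_cardinality({"credit-check", "fraud-scan"}, CANDIDATES, 1, 2))  # True
print(satisfies_cardinality(set(), CANDIDATES, 1, 2))                           # False
```

Validation of this kind can run at instantiation time, which fits the paper's setting where the full model is only completed at runtime and may be unique per instance.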
|
Title: |
CARTOGRAPHIES OF ONTOLOGY CONCEPTS |
Author(s): |
Hatem Ben Sta, Lamjed Ben Said,
Khaled Ghédira, Michel Bigand and Jean Pierre Bourey |
Abstract: |
We are interested in studying the
state of the art of ontologies and in synthesizing it. This paper
synthesizes definitions, languages, ontology classifications,
ontological engineering, ontological platforms, and application
fields of ontologies. The objective of this study is to cover and
synthesize these ontological concepts through the proposition of a
set of cartographies relative to them. |
|
Title: |
REVEALING THE REAL BUSINESS FLOWS
FROM ENTERPRISE SYSTEMS TRANSACTIONS |
Author(s): |
Jon Espen Ingvaldsen, Jon Atle Gulla,
Ole Andreas Helge and Atle Prange |
Abstract: |
Understanding the dynamic behavior of
business flows is crucial for being able to modify, maintain and
improve an organization. In this paper we present an approach and a
tool for business flow analysis that help us reveal the real
business flows and gain an exact understanding of the current
situation.
Analyzing the logs of large enterprise systems, the tool
reconstructs models of how people work and detects important
performance indicators. The tool is used as part of change projects
and replaces much of the traditional manual work that is involved. |
|
Title: |
ICT BASED ASSET MANAGEMENT FRAMEWORK |
Author(s): |
Abrar Haider and Andy Koronios |
Abstract: |
The manufacturing and production
environment is subject to radical change. The impetus for this
change has been fuelled by intensely competitive liberalised
markets, with technological advances promising enhanced services and
improved asset infrastructure and plant performance. This emergent
re-organisation has a direct influence on economic incentives
associated with the design and management of asset equipment and
infrastructures, for continuous availability of these assets is
crucial to profitability and efficiency of the business. As a
consequence, engineering enterprises are faced with new challenges
of safeguarding the technical integrity of these assets, and the
coordination of support mechanisms required to keep these assets in
running condition. At present, there is insufficient understanding
of optimised technology exploitation for realisation of these
processes; and theory and model development is required to gain
understanding that is a prerequisite to influencing and controlling
asset operation to the best advantage of the business. This paper
aims to make a fundamental contribution to the development and
application of ICTs for asset management, by investigating the
interrelations between changing asset design, production demand and
supply management, maintenance demands, asset operation and process
control structures, technological innovations, and the support
processes governing asset operation in manufacturing, production and
service industries. It takes a lifecycle perspective of asset
management by addressing economic and performance tradeoffs,
decision support, information flows, and process re-engineering
needs of superior asset design, operation, maintenance,
decommissioning, and renewal. |
|
Title: |
A LOOSELY COUPLED ARCHITECTURE FOR
DIGITAL LIBRARIES: THE PHRONESIS CASE |
Author(s): |
Juan C. Lavariega, Andan Salinas,
David Garza, Lorena Gomez and Martha Sordia |
Abstract: |
Digital Libraries (DL) provide
services for submission, indexing, classification, storage,
searching, retrieval, and administration of digital documents. There
are several DL projects and products; some of them focus on the
administration of domain-specific collections, and others require
collections to be physically located within the borders of the site
where the DL software resides. Phronesis is a tool for the creation
and administration of DLs which can be geographically distributed and
which are accessible over the WWW. The Phronesis development team's
intention was to make the project accessible to other developers,
who can improve its functionality. However, one of the major
drawbacks was Phronesis’ data-centric architecture and its highly
coupled subsystems, which made it hard to maintain and to add new
functionality. This paper addresses the problems with the old
data-centric Phronesis architecture. Throughout the paper we discuss
the functionality provided by the subsystems, and present a loosely
coupled architecture for digital libraries. The approach presented
here follows the style of service-oriented architectures (SOA). The
SOA for Phronesis is a framework that provides services for the
submission, indexing and compression of documents. Phronesis SOA is
organized into layers of functionality that favor maintenance,
reuse, and testing of the entire project; increasing performance and
availability. |
|
Title: |
PROCESS REFERENCE MODEL FOR DATA
WAREHOUSE DEVELOPMENT - A CONSENSUS-ORIENTED APPROACH |
Author(s): |
Ralf Knackstedt, Karsten Klose, Björn
Niehaves and Jörg Becker |
Abstract: |
IS literature provides a variety of
Data Warehouse development methodologies focusing on technical
issues, for instance the automatic generation of Data Warehouse or
OLAP schemata from conceptual graphical models, or the
materialization of views. On the other hand, we can observe a
growing influence of conceptual modelling in general IS
development, which specifically addresses early-phase design
issues. Here, conceptual modelling solves communication problems
which emerge when, for instance, IT personnel and business personnel
work together, mostly having distinct educational and professional
backgrounds as well as using distinct domain languages. Thus, the
aim of this paper is to provide the foundation of a Data Warehouse
development methodology in the form of a process reference model which
is based on a conceptual modelling approach. After analyzing
theoretical-epistemological issues fundamental to conceptual
modelling, we instantiate and operationalize them, focusing on
the consensus-oriented approach. This understanding provides the
basis for the consensus-oriented Data Warehouse development
methodology. |
|
Title: |
PROCESS MODELLING FOR SERVICE
PROCESSES - MODELLING METHODS EXTENSIONS FOR SPECIFYING AND
ANALYSING CUSTOMER INTEGRATION |
Author(s): |
Karsten Klose, Ralf Knackstedt and
Jörg Becker |
Abstract: |
Service Provider business processes
require extensive customer participation. Due to the customer’s
substantial impact on the successful implementation of performance
processes, measures of customer interaction must be planned
meticulously. At present, there are numerous modelling techniques
for a model-based structuring of these processes. However, these
techniques provide only general operations for model modification,
such as the ability to delete and add elements. As a result, process
designers are not sufficiently supported with domain-specific business
design options. This paper demonstrates possible extensions for
process modelling techniques which are intended to assist service
providers in analysing their processes with particular regard to
customer integration and contract formulation. In the presented
business case, the application of the method allowed for some rapid,
useful and promising recommendations regarding the improvement of
customer processes of an IT service provider. |
|
Title: |
XML-BASED IMPACT ANALYSIS USING
CHANGE-DETECTION APPROACH FOR SYSTEM INTERFACE CONTROL |
Author(s): |
Namho Yoo |
Abstract: |
In this paper, an XML-based approach
is presented for the impact analysis of interface control in
sustained systems. Once a system is completely developed, it goes
into a sustained phase supported by many interfaces. As new
technologies develop, updating and maintaining such systems require
non-trivial efforts. A clear pre-requisite before the deployment of
a new system is to clarify the influence of changes on other systems
connected through interfaces. However, as each sustained system
manages its own information separately, integrating relevant
information among the interfaced systems is a major hurdle. In our
approach, XML technology is applied to support impact analysis
for the interface control architecture using a change-detection
approach. In particular, we focus on messaging interface
issues in Health Level Seven, typically used in medical information
systems, and propose a scheme to represent message information that
can be used for decision support on interface impact between
sustained systems. |
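The change-detection idea — comparing old and new interface message definitions to scope the impact of a change — can be sketched in a few lines. The message and field names below are hypothetical illustrations, not taken from the paper:

```python
import xml.etree.ElementTree as ET

# Two versions of a hypothetical HL7-style message definition.
OLD = """<message name="ADT_A01">
  <field name="patient_id" type="CX"/>
  <field name="admit_time" type="TS"/>
</message>"""

NEW = """<message name="ADT_A01">
  <field name="patient_id" type="CX"/>
  <field name="admit_time" type="DTM"/>
  <field name="ward" type="ST"/>
</message>"""

def diff_fields(old_xml, new_xml):
    """Compare two XML message definitions field-by-field and report
    additions, removals and type changes -- the raw material for an
    interface impact report."""
    old = {f.get("name"): f.get("type") for f in ET.fromstring(old_xml).iter("field")}
    new = {f.get("name"): f.get("type") for f in ET.fromstring(new_xml).iter("field")}
    return {
        "added": sorted(set(new) - set(old)),
        "removed": sorted(set(old) - set(new)),
        "changed": sorted(n for n in set(old) & set(new) if old[n] != new[n]),
    }

print(diff_fields(OLD, NEW))
# → {'added': ['ward'], 'removed': [], 'changed': ['admit_time']}
```

In a real deployment each interfaced system would export its message definitions in a shared XML schema, and the diff would drive the impact-analysis decision support.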
|
Title: |
XML VIEWS, PART III: AN UML BASED
DESIGN METHODOLOGY FOR XML VIEWS |
Author(s): |
Rajugan R., Tharam S. Dillon,
Elizabeth Chang and Ling Feng |
Abstract: |
Object-Oriented (OO) conceptual
models have the power to describe and model real-world data
semantics and their inter-relationships in a form that is precise
and comprehensible to users. Today UML has established itself as the
language of choice for modelling complex enterprise information
systems (EIS) using OO techniques. Conversely, the eXtensible Markup
Language (XML) is fast emerging as the dominant standard for
storing, describing and interchanging data among various enterprise
systems and databases. With the introduction of XML Schema, which
provides rich facilities for constraining and defining XML content,
XML provides the ideal platform and the flexibility for capturing
and representing complex enterprise data formats. Yet, UML provides
insufficient modelling constructs for utilising XML schema based
data description and constraints, while XML Schema lacks the ability
to provide higher levels of abstraction (such as conceptual models)
that are easily understood by humans. Therefore, to enable efficient
business application development of large-scale enterprise systems,
we need UML-like models with rich XML Schema-like semantics. To
address this issue, we proposed a semantics-aware XML view mechanism
[Raju03] to conceptually model and design an XML Schema based view
mechanism to support data modelling of complex domains such as data
warehousing. In our later work, we proposed a semantic net based
design methodology [Raju04] for designing XML views. In this paper,
we propose a UML stereotype based approach to design and transform
XML views. |
|
Title: |
MODEL DRIVEN DEVELOPMENT OF BUSINESS
PROCESS MONITORING AND CONTROL SYSTEMS |
Author(s): |
Tao Yu and Jun-Jang Jeng |
Abstract: |
This paper describes a model-driven
approach to monitoring and controlling the behavior of business
processes. The business-level monitoring and control requirements
are first described by a series of policies that can be combined
together to construct a Directed Acyclic Graph (DAG), which can be
regarded as the Platform Independent Model (PIM) for the high level
business solution. The PIM provides a convenient and clear way for
business users to understand, monitor and control the interactions
in the target business process. Then the PIM is transformed to an
executable representation (Platform Specific Model, PSM), such as
BPEL (Business Process Execution Language for Web Service) by
decomposing the DAG into several subprocesses and modeling each
sub-process as a BPEL process that will be deployed to the runtime. |
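As a rough illustration of the decomposition step — splitting the policy DAG into ordered sub-processes — one simple heuristic is to group nodes by their longest-path depth, so that each level becomes a sub-process whose nodes can run independently. The policy names and the level-based partitioning below are assumptions for illustration, not the paper's algorithm:

```python
from collections import defaultdict, deque

def topo_levels(edges):
    """Group DAG nodes into levels, where a node's level is the length of
    the longest path reaching it from any root.  Nodes within a level are
    mutually independent; levels must execute in order."""
    succ = defaultdict(list)
    indeg = defaultdict(int)
    nodes = set()
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
        nodes.update((u, v))
    level = {n: 0 for n in nodes}
    queue = deque(n for n in nodes if indeg[n] == 0)  # roots first (Kahn)
    while queue:
        u = queue.popleft()
        for v in succ[u]:
            level[v] = max(level[v], level[u] + 1)
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    out = defaultdict(list)
    for n, d in sorted(level.items()):
        out[d].append(n)
    return [out[d] for d in sorted(out)]

# Hypothetical monitoring policies combined into a DAG.
edges = [("collect", "aggregate"), ("collect", "filter"),
         ("aggregate", "alert"), ("filter", "alert")]
print(topo_levels(edges))  # → [['collect'], ['aggregate', 'filter'], ['alert']]
```

Each resulting level could then be rendered as one BPEL sub-process, with the level ordering expressed as control links between them.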
|
Title: |
ACCESS CONTROL MODEL FOR GRID VIRTUAL
ORGANIZATIONS |
Author(s): |
Nasser B., Benzekri A., Laborde R.,
Grasset F. and Barrère F. |
Abstract: |
The problems encountered in the
scientific, industrial and engineering fields entail sophisticated
processes across widely distributed communities. The Grid emerged as
a platform whose goal is to enable coordinated resource sharing
and problem solving in dynamic multi-institutional virtual
organizations (VOs). Though the multi-institutional aspect is
considered in the Grid definition, there is no recipe that indicates
how to build a VO in such an environment, where mutual distrust is a
constraint. In the absence of a central management authority, the
different partners must cooperate to put in place a multi-administered
environment. The role of each partner in the VO should be clear and
unambiguous (permissions, prohibitions, users and resources to
manage…). Organizing a large-scale environment is error prone, and
poorly formalized models lead to unexpected security breaches.
Among access control models, RBAC has proved to be flexible but
is not suited to modelling the multi-institutional aspect. In this
context, we propose a formal access control model, ORBAC
(Organization Based Access Control model), that encompasses all the
concepts required to express a security policy in complex
distributed organizations. Its generality and formal foundation
make this model the best candidate to serve as a common framework
for setting up Virtual Organizations. |
|
Title: |
GRAPHICAL SPECIFICATION OF DYNAMIC
NETWORK STRUCTURE |
Author(s): |
Fredrik Seehusen and Ketil Stølen |
Abstract: |
We present a language, MEADOW, for
specifying dynamic networks from a structural viewpoint. We
demonstrate MEADOW in three examples addressing dynamic
reconfiguration in the setting of object-oriented networks, ad hoc
networks and mobile code networks. MEADOW is more expressive than
any language of this kind (e.g. SDL-2000 agent diagrams, composite
structures in UML 2.0) that we are aware of, but maintains, in our
opinion, the simplicity and elegance of these languages.
|
|
Title: |
DIALOGUE ACT MODELLING FOR ANALYSIS
AND SPECIFICATION OF WEB-BASED INFORMATION SYSTEMS |
Author(s): |
Ying Liang |
Abstract: |
Web-based information systems aim to
enable people to live and do things in society with the help of
computer systems on the Internet. Because of the nature of these
systems, their user interfaces and navigation structures are more
important and critical to the user than those of traditional
information systems. Experience with requirements analysis and
specification of these systems has shown the need to gather and
specify communicational requirements for the system in the
analysis model, as a basis for designing user interfaces and
navigation structures. This paper addresses this issue and proposes
a dialogue act modelling approach that focuses on communicational
requirements, with pragmatic and descriptive views, in terms of
speech act theory in social science and object modelling
techniques in Software Engineering. |
|
Title: |
REAL TIME DETECTION OF NOVEL ATTACKS
BY MEANS OF DATA MINING TECHNIQUES |
Author(s): |
Marcello Esposito, Claudio
Mazzariello, Francesco Oliviero, Simon Pietro Romano and Carlo
Sansone |
Abstract: |
Rule-based Intrusion Detection
Systems (IDS) rely on a set of rules to discover attacks in network
traffic. Such rules are usually hand-coded by a security
administrator and statically detect one or a few attack types: minor
modifications of an attack may result in detection failures. For
that reason, signature based classification is not the best
technique to detect novel or slightly modified attacks. In this
paper we approach this problem by extracting a set of features from
network traffic and computing rules which are able to classify such
traffic. Such techniques are usually employed in off-line analysis,
as they are very slow and resource-consuming. We aim to demonstrate
the feasibility of a detection technique which combines the use of a
common signature-based intrusion detection system with the deployment
of a pattern recognition technique. We will introduce the problem,
describe the developed architecture and show some experimental
results to demonstrate the usability of such a system. |
|
Title: |
A THEORETICAL PERFORMANCE ANALYSIS
METHOD FOR BUSINESS PROCESS MODEL |
Author(s): |
Liping Yang, Ying Liu and Xin Zhou |
Abstract: |
When designing a business process
model, predicting its performance is very important. The performance
of business operational process is heavily influenced by its
bottlenecks. In order to improve the performance, finding the
bottlenecks is critical. This paper proposes a theoretical analysis
method for bottleneck detection. An abstract computational model is
designed to capture the main elements of a business operational
process model. Based on the computational model, a balance equation
system is set up. The bottlenecks can be detected by solving the
balance equation system. Compared with traditional bottleneck
detection methods, this theoretical analysis method has two obvious
advantages: the cost of detecting bottlenecks is very low because
they can be predicted in design time with no need for system
simulation; and it can not only correctly predict the bottlenecks
but also, by solving the balance equation system, suggest solutions
for relieving them. |
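The abstract does not give the balance equations themselves; a standard open-network formulation illustrates the idea: solve the flow-balance (traffic) equations for the arrival rate at each activity, then rank activities by utilisation. All activity names, rates and capacities below are invented for illustration:

```python
def find_bottleneck(external, routing, service_time, capacity, iters=1000):
    """Solve the flow-balance equations lam_i = external_i + sum_j lam_j*p_ji
    by fixed-point iteration, then compute each activity's utilisation
    rho_i = lam_i * service_time_i / capacity_i.  The activity with the
    highest utilisation is the predicted bottleneck."""
    acts = list(external)
    lam = dict(external)
    for _ in range(iters):
        new = {i: external[i] + sum(lam[j] * routing.get((j, i), 0.0)
                                    for j in acts)
               for i in acts}
        if all(abs(new[i] - lam[i]) < 1e-12 for i in acts):
            lam = new
            break
        lam = new
    rho = {i: lam[i] * service_time[i] / capacity[i] for i in acts}
    return max(rho, key=rho.get), rho

# Hypothetical claim-handling process: register -> assess -> pay (70% of cases).
external = {"register": 10.0, "assess": 0.0, "pay": 0.0}   # jobs/hour from outside
routing = {("register", "assess"): 1.0, ("assess", "pay"): 0.7}
service = {"register": 0.05, "assess": 0.2, "pay": 0.1}    # hours per job
capacity = {"register": 1, "assess": 3, "pay": 1}          # parallel servers

bottleneck, rho = find_bottleneck(external, routing, service, capacity)
print(bottleneck, round(rho[bottleneck], 2))  # → pay 0.7
```

No simulation is needed: the bottleneck is read directly off the solved equation system at design time, which is the cost advantage the abstract claims.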
|
Title: |
MULTIVIEWS COMPONENTS FOR INFORMATION
SYSTEM DEVELOPMENT |
Author(s): |
Bouchra El Asri, Mahmoud Nassar,
Bernard Coulette and Abdelaziz Kriouile |
Abstract: |
Component-based software aims to
meet the need for reusability and productivity. The view concept
provides software flexibility and maintainability. This work addresses
the integration of these two concepts. Our team has developed a
view-centred approach based on an extension of UML called VUML (View
based Unified Modelling Language). VUML provides the notion of a
multiviews class that can be used to store and deliver information
according to users' viewpoints. Recently, we have integrated into
VUML the multiviews component, a unit of software which can be
accessed through different viewpoints. A multiviews component has
multiviews interfaces that consist of a base interface (shared
interface) and a set of view interfaces, corresponding to different
viewpoints. VUML allows dynamic changing of viewpoint and offers
mechanisms to manage consistency among dependent views. In this
paper, we focus on the static architecture of the VUML component
model. We illustrate our study with a distance learning system case
study. |
|
Title: |
DEFINITION OF BUSINESS PROCESS
INTEGRATION OPERATORS FOR GENERALIZATION |
Author(s): |
Georg Grossmann, Yikai Ren, Michael
Schrefl and Markus Stumptner |
Abstract: |
Integration of autonomous
object-oriented systems requires the integration of object structure
and object behavior. Past research in the integration of autonomous
object-oriented systems has so far mainly addressed integration of
object structure. During our research we have identified business
process correspondences and give proper integration operators. In
this paper we define these integration operators by a set of high
level operation calls and demonstrate them on a car dealer and car
insurance example. For modelling purposes we use a formalised subset
of UML activity diagrams. |
|
Title: |
RESOURCE-AWARE CONFIGURATION
MANAGEMENT USING XML FOR MITIGATING INFORMATION ASSURANCE
VULNERABILITY |
Author(s): |
Namho Yoo |
Abstract: |
This paper suggests an XML-based
configuration management for mitigating information assurance
vulnerability. Once an information assurance vulnerability notice is
given for a system, it is important for reducing massive system
engineering efforts for configuration management. When multiple
systems are updated by security patches for mitigating system
vulnerability, configuration management based on system resource is
trivial, in order to increase accuracy, efficiency and effectiveness
of software processes. By employing XML technology, we can achieve
seamless and efficient configuration management between
heterogeneous system format as well as data formats in analysing and
exchanging the pertinent information for information assurance
vulnerability. Thus, when a system is updated to improve system
vulnerability, the proposed XML-based configuration management
mechanism refers to the system resource information and analyse the
security model and posture of affected sustained system and minimize
the propagated negative impact. Then, an executable architecture for
implementation to verify the proposed scheme and algorithm and
testing environment is presented to mitigate vulnerable systems for
sustained system. |
|
Title: |
A FRAMEWORK FOR MANAGING MULTIPLE
ONTOLOGIES: THE FUNCTION-ORIENTED PERSPECTIVE |
Author(s): |
Baowen Xu, Peng Wang, Jianjiang Lu,
Dazhou Kang and Yanhui Li |
Abstract: |
Ontologies are now ubiquitous in
the Semantic Web and knowledge representation areas. Managing multiple
ontologies is a challenging issue, including comparing existing
ontologies, reusing complete ontologies or their parts, maintaining
different versions, and so on. However, most previous work on
managing multiple ontologies has focused on ontology maintenance,
evolution, and versioning, ignoring a very important point:
exploiting the functions that multiple ontologies provide. This paper
proposes a new framework for managing multiple ontologies based on
the function-oriented perspective, and its goal is to bring multiple
ontologies together to provide more powerful capabilities for the
practical applications. The new multiple ontologies management
architecture is not only feasible, but also robust in the dynamic
and distributed Semantic Web environment. |
|
Title: |
INTRUSION DETECTION AND RESPONSE TO
AUTOMATED ATTACKS |
Author(s): |
Shawn Maschino |
Abstract: |
This paper investigates current
research in the fields of intrusion detection and response for
automated attacks such as worms, denial-of-service, and distributed
denial-of-service attacks. As the number of networked systems rises,
the ability to detect and respond to attacks is an essential part of
system security for protecting data and ensuring availability of
systems. This survey highlights current risk due to the latest
automated attack technology and applies historical and current
research to show the information security approach to detecting and
preventing these types of attacks. Recent technologies such as
virtualization and grid computing are discussed in relation to the
roles they play in this area, and future areas of work are
addressed. |
|
Title: |
USER-CENTRIC ADAPTIVE ACCESS CONTROL
AND RESOURCE CONFIGURATION FOR UBIQUITOUS COMPUTING ENVIRONMENTS |
Author(s): |
Mike White, Brendan Jennings and Sven
van der Meer |
Abstract: |
Provision of adaptive access control
is key to allowing users to harness the full potential of ubiquitous
computing environments. In this paper, we introduce the M-Zones
Access Control (MAC) process, which provides user-centric
attribute-based access control, together with automatic
reconfiguration of resources in response to changes in the set
of users physically present in the environment. User control is
realised via user-specified policies, which are analysed in tandem
with system policies and policies of other users, whenever events
occur that require policy decisions and associated configuration
operations. In such a system, users’ policies may habitually conflict
with system policies, or indeed other users’ policies; thus, policy
conflict detection and resolution is a critical issue. To address
this we describe a conflict detection/resolution method based on a
policy precedence scheme. To illustrate the operation of the MAC
process and its conflict detection/resolution method, we discuss its
realisation in a test bed emulating an office-based ubiquitous
computing environment. |
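The abstract stops short of showing the precedence mechanism; a minimal sketch of precedence-based conflict resolution follows. The policy origins ("system", "owner", "guest"), their ordering, and the default-deny fallback are our assumptions for illustration, not the MAC process's actual scheme:

```python
from dataclasses import dataclass

@dataclass
class Policy:
    origin: str      # who issued it: "system", "owner" or "guest" (hypothetical)
    action: str      # the action the policy governs
    decision: str    # "permit" or "deny"

# Hypothetical precedence order: system policies override owner policies,
# which override guest policies (lower number = higher precedence).
PRECEDENCE = {"system": 0, "owner": 1, "guest": 2}

def resolve(policies, action):
    """Return the decision of the highest-precedence policy matching the
    action.  Ties at the same precedence level resolve to 'deny', a common
    fail-safe choice assumed here; no matching policy also means 'deny'."""
    matching = [p for p in policies if p.action == action]
    if not matching:
        return "deny"  # default-deny, assumed
    best = min(PRECEDENCE[p.origin] for p in matching)
    top = {p.decision for p in matching if PRECEDENCE[p.origin] == best}
    return "deny" if "deny" in top else "permit"

rules = [Policy("guest", "use_display", "permit"),
         Policy("system", "use_display", "deny")]
print(resolve(rules, "use_display"))  # → deny
```

A guest's permit is overridden by the system-level deny, which is the kind of conflict the MAC process must detect and resolve whenever the set of present users changes.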
|
Title: |
METAPOLICIES AND CONTEXT-BASED ACCESS
CONTROL |
Author(s): |
Ronda R. Henning |
Abstract: |
An access control policy mediates
access between authorized users of a computer system and system
resources. Access control policies are defined at a given level of
abstraction, such as the file, directory, system, or network, and
can be instantiated in layers of increasing (or decreasing)
abstraction. In this paper, the concept of a metapolicy, or policy
that governs execution of subordinate security policies, is
introduced. The metapolicy provides a method to communicate updated
higher level policy information to all components of a system; it
minimizes the overhead associated with access control decisions by
making access decisions at the highest level possible in the policy
hierarchy. This paper discusses how metapolicies are defined and how
they relate to other access control mechanisms. The rationale for
revisiting metapolicies as an access control option is presented.
Finally, a proposed research methodology is presented to determine
the feasibility of metapolicy derivation and deployment in current
generation distributed and federated computing environments. |
|
Area 4 - Software Agents
and Internet Computing
|
Title: |
C# TEMPLATES FOR TIME-AWARE AGENTS |
Author(s): |
Merik Meriste, Tõnis Kelder, Jüri
Helekivi and Leo Motus |
Abstract: |
Autonomous behaviour of components
characterises today's computer applications. This has introduced a new
generic architecture - multi-agent systems - where the interactions
of autonomous proactive components, i.e. agents, are decisive in
determining the overall behaviour of the system. Increasingly,
agent applications need time-awareness of agents and/or their
interactions. Therefore the application architecture has to be
enhanced with a sophisticated time model that enables the study of
time-aware behaviour and interactions of agents. The focus of this
paper is on the inner structure of an agent that provides explicit
hooks for elaboration of time support to enable time-aware behaviour
of agents, on the general infrastructure for time-sensitive
communication of agents, and on templates for building interactive
time-aware agents. |
|
Title: |
A NEW MODEL FOR DATABASE SERVICE
DISCOVERY IN MOBILE AGENT SYSTEM |
Author(s): |
Lei Song, Xining Li and Jingbo Ni |
Abstract: |
One of the main challenges of mobile
agent technology is how to locate hosts that provide services
specified by mobile agents. As it is a newly emerging research
topic, few research groups have paid attention to offering an
environment that combines the concept of service discovery and
mobile agents to build dynamic distributed systems. Traditional
Service Discovery Protocols (SDPs) can be applied to mobile agent
systems to explore the service discovery issue. However, because of
their architecture deficiencies, they do not adequately solve all
the problems that may arise in a dynamic domain such as Database
Location Discovery. From this point of view, we need some enhanced
service discovery techniques for the mobile community. This article
proposes a new model for solving the database service location
problem in the domain of mobile agents by implementing a Service
Discovery Module based on Search Engine techniques. As a typical
interface provided by a mobile agent server, the Service Discovery
Module also improves the self-decision intelligent ability of mobile
agents with respect to Information Retrieval. This work focuses on
the design of an independent search engine, IMAGOSearch, and a
discussion of how to integrate it with the IMAGO System, thus
providing a global scope service location tool for intelligent
mobile agents. |
|
Title: |
AN ARCHITECTURE FOR INTRUSION
DETECTION AND ACTIVE RESPONSE USING AUTONOMOUS AGENTS IN MOBILE AD
HOC NETWORKS |
Author(s): |
Ping Yi, Shiyong Zhang and Yiping
Zhong |
Abstract: |
A mobile ad hoc network is a
collection of wireless mobile hosts forming a temporary network
without the aid of any established infrastructure or centralized
administration. The flexibility in space and time induces new
challenges towards the security infrastructure. Contrary to their
wired counterpart, mobile ad hoc networks do not have a clear line
of defense, and every node must be prepared for encounters with an
adversary. Therefore, a centralized or hierarchical network security
solution does not work well. In this paper we provide a scalable,
distributed security architecture for mobile ad hoc networks. The
architecture integrates ideas from the immune system with a
multi-agent architecture. Compared with traditional security systems,
the proposed security architecture is designed to be distributed,
autonomous, adaptive and scalable. |
|
Title: |
A SOFTWARE FRAMEWORK FOR OPEN
STANDARD SELF-MANAGING SENSOR OVERLAY FOR WEB SERVICES |
Author(s): |
Wail Omar, Bassam Ahmad, Azzelarabe
Taleb-Bendiab and Yasir Karm |
Abstract: |
To improve the usability and
reliability of grid-based applications, instrumentation middleware
services are now proposed and widely accepted as a means to monitor
and manage grid users’ applications. A plethora of research works
now exists focusing on the design and implementation of a range of
software instrumentation techniques (Lee et al. 2003, Reilly and
Taleb 2002) to enhance general systems management, including QoS,
fault-tolerance, systems recovery and load-balancing. However,
management and assurance concerns related to sensors and
actuation (effectors) for grid and web services environments have
received little to no attention. This paper presents a lightweight
framework for the generation, deployment and discovery of different
types of sensors and actuators, together with two associated
description languages, namely the monitor session description
language and the sensor and actuation description language. These
are used respectively to
describe the set of deployed sensors and actuators in a given
self-managing grid infrastructure, and to define monitoring
properties and policies of a given target service/application. In
addition, the paper presents a developed sensor framework to provide
the basic systems awareness fabric layer for managing decentralised
web services. The paper concludes with a case study illustrating the
use of the sensor framework and a monitoring job request to manage and
schedule the sensor’s operation. |
|
Title: |
LEVELS OF ABSTRACTION IN PROGRAMMING
DEVICE ECOLOGY WORKFLOWS |
Author(s): |
Seng W. Loke, Sea Ling, Gerry Butler
and Brett Gillick |
Abstract: |
We explore the notion of the workflow
for specifying interactions among collections of devices (which we
term "device ecologies"). We discuss three levels of abstraction in
programming device ecologies: high-level workflow, low-level
workflow and device conversations, and how control (in the sense of
operations issued by an end-user on such workflows or exceptions) is
passed between levels. Such levels of abstraction are important
since the system should be as user friendly as possible while
permitting programmability not only at high levels of abstraction
but also at low levels of detail. We also present a conceptual
architecture for the device ecology workflow engine for executing
and managing such workflows. |
|
Title: |
GENERIC FAULT-TOLERANT LAYER
SUPPORTING PUBLISH/SUBSCRIBE MESSAGING |
Author(s): |
Milovan Tosic and Arkady Zaslavsky |
Abstract: |
With the introduction of clustered
messaging brokers and the fault-tolerant Mobile Connector, we can
guarantee the exactly-once consumption of messages by agents. The
context-aware messaging allowed us to decrease the messaging
overhead which has to be present in any fault-tolerant solution.
This paper proposes a complete fault-tolerant layer for multi-agent
systems (EFTL) that does not restrict agent autonomy and mobility in
any way. An application can choose if it wants EFTL support and that
decision is based on support costs. A persistent publish/subscribe
messaging model allows the creation of an external
platform-independent fault-tolerant layer. In order to support the
multi-agent platforms of different vendors, a large part of the
application logic is moved from those platforms to an application
server. We present the EFTL system architecture, the algorithm for
exactly-once message consumption, and the system’s performance
analysis. |
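The exactly-once guarantee the abstract mentions is commonly built from at-least-once delivery plus deduplication: the persistent broker redelivers after a failure, and the consumer discards messages it has already processed. A minimal sketch (the message-id scheme and in-memory store are our assumptions, not EFTL's design):

```python
class ExactlyOnceConsumer:
    """Exactly-once consumption on top of a persistent publish/subscribe
    channel: redelivery after failover gives at-least-once delivery, and a
    durable set of processed message ids turns redeliveries into no-ops."""

    def __init__(self):
        self.seen = set()      # in a real system, persisted with agent state
        self.processed = []

    def on_message(self, msg_id, payload):
        if msg_id in self.seen:        # duplicate after broker failover: skip
            return False
        self.seen.add(msg_id)          # must commit atomically with the effect
        self.processed.append(payload)
        return True

c = ExactlyOnceConsumer()
for mid, p in [(1, "a"), (2, "b"), (2, "b"), (3, "c")]:  # id 2 is redelivered
    c.on_message(mid, p)
print(c.processed)  # → ['a', 'b', 'c']
```

The subtle part, which the sketch only gestures at, is committing the id set and the message's effect in one atomic step; otherwise a crash between the two reintroduces duplicates or losses.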
|
Title: |
LIGHTWEIGHT CLIENT-PULL PROTOCOL FOR
MOBILE COMMUNICATION |
Author(s): |
Stefano Sanna, Emanuela De Vita,
Andrea Piras and Christian Melchiorre |
Abstract: |
Consumer mobile devices, such as
cellular phones and PDAs, rely on TCP/IP as their main communication
protocol. However, cellular networks are not as reliable as wired and
wireless LANs, due to both user mobility and geographical obstacles.
Moreover, limited bandwidth outside urban areas requires
application-level data priority management, in order to improve user
experience and avoid communication stack deadlocks. This paper
presents the early specification and first prototype of the LCPP
(Lightweight Client-Pull Protocol), a UDP-based communication
protocol specially designed to provide better performance and faster
responsiveness, and to save processing power on mobile devices. Using
some concepts adopted in the field of P2P file sharing, LCPP
provides a data priority management approach, which enables
applications to negotiate concurrent access to the communication
channel and to be notified about delays, network congestion, or a
remote device's inability to process data. |
|
Title: |
EVALUATION OF METHODS FOR CONVERTING
REQUEST FOR QUOTATION DATA INTO ORDINAL PREFERENCE DATA: ESTIMATING
PRODUCT PREFERENCE IN ONLINE SHOPPING SYSTEM |
Author(s): |
Toshiyuki Ono, Hirofumi Matsuo and
Norihisa Komoda |
Abstract: |
Obtaining timely information on
consumer preference is critical in the success of marketing and
operations management. Ono and Matsuo (2000) proposed a method for
estimating consumer preference that uses the consumers’ history of
browsing among possible configurations of personal computer in an
online shopping environment. This method consists of three steps:
(1) collecting the data on each consumer’s browsing history of
quotation and purchase requests, (2) converting requests for
quotation and purchase order data into ordinal preference data, and
(3) estimating consumer preference on product attributes by applying
a multi-attribute utility function. The proposed method assumes that
a product configuration quoted later is preferred to those quoted
earlier. It also assumes that how many times a product
configuration is quoted does not affect the estimate of product
preference, as long as it is quoted at least once. Although these
assumptions are critical in estimating consumer preference, their
validity has not been examined. In this paper, we examine the validity of
such hypotheses on the relationships between the consumer preference
and the sequence and frequency of quoted product configurations, and
propose six methods for estimating consumer preference. We show
experimentally that, for about 60% of the examinees, all of the
proposed methods approximate the consumer preference obtained by the
conjoint analysis, and that there is little difference in precision
between the six methods. Therefore, we conclude that any of the six
proposed methods can be used equally well for estimating consumer
preference in a timely fashion. |
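The conversion step in the abstract's assumption — a configuration quoted later is preferred to those quoted earlier, and repeat quotations add no weight — can be sketched directly. The configuration labels below are placeholders:

```python
def quotes_to_ordinal(history):
    """Convert a time-ordered quotation history into an ordinal preference
    ranking under the stated assumptions: later quotes dominate earlier
    ones, and only the last occurrence of a repeated configuration counts."""
    ranking = []
    for config in history:          # history is ordered oldest -> newest
        if config in ranking:
            ranking.remove(config)  # drop the earlier occurrence
        ranking.append(config)
    return ranking[::-1]            # most preferred first

# Hypothetical browsing history of PC configurations (oldest first).
history = ["A", "B", "A", "C", "B"]
print(quotes_to_ordinal(history))  # → ['B', 'C', 'A']
```

The resulting ordinal ranking is what would then feed the multi-attribute utility estimation; the paper's six methods vary exactly this sequence/frequency treatment.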
|
Title: |
ALIGNMENT OF WEB SITES TO CUSTOMER
PROCESSES - A STUDY IN THE BANKING INDUSTRY |
Author(s): |
Juergen Moormann and Nicole Kahmer |
Abstract: |
Banks continually claim to supply
customer-orientated services. However, banking services are still
focused on purely delivering financial products. Customers who
approach the bank will usually receive financial products but often
no specific solution to their true problem. As a result, customers’
perception of banking services is often far from satisfactory. In
addition, important targets of marketing strategy (e.g., customer
loyalty, cross- and up-selling) are not achieved. Therefore, the
consistent alignment of financial services to customer processes
becomes increasingly important and will significantly enhance the
competitiveness of banks. This paper investigates the extent of
customer support provided by banks with respect to the customers’
problem-solving process. The study focuses on one particular customer
interface within the multi-channel approach – the Internet. As the
basis of this study, the paper offers the theoretical framework of
customer processes. Secondly, it provides an empirical
identification of customer processes which has been conducted by
means of a comprehensive questionnaire. Thirdly, the evaluation of
100 bank web sites represents the main part of the study. As a
result, the paper reveals that most of the analyzed web sites fail
to assist customers within their processes. It will be a major
challenge for banks’ managers to bring together both sides:
developing technically sound front-end application systems while at
the same time incorporating a consistently customer-driven
approach. |
|
Title: |
A MICROKERNEL ARCHITECTURE FOR
DISTRIBUTED MOBILE ENVIRONMENTS |
Author(s): |
Thomas Bopp and Thorsten Hampel |
Abstract: |
Microkernels are well known in the
area of operating systems research. In this paper we adapt the
microkernel concept to the field of Computer Supported
Cooperative Work and Learning (CSCW/L) to provide a basic underlying
architecture for various collaborative systems. Such an architecture
serves the fields of mobile and distributed collaborative
infrastructures well, with their new inclusion of small mobile
devices and ad-hoc network structures. Our architecture provides a
distributed
object repository for an overlay network of CSCW/L peers. Nodes can
dynamically join and leave this network and each peer is still
autonomous. In this network, different kinds of peers exist depending
on the module configuration of a system. So-called super-peers with
large storage and computing power provide gateways to the network
(for example, HTTP). |
|
Title: |
AN AGENT FOR EMERGENT PROCESS
MANAGEMENT |
Author(s): |
John Debenham |
Abstract: |
Emergent processes are business
processes whose execution is determined by the prior knowledge of
the agents involved and by the knowledge that emerges during a
process instance. The amount of process knowledge that is relevant
to a knowledge-driven process can be enormous and may include common
sense knowledge. If a process's knowledge cannot be represented
feasibly, then that process cannot be managed, although its
execution may be partially supported. In an e-market domain, the
majority of transactions, including trading orders, requests for
advice and information, are knowledge-driven processes for which the
knowledge base is the Internet, and so representing the knowledge is
not at issue. Multiagent systems are an established platform for
managing complex business processes. What is needed for emergent
process management is an intelligent agent that is driven not by a
process goal, but by an in-flow of knowledge, where each chunk of
knowledge may be uncertain. Such an agent should assess the extent to
which it chooses to believe that the information is correct, and so
it requires an inference mechanism that can cope with information
of differing integrity. An agent is described that achieves this by
using ideas from information theory, and by using maximum entropy
logic to derive integrity estimates for knowledge about which it is
uncertain. Emergent processes are managed by these agents that
extract the process knowledge from this knowledge base --- the
Internet --- using a suite of data mining bots. The agents make no
assumptions about the internals of the other agents in the system
including their motivations, logic, and whether they are conscious
of a utility function. These agents focus only on the information in
the signals that they receive. |
|
Title: |
ADVISORY AGENTS IN THE SEMANTIC WEB |
Author(s): |
Ralf Bruns, Jürgen Dunkel and Sascha
Ossowski |
Abstract: |
In this paper, we describe the
advances of the Semantic E-learning Agent project, whose objective
is to develop virtual student advisers that render support to
university students in order to successfully organize and perform
their studies. The advisory agents are developed with novel concepts
of the Semantic Web and agent technology. The key concept is the
semantic modeling of the domain knowledge by means of XML-based
ontology languages such as OWL. Software agents apply ontological
and domain knowledge in order to assist human users in their
decision making processes. Agent technology enables the
incorporation of personal confidential data with publicly accessible
knowledge sources of the Semantic Web in the same inference process.
|
|
Title: |
BUILDING A LARGE-SCALE INFORMATION
SYSTEM FOR THE EDUCATION SECTOR: A PROJECT EXPERIENCE |
Author(s): |
Pawel Gruszczynski, Bernard Lange,
Michal Maciejewski, Cezary Mazurek, Krystian Nowak, Stanislaw
Osinski, Maciej Stroinski and Andrzej Swedrzynski |
Abstract: |
Implementing a large-scale
information system for the education sector involves a number of
engineering challenges, such as high security and correctness
standards imposed by the law, a large and varied group of end users,
and fault-tolerance and the distributed character of processing. In
this paper we report on our experiences with building and deploying
a senior high school recruitment system for five major cities in
Poland. We discuss system architecture and design decisions, such as
thin vs. rich client, on-line vs. off-line processing, dedicated
network vs. Internet environment. We also analyse potential problems
our present approach may cause in the future. |
|
Title: |
DESIGN OF CONTINUOUS CALL MARKET WITH
ASSIGNMENT CONSTRAINTS |
Author(s): |
A. R. Dani, V. P. Gulati and Arun K
Pujari |
Abstract: |
Today’s companies increasingly use the
Internet as a common communication medium for commercial transactions.
The global connectivity and reach of the Internet mean that companies
face increasing competition from various quarters. This requires that
companies optimize the way they do business, change their
business processes and introduce new business processes. This has
opened up new research issues, and electronic or automated
negotiation is one such area. A few companies have introduced
electronic auctions for procurement and for trade negotiations. In
the present paper, we propose the design of a continuous call market,
which can help enterprises in electronic procurement as well as in
selling items electronically. The design is based on double-sided
auctions, where multiple buyers and sellers submit their respective
bids and asks. Buyers and sellers can also specify assignment
constraints. The main feature of our work is an algorithm that
generates an optimum matching with polynomial time complexity under
assignment constraints. |
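As a minimal illustration of the double-sided auction matching this abstract describes, a call market can clear by sorting bids and asks; this sketch ignores the paper's assignment constraints and is not the authors' polynomial-time algorithm:

```python
# Minimal double-sided auction matching sketch (no assignment
# constraints): sort bids descending and asks ascending, then match
# while the best remaining bid meets the best remaining ask.

def match_orders(bids, asks):
    """Return (bid, ask) price pairs for feasibly matched orders."""
    bids = sorted(bids, reverse=True)   # highest bid first
    asks = sorted(asks)                 # lowest ask first
    matches = []
    for bid, ask in zip(bids, asks):
        if bid >= ask:                  # trade is feasible
            matches.append((bid, ask))
        else:
            break                       # no further trades possible
    return matches

print(match_orders([10, 8, 5], [4, 7, 9]))  # [(10, 4), (8, 7)]
```

How to split the bid-ask surplus (the clearing price rule) is a separate design choice not shown here.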
|
Title: |
BEST PRACTICES AGENT PATTERNS FOR
ON-LINE AUCTIONS |
Author(s): |
Ivan Jureta, Manuel Kolp and Stéphane
Faulkner |
Abstract: |
Today a high volume of goods and
services is traded using online auction systems. The growth in
size and complexity of architectures to support online auctions
requires the use of distributed and cooperative software techniques.
In this context, the agent software development paradigm seems
appropriate for their modelling, development and
implementation. This paper proposes an agent-oriented patterns
analysis of best practices for online auction. The patterns are
intended to help both IT managers and software engineers during the
requirement specification of an on-line auction system while
integrating benefits of agent software engineering. |
|
Title: |
A LIGHTWEIGHT APPROACH TO UNBREAKABLE
LINKS IN WWW-BASED HYPERTEXT ENVIRONMENTS: “USERS AND TOOLS WANT TO
BREAK LINKS” |
Author(s): |
Thomas Bopp, Thorsten Hampel and
Bernd Eßmann |
Abstract: |
In this paper, we present a
lightweight approach to achieve link consistency through a
combination of object pointers and WWW-style path-oriented links.
Our goal is to allow the use of common web-based tools with our
CSCW/L system sTeam, but at the same time achieve link consistency
within the system. |
|
Title: |
WEB RECOMMENDATION SYSTEM BASED ON A
MARKOV-CHAIN MODEL |
Author(s): |
Francois Fouss, Stephane Faulkner,
Manuel Kolp, Alain Pirotte and Marco Saerens |
Abstract: |
This work presents some general
procedures for computing dissimilarities between nodes of a
weighted, undirected graph. It is based on a Markov-chain model of
a random walk through the graph. This method is applied to the
architecture of a Multi-Agent System (MAS), in which each agent can
be considered as a node and each interaction between two agents as a
link. The model assigns transition probabilities to the links
between agents, so that a random walker can jump from agent to
agent. A quantity, called the average first-passage time, computes
the average number of steps needed by a random walker for reaching
agent k for the first time, when starting from agent i. A closely
related quantity, called the average commute time, provides a
distance measure between any pair of agents. Yet another quantity of
interest, closely related to the average commute time, is the
pseudoinverse of the Laplacian matrix of the graph, which represents
a similarity measure between the nodes of the graph. These
quantities, representing dissimilarities (similarities) between any
two agents, have the nice property of decreasing (increasing) when
the number of paths connecting two agents increases and when the
“length” of any path decreases. The model is applied on a
collaborative filtering task where suggestions are made about which
movies people should watch based upon what they watched in the past.
For the experiments, we built a MAS architecture and instantiated
the agents’ belief-sets from a real movie database. Experimental
results show that the Laplacian-pseudoinverse-based similarity
outperforms all the other methods. |
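The average commute time and Laplacian pseudoinverse described in this abstract admit a compact numerical sketch. The following is a generic graph computation (not the authors' MAS or movie-recommendation code), using the standard relation n(i, j) = V_G (l+_ii + l+_jj - 2 l+_ij):

```python
import numpy as np

def commute_times(adj):
    """Average commute times n(i, j) = V_G * (l+_ii + l+_jj - 2 l+_ij),
    where l+ is the Moore-Penrose pseudoinverse of the graph Laplacian
    and V_G (the graph volume) is the sum of node degrees."""
    degrees = adj.sum(axis=1)
    laplacian = np.diag(degrees) - adj
    lp = np.linalg.pinv(laplacian)          # Moore-Penrose pseudoinverse
    d = np.diag(lp)
    return degrees.sum() * (d[:, None] + d[None, :] - 2.0 * lp)

# Path graph 0-1-2: commute time across one edge is 4, across two is 8
# (commute time equals 2|E| times the effective resistance).
adj = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
C = commute_times(adj)
```

As the abstract notes, these commute times decrease when more, or shorter, paths connect two nodes, which is what makes them usable as a dissimilarity measure.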
|
Title: |
ESTIMATION OF THE SECURITY LEVEL IN A
MOBILE AND UBIQUITOUS ENVIRONMENT BASED ON THE SEMANTIC WEB |
Author(s): |
Reijo Savola |
Abstract: |
The emerging Semantic Web enables
semantic discovery and systematic maintenance of information that
can be used as reference data when estimating the security level of
a network, or a part of it. Using suitable security metrics and
ontologies, nodes can estimate the level of security from both their
own and the network’s point of view. The most secure applications
and communication peers can be selected based on estimation results.
In this paper we discuss security level estimation in a mobile and
ubiquitous environment based on the Semantic Web. An
interdisciplinary security information framework can be built using
the Semantic Web to offer metrics and security level information for
product quality, the traffic and mobility situation, general
statistical knowledge and research results having an effect on the
security level. |
|
Title: |
PERSONALISATION AND CUSTOMISATION - A
STRATEGIC LEVERAGE TO SUSTAIN E-TRADING MARKET SHARE |
Author(s): |
Jimmy Liu, S.J. Fischer and S. Peters |
Abstract: |
Electronic banking (e-banking) has
emerged as the most popular way for retail banks to provide financial
services to private households. Stock trading transformed into
e-trading as retail banks created comprehensive web portals for
customers to perform financial transactions. Low search costs have
sparked fierce price competition. For companies to sustain
profitability and retain customers, service differentiation is
vital. Personalisation and Customisation (P&C) techniques allow
banks to provide this individualised and differentiated service and
foster a stronger customer relationship. Various P&C approaches have
been examined through case studies of top e-trading companies. A
three-layer architecture is proposed which enables P&C to provide
an individual service without undermining core business functions. |
|
Title: |
DEVELOPING OF MULTISTAGE VIRTUAL
SHARED MEMORY MODEL FOR CLUSTER BASED PARALLEL SYSTEMS |
Author(s): |
Aye Aye Nwe, Khin Mar Soe, Than Nwe
Aung, Thinn Thu Naing, Myint Kyi and Pyke Tin |
Abstract: |
In this paper, we propose a new
multistage virtual shared memory model for cluster-based parallel
systems. This model can be expanded in a hierarchical manner and
covers many of the previous cluster-based parallel system designs.
Queuing theory and Jackson queuing networks are applied to
construct an analytical model. This model gives a closed-form
solution for system performance metrics such as processor
waiting time and system processing power. In developing this
analytical model, we use open queuing network rules to analyze a
closed queuing network and calculate the input rate of each service
center as a function of the input rate of the previous service center.
The model can be used for evaluating a cluster-based parallel
processing system or optimizing its specification in the design space. |
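The open-queuing-network rules the abstract mentions can be illustrated by solving the standard Jackson traffic equations. The two-station tandem below and its rates are invented for illustration, and the per-station response times use the textbook M/M/1 formula, not the authors' multistage model:

```python
import numpy as np

def jackson_metrics(gamma, routing, mu):
    """Solve the open Jackson network traffic equations
    lambda = gamma + lambda @ P, then return per-station mean
    response times 1 / (mu - lambda) under M/M/1 assumptions."""
    P = np.asarray(routing, dtype=float)
    gamma = np.asarray(gamma, dtype=float)
    mu = np.asarray(mu, dtype=float)
    n = len(gamma)
    # lambda (I - P) = gamma  =>  lambda = gamma (I - P)^-1
    lam = gamma @ np.linalg.inv(np.eye(n) - P)
    assert np.all(lam < mu), "network must be stable (lambda < mu)"
    return lam, 1.0 / (mu - lam)

# Hypothetical two-station tandem: external arrivals (rate 2) enter
# station 0, which feeds station 1; service rates are 5 and 4.
lam, W = jackson_metrics([2.0, 0.0], [[0.0, 1.0], [0.0, 0.0]], [5.0, 4.0])
```

Each station's input rate is indeed a function of the previous station's rate: in this tandem both stations see the external arrival rate of 2.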
|
Title: |
CREATING JOINT EFFICIENCIES:
WEB-ENABLED SUPPLY CHAIN SERVICES FOR RURAL COMMUNITIES |
Author(s): |
S. M. Muniafu, A. Verbraeck |
Abstract: |
Currently, about half the population
of the world lives in rural areas, and they are disadvantaged
regarding access to the basic technical knowledge to exploit the
expanding Internet infrastructure. They lack readily available
supportive tools, methodologies, and the capability to take
advantage of the newly developed technologies to integrate their
supply chains. This paper identifies the need for designing
environments to support the development of web-enabled supply chain
services for rural areas, based on the concept of a so-called design
studio, which uses simulation models and collaboration technology to
facilitate the design. The practical applicability of the concept in
creating joint efficiencies is discussed before concluding that the
conceptual model presented may provide a much-needed solution to
some of the failures and problems faced when trying to put supply
chains in rural areas onto the web. Exploratory cases are being
carried out to prove and validate the applicability of the concept. |
|
Title: |
USING ONTOLOGIES TO PROSPECT OFFERS
ON THE WEB |
Author(s): |
Rafael Cunha Cardoso, Fernando da
Fonseca de Souza and Ana Carolina Salgado |
Abstract: |
Today, information retrieval and
extraction systems play an important role in obtaining relevant
information from the World Wide Web (WWW). The Semantic Web, which can
be seen as the Web’s future, introduces a set of concepts and tools
that are being used to insert “intelligence” into the contents of the
current WWW. Among such concepts, ontologies play a fundamental role
in this new environment. Through ontologies, software agents can
traverse the Web “understanding” its meaning in order to execute more
complex and useful tasks. This work presents an architecture that
uses Semantic Web concepts allied to Regular Expressions (regex) to
develop a device that retrieves/extracts specific domain information
from the Web (HTML documents). The prototype developed on the basis of
this architecture gets data about offers announced on supermarket
Web sites, using ontologies and regex to achieve this goal. |
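As a rough illustration of the regex-based extraction step this abstract describes, the snippet below pulls product/price pairs from an HTML fragment. The tag layout, class names, and price format are assumptions for illustration, not the paper's actual supermarket pages:

```python
import re

# Hypothetical HTML fragment from a supermarket offers page.
html = ('<div class="offer">Rice 1kg <span class="price">R$ 4,99</span></div>'
        '<div class="offer">Beans 1kg <span class="price">R$ 7,50</span></div>')

# Capture product name and price from each offer block via named groups.
pattern = re.compile(
    r'<div class="offer">\s*(?P<item>.+?)\s*'
    r'<span class="price">R\$ (?P<price>\d+,\d{2})</span>'
)

offers = [(m.group("item"), m.group("price")) for m in pattern.finditer(html)]
print(offers)  # [('Rice 1kg', '4,99'), ('Beans 1kg', '7,50')]
```

In the paper's architecture an ontology would then map the extracted strings to domain concepts; that mapping is not shown here.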
|
Title: |
APPROACHES OF WEB SERVICES
COMPOSITION - COMPARISON BETWEEN BPEL4WS AND OWL-S |
Author(s): |
Daniela Barreiro Claro, Patrick
Albers and Jin-Kao Hao |
Abstract: |
Web service technologies offer users
web applications and allow them to connect to different required
services. Web Services technologies allow interaction between
applications. Sometimes a single service alone does not meet the
user’s needs. In this case, it is necessary to compose several
services in order to achieve the user’s goal. For composing web
services, we developed an example using two main approaches: the
first one is BPEL4WS, a Business Process composition, and the other
is OWL-S, an ontology specifically for web services composition. In
this paper we compare the features of these two approaches and we
propose a mechanism to improve service discovery. |
|
Title: |
HYBRID APPLICATION SUPPORT FOR MOBILE
INFORMATION SYSTEMS |
Author(s): |
Volker Gruhn and Malte Hülder |
Abstract: |
The widespread presence of wireless
networks and the availability of mobile devices have enabled the
development of mobile applications that take us a step closer to
accomplishing Weiser's vision of ubiquitous computing. Unfortunately,
network connectivity is still not available everywhere at all times. To
increase the benefit of mobile applications, the next logical step
is to provide support for an offline mode that allows users to
work continuously with an application, even when the device is
disconnected from a network. In this paper typical problems of
replicating data are explained, possible solutions are discussed, and
two architectural patterns that could be used to
implement hybrid support are illustrated. |
|
Title: |
WEB ENGINEERING : AN ASPECT ORIENTED
APPROACH |
Author(s): |
Joumana Dargham and Sukaina Al
Nasrawi |
Abstract: |
Web engineering has nowadays become
a main research interest for software developers. With the
spread of the World Wide Web and the need for a new
category of applications, the research community has shifted its
interest toward a new era of applications: web-based applications.
To parallel the fast growth of the technology and the new needs for
general and special-purpose web applications, research should be done
to improve and standardize their development, as is done for non-web
applications. Many development and programming tools have been
implemented to support web engineering; however, studies at the
design level are still premature. Frameworks, design methodologies
and web-based development tools are at an experimental level and
depend on individual efforts. In this context, and considering that
the overheads of web development are obstacles more than new
methodologies, we adopt the aspect-oriented approach for the
development of web applications. Common aspects can be defined, and
the OOHDM concepts can be mapped into an aspect-oriented design
model. |
|
Title: |
BOOSTING ITEM FINDABILITY: BRIDGING
THE SEMANTIC GAP BETWEEN SEARCH PHRASES AND ITEM INFORMATION |
Author(s): |
Hasan Davulcu, Hung V. Nguyen and
Vish Ramachandran |
Abstract: |
Most search engines perform text
query and retrieval based on keyword phrases. However, publishers
cannot anticipate all possible ways in which users search for the
items in their documents. In fact, many times, there may be no
direct keyword match between a search phrase and descriptions of
items that are perfect “hits” for the search. We present a highly
automated solution to the problem of bridging the semantic gap
between item information and search phrases. Our system can learn
rule-based definitions that can be ascribed to search phrases with
dynamic connotations by extracting structured item information from
product catalogs and by utilizing a frequent itemset mining
algorithm. We present experimental results for a realistic
e-commerce domain. Also, we compare our rule-mining approach to
vector-based relevance feedback retrieval techniques and show that
our system yields definitions that are easier to validate and
perform better. |
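The frequent itemset mining step this abstract relies on can be sketched with a brute-force counter; the catalog transactions and support threshold below are hypothetical, and this is not the paper's actual mining algorithm (an Apriori-style miner would prune candidates instead):

```python
from itertools import combinations
from collections import Counter

def frequent_itemsets(transactions, min_support, max_size=2):
    """Count itemsets of size 1..max_size appearing in at least
    min_support transactions (a brute-force sketch, not Apriori)."""
    counts = Counter()
    for t in transactions:
        items = sorted(set(t))          # canonical order for tuple keys
        for k in range(1, max_size + 1):
            counts.update(combinations(items, k))
    return {s: c for s, c in counts.items() if c >= min_support}

# Hypothetical structured item records extracted from product catalogs.
catalog = [
    {"laptop", "15in", "4GB"},
    {"laptop", "15in", "8GB"},
    {"desktop", "8GB"},
]
print(frequent_itemsets(catalog, min_support=2))
```

A frequent pair such as ("15in", "laptop") is the kind of co-occurrence that could back a rule-based definition for a search phrase.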
|
Title: |
J2EE VERSUS ZOPE |
Author(s): |
Paul L. Juell, Syed M. Rahman and
Akram Salah |
Abstract: |
This paper compares several features
of the J2EE and Zope technologies. Both technologies have
individual strengths and are appropriate in different contexts.
In choosing a development environment or technology for web
applications, a criterion is needed to assess the available
development technologies. In order to do this comparison, we have
designed a web-based prototype for "managing research information"
and implemented the prototype in both technologies. We have compared
several key features of both technologies, including content
management, session handling, safe delegation, security, and
testing facilities. The comparison in this paper forms a basis for
making choices for web development technology for academia and
industry. |
|
Title: |
PROVIDING PEER-TO-PEER FEATURES TO
EXISTING CLIENT-SERVER CSCW SYSTEMS |
Author(s): |
Bernd Eßmann and Holger Funke |
Abstract: |
Developers of classical client-server
CSCW systems face a true dilemma: they have created working
cooperation environments for many scenarios of cooperative work,
but as users become independent of fixed places by using mobile
devices interconnected by ad-hoc networks, the support of mobility
becomes an important topic of CSCW. Furthermore, since client-server
architectures do not work well in dynamic networks, P2P systems are
entering the field of CSCW. But is it a good approach to discard a
well-working client-server system in order to implement a brand-new
P2P system from scratch? Our approach is to extend our CSCW platform
step by step with P2P abilities without losing the advantages of
client-server computing. This paper describes the first step:
wrapping the RMI-based communication protocol into an industry
standard P2P protocol called JXTA. We do this by first identifying
the vital features of CSCW systems, e.g. consistent communication
and event handling. These results ground the concepts for the
concrete implementation for our CSCW system described thereafter. |
|
Title: |
A FRAMEWORK FOR DISTRIBUTED OBJECTS
IN PEER-TO-PEER COOPERATION ENVIRONMENTS |
Author(s): |
Bernd Eßmann and Thorsten Hampel |
Abstract: |
The dictum of the mobile society
demands new qualities of systems for computer-supported cooperative
work (CSCW). Collaboration support today includes distant
cooperation as well as face-to-face meetings. Applications providing
the needed support have to deal with heterogeneous network
environments and must be able to establish a network from scratch
when existing infrastructures are not available. While client/server
architectures are not useful in such network environments,
peer-to-peer architectures seem to be the design of choice. With
JXTA, a powerful peer-to-peer framework exists that allows building
peer-to-peer applications. What is missing is a concept for
combining cooperative objects and services in a shared workspace. In
this paper we present the concept of distributed knowledge spaces,
derived from the well-proven concept of virtual knowledge
spaces. Besides the conceptual approach, this paper introduces the
architecture for a peer-to-peer application providing the required
scalability and the basic mechanisms for realizing
distributed knowledge spaces. |
|
Title: |
FRAMEWORK FOR HIERARCHICAL MOBILE
AGENTS: TOWARD SERVICE-ORIENTED AGENT COMPOUND |
Author(s): |
Fuyuki Ishikawa, Nobukazu Yoshioka,
Yasuyuki Tahara and Shinichi Honiden |
Abstract: |
The hierarchical mobile agent model is an
extension of the mobile agent model. In this model, an agent can
migrate into another agent and form a parent-child relationship (the
accepting agent becomes the parent). This model enables agents to
form syntheses that integrate their functions. In particular, it
enables agents to interact with each other locally, not through remote
connections, and to keep the partnership stable for a long term even
if the agents migrate around. This work proposes the MAFEH framework,
in which control of the parent-child relationship is enhanced and made
easy. The framework includes two features: (1) the Parent-Child
Agreement, which denotes an agreement on the behaviors of a parent and
a child, and (2) the Interaction Partner Description, which is used to
specify synthesis actions separately from the main application logic.
This work also considers application of the framework to multimedia,
where an agent encapsulates a multimedia content and forms syntheses
with various agents encapsulating other contents or providing
additional services. |
|
Title: |
CONTENT PACKAGE ADAPTATION: A WEB
SERVICES APPROACH |
Author(s): |
Ricardo Fraser and Permanand Mohan |
Abstract: |
The IMS Content Packaging
Specification is a format that facilitates the deployment of
discrete units of learning resources based on an XML structure
called a manifest. The contents and structure of a content package
are determined at design time when it is created. Since the package
has been authored for use in a particular instructional setting,
re-purposing the content package to meet the demands of a different
instructional setting is difficult. Although there have been
attempts to improve the flexibility of the package such as using IMS
Simple Sequencing, the adaptation provided is still inadequate. In
this paper we argue that Web Services can be used to facilitate the
dynamic adaptation of a content package so that it can be reused in
diverse instructional scenarios and accessed by additional learners
who otherwise would not be able to utilize it. We present a
framework for adaptation based on Web Services and identify a
representative set of Web Services that could be used for content
package adaptation. We then discuss in detail the Media Integration
and Translation Services for Accessibility (MITSA), a category of
Web Services designed to promote media accessibility of a content
package. Finally, we conclude by highlighting the benefits of the
Web Services approach for content package adaptation. |
|
Title: |
INTEGRATING AGENT TECHNOLOGIES INTO
ENTERPRISE SYSTEMS USING WEB SERVICES |
Author(s): |
Eduardo H. Ramírez and Ramón F. Brena |
Abstract: |
In this work we present a decoupled
architectural approach that allows Software Agents to interoperate
with enterprise systems using Web Services. The solution leverages
existing technologies and standards in order to reduce the
time-to-market and increase the adoption of agent-based
applications. Insights on applications that may be enhanced by the
model are presented. |
|
Title: |
SOFTWARE ARCHITECTURE WITH EMERGENT
SEMANTICS - HOW CAN SYSTEMS BE WEAKLY COUPLED, BUT STRONGLY
REFERENCED |
Author(s): |
Len Yabloko |
Abstract: |
Applying well-known results of
research in non-monotonic reasoning to emergent semantics. |
|
Title: |
ADDING SUPPORT FOR DYNAMIC ONTOLOGIES
TO EXISTING KNOWLEDGE BASES |
Author(s): |
Upmanyu Misra, Zhengxiang Pan and
Jeff Heflin |
Abstract: |
An ontology version needs to be
created when changes are to be made in an ontology while keeping the
basic structure of the ontology more or less intact. It has been
shown that an Ontology Perspective theory can be applied to a set
ontology versions. In this paper, we present a Virtual Perspective
Interface (VPI) based on this theory that ensures that old data is
still accessible through ontology modifications and can be accessed
using new ontologies, in addition to the older ontologies which may
still be in use by legacy applications. We begin by presenting the
problems that must be dealt with when such an infrastructure is to
be created. Then we present possible solutions that may be used to
tackle such problems. Finally, we provide an analysis of these
solutions to support the one that we have implemented. |
|
Title: |
AUCTION BASED SYSTEM FOR ELECTRONIC
COMMERCE TRANSACTION |
Author(s): |
A. R. Dani, V. P. Gulati and Arun K.
Pujari |
Abstract: |
Auctions provide an efficient price
discovery mechanism for sellers and are used for the
sale of a variety of objects. In the last few years, auction-based
protocols have been widely used in electronic commerce, and
auction-based systems have been developed for electronic procurement.
In this paper we propose a system for electronic commerce transactions
that can support electronic procurement as well as help enterprises
sell items. We also consider assignment constraints that may be
required in different commercial transactions, and we
consider forward and reverse auctions. We formulate the problem as a
mixed integer programming problem and propose an algorithm to
obtain an optimum solution and compute the pay-off. The system can
also handle different types of assignment constraints. |
|
Title: |
DYNAMIC COALITION IN AGENT AWARE
ADHOC VIRTUAL P2P INTERCONNECT GRID COMPUTING SYSTEM – A3PVIGRID |
Author(s): |
Avinash Shankar, Chattrakul
Sombattheera, Aneesh Krishna, Aditya Ghose and Philip Ogunbona |
Abstract: |
The fields of distributed computing
and artificial intelligence are much-researched and are as
old as the popularization of the personal computer. The last
few years have been exciting times for researchers and scientists
alike due to the phenomenal advancements in computing and
computational sciences. The field of multi-agent systems [1] tries to
add the “intelligence factor” to goal-based programs, which
communicate and negotiate using agent languages such as KQML [2].
Primary problems such as resource and service discovery models,
load balancing and scheduling, and brokering persist in grid
systems due to bottlenecks such as bandwidth and network traffic in
communication infrastructures, and their associated costs in
fabricating a scalable and cost-effective grid services infrastructure.
This paper is an extension of several architectural
schematics (CBReM [3], CBWeB [4], gridCoED [5], AviGrid [6]) and
load balancing schemes (Eager et al.'s three algorithms [7],
Mitzenmacher's randomness algorithms [8], ICHU [9], CoED [10],
gridCoED [11], AviLoad [12]) previously researched, and provides a
coalition framework for multi-agent-based peer-to-peer grid
computing systems based on a Web/Grid services schematic.
The primary goals of the paper are to apply coalition formation to
agents; to add efficient load balancing and scheduling (the AviLoad
scheduler); and to provide a replacement for resource discovery
models by applying application-oriented directory services and
economic brokering services to the agent-aware ad-hoc p2p virtual
interconnect grid computing system, or A3pvigrid system. |
|
Title: |
NARRATIVE SUPPORT FOR TECHNICAL
DOCUMENTS: FORMALISING RHETORICAL STRUCTURE THEORY |
Author(s): |
Nishadi De Silva and Peter Henderson |
Abstract: |
Business Process Re-engineering (BPR)
is an area that requires a lot of technical documents and an
important feature of a well-written document is a coherent
narrative. Even though computer software has helped authors in many
other aspects of writing, support for narratives is almost
non-existent. Therefore, we introduce CANS (Computer-Aided Narrative
Support), a tool that uses Rhetorical Structure Theory to enhance
the narrative of a document. From this narrative, the tool generates
questions to prompt the author for the content of the document. CANS
also allows the author to explore alternative narratives for a
document. A catalogue of predefined narrative structures for popular
types of documents is also provided. Our tool is still in its
rudimentary stages but sufficiently complete to be demonstrated.
|
|
Title: |
DESIGN AND IMPLEMENTATION OF A
CONTEXT-BASED SYSTEM FOR COMPOSITION OF WEB SERVICES |
Author(s): |
Wassam Zahreddine and Qusay H.
Mahmoud |
Abstract: |
This paper investigates web services
and mobile agents as two individually powerful technologies that,
when combined, provide ease of use and reliability for the
user. Currently businesses are starting to take advantage of web
services, and home users are beginning to see the benefits as well.
Web services have their shortfalls, but with the help of
mobile agents these weaknesses can be overcome. A single Web service
may not be enough to satisfy a user’s requirements. It might be
necessary to combine multiple Web services together (a composite
service) to satisfy a requirement; that is where agents can be used
to compose Web services on behalf of their users. On the other hand,
when agents move around to perform a task on behalf of a user, they
will need to execute a service and such a service might be a Web
service itself. In this paper we discuss a novel approach for
integrating mobile agents and web services, and a proof of concept
implementation. |
|
Title: |
A FRAMEWORK FOR WEB APPLICATIONS
DEVELOPMENT: A SOAP BASED COMMUNICATION PROTOCOL |
Author(s): |
Samar Tawbi, Jean-Paul Bahsoun and
Bilal Chebaro |
Abstract: |
The rapid evolution of interactive
Internet services has led to both a constantly increasing number of
modern Web sites and to an increase in their functionality, which makes them more complicated to build. In this context, we have
proposed a generic approach for Web site development that manages
the operational content of this kind of applications. A framework
has been defined to support the development of web applications’
processing tasks as Web services and the communication protocols
with the users of these services. In this paper, we present the general structure of this framework, focusing on the
communication protocol defined between the users and the system. Our
approach in this protocol addresses universal clients; it is based
on the SOAP protocol, XML language and their related technologies.
It adopts the concept of Web services but uses it for providing code
results rather than information results, as is customary in the Web community. |
|
Title: |
PREDICTING THE PERFORMANCE OF DATA
TRANSFER IN A GRID ENVIRONMENT |
Author(s): |
A.B.M Russel and Savitri Bevinakoppa |
Abstract: |
In a Grid environment, merely implementing a parallel algorithm for data transfer or allocating multiple parallel jobs does not guarantee reliable data transfer. There
is a need to predict the data transfer performance before allocating
the parallel processes on grid nodes. A predictive framework will be
a solution in this scenario. In this paper we propose a predictive
framework for performing efficient data transfer. Our framework
considers different phases for providing information about efficient
and reliable participating nodes in a computational Grid
environment. Our experimental results reveal that multivariable
predictors provide better accuracy compared to univariable
predictors. We observe that the Neural Network prediction technique
provides better prediction accuracy compared to the Multiple Linear
Regression and Decision Regression. Our proposed ranking factor
overcomes the problem of considering fresh participating nodes in
data transfer. |
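The contrast between univariable and multivariable predictors can be illustrated with an ordinary least-squares sketch. The node metrics and numbers below are illustrative assumptions, not the framework's actual inputs:

```python
# Least-squares fit of transfer time from several node metrics,
# using pure-Python normal equations for a small number of features.

def transpose(m):
    return [list(row) for row in zip(*m)]

def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

def solve(a, b):
    """Solve the square system a x = b by Gaussian elimination with pivoting."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def fit(features, targets):
    """Return least-squares weights (intercept first) for a linear predictor."""
    X = [[1.0] + row for row in features]          # prepend bias term
    Xt = transpose(X)
    rhs = [sum(xt[i] * targets[i] for i in range(len(targets))) for xt in Xt]
    return solve(matmul(Xt, X), rhs)

# Hypothetical training data: (bandwidth MB/s, latency ms) -> transfer seconds.
X = [[10.0, 5.0], [20.0, 5.0], [10.0, 50.0], [40.0, 20.0]]
y = [12.0, 7.0, 15.0, 5.0]
w = fit(X, y)
predicted = w[0] + w[1] * 15.0 + w[2] * 10.0   # prediction for a fresh node
```

A univariable predictor would simply drop all but one column of `X`; the multivariable fit can exploit correlations between metrics that a single variable misses.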
|
Title: |
JOB SCHEDULING IN COMPUTATIONAL GRID
USING GENETIC ALGORITHMS |
Author(s): |
Mohsin Saleem and Savitri Bevinakoppa |
Abstract: |
The computational Grid is a
collection of heterogeneous computing resources connected via
networks to provide computation for the high-performance execution
of applications. To achieve this high-performance, an important
factor is the scheduling of the applications/jobs on the compute
resources. Scheduling of jobs is challenging because of the
heterogeneity and dynamic behaviour of the Grid resources. Moreover
the jobs to be scheduled also have varied computational
requirements. In general the scheduling problem is NP-complete. For
such problems, Genetic Algorithms (GAs) are reckoned as useful tools
to find high-quality solutions. In this paper, a customised form of
GAs is used to find suboptimal schedules for the execution of
independent jobs, with no inter-communications, in the computational
Grid environment with the objective of minimising the makespan.
Further, in the GA-based approach the solution is encoded as a chromosome, which not only represents the allocation of the jobs onto the resources but also specifies the order in which the jobs have to be executed. Simple genetic operators, i.e., crossover and mutation, are used. Selection is done using Tournament Selection and Elitism strategies. It was observed that
the specification of order of the jobs to be executed on the Grid
resources played a significant role in minimising the makespan. The
results obtained from the experiments performed were also compared
with other heuristics and the GA-based approach by other researchers
for job-scheduling in the computational Grid environment. It was
observed that the GA-based approach used in this paper was able to
achieve much better performance in terms of makespan. |
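The chromosome encoding described above can be sketched as follows. The job lengths, resource speeds and GA parameters are illustrative assumptions, not the paper's experimental setup:

```python
import random

# A chromosome is an ordered list of (job, resource) genes: it fixes both
# the allocation of jobs to resources and the order in which they run.

JOB_LEN = [4, 8, 2, 6, 3, 9, 5, 7]      # work units per job (hypothetical)
SPEED = [1.0, 2.0, 4.0]                 # work units/sec per resource (hypothetical)

def makespan(chrom):
    finish = [0.0] * len(SPEED)
    for job, res in chrom:              # list order = execution order
        finish[res] += JOB_LEN[job] / SPEED[res]
    return max(finish)

def random_chrom(rng):
    order = list(range(len(JOB_LEN)))
    rng.shuffle(order)
    return [(j, rng.randrange(len(SPEED))) for j in order]

def crossover(a, b, rng):
    """Order-preserving crossover: keep a prefix of `a`, fill in b's order."""
    cut = rng.randrange(1, len(a))
    head = a[:cut]
    used = {j for j, _ in head}
    return head + [(j, r) for j, r in b if j not in used]

def mutate(chrom, rng):
    c = chrom[:]
    i = rng.randrange(len(c))
    c[i] = (c[i][0], rng.randrange(len(SPEED)))   # reassign one job's resource
    j, k = rng.randrange(len(c)), rng.randrange(len(c))
    c[j], c[k] = c[k], c[j]                       # swap execution order
    return c

def tournament(pop, rng, k=3):
    return min(rng.sample(pop, k), key=makespan)

def evolve(generations=100, pop_size=30, seed=1):
    rng = random.Random(seed)
    pop = [random_chrom(rng) for _ in range(pop_size)]
    for _ in range(generations):
        elite = min(pop, key=makespan)            # elitism: best survives
        nxt = [elite]
        while len(nxt) < pop_size:
            child = crossover(tournament(pop, rng), tournament(pop, rng), rng)
            if rng.random() < 0.3:
                child = mutate(child, rng)
            nxt.append(child)
        pop = nxt
    return min(pop, key=makespan)

best = evolve()
```

In this toy model the jobs are independent, so only the allocation affects the makespan; in the paper's setting the execution order encoded in the chromosome matters as well.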
|
Title: |
IMPLEMENTING A DYNAMIC PRICING SCHEME
FOR QOS ENABLED IPV6 NETWORKS |
Author(s): |
El-Bahlul Fgee, Shyamala Sivakumar,
W.J. Phillips and W. Robertson and J. Kenny |
Abstract: |
Currently the Internet based on IP
networks supports a single best-effort service. In this scheme, all
packets are queued and forwarded with the same priority. No
guarantees are made that a given packet will actually reach its destination, much less arrive on time (Borella, 2004). However,
many Electronic Commerce applications make use of the Internet as a
transport infrastructure because of its reach-ability, popularity
and cost efficiency. Typically, these applications are delay and
loss sensitive and the packet may be encrypted for security reasons.
Challenges faced by ISPs supporting e-commerce traffic include
enhancing their traffic flow handling capabilities, speeding the
processing of these packets at core routers, and incorporating
Quality of Service (QoS) methods to differentiate between traffic
flows of different classes. These schemes add to the infrastructure
costs of network providers which can be recovered by introducing
extra charges for traffic requiring special handling. Many pricing
schemes have been proposed for QoS-enabled networks. However,
integrated pricing and admission control has not been studied in
detail. In this paper a dynamic pricing model is integrated with an
IPv6 QoS manager to study the effects of increasing traffic flows
rates on the increased cost of delivering high priority traffic
flows. The pricing agent that is part of the QoS manager assigns the
prices for each traffic flow accepted by the domain manager. These
prices are dynamically calculated according to the network status.
Combining the pricing strategy with the QoS manager allows only
higher priority traffic packets that are willing to pay more to be
processed during congestion. This approach is flexible and scalable
as end-to-end pricing is decoupled from the network core and core
nodes are not involved in QoS decisions and reservations. |
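The coupling of dynamic pricing and admission control might be sketched as below. The price curve, class rates and thresholds are assumptions for illustration, not the scheme evaluated in the paper:

```python
# Pricing agent coupled to admission control: prices rise with utilisation,
# and during congestion only flows willing to pay the premium are admitted.

BASE_PRICE = {"gold": 3.0, "silver": 2.0, "best_effort": 1.0}  # per-class base rates (hypothetical)

def current_price(cls, utilisation):
    """Price grows with network load; a surge factor kicks in above 70% load."""
    surge = 1.0 + 4.0 * max(0.0, utilisation - 0.7)
    return BASE_PRICE[cls] * surge

def admit(cls, willingness_to_pay, utilisation):
    """Admission control: accept a flow only if it pays the current price."""
    return willingness_to_pay >= current_price(cls, utilisation)

quiet = current_price("gold", 0.5)       # below the congestion threshold
congested = current_price("gold", 0.95)  # surge pricing applies
```

Because the price is computed at the domain manager from current network status, core nodes never participate in the pricing decision, matching the decoupling the abstract describes.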
|
Title: |
PEDAGOGICAL FRAMEWORKS AND
TECHNOLOGIES FOR ONLINE NETWORK LABORATORY INSTRUCTION - RESEARCH
ISSUES IN MATCHING TECHNOLOGY TO PEDAGOGICAL PROCESSES |
Author(s): |
Shyamala Sivakumar |
Abstract: |
We investigate the technological
issues involved in designing an electronic learning system that
adapts pedagogical approaches and best practice instructional
strategies to model, design and implement a blended virtual learning
space. We discuss technology issues that are challenging in the
design and implementation of a modular integrated web environment
(IWE) used to deliver online network laboratory learning. We show
that the IWE must incorporate an online laboratory tutorial system
for guided practice to elicit performance from the learner. Also,
the learning space must be designed to match the quality of service
(QoS) requirements to the interaction taking place in the learning
space and the characteristics of the delivery media must be matched
to the learning process. This approach promotes good student interaction and infrastructure management. |
|
Title: |
TRANSCENDING TAXONOMIES WITH GENERIC
AND AGENT-BASED E-HUB ARCHITECTURES |
Author(s): |
George Kontolemakis, Marisa Masvoula,
Panagiotis Kanellis and Drakoulis Martakos |
Abstract: |
If effectively utilized, modern
technologies such as ontologies and software agents hold the
potential to inform the design of the next generation of E-Hubs. In
terms of their evolution, we argue that taxonomies as tools hold the
danger of stifling innovation as they may implicitly impose
boundaries on the problem domain. We proceed to use one that is
well-referenced in the literature and identify a number of issues
that can be seen as limiting factors, proposing a generic and
agent-mediated architecture that holds the potential of addressing
them. |
|
Title: |
TESTING WEB APPLICATIONS
INTELLIGENTLY BASED ON AGENT |
Author(s): |
Lei Xu and Baowen Xu |
Abstract: |
Web application testing involves numerous and complicated testing objects, methods and processes. To improve testing efficiency, the level of automation and intelligence of the test execution should be enhanced. Taking the particularities of Web applications into account, we first analyse the necessity and feasibility of automatic and intelligent execution of Web application testing. Then, building on related work, we describe and thoroughly analyse the test execution process in detail, so as to determine the steps and flows of the testing execution along with the techniques and tools to adopt. Next, we improve the capture-replay technique to fit the dynamic character of Web applications, and adopt an intelligent agent to monitor and manage the whole test execution and handle its exceptions. In this way, Web application testing can be carried out automatically and intelligently. |
|
Area 5 - Human-Computer
Interaction
|
Title: |
OPENDPI: A TOOLKIT FOR DEVELOPING
DOCUMENT-CENTERED ENVIRONMENTS |
Author(s): |
Olivier Beaudoux and Michel
Beaudouin-Lafon |
Abstract: |
Documents are ubiquitous in modern
desktop environments, yet these environments are based on the notion
of application rather than document. As a result, editing a document
often requires juggling with several applications to edit its
different parts. This paper presents OpenDPI, an experimental
user-interface toolkit designed to create document-centered
environments, therefore getting rid of the concept of application.
OpenDPI relies on the DPI (Document, Presentation, Instrument)
model: documents are visualized through one or more presentations,
and manipulated with interaction instruments. The implementation is
based on a component model that cleanly separates documents from
their presentations and from the instruments that edit them. OpenDPI
supports advanced visualization and interaction techniques such as
magic lenses and bimanual interaction. Document sharing is also
supported with single display groupware as well as remote shared
editing. The paper describes the component model and illustrates the
use of the toolkit through concrete examples, including multiple
views and concurrent interaction. |
|
Title: |
WAVELETS TRANSFORMS APPLIED TO
TERMITE DETECTION |
Author(s): |
Carlos G. Puntonet, Isidro Lloret
Galiana and Juan Jose de la Rosa |
Abstract: |
In this paper we present a study
which shows the possibility of using wavelets to detect transients
produced by termites. Identification has been developed by means of
analyzing the impulse response of three sensors undergoing natural
excitations. De-noising by wavelets exhibits good performance up to
SNR=-30 dB, in the presence of white Gaussian noise. The test can be
extended to similar vibratory or acoustic signals resulting from
impulse responses. |
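Wavelet de-noising of this kind can be sketched with a one-level Haar transform and soft thresholding of the detail coefficients. A real detector would use a deeper decomposition and a data-driven threshold; the signal and threshold below are illustrative:

```python
import math

# One-level orthonormal Haar transform: pairwise averages and differences.

def haar_forward(signal):
    avg = [(a + b) / math.sqrt(2) for a, b in zip(signal[::2], signal[1::2])]
    det = [(a - b) / math.sqrt(2) for a, b in zip(signal[::2], signal[1::2])]
    return avg, det

def soft_threshold(coeffs, t):
    """Shrink coefficients toward zero; small (noise) coefficients vanish."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

def haar_inverse(avg, det):
    out = []
    for a, d in zip(avg, det):
        out += [(a + d) / math.sqrt(2), (a - d) / math.sqrt(2)]
    return out

def denoise(signal, threshold):
    avg, det = haar_forward(signal)
    return haar_inverse(avg, soft_threshold(det, threshold))

# Hypothetical sensor trace: small noise everywhere, a transient at samples 4-5.
noisy = [0.1, -0.1, 0.05, -0.02, 5.0, 4.9, 0.03, -0.04]
clean = denoise(noisy, 0.2)
```

After thresholding, the low-amplitude noise pairs collapse toward zero while the large transient survives, which is the behaviour the detection scheme relies on.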
|
Title: |
EFFICIENT JOIN PROCESSING FOR COMPLEX
RASTERIZED OBJECTS |
Author(s): |
Hans-Peter Kriegel, Peter Kunath,
Martin Pfeifle, Matthias Renz |
Abstract: |
One of the most common query types in
spatial database management systems is the spatial intersection
join. Many state-of-the-art join algorithms use minimal bounding
rectangles to determine join candidates in a first filter step. In
the case of very complex spatial objects, as used in novel database
applications including computer-aided design and geographical
information systems, these one-value approximations are far too
coarse, leading to high refinement cost. This expensive refinement cost can be considerably reduced by applying adequate compression techniques. In this paper, we introduce an efficient spatial join suitable for joining sets of complex rasterized objects. Our join is based on a cost-based decomposition algorithm which generates
replicating compressed object approximations taking the actual data
distribution and the used packer characteristics into account. The
experimental evaluation on complex rasterized real-world test data
shows that our new concept accelerates the spatial intersection join
considerably. |
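The filter-and-refine pattern that the minimal bounding rectangles support can be sketched as follows. Modelling a rasterized object as a set of occupied grid cells is a simplifying assumption:

```python
# Filter-and-refine intersection join: MBRs (the "one-value approximations")
# prune candidate pairs cheaply; only survivors pay for the exact test.

def mbr(cells):
    xs = [x for x, _ in cells]
    ys = [y for _, y in cells]
    return (min(xs), min(ys), max(xs), max(ys))

def mbr_overlap(a, b):
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def spatial_join(objects_r, objects_s):
    """Return id pairs whose rasters truly share a cell, plus refinement count."""
    boxes_s = [(j, mbr(s)) for j, s in enumerate(objects_s)]
    result, refined = [], 0
    for i, r in enumerate(objects_r):
        box_r = mbr(r)
        for j, box_s in boxes_s:
            if mbr_overlap(box_r, box_s):          # filter step
                refined += 1
                if r & objects_s[j]:               # refinement: exact cell test
                    result.append((i, j))
    return result, refined

R = [{(0, 0), (1, 0)}, {(5, 5)}]
S = [{(1, 0), (2, 0)}, {(9, 9)}]
pairs, tested = spatial_join(R, S)
```

With very complex objects the coarse MBRs admit many false candidates, which is exactly why the paper replaces them with finer, compressed approximations before refinement.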
|
Title: |
MODELS’ SPECIFICATIONS TO BUILD
ADAPTATIVE MENUS |
Author(s): |
Gérard Kubryk |
Abstract: |
Web engineering has become increasingly important in recent years. The research community has identified
the need to offer new methods and methodologies in order to build a
good environment to develop web information systems and to offer to
the users menus which are perfectly adapted to their requirements.
Web and audio services have to provide the best services possible.
To achieve this goal, they have to find out what the customers are
doing without altering their privacy. This paper presents two
classes of models, mathematical and learning models, and four
possible ways to manage and build adaptive menus. These methods
are gravity analogy, learning by ants analogy, learning by sanction
reinforcement, learning by genetic algorithm. Later on, a comparison
of these four models will be made based on two criteria: efficiency
(answering time and computer load) and accuracy with customer
expectation. The final step will be to carry out a psychological analysis of user activity, namely “what is my perception of time within and between service consultations”, to determine ways to set the parameters of such a system. |
|
Title: |
AUTOJOIN: PROVIDING FREEDOM FROM
SPECIFYING JOINS |
Author(s): |
Terrence Mason, Lixin Wang and Ramon
Lawrence |
Abstract: |
SQL is not appropriate for casual
users as it requires understanding relational schemas and how to
construct joins. Many new query interfaces insulate users from the
logical structure of the database, but they require the automatic
discovery of valid joins. Although specific query interfaces
implement join determination algorithms, they are tied to the
specific language and typically limited in scope or scalability.
AutoJoin provides a general solution to the query inference problem,
which allows more complex queries to be executed on larger and more
complicated schemas. It enumerates query interpretations at least an
order of magnitude faster than previous methods. In addition, the
engine reduces the number of queries considered ambiguous.
Experimental results demonstrate that query inference can be
efficiently performed on large, complex schemas allowing simpler
access to databases through keyword search or conceptual query
languages. AutoJoin also provides programmers with a tool to
iteratively create SQL queries without requiring explicit knowledge
of the structure of a database. |
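Join discovery of the kind AutoJoin performs can be sketched as a shortest-path search over the foreign-key graph. The schema and the BFS strategy below are illustrative assumptions, not AutoJoin's actual algorithm:

```python
from collections import deque

# Treat foreign keys as edges of a schema graph and search for the
# shortest join path between the tables a query touches.

FOREIGN_KEYS = [                      # (table, column, referenced table) - hypothetical schema
    ("orders", "customer_id", "customers"),
    ("order_items", "order_id", "orders"),
    ("order_items", "product_id", "products"),
]

def schema_graph(fks):
    graph = {}
    for src, col, dst in fks:
        graph.setdefault(src, []).append((dst, (src, col, dst)))
        graph.setdefault(dst, []).append((src, (src, col, dst)))  # joins are bidirectional
    return graph

def join_path(start, goal, fks):
    """Shortest sequence of FK edges linking two tables, or None."""
    graph = schema_graph(fks)
    queue, seen = deque([(start, [])]), {start}
    while queue:
        table, path = queue.popleft()
        if table == goal:
            return path
        for nxt, edge in graph.get(table, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [edge]))
    return None

path = join_path("customers", "products", FOREIGN_KEYS)
```

When several paths of equal length exist, the query is ambiguous; enumerating and ranking these interpretations efficiently is the hard part that AutoJoin addresses.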
|
Title: |
DYNAMIC USER INTERFACES FOR
SEMI-STRUCTURED CONVERSATIONS |
Author(s): |
James E. Hanson, Prabir Nandi,
Santhosh Kumaran and Paul Foreman |
Abstract: |
The growing complexity of
application-to-application interactions has motivated the
development of an architectural model with first-class support for
multi-step, stateful message exchanges—i.e., conversations—and a
declarative means of specifying conversational protocols. In this
paper, we extend this architectural model to encompass UI-enabled
devices, thereby enabling it to cover human-to-application
conversations as well. This permits either participant to be
human-driven, automated, or anywhere in between, without affecting
the nature of the interaction or of the other participant. The
UI-enabled conversational model also reduces the difficulty of
developing conversational applications, providing significant
benefits both for UI and for application developers. We describe the
architecture of a UI-enabled conversational system that supports a
variety of user devices, and includes a means by which UI markup may
be automatically generated from the conversational protocols used.
We go through a sample application currently implemented using a
commercially available application server, and further describe a
graphical tool for editing and testing conversational protocols,
that significantly eases the protocol development process.
|
|
Title: |
IMPLEMENTING MULTILINGUAL INFORMATION
FRAMEWORK IN APPLICATIONS USING TEXTUAL DISPLAY |
Author(s): |
Satyendra Gupta, Samuel Cruz-Lara and
Laurent Romary |
Abstract: |
This paper presents the implementation of MLIF (Multilingual Information Framework), a high-level model for describing multilingual data across a wide range of possible applications in the translation/localization process within several multimedia domains (e.g. broadcasting of interactive multimedia applications), natural language interfaces, and geographical information systems for multilingual communities. |
|
Title: |
AN INTERFACE USABILITY TEST FOR THE
EDITOR MUSICAL |
Author(s): |
Irene K. Ficheman, Andréia R.
Pereira, Diana F. Adamatti, Ivan C. A. de Oliveira, Roseli D. Lopes,
Jaime S. Sichman, José R. de Almeida Amazonas and Lucia V. L.
Filgueiras |
Abstract: |
This paper presents a usability test conducted for a music composition edutainment software package called Editor Musical. The software, which offers creative virtual learning
environments, has been developed in collaboration between the
University of São Paulo, Laboratório de Sistemas Integráveis (LSI) da Escola Politécnica da Universidade de São Paulo (USP) and the São Paulo State Symphony Orchestra, Coordenadoria de Programas Educacionais da Orquestra Sinfônica do Estado de São Paulo (OSESP).
This paper focuses on the description of a usability test applied to
children between 8 and 9 years old. The goal of the test was to verify the software's ease of use and to elaborate a final report that will guide the development of new, improved versions of the software. |
|
Title: |
WHY ANTHROPOMORPHIC USER INTERFACE
FEEDBACK CAN BE EFFECTIVE AND PREFERRED BY USERS |
Author(s): |
Pietro Murano |
Abstract: |
This paper addresses and resolves an
interesting question concerning the reason for anthropomorphic user
interface feedback being more effective (in two of three contexts)
and preferred by users compared to an equivalent non-anthropomorphic
feedback. Firstly the paper will summarise the author’s three
internationally published experiments and results. These will show
statistically significant results indicating that in two of the
three contexts anthropomorphic user interface feedback is more
effective and preferred by users. Secondly some of the famous work
by Reeves and Nass will be introduced. This basically shows that
humans behave in a social manner towards computers through a user
interface. Thirdly, the reasons for the results obtained by the author are shown to be inextricably linked to the work of Reeves and Nass. It
can be seen that the performance results and preferences are due to
the subconscious social behaviour of humans towards computers
through a user interface. The conclusions reported in this paper are
of significance to user interface designers as they allow one to
design interfaces which match more closely our human
characteristics. These in turn would enhance the profits of a
software house. |
|
Title: |
VISUAL DATA MINING TOOLS: QUALITY
METRICS DEFINITION AND APPLICATION |
Author(s): |
Edwige Fangseu Badjio and François
Poulet |
Abstract: |
The main purpose of this work is to
integrate HCI (Human Computer Interaction) requirements in visual
data mining tools engineering. We present the definition of
metrics/measurements in order to improve the quality of those tools at every step of, or after, the development process. On the basis of
these metrics/measurements, we have derived a questionnaire for the
evaluation of the utility, the usability and the acceptability of
visual data mining environments. A case study enables us to
concretely materialize the contribution of the measurements and also
to detect and explain (design and usage) errors. We contribute thus
to the improvement of the quality of this type of software.
|
|
Title: |
INTERACTIVE DATAMINING PROCESS BASED
ON HUMAN-CENTERED SYSTEM FOR BANKING MARKETING APPLICATIONS |
Author(s): |
Olivier Couturier, Engelbert Mephu
Nguifo and Brigitte Noiret |
Abstract: |
Knowledge Discovery in Databases
(KDD) is the new hope for banking marketing due to the increasing
collection of large databases. There is a paradox because the bank
must improve its customer loyalty development policy using methods that cannot handle large quantities of data. Our current work results from a study we conducted on an association rule mining problem in banking marketing. Our first encouraging results steered our work towards hierarchical association rule mining, using a user-driven approach rather than an automatic one. The user is at the heart of the process, playing the role of an evolutionary heuristic: the mining process is steered by the expert's intermediate choices. The final aim of our approach is to combine the advantages of the two methods to decrease both the number of generated rules and, especially, the expertise time. We use visual
datamining in order to propose powerful and adapted tools for the
banking marketing service. This paper presents the results of our
research step for including the user into banking marketing
applications. |
|
Title: |
EVALUATION CONCEPT FOR INTEGRATED
KNOWLEDGE AND CO-OPERATION PLATFORMS |
Author(s): |
Claudia Loroff |
Abstract: |
This article introduces a concept for
evaluating integrated knowledge and co-operation platforms which was
derived from systematic examination of computer supported
co-operative work (CSCW) and knowledge management systems and from
research of available evaluation approaches to CSCW and knowledge
management systems. It consists of various evaluation perspectives
(individual, group, organisation, environment and technical system),
thereby introducing comprehensive objectives, specifying topics,
exemplary items, and potential survey methods for these
perspectives. Considering experiences made with this concept,
potential implementation scenarios are introduced. |
|
Title: |
DESIGN PRINCIPLES FOR DESKTOP 3D USER
INTERFACES - CASE MOVIE PLAZA |
Author(s): |
Marja Tyynelä, Timo Jokela, Minna
Isomursu, Petri Kotro and Olli Mannerkoski |
Abstract: |
3D user interfaces in desktop
applications are becoming more common and available to all users.
However, not many guidelines are available to support desktop 3D
user interface design. We derived a set of design principles from
the practices of a company specialized in 3D graphics and user
interfaces and made a prototype to evaluate these principles. The
results of our evaluations show that some of the principles - on the
structure of the space, navigation and interaction - helped users
while some others did not have the desired impact. We conclude that
design guidelines for 3D user interfaces can be derived from the
designers’ practices but research is needed to make principles more
specific and to test their effect more precisely. |
|
Title: |
TWO SIMPLE ALGORITHMS FOR DOCUMENT
IMAGE PREPROCESSING - MAKING A DOCUMENT SCANNING APPLICATION MORE
USER-FRIENDLY |
Author(s): |
Aleš Jaklič and Blaž Vrabec |
Abstract: |
Automatic document scanning is a useful part of an information system at personal identification checkpoints such as airports, border crossings, banks, etc. Current applications usually require a great deal of care from the scanner operators: the document has to be positioned horizontally, and special care must be taken to detect corrupt scans that can
occur. In this work we describe ideas for two independent algorithms
for the document rotation correction and automatic detection of
corrupt scans. One algorithm relies on the Hough transformation and
the other on brightness gradient of the image. The output of each
algorithm is a cropped image of the document in horizontal
orientation, which can be used as input for further processing (such
as OCR). An estimate of scan corruption is also returned. We also show some test results of the algorithm prototypes written in the MATLAB environment. |
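The Hough-based rotation estimate can be sketched as follows. The voting resolution and the synthetic edge points are illustrative assumptions, not the prototype's actual parameters:

```python
import math

# Hough idea: every edge pixel votes for all (angle, offset) lines through
# it; the most-voted angle gives the dominant edge direction, from which
# the document's skew can be corrected.

def hough_skew(points, angle_step_deg=1, rho_step=1.0):
    votes = {}
    for x, y in points:
        for a in range(0, 180, angle_step_deg):
            theta = math.radians(a)
            rho = x * math.cos(theta) + y * math.sin(theta)   # normal form of a line
            key = (a, round(rho / rho_step))
            votes[key] = votes.get(key, 0) + 1
    (angle, _), _ = max(votes.items(), key=lambda kv: kv[1])
    return angle

# A horizontal document edge (y = 0): its normal points straight up, at 90 degrees.
flat = [(float(x), 0.0) for x in range(50)]
angle = hough_skew(flat)
```

For a tilted document, the returned angle deviates from 90 degrees by the skew, and rotating by that deviation restores horizontal orientation.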
|
Title: |
ULTRASONIC SENSORS FOR THE ELDERLY
AND CAREGIVERS IN A NURSING HOME |
Author(s): |
Toshio Hori and Yoshifumi Nishida |
Abstract: |
Workloads on caregivers in nursing homes are increasing as the imbalance between the number of elderly
people and caregivers becomes larger. Excessive workloads on
caregivers must be reduced not only because they become burdens for
caregivers but also because they deteriorate the quality of nursing
care. One such workload is the routine patrol for monitoring the status of the elderly and detecting accidents involving them as soon as possible. If the number of unnecessary patrols is minimized,
caregivers will be able to spend their time on high touch care and
humane communication. The authors have been developing an ultrasonic 3D tag system which locates ultrasonic tags in real time, and have employed the system in a nursing home to monitor the positions of the elderly people. If the system locates the elderly people
continuously and robustly, and if it can notify caregivers about the
occurrence of accident-prone activities promptly, caregivers will be
relieved of their unnecessary workloads. This paper describes the
research background, system overview, system implementations, and
experimental results. |
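The geometry behind locating a tag from ultrasonic ranges can be sketched by trilateration. The receiver layout below is an illustrative assumption, not the deployed system's configuration:

```python
import math

# With distances from a tag to four non-coplanar receivers, subtracting the
# sphere equations pairwise yields a linear system for the tag position.

RECEIVERS = [(0.0, 0.0, 3.0), (4.0, 0.0, 3.0), (0.0, 4.0, 3.0), (4.0, 4.0, 2.0)]

def solve3(a, b):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(m[r][c]))
        m[c], m[p] = m[p], m[c]
        for r in range(c + 1, 3):
            f = m[r][c] / m[c][c]
            for k in range(c, 4):
                m[r][k] -= f * m[c][k]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (m[r][3] - sum(m[r][k] * x[k] for k in range(r + 1, 3))) / m[r][r]
    return x

def locate(distances):
    """Trilaterate: subtract the first sphere equation from the others."""
    (x0, y0, z0), d0 = RECEIVERS[0], distances[0]
    A, b = [], []
    for (xi, yi, zi), di in zip(RECEIVERS[1:], distances[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0), 2 * (zi - z0)])
        b.append(d0**2 - di**2 + xi**2 + yi**2 + zi**2 - x0**2 - y0**2 - z0**2)
    return solve3(A, b)

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

tag = (1.0, 2.0, 0.8)                               # true position (for demo)
measured = [dist(tag, r) for r in RECEIVERS]        # ideal range measurements
estimate = locate(measured)
```

A real deployment measures the ranges from ultrasonic time-of-flight and must cope with noise and occlusion, typically by using more receivers and a least-squares solution.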
|
Title: |
DISTANCE LEARNING BY INTELLIGENT
TUTORING SYSTEM. PART I: AGENT-BASED ARCHITECTURE FOR USER-CENTRED
ADAPTIVITY |
Author(s): |
Antonio Fernández-Caballero, José Manuel Gascueña, Federico Botella and Enrique Lazcorreta |
Abstract: |
Agent technology has been suggested
by experts to be a promising approach to fully extend Intelligent
Tutoring Systems (ITS). By using intelligent agents in an ITS
architecture it is possible to obtain an individual tutoring system
adaptive to the needs and characteristics of every student. The
general architecture of the ITS proposed is formed by the three
components that characterize an ITS – the Student Model, the Domain
Model, and the Education Model. In the Student Model the knowledge
that the system has about the student (profile and interaction with
the system) is represented. In the Domain Model the knowledge about
the contents to be taught is stored. Precisely, in this model four
autonomous agents – the Preferences Agent, the Accounting Agent, the
Exercises Agent and the Tests Agent - have been defined. Lastly, the
Education Model provides the functionality that the teacher needs.
Through this module, the teacher changes his preferences, gives reinforcement to the students, obtains statistics and consults the subject matter. |
|
Title: |
DISTANCE LEARNING BY INTELLIGENT
TUTORING SYSTEM. PART II: STUDENT/TEACHER ADAPTIVITY IN AN
ENGINEERING COURSE |
Author(s): |
Antonio Fernández-Caballero, José Manuel Gascueña, Enrique Lazcorreta and Federico Botella |
Abstract: |
Intelligent Tutoring Systems (ITS)
have proven their worth in multiple ways and in multiple domains in
Education. In this article the application of an Intelligent
Tutoring System to an Engineering Course is introduced. The paper
also introduces an explanation of how the course adapts to the
students as well as to the teachers. User adaptation is provided by
means of the so-called pedagogical strategies, which among others
specify how to proceed in showing the contents of the matter for a
better assimilation of the knowledge by the student. Thus, in this
paper the adaptation mechanisms implemented in the ITS, which enable the students to learn better and the professors to teach better, are explained in detail. |
|
Title: |
IDENTIFYING USABILITY ISSUES WITH AN
ERP IMPLEMENTATION |
Author(s): |
Heikki Topi, Wendy Lucas and Tamara
Babaian |
Abstract: |
Enterprise Resource Planning (ERP)
systems hold great promise for integrating business processes and
have proven their worth in a variety of organizations. Yet the gains
that they have enabled in terms of increased productivity and cost
savings are often achieved in the face of daunting usability
problems. While one frequently hears anecdotes about the
difficulties involved in using ERP systems, there is little
documentation of the types of problems typically faced by users. The
purpose of this study is to begin addressing this gap by
categorizing and describing the usability issues encountered by one
division of a Fortune 500 company in the first years of its
large-scale ERP implementation. Recognizing and understanding these
issues is a critical first step that must be undertaken in order to
address ERP usability problems, which cause decreases in
productivity and system usage, increases in costs for training and
support, and ultimately impact the effectiveness of the entire
installation. This study also demonstrates the promise of using
collaboration theory to evaluate usability characteristics of
existing systems and to design new systems. Given the impressive
results already achieved by some corporations with these systems,
imagine how much more would be possible if understanding how to use
them weren’t such an overwhelming task. |
|
Title: |
HEDONIC MOTIVATIONS IN THE WEB SITE:
EFFECTS OF MUSIC ON CONSUMER RESPONSES IN AN ONLINE SHOPPING
ENVIRONMENT |
Author(s): |
Carlota Lorenzo, Miguel Ángel Gomes,
Alejandro Mollá and Javier Garcia |
Abstract: |
Because of the increasing competitive
retail industry environment, retailers must be certain that their
stores are up-to-date and suggest an image that is appealing to
their target markets (Baker et al., 1992). In fact, one of the most
significant features of the total product is the place where it is
bought or consumed. In some cases, the place, or more specifically
the place atmosphere, is more influential than the product itself in
the purchase decision (Kotler, 1973-1974). A considerable body of
literature has been accumulated on atmospheric effects in
traditional stores; however, the impact of these factors in online
retail environments has not yet been well documented (Eroglu et al.,
2003). Some studies posit that although the instrumental qualities
or utilitarian elements of online shopping (e.g. ease and
convenience) are important predictors of consumers' attitudes and
purchase behaviours, the hedonic aspects of the web medium could
play an equally important role in shaping these behaviours (Childers
et al., 2001). This study analyzes the influence of a hedonic
atmospheric cue, specifically music (Eroglu et al., 2003; Childers
et al. (2001)), on shoppers' cognitive, emotional and behavioural
responses in an online apparel shopping environment. A
between-subjects experimental design is used to test our hypotheses.
In addition we developed an integrated methodology that allows the
simulation, tracking and recording of subjects’ behaviour within an
online shopping environment under different atmospheric conditions.
|
|
Title: |
LINC: A WEB-BASED LEARNING TOOL FOR
MIXED-MODE LEARNING |
Author(s): |
Tang-Ho Lê and Jean Roy |
Abstract: |
In this paper we discuss some basic
theories of learning and e-Learning. In the light of the appropriate theories, we then describe the components and essential features of our e-Learning system, the Learn IN Context System (LINC). This tool is intended for use in institutional courses in mixed-mode learning. Finally, we report the initial experimentation
with this tool and some early results and evaluation. |
|
Title: |
USING MPEG-BASED TECHNOLOGIES FOR
BUILDING PERSONALIZED MULTIMEDIA INTERACTIVE ENVIRONMENTS WITH
CONTEXT-AWARENESS REQUIREMENTS: DEVELOPMENT OF AN APPLICATION FOR
INTERACTIVE TELEVISION |
Author(s): |
João Benedito dos Santos Junior, Iran Calixto Abrão, Thelma Virgínia Rodrigues and Mario Guglielmo |
Abstract: |
We are using MPEG-4 technology to
build applications to be used in real environments. One of these applications allows a teacher to send real-time lessons to his/her students or to record them. The Tele-Learning System under
development includes: a) on the teacher side: a recording
workstation with two cameras, microphone, specific MPEG-4 software;
b) an IP network or an MPEG-2 TS satellite link; c) on the student
side: a PC with special MPEG software, and a special board if
receiving from satellite. This research focuses on the broadcast
scenario where a satellite board is used in a PC. Thus, the work
covers how to send the lesson even to a student that is not
connected to the intranet, using a satellite link, either over IP
embedded in the MPEG-2 TS or directly over MPEG-2 TS. For the
security part it may be necessary to have a low-band return channel
implemented, for example, through a mobile phone. The satellite
environment may require the redesign of the User Interface and the
retargeting of the elementary streams parameters in order to match
specific requirements and features of the medium. At this point, new
interaction criteria have been established from distribution of
MPEG-4 media objects and MPEG-7 scene descriptions on network
environments. Furthermore, context-awareness aspects are being added
for providing personalization on the teaching-learning environment
and MPEG-21 is being studied for applying to new multimedia
requirements. |
|
Title: |
MANAGING INTER-ACTIVITIES IN CSCW:
SUPPORTING USERS EMERGING NEEDS IN THE COOLDA PLATFORM |
Author(s): |
Gregory Bourguin and Arnaud
Lewandowski |
Abstract: |
The CSCW research domain is still
searching for better ways to support users' needs. Some groupware
systems propose global, integrated environments supporting
collaborative activities, but empirical studies show that these
environments usually fail to support some dimensions of the work. On
the other hand, some groups work with diverse applications that are
unaware of each other. Mainly inspired by results from the social
and human sciences, we believe that a complete CSCW environment
cannot be defined a priori. However, we also believe that a global
CSCW environment is really valuable for users. Taking our
foundations in Activity Theory, we aim at creating a global but
tailorable environment that supports the dynamic integration of
external applications and manages the links between them, i.e. it
manages the inter-activities. This work is realized in the CooLDA
platform. |
|
Title: |
A CONTROLLED EXPERIMENT FOR MEASURING
THE USABILITY OF WEBAPPS USING PATTERNS |
Author(s): |
F. Javier García, María Lozano,
Francisco Montero, Jose Antonio Gallud, Pascual González and Carlota
Lorenzo |
Abstract: |
Usability has become a critical
quality factor of software systems in general, and is especially
important for Web-based applications. Measuring quality is the
key to developing high-quality software, and it is widely recognised
that quality assurance of software products must be assessed
from the early stages of the development process. This paper
describes a controlled experiment carried out to
corroborate whether the patterns associated with a quality model are
closely related to the final Web application's quality. The experiment
is based on the definition of a quality model and the patterns
associated with its quality criteria, to prove that applications
developed using these patterns improve their usability in comparison
with others developed without them. The results of this
experiment demonstrate that the use of these patterns
improves the quality of the final Web application to a high degree.
The experiment is formally based on the recommendations of ISO
9126-4. |
|
Title: |
A FRAMEWORK FOR THE EVALUATION OF
AUTOMOTIVE TELEMATICS SYSTEMS |
Author(s): |
Gennaro Costagliola, Sergio Di
Martino and Filomena Ferrucci |
Abstract: |
The evaluation of interfaces for
in-car communication and information applications is an important
and challenging task. Indeed, it is necessary not only to consider
the user's interaction with the interface but also to understand the
effects of this interaction on driver-vehicle performance. As a
result, there is a strong need for tools and approaches that allow
researchers to effectively evaluate such interfaces while the user is
driving. To address this problem, in this paper we propose a framework
that has been specifically conceived for such evaluation. It is
based on the integration of a suitable car simulator and an in-car
system, and allows us to gather a large amount of data and carry out
repeatable tests in a safe and controlled environment. Moreover, the
proposed solution is inexpensive and quite simple to set up. |
|
Title: |
DESIGNING GEOGRAPHIC ANALYSIS
PROCESSES ON THE BASIS OF THE CONCEPTUAL FRAMEWORK GEOFRAME |
Author(s): |
Cláudio Ruschel, Cirano Iochpe,
Luciana Vargas da Rocha and Jugurta Lisboa F. |
Abstract: |
The investment in geographic
information systems (GIS) is usually justified by their ability to
support the execution of geographic analysis processes (GP). The
conceptual design of a GP makes it independent of a specific GIS
product and enables designers to define the process at a high level
of abstraction using a language that enforces a set of logical
constraints and is yet easy to learn. On the other hand, in order to
support interoperability a GP conceptual model should be
sufficiently generic to allow a GP definition to be translated to
any of the logical data models implemented by existing GIS
commercial products. This paper presents an extension to GeoFrame, a
conceptual GIS framework that supports the conceptual design of
spatio-temporal, geographic databases (GDB). This extension is
actually a conceptual GP data model relying on a set of UML diagrams
as well as on a methodology of how to apply them to analysis process
design. On the basis of the PGeoFrame-A, the definition of a GP
starts by the identification of its associated use cases. Both
control and data flows are described by means of activity diagrams
with the new modeling constructs provided by UML 2.0. Input as well
as output data introduced in the workflow definition are described
in detail through a class diagram. In this way, CASE tools based on
UML 2.0 can be adapted to translate GP conceptual design to the
specific scripts as well as macro definition languages of different
existing GIS products. |
|
Title: |
PERFORMING REAL-TIME SCHEDULING IN AN
INTERACTIVE AUDIO-STREAMING APPLICATION |
Author(s): |
Julien Cordry, Nicolas Bouillot,
Samia Bouzefrane |
Abstract: |
Since 2002, the CEDRIC and IRCAM
have been conducting a project entitled "distributed orchestra",
which aims to coordinate the actors of a musical orchestra
(musicians, sound engineer, listeners) over a network in order to
produce a live concert. At each site (musician), two main components
are active: the sound engine (FTS) and an auto-synchronisation module
(nJam), two modules which must process audio streams in real time and
exchange them via the network. In this paper we propose to schedule
the processes generated by these components using a real-time
scheduling technique. For this purpose, we chose Bossa, a
platform grafted onto the Linux kernel in order to integrate new
real-time schedulers. We show, by using Bossa with an appropriate
real-time scheduling technique, that the application's performance
is improved. |
|
Title: |
INTERNATIONAL STANDARDS AND USABILITY
MEASUREMENT |
Author(s): |
Hema Banati and P.S.Grover |
Abstract: |
The current trend of increased web
usage has highlighted the need for usable websites. A site containing
relevant information may not gain user acceptance if the user finds
it difficult to use. Usability is often measured qualitatively.
However, we feel that a quantifiable measure of usability would
be more useful in comparing different websites. It can also provide
a measurable estimate of the improvement required in a website. This
measure would gain wider acceptability if obtained by applying
international standards of measurement. This paper measures
usability quantitatively using the international standard ISO/IEC TR
9126-2. Metrics specified in the standard are used to measure the
four major characteristics of usability, “Learnability”,
“Operability”, “Understandability” and “Attractiveness”, for an
academic website. It was found that the “Learnability” level of the
website was very low compared to its “Understandability” level.
This does not conform with the standard, which
mentions the latter as an indicator of the former. The
significance and relevance of each metric to the usability of the
website was then examined in this light. The study also highlights
the long-overdue need to standardize the process of usability
measurement. |
|
Title: |
STUDENT'S EVALUATION OF WEB-BASED
LEARNING TECHNOLOGIES IN A HUMAN-COMPUTER INTERACTION COURSE |
Author(s): |
Dina Goren-Bar |
Abstract: |
The human-computer interface (HCI)
field is constantly changing and designers are challenged to develop
simple interactive systems implemented through sophisticated
technology. At Ben-Gurion University, the introductory HCI course
was originally taught in a face-to-face mode and covered theoretical
knowledge on HCI theories, principles and design, and practical
experience in designing and evaluating websites. When it became
apparent from students' course evaluations that they expected the
HCI course to provide them with more hands-on experience with
different types of interaction, communication devices, and design
dilemmas, the course was redesigned. The new course combines
face-to-face lessons, e-learning sessions and web-based
collaborative projects. While there is still room for improvement,
students' evaluations show a significant increase in satisfaction with
the course. |
|
Title: |
A COOPERATIVE INFORMATION SYSTEM FOR
E-LEARNING - A SYSTEM BASED ON WORKFLOWS AND AGENTS |
Author(s): |
Latifa Mahdaoui and Zaia Alimazighi |
Abstract: |
In an e-learning platform, we can
identify three principal actors (teacher, learner and administrator)
who interact or cooperate with one another through processes; in an
enterprise context, the e-learning process can thus be seen as a
cooperative information system in which the actors are managers and
employees. Many of these processes can be automated, and we therefore
treat this work as a workflow process. The learning process is
naturally flexible because of the different levels of learners and the
different ways of presenting a lesson or training process. We use an
object-oriented meta-model based on UML to describe a process
involving tutor and learner, and we propose a Multi-Agent System
(MAS) based on the ITS architecture to support the work of the actor
roles “tutor” and “learner”. |
|
Title: |
ELECTRONIC DOCUMENT CLASSIFICATION
USING SUPPORT VECTOR MACHINE-AN APPLICATION FOR E-LEARNING |
Author(s): |
Sandeep Dixit and L.K.Maheshwari |
Abstract: |
Most recent publications have
focussed on the theory or conceptual application of the SVM without
giving the exact details of an implementation with any SVM software.
In this paper we present a holistic approach towards building a
classifier. We present the theory, application and use of a specific
SVM package (SVMTorch) for the classification of electronic documents,
giving every detail of how the freely available online SVM software
can be used to apply the SVM concept and categorise any given
document. The paper presents the theoretical framework for the
categorisation of documents in general and text documents in
particular. Experimental results obtained by applying the SVM to text
documents are presented, as is the preprocessing of the documents.
The experiment was conducted using the SVMTorch software with
ten documents for training and five documents for testing. SVMs
performed best when used with a binary representation. We are
confident that the extent of the details provided could also serve as
a useful curriculum component for undergraduate students. |
|
Title: |
CROSS-DOMAIN MAPPING: QUALITY
ASSURANCE AND E-LEARNING PROVISION |
Author(s): |
Hilary Dexter and Jim Petch |
Abstract: |
In order to ensure that a valid and
robust model of e-learning provision is developed it has to be based
on a thorough understanding of the e-learning provision domain. The
fullest and most detailed articulations of the e-learning
development process are found in quality checklists for e-learning
development. The problem this paper addresses is that posed by the
situation of having knowledge used for modeling in one domain
represented by artifacts in another. Using a number of checklist
sources, a composite list was developed for some aspects of the
e-learning development process. The checks address the activities
and their artifacts that should be monitored, and what the outcomes
of the checks should be in terms of what actions should be taken and
what changes made if the results do not meet quality criteria. A
small worked example of this cross domain mapping process is given. |
|
Title: |
MOBILE TELEPHONE TECHNOLOGY AS A
DISTANCE LEARNING TOOL |
Author(s): |
Yousuf M. Islam, Manzur Ashraf,
Zillur Rahman and Mawdudur Rahman |
Abstract: |
This paper presents the methodology,
results and effectiveness of the development of mobile
telephone-based (Short Message Service-based) distance learning.
The proposed novel distance learning approach, which applies
information technology to education, is set up,
delivered and evaluated in a real-life environment. Statistical
analysis of the learners' results confirmed that this
SMS-based learning is more similar to direct face-to-face
learning. |
|
Title: |
AN EXPLORATORY MODEL OF E-EDUCATION |
Author(s): |
J. H. Im and Soyoung C. Yim |
Abstract: |
The history of distance learning goes
back to when correspondence study started more than a century ago
(Moore, et al., 1996, p.19; Simonson, et al., 2000, p.22). Distance
learning has been evolving by adopting new technologies to improve
learning. Nonetheless, the distinction between distance learning and
traditional learning had been very clear until recently. Unlike
other technologies, however, the Internet is blurring the distinction
by enabling the merger of the two, thus causing confusion over
widely accepted terminologies, concepts, and theories. This paper
attempts to develop a reference model which reduces such confusion
based on the old paradigm of distance learning, and clarifies newly
emerging learning modes and the potential of totally reengineered
learning modes based on a new paradigm. |
|
Workshop on
Wireless Information Systems (WIS-2005)
|
Title: |
INTEGRATION OF MOBILE COMPUTING IN
APPLICATIONS FOR THE SERVICE SECTOR – DESIGN AND IMPLEMENTATION |
Author(s): |
Klaus-Georg Deck and Herbert
Neuendorf |
Abstract: |
For representative applications from
the service sector, scenarios for mobile systems are introduced,
whose similar structure can be described with a common pattern. A
prototype implementation is introduced, based on Web Services
in Java. The technical implementation is done with a Blackberry
handheld from T-Mobile/RIM using push technology within the GPRS
network. |
|
Title: |
TRANSFERENCE AND STORAGE OF SPATIAL
DATA IN DISTRIBUTED WIRELESS GIS |
Author(s): |
A.K.Ramani, Sanjay Silakari, Sudheer
Koppireddy |
Abstract: |
Wireless GIS (WGIS) has seen great
development in the new century. Spatial data transfer and
storage in GIS has evolved from wired to wireless networks. This
paper first briefly introduces the technologies and strategies for
spatial data transfer and storage in distributed Wireless GIS.
Second, schemes for spatial data transfer modes in wireless
GIS that improve network transfer rates are introduced,
and the distributed transfer process technology for
GIS spatial data is also discussed. Based on these, we present
storage strategies for wireless databases, and introduce the mobile
computing concept and a three-tier wireless replication database.
Here, we emphasize dynamic replication strategies and methods in
wireless environments. Finally, we compare several data storage
strategies in WGIS databases, and a dynamic multi-tier replication
strategy is proposed for wireless database storage. |
|
Title: |
ANALYSIS OF TRAFFIC AGENT SCHEME FOR
COVERAGE IMPROVEMENT IN WIRELESS LOCAL AREA NETWORKS |
Author(s): |
Hai-Feng Yuan, Yang Yang, Wen-Bing
Yao and Yong-Hua Song |
Abstract: |
Wireless Local Area Network (WLAN)
can provide high data-rate wireless multimedia applications to end
users in a limited geographical area and has been widely deployed in
recent years. For indoor WLAN systems, how to efficiently improve
service coverage is a challenging problem. In this paper, we propose
a coverage improvement scheme that can identify suitable Mobile
Stations (MSs) in good service zones and use them as Traffic
Agents (TAs) to relay traffic for those out-of-coverage MSs. The
service coverage area of the WLAN system is thereby expanded.
Mathematical analysis, verified by computer simulations, shows that
the scheme can effectively reduce blocking probability when the
system is lightly loaded. |
|
Title: |
A DECISION TREE APPROACH TO
VOICE-ENABLE MOBILE COMMERCE APPLICATIONS |
Author(s): |
Yandong Fan and Elizabeth Kendall |
Abstract: |
Speech interfaces have become
increasingly popular to support mobile commerce. Although spoken
dialogue systems have been studied for decades, they pertain to
specific domains. There is a lack of research on general approaches
for voice enabling. In this paper, we propose a decision tree
approach to personalize and voice-enable applications in the context
of mobile commerce. The system dynamically analyses user preferences
by mining the navigation and transaction logs. The user profile then
is used to personalize a tree-based product catalogue. We utilize
the Predictive Model Markup Language to store the resultant
catalogue. This XML document is used to construct conversational
dialogues for users to find product information. The proposed
architecture has been verified and evaluated through the
implementation of a mobile car city application. |
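As a loose, hypothetical sketch of the mechanism this abstract describes (the class names and toy catalogue below are invented for illustration and are not taken from the paper), a tree-based product catalogue can drive a conversational dialogue by prompting with a node's children and descending on the recognized answer:

```python
# Illustrative only: the paper stores the personalized catalogue in PMML;
# here a plain in-memory tree stands in for it.

class CatalogueNode:
    def __init__(self, label, children=None):
        self.label = label
        self.children = children or []

def prompt_for(node):
    """Build the spoken prompt offered to the user at this node."""
    options = ", ".join(child.label for child in node.children)
    return f"You are browsing {node.label}. Say one of: {options}."

def choose(node, utterance):
    """Descend to the child whose label matches the recognized utterance."""
    for child in node.children:
        if child.label.lower() == utterance.lower():
            return child
    return None  # unrecognized: the dialogue would re-prompt

# A toy catalogue; in the paper, the ordering would be personalized
# from mined navigation and transaction logs.
catalogue = CatalogueNode("cars", [
    CatalogueNode("sedans", [CatalogueNode("family sedan")]),
    CatalogueNode("trucks"),
])

node = choose(catalogue, "sedans")
```

A real system would feed `prompt_for` into a speech synthesizer and `choose` from a speech recognizer's output; the tree shape itself is what the decision-tree mining would produce.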
|
Title: |
.NET AS A PLATFORM FOR WIRELESS
APPLICATIONS |
Author(s): |
Juha Järvensivu and Tommi Mikkonen |
Abstract: |
Wireless applications implemented in
mobile gadgets are a new trend in software development. One platform
on top of which such applications can be implemented is Windows,
where two different flavours of design environments are available.
.NET Framework (.NET) is aimed at full-fledged computing
environments, and it is used in e.g. laptops. In contrast, .NET
Compact Framework (.NETCF) is for smartphones and PDAs that consist
of more restricted hardware. From the development perspective .NET
and .NETCF are closely related as they rely on the same application
model. Moreover, .NETCF is a subset of .NET environment, with
features that are not relevant in smartphones or PDAs removed.
Therefore, it seems tempting to run the same applications in all
wireless Windows environments, disregarding the type of the device.
In this paper, we analyze the possibilities of achieving this goal in
practice. |
|
Title: |
ANALYSIS OF ATTACKS AND DEFENSE
MECHANISMS FOR QOS SIGNALING PROTOCOLS IN MANETS |
Author(s): |
Charikleia Zouridaki, Marek Hejmo,
Brian L. Mark, Roshan K. Thomas and Kris Gaj |
Abstract: |
Supporting quality-of-service (QoS)
in a mobile ad hoc network (MANET) is a challenging task,
particularly in the presence of malicious users. We present a
detailed analysis of attacks directed at disrupting
quality-of-service in MANETs. We consider attacks on both
reservation-based and reservation-less QoS signaling protocols and
discuss possible countermeasures. Finally, we identify and discuss
the key issues in achieving secure QoS provisioning in MANETs. |
|
Title: |
WIRELESS ATA: A NEW DATA TRANSPORT
PROTOCOL FOR WIRELESS STORAGE |
Author(s): |
Serdar Ozler and Ibrahim Korpeoglu |
Abstract: |
The purpose of this paper is to
introduce a new data transport protocol for wireless storage,
designed especially for wireless devices. We call the protocol WATA (Wireless
ATA), as its architecture is similar to current ATA and ATA-based
technologies. In this paper, we give basic technical details of the
protocol and discuss its main advantages and disadvantages over the
current protocols. |
|
Title: |
GPRS-BASED REAL-TIME REMOTE CONTROL
OF MICROBOTS WITH M2M CAPABILITIES |
Author(s): |
Diego López de Ipiña, Iñaki Vázquez,
Jonathan Ruiz de Garibay and David Sainz |
Abstract: |
Machine to Machine (M2M)
communication is gathering momentum. Many network operators deem
that the future of the data transmission business lies in M2M. In
parallel, the application of robotics is progressively becoming more
widespread. Traditionally, robotics has only been applied to
industrial environments, but lately some more exotic (e.g.
domestic) robots have appeared. However, those robots usually offer
very primitive communication means. Few researchers have considered
that a solution to this issue would be to combine these two emerging
fields. This paper describes our experiences combining M2M with
robotics to create a fleet of MicroBots that are remotely
controllable through GPRS connection links. These robots can be used
in dangerous environments to gather material samples, or simply for
surveillance and security control. A sophisticated 3-tier
architecture is proposed that, combined with a purpose-built
protocol optimized for wireless transmission, makes the
real-time control of remote devices feasible. |
|
Title: |
ENHANCING MESSAGE PRIVACY IN WEP |
Author(s): |
Darshan Purandare and Ratan Guha |
Abstract: |
The Wired Equivalent Privacy (WEP)
protocol for networks based on 802.11 standards has been shown to
have several security flaws. In this paper we propose a
modification to the existing WEP protocol to make it more secure. We
also develop an IV avoidance algorithm that eliminates the
Initialization Vector (IV) collision problem. We achieve message
privacy by ensuring that the encryption is not breached. The idea is
to update the shared secret key frequently, based on factors like
network traffic and the number of transmitted frames. We show that
frequent rekeying thwarts all kinds of cryptanalytic attacks on
WEP. |
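The frame-count-based rekeying idea can be sketched roughly as follows. This is not the paper's exact scheme (the threshold, the derivation function, and the class are all invented for illustration); it only shows how both ends of a link can roll to a fresh key in lockstep without sending the new key over the air:

```python
import hashlib

class RekeyingPeer:
    """Hypothetical sketch: each peer counts transmitted frames and,
    once a threshold is reached, derives the next shared secret with a
    one-way function, so no single key protects enough ciphertext for
    key-recovery attacks to succeed."""

    def __init__(self, shared_secret: bytes, threshold: int = 1000):
        self.key = shared_secret      # current WEP shared secret
        self.threshold = threshold    # frames allowed per key
        self.frames = 0               # frames sent under current key

    def on_frame_sent(self):
        self.frames += 1
        if self.frames >= self.threshold:
            # Both peers apply the same hash, so the new key is never
            # transmitted and stays synchronized on both sides.
            self.key = hashlib.sha256(self.key).digest()
            self.frames = 0

a = RekeyingPeer(b"initial-secret", threshold=3)
b = RekeyingPeer(b"initial-secret", threshold=3)
for _ in range(3):
    a.on_frame_sent()
    b.on_frame_sent()
```

After three frames both peers hold the same derived key, distinct from the original secret; a production scheme would also fold in the negotiated traffic factors the abstract mentions.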
|
Title: |
A DISTRIBUTED SECURITY ARCHITECTURE
FOR AD HOC NETWORKS |
Author(s): |
Ratan Guha, Mainak Chatterjee and
Jaideep Sarkar |
Abstract: |
Secure communication in ad hoc
networks is an inherent problem because of the distributed nature of
the nodes and the reliance on cooperation between them. All the
nodes in such networks rely on and trust other nodes to forward
packets because of their limited transmission range.
Due to the absence of any central administrative node, verifying
the authenticity of nodes is very difficult. In this paper, we
propose a clusterhead-based distributed security mechanism for
securing the routes and communication in ad hoc networks. The
clusterheads act as certificate agencies and distribute certificates
to the communicating nodes, thereby making the communication secure.
The clusterheads execute administrative functions and hold shares of
network keys that are used for communication by the nodes in
respective clusters. Due to the process of authentication, there are
signalling and message overheads. Through simulation studies, we
show how the presence of clusterheads can substantially reduce these
overheads and still maintain secure communication. |
|
Title: |
THE ANALYSIS AND DESIGN STRATEGY IN
THE DEPLOYMENT OF WIRELESS COMMUNICATIONS FOR INNOVATIVE CAMPUS
NETWORKS |
Author(s): |
Jamaludin Sallim |
Abstract: |
This paper describes the fundamental
concepts of an analysis and design strategy for the effective
deployment of wireless communications on innovative university or
college campuses. The extensive
use of wireless technologies on university campuses has made various
computer applications, such as electronic transactions and
electronic learning (e-learning) environments, much more dynamic.
Usually, when deploying wireless communications for an innovative
campus network, most IT managers/engineers begin the project by
jumping into technical matters, such as deciding which
approach, technique or standard to use, which vendor to select, and
how to overcome the various limitations. These are important
elements of implementing wireless communications for an innovative
campus; however, before getting too far into the project, the
responsible IT managers/engineers must pay careful attention to the
analysis and design strategy in order to end up with an effective
deployment. |
|
Title: |
TRUST: AN APPROACH FOR SECURING
MOBILE AD HOC NETWORK |
Author(s): |
Chung Tien Nguyen and Olivier CAMP |
Abstract: |
When functioning in ad hoc mode,
wireless networks do not rely on a predefined infrastructure to
achieve the basic network functionalities. Hosts of such networks
need to count on one another to keep in contact with the network and
carry out services such as routing, security and auto-configuration.
Network services, and in particular security, thus strongly depend
on the way the nodes find the correct partners with which they can
cooperate efficiently. As a consequence, it seems important for ad hoc
networks to provide a representation of trust together with a
mechanism to evaluate it. In this paper, we present ad hoc networks
and show how trust is fundamental in existing proposals to
improve their security. After identifying the characteristics of
existing trust models, we focus on those that should be implemented
in a trust model for ad hoc networks. |
|
Title: |
CHARGED LOCATION AWARE SERVICES |
Author(s): |
Krzysztof Piotrowski, Peter
Langendörfer, Michael Maaser, Gregor Spichal and Peter Schwander |
Abstract: |
Location aware services have been
envisioned as the killer application for the wireless Internet, but
they have not gained sufficient attention. We are convinced that one of
the major pitfalls is that there is, up to now, no way to charge for
this kind of service. In this paper we present an architecture
which provides the basic mechanisms needed to realize charged
location based services. The three major components are: a location
aware middleware platform, an information server and a micropayment
system. We provide performance data that clearly indicates that such
a system can be deployed without exhausting the resources of mobile
devices or infrastructure servers. |
|
Workshop on
Modelling, Simulation,Verification and
Validation of Enterprise Information Systems
(MSVVEIS-2005)
|
Title: |
TRADE-OFF ANALYSIS OF MISUSE
CASE-BASED SECURE SOFTWARE ARCHITECTURES: A CASE STUDY |
Author(s): |
Joshua J. Pauli and Dianxiang Xu |
Abstract: |
Based on the threat-driven
architectural design of secure information systems, this paper
introduces an approach for the tradeoff analysis of secure software
architectures in order to determine the effects of security
requirements on the system. We use a case study on a payroll
information system to show the approach from misuse case
identification through the application of the architecture tradeoff
analysis. In the case study, we discuss how to make tradeoffs
between security and availability with respect to the number of
servers present in the system. |
|
Title: |
ON THE USE OF MODEL CHECKING IN
VERIFICATION OF EVOLVING AGILE SOFTWARE FRAMEWORKS: AN EXPLORATORY
CASE STUDY |
Author(s): |
Nan Niu and Steve Easterbrook |
Abstract: |
Evolution is a basic fact of software
life. Domain-specific agile software frameworks are key to modern
enterprise information systems (EIS). They promote reuse and rapid
development by capturing the commonalities in design and
implementation among a family of applications and by constraining
the space of possible solutions. In this paper, we propose a model
checking approach to formal verification of agile frameworks that
evolve over time and endure continuous maintenance activities. The
results obtained can be used to justify the maintenance activities
in software evolution and identify important but implicit
assumptions about the application domain of the framework. An
industrially relevant exploratory case study on a domain-specific,
light-weight, database-centric Web application framework is
conducted to validate our hypothesis and proactively open up new
research avenues arising from our investigation. |
|
Title: |
EXPANDING DATABASE SYSTEMS INTO
SELF–VERIFYING ENTITIES |
Author(s): |
Kaare J. Kristoffersen and Yvonne
Dittrich |
Abstract: |
This paper presents work in progress
aimed at deploying runtime verification techniques to observe
whether state changes in a database system conform to temporal
business rules. A high-level language for tailoring enterprise
database systems with temporal business rules is defined.
Furthermore, we present an algorithmic framework for checking
temporal business rules at runtime, i.e. we recommend on-line
checking of data in the system as opposed to post-checking
(off-line processing). A prototypical implementation of a runtime
verifier (called the Verification Server) based on this algorithmic
framework is presented and discussed. |
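The flavour of on-line checking the abstract argues for can be illustrated with a minimal monitor. This is a hypothetical sketch, not the paper's rule language or Verification Server: it checks the temporal business rule "every issued invoice is eventually paid" against a stream of database state changes, keeping only the open obligations instead of post-processing the whole log:

```python
class ResponseRuleMonitor:
    """Illustrative on-line monitor for one temporal business rule:
    every 'issued' invoice must eventually be 'paid'. Events and rule
    are invented for this example."""

    def __init__(self):
        self.pending = set()  # invoices issued but not yet paid

    def observe(self, event, invoice_id):
        """Consume one state change as it happens (on-line checking)."""
        if event == "issued":
            self.pending.add(invoice_id)
        elif event == "paid":
            self.pending.discard(invoice_id)

    def violations(self):
        """Obligations still open: candidate rule violations if the
        deadline (or end of trace) is reached."""
        return set(self.pending)

m = ResponseRuleMonitor()
for ev, iid in [("issued", 1), ("issued", 2), ("paid", 1)]:
    m.observe(ev, iid)
```

After this trace, invoice 2 remains an open obligation. The on-line style keeps state proportional to open obligations, which is the practical argument against off-line post-checking of the full history.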
|
Title: |
A FRAMEWORK FOR ENSURING SYSTEM
DEPENDABILITY FROM DESIGN TO IMPLEMENTATION |
Author(s): |
Xudong He |
Abstract: |
Software has been and will be a major
enabling technology for the proper functioning of our society. Many
software systems are often mission and safety critical and thus need
to be highly dependable. These highly dependable systems need to be
highly reliable, efficient, secure, and robust. How to develop and
ensure the dependability of these complex software-based systems is
a grand challenge. Currently a systematic engineering approach to
develop these systems in a reliable and cost-effective manner does
not exist. It is our strong belief that a highly dependable complex
software system cannot be developed without a rigorous development
process and a precise specification and design documentation. Recent
research has shown that it is especially important to explore
technologies for handling dependability attributes at the software
architecture level, for the following reasons: (1) software
architecture description presents the highest-level design
abstraction of a system. As a result it is relatively simple compared
to a detailed system design; and (2) as the highest-level design
abstraction, a software architecture description precedes and
logically and structurally influences other system development
products. Prevention and detection of errors at software
architectural level are thus extremely important. However, assurance
of system dependability at software architecture design level is not
adequate to ensure system dependability at implementation level
since the implementation can be significantly different from the
design. It is a daunting task to verify that an implementation
satisfies a design specification, and it is also a major challenge to
check dependability at the code level. This paper presents a model-driven
framework to model, specify, and analyze software dependability at
software architecture design level; and furthermore to map a
software architecture design and the associated dependability
attributes to a Java implementation and run-time verification
mechanisms. We believe that such a framework will ensure
dependability at both design and implementation levels. |
|
Title: |
DERIVING TEST CASES FROM B MACHINES
USING CLASS VECTORS |
Author(s): |
W. L. Yeung and K. R. P. H. Leung |
Abstract: |
This paper proposes a
specification-based testing method for use in conjunction with the B
method. The method aims to derive a set of legitimate class vectors
from a B machine specification and it takes into account the
structure and semantics of the latter. A procedure for test case
generation is given. One advantage of the method is its potential to
be integrated with the B method via its support tools. |
|
Title: |
CONSISTENCY VERIFICATION OF A
NON-MONOTONIC DEDUCTIVE SYSTEM BASED ON OWL LITE |
Author(s): |
Jaime Ramírez and Angélica de Antonio |
Abstract: |
The aim of this paper is to show a
method that is able to detect a particular class of semantic
inconsistencies in a deductive system (DS). A DS verified by this
method contains a set of production rules, and an OWL Lite ontology
that defines the problem domain. The antecedent of a rule is a
formula in Disjunctive Normal Form, which encompasses first-order
literals and linear arithmetic constraints, and the consequent is a
list of actions that can add or delete assertions in a non-monotonic
manner. By building an ATMS-like theory the method is able to give a
specification of all the initial Fact Bases (FBs), and the rules
that would have to be executed from these initial FBs to produce an
inconsistency. |
|
Title: |
A UNIT TESTING FRAMEWORK FOR NETWORK
CONFIGURATIONS |
Author(s): |
Dominik Jungo, David Buchmann and
Ulrich Ultes-Nitsche |
Abstract: |
We present in this paper a unit
testing framework for network configurations which verifies that the
configuration meets previously defined requirements on the network's
behavior. This framework increases trust in the correctness,
security and reliability of a network's configuration. Our testing
framework is based on a behavioral simulation approach as it is used
in hardware design. The unit testing framework is part of the SNSF
VeriNeC project. |
|
Title: |
HOW TO SYNTHESIZE RELATIONAL DATABASE
TRANSACTIONS FROM EB3 ATTRIBUTE DEFINITIONS? |
Author(s): |
Frederic Gervais, Marc Frappier and
Regine Laleau |
Abstract: |
EB3 is a trace-based formal language
created for the specification of information systems (IS).
Attributes, linked to entities and associations of an IS, are
computed in EB3 by recursive functions on the valid traces of the
system. In this paper, we aim at synthesizing imperative programs
that correspond to EB3 attribute definitions. Thus, each EB3 action
is translated into a transaction. EB3 attribute definitions are
analysed to determine the key values affected by each action. Some
key values are retrieved from SELECT statements that correspond to
first-order predicates in EB3 attribute definitions. To avoid
problems with the sequencing of SQL statements in the transactions,
temporary variables and/or tables are introduced for these key
values. The SQL statements are ordered by table. Generation of
DELETE statements is straightforward, but tests must be defined in
the transactions to distinguish updates from insertions of tuples. |
|
Title: |
MODEL-CHECKING INHERENTLY FAIR
LINEAR-TIME PROPERTIES |
Author(s): |
Thierry Nicola, Frank Nießner and
Ulrich Ultes-Nitsche |
Abstract: |
The concept of linear-time
verification with an inherent fairness condition has been studied
under the names approximate satisfaction, satisfaction up to
liveness, and satisfaction within fairness in several publications.
Even though the general applicability of the approach has been proven,
reasonably efficient algorithms for inherently fair linear-time
verification (IFLTV) are lacking. This paper bridges the gap between
the theoretical foundation of IFLTV and its practical application,
presenting a model-checking algorithm based on a structural analysis
of the synchronous product of the system and property (Büchi)
automata. |
|
Title: |
VERIFICATION OF SMART HOMES
SPECIFICATIONS WHICH ARE BASED ON ECA RULES |
Author(s): |
Juan Carlos Augusto |
Abstract: |
Smart homes implementations are
usually based on Active Databases (ADBs). A core concept of ADBs is
the concept of Event-Condition-Action (ECA) rules allowing the
system to react to specific events occurring in contexts of interest
and advising on the actions that should be taken in those
situations. Although research in ADBs has been conducted for quite a
few years, no standard verification framework has yet emerged
from the area. In this paper we consider some options to verify
specifications of Smart Homes based on ADB-related concepts. |
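The abstract treats ECA rules abstractly; as a generic illustration of the Event-Condition-Action concept (our own sketch with made-up rule and state names, not the paper's formalism), a smart-home rule base can be represented as:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical minimal ECA machinery: an event name, a condition over
# the home state, and an action that updates that state.
@dataclass
class ECARule:
    event: str
    condition: Callable[[dict], bool]
    action: Callable[[dict], None]

def dispatch(event: str, state: dict, rules: list) -> None:
    """Fire every rule registered for `event` whose condition holds."""
    for rule in rules:
        if rule.event == event and rule.condition(state):
            rule.action(state)

# Example: turn the hallway light on when motion is detected at night.
rules = [ECARule("motion",
                 lambda s: s["hour"] >= 22,
                 lambda s: s.update(light="on"))]
state = {"hour": 23, "light": "off"}
dispatch("motion", state, rules)
```

Verifying such a specification then amounts to checking that no reachable sequence of rule firings leads the state into an undesired configuration.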
|
Title: |
TOWARDS RUN-TIME COMPONENT
INTEGRATION ON UBIQUITOUS SYSTEMS |
Author(s): |
Macario Polo Usaola and Andres Flores |
Abstract: |
Based on our interest on Ubiquitous
Systems we are working on a Component-based Integration process.
This implies evaluating whether components may satisfy a given set
of requirements. We propose a framework for such a process and
describe Assessment and likely Adaptation in more detail. The
Assessment procedure is based on meta-data added to components,
involving assertions, and usage protocol. Assertions and usage
protocol are evaluated by properly applying a technique based on
Abstract Syntax Trees. We have developed a simple prototype in order
to implement the Assessment and Adaptation procedures. Thus we gain
experience about the complexity and effectiveness of our model. We
continue exploring other techniques to improve our process on
efficacy and reliability. |
|
Title: |
AUTOMATED RUNTIME VERIFICATION WITH
EAGLE |
Author(s): |
Allen Goldberg and Klaus Havelund |
Abstract: |
Space exploration missions are
increasingly relying on software. The amount of software, counted in
lines of code, for space missions doubles in size every four years,
growing from a few thousand lines in the seventies to hundreds of
thousands of lines of code at the present time. The risk of missions
failing due to software errors increases accordingly. The Automated
Software Engineering group at NASA Ames Research Center studies and
develops techniques for detecting errors in software. This
presentation focuses on some of that work, specifically what is
called runtime verification. Runtime verification consists of
monitoring and checking that program executions conform with
user-provided specifications and algorithms of correct behavior. We
shall see an example of a requirement specification language
developed specifically for runtime verification, including how
monitors are generated from specifications. We shall illustrate some
applications of this technology, for example to monitor and test a
planetary rover. A branch of runtime verification focuses on
detecting concurrency errors, such as deadlocks and data races. We
present several notions of deadlocks and data races together with
highly scalable runtime verification algorithms for detecting them
during test. |
|
Title: |
TEACHING SOFTWARE TESTING IN
INTRODUCTORY CS COURSES AND IMPROVING SOFTWARE QUALITY |
Author(s): |
Syed M. Rahman and Akram Salah |
Abstract: |
Undergraduates in computer science
typically begin their curriculum with a programming course or
sequence. Many researchers have found, however (e.g., [1,14]), that
most of the students who complete these courses, and even many who
complete a degree, are not proficient programmers and produce code
of low quality [9]. In this paper, we try to address this problem by
proposing a cultural shift in introductory programming courses. The
primary feature of our approach is that software testing is
presented as an integral part of programming practice; specifically,
a student who is to write a program will begin by writing a test
suite. Our initial results indicate that this approach can be successful.
Teaching basic concepts of software testing does not take much time,
it helps beginning students to understand the requirements, and it
helps them produce better-quality code. |
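The test-first practice described above can be illustrated with a minimal, hypothetical exercise (`is_even` and its tests are our own example, not coursework from the paper): the student encodes the requirements as a test suite before, or alongside, the implementation itself:

```python
import unittest

def is_even(n: int) -> bool:
    """The student's implementation, written to make the tests below pass."""
    return n % 2 == 0

class TestIsEven(unittest.TestCase):
    """Tests the student writes first, encoding the requirements."""
    def test_even_number(self):
        self.assertTrue(is_even(4))
    def test_odd_number(self):
        self.assertFalse(is_even(7))
    def test_zero_is_even(self):
        self.assertTrue(is_even(0))
```

Writing `TestIsEven` first forces the student to pin down edge cases (here, zero) before any code exists.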
|
Title: |
TOWARDS APPLICATION SUITABILITY FOR
PVC ENVIRONMENTS |
Author(s): |
Andres Flores and Macario Polo |
Abstract: |
Pervasive Computing Environments
should support the feeling of continuity in users' daily tasks. This
implies the availability of different resources. Applications are
the main resources, and they are at high risk of degraded
suitability. We propose a framework for a Component-based Integration
process, based on our idea of composing/adapting applications at
run-time. Our current focus is on component assessment, which
covers the syntactic and semantic levels. We will apply
metadata-based techniques and a process-oriented procedure for
simulation on SPIN which is initiated by Propositional Linear
Temporal Logic based queries. We continue exploring other techniques
to improve our process mainly on efficacy and reliability. |
|
Title: |
PETRI-NET MODELING OF PRODUCTION
SYSTEMS BASED ON PRODUCTION MANAGEMENT DATA |
Author(s): |
Dejan Gradisar and Gasper Music |
Abstract: |
Timed Petri nets can be used for the
modeling and analysis of a wide range of concurrent discrete-event
systems, e.g. production systems. This paper describes how to apply
timed Petri nets to the modeling of production systems. Information
about the structure of a production facility and about the products
that can be produced is usually given in production-data management
systems. We describe a method for using these data to
algorithmically build a Petri-net model. The Petri-net model can be
further used to perform different analyses of the treated system. |
|
Title: |
AN ACTIVE RULE BASE SIMULATOR BASED
ON PETRI NETS |
Author(s): |
Joselito Medina-Marín and Xiaoou Li |
Abstract: |
Development of event-condition-action
rules in active databases should be performed in a careful way,
because the firing of an ECA rule set can produce an inconsistent
DB state. Simulation is a powerful tool to predict system behaviors,
so it can be used to predict whether ECA rule firings will generate
inconsistent DB states. In this research work, an ECA rule base
simulator is described, named ECAPNSim. ECAPNSim uses a Conditional
Colored Petri Net as a model to depict ECA rules. ECAPNSim can model
an ECA rule base, simulate its behavior, perform a static analysis
of ECA rules by using the CCPN obtained, and execute the firing
rules in a relational database system. |
|
Title: |
AN INTEGRATION SCHEME FOR CPN AND
PROCESS ALGEBRA APPLIED TO A MANUFACTURING INDUSTRY CASE |
Author(s): |
Manuel I. Capel, Juan A. Holgado and
Agustín Escámez |
Abstract: |
A semiformal development method for
obtaining a correct design of embedded control and real-time systems
is presented. The design is obtained from a Colored Petri Net (CPN)
model of a real-time system, which is subsequently transformed into
a formal system specification using CSP+T process algebra. The
method translates CPN modelling entities into abstract processes,
which allow the expression of concurrency and real-time constraints.
The correct design of a “key” component (feed belt controller) of a
paradigmatic manufacturing problem (the Production Cell) is
discussed so as to show the applicability of our method. |
|
Title: |
COMPUTING SIMULATION AND HEURISTIC
OPTIMIZATION OF THE MARINE DIESEL-DRIVE GENERATING SET |
Author(s): |
Josko Dvornik, Srđan Dvornik and Eno
Tireli |
Abstract: |
The aim of this paper is to show the
efficiency of the System Dynamics Computer Simulation Modeling of
the dynamic behavior of the Marine Diesel-Drive Generating Set, as one
of the most complex and non-linear marine technical systems. In this
paper the Marine Diesel-Drive Generating Set will be presented as a
qualitative and quantitative system dynamics computer model with a
special automation aspect provided by two UNIEG-PID-regulators
(Electronics Universal PID Regulators). One of them will be used for
diesel-motor speed (frequency) regulation and the other will be used
for the synchronous electrical generator voltage regulation. |
|
Title: |
AN EXAMPLE OF BUSINESS PROCESS
SIMULATION USING ARENA |
Author(s): |
Joseph Barjis |
Abstract: |
In this paper a modeling methodology
for business systems analysis is introduced and discussed. The
modeling methodology is based on the business transaction concept
and Petri net diagrams. The transaction concept is used for
process elicitation while the Petri net diagram is used for
constructing the business process model. In addition to these two
components, the Arena simulation package is used to build an animated
simulation model. The simulation part of the paper is a case study
based on a real-life example. Since the simulation model will be
demonstrated using the software, it is not included in this paper. |
|
Title: |
MODELLING, VERIFICATION AND
VALIDATION OF THE IEEE 802.15.4 FOR WIRELESS NETWORKS |
Author(s): |
Paulo Sausen, Pedro Fernandes Ribeiro
Neto, Angelo Perkusich, Antonio Marcus Nogueira de Lima, Maria Ligia
B. Perkusich and Fabiano Salvadori |
Abstract: |
The Low-Rate Wireless Personal Area
Network (LR-WPAN) is a new standard in wireless networks developed
by the Institute of Electrical and Electronics Engineers (IEEE) and
the National Institute of Standards and Technology (NIST) to transmit
information over short distances at low rates. The purpose of this
paper is to present a model of the unslotted Carrier Sense Multiple
Access - Collision Avoidance (CSMA-CA) mechanism for accessing the
medium. The unslotted CSMA-CA mechanism is utilized in the
latest IEEE 802.15.4 standard, which defines the wireless Medium
Access Control (MAC) and the Physical layer specification (PHY) for
LR-WPAN. For the model construction, Hierarchical Coloured Petri
Nets (HCPN) will be used. HCPN are extensions of Coloured Petri Nets
(CPN). Design/CPN tools will be used for simulations, and the model
will be verified and validated by means of occurrence graphs
generation. |
|
Title: |
THE PORT-TRANSSHIPMENT SYSTEM
DYNAMICS SOFTWARE SIMULATOR |
Author(s): |
Josko Dvornik, Ante Munitic and Frane
Mitrovic |
Abstract: |
A port is a place where different
kinds of cargo intersect; it plays an important role in the shipping
process, connecting different types of traffic into one united system
and forming an uninterrupted traffic chain. The aim of this paper is
to show the efficiency of System Dynamics Simulation Modeling in the
study of the dynamic behavior of the Port-Transshipment system,
and to find an optimal solution for transshipment with regard to the
type of cargo, the size of cargo traffic, and the direction and
dynamics of cargo arrival and shipping. System Dynamics
Modeling is in essence a special, i.e. “holistic”, approach to the
simulation of the dynamic behavior of natural, technical and
organizational systems, and it comprises quantitative and qualitative
Simulation Modeling of realities of various natures. The concept of
optimization in System Dynamics is based on the belief that the
“manual and iterative” procedure, i.e. optimization by “trial and
error”, can be successfully executed using a “heuristic optimization”
algorithm, with the help of a digital computer, and in complete
coordination with the System Dynamics Simulation Methodology.
|
|
Workshop on
Natural Language Understanding and Cognitive Science (NLUCS-2005)
|
Title: |
A MULTI-AGENT SYSTEM FOR DETECTING
AND CORRECTING “HIDDEN” SPELLING ERRORS IN ARABIC TEXTS |
Author(s): |
Chiraz Ben Othmane Zribi, Fériel Ben
Fraj and Mohamed Ben Ahmed |
Abstract: |
In this paper, we address the problem
of detecting and correcting hidden spelling errors in Arabic texts.
Hidden spelling errors are morphologically valid words and therefore
they cannot be detected or corrected by conventional spell checking
programs. In the work presented here, we investigate this kind of
error as it relates to the Arabic language. We start by proposing
a classification of these errors in two main categories: syntactic
and semantic, then we present our multi-agent system for hidden
spelling errors detection and correction. The multi-agent
architecture is justified by the need for collaboration, parallelism
and competition, in addition to the need for information exchange
between the different analysis phases. Finally, we describe the
testing framework used to evaluate the system implemented. |
|
Title: |
A COMPUTATIONAL LEXICALIZATION
APPROACH |
Author(s): |
Feng-Jen Yang |
Abstract: |
Fine-grained lexicalization has been
treated as a post-process to refine the machine-planned discourse
and make the machine-generated language more coherent and more
fluent. Without this process, a system can still generate
comprehensible language but may sound unnatural and sometimes
frustrate its users. To this end, generating coherent and
natural-sounding language is a major concern in any natural language
system. In this paper, I present a lexicalization approach to refine
the machine-generated language. |
|
Title: |
EVALUATING THE WORD SENSE
DISAMBIGUATION ACCURACY WITH THREE DIFFERENT SENSE INVENTORIES |
Author(s): |
Dan Tufis and Radu Ion |
Abstract: |
Comparing performances of word sense
disambiguation systems is a very difficult evaluation task when
different sense inventories are used, and even more difficult when
the sense distinctions are not of the same granularity. The paper
substantiates this statement by briefly presenting a system for word
sense disambiguation (WSD) based on parallel corpora. The method
relies on word alignment, word clustering and is supported by a
lexical ontology made of aligned wordnets for the languages in the
corpora. The wordnets are aligned to the Princeton Wordnet,
according to the principles established by EuroWordNet. The
evaluation of the WSD system was performed on the same data, using
three different granularity sense inventories. |
|
Title: |
TRANSCRIPT SEGMENTATION USING
UTTERANCE COSINE SIMILARITY MEASURE |
Author(s): |
Caroline Chibelushi, Bernadette Sharp
and Andy Salter |
Abstract: |
The problem we address in this paper
is the extraction of key issues discussed at meetings through the
analysis of transcripts. Whilst topic extraction is an
easy task for humans, it has proven a difficult task to automate given
the unstructured nature of our transcripts. Our approach is based on
the notion of semantic similarity of utterances within the
transcripts. Therefore it is desirable to devise an appropriate
technique to measure the content similarity, semantic relationships,
and to capture the correct notion of distance for a particular task
at hand in a given domain. This paper describes the Utterance Cosine
Similarity (UCS) method which can be used to analyse transcripts,
and identify main topics discussed in the meetings by identifying
related utterances through the analysis of nouns used in the
transcript and a comparison of the frequency |
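The abstract truncates before the UCS measure is spelled out; the standard cosine similarity over term-frequency vectors, on which such utterance measures are typically built, can be sketched as follows (our illustration with whitespace tokenization, not the authors' exact measure):

```python
import math
from collections import Counter

def cosine_similarity(utt1: str, utt2: str) -> float:
    """Cosine of the angle between the term-frequency vectors
    of two utterances; 1.0 for identical token bags, 0.0 for disjoint."""
    v1, v2 = Counter(utt1.lower().split()), Counter(utt2.lower().split())
    dot = sum(v1[t] * v2[t] for t in v1)          # Counter returns 0 for absent terms
    norm1 = math.sqrt(sum(c * c for c in v1.values()))
    norm2 = math.sqrt(sum(c * c for c in v2.values()))
    return dot / (norm1 * norm2) if norm1 and norm2 else 0.0

# Two utterances sharing two of their three tokens score 2/3.
print(cosine_similarity("budget report review", "budget report draft"))
```

Utterances scoring above a chosen threshold would then be grouped as belonging to the same topic.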
|
Title: |
MRE: A STUDY ON EVOLUTIONARY LANGUAGE
UNDERSTANDING |
Author(s): |
Donghui Feng and Eduard Hovy |
Abstract: |
The lack of well-annotated data is
always one of the biggest problems for most training-based dialogue
systems. Without enough training data, it’s almost impossible for a
trainable system to work. In this paper, we explore the evolutionary
language understanding approach to build a natural language
understanding machine in a virtual human training project. We build
the initial training data with a finite state machine. The language
understanding system is trained on the automatically generated data
first and is improved as more and more real data come in, as is shown
by the experimental results. |
|
Title: |
AN APPROACH TO NATURAL LANGUAGE
UNDERSTANDING BASED ON A MENTAL IMAGE MODEL |
Author(s): |
Masao Yokota |
Abstract: |
The Mental Image Directed Semantic
Theory (MIDST) has proposed an omnisensual mental image model and
its description language Lmd. This paper presents a brief sketch of
MIDST, and focuses on word meaning description and text
understanding in association with the mental image model. |
|
Title: |
LEXICAL COHESION: SOME IMPLICATIONS
OF AN EMPIRICAL STUDY |
Author(s): |
Beata Beigman Klebanov and Eli Shamir |
Abstract: |
Lexical cohesion refers to the
perceived unity of text achieved by the author's usage of words with
related meanings. Data from an experiment with 22 readers aimed at
eliciting lexical cohesive patterns they see in 10 texts is used to
shed light on a number of theoretical and applied aspects of the
phenomenon: which items in the text carry the cohesive load; what
are the appropriate data structures to represent cohesive texture;
what are the relations employed in cohesive structures. |
|
Title: |
AN INTRODUCTION TO THE SUMMARIZATION
OF EVOLVING EVENTS: LINEAR AND NON-LINEAR EVOLUTION |
Author(s): |
Stergos D. Afantenos, Konstantina
Liontou, Maria Salapata and Vangelis Karkaletsis |
Abstract: |
This paper examines the
summarization of events which evolve through time. It discusses
different types of evolution taking into account the time in which
the incidents of an event are happening and the different sources
reporting on the specific event. It proposes an approach for
multi-document summarization which employs ``messages'' for
representing the incidents of an event and cross-document relations
that hold between messages according to certain conditions. The
paper also outlines the current version of the summarization system
we are implementing to realize this approach. |
|
Title: |
IDENTIFYING INFORMATION UNITS FOR
MULTIPLE DOCUMENT SUMMARIZATION |
Author(s): |
Seamus Lyons and Dan Smith |
Abstract: |
Multiple document summarization is
becoming increasingly important as a way of reducing information
overload, particularly in the context of the proliferation of
similar accounts of events that are available on the Web. Removal of
similar sentences often results in either partial or unwanted
elimination of important information. In this paper, we present an
approach to split sentences into their component clauses and use
these clauses to produce comprehensive summaries of multiple
documents describing particular events. Detailed analysis of all
clauses and clause boundaries may be complex and computationally
expensive. Our rule-based approach demonstrates that it is possible
to achieve high accuracy in a reasonable time. |
|
Title: |
MOTIVATIONS AND IMPLICATIONS OF VEINS
THEORY |
Author(s): |
Dan Cristea |
Abstract: |
The paper deals with the cohesion
part of a model of global discourse interpretation, usually known as
Veins Theory (VT). By taking from the Rhetorical Structure Theory
the notions of nuclearity and relations, but ignoring the relations’
names, VT computes from rhetorical structures strings of discourse
units, called veins, from which domains of accessibility can be
determined for each discourse unit. VT's constructs fit best with an
incremental view on discourse processing. Linguistic and cognitive
observations that led to the elaboration of the theory are
presented. Cognitive aspects like short-term memory and on-line
summarization are explained in terms of VT’s constructs.
Complementary remarks are made over anaphora and its resolution in
relation with the interpretation of discourse. |
|
Title: |
TREE DISTANCE IN ANSWER RETRIEVAL AND
PARSER EVALUATION |
Author(s): |
Martin Emms |
Abstract: |
The paper reports results on the use of tree
distance to perform an answer retrieval task. A number of variants
of tree distance are considered including sub-tree distance,
structural weighting, wild cards and lexical emphasis. Experiments
are described in which it is shown that improving parse quality
leads to better answer retrieval. The tree distance variants are
compared with each other and with string distance, and one of the
variants is shown to outperform string distance. |
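As a point of reference for the string-distance baseline the tree-distance variants are compared against, a minimal Levenshtein edit distance (a standard formulation, not the authors' implementation) looks like this:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions and
    substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))          # distances from "" to prefixes of b
    for i, ca in enumerate(a, start=1):
        curr = [i]                          # distance from a[:i] to ""
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # delete ca
                            curr[j - 1] + 1,             # insert cb
                            prev[j - 1] + (ca != cb)))   # substitute
        prev = curr
    return prev[-1]
```

Tree distance generalizes this edit model from character sequences to node insertions, deletions and relabelings on parse trees.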
|
Title: |
NATURAL LANGUAGE INTERFACE PUT IN
PERSPECTIVE: INTERACTION OF SEARCH METHOD AND TASK COMPLEXITY |
Author(s): |
QianYing Wang, Jiang Hu and Clifford
Nass |
Abstract: |
A 2x2 mixed design experiment (N=52)
was conducted to examine the effects of search method and task
complexity on users’ information-seeking performance and affective
experience in an e-commerce context. The former factor had two
within-participants conditions: keyword (KW) vs. natural language
(NL) search; the latter factor had two between-participants
conditions: simple vs. complex tasks. The results show that
participants in the complex task condition were more successful when
they used KW search than NL search. They thought the tasks were less
difficult and reported more enjoyment and confidence with KW search.
In the meantime, simple task participants performed better when they
used NL rather than KW search. They also perceived the tasks as
easier and more enjoyable, and had higher levels of confidence with
the results, when NL was used. The findings suggest that NL search
is not a panacea for all information retrieval tasks: its benefit
depends on the complexity of the task. Implications for interface
design and directions for future research are discussed. |
|
Title: |
SYNTACTIC, SEMANTIC AND REFERENTIAL
PATTERNS IN BIOMEDICAL TEXTS: TOWARDS IN-DEPTH TEXT COMPREHENSION
FOR THE PURPOSE OF BIOINFORMATICS |
Author(s): |
Barbara Gawronska and Björn
Erlendsson |
Abstract: |
The paper concerns prerequisites for
high-quality automatic understanding of scientific texts for the
purpose of information fusion (combining information from different
sources) in the domain of bioinformatics. The authors focus on
syntactic analysis, lexical representation and classification of
verbs, and investigation of coreference patterns. A sample corpus of
biomedical abstracts is analyzed from syntactic, semantic, and
pragmatic perspectives, and the results are related to the
possibility of automatic information extraction. Tools and resources
used for IE in the domain of news reports are evaluated with respect
to the biomedicine domain, and the necessary modifications are
discussed. |
|
Title: |
APPLYING A SEMANTIC INTERPRETER TO A
KNOWLEDGE EXTRACTION TASK |
Author(s): |
Fernando Gomez and Carlos Segami |
Abstract: |
A system that extracts knowledge from
encyclopedic texts is presented. The knowledge extraction component
is based on a semantic interpreter of English that uses an enhanced
WordNet. The input to the knowledge extraction component is the
output of the semantic interpreter. The extraction task was chosen
in order to test the semantic interpreter. The following aspects are
described: the definition of verb predicates and semantic roles, the
organization of the inferences, an evaluation of the system, and a
session with the system. |
|
Title: |
AUTOMATIC SUMMARIZATION BASED ON
SENTENCE MORPHO-SYNTACTIC STRUCTURE: NARRATIVE SENTENCES COMPRESSION |
Author(s): |
Mehdi Yousfi-Monod and Violaine
Prince |
Abstract: |
We propose automated text
summarization through sentence compression. Our approach uses
constituent syntactic function and position in the sentence
syntactic tree. We first define the idea of a constituent as well as
its role as an information provider, before analyzing contents and
discourse consistency losses caused by deleting such a constituent.
We explain why our method works best with narrative texts. With a
rule-based system using SYGFRAN's morpho-syntactic analysis for
French \cite{C84}, we select removable constituents. Our results are
satisfactory at the sentence level but less effective at the whole
text level, a situation we explain by describing the difference of
impact between constituents and relations. |
|
Title: |
A WEIGHTED MAXIMUM ENTROPY LANGUAGE
MODEL FOR TEXT CLASSIFICATION |
Author(s): |
Kostas Fragos, Yannis Maistros and
Christos Skourlas |
Abstract: |
The Maximum entropy (ME) approach has
been extensively used for various natural language processing tasks,
such as language modeling, part-of-speech tagging, text segmentation
and text classification. Previous work in text classification has
been done using maximum entropy modeling with binary-valued features
or counts of feature words. In this work, we present a method to
apply Maximum Entropy modeling for text classification in a
different way than it has been used so far, using weights both to
select the features of the model and to emphasize the importance of
each of them in the classification task. Using the X square
(chi-square) test to assess the contribution of each candidate
feature, we rank the features by their X square values, and the most
prevalent of them, those ranked with the highest X square scores, are
used as the selected features of the model. Instead of using Maximum
Entropy modeling in the classical way, we use the X square values to
weight the features of the model and thus give a different importance
to each of them. The method has been evaluated on the Reuters-21578
dataset for text classification tasks, giving very promising results
and performing comparably to some of the “state of the art” systems
in the classification field.
|
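The X square (chi-square) score that the method ranks features by can be computed, for one (term, class) pair, from a 2x2 contingency table of document counts; a minimal sketch (our illustration with invented counts, not the authors' code) is:

```python
def chi_square(n11: int, n10: int, n01: int, n00: int) -> float:
    """X^2 score of a (term, class) pair.
    n11: docs in the class containing the term; n10: docs outside the
    class containing the term; n01: docs in the class without the term;
    n00: docs outside the class without the term."""
    n = n11 + n10 + n01 + n00
    den = (n11 + n01) * (n11 + n10) * (n10 + n00) * (n01 + n00)
    if den == 0:
        return 0.0
    return n * (n11 * n00 - n10 * n01) ** 2 / den

# Rank candidate terms by their X^2 score, highest first; a term
# distributed independently of the class (like "the" here) scores 0.
candidates = {"cocoa": (6, 2, 4, 8), "the": (5, 5, 5, 5)}
ranked = sorted(candidates, key=lambda t: chi_square(*candidates[t]),
                reverse=True)
```

The top-ranked terms become the model's features, and their scores can double as the per-feature weights the abstract describes.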
|
Title: |
WHEN SMART HOME MEETS PERVASIVE
HEALTHCARE SERVICES USING MOBILE DEVICES AND SENSOR NETWORKS– STATUS
AND ISSUES |
Author(s): |
Ti-Shiang Wang |
Abstract: |
The present work is focused on the
systematization of a process of knowledge acquisition for its use in
intelligent management systems. The result was the construction of a
computational structure for use inside the institutions (Intranet)
as well as outside them (Internet). This structure was called
Knowledge Engineering Suite, an ontological engineering tool to
support the construction of ontologies in a collaborative
environment and was based on observations made on the Semantic Web,
UNL (Universal Networking Language) and WordNet. We use both a
knowledge representation technique called DCKR to organize
knowledge, and psychoanalytic studies, focused mainly on Lacan and
his language theory to develop a methodology called Mind Engineering
to improve the synchronicity between knowledge engineers and
specialists in a particular knowledge domain. |
|
Title: |
A KNOWLEDGE REPRESENTATION AND
REASONING MODULE FOR A DIALOGUE SYSTEM IN A MOBILE ROBOT |
Author(s): |
Luís Seabra Lopes, António J. S.
Teixeira and Marcelo Quinderé |
Abstract: |
The recent evolution of Carl, an
intelligent mobile robot, is presented. The paper focuses on the new
knowledge representation and reasoning module, developed to support
high-level dialogue. This module supports the integration of
information coming from different interlocutors and is capable of
handling contradictory facts. The knowledge representation language
is based on classical semantic networks, but incorporates some
notions from UML. Question answering is based on deductive as well
as inductive inference. |
|
Title: |
CLOSING THE GAP: COGNITIVELY
ADEQUATE, FAST BROAD-COVERAGE GRAMMATICAL ROLE PARSING |
Author(s): |
Gerold Schneider, Fabio Rinaldi,
Kaarel Kaljurand and Michael Hess |
Abstract: |
We present Pro3Gres, a fast robust
broad-coverage and deep-linguistic parser that has been applied to
and evaluated on unrestricted amounts of text from unrestricted
domains. We show that it is largely cognitively adequate and discuss
related approaches. We argue that Pro3Gres contributes to closing
the gap between psycholinguistics and language engineering, between
probabilistic parsing and formal grammar-based parsing, between
shallow parsing and full parsing, and between deterministic parsing
and non-deterministic parsing. We also describe the successful
applications of Pro3Gres, focusing on its use for parsing research
texts from the BioMedical domain. |
|
Workshop on
Ubiquitous Computing (IWUC-2005)
|
Title: |
WHEN SMART HOME MEETS PERVASIVE
HEALTHCARE SERVICES USING MOBILE DEVICES AND SENSOR NETWORKS– STATUS
AND ISSUES |
Author(s): |
Ti-Shiang Wang |
Abstract: |
In this paper, to deliver healthcare
service pervasively, especially to the home space, we first discuss
the status and activities on healthcare infrastructures and systems
using mobile devices and sensor networks. We also provide
information and illustrate the reasons why home healthcare will be
an even hotter space in the near future. With the advance of wireless
networks, mobile devices are increasingly in demand for users to
communicate with each other for voice service, data service, or both.
In addition, as medical records go digital and become available
anywhere, at any time, and on any kind of mobile device, mobile
healthcare has become a hot topic with many issues currently being
worked on. From the user's point of view, advanced sensing devices
and the networks based on them provide rich context and seamless
connection between users and mobile devices, so that personal data
or medical records can be updated as needed and the quality of
services can be improved. With the help of smart sensors and sensor
networks embedded either on the body or in the home space, the
quality of personal healthcare can be improved at lower cost as
well. In this paper, we also address some issues in implementing
home-based pervasive healthcare applications and provide a visionary
scenario integrating smart home and healthcare services. |
|
Title: |
A PLATFORM FOR UNIVERSAL ACCESS TO
APPLICATIONS |
Author(s): |
Nuno Valero Ribeiro and José Manuel
Brázio |
Abstract: |
This paper gives insight into the
services necessary for a system supporting one practical
application of the concept of Ubiquitous Computing. The application
scenario is an academic campus in which students may access typical
laboratory computer applications ubiquitously, i.e., anywhere and
at any time. We call this Universal Access to Applications. In this
scenario, each user may access and use an arbitrary, heterogeneous
set of applications on any computer anywhere on campus. We first
summarize a survey of technological solutions for enabling access
to non-native applications. We then design such a distributed
system using the MoNet methodology. Four steps are covered,
together with their main contributions: capturing requirements,
conceiving a Logical Model, elaborating a Functional Model, and
finally setting a Reference Model for implementation. From this
Reference Model it becomes clear that a careful choice of
middleware technology is fundamental for such a system. Finally,
conclusions from a proof-of-concept developed
platform (SDUA), based on the modelled system, are given. |
|
Title: |
TOWARDS ACCEPTABLE PUBLIC-KEY
ENCRYPTION IN SENSOR NETWORKS |
Author(s): |
Erik-Oliver Blaß and Martina
Zitterbart |
Abstract: |
One of the major problems for security
in sensor networks is the lack of resources. Typical sensor nodes,
such as the quite popular MICA and MICA2 Motes from UC Berkeley [1],
are based on a microcontroller architecture with only a few KBytes
of memory and severely limited computing ability. Strong public-key
cryptography is therefore commonly seen as infeasible on such
devices. Contrary to this prejudice, this paper presents an
efficient and lightweight implementation of public-key cryptography
algorithms based on elliptic curves. The code runs on Atmel's
8-bit ATmega128 microcontroller, the heart of the MICA2 platform. To
our knowledge this implementation is the first to offer acceptable
encryption speed while providing adequate security in sensor
networks. The key to our fast implementation is the use of offline
precomputation and handcrafting. |
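The offline-precomputation idea the abstract mentions can be sketched in a few lines: for a fixed generator G, the multiples 2^i·G are computed once (offline), so an online scalar multiplication k·G needs only table lookups and point additions. The sketch below uses a small textbook curve (y² = x³ + 2x + 2 over F₁₇); it is an illustration of the general technique, not the authors' ATmega128 code, and the curve is far too small to be secure.

```python
p, a = 17, 2                      # toy curve y^2 = x^3 + 2x + 2 over F_17 (NOT secure)
G = (5, 1)                        # a generator on that curve

def ec_add(P, Q):
    """Affine point addition; None represents the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None               # P + (-P) = infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def scalar_mult(k, P):
    """Plain double-and-add: doublings AND additions at run time."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

# Offline phase: precompute 2^i * G once and store the table.
TABLE, Q = [], G
for _ in range(8):                # enough for 8-bit scalars in this toy
    TABLE.append(Q)
    Q = ec_add(Q, Q)

def scalar_mult_precomp(k):
    """Online phase: only additions; doublings replaced by lookups."""
    R, i = None, 0
    while k:
        if k & 1:
            R = ec_add(R, TABLE[i])
        k >>= 1
        i += 1
    return R

assert scalar_mult(13, G) == scalar_mult_precomp(13)
```

On a real 8-bit microcontroller the table would live in flash, trading scarce RAM/cycles for ROM, which is the kind of handcrafted trade-off the abstract alludes to.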
|
Title: |
AN INFRASTRUCTURED-ARCHITECTURAL
MODEL (IAM) FOR PERVASIVE & UBIQUITOUS COMPUTING |
Author(s): |
R. Gunasekaran and V. Rhymend
Uthariayaraj |
Abstract: |
This paper describes IAM, an
extensible and modular architecture that addresses the
information-routing problem in pervasive computing while leveraging
significant existing work on composable Internet services and
adaptation for heterogeneous devices. IAM's central abstraction is the concept of a
trigger, a self-describing chunk of information bundled with the
spatial and/or temporal constraints that define the context in which
the information should be delivered. The IAM architecture manages
triggers at a centralized infrastructure server and arranges for the
triggers to be distributed to pervasive computing devices that can
detect when the trigger conditions have been satisfied and alert the
user accordingly. The main contribution of the architecture is an
infrastructure-centric approach to the trigger management problem.
We argue that pervasive computing devices benefit from extensive
support in the form of infrastructure computing services in at least
two ways. First, infrastructure adaptation services can help manage
communication among heterogeneous devices. Second, access to public
infrastructure services such as MapQuest and Yahoo can augment the
functionality of trigger management because they naturally support
the time and location dependent tasks typical of pervasive-computing
users. We describe our experience with a functional prototype
implementation that exploits GPS to simulate an AutoPC. |
|
Title: |
INTERACTING WITH OUR ENVIRONMENT
THROUGH SENTIENT MOBILE PHONES |
Author(s): |
Diego López de Ipiña, Iñaki Vázquez
and David Sainz |
Abstract: |
The latest mobile phones are offering
more multimedia features, better communication capabilities
(Bluetooth, GPRS, 3G) and are far more easily programmable
(extendible) than ever before. So far, the “killer apps” to exploit
these new capabilities have been presented in the form of MMS
(Multimedia Messaging), video conferencing and multimedia-on-demand
services. We deem that a new promising application domain for the
latest Smart Phones is their use as intermediaries between us and
our surrounding environment. Thus, our mobiles will behave as
personal butlers who assist us in our daily tasks, taking advantage
of the computational services provided at our working or living
environments. For this to happen, a key element is to add senses to
our mobiles: the capability to see (camera), hear (microphone) and notice
(Bluetooth) the objects and devices offering computer services
within an environment. In this paper, we propose the MobileSense
system which adds sensing capabilities to mobile phones. We
illustrate its use in two scenarios: (1) making mobiles more
accessible to people with disabilities and (2) enabling the mobiles
as guiding devices within a museum. |
|
Title: |
INTEGRATED AUTHORIZATION FOR GRID
SYSTEM ENVIRONMENTS |
Author(s): |
Jiageng Li |
Abstract: |
Grid computing has received
widespread attention in recent years as a significant new research
field. Yet to date, there has been only limited work on the grid
system authorization problem. In this paper, we address the
authorization problem and its requirements in a grid system
environment. We propose a new integrated authorization service that
tackles the authorization problem at two levels: grid system level
and organization unit level. It is shown that the new approach not
only meets the requirements of authorization in a grid system
environment but also overcomes the disadvantages found in existing
authorization designs. |
|
Title: |
SERVICE COMPOSITION BASED MIDDLEWARE
ARCHITECTURE FOR MOBILE GRID |
Author(s): |
M. A. Maluk Mohamed and D. Janakiram |
Abstract: |
Service Composition refers to the
construction of complex services with the help of more primitive and
easily executable services or components. With the proliferation of
wireless communication and the mobile Internet, the demand for
mobile data services has increased. In addition, the recent spurt of
e-services and m-services has increased the importance of service
composition. We envisage service composition to play a crucial role
in providing mobile devices access to complex services. The basis
for our proposed approach is to virtualize the individual system
resources as services that can be described, discovered and
dynamically configured at runtime to execute an application. To
accomplish such a global service composition we propose to add
functional layers over the Anonymous Remote Mobile Cluster Computing
model. The idea behind such middleware is to use the available
resources efficiently and to hide the complexity inherent in
managing heterogeneous services. This paper describes the unique
capabilities of the proposed middleware and gives the layered view
of the proposed architecture. |
|
Title: |
ARCHITECTURAL PATTERNS FOR
CONTEXT-AWARE SERVICES PLATFORMS |
Author(s): |
P. Dockhorn Costa, L. Ferreira Pires
and M. van Sinderen |
Abstract: |
Architectural patterns have been
proposed in many domains as a means of capturing recurring design
problems that arise in specific design situations. In this paper, we
present three architectural patterns that can be applied
beneficially in the development of context-aware services platforms.
These patterns present solutions for recurring problems associated
with managing context information and proactively reacting upon
context changes. We demonstrate the benefits of applying these
patterns by discussing the AWARENESS architecture. |
|
Title: |
ICRICKET: A PROGRAMMABLE BRICK FOR
KIDS' PERVASIVE COMPUTING APPLICATIONS |
Author(s): |
Fred Martin, Kallol Par, Kareem
Abu-Zahra, Vasiliy Dulsky and Andrew Chanler |
Abstract: |
The iCricket is a new
internet-enabled embedded control board with built-in motor and
sensor interface circuits. It is designed for use by pre-college
students and other programming novices. It includes a Logo virtual
machine with extensions that allow networked iCrickets to communicate
with one another, retrieving sensor values and remotely running each
other's Logo procedures. The underlying implementation uses standard
HTTP protocols. The iCricket's key contribution is that it will
allow programming novices (children, artists, and other
non-engineers) to implement pervasive computing applications with an
easy-to-use, interactive language (Logo). This paper focuses on the
iCricket hardware and software design. Later work will evaluate
results of using the design with various users. |
|
Title: |
QOS IMPLEMENTATION AND EVALUATION FOR
MOBILE AD HOC NETWORKS |
Author(s): |
Xuefei Li and Laurie Cuthbert |
Abstract: |
Future mobile ad hoc networks
(MANETs) are expected to be based on an all-IP architecture and to be
capable of carrying a multitude of real-time multimedia
applications such as voice, video and data. MANETs therefore need
reliable and efficient routing and quality of service (QoS)
mechanisms to support diverse applications with varying and
stringent requirements for delay, jitter, bandwidth and packet
loss. Multipath routing is very beneficial for avoiding traffic
congestion and broken communications in MANETs, where routes are
disconnected frequently due to mobility. Differentiated Services
(DiffServ), which is simple, efficient and scalable, can be used to
classify network traffic into different priority levels and to
apply different scheduling and queuing mechanisms to obtain QoS
guarantees. In this paper, we propose a practical node-disjoint
Multipath QoS Routing protocol supporting DiffServ (MQRD), which
provides low routing overhead and end-to-end QoS support.
Simulation results show that MQRD achieves better performance in
terms of packet delivery ratio and average delay.
|
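The DiffServ idea the abstract builds on — classifying traffic into priority levels and applying different scheduling to each — can be sketched as follows. The class names and the strict-priority discipline are illustrative assumptions, not MQRD's actual mechanism.

```python
# Sketch of DiffServ-style classification and scheduling (illustrative,
# not the MQRD protocol): packets are mapped to priority classes and a
# strict-priority scheduler always serves the highest non-empty queue.
from collections import deque

CLASSES = {"voice": 0, "video": 1, "data": 2}   # 0 = highest priority

class DiffServScheduler:
    def __init__(self):
        self.queues = [deque() for _ in CLASSES]

    def enqueue(self, packet, traffic_class):
        """Classification: the class name selects the priority queue."""
        self.queues[CLASSES[traffic_class]].append(packet)

    def dequeue(self):
        """Strict priority: drain higher-priority queues first."""
        for q in self.queues:
            if q:
                return q.popleft()
        return None

sched = DiffServScheduler()
sched.enqueue("d1", "data")
sched.enqueue("v1", "voice")
sched.enqueue("d2", "data")
print(sched.dequeue())   # "v1": voice jumps ahead of earlier data packets
```

Real DiffServ implementations usually temper strict priority with weighted or deficit round-robin to keep low-priority classes from starving.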
|
Title: |
PERVASIVE SECURE ELECTRONIC
HEALTHCARE RECORDS MANAGEMENT |
Author(s): |
Petros Belsis, Apostolos Malatras,
Stefanos Gritzalis, Christos Skourlas and Ioannis Chalaris |
Abstract: |
Pervasive environments introduce a
technological paradigm shift, giving a new impetus to the
functionality of applications, overcoming applicability barriers of
legacy applications. Electronic healthcare records management can
clearly benefit from the new challenges brought by this emerging
technology, due to its low cost and high percentage of user
adaptivity. Still, the sensitivity of medical data poses new
requirements on the design of a secure infrastructure based on the
ad-hoc networking schema which underlies pervasive environments. |
|
Title: |
SERVICE COMPOSITION IN EHOME SYSTEMS:
A RULE-BASED APPROACH |
Author(s): |
Michael Kirchhof and Philipp Stinauer |
Abstract: |
In this paper we look at
systems combining automated homes, called eHomes, with enterprises
and virtual enterprises. We call these systems eHome systems. We
focus on service composition in order to reduce complexity and
to improve the maintainability and extensibility of eHome services.
By services we mean any piece of software executed in a network
environment that makes the use and administration of ubiquitous
appliances easier. Currently, the complete functionality is
hard-coded into services without facilities for extension or reuse.
Many logical correlations (e.g., how to react if an alarm condition
is raised) are made explicit in an inappropriate way. To tackle
this problem, we introduce a declarative approach to specify
logical correlations and to combine functionalities and services
into new services, offering the required flexibility and
comprehensiveness.
|
|
Title: |
A SENSORY ORIENTED MODEL FOR
MONITORING UBIQUITOUS ENVIRONMENTS |
Author(s): |
Soraya Kouadri Mostéfaoui |
Abstract: |
Recently context and context-aware
computing have gained a remarkable momentum and attracted the
attention of several researchers. The work presented in this paper
contributes to this topic by providing a generic and flexible model
for handling heterogeneous sensor data. Our work towards this goal
is not the first; recently there have been many attempts in this
direction. However, to our knowledge, most of them are tightly
coupled to a particular set of application domains and lack
generality. The proposed model tries to overcome this shortcoming.
Our modelling concepts are founded on an XML-based approach, in
which context |
|
Workshop on
Security In Information Systems (WOSIS-2005)
|
Title: |
A SECURE HASH-BASED STRONG-PASSWORD
AUTHENTICATION SCHEME |
Author(s): |
Shuyao Yu, Youkun Zhang, Runguo Ye
and Chuck Song |
Abstract: |
Password authentication remains the
most common form of user authentication. Many strong-password
authentication schemes based on hash functions have been proposed so
far; however, none is sufficiently secure and efficient.
Based on an analysis of attacks against the OSPA protocol, we present
a hash-based Strong-Password mutual Authentication Scheme (SPAS),
which is resistant to DoS attacks, replay attacks, impersonation
attacks, and stolen-verifier attacks. |
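The abstract does not give SPAS's messages, but the hash-chain construction underlying many hash-based strong-password schemes (going back to Lamport's one-time passwords) can be sketched: the server stores only h^n(secret), and each login reveals one earlier link of the chain.

```python
# Illustrative hash-chain authentication (NOT the SPAS protocol itself,
# whose concrete messages are not given in the abstract).
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def chain(secret: bytes, n: int) -> bytes:
    """Apply h() n times; the server stores h^n(secret) as the verifier."""
    x = secret
    for _ in range(n):
        x = h(x)
    return x

secret, n = b"correct horse", 100
verifier = chain(secret, n)          # server-side stored value, not the secret

# One login: the client reveals the previous chain link; the server
# checks a single hash application, then rolls the verifier forward.
token = chain(secret, n - 1)
assert h(token) == verifier          # authentication succeeds
verifier = token                     # a replayed token no longer verifies
```

The stolen-verifier resistance the abstract claims comes from the same asymmetry: learning h^n(secret) does not reveal any earlier link of the chain.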
|
Title: |
TRANSITIVE SIGNATURES BASED ON
BILINEAR MAPS |
Author(s): |
Changshe Ma, Kefei Chen, Shengli Liu
and Dong Zheng |
Abstract: |
The notion of transitive signatures,
first introduced by Micali and Rivest, is a way to digitally sign
the vertices and edges of a dynamically growing, transitively closed
graph. All previously proposed transitive signature schemes were
constructed from the discrete logarithm, factoring, or RSA
assumptions. In this paper, we introduce two alternative
realizations of transitive signatures based on bilinear maps. The
proposed transitive signature schemes possess the following
properties: (i) they are provably secure against adaptive
chosen-message attacks in the random oracle model; (ii) no node
certificates are needed, so the signature algebra is compact;
(iii) when using the Weil pairing, our signature schemes are more
efficient than all previously proposed schemes. |
|
Title: |
PUBLIC-KEY ENCRYPTION BASED ON MATRIX
DIAGONALIZATION PROBLEM |
Author(s): |
Jiande Zheng |
Abstract: |
We propose in this paper a novel
public-key function based on the matrix diagonalization problem over
a ring of algebraic integers, develop a scheme for message
encryption with it, and show that its one-way property is related
either to the complexity of extracting irrational roots of a
high-order polynomial equation, or to the complexity of finding a
secret composite factor of a big integer that is a product of a
large number of primes. The new public-key cryptosystem has two
original features that distinguish it from existing ones: (a) it
recognizes the ability of adversaries to factor big integers; (b) it
requires only simple (modulus-free) additions and multiplications
for message encryption and decryption; no high-order exponentiation is required. |
|
Title: |
A REAL-TIME INTRUSION PREVENTION
SYSTEM FOR COMMERCIAL ENTERPRISE DATABASES AND FILE SYSTEMS |
Author(s): |
Ulf T. Mattsson |
Abstract: |
Modern intrusion detection systems
comprise three fundamentally different approaches: host-based,
network-based, and a relatively recent addition called
procedural-based detection. The first two have been extremely
popular in the commercial market for a number of years because
they are relatively simple to use, understand and maintain. However,
they fall prey to a number of shortcomings, such as scaling with
increased traffic requirements, the use of complex and
false-positive-prone signature databases, and their inability to
detect novel intrusive attempts. This paper presents an overview of
our work in creating a practical database intrusion detection
system. Based on many years of database security research, the
proposed solution detects a wide range of specific and general forms
of misuse, provides detailed reports, and has a low false-alarm
rate. Traditional commercial implementations of database security
mechanisms are very limited in defending against successful data attacks.
Authorized but malicious transactions can make a database useless by
impairing its integrity and availability. The proposed solution
offers the ability to detect misuse and subversion through the
direct monitoring of database operations inside the database host,
providing an important complement to host-based and network-based
surveillance. |
|
Title: |
ANALYSING THE WOO-LAM PROTOCOL USING
CSP AND RANK FUNCTIONS |
Author(s): |
Siraj Shaikh and Vicky Bush |
Abstract: |
Designing security protocols is a
challenging and deceptive exercise. Even small protocols providing
straightforward security goals, such as authentication, have been
hard to design correctly, leading to the presence of many subtle
attacks. Over the years various formal approaches have emerged to
analyse security protocols making use of different formalisms.
Schneider has developed a formal approach to modeling security
protocols using the process algebra CSP. He introduces the notion of
rank functions to analyse the protocols. We demonstrate an
application of this approach to the Woo-Lam protocol. We describe
the protocol in detail along with an established attack on its
goals. We then describe Schneider’s rank function theorem and use it
to analyse the protocol. |
|
Title: |
A UML-BASED METHODOLOGY FOR SECURE
SYSTEMS: THE DESIGN STAGE |
Author(s): |
Eduardo B. Fernandez, Tami Sorgente
and María M. Larrondo-Petrie |
Abstract: |
We have previously proposed a
UML-based secure systems development methodology that uses patterns
and architectural layers. We studied requirements and analysis
aspects and combined analysis patterns with security patterns to
build secure conceptual models. Here we extend this methodology to
the design stage. Design artifacts provide a way to enforce security
constraints. We consider the use of views, components, and
distribution. |
|
Title: |
ID-BASED SERIAL MULTISIGNATURE SCHEME
USING BILINEAR PAIRINGS |
Author(s): |
Raju Gangishetti, M. Choudary
Gorantla, Manik Lal Das, Ashutosh Saxena and Ved P. Gulati |
Abstract: |
This paper presents an ID-based
serial multisignature scheme using bilinear pairings. We use Hess's
ID-based signature scheme as the base scheme for our multisignature
scheme. Our scheme requires a forced verification at every level to
avoid overlooking the signatures of the predecessors. We show
that the scheme is secure against existential forgery under adaptive
chosen message attack in the random oracle model. |
|
Title: |
AN ATTRIBUTE-BASED-DELEGATION-MODEL
AND ITS EXTENSION |
Author(s): |
Chunxiao Ye, Zhongfu Wu and Yunqing
Fu |
Abstract: |
In current delegation models,
delegation security fully depends on the delegator and the security
administrator. In many cases we need more secure delegation with
strict constraints, yet the delegation constraint in current models
is only a delegation prerequisite condition. This paper proposes an
Attribute-Based Delegation Model (ABDM) with an extended delegation
constraint consisting of a delegation attribute expression (DAE) and
a delegation prerequisite condition (CR). In ABDM, a delegatee must
satisfy the delegation constraint (especially the DAE) when assigned
to a delegation role. With this constraint, a delegator can restrict
the delegatee candidates more strictly. ABDM relieves the security
management effort of the delegator and security administrator, and
it supports two new types of delegation: decided-delegatee and
undecided-delegatee. In ABDM, however, temporary and permanent
delegations share the same constraint, which limits the scope of
delegatee candidates in temporary delegation: if a delegator wants
to temporarily delegate his permissions to a person who does not
satisfy the delegation constraint, ABDM does not support the
operation. For more flexibility and security, we therefore propose
an extension of ABDM named ABDMX. In ABDMX, a delegator can
temporarily delegate some high-level permissions to low-level
delegatee candidates for a short term, but cannot delegate them
permanently. |
|
Title: |
A PROTOCOL FOR INCORPORATING
BIOMETRICS IN 3G WITH RESPECT TO PRIVACY |
Author(s): |
Christos K. Dimitriadis and Despina
Polemi |
Abstract: |
A common parameter in the security
mechanisms of Third Generation (3G) mobile systems is user
authentication, which is usually implemented by the use of a
Personal Identification Number (PIN) or a password. However, neither
knowledge nor the possession of an item distinguishes a person
uniquely, revealing an inherent security weakness of password- and
token-based authentication mechanisms. Moreover, PIN stealing,
guessing and cracking have become very popular, with software tools
implementing relevant attacks and research papers describing
sophisticated techniques for invading PIN security. This paper
proposes a secure protocol, called BIO3G, for embedding biometrics
in 3G security. It differs from the common practice of utilizing
biometrics locally, for gaining access to the device, by providing
real end-to-end strong user authentication to the mobile operator,
requiring no storing or transferring of biometric data and
eliminating at the same time any biometric enrolment and
administration procedures, which are time-consuming for the user and
expensive for the mobile operator. |
|
Title: |
AN APPROACH FOR MODELING INFORMATION
SYSTEMS SECURITY RISK ASSESSMENT |
Author(s): |
Subhas C. Misra, Vinod Kumar and Uma
Kumar |
Abstract: |
In this paper, we present a
conceptual modeling approach, which is new in the domain of
information systems security risk assessment. The approach is
helpful for performing means-end analysis, thereby uncovering the
structural origin of security risks in an information system, and
how the root-causes of such risks can be controlled from the early
stages of a project. The approach addresses the limitations of
existing security risk assessment models by exploring the
strategic dependencies between the actors of a system, and analyzing
the motivations, intents, and rationales behind the different
entities and activities constituting the system. |
|
Title: |
STATEFUL DESIGN FOR SECURE
INFORMATION SYSTEMS |
Author(s): |
Thuong Doan, Laurent D. Michel,
Steven A. Demurjian and T. C. Ting |
Abstract: |
UML has gained wide acceptance as a
tool for the design of component-based applications, containing
different diagrams (e.g., use-case, class, sequence, activity, etc.)
for representing functional requirements. However, UML is lacking in
its ability to model security requirements, which are the norm
rather than the exception in today's applications. This paper
presents and explains techniques that support stateful design for
secure information systems, for applications constructed using UML
extended with role-based access control and mandatory access
control properties. From a security-assurance perspective, we
track all states of a design to ensure that a new state (created
from a prior state) is always free of security inconsistencies, in
terms of the privileges of users (playing roles) against the
application's components. This paper examines the theory of our
approach, along with its realization as part of the design process
and within the UML tool Together Control Center. |
|
Title: |
ANALYSIS OF THE PHISHING EMAIL
PROBLEM AND DISCUSSION OF POSSIBLE SOLUTIONS |
Author(s): |
Christine Drake, Andrew Klein and
Jonathan Oliver |
Abstract: |
With the growth of email, it was only
a matter of time before social engineering efforts used to defraud
people moved online. Fraudulent phishing emails are specifically
designed to imitate legitimate correspondence from reputable
companies but fraudulently ask recipients for personal or corporate
information. Recent consumer phishing attempts include spoofs of
eBay, PayPal and Citibank. Phishing emails can lead to identity
theft, security breaches, and financial loss and liability.
Phishing also damages e-commerce because some people avoid Internet
transactions for fear they will become victims of fraud. In a recent
survey, both fraudulent and legitimate emails were misidentified 28
percent of the time, and 90 percent of respondents misidentified at
least one email. Based on these results, we cannot expect consumers
alone to be able to recognize phishing emails. Instead, we must
combine multiple solutions to combat phishing, including technical
measures, legal measures, best business practices, and consumer
education. |
|
Title: |
DETECTION OF THE OPERATING SYSTEM
CONFIGURATION VULNERABILITIES WITH SAFETY EVALUATION FACILITY |
Author(s): |
Peter D. Zegzhda, Dmitry P. Zegzhda
and Maxim O. Kalinin |
Abstract: |
In this paper, we apply formal
verification methodologies and the Safety Evaluation Workshop (SEW),
a system analysis facility, to verify safety properties of operating
systems. Using our technique it becomes possible to discover
security drawbacks in any security system based on an access control
model of the 'state machine' style. Through our case study of model
checking in Sample Vulnerability Checking (SVC), we show how the SEW
tool can be applied to MS Windows 2000 to specify and verify the
safety problem of system security. |
|
Title: |
VALIDATING THE SECURITY OF MEDUSA: A
SURVIVABILITY PROTOCOL FOR SECURITY SYSTEMS |
Author(s): |
Wiebe Wiechers and Semir Daskapan |
Abstract: |
In this paper a new approach for
enabling survivable secure communications in multi-agent systems is
validated through CSP/FDR state analysis. The security validation of
this approach centers on three security properties:
confidentiality, integrity and authentication. Requirements for
these security properties are defined for every message generated by
this security protocol during its life cycle. A logical analysis of
these requirements is followed up by a thorough security validation,
based on a model-checking CSP/FDR analysis. Both analyses show that
with minor modifications the protocol is able to deliver on its
security requirements for the three tested security properties.
Finally, the protocol is optimized with possible improvements that
increase its efficiency whilst maintaining the security
requirements. |
|
Title: |
EXTERNAL OBJECT TRUST ZONE MAPPING
FOR INFORMATION CLUSTERING |
Author(s): |
Yanjun Zuo and Brajendra Panda |
Abstract: |
In a loosely-coupled system various
objects may be imported from different sources and the integrity
levels of these objects can vary widely. Like downloaded information
from the World Wide Web, these imported objects should be carefully
organized and disseminated to different trust zones, which meet the
security requirements of different groups of internal applications.
Assigning an object to a trust zone is called trust zone mapping,
which is essentially a form of information clustering and is
designed to guide internal applications when they are using objects
from different zones. We developed methods to perform trust zone
mapping based on objects’ trust attribute values. The defined
threshold selection operators allow internal applications to best
express their major security concerns while tolerating insignificant
issues to certain degrees. As two major trust attributes, the
primary and secondary trust values are explained and we illustrate
how to calculate each of them. |
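As an illustration only — the attribute weights, zone names and cut-offs below are invented, since the paper's threshold operators are not spelled out in the abstract — threshold-based trust zone mapping might look like:

```python
# Hypothetical sketch of trust zone mapping: each imported object
# carries trust attribute values; a score combining the primary and
# secondary trust values is compared against zone cut-offs. All
# weights, names and cut-offs here are invented for illustration.

ZONES = ["untrusted", "restricted", "trusted"]     # low -> high

def trust_score(obj):
    # primary trust (e.g. source reputation) dominates; secondary
    # trust (e.g. content checks) contributes the remaining weight
    return 0.7 * obj["primary"] + 0.3 * obj["secondary"]

def map_to_zone(obj, cutoffs=(0.4, 0.75)):
    """Zone i+1 requires score >= cutoffs[i]; below the first cut-off
    the object stays in the lowest zone."""
    s = trust_score(obj)
    zone = 0
    for c in cutoffs:
        if s >= c:
            zone += 1
    return ZONES[zone]

obj = {"primary": 0.9, "secondary": 0.6}
print(map_to_zone(obj))   # score 0.81 clears both cut-offs -> "trusted"
```

Applications with different security concerns would supply their own weights and cut-offs, which mirrors the abstract's point that each internal application expresses its major concerns while tolerating insignificant issues.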
|
Title: |
A SYSTEMATIC APPROACH TO ANONYMITY |
Author(s): |
Sabah S. Al-Fedaghi |
Abstract: |
Personal information anonymity
concerns anonymizing information that identifies individuals, in
contrast to anonymizing activities such as downloading
copyrighted items on the Internet. It may refer to encrypting
personal data, “generalization and suppression” (Samarati, 2001),
‘untraceability’ or ‘unidentifiability’ of identity in the network,
etc. A common underlying notion is hiding the “identities” of the
persons to whom the data refers. We introduce a systematic framework
for personal information anonymization by utilizing a new definition
based on referents to persons in linguistic assertions.
Anonymization is classified with respect to the information's
content, its proprietor (the person it refers to) or its possessor.
A general methodology is introduced to anonymize private
information, based on canonical forms that include a personal
identity. It is shown that the method applies to both textual and
tabular data. |
|
Title: |
HONEYNET CLUSTERS AS AN EARLY WARNING
SYSTEM FOR PRODUCTION NETWORKS |
Author(s): |
Sushan Sudaharan, Srikrishna
Dhammalapati, Sijan Rai and Duminda Wijesekera |
Abstract: |
Due to the prevalence of distributed
and coordinated Internet attacks, many researchers and network
administrators study the nature and strategies of attackers.
Honeynets, which analyze event logs using intrusion detection
systems and active network monitoring, are being deployed to attract
potential attackers in order to investigate their modus operandi.
Our goal is to use Honeynet clusters as real-time warning systems in
production networks. Towards this objective, we have built a
Honeynet cluster and have run experiments to determine its
effectiveness. The majority of Honeynets function in isolation and
do not share information in real time. To rectify this deficiency,
we built a federation of cooperating Honeynets (referred to as a
Honeynet cluster) that shares knowledge of malicious traffic. This
paper describes the methods used in building a hardware-assisted
Honeynet cluster and testing its effectiveness. |
|
Title: |
SECURE UML INFORMATION FLOW USING
FLOWUML |
Author(s): |
Khaled Alghathbar, Duminda Wijesekera
and Csilla Farkas |
Abstract: |
FlowUML is a logic-based system to
validate information flow policies at the requirements specification
phase of UML-based designs. It uses Horn clauses to specify
information flow policies that can be checked against flow
information extracted from UML sequence diagrams. FlowUML policies
can be written at a coarse-grained level of caller-callee
relationships or at a finer level involving passed attributes.
Validating information flow requirements at an early stage prevents
costly fixes mandated during later stages of the development life
cycle. |
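The Horn-clause flavor of such a check can be sketched without a logic engine: direct flows come from sequence-diagram messages, derived flows are their transitive closure (flow(X,Z) :- flow(X,Y), flow(Y,Z)), and a policy forbids certain source/sink pairs. The component and policy names below are invented, not taken from FlowUML.

```python
# Sketch in the spirit of Horn-clause information-flow checking
# (illustrative names; not FlowUML's actual predicates).

direct = {("User", "WebServer"), ("WebServer", "Logger"),
          ("WebServer", "DB")}          # flows read off a sequence diagram

def derived_flows(direct):
    """flow(X,Z) :- flow(X,Y), flow(Y,Z) -- iterate to a fixed point."""
    flows = set(direct)
    while True:
        new = {(a, d) for (a, b) in flows for (c, d) in flows if b == c}
        if new <= flows:
            return flows
        flows |= new

POLICY_DENY = {("User", "Logger")}      # e.g. credentials must not be logged

violations = derived_flows(direct) & POLICY_DENY
print(violations)                       # the indirect User -> Logger flow is caught
```

Catching the indirect User-to-Logger flow at the requirements stage is exactly the kind of early violation the abstract argues is cheaper to fix than one found late in the life cycle.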
|
Title: |
AN EFFECTIVE CERTIFICATELESS
SIGNATURE SCHEME BASED ON BILINEAR PAIRINGS |
Author(s): |
M. Choudary Gorantla, Raju
Gangishetti, Manik Lal Das and Ashutosh Saxena |
Abstract: |
In this paper we propose a
certificateless signature scheme based on bilinear pairings. The
scheme effectively removes the need for a secure channel for key
issuance between the trusted authority and users, and avoids the
key escrow problem, an inherent drawback of ID-based cryptosystems.
The scheme uses a simple blinding technique to eliminate the need
for a secure channel, and a user-chosen secret value to avoid the
key escrow problem. The signature scheme is secure against adaptive
chosen-message attack in the random oracle model. |
|
Title: |
CONTROLLED SHARING OF PERSONAL
CONTENT USING DIGITAL RIGHTS MANAGEMENT |
Author(s): |
Claudine Conrado, Milan Petkovic,
Michiel van der Veen and Wytse van der Velde |
Abstract: |
This paper describes a system which
allows the controlled distribution of personal digital content by
users. The system extends an existing Digital Rights Management
system that protects commercial copyrighted content by essentially
allowing users to become content providers. This fact, however,
makes the system vulnerable to illegal content distribution, i.e.,
distribution by users who do not own the content. To solve this
problem, a solution is proposed which involves the compulsory
registration of a user's personal content with a trusted authority.
During registration, content identity is initially checked to verify
whether the content is new. If it is, the association between user
identity and content is securely recorded by the authority, with
users also having the possibility to remain anonymous towards any
other party. In this way, the trusted authority can always verify
personal content ownership. Moreover, in case the initial content
identification fails and content is illegally registered, the
authority can ensure user accountability. |
|
Title: |
USING REPUTATION SYSTEMS TO COPE WITH
TRUST PROBLEMS IN VIRTUAL ORGANIZATIONS |
Author(s): |
Marco Voss and Wolfram Wiesemann |
Abstract: |
The concept of virtual organizations
(VOs) denotes a relatively new organizational approach. It is
intended to allow especially small- and medium-sized firms to
cooperate rapidly by forming ad-hoc organizations in order to exploit
business opportunities that would otherwise not be manageable for the
participants alone. VOs will span enterprise and national borders.
This paper addresses the trust problem inherent in virtual
organizations and proposes reputation systems, which have already
proved useful in many domains of computer science, as a solution. We
finally present a reputation system for VO marketplaces that pays
special attention to the privacy requirements specific to this
scenario. |
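The basic mechanics of such a reputation system can be sketched as follows (an assumed minimal scheme, not the paper's design): partners rate each other after a transaction under pseudonyms, and reputation is the smoothed fraction of positive ratings so that newcomers start neutral:

```python
from collections import defaultdict

class ReputationSystem:
    """Binary ratings per pseudonym; Laplace-smoothed positive fraction."""

    def __init__(self):
        self.pos = defaultdict(int)
        self.neg = defaultdict(int)

    def rate(self, pseudonym, positive):
        if positive:
            self.pos[pseudonym] += 1
        else:
            self.neg[pseudonym] += 1

    def reputation(self, pseudonym):
        p, n = self.pos[pseudonym], self.neg[pseudonym]
        # Bayesian (Laplace) estimate: unknown parties score 0.5.
        return (p + 1) / (p + n + 2)

rs = ReputationSystem()
for ok in (True, True, True, False):
    rs.rate("sme-42", ok)           # hypothetical VO member
print(round(rs.reputation("sme-42"), 2))  # 4/6 ≈ 0.67
print(rs.reputation("newcomer"))          # 0.5
```

Keying ratings to pseudonyms rather than legal identities hints at the privacy requirement the paper emphasizes for VO marketplaces.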
|
Title: |
AN APPROACH FOR THE ANALYSIS OF
SECURITY STANDARDS FOR AUTHENTICATION IN DISTRIBUTED SYSTEMS |
Author(s): |
H. A. Eneh and O. Gemikonakli |
Abstract: |
In this paper, we present our
analysis of the leading standards for authentication in distributed
systems, in order to illustrate the extensibility of a finite proof
system initially adopted by [3], which had previously been
illustrated only with the Woo and Lam protocol. Our inference rule
shows that Kerberos version 5 remains vulnerable when an attacker has
unlimited communication and computational power, especially in a
single broadcast network. This vulnerability can aid a masquerader
participating in the protocol. We also prove the possibility of a
masquerade attack when an intruder participates in the SAML
protocol. Though our inference rule, as part of our pre-emptive
protocol tool still in the early stages of development, may present
some analytical difficulties, it has the potential to reveal subtle
flaws that may not be detected by rules of the same family. |
|
Title: |
AN EFFICIENT AND SIMPLE WAY TO TEST
THE SECURITY OF JAVA CARDS™ |
Author(s): |
Serge Chaumette and Damien Sauveron |
Abstract: |
Until recently it was impossible to
have more than one application running on a smart card.
Multiapplication cards, and especially Java Cards, now make it
possible to have several applications sharing the same physical
piece of plastic. Today, these cards accept code only after
authentication. In the future, however, the cards will be open and
everybody will be authorized to upload an application. This raises
new security problems by creating additional ways to attack Java
Cards. These problems and the method to test them are the topic of
this paper. The attacks are illustrated with code samples. The
method presented here can be applied right now by authorised people
(e.g. an ITSEF) to test the security of Java Cards, since they have
the authentication keys; tomorrow a hacker may also be able to use
this method to attack cards without needing the keys. |
|
Title: |
TREE AUTOMATA FOR SCHEMA-LEVEL
FILTERING OF XML ASSOCIATIONS |
Author(s): |
Vaibhav Gowadia and Csilla Farkas |
Abstract: |
In this paper we present query
filtering techniques based on bottom-up tree automata for XML access
control. In our authorization model (RXACL), RDF statements are used
to represent security objects and to express the security policy. We
present the concepts of simple security objects and association
security objects. Our model allows us to express and enforce access
control on XML trees and their associations. We propose a
query-filtering technique that evaluates XML queries to detect
disclosure of association-level security objects. We use tree
automata to model security objects. Intuitively, a query Q discloses
a security object o iff the (tree) automaton corresponding to o
accepts Q. We show that our schema-level method detects all possible
disclosures, i.e., it is complete. |
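The "automaton accepts Q" test the abstract describes can be sketched with a deterministic bottom-up tree automaton (the encoding, state names, and example policy below are assumptions for illustration, not RXACL itself):

```python
# A tree is (label, [subtrees]).  The automaton assigns a state to each
# node from its label and its children's states, bottom-up; a query
# "discloses" the security object iff the root reaches an accepting state.

def run(automaton, tree):
    """automaton maps (label, tuple_of_child_states) -> state."""
    label, children = tree
    child_states = tuple(run(automaton, c) for c in children)
    return automaton.get((label, child_states), "reject")

# Hypothetical association security object: a query that joins
# /patient/name with /patient/illness.
delta = {
    ("name", ()): "q_name",
    ("illness", ()): "q_ill",
    ("patient", ("q_name", "q_ill")): "q_assoc",  # accepting state
}
accepting = {"q_assoc"}

query = ("patient", [("name", []), ("illness", [])])
print(run(delta, query) in accepting)  # True: the association is disclosed
```

A query asking only for `name` or only for `illness` would not reach `q_assoc`, matching the distinction between simple and association security objects.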
|
Title: |
TOWARDS A PROCESS FOR WEB SERVICES
SECURITY |
Author(s): |
Carlos Gutiérrez, Eduardo
Fernández-Medina and Mario Piattini |
Abstract: |
Web Services (WS) security has been
developed enormously by the major organizations and consortiums of
the industry during the last few years. This has led to the
appearance of a huge number of WS security standards, and
organizations have consequently been reticent about adopting
technologies based on this paradigm because of the learning curve
necessary to integrate security into their practical deployments. In
this paper, we present PWSSec (Process for Web Services Security),
which enables the integration of a set of specific security stages
into the traditional phases of WS-based systems development. PWSSec
defines three stages, WSSecReq, WSSecArch and WSSecTech, that
facilitate, respectively, the definition of WS-specific security
requirements, the development of a WS-based security architecture,
and the identification of the WS security standards that the
security architecture must articulate to implement the security
services. |
|
Title: |
COOPERATIVE DEFENSE AGAINST NETWORK
ATTACKS |
Author(s): |
Guangsen Zhang and Manish Parashar |
Abstract: |
Distributed denial of service (DDoS)
attacks on the Internet have become an immediate problem. As DDoS
streams do not have common characteristics, currently available
intrusion detection systems (IDS) cannot detect them accurately. As
a result, defending against DDoS attacks with currently available
IDS will dramatically affect legitimate traffic. In this paper, we
propose a distributed approach to defending against distributed
denial of service attacks by coordinating across the Internet.
Unlike traditional IDS, we detect and stop DDoS attacks within the
intermediate network. In the proposed approach, DDoS defense systems
are deployed in the network to detect DDoS attacks independently. A
gossip-based communication mechanism is used to exchange information
about network attacks between these independent detection nodes,
aggregating information about the overall network attacks observed.
Using the aggregated information, the individual defense nodes have
approximate information about global network attacks and can stop
them more effectively and accurately. To provide reliable, rapid and
widespread dissemination of attack information, the system is built
as a peer-to-peer overlay network on top of the Internet. |
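The gossip-based aggregation idea can be illustrated with a toy deterministic variant (a synchronous ring of nodes is assumed here purely for illustration; the paper's actual protocol and overlay are not specified in the abstract):

```python
# Each defense node starts with only its local count of suspicious
# flows.  Repeated pairwise averaging with a neighbour preserves the
# global sum, so every node converges to the global average -- an
# approximate view of the overall attack, built without any central
# collector.

def gossip_average(local_counts, rounds=30):
    vals = list(local_counts)
    n = len(vals)
    for _ in range(rounds):
        for i in range(n):
            j = (i + 1) % n
            vals[i] = vals[j] = (vals[i] + vals[j]) / 2
    return vals

counts = [0, 0, 120, 4, 76]            # suspicious flows seen per node
estimates = gossip_average(counts)
print(all(abs(v - 40) < 0.5 for v in estimates))  # global mean is 40
```

Node 3 (4 local flows) ends up "knowing" that the network as a whole sees ~40 flows per node, which is the kind of global estimate the defense nodes use to act more accurately.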
|
Title: |
TOWARDS A UML 2.0/OCL EXTENSION FOR
DESIGNING SECURE DATA WAREHOUSES |
Author(s): |
Rodolfo Villarroel, Eduardo
Fernández-Medina, Juan Trujillo and Mario Piattini |
Abstract: |
At present, it is very difficult to
develop a methodology that fulfills all criteria and comprises all
security constraints, in terms of confidentiality, integrity and
availability, for successfully designing data warehouses. Even if
such a methodology were developed, its complexity would prevent its
success. The solution, therefore, is an approach in which the
techniques and models defined by the most accepted modeling
standards (such as UML) are extended by integrating the necessary
security aspects that, at present, are not covered by the existing
methodologies. In this paper, we focus on solving confidentiality
problems in the conceptual modeling of data warehouses by defining a
Profile using the UML 2.0 extensibility mechanisms. In addition, we
define an OCL extension that allows us to specify the static and
dynamic security constraints of the elements of data warehouse
conceptual models, and we show the benefit of our approach by
applying this profile to an example. |
|
Title: |
A HONEYPOT IMPLEMENTATION AS PART OF
THE BRAZILIAN DISTRIBUTED HONEYPOTS PROJECT AND STATISTICAL ANALYSIS
OF ATTACKS AGAINST A UNIVERSITY’S NETWORK |
Author(s): |
Claudia J. Barenco Abbas, Alessandra
Lafetá, Giuliano Arruda and Luis Javier Garcia Villalba |
Abstract: |
This paper describes the deployment
of a honeypot at the University of Brasília (UnB), configuring a
single machine as part of the Distributed Honeypots Project of the
Brazilian Honeypots Alliance and the Honeynet.BR Project. The work
first presents all the tools needed to implement the honeypot
environment, as well as the implementation itself. Afterwards, the
collected data about the attacks and their analysis are presented.
Finally, conclusions are drawn and future work is suggested. |
|
Title: |
SISBRAV – BRAZILIAN VULNERABILITY
ALERT SYSTEM |
Author(s): |
Robson de Oliveira Albuquerque,
Daniel Silva Almendra, Leonardo Lobo Pulcineli, Rafael Timoteo de
Sousa Junior, Claudia J. B. Abbas and Luis Javier Garcia Villalba |
Abstract: |
This paper describes the design and
implementation of a vulnerability search and alert system based on
free software. SisBrAV (the Portuguese acronym for Brazilian
Vulnerability Alert System) consists of a spider mechanism that
explores several security-related sites for information on
vulnerabilities, and an intelligent interpreter responsible for
analyzing and sorting the relevant data and feeding it into a
database. With that information in hand, an email notifier sends out
alerts, in Portuguese, about new vulnerabilities to registered
clients, according to the operating systems and services run in
their environments. In addition to the email notifier, a web server
will also be implemented, allowing systems administrators to perform
on-demand custom searches in the vulnerabilities database. |
|
Title: |
MANET - AUTO CONFIGURATION WITH
DISTRIBUTED CERTIFICATION AUTHORITY MODELS CONSIDERING ROUTING
PROTOCOLS USAGE |
Author(s): |
Robson de Oliveira Albuquerque, Maíra
Hanashiro, Rafael Timoteo de Sousa Junior, Claudia J. B. Abbas and
Luis Javier Garcia Villalba |
Abstract: |
In this paper, we discuss
certification, authentication, auto-configuration and routing for
mobile ad hoc networks (MANETs). The presented design is based on
the works [1], [2] and [3]. We describe distributed certification,
MAE authentication, the auto-configuration process and routing
protocols. We then point out some problems of these models and
propose solutions involving routing and other protocol
modifications. |
|
Title: |
TOWARDS AN INTEGRATION OF SECURITY
REQUIREMENTS INTO BUSINESS PROCESS MODELING |
Author(s): |
Alfonso Rodríguez, Eduardo
Fernández-Medina and Mario Piattini |
Abstract: |
Business Processes are considered an
essential resource with which companies can optimize and assure
their quality, gaining advantages over their competitors. Business
Process Modeling is therefore relevant, since it allows us to
represent the essence of the business. A notation for modeling
businesses must be able to capture the majority of the business's
requirements. We have had the opportunity to verify that security
requirements are scarcely considered in the notations most used
today to model business processes. In this work, we present the
security aspects that can be modelled from the business experts’
domain and that have been scarcely studied in business process
modeling, a review of the main notations used for modeling, and a
proposal to represent security requirements based on the knowledge
of the experts in the business. |
|
Title: |
RETURN ON SECURITY INVESTMENT (ROSI):
A PRACTICAL QUANTITATIVE MODEL |
Author(s): |
Wes Sonnenreich, Jason Albanese and
Bruce Stout |
Abstract: |
Organizations need practical security
benchmarking tools in order to plan effective security strategies.
This paper explores a number of techniques that can be used to
measure security within an organization. It proposes a benchmarking
methodology that produces results that are of strategic importance
to both decision makers and technology implementers. |
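A commonly quoted form of the ROSI ratio can make the benchmarking idea concrete (the formula shape and all numbers below are stated as assumptions for illustration, not as the paper's exact model):

```python
# ROSI = (risk_exposure * risk_mitigated - solution_cost) / solution_cost
# where risk_exposure is the annualized loss expectancy (ALE):
# expected cost per incident times expected incidents per year.

def rosi(loss_per_incident, incidents_per_year,
         risk_mitigated, solution_cost):
    exposure = loss_per_incident * incidents_per_year  # ALE
    return (exposure * risk_mitigated - solution_cost) / solution_cost

# Hypothetical: $40k per incident, 5 incidents/year, a control that
# mitigates 60% of that risk and costs $25k/year.
print(rosi(40_000, 5, 0.60, 25_000))  # 3.8, i.e. a 380% return
```

The hard part, which the paper's benchmarking methodology addresses, is obtaining defensible numbers for exposure and mitigation rather than the arithmetic itself.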
|
Workshop on
Computer Supported Activity Coordination (CSAC-2005)
|
Title: |
A WEB SERVICES BASED COMMUNICATION
SERVICES FRAMEWORK FOR COLLABORATIVE WORK |
Author(s): |
Jun Liu, Bo Yang and Wei Lu |
Abstract: |
This paper considers the problem of
integrating communication services that support group collaboration
systems. Past experience has shown that heterogeneous communication
services are extremely difficult to integrate into a collaboration
environment and to extend to meet continuously changing
requirements. This paper proposes a common, interoperable framework
based on Web Services technology for integrating communication
services in a collaboration environment. The framework allows the
implementation of reusable communication-service components that can
be plugged into the collaboration system and invoked on demand
according to the communication requirements of collaboration
applications. Based on this framework, a prototype system called
Rich Media Collaborative Workplace has been developed. This system
provides an integrated collaborative workplace with the benefits of
increased productivity, reduced cost and improved efficiency. |
|
Title: |
A MACHINE LEARNING MIDDLEWARE FOR ON
DEMAND GRID SERVICES ENGINEERING AND SUPPORT |
Author(s): |
Wail M. Omar, A. Taleb Bendiab and
Yasir Karam |
Abstract: |
Over the coming years, many
anticipate that grid computing infrastructure, utilities and
services will become an integral part of the future socio-economic
fabric. The realisation of such a vision, though, will be much
affected by a host of factors, including cost of access,
reliability, dependability and security of grid services. The
autonomic computing model of systems' self-adaptation,
self-management and self-protection has attracted much interest for
improving the dependability and security of grid computing
technology whilst reducing the cost of operation. A prevailing
design model of autonomic computing systems is a goal-oriented and
model-based architecture, in which rules elicited from domain expert
knowledge, domain analysis or data mining are embedded in software
management systems to provide autonomic system functions, including
self-tuning and/or self-healing. In this paper, however, we argue
for an unsupervised machine learning utility and associated
middleware to capture knowledge sources and improve the deliberative
reasoning of autonomic middleware and/or grid infrastructure
operation. In particular, the paper presents a machine learning
middleware service using the well-known Self-Organising Map (SOM),
illustrated through a case-study scenario: the intelligent connected
home. The SOM service is used to classify types of users, their
respective networked-appliance usage models (patterns) and their
service dependencies. The models are accessed by our experimental
self-managing infrastructure to provide just-in-time deployment and
activation of required services in line with learnt usage models and
the baseline architecture of specified service assemblies. The paper
concludes with an evaluation and general concluding remarks.
|
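A Self-Organising Map as used in the abstract above can be sketched in pure Python (this is a generic, minimal SOM for illustration; the unit count, learning schedule, and the two "usage profile" vectors are assumptions, not the middleware service itself):

```python
import math
import random

def train_som(data, n_units=4, epochs=60, lr0=0.5, radius0=2.0, seed=0):
    """Train a 1-D SOM: each unit's weight vector is pulled toward each
    input, weighted by a neighbourhood function around the best-matching
    unit (BMU); learning rate and radius decay over the epochs."""
    rnd = random.Random(seed)
    dim = len(data[0])
    w = [[rnd.random() for _ in range(dim)] for _ in range(n_units)]
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)
        radius = max(radius0 * (1 - t / epochs), 0.5)
        for x in data:
            b = bmu(w, x)
            for u in range(n_units):
                h = math.exp(-((u - b) ** 2) / (2 * radius ** 2))
                for k in range(dim):
                    w[u][k] += lr * h * (x[k] - w[u][k])
    return w

def bmu(w, x):
    """Index of the unit closest (squared Euclidean) to input x."""
    return min(range(len(w)),
               key=lambda u: sum((w[u][k] - x[k]) ** 2
                                 for k in range(len(x))))

# Two hypothetical appliance-usage profiles, e.g. "evening TV" vs
# "daytime appliances"; similar patterns should map to different units.
data = [[0.9, 0.1], [1.0, 0.0], [0.1, 0.9], [0.0, 1.0]]
w = train_som(data)
print(bmu(w, [0.95, 0.05]) != bmu(w, [0.05, 0.95]))
```

After training, classifying a new usage vector is just a BMU lookup, which is how such a service could feed usage classes to a self-managing infrastructure.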
|
Title: |
A WORKFLOW MODEL FOR INTEGRATING IC
DESIGN AND TESTING |
Author(s): |
Andres Mellik |
Abstract: |
This paper outlines the challenges
facing the domain of automated testing of mixed-signal integrated
circuits and how these can be tackled by enhancing communication
between design and test engineers. An abstract model is introduced
for seamless interaction between design and test teams, enabling a
faster workflow and a greater degree of redundancy in checking the
correctness of communicated specification data. The latter is
embedded into a system-level model and completely integrated into
the process. An abstract model is proposed for realizing the
suggested approach. The goal is to reduce the time for developing
and running test programs, which is a major cost factor in the
shrinking life-cycles of mixed-signal devices. The paper emphasizes
obstacles in current settings and suggests workarounds. |
|
Title: |
A CONCEPTION OF MULTIAGENT MANAGEMENT
SYSTEM OF DISPERSED MARKET INFORMATION – E-NEGOTIATIONS AREA |
Author(s): |
Leszek Kiełtyka and Rafał Niedbał |
Abstract: |
This article proposes the conception
of a multiagent system (MAS) as a tool to aid the management of
dispersed market information in the area of e-negotiations. The
results of surveys concerning, among other things, identification of
the application areas of intelligent software agents in enterprises
are also presented. Attention is paid to the role of business
negotiation in market information acquisition. The AgentBuilder
software environment, which enabled the elaboration of a simulation
model of the proposed system, is also described. |
|
Title: |
USING TIMED MODEL CHECKING FOR
VERIFYING WORKFLOWS |
Author(s): |
Volker Gruhn and Ralf Laue |
Abstract: |
The correctness of a workflow
specification is critical for the automation of business processes.
For this reason, errors in the specification should be detected and
corrected as early as possible - at specification time. In this
paper, we present a validation method for workflow specifications
using model-checking techniques. A formalized workflow
specification, its properties and the correctness requirements are
translated into a timed state machine that can be analyzed with the
Uppaal model checker. The main contribution of this paper is the use
of timed model checking for verifying time-related properties of
workflow specifications. Using a single tool (the model checker) to
verify all of these properties is an advantage over using a
different specialized algorithm for each kind of property. |
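The flavour of timed reachability analysis described above can be illustrated with a toy explicit-state search (this sketch is not Uppaal and not the paper's translation; the workflow, its duration bounds, and the integer-time exploration are assumptions):

```python
from collections import deque

# A workflow as a timed state machine: transitions carry
# (min_duration, max_duration) bounds in abstract time units.
transitions = [
    ("start",   "review",  1, 2),
    ("review",  "approve", 2, 4),
    ("review",  "rework",  1, 1),
    ("rework",  "review",  1, 3),
    ("approve", "done",    1, 1),
]

def reachable(init="start", bound=20):
    """BFS over (state, elapsed_time) pairs, exploring every integer
    duration within each transition's bounds, up to a time bound."""
    seen = {(init, 0)}
    frontier = deque(seen)
    while frontier:
        state, t = frontier.popleft()
        for (src, dst, lo, hi) in transitions:
            if src == state:
                for d in range(lo, hi + 1):
                    nxt = (dst, t + d)
                    if nxt[1] <= bound and nxt not in seen:
                        seen.add(nxt)
                        frontier.append(nxt)
    return seen

reach = reachable()
done_times = sorted(t for (s, t) in reach if s == "done")
print(done_times[0])  # fastest completion: 1 + 2 + 1 = 4
```

A time-related property such as "completion is possible within 5 units" is then a membership question over the reachable set; real timed model checkers answer such questions symbolically over dense time rather than by enumeration.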
|
Title: |
A FRAMEWORK FOR DESIGNING
COLLABORATIVE TASKS IN A WEB-ENVIRONMENT |
Author(s): |
Dina Goren-Bar and Tal Goori |
Abstract: |
We present a framework that considers
both the collaboration activities and the tools involved, combining
the artifact- and process-oriented approaches of knowledge
engineering. Following the framework's stages, we designed an
Asynchronous Learning Network with a collaborative environment that
enables structured collaboration between group members. One hundred
and fifty (150) university students, divided into teams of ten
members each, performed two collaborative tasks within a university
course. As a preliminary evaluation, we classified the messages sent
by students within the discussion forum. Feedback on uploads
increased significantly in the second assignment, indicating that
students, besides performing their own tasks, also took part in
other groups’ tasks, creating a cooperative group that produced a
collaborative outcome. We discuss the suitability of the framework
for the design of Collaborative Environments for knowledge sharing
and raise a few topics for further research. |
|
Title: |
PROCESS MODELLING AND ACTIVITY
COORDINATION IN AN ACADEMIC SCHOOL WITHIN A HIGHER EDUCATION
ENTERPRISE: AN ISO 9001:2000 CERTIFICATION PROCESS |
Author(s): |
Daisy Seng and Leonid Churilov |
Abstract: |
To gain a leading edge in today’s
competitive environment, higher education enterprises (HEEs) are
implementing quality management systems (QMS) and obtaining
certification under the International Organization for
Standardization's ISO 9001:2000 standard. In this paper, the use of
the ARIS (Architecture of Integrated Information Systems)
methodology to assist in process understanding when implementing a
QMS is discussed. The introduction of an ISO-certified QMS into the
School of ABC, XYZ University – the first ever for an academic
school in Australia – is used as a case study to illustrate both the
notion of a process-oriented HEE and the elegance and power of
ARIS. |
|
Title: |
IDENTITY MANAGEMENT FOR ELECTRONIC
NEGOTIATIONS |
Author(s): |
Omid Tafreschi, Janina Fengel and
Michael Rebstock |
Abstract: |
Using the Internet as the medium for
transporting sensitive business data poses risks to companies.
Before conducting business electronically, a company should take
preventive measures against data manipulation and possible data
misuse. One initial step is obtaining certainty about the true
identity of a potential business partner responding to a request or
tender. In this paper we report on the development of a concept for
identity management that introduces trust into electronic
negotiations. We describe the character of electronic negotiations
and give an example of a possible use-case scenario for our concept.
For this we chose the most complex type of negotiation in the
business domain, interactive bilateral multiattributive
negotiations. Based on a general application architecture for such
negotiations developed in a research project, we show the necessity
of security provisions and introduce a security concept for identity
management. We argue that the development of authentication and
authorization services for the identity management of business
partners involved in a negotiation is not only crucial but also an
enhancement for electronic marketplaces. |
|
Title: |
OTHER WAY OF MAKING BUSINESS: A
VIRTUAL E-COMMERCE COMMUNITY / CVN PLATFORM |
Author(s): |
Roberto Naranjo, Jorge Moreno, Luz
Marina Sierra and Martha Mendoza |
Abstract: |
This article describes the current
business-environment problems of the Cauca region of Colombia (South
America), and the proposed solution, Project CVN (its Spanish
initials): the “Business Virtual Community for the Cauca region –
Internet Commercial Platform”, or BVC. Based on market research, the
architecture of the added value conceived by the project is
described; these values support advertising, collaboration, B2C and
B2B activities framed within the virtual environment of the
community. The business model proposed for the community and the
logical architecture of the software are then described. Lastly, the
experiences and the lessons learnt throughout the implementation of
the project are presented. |
|
Title: |
A WORKFLOW-BASED ENVIRONMENT TO
MANAGE SOFTWARE-TESTING PROCESS EXECUTIONS |
Author(s): |
Duncan Dubugras A. Ruiz, Karin
Becker, Bernardo Copstein, Flavio Moreira de Oliveira, Angelina
Torres de Oliveira, Gustavo Rossarolla Forgiarini, Cristiano Rech
Meneguzzi and Rafaela Lisboa Carvalho |
Abstract: |
This work describes a workflow-based
environment that manages the execution of software-testing
processes. Testing processes require that human and computer
resources be handled as dedicated resources, previously scheduled
for testing activities with no overlapping. Two striking features of
this environment are: a) the efficient handling of resources, taking
into account the capabilities offered by the resources that testing
activities require, and b) a broader view of all execution steps in
a software-testing plan. Hence, it enables better planning of
software-testing process executions, as well as of the human and
computer resources involved. |
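The "dedicated resources, no overlapping" constraint described above amounts to interval-conflict checking per resource, which can be sketched as follows (a minimal assumed representation, not the environment's actual scheduler):

```python
class Schedule:
    """Per-resource bookings as half-open [start, end) intervals;
    a booking is rejected if it overlaps an existing one for the
    same dedicated resource."""

    def __init__(self):
        self.bookings = {}  # resource -> list of (start, end)

    def book(self, resource, start, end):
        for (s, e) in self.bookings.get(resource, []):
            if start < e and s < end:  # intervals intersect
                return False           # resource already dedicated
        self.bookings.setdefault(resource, []).append((start, end))
        return True

sched = Schedule()
print(sched.book("test-rig-1", 9, 12))   # True
print(sched.book("test-rig-1", 11, 13))  # False: overlaps 9-12
print(sched.book("test-rig-1", 12, 14))  # True: back-to-back is fine
```

Half-open intervals make back-to-back activities legal without a gap, which matches how dedicated test equipment is typically handed over.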
|
Title: |
IMPROVING SUPPLY CHAIN OPERATIONS
PERFORMANCE BY USING A COLLABORATIVE PLATFORM BASED ON A SERVICE
ORIENTED ARCHITECTURE |
Author(s): |
Rubén Darío Franco, Ángel Ortiz Bas,
Víctor Anaya and Rosa Navarro |
Abstract: |
Every new technology promises to
solve a host of problems inside companies and to achieve unforeseen
performance improvements. Nowadays, Service-Oriented Architectures
are beginning to be promoted as the new balm with which companies
may realize their visions and put all their new strategies into
practice. Initially focused on intra-organizational integration
efforts, they are beginning to be used to support
inter-organizational business process engineering in networked
organizations. Although initiatives of this kind are in most cases
led by major companies, the INPREX project (the Spanish acronym for
Interoperability in Extended Processes), presented here, falls
outside that category. By contrast, it is an ongoing initiative led
by a Small and Medium Enterprise (SME) and funded by a local
government in Spain. In this work, we introduce the IDIERE
Platform, which has been designed to support three major
requirements of networked enterprises: openness, flexibility and
dynamism in deploying and executing distributed business
processes. |
|
Title: |
APPLICATION OF SOCIAL NETWORK THEORY
TO SOFTWARE DEVELOPMENT: THE PROBLEM OF TASK ALLOCATION |
Author(s): |
Chintan Amrit |
Abstract: |
To systematize software development,
many process models have been proposed over the years. These models
focus on the sequence of steps used by developers to create reliable
software. Though these process models have helped companies gain
certification and attain global standards, they do not take into
account interpersonal interactions and various other social aspects
of software development organizations. In this paper we tackle one
crucial part of the coordination problem in software development,
namely the problem of task assignment in a team. We propose a
methodology to test a sample hypothesis on how social networks can
be used to improve coordination in the software industry. In a pilot
case study based on four teams of Masters students working in a
globally distributed environment (Holland and India), the social
network structures and the task distribution in each of the teams
were analyzed. In each case we observed patterns that could be used
to test many hypotheses on team coordination and task allocation. |
|
Title: |
REDUCTION OVER TIME: EASING THE
BURDEN OF PEER-TO-PEER BARTER RELATIONSHIPS TO FACILITATE MUTUAL
HELP |
Author(s): |
Kenji Saito, Eiichi Morino and Jun
Murai |
Abstract: |
A peer-to-peer complementary currency
can be a powerful tool for promoting exchanges and building
relationships for coordinated activities. i-WAT is one such proposed
currency usable on the Internet. It is based on the WAT System, a
polycentric complementary currency using WAT tickets as its media of
exchange: participants spontaneously issue and circulate the tickets
as needed, with their values backed by chains of trust. i-WAT
implements the tickets electronically by exchanging messages signed
with OpenPGP. This paper proposes an extension to the design of
i-WAT to facilitate mutual help among people in need. In particular,
we propose additional "reduction" tickets whose values are reduced
over time. By deferring redemption of such tickets, the participants
can contribute to reducing the debts of the issuers, as well as to
accelerating spending. Applications of this feature include relief
for disaster-affected people. A reference implementation of i-WAT
has been developed in the form of a plug-in for an XMPP instant
messaging client. We have been putting the currency system into
practical use, and the proposed feature will be added to it shortly.
|
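The economics of a "reduction" ticket can be modelled with a simple decay schedule (the linear schedule and the rate below are assumptions for illustration; the i-WAT extension does not necessarily prescribe them):

```python
def redemption_value(face_value, days_held, daily_rate=0.001, floor=0.0):
    """Value of a reduction ticket after days_held: the face value
    shrinks at a fixed daily rate, never going below the floor, so
    deferring redemption shrinks the issuer's debt."""
    value = face_value * (1 - daily_rate * days_held)
    return max(value, floor)

print(redemption_value(1000, 0))     # 1000.0 at issuance
print(redemption_value(1000, 100))   # 900.0 after 100 days
print(redemption_value(1000, 2000))  # 0.0: fully reduced
```

The incentive structure follows directly: a holder who can afford to wait effectively donates the reduction to the issuer, while anyone needing the value soon is pushed to spend the ticket on, which is the "accelerated spending" effect the abstract mentions.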
|
Title: |
INTEGRATING AWARENESS SOURCES IN
HETEROGENEOUS COLLABORATION ENVIRONMENTS |
Author(s): |
Vijayanand Bharadwaj, Y. V. Ramana
Reddy and Sumitra Reddy |
Abstract: |
Collaboration in heterogeneous
environments involves dealing with a variety of information sources
that generate information users need to be aware of. Users must be
empowered to tailor the quality of awareness information.
Heterogeneity of sources and media adversely affects the quality of
group awareness. We propose a solution in terms of integrating the
sources at the information level and provide a model for doing so.
We discuss our progress in designing the model, its utility and its
benefits. We believe that such a unifying framework can increase the
effectiveness of group awareness in supporting the coordination and
execution of collaborative work. |
|
Joint Workshop on Web Services and
Model-Driven Enterprise Information Services
(WSMDEIS-2005)
|
Title: |
A MODEL-BASED APPROACH TO MANAGING
ENTERPRISE INFORMATION SYSTEMS |
Author(s): |
Robert France, Roger Burkhart and
Charmaine DeLisser |
Abstract: |
Organizations must evolve their
information systems (IS) in order to adapt to changes in their
environment or to maintain or enhance competitiveness. The use of
modern application integration technologies (e.g., middleware) and
advanced network technologies has resulted in IS that provide
services at unprecedented levels, but at the price of becoming more
complex and thus more difficult to evolve. By way of concrete
examples, this paper focuses on the use of system models expressed
in the Unified Modeling Language (UML) to effectively manage
information systems assets. The system models capture critical
information about an organization and are part of an overall
framework called the Application Mapping Framework or AMF. The AMF
can be used by IT architects and planners to track applications,
relate descriptions of system artifacts across different levels of
abstraction and support redundancy, gap and impact analyses. The
paper also identifies management roles needed to ensure that the AMF
repository contains comprehensive and up-to-date models.
|
|
Title: |
ONTOLOGY BASED MODEL TRANSFORMATION
INFRASTRUCTURE |
Author(s): |
Arda Goknil and N. Yasemin Topaloglu |
Abstract: |
The use of MDA in ontology
development has been investigated in several recent works. The
mappings and transformations between UML constructs and OWL elements
for developing ontologies are the main concern of these research
projects. We propose a different approach to achieving collaboration
between MDA and ontology technologies: an ontology-based model
transformation infrastructure that transforms application models by
using query statements, transformation rules and models defined as
ontologies in OWL. Using this approach in a model transformation
infrastructure will enable us to use semantic web and ontology
facilities in model-driven architecture. This paper discusses how
these two technologies come together to provide automation in model
transformations. |
|
Title: |
EVALUATION OF THE PROPOSED QVTMERGE
LANGUAGE FOR MODEL TRANSFORMATIONS |
Author(s): |
Roy Grønmo, Mariano Belaunde, Jan
Øyvind Aagedal, Klaus-D. Engel, Madeleine Faugere and Ida Solheim |
Abstract: |
This paper describes the set of
requirements for a model-to-model transformation language as
identified in the MODELWARE project. We show how these requirements
divide into three main groups according to the way they can be
measured, how to decompose them into different grades of support,
and how they can be weighted. All this information is then used as
the basis for an algorithm that computes an overall score. The
evaluation framework has been applied to the current QVTMerge
submission, which targets the OMG QVT standardization. |
|
Title: |
STEERING MODEL-DRIVEN DEVELOPMENT OF
ENTERPRISE INFORMATION SYSTEM THROUGH RESPONSIBILITIES |
Author(s): |
Ming-Jen Huang and Takuya Katayama |
Abstract: |
OMG proposed the Model Driven
Architecture to solve existing business and technology problems. The
intention is clear but the implementation is unspecified. We have
proposed a model-driven approach to the development of enterprise
information systems, RESTDA. In this paper, we describe a
domain-specific language, Business Models, which helps domain
experts describe the running of a business without concern for any
details of technology. We also describe a rule-based approach to
finding inconsistencies in Business Models, which ensures
correctness for further model transformation. Finally, we introduce
a model transformation mechanism utilizing the connections of roles,
responsibilities and collaborations across different abstraction
levels. The connections can be implemented in a rule-based engine
for transforming Business Models into source code. Our work provides
a DSL to help domain experts describe their work from a pure
business point of view. Our model transformation mechanism also
tightly bridges the gap between the problem domain and the solution
domain. |
|
Title: |
TOWARDS A FORMALIZATION OF MODEL
CONFORMANCE IN MODEL DRIVEN ENGINEERING |
Author(s): |
Thanh-Hà Pham, Mariano Belaunde and
Jean Bézivin |
Abstract: |
The principle of “everything is an
object”, basically supported by the two fundamental relationships of
inheritance and instantiation, has done much to drive object
technology in the direction of simplicity, generality and power of
integration. Similarly, in Model Driven Engineering (MDE) today,
the basic principle that “everything is a model” has many
interesting properties. The two relations, representation and
conformance, have been suggested [B04] as the two basic relations in
MDE. This paper supports these ideas by investigating some
concrete examples of the conformance relation concerning three
technological spaces (TS) [KBA02]: the Abstract/Concrete Syntax TS, the XML
TS and the Object-Oriented Modeling (OOM) TS. To go further in this
direction we try to formalize this relation in the OOM TS by using
category theory, a young and abstract but powerful branch
of mathematics. The OCL language is (partially) reused in this
scheme to provide a potentially useful environment supporting MDE in
a very general way. |
|
Title: |
DEPENDENCIES BETWEEN MODELS IN THE
MODEL-DRIVEN DESIGN OF DISTRIBUTED APPLICATIONS |
Author(s): |
João Paulo A. Almeida, Luís Ferreira
Pires and Marten van Sinderen |
Abstract: |
In our previous work, we have defined
a model-driven design approach based on the organization of models
of a distributed application according to different levels of
platform-independence. In our approach, the design process is
structured into a preparation and an execution phase. In the
preparation phase, (abstract) platforms and transformation
specifications are defined. These results are used by a designer in
the execution phase to develop a specific application. In this
paper, we analyse the dependencies between the various types of
models used in our design approach, including platform-independent
and platform-specific models of the application, abstract platforms,
transformation specifications and transformation parameter values.
In order to examine the relations between the various models, we
consider models as modules and employ a technique to visualize
modularity which uses Design Structure Matrices (DSMs). This
analysis leads to requirements for the various types of models and
directives for the design process which reduce undesirable
dependencies between models. |
|
Title: |
FROM MAPPING SPECIFICATION TO MODEL
TRANSFORMATION IN MDA: CONCEPTUALIZATION AND PROTOTYPING |
Author(s): |
Slimane Hammoudi and Denivaldo Lopes |
Abstract: |
In this paper, we present in the
first part our proposition for a clarification of the concepts of
mapping and transformation in the context of Model Driven
Architecture (MDA), and our approach for mapping specification and
generation of transformation definition. In the second part, we
present the application of our approach from UML to the JAVA platform.
We propose a metamodel for mapping specification and its
implementation as a plug-in for Eclipse. Once mappings are specified
between two metamodels (e.g. UML and JAVA), transformation
definitions are generated automatically using transformation
languages such as Atlas Transformation Language (ATL). We have
applied this tool to edit mappings between UML and JAVA metamodels.
Afterwards, we have used this mapping to generate ATL code to
achieve transformations from UML into JAVA. |
|
Title: |
AN XML-BASED SYSTEM FOR CONFIGURATION
MANAGEMENT OF TELECOMMUNICATIONS NETWORKS USING WEB-SERVICES |
Author(s): |
Adnan Umar, James J. Sluss Jr. and
Pramode K. Verma |
Abstract: |
As the utilization and the
application base of the Internet grow, the need for an improved
network management system becomes increasingly apparent. It is
generally accepted that SNMP is not capable of tackling the emerging
network management requirements and needs to be replaced. Also,
configuration management has been identified as one of the most
desired network management functionalities. Recent research
publications suggest a growing interest in replacing SNMP with a Web
Services (XML)-based network management solution. In this paper we
present the methodology and design of our complete XML-based network
management system, developed with the specific aim of performing
configuration management. |
|
Title: |
SERVICE ORIENTED MODEL DRIVEN
ARCHITECTURE FOR DYNAMIC WORKFLOW CHANGES |
Author(s): |
Leo Pudhota and Elizabeth Chang |
Abstract: |
Collaborative workflow management
systems in logistics companies require strong information systems and
computer support. These IT integration requirements have expanded
considerably with the advent of e-business, utilizing web services
for B2B (Business to Business) and P2P (Partner to Partner)
e-commerce. This paper proposes a service-oriented model-driven
architecture for dynamic workflow changes and a strategy for
implementing these changes by isolating services and
business processes, whereby existing workflow systems can easily
incorporate and integrate the changes following a step-by-step
process-replacement synchronization in the workflow. This paper
also describes a conceptual framework for a prototype implementation
resulting in dynamic collaborative workflow management. |
|
Title: |
DESIGN AND PROTOTYPING OF WEB SERVICE
SECURITY ON J2ME BASED MOBILE PHONES |
Author(s): |
Ti-Shiang Wang |
Abstract: |
One of the main objectives of this
paper is to investigate how to manipulate the SOAP message and place
security functions in the header of the SOAP message. Here, we
present the design and implementation of a web service security
application on J2ME-based mobile devices. This prototyping follows a
two-stage approach. In the first stage, we carry out a proof of
concept for the implementation of web services security on an
IBM laptop using the IBM WebSphere Studio Device Developer (WSDD V
5.6) IDE [1]. In addition, we import the kXML/kSOAP APIs to process SOAP
messages and use Bouncy Castle’s API [2], which supports cryptographic
algorithms, for the security implementations. The security
functions we present here comprise five tasks: no security, data
digest, data encryption using a symmetric key, data encryption using an
asymmetric key, and digital signature. For each task, we discuss
its corresponding design, SOAP header message, time performance,
and the results returned in the emulator. Based on the expected results from
the first stage, in the second stage we use a Nokia 6600 mobile phone
as a target mobile device to test our application and evaluate
the performance of each task. Finally, we share our experience and
lessons from this work in the conclusion, and will give a demonstration
using the Nokia 6600 mobile phone at the conference if time permits.
|
|
Title: |
ARCHITECTURE FOR AN AUTONOMIC WEB
SERVICES ENVIRONMENT |
Author(s): |
Wenhu Tian, Farhana Zulkernine, Jared
Zebedee, Wendy Powley and Pat Martin |
Abstract: |
The growing complexity of Web service
platforms and their dynamically varying workloads make manually
managing their performance a tough and time-consuming task.
Autonomic computing systems, that is, systems that are
self-configuring and self-managing, have emerged as a promising
approach to dealing with this increasing complexity. In this paper
we propose an architecture for an autonomic Web service environment
based on reflective programming techniques, in which components at a
Web service hosting site tune themselves and collaborate to provide
a self-managed and self-optimized system. |
|
Title: |
EXTENDING UDDI WITH RECOMMENDATIONS:
AN ASSOCIATION ANALYSIS APPROACH |
Author(s): |
Andrea Powles and Shonali
Krishnaswamy |
Abstract: |
This paper presents a novel
recommendation extension to UDDI that we term RUDDIS.
Recommendations can have potential benefits for both providers and
consumers of Web Services. We adopt a unique technique for making
recommendations that applies association analysis rather than the
traditional collaborative filtering approach. We present the
implementation and demonstrate the functioning of RUDDIS in an
unobtrusive manner in which the user has total control over the
recommendation process. |
|
Title: |
XML SCHEMA-DRIVEN GENERATION OF
ARCHITECTURE COMPONENTS |
Author(s): |
Ali El bekai and Nick Rossiter |
Abstract: |
It is possible to code by hand an XSL
stylesheet that validates an XML document against some or all
constraints of an XML schema. But the main goal of this paper is to
introduce general techniques as a technology solution for different
problems, such as the generation of (a) an SQL schema from an XML Schema,
(b) an XSL stylesheet from an XML Schema, and (c) an XQuery interpreter.
Each of the techniques proposed in this paper employs XML Schema-driven
generation of architecture components with XSL stylesheets. As can be
seen, the input is an XML Schema and an XSL stylesheet, and the output is
generic stylesheets. These stylesheets, as an integral part of our
development, can be used as interpreters for generating other types
of data, such as SQL queries from XQueries, SQL data, SQL schemas and
HTML format. Finally, we present algorithms for these types of
generator and show how we can generate the components automatically.
We also introduce examples to evaluate the generated components. |
|
Title: |
ARCHITECTURAL FRAMEWORK FOR WEB
SERVICES AUTHORIZATION |
Author(s): |
Sarath Indrakanti, Vijay Varadharajan
and Michael Hitchens |
Abstract: |
This paper considers the security
issues in service-oriented architectures and proposes an
authorization architecture for web services. It describes the
architectural framework, the administration and runtime aspects of
our architecture, and its components for the secure authorization of web
services, as well as support for the management of authorization
information. The paper also describes authorization algorithms that
support various possibilities for collecting the credentials required to
authorize a client’s request. The proposed architecture has several
benefits, which are discussed in the paper. It is able to support
legacy applications exposed as web services as well as new web
service based applications built to leverage the benefits offered by
service-oriented architectures; it can support multiple access
control models and mechanisms; and it is decentralized and distributed,
providing flexible management and administration of web services
and related authorization information. We believe that the proposed
architecture is easy to integrate into existing platforms and
provides enhanced security by protecting exposed web services. This
architecture is currently being implemented within the .NET
framework. |
|
Title: |
A FORMAL SEMANTICS FOR THE BUSINESS
PROCESS EXECUTION LANGUAGE FOR WEB SERVICES |
Author(s): |
Roozbeh Farahbod, Uwe Glässer and
Mona Vajihollahi |
Abstract: |
We define an abstract operational
semantics for the Business Process Execution Language for Web
Services (BPEL) based on the abstract state machine (ASM) formalism.
This way, we model the dynamic properties of the key language
constructs through the construction of a BPEL abstract machine in
terms of a distributed real-time ASM. Specifically, we focus here on
the process execution model and the underlying execution lifecycle
of BPEL activities. The goal of our work is to provide a well
defined semantic foundation for establishing the key language
attributes. The resulting abstract machine model provides a
comprehensive and robust formalization at various levels of
abstraction. |
|
Workshop on
Pattern Recognition in Information Systems (PRIS-2005)
|
Title: |
INTRUSION DETECTION MANAGEMENT SYSTEM
FOR ECOMMERCE SECURITY |
Author(s): |
Jens Lichtenberg and Jorge Marx Gómez |
Abstract: |
One of the main problems in eCommerce
applications, and in all other systems handling confidential information
in general, is the matter of security. This paper introduces the
idea of an intrusion detection management system to support
security. Intrusion detection per se is the act of detecting an
unauthorized intrusion into a computer or a network from the inside or
the outside of the affected system, making an intrusion the attempt
to compromise or otherwise do harm to other network devices. In contrast
to a normal intrusion detection system, an Intrusion Management System
applies different Intrusion Detection Systems to not only detect a
threat but also analyze it and propose countermeasures to avoid the
compromise of the guarded system. The numerous intrusion
detection systems are linked to the attack analyzer. The best system
coverage is achieved using detection systems that apply different
techniques. An exemplary system might apply SNORT, a
signature-based system, and INBOUNDS, an anomaly-detecting
system, and thus cover historically known attacks as well as
hazardous behavior. The attack analyzer gathers the information from
the IDS 1…n and diagnoses a treatment plan. The system manager, or
the response planning module aiding the manager, can also query the
analyzer for information about the character of the attack, its possible
goals and the impending threat level. For the treatment plan, depending on
the analysis, a multitude of countermeasures is identified and
ranked. The countermeasure identification is done using data mining
techniques on a countermeasure repository, the final ranking
through sorting algorithms. Of the numerous data mining techniques
applicable for diagnostic or analytic purposes, the nearest neighbor
and the correlation coefficient techniques have been implemented. A
feasibility study has shown that an analyzer can match a problem
against a solution repository and find the optimal treatment
suggestions, together with a ranking, in an acceptably short period
of time. Future work will include the analysis of attack
characteristics and goals, and the interaction between the system
manager, the response planning and execution module and the attack
analyzer. Furthermore, the countermeasure repository will be
evaluated and updated. |
|
Title: |
DATA MINING BASED DIAGNOSIS IN
RESOURCE MANAGEMENT |
Author(s): |
Mathias Beck and Jorge Marx Gómez |
Abstract: |
There are different solutions to
resource allocation problems in Resource Management Systems (RMS).
One of the most sophisticated ways to solve these problems is, if
supported by the RMS, an adjustment of Quality-of-Service (QoS)
settings during runtime. These settings affect the trade-off between
resource usage and the quality of the services the executed
tasks create. But, to be able to determine the optimal reactive
changes to the current QoS settings in an acceptable time, knowledge of
the cause of the resource allocation problem is necessary. This is
especially significant in an environment with real-time constraints.
Without this knowledge other solutions could be initiated; these would
still improve the current resource allocation, but the optimal
compromise between resource requirements and QoS would likely be
missed. A resource management system (RMS) with the ability to
adjust QoS settings can solve more resource allocation problems than
one providing reallocation measures only. But, depending on the problem,
only optimal changes to QoS settings can solve the problem within timing
constraints and thus prevent expensive system failures. Depending on
the environment an RMS is used in, such failures could mean a huge
financial loss or even a threat to human lives. “The real-time and
reliability constraints require responsive rather than best-effort
metacomputing.” [1] But knowledge of a problem’s cause does not
only help to solve the problem within existing timing constraints
and to guarantee the feasibility of the executed tasks; it also helps to
maximize the quality of the generated services. |
|
Title: |
A COMPARISON OF DOCUMENT CLUSTERING
ALGORITHMS |
Author(s): |
Yong Wang and Julia Hodges |
Abstract: |
Document clustering is a widely used
strategy for information retrieval and text data mining. This paper
describes preliminary work in ongoing research on document
clustering problems. A prototype of a document clustering system has
been implemented and some basic aspects of document clustering
problems have been studied. Our experimental results demonstrate
that the average-link inter-cluster distance measure and TFIDF
weighting function are good methods for the document clustering
problem. Other investigators have indicated that the bisecting
K-means method is the preferred method for document clustering.
However, in our research we have found that, whereas the bisecting
K-means method has advantages when working with large datasets, a
traditional hierarchical clustering algorithm still achieves the
best performance for small datasets. |
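The TFIDF weighting function referred to in the abstract can be sketched as follows; this is a minimal illustration of the standard tf * idf scheme, not the authors' exact implementation, and the toy documents are made up.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """docs: list of token lists. Returns one {term: weight} dict per
    document, using tf * log(N / df) weighting, where df is the number
    of documents containing the term."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: c * math.log(n / df[t]) for t, c in tf.items()})
    return vectors

docs = [["data", "mining", "text"],
        ["data", "clustering"],
        ["text", "clustering", "clustering"]]
vecs = tfidf_vectors(docs)
# "data" occurs in 2 of 3 documents, so its weight in the first
# document is 1 * log(3/2); the rarer "mining" gets 1 * log(3).
```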
|
Title: |
A COMPARISON OF METHODS FOR WEB
DOCUMENT CLASSIFICATION |
Author(s): |
Julia Hodges, Yong Wang and Bo Tang |
Abstract: |
WebDoc is an automated classification
system that assigns Web documents to appropriate Library of Congress
subject headings based upon the text in the documents. We have used
different classification methods in different versions of WebDoc.
One classification method is a statistical approach that counts the
number of occurrences of a given noun phrase in documents assigned
to a particular subject heading as the basis for determining the
weights to be assigned to the candidate indexes (or subject
headings) that it generates. A second classification method that we
tested for our system uses a naïve Bayes approach. In this case, we
experimented with the use of smoothing to dampen the effect of
having a large number of 0s in our feature vectors (due to the
infrequent occurrence of many of the noun phrases). A third
classification method that we tested was a k-nearest neighbors
approach. With this approach, we tested two different ways of
determining the similarity of feature vectors: counting the number
of common feature values based on the occurrences of those features
and using the cosine coefficient approach, which computes the
normalized inner product of the two vectors being compared. In this
paper, we report the performance of each of the versions of WebDoc
in terms of recall, precision, and F-measures. |
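The cosine coefficient mentioned above, i.e. the normalized inner product of two feature vectors, can be sketched as:

```python
import math

def cosine_coefficient(u, v):
    """Normalized inner product of two equal-length feature vectors:
    dot(u, v) / (|u| * |v|). Returns 0.0 for a zero vector."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Two binary occurrence vectors sharing one of two active features:
print(cosine_coefficient([1, 0, 1], [1, 1, 0]))  # 0.5
```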
|
Title: |
AUTOMATIC RECOGNITION OF POLLUTANTS
IN PACKAGED FOODS FROM X-RAY IMAGING |
Author(s): |
Giorgio Grasso, Rosa Maria Gembillo
and Maria Schepis |
Abstract: |
The quality and purity of
industrially packaged foods is today of fundamental importance,
given the level of expectation of consumers and the current laws
imposing serious liabilities on producers. This paper presents a
novel method for automatic recognition of pollutants in packaged
foods for industrial applications. To maximize the contrast between
foods and pollutants a dual acquisition method has been applied to
obtain a pair of images taken at two different x-ray source
voltages. Taking advantage from the wavelength dependence of
absorption coefficient for different materials. In order to further
increase the classification potential of the algorithms, the H
color spectrum was adopted, for its high discrimination
capabilities. The analysis of images is performed on-line utilizing
three independent methods. Over a series of experiments each of the
three strategies have given a correct classification rate of
pollutants ranging from 83% to 95%. To further increase the degree
of reliability of the automatic recognition process, the three
methods have been combined into a pollution coefficient. The
confidence achieved on the experimental set resulted in a 92%
correct classifications, for pollutants larger than 2mm. |
|
Title: |
DISTINCTION OF PATTERNS WITHIN
TIME-SERIES DATA USING CONSTELLATION GRAPHS |
Author(s): |
Mayumi Oyama-Higa, Michihiko Setogawa
and Teijun Miao |
Abstract: |
Constellation graphs for time-series
data (CGSTS) are a very effective tool for displaying characteristic
patterns within time-series data. In the past, line graphs were the
tool of choice for analyzing patterns within time-series data. The
advantage of using constellation graphs is that they make pattern
fluctuations easier to discern, and allow the observation of partial
changes between periods. This paper compares line graphs and CGSTS,
displays several samples of time-series data, and concludes that
time-series data are most easily interpreted via constellation
graphs. |
|
Title: |
A NEW RBF CLASSIFIER FOR BURIED TAG
RECOGNITION |
Author(s): |
Larbi Beheim, Adel Zitouni and Fabien
Belloir |
Abstract: |
This article presents noticeable
performances improvement of an RBF neural classifier. Based on the
Mahalanobis distance, this new classifier increases relatively the
recognition rate while decreasing remarkably the number of hidden
layer neurons. We obtain thus a new very general RBF classifier,
very simple, not requiring any adjustment parameter, and presenting
an excellent ratio performances/neurons number. A comparative study
of its performances is presented and illustrated by examples on real
databases. We present also the recognition improvements obtained by
applying this new classifier on buried tag. |
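The Mahalanobis distance underlying the classifier can be sketched as below; this is the standard definition only, not the authors' full RBF construction, and the covariance matrix is an illustrative assumption.

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Mahalanobis distance of x from a class with the given mean and
    covariance: sqrt((x - mean)^T cov^-1 (x - mean)). Reduces to the
    Euclidean distance when cov is the identity matrix."""
    diff = np.asarray(x, float) - np.asarray(mean, float)
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

# A 2-D class with variance 4 along the first axis and 1 along the
# second: points along the high-variance axis count as "closer".
cov = np.array([[4.0, 0.0],
                [0.0, 1.0]])
print(mahalanobis([2, 0], [0, 0], cov))  # 1.0
print(mahalanobis([0, 2], [0, 0], cov))  # 2.0
```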
|
Title: |
SELECTIVE VISUAL ATTENTION IN
ELECTRONIC VIDEO SURVEILLANCE |
Author(s): |
James Mountstephens, Craig Bennett
and Khurshid Ahmad |
Abstract: |
In this paper we describe how a model
of visual attention, driven entirely by visual features, can be used
to attend to “unusual” events in a complex surveillance environment.
For the purposes of illustration and elaboration, we have used Itti
and Koch’s model of selective visual attention, a program
developed by its authors, and a professional benchmark video
dataset produced by the EC-sponsored CAVIAR project (80 video clips
comprising 90,000 frames). |
|
Title: |
UNSUPERVISED FILTERING OF XML STREAMS
FOR SYSTEM INTEGRATION |
Author(s): |
Ingo Lütkebohle, Sebastian Wrede and
Sven Wachsmuth |
Abstract: |
In recent years, computer vision
research has been shifting more and more from algorithmic solutions to the
construction of active systems. However, available integration
frameworks in this area still suffer from many shortcomings, such as
insufficient decoupling of components, long learning curves, missing
support for distributed and asynchronous processing, fixed control
strategies, or a lack of resource control. In particular, centralized
resource management typically leads to very complex control
strategies for distributed and asynchronously running systems. Many
processing components only need to compute new results if their
input data has significantly changed. This can be defined as a
pattern recognition task that analyzes the data flow in the system.
In the following, we describe a generic solution for data-flow
reduction based on XML distance metrics. We present first results on
the application of this component in an integration framework for a
vision-based human-computer interface within an augmented reality
scenario. |
|
Title: |
CAR LICENSE PLATE EXTRACTION FROM
VIDEO STREAM IN COMPLEX ENVIRONMENT |
Author(s): |
Giorgio Grasso and Giuseppe Santagati |
Abstract: |
The recognition of car license plates
has a variety of applications ranging from surveillance, to access
and traffic control, to law enforcement. Today a number of
algorithms have been developed to extract car license plate numbers
from imaging data. In general there are two classes of systems: one
operating on triggered high-speed cameras, employed in speed limit
enforcement, and one based on video cameras, mainly used in various
surveillance systems (car-park access, gate monitoring, etc.). A
complete automatic plate recognition system consists of two main
processing phases: the extraction of the plate region from the full
image, and optical character recognition (OCR) to identify the license
plate number. This paper focuses on dynamic multi-method image
analysis for the extraction of car license plate regions from live
video streams. Three algorithms have been devised, implemented and
tested on city roads to automatically extract sub-images containing
car plates only. The first criterion is based on the ratio between
the height and width of the plate, which has a standard value for each
type of plate; the second criterion is based on the
eccentricity of the image in the two dimensions, i.e. the projection
histogram of plate number pixels onto the reference axes of the
image; the third criterion is based on the intensity histogram of
the image. For each criterion a likelihood is defined, which reaches
its maximum when the tested sub-image is close to the standard value
for the type of plate considered. The tuning of the methods has been
carried out on several video streams taken while travelling on busy city
roads. The overall recognition rate on single frames
is around 65%, whereas the multi-frame recognition rate is around
85%. The significant value for the performance of the method is the
latter, as typically a license plate is visible in 5-10 frames.
Based on the ranking of the three parameters, the same system can
potentially distinguish and identify a wide range of license plate types. |
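The first, aspect-ratio criterion can be sketched as a likelihood that peaks when a candidate region matches the standard ratio for its plate type; the standard ratio and tolerance below are illustrative assumptions, not the paper's calibrated values.

```python
def aspect_ratio_likelihood(width, height, standard_ratio=4.7, tol=1.5):
    """Likelihood that a candidate sub-image is a plate, reaching its
    maximum (1.0) when width/height equals the standard ratio for the
    plate type, and falling linearly to 0 at the tolerance bound.
    standard_ratio and tol are made-up illustrative values."""
    if height == 0:
        return 0.0
    deviation = abs(width / height - standard_ratio)
    return max(0.0, 1.0 - deviation / tol)

print(aspect_ratio_likelihood(470, 100))  # 1.0 (ratio exactly 4.7)
print(aspect_ratio_likelihood(300, 100))  # 0.0 (ratio 3.0, outside tolerance)
```

The other two criteria (projection histograms and intensity histogram) would each contribute an analogous likelihood, combined by ranking as the abstract describes.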
|
Title: |
APPEARANCE-BASED FACE RECOGNITION
USING AGGREGATED 2D GABOR FEATURES |
Author(s): |
King Hong Cheung, Jane You, Qin Li
and Prabir Bhattacharya |
Abstract: |
Current holistic appearance-based
face recognition methods require a high-dimensional feature space to
attain fruitful performance. In this paper, we propose a
template-matching scheme of relatively low feature dimensionality to cope
with the transformed appearance-based face recognition problem. We
use aggregated Gabor filter responses to represent face images. We
investigated the effect of ``duplicate'' images (images from
different sessions) and the effect of facial expressions. Our
results indicate that the proposed method is more robust in
recognizing ``duplicate'' images with variations in facial
expression than the Principal Component Analysis method. |
|
Title: |
DYNAMIC FEATURE SELECTION AND
COARSE-TO-FINE SEARCH FOR CONTENT-BASED IMAGE RETRIEVAL |
Author(s): |
Jane You, Qin Li, King Hong Cheung
and Prabir Bhattacharya |
Abstract: |
We present a new approach to
content-based image retrieval by addressing three primary issues:
image indexing, similarity measure, and search methods. The proposed
algorithms include: an image data warehousing structure for dynamic
image indexing; a statistically based feature selection procedure to
form flexible similarity measures in terms of the dominant image
features; and a feature component code to facilitate query
processing and guide the search for the best matching. The
experimental results demonstrate the feasibility and effectiveness
of the proposed method. |
|
Title: |
NOVEL CIRCULAR-SHIFT INVARIANT
CLUSTERING |
Author(s): |
Dimitrios Charalampidis |
Abstract: |
Several important pattern recognition
applications are based on feature extraction and vector clustering.
Directional patterns may be represented by rotation-variant
directional vectors, formed from M features uniformly extracted in M
directions. It is often required that pattern recognition algorithms
are invariant under pattern rotation or, equivalently, invariant
under circular shifts of such directional vectors. This paper
introduces a K-means based algorithm (Circular K-means) to cluster
vectors in a circular-shift invariant manner. Thus, the algorithm is
appropriate for rotation invariant pattern recognition applications.
An efficient Fourier domain imple-mentation of the proposed
technique is presented to reduce computational complex-ity. An
index-based approach is proposed to estimate the correct number of
clusters in the dataset. Experiments illustrate the superiority of
CK-means for clustering direc-tional vectors, compared to the
alternative approach that uses the original K-means and
rotation-invariant vectors transformed from rotation-variant ones.
|
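The circular-shift invariant comparison at the heart of such clustering can be sketched as a brute-force minimum over all circular shifts; the paper's Fourier-domain implementation computes the same kind of quantity more efficiently, so this is only an illustrative assumption of the distance, not the authors' algorithm.

```python
import numpy as np

def circular_distance(u, v):
    """Minimum Euclidean distance between u and every circular shift
    of v, making the comparison invariant under pattern rotation
    (i.e. circular shifts of the directional feature vector)."""
    u = np.asarray(u, float)
    v = np.asarray(v, float)
    return min(np.linalg.norm(u - np.roll(v, s)) for s in range(len(v)))

a = [5, 1, 0, 0]
b = [0, 0, 5, 1]  # the same directional pattern, rotated two positions
print(circular_distance(a, b))  # 0.0
```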
|
Title: |
INDUCTIVE STRING TEMPLATE-BASED
LEARNING OF SPOKEN LANGUAGE |
Author(s): |
Alexander Gutkin and Simon King |
Abstract: |
This paper deals with the formulation of
an alternative structural approach to the speech recognition problem.
In this approach, we require both the representation and the
learning algorithms defined on it to be linguistically meaningful,
which allows the speech recognition system to discover the nature of
the linguistic classes of speech patterns corresponding to the
speech waveforms. We briefly discuss the current formalisms and
propose an alternative -- a phonologically inspired, string-based
inductive speech representation, defined within an analytical
framework specifically designed to address the issues of class and
object representation. We also present the results of phoneme
classification experiments conducted on the TIMIT corpus of
continuous speech. |
|
Title: |
A MULTI-RESOLUTION LEARNING APPROACH
TO TRACKING CONCEPT DRIFT AND RECURRENT CONCEPTS |
Author(s): |
Mihai M. Lazarescu |
Abstract: |
This paper presents a multiple-window
algorithm that combines a novel evidence based forgetting method
with data prediction to handle different types of concept drift and
recurrent concepts. We describe the reasoning behind the algorithm
and we compare the performance with the FLORA algorithm on three
different problems: the STAGGER concepts problem, a recurrent
concept problem and a video surveillance problem. |
|
Title: |
KNOWLEDGE-BASED SILHOUETTE DETECTION |
Author(s): |
Antonio Fernández-Caballero |
Abstract: |
A general-purpose neural model that
addresses image understanding is presented in this paper. The model
incorporates accumulative computation, lateral interaction and a
double time scale, and can be considered biologically plausible.
The model uses, at the global time scale t and in the form of accumulative
computation, all the necessary mechanisms to detect movement from
the grey level change at each pixel of the image. The information on
the detected motion is useful, as part of an object’s shape can be
obtained. On a second time scale base T< |
|
Title: |
MOTION DIRECTION DETECTION FROM
SEGMENTATION BY LIAC, AND TRACKING BY CENTROID TRAJECTORY
CALCULATION |
Author(s): |
Antonio Fernández-Caballero |
Abstract: |
Motion information can form the basis
of predictions about time-to-impact and the trajectories of objects
moving through a scene. Firstly, a model that incorporates
accumulative computation and lateral interaction is presented.
Applied to artificial vision, the model uses, in the form of accumulative
computation, all the necessary mechanisms to detect movement from the
grey level stripe change at each pixel of the image. By means of the
lateral interaction of each element with its neighbours, the model
is able to segment moving objects present in an indefinite sequence
of images. In a further step, moving objects are tracked using a
centroid-based trajectory calculation. More concretely, the proposed
solution is described in three steps: (1) segmentation by grey level
stripes, (2) lateral interaction in accumulative computation and (3)
centroid trajectory calculation. |
|
Title: |
BAGGING KNN CLASSIFIERS USING
DIFFERENT EXPERT FUSION STRATEGIES |
Author(s): |
Amer. J. AlBaghdadi and Fuad M.
Alkoot |
Abstract: |
Bagging KNN Classifiers using
Different Expert Fusion Strategies An experimental evaluation of
Bagging K-nearest neighbor classifiers (KNN) is performed. The goal
is to investigate whether varying soft methods of aggregation would
yield better results than Sum and Vote. We evaluate the performance
of Sum, Product, MProduct,Minimum, Maximum, Median and Vote under
varying parameters. The results over different training set sizes
show minor improvement due to combining using Sum and MProduct. At
very small sample sizes no improvement is achieved from bagging KNN
classifiers. While Minimum and Maximum show no improvement at almost any
training set size, Vote and Median showed an improvement when larger
training set sizes were tested. Reducing the number of features at
large training set size improved the performance of the leading
fusion strategies. |
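The bagging-plus-fusion scheme the abstract evaluates can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes plain Euclidean KNN experts trained on bootstrap replicates, and only the Sum, Product and Vote rules are shown (MProduct, Minimum, Maximum and Median are omitted).

```python
import math
import random
from collections import Counter

def knn_predict_proba(train, x, k=3, classes=(0, 1)):
    """One KNN expert's class posterior estimate: the fraction of the k
    nearest training points (by squared Euclidean distance) in each class."""
    nearest = sorted(train, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))[:k]
    counts = Counter(label for _, label in nearest)
    return {c: counts.get(c, 0) / k for c in classes}

def bagged_predict(train, x, n_experts=5, rule="sum", classes=(0, 1)):
    """Fuse bootstrap KNN experts with a fusion rule (sum, product or vote)."""
    rng = random.Random(0)                         # fixed seed for repeatability
    probs = []
    for _ in range(n_experts):
        boot = [rng.choice(train) for _ in train]  # bootstrap replicate of the training set
        probs.append(knn_predict_proba(boot, x, classes=classes))
    if rule == "sum":
        score = {c: sum(p[c] for p in probs) for c in classes}
    elif rule == "product":
        score = {c: math.prod(p[c] for p in probs) for c in classes}
    else:                                          # majority vote over expert decisions
        score = Counter(max(p, key=p.get) for p in probs)
    return max(classes, key=lambda c: score.get(c, 0))
```

Each expert outputs soft class scores, so swapping the aggregation line is all it takes to compare fusion strategies, which is the experimental knob the abstract varies.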
|
Title: |
EVALUATING PATTERN RECOGNITION
TECHNIQUES IN INTRUSION DETECTION SYSTEMS |
Author(s): |
Marcello Esposito, Claudio
Mazzariello, Francesco Oliviero, Simon Pietro Romano and Carlo
Sansone |
Abstract: |
Pattern recognition is the discipline
which studies the design and operation of systems capable of
recognizing patterns with specific properties in data sources.
Intrusion detection, in turn, is in charge of identifying anomalous
activities by analyzing a data source, be it the logs of an
operating system or the network traffic. It is easy to find
similarities between such research fields, and it is straightforward
to think of a way to combine them. Given the descriptions above, we
can imagine an Intrusion Detection System (IDS) using techniques
proper to the pattern recognition field in order to discover an
attack pattern within the network traffic. What we propose in this
work is such a system, which exploits the results of research in the
field of data mining, in order to discover potential attacks. The
paper also presents some experimental results dealing with
performance of our system in a real-world operational scenario. |
|
Title: |
ACTIVITY IDENTIFICATION AND
VISUALIZATION |
Author(s): |
Richard J. Parker, William A. Hoff,
Alan Norton, Jae Young Lee and Michael Colagrosso |
Abstract: |
Understanding activity from observing
the motion of agents is simple for people to do, yet the procedure
is difficult to codify. It is impossible to enumerate all possible
motion patterns which could occur, or to dictate the explicit
behavioural meaning of each motion. We develop visualization tools
to assist a human user in labelling detected behaviours and
identifying useful attributes. We also apply machine learning to the
classification of motion into motion and behavioural labels. Issues
include feature selection and classifier performance. |
|
Title: |
INDEXATION OF DOCUMENT IMAGES USING
FREQUENT ITEMS |
Author(s): |
Eugen Barbu, Pierre Heroux, Sebastien
Adam and Eric Trupin |
Abstract: |
Documents exist in different formats.
When we have document images, in order to access some part,
preferably all, of the information contained in those images, we have
to deploy a document image analysis application. Document images can
be mostly textual or mostly graphical. If a user's task is to
retrieve document images relevant to a query from a set, we must
use indexing techniques. The documents and the query are translated
into a common representation. Using a dissimilarity measure (between
the query and the document representations) and a method to speed up
the search process, we may find documents that are, from the user's
point of view, relevant to the query. The semantic gap between a
document representation and the user's implicit representation can
lead to unsatisfactory results. If we want to access objects from
document images that are relevant to the document semantics, we must
enter a document understanding cycle. Understanding document
images is performed by systems that are (usually) domain dependent, and
that are not applicable in general cases (textual and graphical
document classes). In this paper we present a method to describe and
then to index document images using frequent appearances of items.
The intuition is that frequent items represent symbols in a certain
domain, and this document description can be related to the domain
knowledge (in an unsupervised manner). The novelty of our method
consists in using graph-based summaries as a description for
document images. In our approach we use a bag of objects as the
description for document images. From the document images we extract
graph-based representations. To these graphs, we apply graph mining
techniques in order to find frequent and maximal subgraphs. For
each document image we construct a bag with all frequent subgraphs
found in the graph-based representations. This bag of “symbols”
represents the description of the document. |
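The bag-of-frequent-"symbols" indexing idea can be sketched with plain item sets standing in for the mined subgraphs (the graph mining step itself is omitted). Everything here is illustrative: the support threshold, the L1 dissimilarity and all names are our assumptions, not the paper's.

```python
from collections import Counter

def frequent_items(docs, min_support=2):
    """Items appearing in at least min_support documents; these stand in
    for the frequent subgraphs mined from the graph representations."""
    counts = Counter(item for d in docs for item in set(d))
    return {i for i, c in counts.items() if c >= min_support}

def bag_description(doc, frequent):
    """Describe one document as the multiset (bag) of its frequent 'symbols'."""
    return Counter(item for item in doc if item in frequent)

def dissimilarity(bag_a, bag_b):
    """A simple L1 distance between two bag descriptions, usable for retrieval."""
    keys = set(bag_a) | set(bag_b)
    return sum(abs(bag_a.get(k, 0) - bag_b.get(k, 0)) for k in keys)
```

A query image is described by the same `bag_description` and ranked against the collection by `dissimilarity`, mirroring the indexing scheme the abstract outlines.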
|
Title: |
AUTOMATED ANNOTATION OF MULTIMEDIA
AUDIO DATA WITH AFFECTIVE LABELS FOR INFORMATION MANAGEMENT |
Author(s): |
Ching Hau Chan and Gareth J. F. Jones |
Abstract: |
The emergence of digital multimedia
systems is creating many new opportunities for rapid access to huge
content archives. In order to fully exploit these information
sources, the content must be annotated with significant features. An
important aspect of human interpretation of multimedia data, which
is often overlooked, is the affective dimension. Such information is
a potentially useful component for content-based classification and
retrieval. Much of the affective information of multimedia content
is contained within the audio data stream. Emotional features can be
defined in terms of arousal and valence levels. In this study
low-level audio features are extracted to calculate arousal and
valence levels of the audio stream. These are then mapped onto a set
of keywords with predetermined emotional interpretations.
Experimental results illustrate the use of this system to assign
affective annotation to multimedia data. |
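The arousal/valence-to-keyword mapping the abstract describes can be sketched as a simple quadrant lookup. The keyword vocabulary below is invented for illustration; the paper's predetermined emotional keywords are not given in the abstract.

```python
def affective_label(arousal, valence):
    """Map arousal and valence levels in [-1, 1] to an emotion keyword.
    The four quadrant keywords are illustrative placeholders, not the
    paper's actual vocabulary."""
    if arousal >= 0:
        return "excited" if valence >= 0 else "angry"
    return "calm" if valence >= 0 else "sad"
```

In the full system, the low-level audio features would first be reduced to arousal and valence levels, which this function then turns into an affective annotation keyword.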
|
Title: |
WEB PAGE CLASSIFICATION BASED ON WEB
PAGE SIZE AND HYPERLINKS AND WEB SITE HYPERLINK STRUCTURE |
Author(s): |
Denis L. Nkweteyim |
Abstract: |
This paper presents a new metric,
Page Rank × Inverse Links-to-word count Ratio (PR × ILW), used in
classifying Web pages as content or navigation. The new metric combines the
Web page size, the number of hyperlinks present on the page, and the
Web page rank metric based on the Web site topology. We present a
theoretical basis for the new metric, and
the results of a Web page classification study, which show that the
new metric, when combined with the links-to-word count ratio of Web
pages, accurately classifies them into the two categories: content
and navigation. |
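Reading the metric's name literally gives a simple computation: page rank multiplied by the inverse of the page's links-to-word-count ratio. The sketch below is our illustrative reading of the name; the paper's exact formula may differ.

```python
def pr_ilw(page_rank, n_links, n_words):
    """PR x ILW: page rank times the inverse links-to-word-count ratio.
    Under this reading, a navigation page (many links, few words) scores
    low and a content page (few links, many words) scores high.
    Illustrative only; the paper's exact definition may differ."""
    links_to_words = n_links / n_words           # links-to-word count ratio
    return page_rank * (1.0 / links_to_words)    # = page_rank * n_words / n_links
```

A threshold on this score (possibly combined with the raw links-to-word-count ratio, as the abstract suggests) would then separate the two page categories.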
|
Title: |
A NEW JOINLESS APRIORI ALGORITHM FOR
MINING ASSOCIATION RULES |
Author(s): |
Denis Nkweteyim and Stephen Hirtle |
Abstract: |
In this paper, we introduce a new
approach to implementing the apriori algorithm in association rule
mining. We show that by omitting the join step in the classical
apriori algorithm, and applying the apriori property to each
transaction in the transactions database, we get the same results.
We use a simulation study to compare the performances of the
classical to the new joinless algorithm under varying conditions and
draw the following conclusions: (1) the joinless algorithm offers
better space management; (2) the joinless apriori algorithm is
faster for small, but slower for large, average transaction widths.
We analyze the two algorithms to determine factors responsible for
their relative performances. The new approach is demonstrated with
an application to web mining of navigation sequences. |
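One way to read "omitting the join step" is to enumerate size-k candidates directly from each transaction's item combinations and prune them with the apriori property. The sketch below is our illustrative reading, not the authors' implementation.

```python
from itertools import combinations
from collections import Counter

def joinless_apriori(transactions, min_support):
    """Sketch of an apriori variant with no candidate join step: size-k
    candidates come straight from each transaction's item combinations,
    and the apriori property (every (k-1)-subset must be frequent)
    prunes them before counting."""
    frequent, prev, k = {}, None, 1
    while True:
        counts = Counter()
        for t in transactions:
            for cand in combinations(sorted(t), k):
                # apriori property: skip if any (k-1)-subset is infrequent
                if prev is not None and any(
                    sub not in prev for sub in combinations(cand, k - 1)
                ):
                    continue
                counts[cand] += 1
        level = {c: n for c, n in counts.items() if n >= min_support}
        if not level:
            return frequent        # no frequent itemsets of size k: done
        frequent.update(level)
        prev = set(level)
        k += 1
```

Because candidates are generated per transaction, the space trade-off the abstract mentions becomes visible: no global candidate list from a join is ever materialized.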
|
Title: |
PICTURE ID AUTHENTICATION USING
INVISIBLE WATERMARK AND FACIAL RECOGNITION FEATURES |
Author(s): |
Wensheng Zhou and Hua Xie |
Abstract: |
Picture ID authentication is very
important for any identity verification and extremely
critical for homeland security. Here we propose a unique picture ID
authentication apparatus which combines invisible watermark
embedding and detection technology with facial recognition
techniques. To demonstrate this apparatus, we implemented a system
that is capable of fast and secure verification of the integrity and
authenticity of ID documents with face image content. The proposed
invisible watermarks tolerate the most common attacks. We believe that with
only minor improvements this picture ID authentication system can be
deployed in real environments at airports and country borders.
|
|
Title: |
FAST ALGORITHM FOR OPTIMAL POLYGONAL
APPROXIMATION OF SHAPE BOUNDARIES |
Author(s): |
Prabhudev I. Hosur and Rolando A.
Carrasco |
Abstract: |
This paper presents a fast algorithm
for optimal polygonal approximation of shape boundaries, to generate
a polygon with the minimum number of vertices for a given maximum
tolerable approximation error. For this purpose, the directed
acyclic graph (DAG) formulation of the polygonal approximation
problem is considered. The reduction in computational complexity is
achieved by reducing the number of admissible edges in the DAG and
speeding up the process of determining whether the edge distortion
is within the tolerable limit. The proposed algorithm is compared
with other optimal algorithms in terms of the execution time. |
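The DAG formulation of min-vertex polygonal approximation can be sketched as a shortest-path computation: nodes are boundary points, an edge (i, j) is admissible if the segment stays within the error tolerance, and we seek the path with the fewest edges. The naive sketch below is for an open polyline and omits the paper's complexity reductions, which are its actual contribution.

```python
def max_deviation(points, i, j):
    """Max perpendicular distance of points[i..j] from segment points[i]-points[j]."""
    (x0, y0), (x1, y1) = points[i], points[j]
    dx, dy = x1 - x0, y1 - y0
    seg = (dx * dx + dy * dy) ** 0.5 or 1.0   # guard against duplicate endpoints
    return max(abs(dy * (x - x0) - dx * (y - y0)) / seg for x, y in points[i:j + 1])

def min_vertex_polygon(points, eps):
    """Shortest path in the DAG of admissible edges: fewest segments such
    that every segment's deviation stays within eps. Naive O(n^2) edge
    enumeration; the paper's speed-ups are omitted here."""
    n = len(points)
    best = [float("inf")] * n   # best[j]: min segments to reach point j
    back = [0] * n
    best[0] = 0
    for j in range(1, n):
        for i in range(j):
            if best[i] + 1 < best[j] and max_deviation(points, i, j) <= eps:
                best[j], back[j] = best[i] + 1, i
    # backtrack to recover the vertex indices of the approximating polygon
    idx, j = [n - 1], n - 1
    while j:
        j = back[j]
        idx.append(j)
    return idx[::-1]
```

The paper's reductions correspond to pruning the inner loop (fewer admissible edges) and to deciding `max_deviation <= eps` faster than by scanning every intermediate point.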
|