How to Combine Requirements and Interaction Design
Lecturer
Hermann Kaindl
Vienna University of Technology
Austria
Brief Bio
Hermann Kaindl joined the Vienna University of Technology in early 2003 as a full professor. In the same year, he was elected a member of the University Senate. Prior to moving to academia, he was a senior consultant with the division of program and systems engineering at Siemens AG Austria, where he gained more than 24 years of industrial experience in software development. His current research interests include software and systems engineering, focusing on requirements engineering and architecting, and human-computer interaction as it relates to interaction design and the automated generation of user interfaces. He has published 5 books and more than 220 refereed papers in journals, books and conference proceedings. He is a Senior Member of the IEEE, a Distinguished Scientist member of the ACM, a member of the AAAI, and is on the executive board of the Austrian Society for Artificial Intelligence.
Abstract
When the requirements and the interaction design for the user interface of a system are developed separately, they will most likely not fit together, and the resulting system will be less than optimal. Even if all the real needs are covered in the requirements and implemented, a bad interaction design and its resulting user interface may induce errors in human-computer interaction. Such a system may not even be used at all. Conversely, a great user interface for a system with features that are not required will not be very useful either. This tutorial explains the joint modeling of (communicative) interaction design and requirements (scenarios and use cases) through discourse models and domain-of-discourse models as developed by the proposer and his team. (These will also be briefly contrasted with the task-based modeling approach CTT.) While these models were originally devised for capturing interaction design only, it turned out that they can also be viewed as precisely and comprehensively specifying classes of scenarios, i.e., use cases. In this sense, they can also be utilized for specifying requirements. User interfaces for such software systems can be generated semi-automatically from our discourse models, domain-of-discourse models and requirements specifications. This is especially useful when user interfaces tailored to different devices are needed. In this way, interaction design facilitates requirements engineering, making applications both more useful and usable.
Keywords
Requirements, interaction design, user interfaces, scenarios, use cases
Target Audience
This tutorial is targeted at people who work on requirements or interaction design, e.g., requirements engineers, interaction designers, user interface developers, or project managers. It will be of interest to teachers and students as well.
The value for attendees is primarily an improved understanding of the potential separation of requirements engineering and interaction design, and of how it can be overcome by combining them to make business applications both more useful and usable.
According to previous experience, this tutorial works well for 5 to 20 attendees.
Detailed Outline
1. Introduction 5min
1.1 Brief introduction of the tutor
1.2 Brief introduction of the participants
1.3 Motivation and overview
2. Background 15min
2.1 Requirements
2.2 Object-oriented modeling features and their UML representation
2.3 Scenarios / Use Cases
2.4 Interaction design
2.5 Ontologies
2.6 Speech acts
3. Functions / tasks, goals and scenarios / use cases 30min
3.1 Relation between scenarios and functions / tasks
3.2 Relation between goals and scenarios
3.3 Composition of these relations
3.4 A systematic design process based on these relations
3.5 Exercise
4. Requirements and object-oriented models 25min
4.1 Metamodel in UML
4.2 Requirements and objects
4.3 Exercise
5. Interaction design based on scenarios and discourse modeling 35min
5.1 Interaction Tasks derived from scenarios
5.2 Communicative Acts
5.3 Adjacency Pair
5.4 Rhetorical Structure Theory (RST) relations
5.5 Procedural constructs
5.6 Conceptual Discourse Metamodel
5.7 Duality with Task-based modeling
6. Use case specification 20min
6.1 Use case diagram
6.2 Use case report (RUP)
6.3 Sketch of flow of events through scenarios
6.4 Business process — Business Use Case
6.5 Specification based on discourse modeling
7. Exercises 25min
7.1 Try to understand the model sketch of a discourse
7.2 Try to model a discourse yourself
8. Sketch of automated user-interface generation 20min
8.1 Process of user-interface generation
8.2 Examples of generated user interfaces
8.3 Unified Communication Platform
9. Summary and conclusion 5min
Data Science using the Shell
Lecturers
Andreas Schmidt
Karlsruhe Institute of Technology
Germany
Brief Bio
Prof. Dr. Andreas Schmidt is a professor at the Department of Computer Science and Business Information Systems of the Karlsruhe University of Applied Sciences (Germany). He lectures in the fields of database information systems, data analytics and model-driven software development. Additionally, he is a senior research fellow in computer science at the Institute for Applied Computer Science of the Karlsruhe Institute of Technology (KIT). His research focuses on database technology, knowledge extraction from unstructured data/text, Big Data, and generative programming. Andreas Schmidt was awarded his diploma in computer science by the University of Karlsruhe in 1995 and his PhD in mechanical engineering in 2000. Dr. Schmidt has numerous publications in the field of database technology and information extraction. He regularly gives tutorials at international conferences on Big Data-related topics and model-driven software development. Prof. Schmidt has followed sabbatical invitations from renowned institutions such as the Systems Group at ETH Zurich in Switzerland and the Database Group at the Max Planck Institute for Informatics in Saarbrücken, Germany.
Steffen G. Scholz
Karlsruhe Institute of Technology
Germany
Brief Bio
Dipl.-Ing. Dr. Steffen G. Scholz has more than 15 years of R&D experience in the field of polymer micro & nano replication, with a special focus on injection moulding and relevant tool-making technologies. He is an expert in process optimization and in algorithm design and development for micro replication processes. He studied mechanical engineering with a special focus on plastic processing and micro injection moulding and obtained his degree from RWTH Aachen University. He obtained his PhD from Cardiff University in the field of process monitoring and optimization in micro injection moulding and led a team in micro tool making and micro replication at Cardiff University. Dr. Scholz joined KIT in 2012, where he now leads the group for process optimization, information management and applications (PIA).
Abstract
For data analysis, we typically load the data into a dedicated tool, such as a relational database, the statistics program R, Mathematica, or some other specialized tool.
Often, however, there is another option, which works on nearly every computer with the necessary amount of storage available. Many shells, like bash, csh, etc., provide a set of powerful tools to manipulate and transform data, and also to perform some kinds of analysis, such as aggregation. Besides being freely available, these tools have the advantage that they can be used immediately, without first transforming and loading the data into a target system. Moreover, they are typically stream-based, so huge amounts of data can be processed without running out of main memory. With the additional use of gnuplot, sophisticated plots can easily be generated.
The aim of this tutorial is to present the most useful tools, like cat, grep, tr, sed, awk, comm, uniq, join, split, bzip2, bzcat, bzgrep, etc., and to give an introduction on how they can be used together. For example, many queries that would typically be formulated in SQL can also be performed using the tools mentioned above, as will be shown in the tutorial.
The tutorial also includes hands-on parts, in which the participants carry out a number of practical data-analysis, transformation and visualization tasks.
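As a small taste of this approach, the following sketch shows a GROUP BY-style aggregation built entirely from standard shell tools (the file name, column layout and values are invented for illustration):

```shell
# Hypothetical sample data: customer,amount pairs in a CSV file
printf 'alice,10\nbob,5\nalice,3\n' > orders.csv

# Roughly equivalent to the SQL query:
#   SELECT customer, SUM(amount) FROM orders GROUP BY customer ORDER BY customer;
awk -F, '{ sum[$1] += $2 } END { for (c in sum) print c "," sum[c] }' orders.csv | sort
```

Because awk and sort process their input as streams, the same pipeline also works on files far larger than main memory.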
Background Knowledge
Participants should be familiar with using a shell such as bash, sh, csh, or the DOS shell.
Duration (3 hours)
- Introduction 15 min.
- Commands/tools for structured data 45 min.
- Hands-on Part I 30 min.
- Commands/tools for unstructured data 30 min.
- Visualization 30 min.
- Hands-on Part II 30 min.
Software Requirements for the hands-on parts:
- Unix and Mac users: none, the required tools are already part of your distribution
- Windows users: please install Cygwin on your computer (https://www.cygwin.com/). gnuplot must additionally be selected during the Cygwin installation process.
Enterprise Modeling and Simulation for Education
Lecturer
Gerd Wagner
Brandenburg University of Technology
Germany
Brief Bio
Gerd Wagner is Professor of Internet Technology at Brandenburg University of Technology, Cottbus, Germany. He has studied Mathematics, Philosophy and Informatics in Heidelberg, San Francisco and Berlin. His research interests include modeling and simulation, foundational ontologies, and web engineering. He has more than 150 publications in these areas in international journals and books, 38 of which have been cited at least 38 times (according to Google Scholar). He is also the co-founder of the website web-engineering.info and the web-based simulation portal sim4edu.com.
Abstract
Enterprise modeling and simulation deals with making enterprise simulation models for purposes like performance analysis or management training. Enterprise simulation makes it possible to measure the performance of a business enterprise given a set of management methods, e.g., for demand forecasting, product price planning and inventory control. Enterprise simulation games allow participants to operate a business enterprise by making decisions in several functional areas, with the goal of maximizing business performance over a designated period of time in a competitive environment. These games are widely used in university education and in professional training. Typically, however, these games are developed in isolation, without (re-)using any general model, methodology, or simulation engineering framework, in the form of proprietary assets that are not made accessible for open scientific and educational use. Remarkably, the e-learning field of enterprise simulation and the information systems field of enterprise modeling do not refer to each other, despite the fact that these two fields represent two sides of the same coin. In this tutorial we present a basic conceptual framework for enterprise modeling and simulation based on the paradigm of Object Event Modeling and Simulation proposed in [1]. The framework supports the modeling of trading and manufacturing enterprises and includes a reference model for single-product enterprises. We show how to instantiate the reference model for reconstructing the classical Lemonade Stand Game as an educational simulation and how to implement it using the web-based simulation framework OESjs [2]. We also discuss how to use and extend the reference model for creating other enterprise simulations. The tutorial concludes with a discussion of possible connections between enterprise information systems modeling and enterprise simulation modeling.
[1] Gerd Wagner. Information and Process Modeling for Simulation – Part I: Objects and Events. Journal of Simulation Engineering 1:1, 2018. http://JSimE.org
[2] Available from http://sim4edu.com
Keywords
Enterprise Simulation, Educational Simulation
Aims and Learning Objectives
Learning how to make enterprise simulation models for educational purposes.
Target Audience
- Researchers interested in enterprise simulation
- Teaching faculty interested in developing their own educational enterprise simulation models
- E-Learning professionals interested in management training
Prerequisite Knowledge of Audience
This is an introductory tutorial, so no specific prerequisite knowledge is assumed. Familiarity with UML, BPMN and JavaScript is helpful.
Detailed Outline
Part I - Introduction to Object Event Modeling and Simulation
1 Ontological Foundations of Discrete Event Simulation
2 Information Modeling with UML Class Diagrams
3 Process Modeling with BPMN Process Diagrams
4 Making Object Event Simulation Models with UML and BPMN
5 Implementing and Executing Object Event Simulation Models with OESjs
Part II - A Conceptual Framework for Enterprise Modeling and Simulation
1 A Conceptual Information and Behavior Model of a Trading Enterprise
2 A Conceptual Information and Behavior Model of a Manufacturing Enterprise
Part III - Modeling and Simulation of Lemonade Stands
1 Introduction to JavaScript-based Simulation with OESjs
2 A Minimal Model of a Lemonade Stand
3 Adding a Model of Market Conditions
4 Adding Competition
5 Turning the Model into a Game by Adding User Interaction
Part IV - Outlook
1 Making Other Minimal Enterprise Simulation Models
2 Connections between EIS Modeling and Enterprise Simulation Modeling
Give me the Answer: Question Answering with Deep Learning and Applications in the Financial Domain
Lecturers
Ermelinda Oro
National Research Council (CNR)
Italy
Brief Bio
Ermelinda Oro is a researcher at the High Performance Computing and Networking Institute of the Italian National Research Council (ICAR-CNR). She is the founder and Chief Scientist of Altilia srl, a Smart Data company spin-off of the CNR. She obtained her PhD in Computer Science from the University of Calabria in 2011. She was a visiting researcher at the University of Koblenz-Landau and at the University of Oxford. She has taught computer engineering and master's courses at the University of Calabria, and Big Data and Marketing courses at LUISS University of Rome. Her research interests include Artificial Intelligence, Natural Language Processing, Deep Learning, Information Extraction and Querying, Knowledge Representation, Social Networks, and Big/Smart Data technologies.
Massimo Ruffolo
National Research Council (CNR)
Italy
Brief Bio
Massimo Ruffolo is a researcher at the High Performance Computing and Networking Institute of the Italian National Research Council (ICAR-CNR). His research interests include machine learning, information extraction, web wrapping, knowledge representation, natural language processing, document layout analysis, the semantic web, and knowledge management. He is the author of many scientific papers that have appeared in books, international journals and conference proceedings in the computer science field. He has taught in several computer science engineering courses and master's programs, and he is a reviewer for international conferences and journals. He is also involved in technology transfer as co-founder of spin-off companies operating in the fields of big data and artificial intelligence; his latest intrapreneurial initiative is Altilia srl (www.altiliagroup.com), a spin-off of the CNR.
Abstract
Question Answering (QA) is a well-known and complex research problem. The development of Knowledge Bases (KBs) helped to create QA systems that exploit formal languages to query such databases. Unfortunately, KBs have intrinsic limitations: they have a fixed schema and are inevitably incomplete. Therefore, KBs can be satisfactory for small closed-domain problems but pose many construction and usability problems in real-world use cases. Teaching machines to read natural language documents and directly answer questions, on the other hand, is still an unsolved challenge, and researchers and practitioners are working to define innovative solutions. Current state-of-the-art approaches for QA over documents are based on deep neural networks that encode the documents and the questions to determine the answers. Insight engines based on Natural Language Processing (NLP) technology make it possible to query documents written in natural language, eliminating the need to learn and use complex, proprietary formal query languages. This tutorial is divided into four brief parts. In the first part, we introduce learners to natural language processing with deep neural networks. In particular, we will see the foundations of Deep Learning, understand how to build neural networks, and take a look at the most common deep neural network architectures. In the second part, we will go through Question Answering solutions based on deep neural networks and the datasets used to train and evaluate them. We will also show attendees how to run question-answering algorithms based on natural language reading comprehension. In the third part, we will see how enterprises can benefit from question answering systems embedded in insight engines to transform documents into knowledge that can drive business decisions.
In particular, we will introduce the augmented intelligence paradigm, which enables human beings and machines to easily cooperate in order to find answers to real-world problems. In the fourth part, attendees will go deeper into the needs of financial institutions and the proposed solutions related to natural language processing with Deep Learning. We will work on case studies from the financial domain that require natural-language-based question answering. This tutorial therefore covers not only the theory but also shows how question answering with deep neural networks is applied in industry.
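To make the extractive QA task concrete, here is a deliberately minimal, non-neural sketch in plain Python: it scores each document sentence by its bag-of-words overlap with the question, whereas the neural models covered in the tutorial would instead compare learned dense encodings of question and passage. The function name and example text are invented for illustration.

```python
def answer(question, document):
    """Return the document sentence with the largest word overlap with
    the question -- a crude stand-in for neural reading comprehension."""
    q_words = set(question.lower().split())
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    # Score each candidate sentence by the number of shared question words.
    return max(sentences, key=lambda s: len(q_words & set(s.lower().split())))

doc = ("The bank reported a net profit of 2 billion euros. "
       "Revenues grew in the retail segment. "
       "The CEO announced a new digital strategy.")
print(answer("What net profit did the bank report?", doc))
# -> The bank reported a net profit of 2 billion euros
```

Neural QA systems replace the overlap score with similarity between learned representations, which is what lets them answer questions whose wording differs from the text.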
Keywords
Deep Learning, Question Answering, Knowledge Representations, Knowledge Base, Natural Language Processing, Information Extraction, Insight Engines, Financial Domain
Aims and Learning Objectives
The tutorial will introduce some of the most important concepts of natural language processing based on deep learning. It will also show attendees how to run question-answering algorithms based on natural language reading comprehension. Attendees will understand the needs of financial institutions and the proposed solutions related to natural language processing with Deep Learning.
Target Audience
Students, researchers, practitioners, and professionals coming from scientific and/or enterprise community interested in Deep Learning, Question Answering, Knowledge bases, Natural Language Processing, and Business Applications related to the Financial domain.
Prerequisite Knowledge of Audience
No specific prior knowledge is required. However, knowing the basics of neural networks, Python or similar languages, and the data management problems that financial institutions face is certainly useful.
Detailed Outline
1. Introduction (15 min.)
- Brief introduction of the tutors and participants
- Motivations and overview
2. Natural Language Processing and Deep Learning (45 min)
- The Basics
- Word Vector representations
- Common architectures of Deep Neural Networks
- Examples
3. Question Answering with Deep Learning (45 min)
- Most recent and promising approaches
- Well known datasets
- A running example
- Testing together
4. Augmented Intelligence Paradigm (30 min)
- Human in the Loop AI
- Weak supervised deep learning
- Deep Insight Engines Overview
5. Application to the financial domain (30 min)
- Motivations
- Deep Insight Engine Running Example
- Testing together
6. Conclusion and Discussions (15 min)
Enterprise Architecture Management with ArchiMate
Lecturer
Dominik Bork
University of Vienna
Austria
Brief Bio
Dominik Bork works as a post-doctoral researcher at the Research Group Knowledge Engineering in the Faculty of Computer Science at the University of Vienna. He received his PhD in information science from the University of Bamberg with a thesis on consistent enterprise modelling with multiple views. His research interests cover conceptual modelling, metamodelling, multi-view modelling, and the specification of modelling methods. Dr. Bork is the author of scientific papers that have been presented at international conferences like AMCIS, ECIS, KSEM, and HICSS, and published in international journals like Enterprise Modeling and Information Systems Architectures, Cognitive Processing, and Interaction Design & Architectures.
Knut Hinkelmann
University of Applied Sciences and Arts Northwestern Switzerland FHNW
Switzerland
Brief Bio
Prof. Knut Hinkelmann is Dean of the Master of Science in Business Information Systems at the University of Applied Sciences and Arts Northwestern Switzerland FHNW. He obtained a diploma in Computer Science from the University of Kaiserslautern in 1988 and a PhD in Natural Sciences from the Computer Science Department of the same university in 1995. From 1990 until 1998 he was a researcher and later head of the Knowledge Management research group at the German Research Center for Artificial Intelligence (DFKI). From 1998 until 2000 he worked as a product manager for Insiders Information Management GmbH. He joined the University of Applied Sciences and Arts Northwestern Switzerland FHNW in August 2000 as a professor for Information Systems. From 2002 to 2008 he was Dean of the Bachelor of Science in Business Information Technology. He has been a supervisor and external examiner of several PhD theses and a guest lecturer at the University of Vienna, the University of Krems and the University of Camerino. Furthermore, he was CEO of KIBG GmbH from 1996 until 1998, and from 2006 until 2012 he was Scientific Advisor of STEAG & Partner AG. For information on Prof. Hinkelmann's research, see: http://www.hinkelmann.ch/knut/projects.php.
Abstract
Information systems arguably play an ever-increasing role in the operations of modern enterprises, and they have evolved from supporting basic business functions to complex integrated enterprise platforms and ecosystems. Due to this complexity, enterprises increasingly adopt enterprise architecture as a means to manage complexity and change. Enterprise architecture itself has evolved from a modeling exercise and a means to align business and IT into a corporate management function concerned with managing all facets of an enterprise. This tutorial investigates the pivotal role of enterprise architecture management as an essential strategy to manage enterprise change and thus sustainability. In particular, the focus is on how the widely adopted industry-standard ArchiMate modeling language supports enterprise architecture management. After a brief introduction to the foundations of enterprise architecture management, ArchiMate, and an ArchiMate modeling tool, participants apply the tool hands-on in a case study. Thereafter, a focus group discussion will be conducted to reflect on the fitness of ArchiMate for managing enterprise architectures.
Keywords
Enterprise Architecture Management, ArchiMate, Enterprise Modeling, Case Study
Aims and Learning Objectives
Participants will learn the foundations of enterprise architecture management. Participants will be introduced to the ArchiMate 3.0.1 modeling standard. Participants will learn how ArchiMate 3.0.1 can be applied to manage an enterprise architecture. Finally, the capabilities of ArchiMate will be evaluated in a focus group discussion.
Target Audience
The tutorial addresses information science and computer science researchers and practitioners who are interested in enterprise architecture management. The tutorial will be of most benefit to participants who research or practice EAM. The tutorial will also benefit those interested in using an openly available EAM tool in their university courses.
Prerequisite Knowledge of Audience
This tutorial does not require expert knowledge in any particular field. Information science and computer science students, researchers, and practitioners with a solid background and interest in modelling or enterprise architecture management are welcome.
Detailed Outline
- Introduction to Enterprise Architecture Management 25 min
- Introduction to ArchiMate 3.0.1 20 min
- Introduction to the EAM modeling tool TEAM 15 min
- EAM Case Study 50 min
- Focus Group Discussion 60 min
- Conclusion and Wrap-up 10 min
For more information, please visit http://austria.omilab.org/psm/content/EAMtutorialICEIS2018/info