
Keynote Lectures

Semiotics in Visualisation
Kecheng Liu, Henley Business School, The University of Reading, United Kingdom

Why ERP Systems Will Keep Failing
Jan Dietz, Computer Science, Delft University of Technology, Netherlands

Conceptual Modeling in Agile Information Systems Development
Antoni Olivé, Universitat Politècnica de Catalunya, Spain

An Engineering Approach to Natural Enterprise Dynamics - From Top-down Purposeful Systemic Steering to Bottom-up Adaptive Guidance Control
José Tribolet, Computer Science and Engineering Department, INESC-ID / Instituto Superior Técnico, Portugal

Data Fraud Detection
Hans-J. Lenz, Inst. of Production, Information Systems and Operations Research, Freie Universität Berlin, Germany


Semiotics in Visualisation

Kecheng Liu
Henley Business School, The University of Reading
United Kingdom

Brief Bio
Kecheng Liu, Fellow of the British Computer Society and Senior Fellow of the Higher Education Academy, currently holds the position of Professor of Applied Informatics at the University of Reading. Additionally, he serves as the Director of the Digital Talent Academy. With a diverse range of roles, including academic lead, senior advisor, and consultant, his extensive experience spans areas such as business and IT strategies, information management, and digital leadership and transformation in both public and private sectors. As a prominent researcher, he has published 300 papers in various journals and conferences and has authored and edited 25 books on topics ranging from organisational semiotics to business and IT strategy alignment, digital leadership, as well as business informatics (e.g. in big data analytics and AI for business transformation, healthcare and green finance).

Digital visualisation is a way of representing data, ranging from simple forms such as graphs or charts to complex forms such as animated visualisations that allow users to interact with the underlying data through direct manipulation. There are various approaches to realising digital visualisation, such as data visualisation and visual analytics. Digital visualisation intertwines with a set of components, such as data collection, data processing and transformation, and a graphic engine that offers visualisation capabilities.
Visualisation is always purposeful: it illustrates relationships, discovers patterns and interdependencies, or generates a hypothesis or theory. The user's hypothesis strongly influences what data are of interest in the analysis and included in the visualisation. Therefore, a set of questions is always highly relevant in any visualisation, such as data availability, access, format (of the data itself and of its display), meaning (i.e. interpretation), the purpose of the data presented, and the effect of the visualised data on the recipients. Such questions can best be answered by drawing input from semiotics.
Semiotics is a formal doctrine of signs introduced by Peirce. A special branch is organisational semiotics, developed by Stamper and his colleagues to study the effective use of information in a business context. Data, under study through visualisation, are signs. From a semiotic perspective, visualisation is a process of abduction, a key process of scientific inquiry, or a process of generating new knowledge. When we encounter a new phenomenon, prior knowledge enables us to produce some initial, but often plausible, explanations. Abduction thus allows us to generate hypotheses (which should be plausible) and further to determine which hypothesis or proposition is to be tested. Peirce defines abduction as “the process of forming explanatory hypotheses” and the “only kind of argument which starts a new idea”.
There are four key steps in the process of abduction in data visualisation: 1) users establish an initial hypothesis based on what they see, 2) users derive some patterns of knowledge, 3) they verify the perceived visual objects against their prior knowledge, and lastly 4) they reaffirm the established hypothesis. In short, subjectivity and the user's intention have a significant impact on visualisation, which can be better examined with semiotics. A set of principles of data visualisation is proposed towards the end of the keynote speech.



Why ERP Systems Will Keep Failing

Jan Dietz
Computer Science, Delft University of Technology

Brief Bio
Jan Dietz is emeritus professor at Delft University of Technology, and visiting professor at the University of Lisbon and the Czech Technical University in Prague. He has always combined academic work with applying research outcomes in practice. He has supervised over 300 M.Sc.’s and 16 Ph.D.’s, and he has published over 250 scientific and professional papers as well as several books. Jan Dietz is the spiritual father of DEMO (Design & Engineering Methodology for Organisations), founder of the Enterprise Engineering Institute, and founder of the Ciao! Enterprise Engineering Network. He is founding editor of The Enterprise Engineering Series, published by Springer.


ERP systems, like other kinds of enterprise information systems, are rarely a real success. The main cause of the many failures in practice is the prevailing misunderstanding of these systems. The prominent current idea is that an enterprise information system (EIS) is a product, such as a car, which can deliberately be replaced by another one without much ado. Unfortunately, this view is fundamentally wrong. The proper metaphor for an EIS is the nervous system of a human body. Just as the nervous system is intrinsically and intensely connected to the body it supports, an EIS is (or should be) intrinsically and intensely connected to the organisation that it serves. Consequently, just as a neurological surgeon needs appropriate and thorough knowledge of both the nervous system and the human body, an EIS designer must not only have thorough knowledge of information systems. He or she must also, and particularly, have thorough and appropriate knowledge of organisations.

The needed thorough and appropriate knowledge is provided by the PSI-theory (Performance in Social Interaction), one of the theoretical pillars of the discipline of Enterprise Engineering. The PSI-theory re-establishes people as the ‘pearls’ of every organisation. Equipped with the right authority and bearing the corresponding responsibility, they deliver services to each other and to environmental actors, in universal patterns of social interaction, called transactions. The essential model of an organisation is a network of transactions and actors, fully abstracted from realisation and implementation. Over 20 years of practical application of the PSI-theory, notably through the DEMO methodology (Design and Engineering Methodology for Organisations), has made clear that organisations, like biological systems, can be said to have a genotype and a phenotype. The phenotype is the ever-changing ‘outside’, as typically captured in organisational charts, in business process models, and in data models. The genotype is the hidden, very stable ‘inside’. It can be revealed by the PSI-theory. If the genotype of an organisation is fully respected during the design of a supporting EIS, the EIS will perfectly fit its needs.
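The idea of an essential model as a network of transactions and actor roles can be pictured with a toy data structure. This is only an illustrative sketch; the class and field names are assumptions for exposition and are not DEMO's actual metamodel or notation.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ActorRole:
    """An actor role: an amount of authority with the corresponding responsibility."""
    name: str

@dataclass(frozen=True)
class Transaction:
    """A universal pattern of social interaction in which an executor
    delivers a product to an initiator."""
    product: str
    initiator: ActorRole
    executor: ActorRole

@dataclass
class EssentialModel:
    """A network of transactions and actor roles, abstracted from
    realisation and implementation (the organisation's 'genotype')."""
    transactions: list = field(default_factory=list)

    def actor_roles(self):
        """All actor roles that take part in some transaction."""
        roles = set()
        for t in self.transactions:
            roles.update((t.initiator, t.executor))
        return roles

# Hypothetical example: a customer initiates an order-completion transaction.
customer = ActorRole("customer")
completer = ActorRole("order completer")
model = EssentialModel([Transaction("order completed", customer, completer)])
```

The point of the sketch is the abstraction level: nothing in it says how the transaction is realised or implemented, which is exactly what distinguishes the genotype from the phenotype.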

Although ERP systems are built on a common understanding of enterprises in the same domain, this understanding is rather different from an organisation's genotype. The so-called architecture of an ERP system typically comprises descriptions of the functional areas that are supported, such as Operations, Finance, and HRM. The additionally needed constructional knowledge, as contained in the essential model of an enterprise's organisation, is mostly lacking. Consequently, ERP systems, once implemented, become ‘armours’, which frustrate organisations in their fundamental need to be authentic, i.e. to be compliant with their genotype. The situation can be compared with a human body into which a ‘wrong’ nervous system has been implanted. As a consequence, one is, for example, no longer able to fully lift an arm, and is condemned to ‘live with it’.



Conceptual Modeling in Agile Information Systems Development

Antoni Olivé
Universitat Politècnica de Catalunya

Brief Bio
Antoni Olivé is a professor of information systems at the Universitat Politècnica de Catalunya in Barcelona. Currently, he is the director of the PhD School. He has worked in the field of information systems for over 35 years, mainly in university and research contexts. His main interests have been, and are, conceptual modeling, requirements engineering, information systems design and databases. He has taught extensively on these topics. He has also conducted research on these topics, which has been published in international journals and conferences. He is the author of the book "Conceptual Modeling of Information Systems" (Springer, 2007).
He is a member of IFIP WG8.1 (Design and evaluation of information systems), where he served as chairman during 1989-1994, of the CAiSE Steering Committee, of the ER Steering Committee, of which he is currently the chair, and of the editorial boards of the journals "Journal of Database Management" and "Data and Knowledge Engineering". He was Program Co-chair of ER 2006 and General Chair of CAiSE 1997 and ER 2008. He is an ER Fellow.

The nature and the role of conceptual modeling in information systems development have been established satisfactorily neither in theory nor in practice. There are diverse views on what conceptual modeling is and on how to perform it. At one extreme, there is the view that conceptual modeling is an (optional) activity whose main purpose is to improve communication between the parties involved in the development process. At the other extreme, there is the view (shared by us) that conceptual modeling is an activity that is necessarily performed in all cases, and whose purpose is to define the conceptual schema, that is, the general knowledge a system needs in order to perform its functions. The latter view has been captured in what we call the principle of necessity of conceptual schemas, which states that "To develop an information system it is necessary to define its conceptual schema".

Agile development processes have added even more confusion to conceptual modeling. The value of "Working software over comprehensive documentation", stated in the manifesto for agile software development, seems to undermine conceptual schemas in favor of working code. However, as we explain in the talk, it does not need to be so.

In the talk, we present a framework that describes the contents of conceptual schemas, the forms they may take and the roles they play in information systems development. Based on that framework, we review the principle of necessity of conceptual schemas. We then apply the framework to the particular case of agile development, and discuss the validity of the principle of necessity in that case. The framework is intended to be useful for inspiring future research, and for improving the practice and teaching of conceptual modeling.



An Engineering Approach to Natural Enterprise Dynamics - From Top-down Purposeful Systemic Steering to Bottom-up Adaptive Guidance Control

José Tribolet
Computer Science and Engineering Department, INESC-ID / Instituto Superior Técnico

Brief Bio
José Tribolet is Full Professor of Information Systems at the Department of Computer Science and Engineering (DEI) and at the Department of Engineering and Management (DEG) (Joint Appointment) at the Instituto Superior Técnico (IST), Technical University of Lisbon, Portugal. He is senior researcher at the Information Systems Group at INESC-ID and promoter of the Centre for Organizational Design & Engineering at INESC-INOV. He is the coordinator of POSI-E3, the Professional Post-Graduate course in Information Systems and Enterprise Engineering of IST (3rd Degree Bologna Diploma). He serves presently as Chairman of the Department of Computer Science and Engineering (DEI/IST).
Dr. Tribolet holds a Ph.D. in Electrical Engineering and Computer Science from MIT (1977). He was a member of the research staff of Bell Laboratories, Murray Hill, NJ, from 1977 through 1979. He spent a full sabbatical year (1997-98) at MIT's Sloan School of Management. He was a guest professor at IWI - the Institute for Information Management of the University of St. Gallen, in Switzerland, during the spring term of 2012.
He founded in 1980 the first non-state owned research institute in Portugal, INESC – Institute for Systems and Computer Engineering. INESC is today a holding of six research institutes nationwide, three of them having become formal Associated Laboratories of the Portuguese Science System. Dr. Tribolet is the President of INESC. He has been Chairman of the Department of Electrical and Computer Engineering and Chairman and Vice-President for Post- Graduate Studies of the Department of Computer Science and Engineering of IST. He is a founding member of the Portuguese Engineering Academy and a founder of the Informatics Engineering College of the Portuguese Engineers Professional Association.

Enterprises are dynamical systems, formed by a semantic web of active servers of two kinds: carbon-based servers, normally called Humans, and silicon-based servers, called Computers.
These active elements change the state of the world through their individual and collective orchestrated networked actions, in real time. All an enterprise "does" is the sum total of the actions of its active servers. No more, no less!
An Enterprise is an entity by itself, whose existence is associated with intentions, missions, goals and purposes that are to some degree shared by its active elements and inform the prescribed organizational elements of its structures. An Enterprise is under permanent change, due to external and internal conditions, and the enacted changes occur either by spontaneous actions of its active servers or by intentional systemic change that propagates top-down and is purposefully adopted by the baseline servers.
This talk will show how relevant the engineering body of knowledge of Systems Theory and Dynamic Systems Control, and the formal principles and methods of Enterprise Engineering, are to modelling, designing and operating Enterprises, and in particular how to steer top-down strategic transformations and combine them with bottom-up emergent adaptive phenomena.



Data Fraud Detection

Hans-J. Lenz
Inst. of Production, Information Systems and Operations Research, Freie Universität Berlin

Brief Bio
In 1973 I got a doctoral degree (PhD-like) in Statistics and Operations Research at Freie Universität Berlin, Germany. In 1978 I was offered a Professorship of Applied Computer Science and Statistics at Freie Universität Berlin, and one of Statistics at the University of Bonn. I accepted the first one, and retired there in 2008. My present research concerns Business Intelligence, data quality assessment at the business and economics level in co-operation with the Institute of North America Studies, Berlin, data fraud detection, model-based controlling under uncertainty, and cost/benefit and risk calculations of oil/gas exploration and production in cooperation with Technical University Berlin. My research activities in 1969-2008 led to ~25 books published or co-edited, ~350 technical papers authored or co-authored and ~25 PhD students supervised. I received an honorary membership from the Romanian Statistical Society in 2005, and the Golden Medal of Freie Universität Berlin for excellence of service.

Data fraud is a criminal activity done by at least one person who intentionally acts secretly to deprive other people of something of value, for their own benefit, e.g. profit or prestige.
Data fraud has happened and still happens everywhere, in all centuries and in all fields of human activity: business, economics, politics, science, health care, religious communities, etc.
Data fraud is extensionally characterized by four fields: data scouting, plagiarism, manipulation and fabrication.
Data scouting is spying out data, as the NSA and other secret services in the UK, Russia, etc. are doing globally. Data plagiarism suppresses any reference to the source or provenance of the data used by the deceiver. Data manipulation takes existing data and manipulates ("fine-tunes") the content encapsulated in tables, diagrams or (historical) pictures, where mostly numbers of all data types are altered. Finally, data fabrication generates artificial data in a brute-force way, thereby avoiding expensive data recording, time-consuming observations or statistically well-planned experiments.
There is no, and will be no, omnibus test available to detect data fraud of all kinds. However, a bundle of techniques is available, such as substring and metadata matching, methods based on probability/frequency distributions, Benford's Law, multivariate inlier and outlier tests, as well as tests of conformity between a given data set and a fully specified model. The main objective is to give hints about data fraudsters with low rates of false positive and false negative cases.
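One of the techniques mentioned, Benford's Law, is easy to sketch: in many naturally grown data sets the leading digit d occurs with probability log10(1 + 1/d), and a large deviation from that distribution is a hint (not proof) of fabrication. The following is a minimal illustration, not one of the speaker's actual tools:

```python
import math
from collections import Counter

def first_digit(value):
    """Return the leading non-zero decimal digit of a number."""
    for ch in format(abs(value), ".15g"):
        if ch.isdigit() and ch != "0":
            return int(ch)
    raise ValueError("value has no non-zero digit")

def benford_deviation(values):
    """Total absolute deviation between the observed first-digit
    frequencies of `values` and those predicted by Benford's Law.
    Larger deviations on naturally grown data hint at fabrication."""
    counts = Counter(first_digit(v) for v in values if v != 0)
    n = sum(counts.values())
    return sum(abs(counts.get(d, 0) / n - math.log10(1 + 1 / d))
               for d in range(1, 10))
```

For example, multiplicatively grown data such as the powers of two conform closely to Benford's Law, whereas uniformly spread numbers deviate markedly; a flagged data set still needs human scrutiny, since Benford's Law simply does not apply to some legitimate data (e.g. assigned identifiers).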
Some famous historical and recent cases, and a bundle of useful tests, are presented.