Keynote Lectures
Keynote lectures are plenary sessions scheduled to take about 45 minutes, plus 10 minutes for questions.
List:
Brief Bio of Dr. J. K. Aggarwal
J. K. Aggarwal has served on the faculty of The University of Texas at Austin College of Engineering in the Department of Electrical and Computer Engineering since 1964. He is currently one of the Cullen Professors of Electrical and Computer Engineering. Professor Aggarwal earned his B.Sc. from the University of Bombay, India, in 1957, his B.Eng. from the University of Liverpool, England, in 1960, and his M.S. and Ph.D. from the University of Illinois, Urbana, Illinois, in 1961 and 1964, respectively. His research interests include image processing, computer vision and pattern recognition. His current research focuses on the automatic recognition of human activity and interactions in video sequences, and on the use of perceptual grouping for the automatic recognition and retrieval of images and videos from databases. A fellow of the IEEE (1976) and IAPR (1998), Professor Aggarwal received the Best Paper Award of the Pattern Recognition Society in 1975, the Senior Research Award of the American Society for Engineering Education in 1992 and the IEEE Computer Society Technical Achievement Award in 1996. He is the recipient of the 2004 K. S. Fu Prize of the IAPR and the 2005 Leon K. Kirchmayer Graduate Teaching Award of the IEEE. He is the author or editor of 7 books and 52 book chapters, and the author of over 200 journal papers, as well as numerous proceedings papers and technical reports. He has served as Chairman of the IEEE Computer Society Technical Committee on Pattern Analysis and Machine Intelligence (1987-1989), Director of the NATO Advanced Research Workshop on Multisensor Fusion for Computer Vision, Grenoble, France (1989), Chairman of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (1993), and President of the International Association for Pattern Recognition (1992-1994). He is a Life Fellow of the IEEE and a Golden Core Member of the IEEE Computer Society.
Abstract: The effort to develop computer systems able to detect humans and to recognize their activities is part of a larger effort to develop personal assistants. However, recognition of human activities will also lead to other applications, including virtual reality, smart monitoring and surveillance systems, motion analysis in sports, medicine and choreography, and vision-based user interfaces. Thus, recognition of human activity is a key area in video understanding, video being the preferred medium of communication today. The understanding of human activity is a diverse and complex subject that includes tracking and modeling human activity, and representing video events at the semantic level. Its scope ranges from understanding the actions of an isolated person to understanding the actions and interactions of a crowd. At The University of Texas at Austin, we are pursuing a number of projects on human motion. Professor Aggarwal will present his research on modeling and recognition of human actions and interactions. The work includes the study of interactions at the gross level as well as at the detailed level. The two levels present different problems in terms of observation and analysis. At the gross level, we model persons as blobs; at the detailed level, we conceptualize human actions in terms of an operational triplet ‘agent-motion-target’, similar to the ‘verb argument structure’ in linguistics. We use dictionary-based definitions of human interactions as domain knowledge and construct the classification rules for the human interactions.
The issues considered in these problems will illustrate the richness and the difficulty associated with understanding human motion. Application of the above research to monitoring and surveillance will be discussed.
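As a loose illustration of the ‘agent-motion-target’ idea described above, the following Python sketch shows one way such triplets and dictionary-based classification rules could be represented. All names, motions and rules here are hypothetical, invented for illustration; this is not Professor Aggarwal's actual system.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Triplet:
    """Operational 'agent-motion-target' triplet, analogous to a
    verb-argument structure in linguistics (illustrative only)."""
    agent: str
    motion: str
    target: str

# Hypothetical dictionary-based definitions of interactions (domain knowledge).
DEFINITIONS = {
    Triplet("person_1", "moves_toward", "person_2"): "approaching",
    Triplet("person_1", "extends_arm_toward", "person_2"): "hand-shaking",
}

def classify(observed):
    """Map each observed triplet to an interaction label via the dictionary."""
    return [DEFINITIONS.get(t, "unknown") for t in observed]

print(classify([Triplet("person_1", "moves_toward", "person_2")]))
# -> ['approaching']
```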
Brief Bio of Dr. Georges Gardarin
Georges Gardarin was born in Riom, France on February 19, 1947. He entered the Ecole Normale Supérieure de l'Enseignement Technique in 1968. From 1971 to 1978, he was an assistant professor at Paris VI University. During that period, he was also a consultant at Ordoprocessor, where he built a computer system, and at Renault, where he designed a new distributed information system. He completed his Ph.D. thesis, on concurrency control in distributed databases, at the University of Paris VI in 1978. From 1978 to 1980, he was a visiting professor at UCLA, California. He published several papers on concurrency control and database integrity, notably with Prof. W. Chu and M. Melkanoff. From 1980 to 1990, he was a professor at Paris VI University, teaching databases and distributed systems. He was also chief scientist at INRIA, where he headed the Sabre project, which was developing an object-relational parallel DBMS. From 1990 to 2000, he created and developed the PRiSM Research Laboratory at the new University of Versailles Saint-Quentin. In January 2000, he joined e-XMLMedia, a start-up he founded with industrial partners. The start-up developed XML middleware to store and publish XML on top of classical DBMSs. The products are currently distributed by several licensed companies, one version being distributed in open source by XQuark. Since January 2003, he has been back as a Professor at the University of Versailles Saint-Quentin. He leads a team developing projects on XML mediation and the semantic Web, notably an extended mediator with intelligent wrappers based on text mining. The outcome should be an XQuery search engine for multiple heterogeneous data sources. Georges has written more than 120 papers in international journals and conferences, and several books in French, some of which have been translated into English and Spanish. He wrote the best-known French database book and has recently published a book on XML and information systems.
Abstract: With the advent of XQuery as a standard for querying XML collections, several information mediator systems have been developed, using XML as a pivot language. More precisely, XML mediators focus on supporting the XQuery (or sometimes the SQL/XML) query language on XML views of heterogeneous data sources. Wrappers leverage data sources to XML views with basic query facilities. The data are integrated on demand by the mediator, which delegates sub-queries to the wrappers. Using such information integration platforms to query the semantic Web is a challenge, both for scalability and for data semantics reasons. Meanwhile, semantic peer-to-peer (P2P) networks are emerging as an important infrastructure to manage distributed data, notably on the Web. An important goal is improving the query capabilities of distributed heterogeneous data. Coupling data mediation and P2P technology, P2P data mediation strives to efficiently support advanced queries over heterogeneous data sources annotated with various metadata and mapping schemes. In the talk, we will briefly analyze the main services provided by mediation systems and discuss their extension to the semantic Web in P2P mode. We will discuss the annotation service for describing sources and semantic mappings, the query service to express distributed semantic queries, and the routing services to route queries and results. Finally, we will survey some projects and report on PathFinder, an experimental P2P mediation system developed at the University of Versailles.
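To make the mediator/wrapper division of labor concrete, here is a minimal Python sketch of a mediator integrating data on demand by delegating sub-queries to wrappers that expose sources as queryable views. The class names, the query interface and the data are all invented for illustration; a real system of the kind described would evaluate XQuery against genuine XML views.

```python
class Wrapper:
    """Exposes a heterogeneous source as a view with basic query support
    (hypothetical interface)."""
    def __init__(self, source_name, documents):
        self.source_name = source_name
        self.documents = documents  # stand-ins for XML fragments

    def query(self, predicate):
        # A real wrapper would translate an XQuery sub-query into the
        # source's native query language; here we simply filter locally.
        return [doc for doc in self.documents if predicate(doc)]

class Mediator:
    """Integrates data on demand by delegating sub-queries to wrappers
    and merging their answers."""
    def __init__(self, wrappers):
        self.wrappers = wrappers

    def query(self, predicate):
        results = []
        for w in self.wrappers:
            results.extend(w.query(predicate))  # delegate, then merge
        return results

m = Mediator([Wrapper("books_db", ["<book>XML</book>"]),
              Wrapper("web_src", ["<page>XQuery</page>"])])
print(m.query(lambda d: "XML" in d))
```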
Brief Bio of Dr. Anil K. Jain
Anil Jain is a University Distinguished Professor in the Department of Computer Science and Engineering at Michigan State University. He served as the Department Chair during 1995-99. He received his B.Tech. degree from the Indian Institute of Technology, Kanpur in 1969 and his M.S. and Ph.D. degrees from Ohio State University in 1970 and 1973, respectively. His research interests include statistical pattern recognition, data clustering, texture analysis, document image understanding and biometric authentication. He received awards for best papers in 1987 and 1991, and for outstanding contributions in 1976, 1979, 1992, 1997 and 1998, from the Pattern Recognition Society. He also received the 1996 IEEE Transactions on Neural Networks Outstanding Paper Award. He was the Editor-in-Chief of the IEEE Transactions on Pattern Analysis and Machine Intelligence (1991-1994). He is a fellow of the IEEE, the ACM, and the International Association for Pattern Recognition (IAPR). He has received a Fulbright Research Award, a Guggenheim fellowship and the Alexander von Humboldt Research Award. He delivered the 2002 Pierre Devijver lecture sponsored by the IAPR and received the 2003 IEEE Computer Society Technical Achievement Award. Holder of six patents in the area of fingerprint matching, he is the author of a number of books: Biometric Systems: Technology, Design and Performance Evaluation (Springer, 2005); Handbook of Face Recognition (Springer, 2005); Handbook of Fingerprint Recognition (Springer, 2003; recipient of the PSP award from the Association of American Publishers); BIOMETRICS: Personal Identification in Networked Society (Kluwer, 1999); 3D Object Recognition Systems (Elsevier, 1993); Markov Random Fields: Theory and Applications (Academic Press, 1993); Neural Networks and Statistical Pattern Recognition (North-Holland, 1991); Analysis and Interpretation of Range Images (Springer-Verlag, 1990); Algorithms for Clustering Data (Prentice-Hall, 1988); and Real-Time Object Measurement and Classification (Springer-Verlag, 1988). ISI has designated him a highly cited researcher. According to Citeseer, the book Algorithms for Clustering Data by Jain and Dubes (Prentice-Hall, 1988) is ranked #93 among the most cited articles in computer science of all time. The survey paper "Data Clustering: A Review" by Jain, Murty and Flynn (ACM Computing Surveys, Vol. 31, No. 3, 1999, pp. 264-323) is ranked #28 among the most cited computer science articles published in 1999. He is an Associate Editor of the IEEE Transactions on Information Forensics and Security and is currently serving as a member of the study team on Whither Biometrics being conducted by the National Academies (CSTB).
Abstract: A wide variety of systems require reliable personal recognition schemes to either confirm or determine the identity of an individual requesting their services. The purpose of such schemes is to ensure that only a legitimate user, and not anyone else, accesses the rendered services. Examples of such applications include secure access to buildings, computer systems, laptops, cellular phones and ATMs. Biometric recognition, or simply biometrics, refers to the automatic recognition of individuals based on their physiological and/or behavioral characteristics. By using biometrics it is possible to confirm or establish an individual’s identity based on “who she is”, rather than by “what she possesses” (e.g., an ID card) or “what she remembers” (e.g., a password).
Current biometric systems make use of fingerprints, hand geometry, iris, face, voice, etc. to establish a person's identity. Biometric systems also introduce an aspect of user convenience: for example, they alleviate the need for a user to “remember” multiple passwords associated with different applications. In spite of the fact that several large-scale biometric systems have been deployed (e.g., the US-VISIT program), the design, implementation and performance evaluation of an automatic biometric recognition system remains an extremely challenging problem. A biometric system that uses a single biometric trait for recognition has to contend with problems related to non-universality of the trait, spoof attacks, limited degrees of freedom, large intra-class variability, and noisy data. Some of these problems can be addressed by integrating the evidence presented by multiple biometric traits of a user (e.g., face and iris). Such systems, known as multimodal biometric systems, demonstrate substantial improvement in recognition performance. In this talk, we will present various applications of biometrics, the challenges associated with designing robust biometric systems, state-of-the-art recognition performance, fusion strategies for implementing a multimodal biometric system, and the need for securing the biometric system itself.
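As a rough illustration of one common fusion strategy, score-level fusion, the following Python sketch combines normalized face and iris match scores with a weighted sum. The score ranges, weights and decision threshold are made-up values for illustration only, not figures from the talk.

```python
# Hypothetical score-level fusion for a face + iris multimodal system.
def min_max_normalize(score, lo, hi):
    """Map a raw matcher score into [0, 1]."""
    return (score - lo) / (hi - lo)

def fuse(face, iris, w_face=0.4, w_iris=0.6):
    """Weighted-sum fusion of two normalized similarity scores."""
    return w_face * face + w_iris * iris

# Invented raw scores and ranges, for illustration only.
face = min_max_normalize(72.0, lo=0.0, hi=100.0)   # face matcher similarity
iris = min_max_normalize(0.81, lo=0.0, hi=1.0)     # iris matcher similarity

fused = fuse(face, iris)
print("accept" if fused >= 0.5 else "reject", round(fused, 3))
```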
Brief Bio of Dr. Matthias Jarke
Matthias Jarke is Professor of Information Systems at RWTH Aachen University and Executive Director of the Fraunhofer FIT Institute for Applied Information Technology. Jarke holds master's degrees in business administration and computer science and a doctorate in business informatics, all from the University of Hamburg, Germany. Prior to joining Aachen, he held faculty positions at New York University’s Stern School of Business and at the University of Passau. His research area is information systems support for cooperative activities in business, engineering, and culture. He has been the coordinator of three European research projects in these areas. Jarke was Editor-in-Chief of Information Systems from 1993 to 2003, and has served as program chair of major international conferences such as VLDB, EDBT, CoopIS, and CAiSE. He is an elected senior reviewer for software engineering for the German national science foundation (DFG) and, since 2004, president of the German Informatics Society (GI).
Abstract: The Internet has not only enabled worldwide access to heterogeneous information sources such as web pages and traditional database contents, but also increasingly serves as a medium for multimedia information and opinion exchange. Community Information Systems address the combination of these two trends: heterogeneous worldwide information access, and cooperative discussion and work. This combination creates many new opportunities, e.g. in the educational and cultural sectors, but also entails serious risks and socio-political problems. New technical solutions are required for problems such as the shared definition of IS structure in such communities, high variability combined with strong guidance in user interfaces, and security and trust management. In particular, this requires a schema organization that can change itself gradually, yet in a controlled manner, i.e. one that has the property of being reflexive. The talk will give an overview of interdisciplinary research in this area and present the ATLAS architecture developed at RWTH Aachen University. A number of real-world application examples, including a major effort for the reconstruction of a cultural heritage research community in Afghanistan, will illustrate the approach.
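As a loose, hypothetical sketch of what a reflexive schema organization might look like, the following Python fragment stores the schema as ordinary data in the same system and gates every schema change through a controlled evolution rule. The class, the rule and the attribute names are invented for illustration; this is not the ATLAS architecture.

```python
class ReflexiveSchema:
    """A schema that describes itself as data, so it can evolve gradually
    under explicit control rules (illustrative only)."""
    def __init__(self):
        # The schema is itself data: attribute name -> type name.
        self.attributes = {"attribute_name": "str", "type_name": "str"}

    def evolve(self, attr, type_name):
        """Controlled change: additions are allowed, but silent
        redefinition of an existing attribute is not (an invented rule)."""
        if attr in self.attributes:
            raise ValueError(f"{attr} already defined; evolution must be gradual")
        self.attributes[attr] = type_name

s = ReflexiveSchema()
s.evolve("discussion_thread", "list")
print(s.attributes)
```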
Brief Bio of Dr. Timos Sellis
Prof. Timos Sellis received his diploma degree in Electrical Engineering in 1982 from the National Technical University of Athens (NTUA), Greece. In 1983 he received the M.Sc. degree from Harvard University, and in 1986 the Ph.D. degree from the University of California at Berkeley, where he was a member of the INGRES group, both degrees in Computer Science. In 1986, he joined the Department of Computer Science of the University of Maryland, College Park as an Assistant Professor, and became an Associate Professor in 1992. Between 1992 and 1996 he was an Associate Professor at the Computer Science Division of NTUA, where he is currently a Full Professor. Prof. Sellis is also the head of the Knowledge and Database Systems Laboratory at NTUA. His research interests include peer-to-peer database systems, data warehouses, the integration of Web and databases, and spatial database systems. He has published over 120 articles in refereed journals and international conferences in the above areas and has been an invited speaker at major international events. Prof. Sellis is a recipient of the prestigious Presidential Young Investigator (PYI) award, given by the President of the USA to the most talented new researchers (1990), and of the VLDB 1997 10-Year Paper Award for his work on spatial databases. He was the president of the National Council for Research and Technology of Greece (2001-2003) and a member of the VLDB Endowment (1996-2000). He also serves as a member of the ACM SIGMOD Advisory Board.
Abstract: Peer-to-peer (P2P) computing has attracted a lot of attention in both academia and industry. In P2P systems, autonomous peers (computers) are all treated in a uniform way, they can join and leave the system at any time, and essentially they form a large distributed system. Although keyword searching and routing in such networks have seen much activity in the last few years, only a few researchers have addressed the case where peers hold non-traditional types of information or even complete (say, relational) database management systems. On the other hand, research in distributed, heterogeneous database systems has been around for many years; however, the database community has only recently started working on enhancing P2P systems with data management capabilities. In this talk we will focus on two major problems that deal with these issues. First, we describe problems and challenges in query processing on P2P networks. In such networks, peers hold structured databases, and each peer holds mappings to some other peers; such mappings allow peers to exchange information by translating attributes (according to these mappings) so as to fit their schemas. The standard practice for answering a query is to consecutively rewrite it along the propagation path, which often results in significant loss of information. We will present an adaptive and bandwidth-efficient solution to the problem in the context of an unstructured, purely decentralized system. Our method allows peers to individually choose which rewritten version of a query to answer, and to discover information-rich sources that would otherwise remain hidden. The second problem deals with extending searching and routing algorithms to the case where peers hold spatial information. Until recently, research has focused mostly on P2P systems that host one-dimensional data (i.e. strings, numbers, etc.). However, the need for P2P applications with multi-dimensional data is emerging.
Yet, existing indexing and search techniques are not suitable for such applications: most indices for multi-dimensional data have been developed for centralized environments. Our focus is on structured P2P systems that share spatial information. We present a totally decentralized indexing and routing technique that is suitable for spatial data, i.e. it handles P2P applications in which spatial information of various sizes can be dynamically inserted or deleted, and peers can join or leave. The proposed technique preserves locality well, and supports efficient routing, especially for popular and/or nearby areas.
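To see how consecutively rewriting a query along peer mappings can lose information, consider this small Python sketch. The peers, mappings and attribute names are invented for illustration; the talk's actual method, by contrast, lets peers choose among rewritten versions precisely to avoid this kind of loss.

```python
# Hypothetical attribute mappings between neighboring peers.
MAPPINGS = {
    ("peer_A", "peer_B"): {"surname": "last_name", "dob": "birth_date"},
    ("peer_B", "peer_C"): {"last_name": "lname"},  # "birth_date" unmapped
}

def rewrite(query_attrs, path):
    """Consecutively rewrite a query's attributes along a propagation path;
    any attribute without a mapping at some hop is silently dropped."""
    attrs = list(query_attrs)
    for src, dst in zip(path, path[1:]):
        mapping = MAPPINGS[(src, dst)]
        attrs = [mapping[a] for a in attrs if a in mapping]
    return attrs

print(rewrite(["surname", "dob"], ["peer_A", "peer_B", "peer_C"]))
# -> ['lname']; 'dob' was lost at the B -> C hop
```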
Brief Bio of Dr. John B. Oommen
Dr. John Oommen was born in Coonoor, India on September 9, 1953. He obtained his B.Tech. degree from the Indian Institute of Technology, Madras, India in 1975, and his M.E. from the Indian Institute of Science in Bangalore, India in 1977. He then went on to his M.S. and Ph.D., which he obtained from Purdue University in West Lafayette, Indiana in 1979 and 1982, respectively. He joined the School of Computer Science at Carleton University in Ottawa, Canada, in the 1981-82 academic year. He is still at Carleton and holds the rank of Full Professor. His research interests include automata learning, adaptive data structures, statistical and syntactic pattern recognition, stochastic algorithms and partitioning algorithms. He is the author of more than 235 refereed journal and conference publications and is a Fellow of the IEEE. Dr. Oommen is on the Editorial Boards of the IEEE Transactions on Systems, Man and Cybernetics, and Pattern Recognition.
Abstract: All modern-day Database Management Systems (DBMSs) use histograms to approximate query result sizes in the query optimizer. This is because histograms are simple structures that can be easily utilized in determining efficient Query Evaluation Plans (QEPs). Oommen and Thiyagarajah introduced two new histogram methods, namely the Rectangular Attribute Cardinality Map (R-ACM) and the Trapezoidal Attribute Cardinality Map (T-ACM). The superiority of these in yielding more accurate query result-size estimates has been well demonstrated, and the resulting superior QEPs for a theoretically-modeled database have been shown. In this talk, apart from highlighting the power of the ACMs, we make a “conceptual leap” and demonstrate how the ACMs can be incorporated into a real-life DBMS. This has been done by designing and implementing a prototype which sits on top of an ORACLE 9i system. The integration is achieved in C/C++ and PL/SQL, and serves as a prototype “plug-in” to the ORACLE system, since it is fully integrated and completely transparent to users. The superiority of utilizing the ACM histograms is rigorously validated by conducting an extensive set of experiments on the TPC-H benchmark data sets, and by testing the system on equi-select and equi-join queries. The talk also explains the entire set of experimental results obtained by integrating the underlying algorithms into the ORACLE query optimizer.
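As background for the histogram idea the talk builds on, the following Python sketch shows a plain equi-width histogram estimating an equi-select result size under the usual uniform-distribution assumption. This is the classical baseline technique, not the R-ACM or T-ACM themselves, and all numbers are invented for illustration.

```python
# Classical equi-width histogram for query result-size estimation.
def build_histogram(values, num_buckets, lo, hi):
    """Count how many attribute values fall into each equi-width bucket."""
    width = (hi - lo) / num_buckets
    counts = [0] * num_buckets
    for v in values:
        idx = min(int((v - lo) / width), num_buckets - 1)
        counts[idx] += 1
    return counts, width, lo

def estimate_equi_select(hist, key):
    """Estimate the size of sigma_{attr = key}(R), assuming the bucket's
    frequency is spread uniformly over its distinct values (approximate)."""
    counts, width, lo = hist
    idx = min(int((key - lo) / width), len(counts) - 1)
    return counts[idx] / width

hist = build_histogram([1, 2, 2, 3, 7, 8, 8, 9], num_buckets=2, lo=0, hi=10)
print(estimate_equi_select(hist, 8))  # rough estimate for attr = 8
```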