Speaker: Prof. Dr. Matthias Hollick
Host: Burkhard Stiller
Multihop wireless networks such as mesh, sensor, or ad hoc networks have been confined to lab environments for decades. However, due to the progress and ready availability of basic technologies to realize these networks, they are increasingly deployed in the wild. But are these networks fit for survival? Do we need to worry about digital predators or smaller digital pests? How do network mechanisms operate under harsh environmental conditions? This talk discusses the past, present, and future of multihop wireless networks, with particular emphasis on the development of wilderness survival skills, which we consider decisive for their success.
Prof. Dr. Matthias Hollick is heading the Secure Mobile Networking Lab (SEEMOO) at the Computer Science Department of Technische Universität Darmstadt, Germany since 2009. He has been researching and teaching at TU Darmstadt, Universidad Carlos III de Madrid (UC3M), and the University of Illinois at Urbana-Champaign (UIUC). His research focus is on robust, secure and privacy-preserving as well as quality-of-service-aware communication for mobile and wireless networks.
Speaker: Prof. Dr. Martin Theobald
Host: Michael Böhlen
Recent advances in the field of information extraction have paved the way for the automatic construction and growth of large, semantic knowledge bases from Web sources. Knowledge bases like DBpedia or YAGO today contain hundreds of millions of facts about real-world entities and their relationships, which are captured in the popular Resource Description Framework (RDF) format. However, the very nature of the underlying extraction techniques entails that the resulting RDF knowledge bases may contain a significant amount of incorrect, incomplete, or even inconsistent factual knowledge, which makes efficient and reliable query answering over this kind of uncertain RDF data a challenge. Our query engine, named URDF, performs query answering in uncertain RDF knowledge bases via a combination of Datalog-style deduction rules, consistency constraints, and probabilistic inference, which will be the main subject of this talk. Specifically, by casting the above scenario into a probabilistic database setting, we develop a new top-k algorithm for query answering which, for the first time in the context of probabilistic databases, allows us to fully integrate data and confidence computations over this kind of probabilistic input data. Extensions of our framework include the automatic learning of these deduction rules from RDF data sources, as well as the consideration of temporal deduction rules and consistency constraints over time-annotated, probabilistic facts.
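To make the probabilistic-database setting above concrete, the following toy sketch derives an answer via one Datalog-style rule, computes its exact confidence from the lineage over independent uncertain base facts by naive possible-worlds enumeration, and ranks answers by confidence. All facts, the rule, and the names are invented for illustration; URDF's actual top-k algorithm avoids this exponential enumeration.

```python
from itertools import product

# Hypothetical uncertain base facts with confidences (assumed independent);
# invented for illustration, not URDF's actual data model.
facts = {
    "bornIn(Einstein,Ulm)": 0.9,
    "locatedIn(Ulm,Germany)": 0.8,
}

def confidence(lineage):
    """Exact probability that a lineage formula (a DNF over fact ids)
    holds, computed by enumerating all possible worlds."""
    ids = sorted({f for clause in lineage for f in clause})
    total = 0.0
    for world in product([True, False], repeat=len(ids)):
        truth = dict(zip(ids, world))
        p = 1.0
        for f, holds in truth.items():
            p *= facts[f] if holds else 1.0 - facts[f]
        # The formula holds if at least one clause is fully satisfied.
        if any(all(truth[f] for f in clause) for clause in lineage):
            total += p
    return total

# Deduction rule (illustrative):
#   bornInCountry(X,C) :- bornIn(X,Y), locatedIn(Y,C)
lineage = [["bornIn(Einstein,Ulm)", "locatedIn(Ulm,Germany)"]]
answers = {"Germany": confidence(lineage)}
top_k = sorted(answers.items(), key=lambda kv: -kv[1])[:1]
```

Here the single answer "Germany" gets confidence 0.9 x 0.8 = 0.72; a real engine would interleave this confidence computation with answer generation rather than materializing all worlds.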
Prof. Dr. Martin Theobald joined the ADReM research group at the University of Antwerp in 2012 as an Associate Professor, teaching courses on Databases and Information Retrieval. Between 2008 and 2012, he worked as a Senior Researcher at the Max Planck Institute for Informatics in Saarbrücken. Before that, he spent two years as a Post-Doc at the Stanford InfoLab, working on the Trio probabilistic database system as well as on the BioAct and WebBase projects. He received his doctoral degree from Saarland University in 2006 for his work on the TopX search engine for the ranked retrieval of XML data. Martin is currently a member of the editorial advisory board of Elsevier's Information Systems, and he has served on the program committees and as a reviewer for numerous international journals, conferences, and workshops, including TODS, TKDE, VLDB-J, PVLDB, SIGMOD, SIGIR, ICDE, WSDM, and WWW.
Speaker: Prof. Nigel Collier, Ph.D.
Host: Michael Volk
Semantic understanding of biomedical texts has seen growing interest in recent years. Text mining approaches can help scientists make sense of the complex data locked away inside the unstructured narratives of scientific texts, electronic health records, as well as more informal sources such as news media. Progress on extracting the facts that are hidden within these massive data sets requires interdisciplinary collaboration to develop intelligent tools that genuinely meet the needs of experts. In this talk I will introduce two text mining systems which exemplify the challenges, algorithms and resources that are being investigated. (1) BioCaster is a public health surveillance system developed for early alerting against infectious diseases using Web-based news and social media. It uses a fusion of supervised machine learning for document classification, expert rules to normalise significant entities and their relationships (e.g. victim - location / victim - disease associations) and time series analysis algorithms to perform aberration detection. The system has been in use since 2008 by a G7 grouping of health ministries and international organisations. (2) The second system, Phenominer, is currently under development and aims to identify facts about the relationship between genes, diseases and phenotypes in scientific texts. The first stages of the system have been assembled by combining state-of-the-art syntactic parsing, machine learning (SVMs) as well as prior domain knowledge in biomedical ontologies and annotated corpora. I will focus in particular on our experimental approach to complex entity extraction, which is still poorly understood. Phenominer is joint work with D. Rebholz-Schuhmann, A. Oellrich, M. V. Tran, H. Q. Le and Q. T. Ha.
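Aberration detection over disease-report time series, as mentioned for BioCaster, can be illustrated with an EARS-C2-style baseline that is standard in biosurveillance: flag a day whose count exceeds the mean of a lagged baseline window by several standard deviations. This is only a generic sketch; the abstract does not specify BioCaster's actual algorithms, and all parameters below are illustrative defaults.

```python
import statistics

def c2_alarm(counts, baseline=7, lag=2, threshold=3.0):
    """EARS-C2-style aberration detection over a daily count series.
    Flags day t if counts[t] > mean + threshold * stdev of the
    baseline window ending `lag` days before t. Illustrative only,
    not BioCaster's exact method."""
    alarms = []
    for t in range(baseline + lag, len(counts)):
        window = counts[t - lag - baseline : t - lag]
        mu = statistics.mean(window)
        sigma = statistics.pstdev(window) or 1.0  # guard against zero stdev
        if counts[t] > mu + threshold * sigma:
            alarms.append(t)
    return alarms
```

For example, a series of daily case counts hovering around 2-3 that suddenly jumps to 20 would raise an alarm on the jump day, while a flat series raises none.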
Prof. Nigel Collier, Ph.D., is a Marie Curie Research Fellow at the European Bioinformatics Institute in Cambridge and an Associate Professor at the National Institute of Informatics in Tokyo. He received his Ph.D. in Computational Linguistics from UMIST (now the University of Manchester) in 1996. He was awarded a Toshiba Fellowship to work on machine translation and then spent two years as a postdoc at the University of Tokyo until 2000, where he helped establish the GENIA project, which has provided benchmark annotated data to the biomedical text mining community. In 2000 he became associate professor at the National Institute of Informatics in Tokyo, where he was Principal Investigator on several projects related to knowledge acquisition from biomedical literature. In 2008 he was awarded a prestigious Japan Science and Technology Agency Sakigake award to investigate early alerting of infectious diseases from Web media. The resulting BioCaster system has been widely used by international human and animal health agencies. In 2012 he was awarded an EC Marie Curie Fellowship to conduct research into the acquisition and linking of phenotypes in scientific texts. Dr. Collier's research interests span a range of areas in intelligent text understanding. His primary focus is the integration of natural language processing, knowledge representation and machine learning for knowledge acquisition and discovery in the biomedical domain.
Speaker: Prof. Dr. Arie van Deursen
Host: Thomas Fritz
A substantial body of research on software evolution addresses the level of commits as mined from source code repositories. In this presentation, we will explore two larger units of change. One of these is the pull request as used on GitHub, which typically represents a coherent change set used for bug fixes or new features. The other change type is at the level of full libraries, for which we will use the full history of Maven Central as our data set. To that end, we will look at differences between versions and draw conclusions on the general impact of library incompatibilities. And, to draw these coarse-grained conclusions over a full set of libraries, we will show how we make use of fine-grained change sets derived at the individual statement level, using the ChangeDistiller technology as developed at SEAL in Zurich.
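The idea of extracting fine-grained, statement-level change sets from two versions of a source file can be sketched with a simple line-based diff; note that ChangeDistiller itself compares abstract syntax trees rather than lines, so this is only a rough approximation, and the example source snippets are invented.

```python
import difflib

def fine_grained_changes(old_stmts, new_stmts):
    """Classify statement-level edits between two versions of a file.
    Line-based approximation of fine-grained change extraction;
    ChangeDistiller's real algorithm diffs ASTs, not text lines."""
    changes = []
    matcher = difflib.SequenceMatcher(None, old_stmts, new_stmts)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag != "equal":  # tag is 'replace', 'delete', or 'insert'
            changes.append((tag, old_stmts[i1:i2], new_stmts[j1:j2]))
    return changes

# Two hypothetical versions of a method body (illustrative Java statements):
old = ["int total = 0;", "for (Item i : items)", "  total += i.price;", "return total;"]
new = ["int total = 0;", "for (Item i : items)", "  total += i.price * i.qty;", "return total;"]
```

Running `fine_grained_changes(old, new)` reports a single statement update inside the loop body, which is the granularity at which change sets can then be aggregated across library versions.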
Prof. Dr. Arie van Deursen is a professor at Delft University of Technology, where he leads the Software Engineering Research Group. His research interests include software testing, software architecture, and social aspects of software engineering.
Speaker: Cristian Danescu-Niculescu-Mizil, Ph.D.
Host: Avi Bernstein
Much of online social activity takes the form of natural language, from product reviews to conversations on social-media platforms. I will show how analyzing these interactions from the perspective of language use can provide a new understanding of social dynamics in online communities. I will describe two of my efforts in this direction. The first project leverages insights from psycholinguistics to build a novel computational framework that shows how key aspects of social relations between individuals are embedded in (and can be inferred from) their conversational behavior. In particular, I will discuss how power differentials between interlocutors are subtly revealed by how much one individual immediately echoes the linguistic style of the person they are responding to. The second project explores the relation between users and their community, as revealed by patterns of linguistic change. I will show that users follow a determined lifecycle with respect to their susceptibility to adopt new community norms, and how this insight can be harnessed to predict how long a user will stay active in the community. This talk includes joint work with Susan Dumais, Michael Gamon, Dan Jurafsky, Jon Kleinberg, Jure Leskovec, Lillian Lee, Bo Pang, Christopher Potts and Robert West.
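The notion of one speaker echoing another's linguistic style can be illustrated with a toy measure: how much more likely a reply is to contain a given style marker (e.g. a function-word category) when the prompt it answers contains that marker. This is only a rough formulation of the echoing idea described above, not the exact coordination metric from the speaker's work, and the marker set and data below are invented.

```python
def coordination(exchanges, marker_words):
    """Toy style-echoing score for a list of (prompt, reply) pairs:
    P(marker in reply | marker in prompt) - P(marker in reply).
    Positive values suggest repliers echo the prompt's style.
    Illustrative formulation, not the exact published measure."""
    def has_marker(utterance):
        return any(w in utterance.lower().split() for w in marker_words)

    replies = [has_marker(r) for _, r in exchanges]
    triggered = [has_marker(r) for p, r in exchanges if has_marker(p)]
    if not triggered:
        return 0.0
    return sum(triggered) / len(triggered) - sum(replies) / len(replies)
```

For instance, with first-person pronouns as the marker category, a positive score over a conversation log would indicate that repliers use "I" more often when responding to utterances that themselves contain "I".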
Cristian Danescu-Niculescu-Mizil, Ph.D. is a faculty member of the Max Planck Institute SWS. His research aims at developing computational frameworks that can lead to a better understanding of human social behavior, by unlocking the unprecedented potential of the large amounts of natural language data generated online. His work tackles problems related to conversational behavior, opinion mining, computational semantics and computational advertising. He is the recipient of several awards, including the WWW 2013 Best Paper Award and a Yahoo! Key Scientific Challenges award, and his work has been featured in popular-media outlets such as the New Scientist, Nature News, NPR and the New York Times. Cristian Danescu-Niculescu-Mizil received his Ph.D. in computer science from Cornell University and was a postdoctoral researcher in the computer science and linguistics departments at Stanford University. Earlier, he earned a master's degree from Jacobs University Bremen and an undergraduate degree from the University of Bucharest.
Speaker: Prof. Douglas Vogel, Ph.D.
Host: Daning Hu
Healthcare is currently in a global crisis mode, with little to suggest that existing ways of doing things will lead to long-term effectiveness. Fortunately, there are a number of technological innovations that can be brought to bear to effect change. These include support for extending traditional forms of healthcare as well as introducing new options, e.g., quantified self and food quality assurance. The strategic direction lies in personal empowerment with an extended view of healthcare in an integrated environment beyond the scope of traditional healthcare systems. However, issues abound. The purpose of this address is to begin a dialog that can help all of us and society move towards sustainable healthcare through application of technology in an atmosphere of enhanced quality of life.
Prof. Doug Vogel, Ph.D., is Professor of Information Systems at the City University of Hong Kong and is an Association for Information Systems (AIS) Fellow as well as AIS President and Director of the eHealth Research Institute for the Harbin Institute of Technology School of Management in China. He received his M.S. in Computer Science from U.C.L.A. and his Ph.D. in Management Information Systems from the University of Minnesota where he was also research coordinator for the MIS Research Center. Professor Vogel has published widely and directed extensive research on group support systems, knowledge management and technology support for education. He has been recognized as the most cited IS author in Asia-Pacific. He is currently engaged in introducing mobile devices and support for integrated collaborative applications in educational and healthcare systems.