Thursday, March 17, 2011
13.30-14.30, BIN 1.D.22 (new location! formerly BIN 0.B.06)
Abstract
Due to the pivotal role of software requirements specifications in
software engineering, software engineers spend considerable effort on
their quality assurance. As requirements are mainly written in natural
language, however, relatively few means of automated quality assessment
exist. We found that clone detection, a technique widely applied to
source code, is a promising way to assess one important quality aspect
automatically: redundancy that stems from copy-and-paste operations. In
this talk, I describe a large-scale case study that applied clone
detection to 28 requirements specifications with a total of 8,667
pages. I report on the amount of redundancy found in real-world
specifications, discuss its nature as well as its consequences, and
evaluate to what extent existing code clone detection approaches can be
applied to assess the quality of requirements specifications in
practice.
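The underlying idea is simple enough to sketch. The following Python
fragment, a minimal illustration rather than the tool used in the
study, finds duplicated word sequences across two specification texts
using fixed-length word shingles; the shingle length and the
whitespace normalization are illustrative assumptions.

    # Minimal sketch of text clone detection via word shingles.
    # Shingle length and lower-casing are illustrative choices,
    # not the parameters used in the study described above.

    def shingles(text, n=10):
        words = text.lower().split()
        return set(" ".join(words[i:i + n])
                   for i in range(len(words) - n + 1))

    def find_clones(spec_a, spec_b, n=10):
        """Return word sequences of length n occurring in both texts."""
        return sorted(shingles(spec_a, n) & shingles(spec_b, n))

    if __name__ == "__main__":
        s1 = ("the system shall log every failed login attempt "
              "and notify the administrator by email")
        s2 = ("on shutdown the system shall log every failed login "
              "attempt and notify the administrator by email immediately")
        for clone in find_clones(s1, s2, n=8):
            print("clone:", clone)

Practical clone detectors add normalization, efficient indexing, and
tolerance for small gaps; the sketch only illustrates the kind of
redundancy being searched for.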
Short Bio
Stefan Wagner
received a diploma in computer science from the University of Applied
Sciences, Augsburg, an MSc in distributed and multimedia information
systems from Heriot-Watt University, Edinburgh, and a doctoral degree
from Technische Universität München. Since 2007, he has been working at
TU München in the Software Systems Engineering group of Prof. Broy,
heading the competence centre for software quality. He leads the
consortium project Quamoco, in which he is developing a new quality
model jointly with partners such as SAP and Siemens. Dr. Wagner has
published more than 50 national and international contributions on
quality specification and evaluation, software testing, clone
detection, and requirements engineering.
Thursday, March 17, 2011
16.00-17.00, BIN 2.A.10
Abstract
Reliably predicting
software defects is one of the holy grails of software engineering.
Researchers have devised and implemented a plethora of defect prediction
approaches, which vary in terms of accuracy, complexity, and the input
data they require. However, the absence of an established benchmark
makes it hard, if not impossible, to compare approaches. We discuss a
benchmark for defect prediction, which we exploit to compare well-known
techniques, together with novel approaches we devised. We measure
prediction performance in two scenarios: entity classification
(defect-prone or not) and ranking. We also take into account the effort
needed to review the entities. Based on the results of our comparison,
we present a number of insights into the prediction approaches, and we
outline future research directions in the field.
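To make the two scenarios concrete, here is a minimal Python sketch.
The metrics chosen (precision/recall for classification, and the share
of defects found in the top-ranked entities for ranking) are
assumptions for illustration, not necessarily those of the benchmark.

    # Illustrative evaluation in the two scenarios from the abstract.
    # The metrics are assumptions for this sketch; the benchmark
    # discussed in the talk may use different ones.

    def classification_scores(predicted, actual):
        """predicted/actual: dicts mapping entity -> bool (defect-prone?)."""
        tp = sum(predicted[e] and actual[e] for e in actual)
        fp = sum(predicted[e] and not actual[e] for e in actual)
        fn = sum(not predicted[e] and actual[e] for e in actual)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        return precision, recall

    def ranking_score(scores, defects, cutoff=0.2):
        """Fraction of all defects located in the top `cutoff` share of
        entities when ranked by predicted defect-proneness."""
        ranked = sorted(scores, key=scores.get, reverse=True)
        top = ranked[: max(1, int(len(ranked) * cutoff))]
        return sum(defects[e] for e in top) / max(1, sum(defects.values()))

An effort-aware variant would weight each inspected entity by its
review cost (for example, its size) instead of counting entities
uniformly.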
Short Bio
Marco D'Ambros
is a postdoctoral researcher at the University of Lugano, Switzerland.
He earned his PhD in software engineering from the same university in
October 2010, under the supervision of Prof. Michele Lanza. He received
MSc degrees from both Politecnico di Milano (Italy) and the University
of Illinois at Chicago. His research interests lie in the domain of
software engineering, with a special focus on software evolution,
software visualization, and defect prediction and analysis. He has
authored more than 25 technical papers and is the creator of several
software visualization and program comprehension tools.
Friday, March 18, 2011
08.30-09.30, BIN 1.B.18
Abstract
In the development of a
software system, large amounts of new information are produced
continuously. Source code, bugs, iteration plans and documentation, to
name just a few, are changed or newly created by developers of the
software system every day. As a developer works on the system, she not
only produces or changes the information in such artifacts, but also
has to find information to answer questions and to stay aware of
relevant information. However, to complete her task effectively, the
developer requires only the small portion of information that is
pertinent to her work. To support developers in coping with
this overload and managing the information, we propose two
developer-centric models. The information fragment model supports the
automatic integration of information to help rank, filter and interpret
the information a developer might be interested in. The
Degree-of-Knowledge (DOK) model provides a means to automatically
determine the core of what a developer knows. This knowledge model can
then be used to identify the information a developer might be interested
in.
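As a rough illustration of the second model's intuition, the sketch
below computes a degree-of-knowledge value as a weighted combination of
authorship and interaction data. The linear form, the weights, and the
input ranges are assumptions made for illustration; the actual DOK
model presented in the talk may be defined differently.

    # Hypothetical sketch of a degree-of-knowledge (DOK) value that
    # combines authorship and interaction. The weights and the linear
    # form are assumptions; the model in the talk may differ.

    def degree_of_knowledge(authorship, interaction,
                            w_author=0.7, w_interact=0.3):
        """authorship, interaction: values in [0, 1] for one developer
        and one code element (e.g., share of changes authored,
        recency-weighted selections and edits in the IDE)."""
        return w_author * authorship + w_interact * interaction

    # Rank code elements by how well one developer knows them.
    elements = {"Parser.java": (0.9, 0.4), "Cache.java": (0.1, 0.8)}
    ranked = sorted(elements,
                    key=lambda e: degree_of_knowledge(*elements[e]),
                    reverse=True)
    print(ranked)  # best-known elements first

Such a ranking is what allows the model to filter and highlight the
information a developer is most likely to care about.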
Short Bio
Thomas Fritz
is a Ph.D. candidate in the Department of Computer Science at the
University of British Columbia. He completed his Diplom thesis as part
of the OBASCO (Objects, Aspects and Components) group at the Ecole des
Mines de Nantes, France, and received his Diplom degree from the
Ludwig-Maximilians-University Munich, Germany, in 2005. He also has
experience working as an intern with several companies, including the
IBM labs in Zurich and Ottawa. His research focuses on helping software
developers better manage the information and systems on which they
work. At ICSE 2010, Thomas won an ACM SIGSOFT Distinguished Paper Award
and placed third in the ACM Student Research Competition.
Friday, March 18, 2011
11.00-12.00, BIN 1.B.18
Abstract
In the last decade, the use of open source software has become an
important factor in reducing the costs of IT projects. Unfortunately,
the success of many projects is hindered by a lack of documentation.
Many libraries are difficult to use because implicit rules (such as "a
call to close() must be preceded by a call to open()") are
undocumented. To address this problem, specification mining aims to
learn such specifications from program executions. In this talk, I will
take two steps towards making mined specifications applicable in
practice. First, I will introduce object behavior models, a
specification mining technique that yields concise specifications of
the behavior of individual objects. Second, I will approach the problem
of incomplete specifications by combining test case generation with
specification mining.
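To give a feel for the first step, the following Python sketch mines a
simple behavior model from method-call traces of individual objects and
uses it to flag unseen call sequences, such as a close() with no
preceding open(). Treating the last call as the object's state is a
simplifying assumption made here; the object behavior models presented
in the talk are richer.

    # Sketch of mining a per-object behavior model from call traces and
    # checking new traces against it. Using the previous call as the
    # "state" is a simplification of the models in the talk.

    from collections import defaultdict

    def mine_model(traces):
        """traces: lists of method names observed on one object each."""
        transitions = defaultdict(set)
        for trace in traces:
            state = "<init>"
            for call in trace:
                transitions[state].add(call)
                state = call
        return transitions

    def violations(model, trace):
        state, bad = "<init>", []
        for call in trace:
            if call not in model[state]:
                bad.append((state, call))
            state = call
        return bad

    model = mine_model([["open", "read", "close"], ["open", "close"]])
    print(violations(model, ["close"]))  # [('<init>', 'close')]

The second step in the talk, combining test case generation with
mining, attacks the weakness visible even in this toy example: the
model can only contain transitions that some execution has exercised.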
Short Bio
Valentin Dallmeier
studied computer science at the universities of Passau and Saarbrücken
(Germany). He received his diploma in 2005 from Saarland University and
joined the chair for software engineering led by Andreas Zeller in the
same year. His research interests are applications of dynamic program
analysis, in particular fault localization and specification mining. In
2010, his dissertation work was awarded the Ernst Denert Award for
Software Engineering.
Friday, March 18, 2011
14.30-15.30, BIN 1.B.18
Abstract
In this talk I argue that we need a radical change in the way we
approach software assessment, both in practice and in research.
Assessment is a critical software engineering activity, often
accounting for as much as 50% of the overall development effort.
However, in practice this activity is regarded as secondary and is
dealt with in an ad hoc way. This does it a disservice. We should
recognize it explicitly and approach it holistically as a discipline.
Why holistically? Because software evolution is a multidimensional
phenomenon that exhibits itself in multiple forms and at multiple
levels of abstraction. For example, software evolution spans multiple
topics such as modeling, data mining, visualization, human-computer
interaction, and even language design. There exists an extensive body
of research in each of these areas, but the approaches are mostly
disparate and thus have little overall impact. We need a new kind of
research effort that deals with their integration. Ultimately,
assessment is a human activity that concerns taking decisions in
specific situations. Thus, to be effective, assessment must go beyond
general technicalities and deal with those specific situations. For
example, instead of having a predefined generic tool, we should be able
to craft one that deals with the constraints of the system under study.
To accommodate the scale of the problem, the research methods should be
adapted to the task as well. First, it is critical to integrate tool
building into the research process, because without scalable tools we
cannot handle large systems. Second, we have to work collaboratively,
both to integrate our conceptual approaches and to share the practical
costs of tool building.
Short Bio
Tudor Gîrba
attained his PhD in 2005 from the University of Berne, Switzerland, and
now works as an independent consultant. His main expertise lies in the
area of software engineering, with a focus on software and data
assessment. Among other things, he has led the work on the Moose
analysis platform (http://moosetechnology.org) since 2003. He has
published numerous peer-reviewed papers, has served on program
committees for several dozen international venues, and is regularly
invited to give talks and lectures. He is currently advocating that
assessment be recognized as a critical software engineering activity.
He coined the term "humane assessment" (http://humane-assessment.com),
and he is currently helping companies assess and manage large software
systems and data sets.