Department of Informatics s.e.a.l

PTConf - Performance Test (Re)Configuration

Introduction

Performance tests in the form of software microbenchmarks are a simple means of assessing software performance at a fine-granular level, i.e., at the level of individual methods. They are the unit-test equivalent for performance. Usually, a performance test is executed repeatedly (many thousands of times) to obtain rigorous results for the unit under test. The number of repetitions depends on the system the tests are executed on and is defined by developers either through command-line arguments or annotations. Currently, there are no tools that support developers in choosing how to correctly configure performance tests for their system.
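
As an illustration, a minimal JMH benchmark could look as follows. The annotation values shown (warmup and measurement iterations, fork count) are arbitrary example choices; they are exactly the kind of configuration this project aims to determine automatically:

import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Warmup;

public class StringConcatBenchmark {

    // The annotation values below are the "configuration" this project targets:
    // today, developers pick them manually, often without a sound basis.
    @Benchmark
    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.NANOSECONDS)
    @Warmup(iterations = 10, time = 1, timeUnit = TimeUnit.SECONDS)
    @Measurement(iterations = 20, time = 1, timeUnit = TimeUnit.SECONDS)
    @Fork(3)
    public String concat() {
        // return a value so the JIT compiler cannot eliminate the computation
        return "foo" + System.nanoTime();
    }
}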

Goals of this master project

In this master project, the students are expected to investigate the current state of performance-test configuration, design a solution which supports (re)configuration, implement a tool which performs the (re)configuration, and integrate it into a currently developed tool for continuous integration (a Jenkins plugin). The students will focus on performance tests written in Java with JMH (the Java Microbenchmark Harness, which is part of OpenJDK). In the first part, the students will conduct a mining-software-repositories study, in which the source code of projects with JMH benchmarks is parsed to explore existing configurations. Second, they will investigate whether the quality of test results changes between versions and define a set of code changes that influence performance-test results. Third, machine learning techniques should be applied to predict the quality change of the results. Fourth, the configuration of performance tests will be adapted both before the execution starts (based on the predictions) and dynamically, stopping the execution once the results have reached a defined quality level.
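
For the mining part, extracting existing configurations could be sketched, for example, with the JavaParser library; the library choice, class name, and the selection of annotations below are illustrative assumptions, not a prescribed design:

import com.github.javaparser.StaticJavaParser;
import com.github.javaparser.ast.CompilationUnit;
import com.github.javaparser.ast.body.MethodDeclaration;

import java.nio.file.Path;

// Illustrative sketch: list all JMH benchmark methods in a source file
// together with their configuration annotations (@Warmup, @Measurement, @Fork).
public class BenchmarkConfigExtractor {

    public static void main(String[] args) throws Exception {
        CompilationUnit cu = StaticJavaParser.parse(Path.of(args[0]));

        cu.findAll(MethodDeclaration.class).stream()
          .filter(m -> m.getAnnotationByName("Benchmark").isPresent())
          .forEach(m -> {
              System.out.println("benchmark: " + m.getNameAsString());
              // Print the configuration-relevant annotations verbatim.
              m.getAnnotations().stream()
               .filter(a -> {
                   String n = a.getNameAsString();
                   return n.equals("Warmup") || n.equals("Measurement") || n.equals("Fork");
               })
               .forEach(a -> System.out.println("  config: " + a));
          });
    }
}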

Task description

The main tasks of the project are (depending on the number of students, this list is adaptable):

  • Familiarize with performance testing in Java/JMH.

  • Extract source code from GitHub repositories.

  • Develop an AST parser that extracts information from the source code.

  • Study execution results for performance/quality changes and their root causes (e.g., added loop, syscall, etc.).

  • Apply machine learning models (probably linear regression) to predict performance-quality changes based on historical execution information and a set of source-code changes.

  • Implement tooling that reconfigures existing test configurations (through automatic source-code changes) and adapt JMH to support dynamic stoppage criteria that end the execution once the result quality has reached a certain level (see the sketch after this list).

  • Include the tooling in an existing Jenkins plugin for performance-test execution in CI (developed as part of CSPA - Continuous Software Performance Assessment).

  • Write a report summarizing the results of the master project and present it to the assistant/professor.
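
A minimal sketch of such a dynamic stoppage criterion, assuming (as one possible quality metric) the coefficient of variation of the collected measurements and a hypothetical threshold:

import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of a dynamic stoppage criterion: keep collecting
// measurement iterations until the coefficient of variation (CV) of the
// samples drops below a threshold, i.e., the result is "stable enough".
// Both the metric (CV) and the threshold are assumptions for illustration.
public class StoppageCriterion {

    private static final double CV_THRESHOLD = 0.01; // hypothetical: 1% relative variability
    private static final int MIN_ITERATIONS = 10;

    private final List<Double> samples = new ArrayList<>();

    public void addSample(double executionTimeNs) {
        samples.add(executionTimeNs);
    }

    // Returns true once enough samples were collected and variability is low.
    public boolean shouldStop() {
        if (samples.size() < MIN_ITERATIONS) {
            return false;
        }
        double mean = samples.stream().mapToDouble(Double::doubleValue).average().orElse(0);
        double variance = samples.stream()
                .mapToDouble(s -> (s - mean) * (s - mean))
                .sum() / (samples.size() - 1);
        double cv = Math.sqrt(variance) / mean;
        return cv < CV_THRESHOLD;
    }
}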

 

This project is designed for three students but can be adapted to different group sizes, and it is also available as a master thesis with reduced scope and a slightly different focus.

References

  • [1] Leitner and Bezemer - An Exploratory Study of the State of Practice of Performance Testing in Java-Based Open Source Projects
  • [2] Laaber et al. - Performance Testing in the Cloud. How Bad is it Really?
  • [3] Georges et al. - Statistically Rigorous Java Performance Evaluation

Posted: 24.10.2018

Contact: Christoph Laaber
