Department of Informatics s.e.a.l

BenGen - Automatic Performance Test Suite (Benchmarking) Generation

Introduction

Functional testing (e.g., unit testing) is a common technique in software engineering, whereas non-functional testing, such as performance testing, is less well understood. Consequently, hardly any projects have a test suite that focuses on performance [1]. Benchmarks are hard to write because developers must know implementation internals of programming languages and their runtime environments (i.e., virtual machines), as well as rigorous performance measurement techniques, in order to collect reliable results.
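To illustrate why reliable measurement requires such techniques, the following is a minimal plain-JDK sketch of two of the core ideas (JIT warmup and robust aggregation over repeated runs). In practice a harness such as JMH handles these and many further pitfalls; the class and method names here are illustrative, not part of any project API.

```java
import java.util.Arrays;
import java.util.function.Supplier;

// Sketch: warm up the JIT before timing, take several measured iterations,
// and report the median rather than a single (noisy) sample.
public class NaiveBenchmark {

    public static long medianNanos(Supplier<?> workload, int warmups, int iterations) {
        Object sink = null;                        // consume results so the JIT cannot drop the work
        for (int i = 0; i < warmups; i++) {
            sink = workload.get();                 // warmup: let the JIT compile the hot path
        }
        long[] samples = new long[iterations];
        for (int i = 0; i < iterations; i++) {
            long start = System.nanoTime();
            sink = workload.get();
            samples[i] = System.nanoTime() - start;
        }
        if (sink == null) System.out.print("");    // keep the sink "used"
        Arrays.sort(samples);
        return samples[iterations / 2];            // median is more robust than the mean
    }

    public static void main(String[] args) {
        long med = medianNanos(() -> String.valueOf(System.currentTimeMillis()), 1_000, 31);
        System.out.println("median ns: " + med);
    }
}
```

Even this sketch omits issues a real benchmark must address, such as forked JVM runs, on-stack replacement, and constant folding.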

Goals of this master project

This master project’s goal is to generate a good benchmark suite either from an already existing test suite (e.g., a JUnit test suite) or from common paths identified through a method under test. The students are expected to understand the problems of performance testing and how it differs from unit testing, in order to identify suitable unit test cases that can serve as a basis for benchmark generation. The second, alternative approach finds the "usual" path through a method, either from execution data (e.g., a production system or a unit test suite) or by applying symbolic execution, and generates a workload (parameter values) from it. The main outcome of the project should be a code generator that takes a functional test suite or a set of methods to be tested and generates a performance test suite that can be used to assess software performance at a fine-grained level.
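The core of such a generator can be sketched as a source-to-source transformation: given a target class and a method invocation extracted from a unit test, emit benchmark source code. The sketch below emits a JMH-style benchmark as a string; the generator class, its method, and the emitted annotation defaults are illustrative assumptions, not the project's actual design.

```java
// Hypothetical sketch of the generator's core step: turn an invocation
// taken from a unit test into JMH benchmark source code.
public class BenchmarkGenerator {

    public static String generate(String targetClass, String invocation) {
        return String.join("\n",
            "import org.openjdk.jmh.annotations.*;",
            "import org.openjdk.jmh.infra.Blackhole;",
            "",
            "@State(Scope.Benchmark)",
            "public class " + targetClass + "Benchmark {",
            "",
            "    @Benchmark",
            "    @Warmup(iterations = 5)",
            "    @Measurement(iterations = 10)",
            "    public void benchmark(Blackhole bh) {",
            "        bh.consume(" + invocation + ");",   // Blackhole prevents dead-code elimination
            "    }",
            "}");
    }

    public static void main(String[] args) {
        System.out.println(generate("StringUtils", "StringUtils.capitalize(\"hello\")"));
    }
}
```

A real generator would also have to lift test fixtures into `@State` objects and strip assertions, which contribute overhead without being part of the measured behaviour.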

Task description

The main tasks of this project are: 

  • Familiarise with the current state of practice in performance testing (benchmarking).
  • Gain in-depth knowledge of what is important for reliably assessing software performance.
  • Study unit test suites in the Java ecosystem, in particular their common patterns, to find suitable approaches for the performance test generation.
  • Identify unit tests (selection/filtering) that are not suited for performance test generation through static analysis.
  • Find the "usual" path through a method, using either execution traces or symbolic execution.
  • Write a code generator (in a JVM language; we are open to your language of choice) that produces valid benchmark suites.
  • Depending on the number of project members, further fine-grained analyses/transformations can be applied that result in better performance test quality.
  • Write a final report summarising the results of the work, and present it to the supervising assistant.
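The selection/filtering step above could, for instance, reject unit tests whose source contains constructs that distort timing, such as sleeps, I/O, or explicit timeouts. The sketch below uses substring matching purely for illustration; a real implementation would use a proper static-analysis framework (e.g., an AST visitor), and the class name and pattern list are assumptions.

```java
import java.util.List;

// Hypothetical sketch: filter out unit tests that are unsuitable as
// benchmark candidates because their timing would measure the
// environment (sleeps, file I/O, timeouts) rather than the code.
public class TestFilter {

    private static final List<String> DISQUALIFIERS = List.of(
        "Thread.sleep", "System.in", "new File", "Files.", "timeout =");

    public static boolean isBenchmarkCandidate(String testSource) {
        for (String pattern : DISQUALIFIERS) {
            if (testSource.contains(pattern)) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isBenchmarkCandidate("assertEquals(4, add(2, 2));"));        // true
        System.out.println(isBenchmarkCandidate("Thread.sleep(100); assertTrue(ok);")); // false
    }
}
```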


This project is also available as a master thesis with a reduced scope and a slightly different focus.

References

  • [1] Philipp Leitner and Cor-Paul Bezemer (2017). An Exploratory Study of the State of Practice of Performance Testing in Java-Based Open Source Projects. In Proceedings of the 8th ACM/SPEC International Conference on Performance Engineering (ICPE).

Posted: 12.3.2018

Contact: Christoph Laaber
