As a software application is developed and maintained, changes to the source code may unintentionally slow down its functionality. These slowdowns are known as performance regressions. Projects concerned with performance often create performance regression tests that can be run to detect such regressions. Ideally, these tests would run on every commit; however, they usually require substantial time or resources to simulate realistic scenarios.
The paper entitled "Perphecy: Performance Regression Test Selection Made Simple but Effective" addresses this problem by predicting the likelihood that a commit will cause a performance regression. Its authors use static and dynamic analysis to gather several metrics for this prediction, and then evaluate those metrics on several projects. This thesis seeks to replicate and expand on their work.
Specifically, this thesis revisits the above-mentioned research paper by replicating its experiments and extending them to a larger set of code changes, in order to better understand how several metrics can be combined to more accurately predict which code changes may degrade the performance of the software.
This thesis successfully replicates the existing study, generates additional insights into the approach, and provides an open-source tool that helps developers detect performance regressions in code changes as software evolves.
Library of Congress Subject Headings
Computer architecture; Machine learning
Software Engineering (MS)
Department, Program, or Center
Software Engineering (GCCIS)
Mohamed Wiem Mkaouer
Christian D. Newman
J. Scott Hawker
Hannigan, Kevin, "An Empirical Evaluation of the Indicators for Performance Regression Test Selection" (2018). Thesis. Rochester Institute of Technology. Accessed from
RIT – Main Campus