Moving beyond the mean: Analyzing variance in software engineering experiments

Citations: 1 · Mendeley readers: 14
Abstract

Software Engineering (SE) experiments are traditionally analyzed with statistical tests (e.g., t-tests, ANOVAs) that assume equally spread data across groups (i.e., the homogeneity of variances assumption). Differences across groups’ variances are not seen in SE as an opportunity to gain insight into technology performance but as a hindrance to data analysis. We study the role of variance in mature experimental disciplines such as medicine, illustrate by means of simulation the extent to which variance may inform on technology performance, and analyze a real-life industrial experiment on Test-Driven Development (TDD) in which variance may impact technology desirability. Evaluating technologies based on means alone, as traditionally done in SE, may be misleading: technologies with which developers obtain similar performance (i.e., technologies with smaller variances) may be more suitable when the aim is minimizing the risk of adopting them in real practice.
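The abstract's core argument can be illustrated with a small simulation. The sketch below is hypothetical and not drawn from the paper's data: two made-up "technologies" yield the same average developer performance, but one produces far more spread, which a mean-only comparison would miss.

```python
# Hedged sketch: two hypothetical technologies with the same mean
# performance but different variance. Group names and parameters
# (mean 50, SDs 5 and 15) are illustrative assumptions.
import random
import statistics

random.seed(42)

# Technology A: consistent performance across developers.
# Technology B: same average, but much wider spread.
tech_a = [random.gauss(50, 5) for _ in range(200)]
tech_b = [random.gauss(50, 15) for _ in range(200)]

mean_a, mean_b = statistics.mean(tech_a), statistics.mean(tech_b)
sd_a, sd_b = statistics.stdev(tech_a), statistics.stdev(tech_b)

# A mean-only comparison (e.g., a t-test) would deem the technologies
# equivalent, yet B is riskier to adopt: some developers perform very
# well with it, others very poorly.
print(f"means: A={mean_a:.1f}  B={mean_b:.1f}")
print(f"SDs:   A={sd_a:.1f}  B={sd_b:.1f}")
```

Under these assumed parameters the two sample means come out close while the standard deviations differ by roughly a factor of three, which is exactly the situation where a variance-aware analysis changes which technology looks preferable.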

Citation (APA)

Santos, A., Oivo, M., & Juristo, N. (2018). Moving beyond the mean: Analyzing variance in software engineering experiments. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11271 LNCS, pp. 167–181). Springer Verlag. https://doi.org/10.1007/978-3-030-03673-7_13
