Deriving metric thresholds from benchmark data

Alves T, Ypma C, Visser J


A wide variety of software metrics have been proposed, and a broad range of tools is available to measure them. However, the effective use of software metrics is hindered by the lack of meaningful thresholds. Thresholds have been proposed for only a few metrics, mostly based on expert opinion and a small number of observations. Previously proposed methodologies for systematically deriving metric thresholds have made unjustified assumptions about the statistical properties of source code metrics. As a result, the general applicability of the derived thresholds is jeopardized. We designed a method that determines metric thresholds empirically from measurement data. The measurement data for different software systems are pooled and aggregated, after which thresholds are selected that (i) bring out the metric's variability between systems and (ii) help focus on a reasonable percentage of the source code volume. Our method respects the distributions and scales of source code metrics, and it is resilient against outliers in metric values or system size. We applied our method to a benchmark of 100 object-oriented software systems, both proprietary and open-source, to derive thresholds for metrics included in the SIG maintainability model.
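The pooling-and-quantile idea described in the abstract can be sketched roughly as follows. This is a hedged illustration, not the authors' exact algorithm: each entity (e.g. a method) is weighted by its share of its system's code volume so that every system contributes equally to the pooled distribution, and thresholds are then read off at chosen cumulative-volume quantiles. The 70%/80%/90% quantiles and the toy data are illustrative assumptions.

```python
def derive_thresholds(systems, quantiles=(0.70, 0.80, 0.90)):
    """Sketch of volume-weighted threshold derivation.

    systems: list of systems; each system is a list of
             (metric_value, loc) pairs, one pair per entity.
    Returns the metric value reached at each cumulative
    code-volume quantile, pooled over all systems.
    """
    weighted = []
    for entities in systems:
        total_loc = sum(loc for _, loc in entities)
        # Weight each entity by its share of its system's code volume,
        # so large systems do not dominate the pooled distribution.
        weighted.extend((value, loc / total_loc) for value, loc in entities)

    weighted.sort()  # ascending by metric value
    total_weight = sum(w for _, w in weighted)

    thresholds = []
    cumulative = 0.0
    i = 0
    qs = list(quantiles)
    for value, weight in weighted:
        cumulative += weight
        # Emit a threshold each time the cumulative volume
        # crosses the next requested quantile.
        while i < len(qs) and cumulative >= qs[i] * total_weight:
            thresholds.append(value)
            i += 1
    return thresholds


# Toy example: one system with three entities.
# 50% of the volume has metric value 1, 25% has 2, 25% has 10.
print(derive_thresholds([[(1, 50), (2, 25), (10, 25)]]))
```

On this toy input, 70% of the code volume is covered once entities with value 2 are included, while the 80% and 90% marks fall inside the value-10 entity, so the derived thresholds are [2, 10, 10].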
