Evaluation of Jackknife and Bootstrap for defining confidence intervals for pairwise agreement measures


Abstract

Several research fields frequently deal with comparing different classifications of the same entities, which requires an objective way to detect overlaps and divergences between the resulting clusters. The congruence between classifications can be quantified by clustering agreement measures, including pairwise agreement measures. Several such measures have been proposed, and the importance of accompanying the point estimate with a confidence interval when comparing these measures has been highlighted. A broad range of methods can be used to estimate confidence intervals; however, evidence is lacking on which methods are appropriate for most clustering agreement measures. Here we evaluate the bootstrap and jackknife resampling techniques for calculating confidence intervals for clustering agreement measures. Contrary to what has been shown for some other statistics, simulations showed that the jackknife outperforms the bootstrap at accurately estimating confidence intervals for pairwise agreement measures, especially when the agreement between partitions is low. The coverage of the jackknife confidence interval is robust to changes in cluster number and cluster size distribution. © 2011 Severiano et al.
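The jackknife procedure the abstract refers to can be illustrated with a minimal sketch. The code below is an assumption-laden example, not the authors' implementation: it uses the Rand index as a representative pairwise agreement measure, a leave-one-out jackknife estimate of the standard error, and a normal-approximation 95% interval. The paper's exact measures and interval construction may differ.

```python
from itertools import combinations
from math import sqrt

def rand_index(a, b):
    """Rand index: fraction of entity pairs on which two partitions agree
    (the pair is either grouped together in both a and b, or apart in both).
    Partitions are given as lists of cluster labels, one per entity."""
    pairs = list(combinations(range(len(a)), 2))
    agree = sum((a[i] == a[j]) == (b[i] == b[j]) for i, j in pairs)
    return agree / len(pairs)

def jackknife_ci(a, b, z=1.96):
    """Leave-one-out jackknife confidence interval for the Rand index.

    Recomputes the measure with each entity removed in turn, estimates the
    standard error from the spread of those leave-one-out values, and returns
    (point estimate, (lower, upper)) using a normal approximation.
    """
    n = len(a)
    theta = rand_index(a, b)
    # Leave-one-out replicates of the statistic.
    loo = [rand_index(a[:i] + a[i + 1:], b[:i] + b[i + 1:]) for i in range(n)]
    mean = sum(loo) / n
    # Jackknife standard error: sqrt((n-1)/n * sum((theta_i - mean)^2)).
    se = sqrt((n - 1) / n * sum((t - mean) ** 2 for t in loo))
    return theta, (theta - z * se, theta + z * se)

# Two hypothetical partitions of 8 entities into 3 clusters each.
a = [0, 0, 0, 1, 1, 1, 2, 2]
b = [0, 0, 1, 1, 1, 2, 2, 2]
ri, (lower, upper) = jackknife_ci(a, b)
```

Unlike the bootstrap, which resamples entities with replacement and recomputes the measure on each resample, the jackknife needs only n recomputations and is deterministic, which is convenient when the agreement measure is expensive to evaluate.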

Citation (APA)

Severiano, A., Carriço, J. A., Robinson, D. A., Ramirez, M., & Pinto, F. R. (2011). Evaluation of Jackknife and Bootstrap for defining confidence intervals for pairwise agreement measures. PLoS ONE, 6(5). https://doi.org/10.1371/journal.pone.0019539
