Bias in estimating the variance of K-fold cross-validation

Abstract

Most machine learning researchers perform quantitative experiments to estimate generalization error and compare the performance of different algorithms (in particular, their proposed algorithm). In order to draw statistically convincing conclusions, it is important to estimate the uncertainty of such estimates. This paper studies the very commonly used K-fold cross-validation estimator of generalization performance. The main theorem shows that there exists no universal (valid under all distributions) unbiased estimator of the variance of K-fold cross-validation, based on a single computation of the K-fold cross-validation estimator. The analysis that accompanies this result is based on the eigen-decomposition of the covariance matrix of errors, which has only three different eigenvalues corresponding to three degrees of freedom of the matrix and three components of the total variance. This analysis helps to better understand the nature of the problem and how it can make naive estimators (that don't take into account the error correlations due to the overlap between training and test sets) grossly underestimate variance. This is confirmed by numerical experiments in which the three components of the variance are compared when the difficulty of the learning problem and the number of folds are varied. © 2005 Springer Science+Business Media, Inc.
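To make the abstract's point concrete, here is an illustrative sketch (not the paper's experimental setup) of the K-fold cross-validation estimator together with the "naive" variance estimator the paper criticizes: it divides the sample variance of the K fold errors by K, implicitly treating the fold errors as independent even though their training sets overlap. The trivial model (predicting the training-set mean of y) and all names below are assumptions made for illustration only.

```python
import random
from statistics import mean, variance

def kfold_cv_naive_variance(data, K, seed=0):
    """K-fold CV estimate of generalization error for a trivial model
    (predict the training-set mean of y), plus the naive variance estimate.

    data: list of (x, y) pairs. Returns (cv_estimate, naive_variance).
    """
    rng = random.Random(seed)
    data = data[:]
    rng.shuffle(data)
    folds = [data[i::K] for i in range(K)]  # K roughly equal test blocks

    fold_errors = []
    for k in range(K):
        test = folds[k]
        # Training sets of different folds overlap in (K-2)/K of the data,
        # which is exactly what correlates the fold errors.
        train = [p for j in range(K) if j != k for p in folds[j]]
        y_hat = mean(y for _, y in train)       # trivial "model"
        fold_errors.append(mean((y - y_hat) ** 2 for _, y in test))

    cv_estimate = mean(fold_errors)
    # Naive estimator: sample variance of fold errors divided by K.
    # This assumes the K fold errors are independent; because of the
    # training-set overlap they are positively correlated, so this
    # tends to underestimate the true variance of cv_estimate.
    naive_variance = variance(fold_errors) / K
    return cv_estimate, naive_variance
```

For example, on a small synthetic dataset one could call `kfold_cv_naive_variance(data, K=10)` and compare `naive_variance` against the empirical variance of `cv_estimate` over many independently drawn datasets; the paper's analysis explains why the naive figure comes out too small.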

Citation (APA)

Bengio, Y., & Grandvalet, Y. (2005). Bias in estimating the variance of K-fold cross-validation. In Statistical Modeling and Analysis for Complex Data Problems (pp. 75–95). Springer US. https://doi.org/10.1007/0-387-24555-3_5
