Leave One Out Error, Stability, and Generalization of Voting Combinations of Classifiers

Abstract

We study the leave-one-out and generalization errors of voting combinations of learning machines. A special case considered is a variant of bagging. We analyze in detail combinations of kernel machines, such as support vector machines, and present theoretical estimates of their leave-one-out error. We also derive novel bounds on the stability of combinations of any classifiers. These bounds can be used to formally show that, for example, bagging increases the stability of unstable learning machines. We report experiments supporting the theoretical findings.
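To make the two central quantities concrete, here is a minimal, self-contained sketch of the setting the abstract describes: a bagging-style voting combination of weak classifiers, and its leave-one-out error. The dataset, the one-dimensional decision-stump base learner, and the number of bootstrap rounds are illustrative assumptions, not details from the paper.

```python
import random

def train_stump(points, labels):
    # Base learner: a 1-D decision stump. Pick the threshold (among the
    # sample points) and sign that minimize training error.
    best_t, best_sign, best_err = 0.0, 1, len(points) + 1
    for t in points:
        for sign in (1, -1):
            err = sum(1 for x, y in zip(points, labels)
                      if (sign if x >= t else -sign) != y)
            if err < best_err:
                best_err, best_t, best_sign = err, t, sign
    t, sign = best_t, best_sign
    return lambda x: sign if x >= t else -sign

def bagged_vote(points, labels, n_bags=11, seed=0):
    # Bagging: train each stump on a bootstrap resample, then combine
    # them by (unweighted) majority vote.
    rng = random.Random(seed)
    n = len(points)
    stumps = []
    for _ in range(n_bags):
        idx = [rng.randrange(n) for _ in range(n)]  # bootstrap sample
        stumps.append(train_stump([points[i] for i in idx],
                                  [labels[i] for i in idx]))
    return lambda x: 1 if sum(s(x) for s in stumps) >= 0 else -1

def loo_error(points, labels, n_bags=11):
    # Leave-one-out error: retrain the whole voting combination on the
    # remaining n-1 points and test on the single held-out point.
    mistakes = 0
    for i in range(len(points)):
        tr_x = points[:i] + points[i + 1:]
        tr_y = labels[:i] + labels[i + 1:]
        f = bagged_vote(tr_x, tr_y, n_bags=n_bags, seed=i)
        if f(points[i]) != labels[i]:
            mistakes += 1
    return mistakes / len(points)

# Toy 1-D data: negatives clustered near 0, positives near 1.
pts = [0.05, 0.1, 0.2, 0.3, 0.8, 0.9, 1.0, 1.1]
lbs = [-1, -1, -1, -1, 1, 1, 1, 1]
print(loo_error(pts, lbs))
```

The paper's stability argument concerns exactly the quantity computed in `loo_error`: removing one training point perturbs every bootstrap round, but the majority vote damps the effect, which is the intuition behind bagging stabilizing unstable base learners.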

Citation (APA)
Evgeniou, T., Pontil, M., & Elisseeff, A. (2004). Leave One Out Error, Stability, and Generalization of Voting Combinations of Classifiers. Machine Learning, 55(1), 71–97. https://doi.org/10.1023/B:MACH.0000019805.88351.60
