Optimal aggregation of classifiers in statistical learning


Abstract

Classification can be considered as nonparametric estimation of sets, where the risk is defined by means of a specific distance between sets associated with misclassification error. It is shown that the rates of convergence of classifiers depend on two parameters: the complexity of the class of candidate sets and the margin parameter. The dependence is explicitly given, indicating that optimal fast rates approaching O(n^{-1}) can be attained, where n is the sample size, and that the proposed classifiers have the property of robustness to the margin. The main result of the paper concerns optimal aggregation of classifiers: we suggest a classifier that automatically adapts both to the complexity and to the margin, and attains the optimal fast rates, up to a logarithmic factor.
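For readers who want the shape of the result, the following is a brief LaTeX sketch of the two conditions named in the abstract. It uses the commonly cited formulation of the margin and complexity assumptions (with margin exponent \alpha and complexity exponent \rho) rather than the paper's exact statement, so the constants and precise definitions here are assumptions, not a quotation of the paper.

```latex
% A sketch in the standard formulation, not verbatim from the paper.
% Let \eta(x) = P(Y = 1 \mid X = x) be the regression function, so the Bayes
% classifier thresholds \eta at 1/2.

% Margin assumption with exponent \alpha \ge 0: the mass near the decision
% boundary \eta = 1/2 is polynomially small,
\[
  P_X\bigl( 0 < |\eta(X) - \tfrac{1}{2}| \le t \bigr) \;\le\; C\, t^{\alpha}
  \qquad \text{for all } 0 < t \le t_0 .
\]

% Complexity assumption: the class of candidate sets has \varepsilon-entropy
% of order \varepsilon^{-\rho} for some \rho > 0.

% Under both conditions, the optimal rate for the excess misclassification
% risk is
\[
  n^{-\frac{1+\alpha}{2 + \alpha + \alpha\rho}} ,
\]
% which tends to n^{-1} as \alpha \to \infty (strong margin) and \rho \to 0
% (low complexity), matching the fast rates claimed in the abstract; the
% aggregated classifier of the paper adapts to \alpha and \rho without
% knowing them, at the cost of a logarithmic factor.
```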

Citation (APA)

Tsybakov, A. B. (2004). Optimal aggregation of classifiers in statistical learning. Annals of Statistics, 32(1), 135–166. https://doi.org/10.1214/aos/1079120131
