Minimax bounds for active learning

Abstract

This paper aims to shed light on achievable limits in active learning. Using minimax analysis techniques, we study the achievable rates of classification error convergence for broad classes of distributions characterized by decision boundary regularity and noise conditions. The results clearly indicate the conditions under which one can expect significant gains through active learning. Furthermore, we show that the learning rates derived are tight for "boundary fragment" classes in d-dimensional feature spaces when the feature marginal density is bounded from above and below. © Springer-Verlag Berlin Heidelberg 2007.
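The kind of gain the abstract refers to can be illustrated with a toy one-dimensional example (this sketch is not the paper's estimator; it assumes a noiseless threshold classifier on [0, 1], the classical setting where active label queries via bisection shrink error exponentially in the number of labels, while passive random sampling shrinks it only polynomially):

```python
import random

def label(x, t):
    # Noiseless oracle: class 1 iff x >= threshold t.
    return 1 if x >= t else 0

def passive_estimate(t, n, rng):
    # Passive learning: n labels at uniformly random points.
    # Track the tightest bracket [lo, hi] around the threshold.
    lo, hi = 0.0, 1.0
    for _ in range(n):
        x = rng.random()
        if label(x, t) == 1:
            hi = min(hi, x)
        else:
            lo = max(lo, x)
    return (lo + hi) / 2  # expected error decays like 1/n

def active_estimate(t, n, rng):
    # Active learning: each query is placed at the midpoint of the
    # current bracket, so the bracket halves with every label.
    lo, hi = 0.0, 1.0
    for _ in range(n):
        mid = (lo + hi) / 2
        if label(mid, t) == 1:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2  # error decays like 2**(-n)
```

With 20 labels, bisection pins the threshold down to within 2⁻²⁰, whereas passive sampling typically leaves an error on the order of 1/20. Noise and higher-dimensional boundary regularity, the focus of the paper, temper this exponential-vs-polynomial gap.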

Citation (APA)

Castro, R. M., & Nowak, R. D. (2007). Minimax bounds for active learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4539 LNAI, pp. 5–19). Springer Verlag. https://doi.org/10.1007/978-3-540-72927-3_3
