Analysis of perceptron-based active learning

Abstract

We start by showing that in an active learning setting, the Perceptron algorithm needs Ω(1/ε²) labels to learn linear separators within generalization error ε. We then present a simple selective sampling algorithm for this problem, which combines a modification of the perceptron update with an adaptive filtering rule for deciding which points to query. For data distributed uniformly over the unit sphere, we show that our algorithm reaches generalization error ε after asking for just Õ(d log 1/ε) labels. This exponential improvement over the usual sample complexity of supervised learning has previously been demonstrated only for the computationally more complex query-by-committee algorithm. © Springer-Verlag Berlin Heidelberg 2005.
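The abstract describes two ingredients: a modified perceptron update and an adaptive filtering rule that queries a label only when the current hypothesis is uncertain about a point. The sketch below illustrates that combination in the spirit of the paper; the function name, the reflection-style update, the halving rule, and all constants (`R`, the budgets, the initial band width) are illustrative assumptions, not the paper's exact algorithm or parameter settings.

```python
import numpy as np

def active_perceptron(oracle, d, label_budget, unlabeled_budget=200_000, R=8, seed=0):
    """Hypothetical sketch of a selective-sampling perceptron.

    `oracle(x)` returns the true label sign(u . x) in {-1.0, +1.0} for a
    hidden unit vector u. Labels are requested only for points falling
    inside an adaptively shrinking band around the decision boundary.
    """
    rng = np.random.default_rng(seed)

    def random_point():
        x = rng.normal(size=d)
        return x / np.linalg.norm(x)       # uniform on the unit sphere

    x = random_point()
    v = oracle(x) * x                      # initialize from one labeled point
    s = 1.0 / np.sqrt(d)                   # width of the query band (assumed)
    queries = streak = 0
    for _ in range(unlabeled_budget):
        if queries >= label_budget:
            break
        x = random_point()
        margin = v @ x
        if abs(margin) > s:                # filtering rule: discard confident
            continue                       # points without asking for a label
        y = oracle(x)                      # query the label
        queries += 1
        if np.sign(margin) != y:           # mistake: reflect v across x
            v = v - 2.0 * margin * x       # preserves ||v||, never decreases v . u
            streak = 0
        else:
            streak += 1
            if streak >= R:                # many correct in a row: band is too
                s /= 2.0                   # wide for the current error, halve it
                streak = 0
    return v / np.linalg.norm(v)
```

The reflection update `v - 2(v·x)x` is norm-preserving, and on a mistake the term `-2(v·x)(x·u)` is non-negative, so the angle to the target can only improve; the shrinking band is what lets the label cost scale like log 1/ε rather than 1/ε² in the analysis.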

APA

Dasgupta, S., Kalai, A. T., & Monteleoni, C. (2005). Analysis of perceptron-based active learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 3559 LNAI, pp. 249–263). Springer Verlag. https://doi.org/10.1007/11503415_17
