We introduce efficient margin-based algorithms for selective sampling and filtering in binary classification tasks. Experiments on real-world textual data reveal that our algorithms perform significantly better than popular and similarly efficient competitors. Using the so-called Mammen-Tsybakov low-noise condition to parametrize the instance distribution, and assuming linear label noise, we show bounds on the convergence rate to the Bayes risk of a weaker adaptive variant of our selective sampler. Our analysis reveals that, excluding logarithmic factors, the average risk of this adaptive sampler converges to the Bayes risk at rate N^{-(1+α)(2+α)/(2(3+α))}, where N denotes the number of queried labels and α > 0 is the exponent in the low-noise condition. For all α > √3 − 1 ≈ 0.73 this convergence rate is asymptotically faster than the rate N^{-(1+α)/(2+α)} achieved by the fully supervised version of the base selective sampler, which queries all labels. Moreover, for α → ∞ (hard-margin condition) the gap between the semi- and fully-supervised rates becomes exponential.

Preliminary versions of this paper appeared in the proceedings of NIPS 2002 (Margin-based algorithms for information filtering), COLT 2003 (Learning probabilistic linear-threshold classifiers via selective sampling), and NIPS 2008 (Linear classification and selective sampling under low noise conditions).
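The crossover value α = √3 − 1 follows from equating the two rate exponents: (1+α)(2+α)/(2(3+α)) = (1+α)/(2+α) reduces to (2+α)² = 2(3+α), i.e. α² + 2α − 2 = 0, whose positive root is √3 − 1. A quick numerical sanity check of this comparison (a sketch with illustrative helper names, not part of the paper):

```python
import math

def semi_supervised_exponent(a):
    # Exponent of N in the adaptive sampler's rate: (1+a)(2+a) / (2(3+a))
    return (1 + a) * (2 + a) / (2 * (3 + a))

def fully_supervised_exponent(a):
    # Exponent of N in the fully supervised rate: (1+a) / (2+a)
    return (1 + a) / (2 + a)

# The exponents coincide at the positive root of a^2 + 2a - 2 = 0,
# namely a = sqrt(3) - 1 ≈ 0.73.
crossover = math.sqrt(3) - 1
assert abs(semi_supervised_exponent(crossover)
           - fully_supervised_exponent(crossover)) < 1e-12

# For larger a, the semi-supervised exponent is strictly larger,
# so its rate N^{-exponent} decays faster.
assert semi_supervised_exponent(1.0) > fully_supervised_exponent(1.0)
assert semi_supervised_exponent(10.0) > fully_supervised_exponent(10.0)
```

A larger exponent on N means faster convergence to the Bayes risk, which is why the adaptive sampler wins asymptotically for all α above the crossover.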