More theorems about scale-sensitive dimensions and learning

Abstract

We present a new general-purpose algorithm for learning classes of [0, 1]-valued functions in a generalization of the prediction model, and prove a general upper bound on the expected absolute error of this algorithm in terms of a scale-sensitive generalization of the Vapnik dimension proposed by Alon, Ben-David, Cesa-Bianchi and Haussler. We give lower bounds implying that our upper bounds cannot be improved by more than a constant factor in general. We apply this result, together with techniques due to Haussler, to obtain new upper bounds on packing numbers in terms of this scale-sensitive notion of dimension. Using a different technique, we obtain new bounds on packing numbers in terms of Kearns and Schapire's fat-shattering function. We show how to apply both packing bounds to obtain improved general bounds on the sample complexity of agnostic learning. For each ε > 0, we establish weaker sufficient conditions and stronger necessary conditions for a class of [0, 1]-valued functions to be agnostically learnable to within ε, to be an ε-uniform Glivenko-Cantelli class, and to be agnostically learnable to within ε by an algorithm using only hypotheses from the class.
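For orientation, the fat-shattering function mentioned in the abstract is the scale-sensitive dimension introduced by Kearns and Schapire; the following is the standard definition, given here as a sketch for context rather than text from the paper itself. A sequence x_1, ..., x_d is γ-shattered by a class F of [0, 1]-valued functions if there are witness levels r_1, ..., r_d such that every above/below pattern relative to those levels, with margin γ, is realized by some function in F:

\[
\mathrm{fat}_F(\gamma) \;=\; \max\Bigl\{ d \;:\; \exists\, x_1,\dots,x_d,\ \exists\, r_1,\dots,r_d \in [0,1] \text{ such that } \forall b \in \{0,1\}^d\ \exists f \in F \text{ with } f(x_i) \ge r_i + \gamma \text{ if } b_i = 1,\ \text{and } f(x_i) \le r_i - \gamma \text{ if } b_i = 0 \Bigr\}.
\]

The packing-number and sample-complexity bounds described in the abstract are stated in terms of this quantity evaluated at scales related to the accuracy parameter ε.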

Citation (APA)

Bartlett, P. L., & Long, P. M. (1995). More theorems about scale-sensitive dimensions and learning. In Proceedings of the 8th Annual Conference on Computational Learning Theory, COLT 1995 (Vol. 1995-January, pp. 392–401). Association for Computing Machinery. https://doi.org/10.1145/225298.225346
