Metric entropy and minimax risk in classification

  • Haussler D
  • Opper M
Abstract

We apply recent results on the minimax risk in density estimation to the related problem of pattern classification. The notion of loss we seek to minimize is an information theoretic measure of how well we can predict the classification of future examples, given the classification of previously seen examples. We give an asymptotic characterization of the minimax risk in terms of the metric entropy properties of the class of distributions that might be generating the examples. We then use these results to characterize the minimax risk in the special case of noisy two-valued classification problems in terms of the Assouad density and the Vapnik-Chervonenkis dimension.
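The abstract characterizes the minimax risk of two-valued classification via the Vapnik-Chervonenkis dimension. As a point of reference for that notion (not taken from the paper itself), here is a hypothetical brute-force sketch in Python that computes the VC dimension of a finite hypothesis class over a finite domain, illustrated on one-sided threshold classifiers on the line; the function and variable names are our own illustrative choices.

```python
from itertools import combinations

def shatters(hypotheses, points):
    """True if the class realizes every one of the 2^n labelings of `points`."""
    realized = {tuple(h(x) for x in points) for h in hypotheses}
    return len(realized) == 2 ** len(points)

def vc_dimension(hypotheses, domain):
    """Largest size of a subset of `domain` shattered by `hypotheses` (brute force)."""
    d = 0
    for k in range(1, len(domain) + 1):
        if any(shatters(hypotheses, s) for s in combinations(domain, k)):
            d = k
        else:
            break  # if no k-subset is shattered, no larger subset is either
    return d

# One-sided thresholds h_t(x) = 1 iff x >= t: any single point can be labeled
# both ways, but no pair can (labeling the smaller point 1 forces the larger to 1).
domain = [0.0, 1.0, 2.0, 3.0]
hypotheses = [lambda x, t=t: int(x >= t) for t in (-0.5, 0.5, 1.5, 2.5, 3.5)]
print(vc_dimension(hypotheses, domain))  # -> 1
```

Classes of finite VC dimension like this one are exactly those for which the paper's style of minimax-risk bounds applies; richer classes (e.g. intervals, with VC dimension 2) can be checked with the same routine.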

Citation (APA)

Haussler, D., & Opper, M. (1997). Metric entropy and minimax risk in classification (pp. 212–235). https://doi.org/10.1007/3-540-63246-8_13
