We apply recent results on the minimax risk in density estimation to the related problem of pattern classification. The notion of loss we seek to minimize is an information-theoretic measure of how well we can predict the classification of future examples, given the classification of previously seen examples. We give an asymptotic characterization of the minimax risk in terms of the metric entropy properties of the class of distributions that might be generating the examples. We then use these results to characterize the minimax risk in the special case of noisy two-valued classification problems in terms of the Assouad density and the Vapnik-Chervonenkis dimension.
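The Vapnik-Chervonenkis dimension mentioned above is the size of the largest point set that a concept class can "shatter", i.e. label in every possible way. As a hedged illustration (not from the paper itself), the following sketch checks shattering for the hypothetical family of one-dimensional threshold classifiers h_t(x) = 1[x >= t]; the grid of thresholds and the test points are illustrative choices, not anything defined in the source.

```python
from itertools import product

def shatters(points, classifiers):
    """Return True if the classifier family realizes every
    binary labeling of `points` (i.e. shatters the set)."""
    achievable = {tuple(c(x) for x in points) for c in classifiers}
    return len(achievable) == 2 ** len(points)

# Illustrative family: threshold classifiers h_t(x) = 1 if x >= t,
# sampled over a grid of thresholds covering the test points.
thresholds = [t / 10 for t in range(-10, 21)]
classifiers = [lambda x, t=t: int(x >= t) for t in thresholds]

print(shatters([0.5], classifiers))        # True: both labels of one point are realizable
print(shatters([0.3, 0.7], classifiers))   # False: the labeling (1, 0) is impossible
```

Since any single point is shattered but no two-point set is (thresholds are monotone, so a left point cannot be labeled 1 while a right point is labeled 0), the VC dimension of this family is 1.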
Citation
Haussler, D., & Opper, M. (1997). Metric entropy and minimax risk in classification (pp. 212–235). https://doi.org/10.1007/3-540-63246-8_13