Probably Almost Bayes Decisions

Abstract

In this paper, we investigate the problem of classifying objects which are given by feature vectors with Boolean entries. Our aim is to "(efficiently) learn probably almost optimal classifications" from examples. A classical approach in pattern recognition uses empirical estimations of the Bayesian discriminant functions for this purpose. We analyze this approach for different classes of distribution functions of Boolean features: kth-order Bahadur-Lazarsfeld expansions and kth-order Chow expansions. In both cases, we obtain upper bounds for the required sample size which are small polynomials in the relevant parameters and which match the lower bounds known for these classes. Moreover, the learning algorithms are efficient. © 1996 Academic Press, Inc.
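To illustrate the classical plug-in approach the abstract refers to, here is a minimal sketch for the simplest case: a first-order expansion, where the Boolean features are treated as independent given the class (the k = 1 case; higher-order Bahadur-Lazarsfeld and Chow expansions add correlation terms not shown here). The function names, the Laplace smoothing, and the data layout are illustrative choices, not taken from the paper.

```python
import math
from collections import defaultdict

def train(samples):
    """Estimate class priors and per-feature conditional probabilities
    empirically from (x, y) pairs, where x is a tuple of 0/1 features.

    Laplace smoothing (add-one) keeps estimates away from 0 and 1, so
    the log-discriminant below is always finite.
    """
    counts = defaultdict(int)          # examples seen per class
    feat_sums = {}                     # per-class sums of each feature
    n = len(samples)
    d = len(samples[0][0])
    for x, y in samples:
        counts[y] += 1
        if y not in feat_sums:
            feat_sums[y] = [0] * d
        for i, xi in enumerate(x):
            feat_sums[y][i] += xi
    priors = {y: c / n for y, c in counts.items()}
    probs = {y: [(s + 1) / (counts[y] + 2) for s in feat_sums[y]]
             for y in counts}
    return priors, probs

def classify(x, priors, probs):
    """Return the class maximizing the empirical log-discriminant
    log P(y) + sum_i log P(x_i | y) under the independence assumption."""
    best_label, best_score = None, float("-inf")
    for y, prior in priors.items():
        score = math.log(prior)
        for xi, p in zip(x, probs[y]):
            score += math.log(p if xi else 1.0 - p)
        if score > best_score:
            best_label, best_score = y, score
    return best_label
```

The paper's contribution concerns how many samples such empirical estimates need before the resulting decision rule is, with high probability, almost as good as the true Bayes rule; the sketch above only shows the estimation and decision steps themselves.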

Citation (APA)
Anoulova, S., Fischer, P., Pölt, S., & Simon, H. U. (1996). Probably Almost Bayes Decisions. Information and Computation, 129(1), 63–71. https://doi.org/10.1006/inco.1996.0074
