In this paper, we investigate the problem of classifying objects given by feature vectors with Boolean entries. Our aim is to "(efficiently) learn probably almost optimal classifications" from examples. A classical approach in pattern recognition uses empirical estimates of the Bayesian discriminant functions for this purpose. We analyze this approach for two classes of distribution functions of Boolean features: kth-order Bahadur–Lazarsfeld expansions and kth-order Chow expansions. In both cases, we obtain upper bounds on the required sample size which are small polynomials in the relevant parameters and which match the known lower bounds for these classes. Moreover, the learning algorithms are efficient. © 1996 Academic Press, Inc.
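The approach analyzed in the paper is the plug-in rule: estimate the class priors and class-conditional distributions from a labeled sample, then classify with the resulting empirical discriminant. As a rough illustration only, and not the authors' algorithm (which covers general kth-order expansions), the sketch below implements the plug-in Bayes discriminant for the simplest first-order case, where the Bahadur–Lazarsfeld expansion reduces to conditional independence of the features given the class. The function names, the two-class setup, and the Laplace-smoothing constant are our own assumptions.

```python
import numpy as np

def fit_first_order(X, y, smoothing=1.0):
    """Estimate class priors and per-feature Bernoulli parameters.

    X: (n, d) array with 0/1 entries, y: (n,) array with labels 0/1.
    Laplace smoothing keeps every estimated probability strictly
    inside (0, 1), so the logarithms below are always defined.
    """
    params = {}
    n = len(X)
    for c in (0, 1):
        Xc = X[y == c]
        prior = (len(Xc) + smoothing) / (n + 2 * smoothing)
        p = (Xc.sum(axis=0) + smoothing) / (len(Xc) + 2 * smoothing)
        params[c] = (prior, p)
    return params

def empirical_discriminant(params, x):
    """Plug-in estimate of the Bayes discriminant log P(1|x) - log P(0|x).

    Under the first-order (independence) expansion, the class-conditional
    likelihood factorizes over the d Boolean features; classify x as
    class 1 iff the returned value is positive.
    """
    scores = {}
    for c in (0, 1):
        prior, p = params[c]
        scores[c] = np.log(prior) + np.sum(x * np.log(p) + (1 - x) * np.log1p(-p))
    return scores[1] - scores[0]

# Toy usage: draw a synthetic sample, fit, and classify a fresh vector.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=500)
X = (rng.random((500, 8)) < np.where(y[:, None] == 1, 0.7, 0.3)).astype(int)
params = fit_first_order(X, y)
x_new = np.ones(8, dtype=int)
print("classify as:", int(empirical_discriminant(params, x_new) > 0))
```

The paper's sample-size bounds concern how many examples such plug-in estimates need before the induced classifier is probably almost as good as the true Bayes classifier.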
Anoulova, S., Fischer, P., Pölt, S., & Simon, H. U. (1996). Probably Almost Bayes Decisions. Information and Computation, 129(1), 63–71. https://doi.org/10.1006/inco.1996.0074