Abstract
In this paper, we investigate the problem of classifying objects that are given by feature vectors with Boolean or real entries. Our aim is to "(efficiently) learn probably almost optimal classifications" from examples. A classical approach in pattern recognition uses empirical estimations of the Bayesian discriminant functions for this purpose. We analyze this approach for different classes of distribution functions: in the Boolean case we look at the k-th order Bahadur-Lazarsfeld expansions and k-th order Chow expansions, and in the continuous case at the class of normal distributions. In all cases, we obtain polynomial upper bounds for the required sample size. The bounds for the Boolean case improve and extend results from [FPS91].
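The "plug-in" idea behind empirically estimated Bayesian discriminant functions for the normal-distribution case can be sketched as follows. This is a minimal illustration, not the paper's algorithm or its analysis; the example distributions, sample sizes, and function names are assumptions made for the sketch.

```python
import numpy as np

# Illustrative sketch of a plug-in Bayes classifier for two Gaussian
# classes: estimate each class's mean and covariance from samples,
# then classify by comparing the empirical discriminant functions.
# All distributions and parameters below are hypothetical.

rng = np.random.default_rng(0)

# Two hypothetical Gaussian classes in R^2, well separated.
n = 500
X0 = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(n, 2))
X1 = rng.normal(loc=[3.0, 3.0], scale=1.0, size=(n, 2))

def fit_gaussian(X):
    """Empirical mean and covariance (the plug-in estimates)."""
    return X.mean(axis=0), np.cov(X, rowvar=False)

def log_discriminant(x, mean, cov, prior):
    """Log of prior * Gaussian density, up to a shared additive constant."""
    diff = x - mean
    return (np.log(prior)
            - 0.5 * np.log(np.linalg.det(cov))
            - 0.5 * diff @ np.linalg.inv(cov) @ diff)

m0, c0 = fit_gaussian(X0)
m1, c1 = fit_gaussian(X1)

def classify(x):
    # Choose the class whose estimated discriminant is larger
    # (equal priors assumed here).
    return int(log_discriminant(x, m1, c1, 0.5)
               > log_discriminant(x, m0, c0, 0.5))

X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)
accuracy = np.mean(np.array([classify(x) for x in X]) == y)
```

The paper's question, in these terms, is how large the sample size must be so that the classifier built from the estimated discriminants is, with high probability, almost as good as the true Bayes classifier.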
Anoulova, S., Fischer, P., Poelt, S., & Simon, H. U. (1992). PAB-decisions for boolean and real-valued features. In Proceedings of the Fifth Annual ACM Workshop on Computational Learning Theory (pp. 353–362). ACM. https://doi.org/10.1145/130385.130425