Bayes and Tukey meet at the center point

Abstract

The Bayes classifier achieves the minimal error rate by constructing a weighted majority over all concepts in the concept class. The Bayes Point [1] instead uses the single concept in the class that has minimal error, and in this way avoids some of the deficiencies of the Bayes classifier. We prove a bound on the generalization error of Bayes Point Machines when learning linear classifiers, and show that it is at most ∼ 1.71 times the generalization error of the Bayes classifier, independent of the input dimension and the length of the training sequence. We show that, when learning linear classifiers, the Bayes Point is almost identical to the Tukey Median [2] and the Center Point [3]. We extend these definitions beyond linear classifiers and define the Bayes Depth of a classifier, and we prove a generalization bound in terms of this new definition. Finally, we provide a new concentration-of-measure inequality for multivariate random variables around the Tukey Median.
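Since the abstract relates the Bayes Point to the Tukey Median and Center Point, a small numerical illustration of halfspace (Tukey) depth may be helpful. The sketch below is not code from the paper: it is a brute-force 2-D approximation of the classical definition (the depth of a point is the smallest fraction of sample points contained in any closed halfspace whose boundary passes through it), and the function names and the direction-sampling grid are illustrative choices, not anything the authors specify. For context, the centerpoint theorem guarantees that in d dimensions some point of depth at least 1/(d+1) always exists.

import numpy as np

def tukey_depth(x, data, n_directions=360):
    # Approximate halfspace (Tukey) depth of point x w.r.t. a 2-D sample:
    # the smallest fraction of sample points lying in any closed halfspace
    # whose boundary hyperplane passes through x.  The infimum over all
    # directions is approximated on a grid of n_directions angles.
    angles = np.linspace(0.0, np.pi, n_directions, endpoint=False)
    normals = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    proj = (data - x) @ normals.T          # signed offsets along each normal
    above = (proj >= 0).sum(axis=0)        # points in the positive halfspace
    below = (proj <= 0).sum(axis=0)        # points in the negative halfspace
    return np.minimum(above, below).min() / len(data)

def tukey_median(data, n_directions=360):
    # Crude proxy for the Tukey Median: the sample point of maximal depth.
    depths = [tukey_depth(p, data, n_directions) for p in data]
    return data[int(np.argmax(depths))]

rng = np.random.default_rng(0)
sample = rng.normal(size=(200, 2))
print("depth of the origin:", tukey_depth(np.zeros(2), sample))
print("deepest sample point:", tukey_median(sample))

For a roughly symmetric sample like this one, the deepest point lands near the origin and its depth approaches the maximal value of 1/2; exact and faster algorithms exist, but the grid approximation above is enough to convey the definition.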

Citation (APA)

Gilad-Bachrach, R., Navot, A., & Tishby, N. (2004). Bayes and Tukey meet at the center point. In Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science) (Vol. 3120, pp. 549–563). Springer-Verlag. https://doi.org/10.1007/978-3-540-27819-1_38
