Linear classifiers are nearly optimal when hidden variables have diverse effects



Abstract

We analyze classification problems in which data is generated by a two-tiered random process: the class is generated first, then a layer of conditionally independent hidden variables, and finally the observed variables. For sources like this, the Bayes-optimal rule for predicting the class given the values of the observed variables is a two-layer neural network. We show that, if the hidden variables have non-negligible effects on many observed variables, a linear classifier approximates the error rate of the Bayes-optimal classifier up to lower-order terms. We also show that the hinge loss of a linear classifier is not much more than the Bayes error rate, which implies that an accurate linear classifier can be found efficiently. © The Author(s) 2011.
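As a rough illustration of the setting (a sketch under assumed parameters, not the paper's construction), the following Python snippet instantiates the two-tiered process in a simple concrete case: all variables are ±1 bits, each hidden variable is a noisy copy of the class, and each observed variable is a noisy copy of one hidden parent. The values of k, n, the flip probabilities, and the sample size are arbitrary illustrative choices. Each hidden variable influences n/k observed variables, a simple form of "diverse effects," and the classifier is trained by minimizing a regularized hinge loss, in the spirit of the paper's efficiency argument.

    # Minimal simulation of a two-tiered source: class -> hidden layer -> observed layer.
    # All parameters below are illustrative assumptions, not values from the paper.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    rng = np.random.default_rng(0)

    k, n = 10, 400        # hidden / observed variables; each hidden variable
                          # influences n // k observed ones ("diverse effects")
    p_h, p_x = 0.1, 0.2   # flip probabilities for the hidden and observed layers
    m = 5000              # number of examples

    def sample(m):
        y = rng.choice([-1, 1], size=m)                     # class label
        flips_h = rng.random((m, k)) < p_h
        h = np.where(flips_h, -y[:, None], y[:, None])      # hidden layer: noisy copies of y
        parent = np.repeat(np.arange(k), n // k)            # each observed x_i has one hidden parent
        flips_x = rng.random((m, n)) < p_x
        x = np.where(flips_x, -h[:, parent], h[:, parent])  # observed layer: noisy copies of h
        return x.astype(float), y

    X_train, y_train = sample(m)
    X_test, y_test = sample(m)

    # Linear classifier trained by minimizing regularized hinge loss.
    clf = SGDClassifier(loss="hinge", alpha=1e-4, max_iter=2000, random_state=0)
    clf.fit(X_train, y_train)
    print("test accuracy of linear classifier:", clf.score(X_test, y_test))

Because each hidden variable is reflected in many observed variables, a linear rule can average the noisy copies, which is why it comes close to the Bayes-optimal two-layer predictor in this regime.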

Citation (APA)

Bshouty, N. H., & Long, P. M. (2012). Linear classifiers are nearly optimal when hidden variables have diverse effects. Machine Learning, 86(2), 209–231. https://doi.org/10.1007/s10994-011-5262-7
