We consider the problem of attribute-efficient learning in query and mistake-bound models. Attribute-efficient algorithms make a number of queries or mistakes that is polynomial in the number of relevant variables in the target function, but only sublinear in the number of irrelevant variables. We consider a variant of the membership query model in which the learning algorithm is given as input the number of relevant variables of the target function. We show that in this model, any projection- and embedding-closed class of functions (including parity) that can be learned in polynomial time can be learned attribute-efficiently in polynomial time. We show that this does not hold in the randomized membership query model. In the mistake-bound model, we consider the problem of learning attribute-efficiently using hypotheses that are formulas of small depth. Our results extend the work of A. Blum, L. Hellerstein, and N. Littlestone (J. Comput. System Sci. 50 (1995), 32-40) and N. Bshouty, R. Cleve, S. Kannan, and C. Tamon (in "Proceedings, 7th Annu. ACM Workshop on Comput. Learning Theory," pp. 130-139, ACM Press, New York, 1994). © 1998 Academic Press.
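To make the notion of attribute-efficiency concrete for the parity example mentioned above, the following is a minimal, illustrative sketch (not the paper's construction, which is a deterministic, generic reduction for any projection- and embedding-closed class): a folklore-style randomized membership-query learner for a parity function over n variables with k relevant variables, where k is given as input. It makes O(k log n) queries in expectation, i.e., polynomial in the number of relevant variables but only logarithmic in n. All names (`find_relevant_parity`, the query callable `f`) are illustrative assumptions.

```python
# Illustrative sketch, assuming the target is a parity function with exactly k
# relevant variables and that a membership-query oracle `f` is available.
# This is NOT the paper's algorithm; it only illustrates attribute-efficiency.
import random


def parity_test(f, f_zero, subset, n):
    """One membership query: is |subset ∩ relevant| odd?

    For a parity target f(x) = c XOR (XOR of x_i over relevant i), comparing
    f on the characteristic vector of `subset` with the cached value f(0^n)
    reveals exactly the parity of |subset ∩ relevant|.
    """
    x = [0] * n
    for i in subset:
        x[i] = 1
    return f(x) ^ f_zero


def find_relevant_parity(f, n, k, rng=random):
    """Return the k relevant variables of a parity target f over n variables.

    k is given as input, as in the model described in the abstract.
    Uses O(k log n) membership queries in expectation.
    """
    f_zero = f([0] * n)                      # cache f(0^n) once
    found = set()
    candidates = list(range(n))
    while len(found) < k:
        # A uniformly random subset of the remaining candidates intersects the
        # remaining relevant variables in an odd number of positions with
        # probability 1/2, so a constant number of tries suffices on average.
        subset = [i for i in candidates if rng.random() < 0.5]
        if not subset or parity_test(f, f_zero, subset, n) == 0:
            continue
        # Binary search: a set with odd parity splits into two halves, exactly
        # one of which again has odd parity; recurse down to a single variable.
        while len(subset) > 1:
            half = subset[: len(subset) // 2]
            if parity_test(f, f_zero, half, n) == 1:
                subset = half
            else:
                subset = subset[len(subset) // 2:]
        found.add(subset[0])
        candidates.remove(subset[0])
    return sorted(found)


if __name__ == "__main__":
    # Hypothetical target: parity of variables {3, 17, 41} among n = 1000.
    relevant = {3, 17, 41}
    target = lambda x: sum(x[i] for i in relevant) % 2
    print(find_relevant_parity(target, n=1000, k=3))   # -> [3, 17, 41]
```

Note that this sketch uses randomization; one point of the paper is that the deterministic and randomized membership-query settings behave differently with respect to attribute-efficient learnability.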
CITATION STYLE
Bshouty, N., & Hellerstein, L. (1998). Attribute-Efficient Learning in Query and Mistake-Bound Models. Journal of Computer and System Sciences, 56(3), 310–319. https://doi.org/10.1006/jcss.1998.1571