The representation of recursive languages and its impact on the efficiency of learning


Abstract

In the present paper we study the learnability of enumerable families L of uniformly recursive languages as a function of the number of allowed mind changes, i.e., with respect to a well-studied measure of efficiency. We distinguish between exact learning (L has to be learnt w.r.t. the hypothesis space L itself), class preserving learning (L has to be inferred w.r.t. some hypothesis space G having the same range as L), and class comprising inference (L has to be inferred w.r.t. some hypothesis space G whose range includes range(L)), as well as between learning from positive examples only and learning from both positive and negative examples. The measure of efficiency is applied to prove the superiority of class comprising learning algorithms over class preserving ones, which in turn prove superior to exact learning algorithms. In particular, we considerably improve previously obtained results and show that a suitable choice of the hypothesis space may yield a considerable speed-up of learning algorithms, even if only positive examples are presented instead of both positive and negative data. Furthermore, we completely separate all modes of learning with a bounded number of mind changes from class preserving learning that avoids overgeneralization.
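For readers unfamiliar with the Gold-style framework the abstract presupposes, the following is a minimal sketch of the standard definitions of learning in the limit with a bounded number of mind changes and of the three hypothesis-space modes; the notation (M, t, j_n, k) is illustrative and not taken from the paper itself.

% Sketch of the standard definitions, under the usual Gold-style conventions.
Let $\mathcal{L} = (L_j)_{j \in \mathbb{N}}$ be an enumerable family of uniformly
recursive languages and $\mathcal{G} = (G_j)_{j \in \mathbb{N}}$ a hypothesis space.
A learner $M$ receives a text $t = w_0, w_1, w_2, \ldots$ (an enumeration of all
and only the elements of some $L \in \mathcal{L}$) and outputs a hypothesis
$j_n = M(w_0, \ldots, w_n)$ after each example.

$M$ learns $L$ in the limit w.r.t.\ $\mathcal{G}$ iff the sequence
$(j_n)_{n \in \mathbb{N}}$ converges to some $j$ with $G_j = L$. A \emph{mind change}
occurs at step $n+1$ iff $j_{n+1} \neq j_n$; $M$ learns $\mathcal{L}$ with at most
$k$ mind changes iff, on every text for every $L \in \mathcal{L}$, it converges
after at most $k$ such changes. The three modes compared in the paper differ only
in the admissible hypothesis space:
\begin{itemize}
  \item exact: $\mathcal{G} = \mathcal{L}$,
  \item class preserving: $\mathrm{range}(\mathcal{G}) = \mathrm{range}(\mathcal{L})$,
  \item class comprising: $\mathrm{range}(\mathcal{G}) \supseteq \mathrm{range}(\mathcal{L})$.
\end{itemize}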

Cite

APA

Lange, S. (1994). The representation of recursive languages and its impact on the efficiency of learning. In Proceedings of the Annual ACM Conference on Computational Learning Theory (COLT '94) (pp. 256–267). Association for Computing Machinery. https://doi.org/10.1145/180139.181139
