We study the learnability of enumerable families L of uniformly recursive languages depending on the number of allowed mind changes, i.e., with respect to a well-studied measure of efficiency. We distinguish between exact learnability (L has to be inferred w.r.t. L itself) and class-preserving learning (L has to be inferred w.r.t. some suitably chosen enumeration of all the languages in L), as well as between learning from positive data and from both positive and negative data. The measure of efficiency is applied to prove the superiority of class-preserving learning algorithms over exact learning. We considerably improve previously obtained results and establish two infinite hierarchies. Furthermore, we separate exact and class-preserving learning from positive data that avoids overgeneralization. Finally, language learning with a bounded number of mind changes is completely characterized in terms of recursively generable finite sets. These characterizations offer a new method for handling overgeneralizations and resolve an open question of Mukouchi (1992).
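The central notion above, a learner that revises its hypothesis only a bounded number of times, can be illustrated with a small sketch. The toy family and all names below are our own illustration, not constructions from the paper: we take the indexed family L_i = {multiples of i} for i = 1..5, and a learner from positive data that always outputs the most specific consistent hypothesis while counting its mind changes.

```python
def in_language(i, x):
    # Membership is uniformly decidable: x belongs to L_i iff i divides x.
    return x % i == 0

def learn(text, max_index=5):
    """Process a text (a finite sequence of positive examples) and return
    the final hypothesis index together with the number of mind changes."""
    seen = []
    hypothesis = None
    mind_changes = 0
    for x in text:
        seen.append(x)
        # Most specific consistent hypothesis: the largest index whose
        # language contains every example seen so far (a simple way to
        # avoid overgeneralization in this toy family).
        guess = max(i for i in range(1, max_index + 1)
                    if all(in_language(i, y) for y in seen))
        if hypothesis is not None and guess != hypothesis:
            mind_changes += 1  # the learner abandons its previous guess
        hypothesis = guess
    return hypothesis, mind_changes

# On the text 6, 4 the learner first guesses L_3, then revises to L_2:
# one mind change in total.
print(learn([6, 4]))
```

A learner that respects a mind-change bound k simply refuses to output a new hypothesis once `mind_changes` would exceed k; the hierarchies in the paper concern which families remain learnable as k varies.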
Lange, S., & Zeugmann, T. (1993). Language learning with a bounded number of mind changes. In Lecture Notes in Computer Science (Vol. 665, pp. 682–691). Springer. https://doi.org/10.1007/3-540-56503-5_67