Learnability of co-r.e. classes

Abstract

The object of investigation in this paper is the learnability of co-recursively enumerable (co-r.e.) languages based on Gold's [11] original model of inductive inference. In particular, the following learning models are studied: finite learning, explanatory learning, vacillatory learning and behaviourally correct learning. The relative effects of imposing further learning constraints, such as conservativeness and prudence, on these various learning models are also investigated. Moreover, an extension of Angluin's [1] characterisation of identifiable indexed families of recursive languages to conservatively learnable co-r.e. classes is presented. In this connection, the paper considers the learnability of indexed families of recursive languages, uniformly co-r.e. classes, as well as other general classes of co-r.e. languages. A containment hierarchy of co-r.e. learning models is thereby established; while this hierarchy is quite similar to its r.e. analogue, there are some surprising collapses when a co-r.e. hypothesis space is used; for example, vacillatory learning collapses to explanatory learning. © 2012 Springer-Verlag.
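As background (not part of the abstract itself), the explanatory and vacillatory criteria it mentions can be sketched in the standard Gold-style formulation, adapted here to a co-r.e. hypothesis space; the notation $W_e$ for the $e$-th r.e. set and texts $T$ for a language $L$ follows the usual inductive-inference conventions:

$$
\begin{aligned}
&\text{A learner } M \text{ Ex-learns } L \text{ iff for every text } T \text{ for } L \text{ there exist } n_0, e \text{ with}\\
&\qquad M(T[n]) = e \ \text{ for all } n \ge n_0 \ \text{ and } \ L = \mathbb{N} \setminus W_e;\\[4pt]
&\text{A learner } M \text{ vacillatorily learns } L \text{ iff on every text for } L, \ M \text{ outputs}\\
&\qquad \text{only finitely many distinct hypotheses, cofinitely many of which are}\\
&\qquad \text{correct co-r.e. indices for } L.
\end{aligned}
$$

Here $T[n]$ denotes the first $n$ elements of the text, and a "co-r.e. index" $e$ denotes the complement $\mathbb{N} \setminus W_e$; the paper's collapse result says that, with such a hypothesis space, the second criterion is no more powerful than the first.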

Citation (APA)
Gao, Z., & Stephan, F. (2012). Learnability of co-r.e. classes. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7183 LNCS, pp. 252–263). https://doi.org/10.1007/978-3-642-28332-1_22
