Merging uniform inductive learners

Abstract

The fundamental learning model considered here is identification of recursive functions in the limit, as introduced by Gold [8], but the concept is investigated on a meta-level. A set of classes of recursive functions is uniformly learnable under an inference criterion I if there is a single learner that synthesizes, from a corresponding description of each class, a learner for that class. The particular question discussed here is to what extent unions of uniformly learnable sets of such classes can still be identified uniformly. In particular, unions of classes that lead to strong separations of inference criteria in the uniform model are considered. The main result is that for any pair (I, I′) of distinct inference criteria considered here, there exists a fixed set of descriptions of learning problems from I such that its union with any uniformly I-learnable collection is uniformly I′-learnable, but no longer uniformly I-learnable.
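To make the definition above concrete, the following is a hedged formalization of uniform learnability in standard inductive-inference notation; the symbols D, d, C_d and S are chosen for illustration and are not taken from the article itself. A collection of learning problems is given by a set D of descriptions, each d in D describing a class C_d of recursive functions, and D is uniformly I-learnable if a single computable strategy S exists with

\[
\forall d \in D:\quad S(d)\ \text{is an}\ I\text{-learner for}\ C_d,
\]

that is, the synthesized learner S(d) identifies every function of C_d under the criterion I (for example, I = Ex, Gold-style identification in the limit). In this notation the main result reads: for each pair (I, I′) of distinct criteria there is a fixed description set D_0 of I-learnable classes such that, for every uniformly I-learnable D, the union D ∪ D_0 is uniformly I′-learnable but no longer uniformly I-learnable.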

Cite

Zilles, S. (2002). Merging uniform inductive learners. In Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science) (Vol. 2375, pp. 201–216). Springer-Verlag. https://doi.org/10.1007/3-540-45435-7_14
