The task rehearsal method of life-long learning: Overcoming impoverished data


Abstract

The task rehearsal method (TRM) is introduced as an approach to life-long learning that uses the representation of previously learned tasks as a source of inductive bias. This inductive bias enables TRM to generate more accurate hypotheses for new tasks that have small sets of training examples. TRM has a knowledge retention phase during which the neural network representation of a successfully learned task is stored in a domain knowledge database, and a knowledge recall and learning phase during which virtual examples of stored tasks are generated from the domain knowledge. The virtual examples are rehearsed as secondary tasks in parallel with the learning of a new (primary) task using the ηMTL neural network algorithm, a variant of multiple task learning (MTL). The results of experiments on three domains show that TRM is effective in retaining task knowledge in a representational form and transferring that knowledge in the form of virtual examples. TRM with ηMTL is shown to develop more accurate hypotheses for tasks that suffer from impoverished training sets.
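As a rough illustration of the mechanism described above, the sketch below (not the authors' implementation) shows how virtual examples might be generated from a stored single-task network and then rehearsed as secondary tasks alongside a primary task in a shared-hidden-layer MTL network. The use of PyTorch, the function names, the random input vectors used for virtual examples, and the single constant `secondary_weight` (standing in for the paper's per-task η weighting in ηMTL) are all assumptions made for illustration.

```python
# Minimal sketch of task rehearsal with a shared-hidden-layer MTL network.
# Assumes PyTorch; names and constants are illustrative, not from the paper.
import torch
import torch.nn as nn

class MTLNet(nn.Module):
    """One shared hidden layer with a separate output head per task."""
    def __init__(self, n_inputs, n_hidden, n_tasks):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(n_inputs, n_hidden), nn.Sigmoid())
        self.heads = nn.ModuleList([nn.Linear(n_hidden, 1) for _ in range(n_tasks)])

    def forward(self, x):
        h = self.shared(x)
        return [torch.sigmoid(head(h)) for head in self.heads]

def generate_virtual_examples(stored_net, n_examples, n_inputs):
    """Knowledge recall: label synthetic inputs with a previously stored
    single-output network retained from a successfully learned task."""
    x_virtual = torch.rand(n_examples, n_inputs)      # synthetic input vectors (assumption)
    with torch.no_grad():
        y_virtual = stored_net(x_virtual)             # stored task's outputs become targets
    return x_virtual, y_virtual

def train_with_rehearsal(primary_x, primary_y, stored_nets, n_hidden=20,
                         secondary_weight=0.5, epochs=500, lr=0.1):
    """Learn the primary task (head 0) while rehearsing virtual examples of
    stored tasks on the remaining heads; `secondary_weight` is a single
    constant standing in for a per-task importance factor."""
    n_inputs = primary_x.shape[1]
    net = MTLNet(n_inputs, n_hidden, 1 + len(stored_nets))
    opt = torch.optim.SGD(net.parameters(), lr=lr)
    loss_fn = nn.MSELoss()

    virtual = [generate_virtual_examples(s, len(primary_x), n_inputs) for s in stored_nets]

    for _ in range(epochs):
        opt.zero_grad()
        outputs = net(primary_x)
        loss = loss_fn(outputs[0], primary_y)          # primary task on real examples
        for k, (xv, yv) in enumerate(virtual, start=1):
            loss = loss + secondary_weight * loss_fn(net(xv)[k], yv)  # rehearsal
        loss.backward()
        opt.step()
    return net

# Example usage (illustrative): rehearse one stored task while learning a new one.
stored = nn.Sequential(nn.Linear(4, 8), nn.Sigmoid(), nn.Linear(8, 1), nn.Sigmoid())
x, y = torch.rand(10, 4), torch.rand(10, 1)
model = train_with_rehearsal(x, y, [stored])
```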

Citation (APA)

Silver, D. L., & Mercer, R. E. (2002). The task rehearsal method of life-long learning: Overcoming impoverished data. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2338, pp. 90–101). Springer Verlag. https://doi.org/10.1007/3-540-47922-8_8
