Enhancing Lifelong Language Learning by Improving Pseudo-Sample Generation

Abstract

To achieve lifelong language learning, pseudo-rehearsal methods leverage samples generated by a language model to refresh the knowledge of previously learned tasks. Without proper controls, however, these methods can fail to retain the knowledge of complex tasks with longer texts, because most of the generated samples are low in quality. To overcome this problem, we make three specific contributions. First, we use two language models, each specializing in a specific part of the input, to produce high-quality pseudo samples. Second, we reduce the number of trainable parameters by applying adapter modules, which improves training efficiency. Third, we further improve the overall quality of the pseudo samples using temporal ensembling and sample regeneration. The results show that our framework achieves significant improvements over baselines on multiple task sequences. In addition, our pseudo-sample analysis reveals helpful insights for designing even better pseudo-rehearsal methods in the future.
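
To make two of the abstract's components concrete, below is a minimal PyTorch sketch of (a) a bottleneck adapter module, the standard way adapter-based methods cut the number of trainable parameters, and (b) a temporal-ensembling step in the Laine & Aila style, i.e., an exponential moving average of per-sample model outputs across epochs. This is an illustrative sketch under assumed settings, not the authors' implementation: `hidden_dim`, `bottleneck_dim`, and `alpha` are placeholder values, and the paper's exact temporal-ensembling and regeneration procedure may differ.

```python
import torch
import torch.nn as nn


class BottleneckAdapter(nn.Module):
    """Small trainable bottleneck inserted into a frozen transformer layer.

    During fine-tuning, only adapter parameters are updated while the
    backbone language model stays frozen, which is how adapter-based
    methods reduce the number of trainable parameters.
    """

    def __init__(self, hidden_dim: int = 768, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)  # down-projection
        self.act = nn.ReLU()
        self.up = nn.Linear(bottleneck_dim, hidden_dim)    # up-projection

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Residual connection: the adapter learns a small correction on top
        # of the frozen layer's output.
        return hidden_states + self.up(self.act(self.down(hidden_states)))


def temporal_ensemble(ensemble: torch.Tensor,
                      current: torch.Tensor,
                      alpha: float = 0.6) -> torch.Tensor:
    """One temporal-ensembling update: exponentially average the model's
    per-sample outputs over training epochs. Thresholding the ensembled
    scores is one way to flag low-quality pseudo samples for regeneration.
    """
    return alpha * ensemble + (1 - alpha) * current
```

As a usage sketch, one could call `ensemble = temporal_ensemble(ensemble, logits.detach())` once per epoch for each pseudo sample and discard or regenerate samples whose ensembled score falls below a chosen threshold.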

Citation (APA)

Kanwatchara, K., Horsuwan, T., Lertvittayakumjorn, P., Kijsirikul, B., & Vateekul, P. (2022). Enhancing Lifelong Language Learning by Improving Pseudo-Sample Generation. Computational Linguistics, 48(4), 819–848. https://doi.org/10.1162/coli_a_00449
