ELLE: Efficient Lifelong Pre-training for Emerging Data

Abstract

Current pre-trained language models (PLMs) are typically trained on static data, ignoring that in real-world scenarios, streaming data from various sources may continuously grow. This requires PLMs to integrate information from all the sources in a lifelong manner. Although this goal could be achieved by exhaustive pre-training on all the existing data, such a process is known to be computationally expensive. To this end, we propose ELLE, aiming at efficient lifelong pre-training for emerging data. Specifically, ELLE consists of (1) function preserved model expansion, which flexibly expands an existing PLM's width and depth to improve the efficiency of knowledge acquisition; and (2) pre-trained domain prompts, which disentangle the versatile knowledge learned during pre-training and stimulate the proper knowledge for downstream tasks. We experiment with ELLE on streaming data from 5 domains, using BERT and GPT as backbones. The results show the superiority of ELLE over various lifelong learning baselines in both pre-training efficiency and downstream performance. The code is publicly available at https://github.com/thunlp/ELLE.
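As a rough illustration of the "function preserved model expansion" idea, the sketch below widens the hidden dimension between two linear layers in a Net2Net-style fashion: existing units are duplicated and the following layer's weights are rescaled so the composed function is unchanged. This is an assumption-laden toy example (PyTorch, arbitrary layer sizes, random unit duplication), not the exact expansion procedure used in ELLE.

```python
# Illustrative sketch only: Net2Net-style function-preserving width expansion.
# Layer shapes and the duplication scheme are assumptions for demonstration,
# not ELLE's actual recipe.
import torch
import torch.nn as nn


def widen_linear_pair(fc1: nn.Linear, fc2: nn.Linear, new_width: int):
    """Expand the hidden dimension between fc1 and fc2 to new_width while
    preserving the composed mapping fc2(fc1(x))."""
    old_width = fc1.out_features
    assert new_width > old_width

    # Pick existing hidden units to duplicate.
    extra = torch.randint(0, old_width, (new_width - old_width,))
    mapping = torch.cat([torch.arange(old_width), extra])

    # New first layer: copy output units (rows) according to the mapping.
    new_fc1 = nn.Linear(fc1.in_features, new_width)
    new_fc1.weight.data = fc1.weight.data[mapping].clone()
    new_fc1.bias.data = fc1.bias.data[mapping].clone()

    # New second layer: copy input units (columns) and divide duplicates by
    # their replication count so their summed contribution stays the same.
    counts = torch.bincount(mapping, minlength=old_width).float()
    new_fc2 = nn.Linear(new_width, fc2.out_features)
    new_fc2.weight.data = fc2.weight.data[:, mapping] / counts[mapping]
    new_fc2.bias.data = fc2.bias.data.clone()
    return new_fc1, new_fc2


if __name__ == "__main__":
    fc1, fc2 = nn.Linear(16, 32), nn.Linear(32, 8)
    x = torch.randn(4, 16)
    before = fc2(fc1(x))
    wide_fc1, wide_fc2 = widen_linear_pair(fc1, fc2, new_width=48)
    after = wide_fc2(wide_fc1(x))
    print(torch.allclose(before, after, atol=1e-5))  # True: function preserved
```

The same duplicate-and-rescale principle extends to depth growth (e.g., inserting layers initialized near identity), which is what lets an expanded PLM start from the knowledge already acquired instead of training from scratch.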

Cite

APA

Qin, Y., Zhang, J., Lin, Y., Liu, Z., Li, P., Sun, M., & Zhou, J. (2022). ELLE: Efficient Lifelong Pre-training for Emerging Data. In Findings of the Association for Computational Linguistics: ACL 2022 (pp. 2789–2810). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.findings-acl.220
