Knowledge Inheritance for Pre-trained Language Models

21 citations · 96 Mendeley readers

Abstract

Recent explorations of large-scale pre-trained language models (PLMs) have revealed the power of PLMs with huge numbers of parameters, setting off a wave of training ever-larger PLMs. However, training a large-scale PLM requires tremendous computational resources, which may be practically unaffordable. In addition, existing large-scale PLMs are mainly trained from scratch individually, ignoring the fact that many well-trained PLMs are already available. To this end, we explore the question of how existing PLMs can benefit the training of larger PLMs in the future. Specifically, we introduce a pre-training framework named “knowledge inheritance” (KI) and explore how knowledge distillation can serve as auxiliary supervision during pre-training to efficiently learn larger PLMs. Experimental results demonstrate the superiority of KI in training efficiency. We also conduct empirical analyses to explore the effects of teacher PLMs' pre-training settings, including model architecture, pre-training data, etc. Finally, we show that KI can be applied to domain adaptation and knowledge transfer. The implementation is publicly available at https://github.com/thunlp/Knowledge-Inheritance.
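To make the KI idea more concrete, below is a minimal sketch (not the authors' released implementation; see the repository above for that) of how a distillation term from a smaller, already-trained teacher PLM could be mixed with the student's own masked-language-modeling loss as auxiliary supervision during pre-training. The weighting scheme, annealing coefficient `alpha`, and temperature are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def ki_loss(student_logits, teacher_logits, labels, alpha, temperature=2.0):
    """Combine the student's self-supervised (MLM) loss with a distillation
    term from a smaller, already-trained teacher PLM. `alpha` would be
    annealed from ~1.0 toward 0.0 during pre-training so the larger student
    gradually stops relying on the teacher (the schedule is an assumption)."""
    # Standard masked-language-modeling cross-entropy on the student.
    mlm_loss = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        labels.view(-1),
        ignore_index=-100,  # unmasked positions are ignored
    )
    # Soft-label distillation loss against the teacher's predictions.
    kd_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    return alpha * kd_loss + (1.0 - alpha) * mlm_loss
```

In a training loop, the teacher's logits would be computed under torch.no_grad(), and alpha decayed (e.g., linearly over steps) so that supervision shifts from the inherited teacher knowledge to the student's own pre-training objective.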

Citation (APA)

Qin, Y., Lin, Y., Yi, J., Zhang, J., Han, X., Zhang, Z., … Zhou, J. (2022). Knowledge Inheritance for Pre-trained Language Models. In NAACL 2022 - 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference (pp. 3921–3937). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.naacl-main.288

Readers over time: 2021–2025 (chart omitted)

Readers' Seniority

PhD / Post grad / Masters / Doc: 26 (70%)
Researcher: 7 (19%)
Lecturer / Post doc: 4 (11%)

Readers' Discipline

Computer Science: 40 (91%)
Linguistics: 2 (5%)
Neuroscience: 1 (2%)
Engineering: 1 (2%)
