Self-Evolution Learning for Discriminative Language Model Pretraining

8 citations · 21 Mendeley readers

Abstract

Masked language modeling, widely used in pretraining discriminative language models (e.g., BERT), commonly adopts a random masking strategy. However, random masking ignores the fact that words contribute unequally to sentence meaning, and some are more worth predicting than others. Various alternative masking strategies (e.g., entity-level masking) have therefore been proposed, but most require expensive prior knowledge and generally train from scratch without reusing existing model weights. In this paper, we present Self-Evolution learning (SE), a simple and effective token masking and learning method that fully and wisely exploits the knowledge in the data. SE focuses on learning informative yet under-explored tokens and adaptively regularizes training by introducing a novel Token-specific Label Smoothing approach. Experiments on 10 tasks show that SE brings consistent and significant improvements (+1.43∼2.12 average score) over different PLMs. In-depth analyses demonstrate that SE improves linguistic knowledge learning and generalization.
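The token-specific label smoothing idea can be illustrated with a short PyTorch sketch. This is only one plausible reading of the abstract: the mixing weight alpha and the use of a per-token reference distribution (ref_probs) as the smoothing prior are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def token_specific_label_smoothing_loss(logits, ref_probs, targets, alpha=0.1):
    """Illustrative sketch of a token-specific label smoothing loss.

    Standard label smoothing mixes the one-hot target with a uniform
    distribution shared by all tokens. Here, each masked token's target is
    instead mixed with a token-specific prior (ref_probs), e.g. a reference
    model's predictive distribution. `alpha` is a hypothetical mixing weight.

    logits:    (num_masked, vocab_size) predictions at masked positions
    ref_probs: (num_masked, vocab_size) per-token smoothing distribution
    targets:   (num_masked,) gold token ids for the masked positions
    """
    vocab_size = logits.size(-1)
    one_hot = F.one_hot(targets, vocab_size).float()
    # Token-specific soft target: convex mix of the gold label and the prior.
    soft_targets = (1.0 - alpha) * one_hot + alpha * ref_probs
    log_probs = F.log_softmax(logits, dim=-1)
    # Cross-entropy against the smoothed targets, averaged over masked tokens.
    return -(soft_targets * log_probs).sum(dim=-1).mean()

if __name__ == "__main__":
    torch.manual_seed(0)
    num_masked, vocab = 4, 10
    logits = torch.randn(num_masked, vocab)
    ref_probs = F.softmax(torch.randn(num_masked, vocab), dim=-1)
    targets = torch.randint(0, vocab, (num_masked,))
    print(token_specific_label_smoothing_loss(logits, ref_probs, targets))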

Cite

APA

Zhong, Q., Ding, L., Liu, J., Du, B., & Tao, D. (2023). Self-Evolution Learning for Discriminative Language Model Pretraining. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 4130–4145). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-acl.254
