Abstract
Pre-trained language models (PLMs) aim to learn universal language representations by performing self-supervised training tasks on large-scale corpora. Since PLMs capture word semantics in different contexts, the quality of word representations depends heavily on word frequency, which usually follows a heavy-tailed distribution in the pre-training corpus. As a result, the embeddings of rare words on the tail are usually poorly optimized. In this work, we focus on enhancing language model pre-training by leveraging definitions of rare words from a dictionary. To incorporate a rare word's definition as part of the input, we fetch it from the dictionary and append it to the end of the input text sequence. In addition to training with the masked language modeling objective, we propose two novel self-supervised pre-training tasks on word-level and sentence-level alignment between the input text and rare-word definitions to enhance language representations. We evaluate the proposed model, named Dict-BERT, on the GLUE benchmark and eight specialized-domain datasets. Extensive experiments show that Dict-BERT significantly improves the understanding of rare words and boosts model performance on various NLP downstream tasks.
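To make the input-construction step concrete, below is a minimal sketch (not the authors' released code) of how a rare word's dictionary definition could be appended to the input sequence before tokenization. The mini-dictionary, the `rare_words` list, and the use of a HuggingFace-style BERT tokenizer are all assumptions for illustration; the paper's exact separator tokens and rare-word detection are not specified in the abstract.

```python
# Sketch: append dictionary definitions of rare words to the input text,
# following the abstract's description of Dict-BERT's input format.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# Hypothetical mini-dictionary mapping rare words to definitions.
dictionary = {
    "anemometer": "an instrument for measuring wind speed",
}

def build_input(text: str, rare_words: list) -> dict:
    """Fetch each rare word's definition and append it to the end
    of the input text sequence, as described in the abstract."""
    definitions = [dictionary[w] for w in rare_words if w in dictionary]
    # Passing the definitions as the second segment yields
    # [CLS] text [SEP] definitions [SEP].
    return tokenizer(text, " ".join(definitions))

encoded = build_input("The anemometer spun wildly in the storm.",
                      ["anemometer"])
print(tokenizer.decode(encoded["input_ids"]))
# [CLS] the anemometer spun wildly in the storm. [SEP]
# an instrument for measuring wind speed [SEP]
```

In the paper's setup, the model is then trained with masked language modeling plus the word-level and sentence-level alignment objectives over such concatenated sequences.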
Citation
Yu, W., Zhu, C., Fang, Y., Yu, D., Wang, S., Xu, Y., … Jiang, M. (2022). Dict-BERT: Enhancing Language Model Pre-training with Dictionary. In Findings of the Association for Computational Linguistics: ACL 2022 (pp. 1907–1918). Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.findings-acl.150