Unsupervised Improvement of Factual Knowledge in Language Models

Abstract

Masked language modeling (MLM) plays a key role in pretraining large language models, but the MLM objective is often dominated by high-frequency words, which is suboptimal for learning factual knowledge. In this work, we propose an approach for influencing MLM pretraining so as to improve language model performance on a variety of knowledge-intensive tasks: the language model is forced to prioritize informative words in a fully unsupervised way. Experiments demonstrate that the proposed approach can significantly improve the performance of pretrained language models on tasks such as factual recall, question answering, sentiment analysis, and natural language inference in a closed-book setting.
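The abstract does not specify how "informative" words are identified or selected for masking, so the following is only a minimal sketch of the general idea: biasing MLM mask selection toward low-frequency tokens rather than choosing positions uniformly at random. The inverse-corpus-frequency scoring, the weighted sampling scheme, the 15% mask ratio, and the helper names (`informativeness_scores`, `choose_mask_positions`) are illustrative assumptions, not the authors' method.

```python
import math
import random
from collections import Counter

def informativeness_scores(corpus_tokens):
    """Assign each token a score that grows as its corpus frequency shrinks
    (a hypothetical proxy for informativeness; the paper's actual measure
    may differ)."""
    counts = Counter(corpus_tokens)
    total = sum(counts.values())
    return {tok: -math.log(c / total) for tok, c in counts.items()}

def choose_mask_positions(tokens, scores, mask_ratio=0.15):
    """Sample mask positions biased toward informative (rare) tokens,
    instead of uniformly at random as in standard MLM."""
    weights = [scores.get(tok, 0.0) + 1e-6 for tok in tokens]
    k = max(1, int(round(mask_ratio * len(tokens))))
    positions = list(range(len(tokens)))
    chosen = []
    for _ in range(k):  # weighted sampling without replacement
        remaining_total = sum(weights[p] for p in positions)
        r = random.uniform(0, remaining_total)
        acc = 0.0
        for i, p in enumerate(positions):
            acc += weights[p]
            if acc >= r:
                chosen.append(positions.pop(i))
                break
    return sorted(chosen)

# Toy usage: rare content words ("france", "paris") are more likely to be
# masked than frequent function words ("the", "of", "is").
corpus = "the capital of france is paris and the capital of italy is rome".split()
scores = informativeness_scores(corpus)
sentence = "the capital of france is paris".split()
masked_positions = choose_mask_positions(sentence, scores)
print(["[MASK]" if i in masked_positions else t for i, t in enumerate(sentence)])
```

Under this kind of weighting, the model is asked to predict content-bearing tokens (entities, facts) more often, which is the intuition behind prioritizing informative words during pretraining.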

Cite

APA: Sadeq, N., Kang, B., Lamba, P., & McAuley, J. (2023). Unsupervised Improvement of Factual Knowledge in Language Models. In EACL 2023 - 17th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference (pp. 2952–2961). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.eacl-main.215
