NB-MLM: Efficient Domain Adaptation of Masked Language Models for Sentiment Analysis


Abstract

While Masked Language Models (MLMs) are pre-trained on massive datasets, additional training with the MLM objective on domain- or task-specific data before fine-tuning for the final task is known to improve final performance. This is usually referred to as the domain or task adaptation step. However, unlike the initial pre-training, this step is performed for each domain or task individually and is still rather slow, requiring several GPU-days, compared to the several GPU-hours needed for final-task fine-tuning. We argue that the standard MLM objective is inefficient for the adaptation step because it mostly learns to predict the most frequent words, which are not necessarily related to the final task. We propose a more efficient adaptation technique that focuses on predicting words with large weights in a Naive Bayes classifier trained for the task at hand; such words are likely more relevant than the most frequent ones. The proposed method provides faster adaptation and better final performance on sentiment analysis than the standard approach.
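The core idea, reweighting the masking distribution by per-token Naive Bayes weights, can be sketched as follows. This is a minimal illustration assuming binary sentiment labels, scikit-learn's MultinomialNB as the classifier, and a renormalization to the usual 15% masking budget; the function names (nb_token_scores, masking_probs) and the exact weighting scheme are assumptions for illustration, not the paper's implementation.

```python
# Hedged sketch of NB-weighted masking for MLM adaptation.
# Assumption: the NB "weight" of a word is the absolute log-ratio of its
# class-conditional probabilities; the paper's exact scheme may differ.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

def nb_token_scores(texts, labels):
    """Return a {token: score} map from a Naive Bayes classifier
    trained on the labeled task data."""
    vectorizer = CountVectorizer()
    counts = vectorizer.fit_transform(texts)
    nb = MultinomialNB().fit(counts, labels)
    # feature_log_prob_ has shape (n_classes, n_features); for binary
    # sentiment the weight is |log P(w|pos) - log P(w|neg)|.
    log_ratio = np.abs(nb.feature_log_prob_[1] - nb.feature_log_prob_[0])
    return dict(zip(vectorizer.get_feature_names_out(), log_ratio))

def masking_probs(tokens, scores, base_rate=0.15, temperature=1.0):
    """Turn NB scores into per-token masking probabilities whose mean
    equals the standard 15% MLM masking rate (an assumed normalization)."""
    w = np.exp(np.array([scores.get(t, 0.0) for t in tokens]) / temperature)
    return np.minimum(1.0, base_rate * len(w) * w / w.sum())

# Usage: sentiment-bearing tokens ("great", "awful") receive a higher
# masking probability than frequent but task-neutral tokens ("the").
scores = nb_token_scores(
    ["great movie", "awful plot", "the film was great", "the plot was awful"],
    [1, 0, 1, 0],
)
print(masking_probs("the film was awful".split(), scores))
```

During adaptation, these probabilities would replace the uniform masking choice in the MLM data collator, so gradient updates concentrate on task-relevant vocabulary rather than on high-frequency function words.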

Cite (APA)

Arefyev, N., Kharchev, D., & Shelmanov, A. (2021). NB-MLM: Efficient Domain Adaptation of Masked Language Models for Sentiment Analysis. In EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 9114–9124). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.emnlp-main.717
