Domain adversarial fine-tuning as an effective regularizer

Abstract

In Natural Language Processing (NLP), pretrained language models (LMs) transferred to downstream tasks have recently been shown to achieve state-of-the-art results. However, standard fine-tuning can degrade the general-domain representations captured during pretraining. To address this issue, we introduce a new regularization technique: AFTER, domain Adversarial Fine-Tuning as an Effective Regularizer. Specifically, we complement the task-specific loss used during fine-tuning with an adversarial objective. This additional loss term comes from an adversarial classifier that aims to discriminate between in-domain and out-of-domain text representations. In-domain refers to the labeled dataset of the task at hand, while out-of-domain refers to unlabeled data from a different domain. Intuitively, the adversarial classifier acts as a regularizer that prevents the model from overfitting to the task-specific domain. Empirical results on various natural language understanding tasks show that AFTER leads to improved performance compared to standard fine-tuning.
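The adversarial objective described above is commonly realized with a gradient reversal layer, as in domain-adversarial training: the domain classifier is trained to tell the two domains apart, while the reversed gradient pushes the encoder toward domain-invariant representations. The following toy PyTorch sketch illustrates one such training step; the linear encoder, heads, batch sizes, and the weight `lambd` are stand-ins for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates (and scales) the gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Toy modules standing in for a pretrained LM encoder and its heads.
encoder = nn.Linear(8, 4)
task_head = nn.Linear(4, 2)    # task classifier (uses in-domain labels)
domain_head = nn.Linear(4, 2)  # adversarial domain discriminator

criterion = nn.CrossEntropyLoss()
params = list(encoder.parameters()) + list(task_head.parameters()) + list(domain_head.parameters())
opt = torch.optim.SGD(params, lr=0.1)

# One step: a labeled in-domain batch plus an unlabeled out-of-domain batch.
x_in, y_task = torch.randn(16, 8), torch.randint(0, 2, (16,))
x_out = torch.randn(16, 8)

h_in, h_out = encoder(x_in), encoder(x_out)
task_loss = criterion(task_head(h_in), y_task)

# Domain labels: 0 = in-domain, 1 = out-of-domain.
h_all = torch.cat([h_in, h_out])
d_labels = torch.cat([torch.zeros(16, dtype=torch.long), torch.ones(16, dtype=torch.long)])
adv_loss = criterion(domain_head(grad_reverse(h_all, lambd=1.0)), d_labels)

# Combined objective: the reversal makes the encoder maximize the domain loss
# that the discriminator minimizes, regularizing it toward domain-invariant features.
loss = task_loss + adv_loss
opt.zero_grad()
loss.backward()
opt.step()
```

Because of the gradient reversal, a single `backward()` call updates the discriminator to separate the domains while simultaneously updating the encoder to confuse it, so no alternating optimization loop is needed.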

Citation (APA)

Vernikos, G., Margatina, K., Chronopoulou, A., & Androutsopoulos, I. (2020). Domain adversarial fine-tuning as an effective regularizer. In Findings of the Association for Computational Linguistics: EMNLP 2020 (pp. 3103–3112). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.findings-emnlp.278
