Memory-Based Invariance Learning for Out-of-Domain Text Classification

Abstract

We investigate the task of out-of-domain (OOD) text classification, with the aim of extending a classification model trained on multiple source domains to an unseen target domain. Recent studies have shown that learning invariant representations can enhance OOD generalization. However, the inherent disparity in data distribution across domains poses challenges for effective invariance learning. This study addresses the issue by employing memory augmentations. Specifically, we augment the original feature space using key-value memory and employ a meta-learning-based approach to enhance the quality of the invariant representations. Experimental results on sentiment analysis and natural language inference tasks show the effectiveness of the memory-based method for invariance learning, leading to state-of-the-art performance on six datasets.
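The abstract does not give architectural details, but the key-value memory augmentation it describes can be sketched as follows: a set of memory slots (keys and values) is queried with a feature vector via scaled dot-product attention, and the retrieved value is concatenated onto the original feature. This is a minimal illustrative sketch; the slot count, initialization, and the `KeyValueMemory` / `augment` names are assumptions, not the authors' implementation, and in the actual model the keys and values would be learned parameters.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over the last axis
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class KeyValueMemory:
    """Hypothetical key-value memory module (illustrative only).

    Keys and values are random here; in a trained model they would be
    learnable parameters shared across source domains.
    """
    def __init__(self, num_slots, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.keys = rng.standard_normal((num_slots, dim)) / np.sqrt(dim)
        self.values = rng.standard_normal((num_slots, dim)) / np.sqrt(dim)

    def augment(self, h):
        # attention weights of each feature vector over the memory slots
        scores = h @ self.keys.T / np.sqrt(h.shape[-1])   # (batch, slots)
        attn = softmax(scores)                            # rows sum to 1
        retrieved = attn @ self.values                    # (batch, dim)
        # augment the original feature space with the retrieved memory
        return np.concatenate([h, retrieved], axis=-1)    # (batch, 2*dim)

mem = KeyValueMemory(num_slots=8, dim=16)
h = np.random.default_rng(1).standard_normal((4, 16))
aug = mem.augment(h)
```

A downstream classifier would then operate on the augmented features `aug`; the meta-learning component described in the abstract (not sketched here) would shape the memory so that the augmented representations stay invariant across held-out source domains.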

Citation (APA)

Jia, C., & Zhang, Y. (2023). Memory-Based Invariance Learning for Out-of-Domain Text Classification. In EMNLP 2023 - 2023 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 1635–1647). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.emnlp-main.101
