Unsupervised Reinforcement Adaptation for Class-Imbalanced Text Classification

Citations: 1 · Readers (Mendeley): 30

Abstract

Class imbalance naturally arises when models are trained and tested in different domains. Unsupervised domain adaptation (UDA) augments model performance using only accessible annotations from the source domain and unlabeled data from the target domain. However, existing state-of-the-art UDA models learn domain-invariant representations and are evaluated primarily on class-balanced data across domains. In this work, we propose an unsupervised domain adaptation approach via reinforcement learning that jointly leverages feature variants and imbalanced labels across domains. We experiment with the text classification task because of its easily accessible datasets and compare the proposed method with five baselines. Experiments on three datasets show that our proposed method effectively learns robust domain-invariant representations and successfully adapts text classifiers to imbalanced classes across domains. The code is available at https://github.com/woqingdoua/ImbalanceClass.
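To make the setting concrete, the sketch below shows a generic unsupervised domain adaptation setup for text classification under class imbalance: a shared encoder learns domain-invariant features via a gradient reversal layer (DANN-style domain-adversarial training), and the source-domain classification loss is reweighted by inverse class frequency. This is not the authors' reinforcement-learning method (the RL component is replaced here by simple inverse-frequency reweighting); all module names, sizes, and the synthetic data are illustrative assumptions rather than details from the paper.

```python
# Generic illustration (NOT the paper's implementation) of UDA for text
# classification with class imbalance: domain-adversarial feature learning
# plus inverse-frequency class reweighting on the labeled source domain.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses and scales gradients on backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class DomainAdaptiveClassifier(nn.Module):
    def __init__(self, vocab_size=5000, emb_dim=64, hidden=64, num_classes=2):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, emb_dim)   # bag-of-words text encoder
        self.encoder = nn.Sequential(nn.Linear(emb_dim, hidden), nn.ReLU())
        self.classifier = nn.Linear(hidden, num_classes)    # task head (source labels only)
        self.domain_head = nn.Linear(hidden, 2)             # source-vs-target discriminator

    def forward(self, tokens, lambd=1.0):
        h = self.encoder(self.embed(tokens))
        return self.classifier(h), self.domain_head(GradReverse.apply(h, lambd))


def class_weights(labels, num_classes):
    """Inverse-frequency weights so minority classes contribute more to the loss."""
    counts = torch.bincount(labels, minlength=num_classes).float().clamp(min=1)
    return counts.sum() / (num_classes * counts)


# Toy training step on synthetic data (illustrative only).
torch.manual_seed(0)
model = DomainAdaptiveClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

src_tokens = torch.randint(0, 5000, (32, 20))    # labeled source batch
src_labels = (torch.rand(32) < 0.2).long()       # imbalanced labels (~20% positive)
tgt_tokens = torch.randint(0, 5000, (32, 20))    # unlabeled target batch

logits, src_dom = model(src_tokens)
_, tgt_dom = model(tgt_tokens)

cls_loss = F.cross_entropy(logits, src_labels,
                           weight=class_weights(src_labels, 2))
dom_logits = torch.cat([src_dom, tgt_dom])
dom_labels = torch.cat([torch.zeros(32), torch.ones(32)]).long()
dom_loss = F.cross_entropy(dom_logits, dom_labels)  # encoder opposes this via GradReverse

loss = cls_loss + dom_loss
opt.zero_grad()
loss.backward()
opt.step()
print(f"classification loss {cls_loss.item():.3f}, domain loss {dom_loss.item():.3f}")
```

The design point illustrated here is that the classifier only ever sees labels from the source domain, while the unlabeled target batch still shapes the encoder through the domain discriminator; the paper's contribution, by contrast, is to handle the imbalance and domain shift jointly with a reinforcement-learning policy rather than fixed class weights.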

Citation (APA)
Wu, Y., & Huang, X. (2022). Unsupervised Reinforcement Adaptation for Class-Imbalanced Text Classification. In *SEM 2022 - 11th Joint Conference on Lexical and Computational Semantics, Proceedings of the Conference (pp. 311–322). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.starsem-1.27
