Semi-Supervised Domain Adaptation for Emotion-Related Tasks


Abstract

Semi-supervised domain adaptation (SSDA) adapts a model trained on a label-rich source domain to a new but related target domain in which only a few labeled examples are available. Prior work has shown that, in the SSDA setting, a simple combination of domain adaptation (DA) and semi-supervised learning (SSL) techniques often fails to make effective use of the target supervision and cannot address the distribution shift across domains, because training is biased toward the source-labeled samples. In this paper, inspired by the co-learning of multiple classifiers in computer vision, we propose to decompose the SSDA framework for emotion-related tasks into two subcomponents: unsupervised domain adaptation (UDA) from the source to the target domain, and semi-supervised learning (SSL) within the target domain, where the two models iteratively teach each other by exchanging their highly confident predictions. We further propose a novel data-cartography-based regularization technique for pseudo-label denoising that uses training dynamics to further improve our models' performance. We release our code.
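The two ideas in the abstract, iterative co-teaching via confident pseudo-labels and data-cartography-based denoising from training dynamics, can be sketched roughly as follows. This is an illustrative sketch only: the function names, thresholds, and the choice of logistic regression as the classifier are assumptions, not the paper's actual implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# NOTE: illustrative sketch only; names and thresholds are assumptions,
# not the paper's implementation.

def training_dynamics(label_probs):
    """Data-cartography statistics from training dynamics.

    label_probs: (n_epochs, n_examples) array holding the probability the
    model assigned to each example's (pseudo-)label at each epoch.
    Returns per-example confidence (mean over epochs) and
    variability (std over epochs).
    """
    return label_probs.mean(axis=0), label_probs.std(axis=0)

def select_clean(label_probs, min_conf=0.8, max_var=0.2):
    """Keep pseudo-labels the model was consistently confident about
    (high confidence, low variability)."""
    conf, var = training_dynamics(label_probs)
    return np.where((conf >= min_conf) & (var <= max_var))[0]

def cotrain_round(teacher, student, X_lab, y_lab, X_unlab, thresh=0.9):
    """One round of co-teaching: the teacher pseudo-labels unlabeled
    target data and hands its highly confident predictions to the
    student, which retrains on labeled plus pseudo-labeled examples."""
    teacher.fit(X_lab, y_lab)
    probs = teacher.predict_proba(X_unlab)
    keep = probs.max(axis=1) >= thresh          # confident predictions only
    pseudo = probs.argmax(axis=1)[keep]
    student.fit(np.vstack([X_lab, X_unlab[keep]]),
                np.concatenate([y_lab, pseudo]))
    return student
```

In an SSDA loop of this shape, the UDA and SSL branches would alternate in the teacher and student roles across rounds, with `select_clean` pruning noisy pseudo-labels between rounds.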

Citation (APA)

Hosseini, M., & Caragea, C. (2023). Semi-Supervised Domain Adaptation for Emotion-Related Tasks. In Findings of the Association for Computational Linguistics: ACL 2023 (pp. 5402–5410). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.findings-acl.333
