Margin-aware unsupervised domain adaptation for cross-lingual text labeling


Abstract

Unsupervised domain adaptation addresses the problem of leveraging labeled data in a source domain to learn a well-performing model in a target domain where labels are unavailable. In this paper, we improve upon recent theoretical work (Zhang et al., 2019b) and adopt the Margin Disparity Discrepancy (MDD) unsupervised domain adaptation algorithm to solve cross-lingual text labeling problems. Experiments on cross-lingual document classification and NER demonstrate that the proposed domain adaptation approach advances the state-of-the-art results by a large margin. Specifically, we improve MDD by efficiently optimizing the margin loss on the source domain via Virtual Adversarial Training (VAT). This bridges the gap between the theory and the loss function used in the original work of Zhang et al. (2019b), and thereby significantly boosts performance. Our numerical results also indicate that VAT can remarkably improve the generalization performance on both domains for various domain adaptation approaches.
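As a rough illustration of the VAT regularizer the abstract refers to, below is a minimal PyTorch sketch following Miyato et al.'s general formulation, not the authors' implementation. The function names (`vat_loss`, `_l2_normalize`), the hyperparameters (`xi`, `eps`, a single power-iteration step), and the assumption that perturbations are applied to continuous inputs such as token embeddings are all illustrative choices for this sketch.

```python
# Minimal sketch of Virtual Adversarial Training (VAT), assuming perturbations
# are applied to continuous inputs (e.g., embeddings); names and defaults are
# illustrative, not taken from the paper's code.
import torch
import torch.nn.functional as F


def _l2_normalize(d: torch.Tensor) -> torch.Tensor:
    """Normalize each sample's perturbation to unit L2 norm."""
    norms = d.reshape(d.size(0), -1).norm(dim=1)
    return d / (norms.view(-1, *([1] * (d.dim() - 1))) + 1e-8)


def vat_loss(model, x, xi=1e-6, eps=1.0, n_power=1):
    """Virtual adversarial loss: KL(p(y|x) || p(y|x + r_adv)).

    r_adv is the norm-`eps` perturbation that most changes the model's
    prediction, estimated by power iteration on the KL divergence.
    """
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)  # clean prediction, held fixed

    # Start from a random direction and refine it by power iteration.
    d = _l2_normalize(torch.randn_like(x))
    for _ in range(n_power):
        d.requires_grad_(True)
        adv_kl = F.kl_div(F.log_softmax(model(x + xi * d), dim=1), p,
                          reduction="batchmean")
        grad = torch.autograd.grad(adv_kl, d)[0]
        d = _l2_normalize(grad.detach())

    # Smoothness penalty at the final adversarial perturbation.
    logits_adv = model(x + eps * d)
    return F.kl_div(F.log_softmax(logits_adv, dim=1), p, reduction="batchmean")
```

In training, a term like this would typically be added to the supervised margin loss on a source-domain batch, e.g. `total = margin_loss + alpha * vat_loss(model, src_embeddings)` with a tuning weight `alpha` (again, a hypothetical usage pattern rather than the paper's exact recipe).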

Citation (APA)

Zhang, D., Nallapati, R., Zhu, H., Nan, F., dos Santos, C. N., McKeown, K., & Xiang, B. (2020). Margin-aware unsupervised domain adaptation for cross-lingual text labeling. In Findings of the Association for Computational Linguistics: EMNLP 2020 (pp. 3527–3536). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.findings-emnlp.315
