Leveraging Adversarial Training in Self-Learning for Cross-Lingual Text Classification


Abstract

In cross-lingual text classification, one seeks to exploit labeled data from one language to train a text classification model that can then be applied to a completely different language. Recent multilingual representation models have made this much easier to achieve. Still, subtle differences between languages may be neglected when doing so. To address this, we present a semi-supervised adversarial training process that minimizes the maximal loss for label-preserving input perturbations. The resulting model then serves as a teacher to induce labels for unlabeled target language samples that can be used during further adversarial training, allowing us to gradually adapt our model to the target language. Compared with a number of strong baselines, we observe significant gains in effectiveness on document and intent classification for a diverse set of languages.
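The core idea of minimizing the maximal loss under a norm-bounded input perturbation can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a simple logistic-regression classifier over a fixed input embedding and computes, in closed form, the first-order adversarial direction (the L2-normalized loss gradient with respect to the input), which is the worst-case perturbation to first order.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, x, y):
    # Binary cross-entropy for one example with weights w, embedding x, label y.
    p = sigmoid(w @ x)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def adversarial_perturbation(w, x, y, epsilon=0.1):
    # Gradient of the loss w.r.t. the input x is (p - y) * w for this model.
    p = sigmoid(w @ x)
    g = (p - y) * w
    # Move distance epsilon along the direction that maximally increases the loss.
    return epsilon * g / (np.linalg.norm(g) + 1e-12)

rng = np.random.default_rng(0)
w = rng.normal(size=8)   # hypothetical classifier weights
x = rng.normal(size=8)   # hypothetical multilingual sentence embedding
y = 1.0

r = adversarial_perturbation(w, x, y)
# The perturbed input incurs at least as much loss as the clean input.
assert loss(w, x + r, y) >= loss(w, x, y)
```

In adversarial training, one would then take a gradient step on the model parameters against `loss(w, x + r, y)` rather than the clean loss; in the self-learning loop described above, the same procedure is applied to target-language samples labeled by the teacher model.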

Citation (APA)

Dong, X., Zhu, Y., Zhang, Y., Fu, Z., Xu, D., Yang, S., & De Melo, G. (2020). Leveraging Adversarial Training in Self-Learning for Cross-Lingual Text Classification. In SIGIR 2020 - Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 1541–1544). Association for Computing Machinery, Inc. https://doi.org/10.1145/3397271.3401209
