Cross-lingual dependency parsing with unlabeled auxiliary languages


Abstract

Cross-lingual transfer learning has become an important tool for coping with the lack of annotated resources in low-resource languages. One of the fundamental techniques for transferring across languages is learning language-agnostic representations, in the form of word embeddings or contextual encodings. In this work, we propose to leverage unannotated sentences from auxiliary languages to help learn language-agnostic representations. Specifically, we explore adversarial training for learning contextual encoders that produce invariant representations across languages to facilitate cross-lingual transfer. We conduct experiments on cross-lingual dependency parsing, where we train a dependency parser on a source language and transfer it to a wide range of target languages. Experiments on 28 target languages demonstrate that adversarial training significantly improves overall transfer performance under several different settings. We also conduct a careful analysis to evaluate the language-agnostic representations that result from adversarial training.
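The core mechanism the abstract describes, an encoder trained adversarially against a language discriminator so that its representations carry no language identity, is often implemented with a gradient-reversal layer. Below is a minimal PyTorch sketch of that setup, assuming such an implementation; the gradient-reversal choice, module names, and dimensions are illustrative assumptions, not the authors' exact architecture. Note that the adversarial signal needs only unlabeled auxiliary-language sentences tagged with language IDs, no parse trees.

```python
# Hedged sketch: adversarial language-invariance training via a
# gradient-reversal layer (GRL). All names and sizes are assumptions.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips (and scales) gradients backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Negated gradient flows to the encoder; no gradient for lambd.
        return -ctx.lambd * grad_output, None


class Encoder(nn.Module):
    """Stand-in contextual encoder over pre-computed word embeddings."""
    def __init__(self, emb_dim=300, hidden_dim=256):
        super().__init__()
        self.rnn = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                           bidirectional=True)

    def forward(self, embeddings):
        outputs, _ = self.rnn(embeddings)
        return outputs  # (batch, seq_len, 2 * hidden_dim)


class LanguageDiscriminator(nn.Module):
    """Predicts the language of a pooled sentence representation."""
    def __init__(self, input_dim=512, num_languages=3):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, num_languages))

    def forward(self, encodings, lambd=1.0):
        pooled = encodings.mean(dim=1)  # sentence-level mean pooling
        reversed_feat = GradReverse.apply(pooled, lambd)
        return self.classifier(reversed_feat)


# Toy training step. In the full method, a parser loss on annotated
# source-language batches would be added; the adversarial loss alone
# uses unlabeled sentences from the auxiliary languages.
encoder = Encoder()
discriminator = LanguageDiscriminator()
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(discriminator.parameters()), lr=1e-3)

embeddings = torch.randn(8, 20, 300)   # fake batch: 8 sentences, 20 tokens
lang_ids = torch.randint(0, 3, (8,))   # language ID of each sentence

optimizer.zero_grad()
encodings = encoder(embeddings)
lang_logits = discriminator(encodings, lambd=0.1)
adv_loss = nn.functional.cross_entropy(lang_logits, lang_ids)
adv_loss.backward()   # GRL: discriminator learns to classify languages,
optimizer.step()      # while the encoder learns to make that impossible
```

Because the discriminator minimizes the classification loss while the reversed gradient pushes the encoder to maximize it, the encodings are driven toward being indistinguishable across languages, which is the language-agnostic property the paper targets.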

Citation (APA)
Ahmad, W. U., Zhang, Z., Ma, X., Chang, K. W., & Peng, N. (2019). Cross-lingual dependency parsing with unlabeled auxiliary languages. In CoNLL 2019 - 23rd Conference on Computational Natural Language Learning, Proceedings of the Conference (pp. 372–382). Association for Computational Linguistics. https://doi.org/10.18653/v1/K19-1035
