Cross-lingual transfer for unsupervised dependency parsing without parallel data

32 citations · 87 Mendeley readers
Abstract

Cross-lingual transfer has been shown to produce good results for dependency parsing of resource-poor languages. Although this avoids the need for a target-language treebank, most approaches have still relied on large parallel corpora. However, parallel data is scarce for low-resource languages, so we present a new method that does not need parallel data. Our method learns syntactic word embeddings that generalise over the syntactic contexts of a bilingual vocabulary, and incorporates these into a neural network parser. We show empirical improvements over a baseline delexicalised parser on both the CoNLL and Universal Dependency Treebank datasets. We analyse the importance of the source languages, and show that combining multiple source languages leads to a substantial improvement.
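To illustrate the idea described in the abstract — a shared bilingual embedding space allowing a parser trained on a source language to process target-language words — here is a minimal, purely hypothetical sketch. The vocabulary, dimensions, and feature layout are invented for illustration and are not the authors' actual implementation.

```python
# Hypothetical sketch: a shared bilingual embedding table lets a parser
# trained on source-language features also score target-language words,
# with no parallel data involved. All names and values are illustrative.
import random

random.seed(0)
DIM = 4

# Bilingual vocabulary mapped into one embedding space; in the paper,
# such embeddings are learned from monolingual syntactic contexts.
embeddings = {w: [random.uniform(-1, 1) for _ in range(DIM)]
              for w in ["dog", "runs", "Hund", "laeuft"]}

def parser_features(word, pos_tag):
    """Concatenate a cross-lingual word embedding with a one-hot POS
    indicator; a purely delexicalised baseline would use POS alone."""
    pos_onehot = [1.0 if pos_tag == t else 0.0 for t in ("NOUN", "VERB")]
    return embeddings.get(word, [0.0] * DIM) + pos_onehot

# The same feature function applies to source and target words, so a
# parser trained on English features can be run on German input.
feats_en = parser_features("dog", "NOUN")
feats_de = parser_features("Hund", "NOUN")
```

Because both languages share one feature space, no treebank or parallel corpus is needed for the target language — only embeddings trained on monolingual text.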

Citation (APA)

Duong, L., Cohn, T., Bird, S., & Cook, P. (2015). Cross-lingual transfer for unsupervised dependency parsing without parallel data. In CoNLL 2015 - 19th Conference on Computational Natural Language Learning, Proceedings (pp. 113–122). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/k15-1012
