Fine-grained analysis of cross-linguistic syntactic divergences

Abstract

The patterns in which the syntax of different languages converges and diverges are often used to inform work on cross-lingual transfer. Nevertheless, little empirical work has been done on quantifying the prevalence of different syntactic divergences across language pairs. We propose a framework for extracting divergence patterns for any language pair from a parallel corpus, building on Universal Dependencies (UD; Nivre et al., 2016). We show that our framework provides a detailed picture of cross-language divergences, generalizes previous approaches, and lends itself to full automation. We further present a novel dataset, a manually word-aligned subset of the Parallel UD corpus in five languages, and use it to perform a detailed corpus study. We demonstrate the usefulness of the resulting analysis by showing that it can help account for performance patterns of a cross-lingual parser.
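To make the framework concrete, the sketch below illustrates one way such divergence patterns could be extracted: given a pair of UD-annotated parallel sentences and a word alignment, each aligned word pair is mapped to the pair of dependency relation labels it carries in the two trees, and these label pairs are counted. The data structures, the toy English–French example, and the specific label choices are illustrative assumptions, not the authors' implementation.

```python
from collections import Counter
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Token:
    form: str    # surface form
    head: int    # 0-based index of the head token, or -1 for the root
    deprel: str  # Universal Dependencies relation label

def divergence_counts(
    src: List[Token],
    tgt: List[Token],
    alignment: List[Tuple[int, int]],
) -> Counter:
    """Count how often an aligned word pair carries a given pair of UD
    relation labels (source label, target label). Matching labels indicate
    convergence; mismatched label pairs are candidate divergence patterns."""
    counts = Counter()
    for s_idx, t_idx in alignment:
        counts[(src[s_idx].deprel, tgt[t_idx].deprel)] += 1
    return counts

# Toy example (hypothetical annotation): English "She likes music" vs.
# French "La musique lui plaît" ("music pleases her").
english = [Token("She", 1, "nsubj"), Token("likes", -1, "root"),
           Token("music", 1, "obj")]
french = [Token("La", 1, "det"), Token("musique", 3, "nsubj"),
          Token("lui", 3, "iobj"), Token("plaît", -1, "root")]
alignment = [(0, 2), (1, 3), (2, 1)]  # She-lui, likes-plaît, music-musique

print(divergence_counts(english, french, alignment))
# e.g. Counter({('nsubj', 'iobj'): 1, ('root', 'root'): 1, ('obj', 'nsubj'): 1})
```

Aggregating such counts over a word-aligned parallel corpus yields a fine-grained, pairwise picture of where two languages' syntactic analyses agree and where they systematically diverge.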

Citation (APA)

Nikolaev, D., Arviv, O., Karidi, T., Kenneth, N., Mitnik, V., Saeboe, L. M., & Abend, O. (2020). Fine-grained analysis of cross-linguistic syntactic divergences. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 1159–1176). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.acl-main.109
