Dependency parsing with structure preserving embeddings

Abstract

Modern neural approaches to dependency parsing are trained to predict a tree structure by jointly learning a contextual representation for tokens in a sentence, as well as a head-dependent scoring function. While this strategy results in high performance, it is difficult to interpret these representations in relation to the geometry of the underlying tree structure. Our work seeks instead to learn interpretable representations by training a parser to explicitly preserve structural properties of a tree. We do so by casting dependency parsing as a tree embedding problem where we incorporate geometric properties of dependency trees in the form of training losses within a graph-based parser. We provide a thorough evaluation of these geometric losses, showing that the majority of them yield strong tree distance preservation as well as parsing performance on par with a competitive graph-based parser (Qi et al., 2018). Finally, we show where parsing errors lie in terms of tree relationships in order to guide future work.
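
The abstract does not spell out the geometric losses, but the core idea of casting parsing as a tree embedding problem can be sketched as a distance-preservation objective: pairwise distances between token embeddings are pushed towards the corresponding path lengths in the gold dependency tree. The PyTorch sketch below is an illustrative assumption, not the authors' exact formulation; the function name tree_distance_loss and its arguments are hypothetical.

    import torch

    def tree_distance_loss(embeddings, tree_distances, mask=None):
        # embeddings:     (n_tokens, dim) contextual token vectors
        # tree_distances: (n_tokens, n_tokens) path lengths between tokens in the gold tree
        # mask:           optional (n_tokens, n_tokens) 0/1 mask over valid token pairs

        # Squared L2 distance between every pair of token embeddings
        diffs = embeddings.unsqueeze(0) - embeddings.unsqueeze(1)   # (n, n, dim)
        emb_dist = (diffs ** 2).sum(dim=-1)                         # (n, n)

        # Penalise the gap between embedding-space distance and tree distance
        gap = (emb_dist - tree_distances.float()).abs()
        if mask is not None:
            gap = gap * mask
            return gap.sum() / mask.sum().clamp(min=1)
        return gap.mean()

In a graph-based parser, a term of this kind would be added to the usual head-dependent scoring loss so that the learned token representations also reflect the geometry of the gold tree.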

Citation (APA)

Kádár, Á., Xiao, L., Kemertas, M., Fancellu, F., Jepson, A., & Fazly, A. (2021). Dependency parsing with structure preserving embeddings. In EACL 2021 - 16th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference (pp. 1684–1697). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.eacl-main.144
