Bi-directional attention with agreement for dependency parsing


Abstract

We develop a novel bi-directional attention model for dependency parsing, which learns to agree on headword predictions from the forward and backward parsing directions. The parsing procedure for each direction is formulated as sequentially querying the memory component that stores continuous headword embeddings. The proposed parser makes use of soft headword embeddings, allowing the model to implicitly capture high-order parsing history without dramatically increasing the computational complexity. We conduct experiments on English, Chinese, and 12 other languages from the CoNLL 2006 shared task, showing that the proposed model achieves state-of-the-art unlabeled attachment scores on 6 languages.
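The core operation the abstract describes, querying a memory of continuous headword embeddings and taking a soft (attention-weighted) combination, can be illustrated with a minimal sketch. This is not the paper's implementation: the bilinear scoring matrix `W` and the `soft_headword` helper are hypothetical parameterizations chosen for clarity, and the real model conditions on parsing history and runs in both directions.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def soft_headword(query, memory, W):
    """Attend over candidate headword embeddings (illustrative sketch).

    query:  (d,) embedding of the word whose head is being predicted
    memory: (n, d) matrix of candidate headword embeddings
    W:      (d, d) bilinear scoring matrix (hypothetical parameterization)

    Returns the attention distribution over candidate heads and the
    soft headword embedding (the attention-weighted average), which can
    feed back into the parser without committing to a single hard head.
    """
    scores = memory @ W @ query      # one compatibility score per candidate head
    weights = softmax(scores)        # distribution over candidate heads
    soft_head = weights @ memory     # (d,) soft headword embedding
    return weights, soft_head

# Toy usage with random embeddings.
rng = np.random.default_rng(0)
d, n = 4, 6
query = rng.normal(size=d)
memory = rng.normal(size=(n, d))
W = np.eye(d)                        # identity scoring, just for the sketch
weights, soft_head = soft_headword(query, memory, W)
print(weights.sum())                 # attention weights sum to 1
```

Because the output is a weighted average rather than a hard head choice, downstream steps can condition on it, which is how a model of this kind can carry high-order parsing history at little extra cost.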

Citation (APA)

Cheng, H., Fang, H., He, X., Gao, J., & Deng, L. (2016). Bi-directional attention with agreement for dependency parsing. In EMNLP 2016 - Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 2204–2214). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/d16-1238
