Generative incremental dependency parsing with neural networks

Abstract

We propose a neural network model for scalable generative transition-based dependency parsing. A probability distribution over both sentences and transition sequences is parameterised by a feedforward neural network. The model surpasses the accuracy and speed of previous generative dependency parsers, reaching 91.1% UAS. Perplexity results show a strong improvement over n-gram language models, opening the way to the efficient integration of syntax into neural models for language generation.
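The factorisation behind generative transition-based parsing can be illustrated with a toy sketch. This is not the authors' model: it uses an arc-standard oracle and a placeholder uniform distribution standing in for the feedforward neural network, but it shows how the joint probability of a sentence and its transition sequence decomposes step by step.

```python
import math

# Illustrative sketch only: a generative arc-standard transition system in
# which p(sentence, tree) factorises over transitions. The uniform scores
# below are placeholders for a learned model such as the paper's
# feedforward network.

SHIFT, LEFT_ARC, RIGHT_ARC = "shift", "left-arc", "right-arc"

def oracle_transitions(heads):
    """Derive the arc-standard transition sequence for a projective tree.
    `heads[i]` is the head of token i+1 (tokens are 1-based, 0 = root)."""
    n = len(heads)
    pending = [0] * (n + 1)          # unattached dependents per head
    for h in heads:
        pending[h] += 1
    stack, buf, actions = [0], list(range(1, n + 1)), []
    while buf or len(stack) > 1:
        if len(stack) >= 2:
            top, second = stack[-1], stack[-2]
            # LEFT-ARC: top governs second; pop second.
            if second != 0 and heads[second - 1] == top and pending[second] == 0:
                actions.append(LEFT_ARC); stack.pop(-2); pending[top] -= 1
                continue
            # RIGHT-ARC: second governs top; pop top.
            if heads[top - 1] == second and pending[top] == 0:
                actions.append(RIGHT_ARC); stack.pop(); pending[second] -= 1
                continue
        if not buf:
            raise ValueError("tree is not projective")
        actions.append(SHIFT)
        stack.append(buf.pop(0))
    return actions

def joint_log_prob(words, actions, vocab_size=10000):
    """log p(sentence, tree) = sum over steps of log p(action | history),
    plus log p(word | history) at each SHIFT, which generates a word.
    Uniform placeholder distributions stand in for the neural network."""
    logp = 0.0
    for a in actions:
        logp += math.log(1.0 / 3.0)             # p(action): uniform over 3
        if a == SHIFT:
            logp += math.log(1.0 / vocab_size)  # p(word): uniform over vocab
    return logp
```

For "the dog barks" with heads [2, 3, 0], the oracle yields shift, shift, left-arc, shift, left-arc, right-arc; each shift contributes a word-generation term, which is what makes the parser double as a language model and allows direct perplexity comparison with n-gram models.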

Citation (APA)

Buys, J., & Blunsom, P. (2015). Generative incremental dependency parsing with neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers) (pp. 863–869). Association for Computational Linguistics. https://doi.org/10.3115/v1/P15-2142
