Recurrent greedy parsing with neural networks

Abstract

In this paper, we propose a bottom-up, greedy, and purely discriminative syntactic parsing approach that relies on only a few simple features. The core of the architecture is a simple neural network, trained with an objective function similar to that of a Conditional Random Field. This parser leverages continuous word vector representations to model the conditional distributions of context-aware syntactic rules. The learned rule distributions are naturally smoothed, thanks to the continuous nature of the input features and of the model. Generalization accuracy compares favorably to existing generative or discriminative (non-reranking) parsers despite the greedy nature of our approach, while prediction speed is very fast. © 2014 Springer-Verlag.
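To make the greedy bottom-up procedure concrete, the sketch below shows one plausible reading: word vectors are scored by a small feed-forward network over adjacent nodes, and the best-scoring pair is repeatedly merged into a labeled parent. All names, dimensions, label inventory, and the toy scorer with random weights are illustrative assumptions; the paper's actual architecture and its CRF-like training objective are not reproduced here.

```python
# Minimal sketch of greedy bottom-up parsing with a neural scorer.
# The embeddings, network weights, and parent-composition rule are
# toy placeholders, not the authors' actual model.
import numpy as np

rng = np.random.default_rng(0)

EMB_DIM = 8
HIDDEN = 16
LABELS = ["NP", "VP", "S"]  # toy non-terminal inventory

# Toy word embeddings (standing in for continuous word vectors).
vocab = {"the": 0, "cat": 1, "sat": 2}
embeddings = rng.normal(size=(len(vocab), EMB_DIM))

# Toy feed-forward scorer: maps a pair of adjacent node
# representations (left ++ right) to one score per label.
W1 = rng.normal(size=(2 * EMB_DIM, HIDDEN))
W2 = rng.normal(size=(HIDDEN, len(LABELS)))

def score_pair(left_vec, right_vec):
    """Return label scores for merging two adjacent nodes."""
    h = np.tanh(np.concatenate([left_vec, right_vec]) @ W1)
    return h @ W2

def greedy_parse(words):
    """Greedily merge the best-scoring adjacent pair until one node remains."""
    nodes = [(w, embeddings[vocab[w]]) for w in words]
    while len(nodes) > 1:
        best = None
        for i in range(len(nodes) - 1):
            scores = score_pair(nodes[i][1], nodes[i + 1][1])
            j = int(np.argmax(scores))
            if best is None or scores[j] > best[0]:
                best = (scores[j], i, LABELS[j])
        _, i, label = best
        # Parent representation: a simple average (illustrative only).
        parent_vec = (nodes[i][1] + nodes[i + 1][1]) / 2.0
        parent = ((label, nodes[i][0], nodes[i + 1][0]), parent_vec)
        nodes[i:i + 2] = [parent]
    return nodes[0][0]

print(greedy_parse(["the", "cat", "sat"]))
```

Because each merge is committed immediately and never revised, the procedure runs in time roughly quadratic in sentence length with this naive pair search, which is consistent with the fast prediction speed the abstract emphasizes.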

Citation (APA)

Legrand, J., & Collobert, R. (2014). Recurrent greedy parsing with neural networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8725 LNAI, pp. 130–144). Springer Verlag. https://doi.org/10.1007/978-3-662-44851-9_9
