Globally normalized transition-based neural networks

260 citations · 925 readers (Mendeley)

Abstract

We introduce a globally normalized transition-based neural network model that achieves state-of-the-art part-of-speech tagging, dependency parsing and sentence compression results. Our model is a simple feed-forward neural network that operates on a task-specific transition system, yet achieves comparable or better accuracies than recurrent models. We discuss the importance of global as opposed to local normalization: a key insight is that the label bias problem implies that globally normalized models can be strictly more expressive than locally normalized models.
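To make the local-vs-global distinction concrete, here is a minimal sketch (not the paper's implementation; the tiny two-step, two-action transition system and fixed scores are invented for illustration). A locally normalized model applies a softmax over actions at every step and multiplies the per-step probabilities; a globally normalized model exponentiates the total sequence score and divides by one partition function over all action sequences. When scores do not depend on the action history, the two coincide; the paper's label-bias argument is that once scores condition on history, the global model is strictly more expressive.

```python
import math

# Hypothetical transition system: 2 steps, 2 actions per step.
# scores[t][a] = unnormalized score of action a at step t (history-independent here).
scores = [
    [2.0, 0.5],  # step 1
    [1.0, 1.5],  # step 2
]

def local_prob(actions):
    """Locally normalized: softmax over actions at EVERY step, then multiply."""
    p = 1.0
    for t, a in enumerate(actions):
        z_t = sum(math.exp(s) for s in scores[t])  # per-step partition
        p *= math.exp(scores[t][a]) / z_t
    return p

def global_prob(actions):
    """Globally normalized (CRF-style): one partition over ALL action sequences."""
    num = math.exp(sum(scores[t][a] for t, a in enumerate(actions)))
    z = sum(
        math.exp(scores[0][a1] + scores[1][a2])  # score of every full sequence
        for a1 in range(2)
        for a2 in range(2)
    )
    return num / z

# With history-independent scores the two distributions agree exactly,
# because the global partition factorizes into the product of per-step ones.
print(local_prob([0, 1]), global_prob([0, 1]))
```

In the paper's setting the scores are produced by a feed-forward network conditioned on the partial derivation, so the global partition no longer factorizes; it is instead approximated with beam search during training and inference.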

Citation (APA)

Andor, D., Alberti, C., Weiss, D., Severyn, A., Presta, A., Ganchev, K., … Collins, M. (2016). Globally normalized transition-based neural networks. In 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016 - Long Papers (Vol. 4, pp. 2442–2452). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/p16-1231
