Improved transition-based parsing and tagging with neural networks

33 citations · 135 Mendeley readers

Abstract

We extend and improve upon recent work in structured training for neural network transition-based dependency parsing. We do this by experimenting with novel features, additional transition systems, and by testing on a wider array of languages. In particular, we introduce set-valued features to encode the predicted morphological properties and part-of-speech confusion sets of the words being parsed. We also investigate the use of joint parsing and part-of-speech tagging in the neural paradigm. Finally, we conduct a multilingual evaluation that demonstrates the robustness of the overall structured neural approach, as well as the benefits of the extensions proposed in this work. Our research further demonstrates the broad applicability of neural network methods to dependency parsing, as well as the ease with which new features can be added to neural parsing models.
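To illustrate the idea of set-valued features mentioned above, here is a minimal sketch (not the authors' code; the embedding dimension, tag set, and function names are illustrative assumptions): a set-valued feature, such as a word's part-of-speech confusion set, can be embedded by summing the embeddings of every element in the set, yielding one fixed-size vector regardless of the set's size.

```python
import numpy as np

# Hypothetical sketch: embed a set-valued feature (e.g. a POS confusion
# set like {NOUN, VERB}) as the sum of its elements' embeddings.
rng = np.random.default_rng(0)
EMB_DIM = 8
TAGS = ["NOUN", "VERB", "ADJ", "DET"]
tag_embeddings = {t: rng.standard_normal(EMB_DIM) for t in TAGS}

def embed_tag_set(confusion_set):
    """Map a set of candidate tags to a single fixed-size feature vector."""
    if not confusion_set:
        # Empty set contributes a zero vector.
        return np.zeros(EMB_DIM)
    return np.sum([tag_embeddings[t] for t in confusion_set], axis=0)

# An ambiguous word whose tagger is unsure between NOUN and VERB:
vec = embed_tag_set({"NOUN", "VERB"})
print(vec.shape)  # (8,)
```

Because summation is order-independent, the resulting vector is the same for any ordering of the set, which is what makes this a natural encoding for sets rather than sequences.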

Citation (APA)

Alberti, C., Weiss, D., Coppola, G., & Petrov, S. (2015). Improved transition-based parsing and tagging with neural networks. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP 2015) (pp. 1354–1359). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/d15-1159
