Do LSTMs really work so well for PoS tagging? – A replication study


Abstract

A recent study by Plank et al. (2016) found that LSTM-based PoS taggers considerably improve over the current state-of-the-art when evaluated on the corpora of the Universal Dependencies project that use a coarse-grained tagset. We replicate this study using a fresh collection of 27 corpora of 21 languages that are annotated with fine-grained tagsets of varying size. Our replication confirms the result in general, and we additionally find that the advantage of LSTMs is even bigger for larger tagsets. However, we also find that for the very large tagsets of morphologically rich languages, hand-crafted morphological lexicons are still necessary to reach state-of-the-art performance.
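To make the recurrent unit under discussion concrete, below is a minimal single-step LSTM cell in pure Python. This is only an illustrative sketch of the gating mechanism, not the taggers evaluated in the paper (those are full bidirectional LSTM networks over word and character embeddings); the weight values are hypothetical.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W):
    """One LSTM time step for scalar input and state (dimension 1).

    W maps a gate name to (w_x, w_h, b). Gates: input (i), forget (f),
    output (o), and the candidate cell state (g).
    """
    i = sigmoid(W["i"][0] * x + W["i"][1] * h_prev + W["i"][2])
    f = sigmoid(W["f"][0] * x + W["f"][1] * h_prev + W["f"][2])
    o = sigmoid(W["o"][0] * x + W["o"][1] * h_prev + W["o"][2])
    g = math.tanh(W["g"][0] * x + W["g"][1] * h_prev + W["g"][2])
    c = f * c_prev + i * g      # new cell state: forget old, admit new
    h = o * math.tanh(c)        # new hidden state (a tagger feeds this to a softmax over tags)
    return h, c

# Toy weights (hypothetical, for illustration only) and a toy
# "sentence" of scalar word features.
W = {k: (0.5, 0.5, 0.0) for k in ("i", "f", "o", "g")}
h, c = 0.0, 0.0
for x in (1.0, -1.0, 0.5):
    h, c = lstm_step(x, h, c, W)
```

In a real tagger, `x` would be a word (and character) embedding vector, the scalars would be matrices, and a second LSTM would read the sentence right-to-left.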

Citation

Horsmann, T., & Zesch, T. (2017). Do LSTMs really work so well for PoS tagging? – A replication study. In EMNLP 2017 - Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 727–736). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/d17-1076
