Reporting score distributions makes a difference: Performance Study of LSTM-networks for sequence tagging


Abstract

In this paper we show that reporting a single performance score is insufficient to compare non-deterministic approaches. We demonstrate for common sequence tagging tasks that the seed value for the random number generator can result in statistically significant (p < 10⁻⁴) differences for state-of-the-art systems. For two recent systems for NER, we observe an absolute difference of one percentage point in F1-score depending on the selected seed value, making these systems perceived as either state-of-the-art or mediocre. Instead of publishing and reporting single performance scores, we propose to compare score distributions based on multiple executions. Based on the evaluation of 50,000 LSTM-networks for five sequence tagging tasks, we present network architectures that produce superior performance and are more stable with respect to the remaining hyperparameters. The full experimental results are published in (Reimers and Gurevych, 2017). The implementation of our network is publicly available.
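The comparison the abstract proposes can be sketched concretely: train the same architecture under several random seeds, collect the per-run F1-scores, and test whether the two score distributions differ significantly rather than comparing two single numbers. A minimal, stdlib-only sketch using a two-sided permutation test is shown below; the score lists are hypothetical illustrations, not results from the paper.

```python
import random
import statistics

def permutation_test(scores_a, scores_b, n_permutations=10_000, seed=0):
    """Two-sided Monte Carlo permutation test on the difference of means.

    Returns the estimated p-value: the fraction of random relabelings of
    the pooled scores whose mean difference is at least as extreme as the
    observed one.
    """
    rng = random.Random(seed)
    observed = abs(statistics.mean(scores_a) - statistics.mean(scores_b))
    pooled = list(scores_a) + list(scores_b)
    n = len(scores_a)
    count = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:n]) - statistics.mean(pooled[n:]))
        if diff >= observed:
            count += 1
    return count / n_permutations

# Hypothetical F1-scores of two NER systems, each trained with 8 seeds.
system_a = [90.8, 91.0, 90.6, 91.2, 90.9, 90.7, 91.1, 90.8]
system_b = [90.1, 90.3, 89.9, 90.2, 90.0, 90.4, 89.8, 90.2]

p = permutation_test(system_a, system_b)
print(f"mean A = {statistics.mean(system_a):.2f}, "
      f"mean B = {statistics.mean(system_b):.2f}, p = {p:.4f}")
```

With such clearly separated score distributions the estimated p-value is near zero, whereas comparing only the best single run of each system could easily reverse the ranking.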

Citation (APA)

Reimers, N., & Gurevych, I. (2017). Reporting score distributions makes a difference: Performance study of LSTM-networks for sequence tagging. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP) (pp. 338–348). Association for Computational Linguistics. https://doi.org/10.18653/v1/d17-1035
