SeqScore: Addressing Barriers to Reproducible Named Entity Recognition Evaluation


Abstract

To address a looming crisis of unreproducible evaluation for named entity recognition (NER), we propose guidelines and introduce SeqScore, a software package to improve reproducibility. The guidelines we propose are extremely simple and center on transparency regarding how chunks are encoded and scored. We demonstrate that despite the apparent simplicity of NER evaluation, unreported differences in the scoring procedure can result in changes to scores that are both of noticeable magnitude and statistically significant. We describe SeqScore, which addresses many of the issues that cause replication failures.
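To illustrate the kind of scoring discrepancy the abstract describes (this sketch is not SeqScore's actual API): a BIO label sequence can be invalid, for example a chunk that begins with `I-` rather than `B-`, and different scorers silently repair such sequences in different ways. The hypothetical `extract_chunks` helper below shows two common repair strategies that yield different chunk sets, and therefore different precision/recall, from the same model output.

```python
# Illustrative sketch (not SeqScore's API): two ways scorers commonly
# handle an invalid BIO sequence where a chunk starts with I- instead of B-.
def extract_chunks(labels, repair="begin"):
    """Extract (type, start, end) chunks from a list of BIO labels.

    repair="begin": treat an invalid leading I- as B- (conlleval-style).
    repair="discard": drop tokens whose I- does not continue a chunk.
    """
    chunks = []
    start, ctype = None, None
    for i, label in enumerate(labels):
        if label.startswith("B-"):
            if start is not None:
                chunks.append((ctype, start, i))  # close the open chunk
            start, ctype = i, label[2:]
        elif label.startswith("I-"):
            if start is not None and label[2:] == ctype:
                continue  # valid continuation of the current chunk
            if start is not None:
                chunks.append((ctype, start, i))  # type mismatch: close chunk
                start, ctype = None, None
            if repair == "begin":
                start, ctype = i, label[2:]  # repair invalid I- to B-
            # repair == "discard": skip the invalidly labeled token
        else:  # "O"
            if start is not None:
                chunks.append((ctype, start, i))
                start, ctype = None, None
    if start is not None:
        chunks.append((ctype, start, len(labels)))
    return chunks

labels = ["O", "I-PER", "I-PER", "O", "B-LOC"]
print(extract_chunks(labels, repair="begin"))    # [('PER', 1, 3), ('LOC', 4, 5)]
print(extract_chunks(labels, repair="discard"))  # [('LOC', 4, 5)]
```

With the "begin" repair the scorer credits a PER chunk that the "discard" repair never sees, so the same predictions receive different scores; if a paper does not report which behavior its scorer uses, the result is hard to reproduce.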

Citation (APA)

Palen-Michel, C., Holley, N., & Lignos, C. (2021). SeqScore: Addressing Barriers to Reproducible Named Entity Recognition Evaluation. In Eval4NLP 2021 - Evaluation and Comparison of NLP Systems, Proceedings of the 2nd Workshop (pp. 40–50). Association for Computational Linguistics (ACL). https://doi.org/10.26615/978-954-452-056-4_005
