Metrics also Disagree in the Low Scoring Range: Revisiting Summarization Evaluation Metrics


Abstract

In text summarization, evaluating the efficacy of automatic metrics without human judgments has recently become popular. One exemplar work (Peyrard, 2019) concludes that automatic metrics strongly disagree when ranking high-scoring summaries. In this paper, we revisit their experiments and find that their observations stem from the fact that metrics disagree in ranking summaries from any narrow scoring range. We hypothesize that this may be because summaries are similar to each other in a narrow scoring range and are thus difficult to rank. Apart from the width of the scoring range of summaries, we analyze three other properties that impact inter-metric agreement: Ease of Summarization, Abstractiveness, and Coverage. To encourage reproducible research, we make all our analysis code and data publicly available.
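The analysis described above centers on rank agreement between metrics within a scoring range. As a rough illustration of the kind of computation involved (not the paper's actual code), the following Python sketch computes Kendall's tau between two metrics for summaries whose scores fall in a narrow band of one metric; the metric names, score values, and band boundaries are assumptions for the example.

```python
# Illustrative sketch only: rank agreement (Kendall's tau) between two metrics,
# restricted to summaries whose scores fall in a narrow band of a reference metric.
# Metric names, score values, and band boundaries below are hypothetical.
from scipy.stats import kendalltau

def agreement_in_range(scores_a, scores_b, low, high):
    """Kendall's tau between metric A and metric B, computed only over
    summaries whose metric-A score lies in [low, high)."""
    pairs = [(a, b) for a, b in zip(scores_a, scores_b) if low <= a < high]
    if len(pairs) < 2:
        return None  # too few summaries in this band to compare rankings
    a_band, b_band = zip(*pairs)
    tau, _p_value = kendalltau(a_band, b_band)
    return tau

# Hypothetical scores for the same set of summaries under two metrics.
rouge_scores = [0.12, 0.15, 0.18, 0.21, 0.45, 0.48, 0.52, 0.55]
js_scores    = [0.30, 0.28, 0.35, 0.33, 0.70, 0.68, 0.75, 0.73]

print(agreement_in_range(rouge_scores, js_scores, 0.10, 0.25))  # low-scoring band
print(agreement_in_range(rouge_scores, js_scores, 0.40, 0.60))  # high-scoring band
```

Comparing the tau values across bands of different widths and positions is one way to probe the paper's claim that disagreement appears in any sufficiently narrow scoring range, not just among high-scoring summaries.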

Citation (APA)

Bhandari, M., Gour, P., Ashfaq, A., & Liu, P. (2020). Metrics also Disagree in the Low Scoring Range: Revisiting Summarization Evaluation Metrics. In COLING 2020 - 28th International Conference on Computational Linguistics, Proceedings of the Conference (pp. 5702–5711). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.coling-main.501
