Intrinsic evaluation of summarization datasets


Abstract

High-quality data forms the bedrock for building meaningful statistical models in NLP. Consequently, data quality must be evaluated either during dataset construction or post hoc. Almost all popular summarization datasets are drawn from natural sources and do not come with inherent quality assurance guarantees. In spite of this, data quality has gone largely unquestioned for many recent summarization datasets. We perform the first large-scale evaluation of summarization datasets by introducing 5 intrinsic metrics and applying them to 10 popular datasets. We find that data usage in recent summarization research is sometimes inconsistent with the underlying properties of the datasets employed. Further, we discover that our metrics can serve the additional purpose of being inexpensive heuristics for detecting generically low-quality examples.
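The abstract does not define the paper's 5 metrics, so the snippet below is only an illustrative sketch of what an intrinsic, reference-free dataset metric can look like: a word-level compression ratio (summary length over document length) averaged across a dataset. The list-of-dicts layout and the "document"/"summary" field names are assumptions for the example, not the paper's actual metric definitions or data format.

    # Illustrative sketch of an intrinsic dataset metric (compression ratio).
    # Dataset layout (dicts with "document"/"summary" keys) is assumed.

    def compression_ratio(document: str, summary: str) -> float:
        """Word-level compression: |summary| / |document| (lower = more compressive)."""
        doc_len = len(document.split())
        return len(summary.split()) / doc_len if doc_len else 0.0

    def mean_compression(examples) -> float:
        """Average compression ratio over an iterable of example dicts."""
        ratios = [compression_ratio(ex["document"], ex["summary"]) for ex in examples]
        return sum(ratios) / len(ratios) if ratios else 0.0

    if __name__ == "__main__":
        toy = [
            {"document": "a b c d e f g h", "summary": "a b"},
            {"document": "one two three four", "summary": "one"},
        ]
        print(f"mean compression ratio: {mean_compression(toy):.3f}")

Because such a statistic needs no model inference or human judgments, it matches the abstract's point that intrinsic metrics can serve as inexpensive heuristics, e.g., flagging examples whose ratio is near 0 (trivial summaries) or near 1 (near-copies) as potentially low quality.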

Cite

APA

Bommasani, R., & Cardie, C. (2020). Intrinsic evaluation of summarization datasets. In EMNLP 2020 - 2020 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference (pp. 8075–8096). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.emnlp-main.649
