Abstract
This paper tackles the automation of the pyramid method, a reliable manual evaluation framework for summarization. To construct a pyramid, we transform human-written reference summaries into extractive reference summaries consisting of Elementary Discourse Units (EDUs) obtained from the source documents, and we weight each EDU by the number of extractive reference summaries that contain it. A candidate summary is then scored by the correspondences between its EDUs and those in the pyramid. Experiments on the DUC and TAC data sets show that our methods correlate strongly with various manual evaluations.
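To make the weighting and scoring steps concrete, the following Python sketch builds a pyramid from EDU-based extractive reference summaries and scores a candidate summary by the total weight of its matched EDUs, normalized by the best weight attainable with the same number of EDUs. The EDU identifiers, the exact-match criterion, and the normalization used here are simplifying assumptions for illustration, not the paper's actual implementation.

```python
from collections import Counter
from typing import Iterable, Set

def build_pyramid(extractive_refs: Iterable[Set[str]]) -> Counter:
    """Weight each EDU by how many extractive reference summaries contain it."""
    pyramid = Counter()
    for ref in extractive_refs:
        pyramid.update(ref)  # each reference contributes at most 1 per EDU
    return pyramid

def pyramid_score(system_edus: Set[str], pyramid: Counter, size: int) -> float:
    """Sum the weights of the candidate's matched EDUs and normalize by the
    highest weight achievable with `size` EDUs (assumed normalization)."""
    attained = sum(pyramid[e] for e in system_edus)
    ideal = sum(weight for _, weight in pyramid.most_common(size))
    return attained / ideal if ideal else 0.0

# Toy usage with hypothetical EDU identifiers:
refs = [{"e1", "e2"}, {"e1", "e3"}, {"e1", "e2", "e4"}]
pyr = build_pyramid(refs)            # weights: e1=3, e2=2, e3=1, e4=1
print(pyramid_score({"e1", "e4"}, pyr, size=2))  # (3 + 1) / (3 + 2) = 0.8
```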
Citation
Hirao, T., Kamigaito, H., & Nagata, M. (2018). Automatic pyramid evaluation exploiting EDU-based extractive reference summaries. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018 (pp. 4177–4186). Association for Computational Linguistics. https://doi.org/10.18653/v1/d18-1450