Pyramid evaluation was developed to assess the content of paragraph-length summaries of source texts. A pyramid lists the distinct content units found in several reference summaries, weights each unit by how many reference summaries it occurs in, and yields three scores based on the weighted content of new summaries. We present an automated method that is more efficient, more transparent, and more complete than previous automated pyramid methods. It is tested on a new dataset of student summaries and on historical NIST data from extractive summarizers.
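To illustrate the weighting idea, the following is a minimal Python sketch of conventional pyramid scoring arithmetic. The function name, the toy weights, and the two normalizations (against a same-size ideal summary and an average-size ideal summary) are assumptions for illustration only; they are not the paper's exact three scores, and matching summary content to content units is the hard part the paper automates.

    # Minimal sketch of pyramid scoring arithmetic (illustrative, not the paper's method).
    # A content unit's weight is the number of reference summaries it appears in.

    def pyramid_scores(matched_weights, pyramid_weights, avg_scu_count):
        """Return raw, quality, and coverage scores from content-unit weights.

        matched_weights: weights of pyramid content units found in the new summary
        pyramid_weights: weights of all content units in the pyramid
        avg_scu_count:   average number of content units per reference summary
        """
        raw = sum(matched_weights)

        # Best possible totals: the heaviest content units a summary could express.
        ranked = sorted(pyramid_weights, reverse=True)
        max_for_size = sum(ranked[:len(matched_weights)])  # same number of units
        max_for_avg = sum(ranked[:avg_scu_count])          # average-size summary

        quality = raw / max_for_size if max_for_size else 0.0
        coverage = raw / max_for_avg if max_for_avg else 0.0
        return raw, quality, coverage

    # Toy pyramid built from 4 reference summaries.
    pyramid = [4, 3, 3, 2, 1, 1]
    matched = [4, 3, 1]  # content units matched in the new summary
    print(pyramid_scores(matched, pyramid, avg_scu_count=4))

Normalizing against the heaviest possible units of the same count rewards expressing important content; normalizing against an average-size ideal summary additionally penalizes summaries that express too little.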
CITATION STYLE
Gao, Y., Sun, C., & Passonneau, R. J. (2019). Automated pyramid summarization evaluation. In CoNLL 2019 - 23rd Conference on Computational Natural Language Learning, Proceedings of the Conference (pp. 404–418). Association for Computational Linguistics. https://doi.org/10.18653/v1/k19-1038