Automated pyramid summarization evaluation


Abstract

Pyramid evaluation was developed to assess the content of paragraph-length summaries of source texts. A pyramid lists the distinct units of content found in several reference summaries, weights each content unit by the number of reference summaries it occurs in, and produces three scores based on the weighted content of new summaries. We present an automated method that is more efficient, more transparent, and more complete than previous automated pyramid methods. It is tested on a new dataset of student summaries and on historical NIST data from extractive summarizers.
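To make the weighting scheme concrete, here is a minimal sketch of the arithmetic behind pyramid scoring as described in the abstract. It is not the authors' automated system; the function names (build_pyramid, pyramid_scores) and the particular choice of three scores (quality, coverage, and their mean) are illustrative assumptions based on common pyramid-score variants.

```python
from collections import Counter

def build_pyramid(reference_scus):
    """Weight each content unit (SCU) by how many reference summaries contain it.

    reference_scus: list of sets, one set of SCU labels per reference summary.
    Returns a Counter mapping SCU -> weight.
    """
    pyramid = Counter()
    for scus in reference_scus:
        pyramid.update(set(scus))
    return pyramid

def pyramid_scores(summary_scus, pyramid, avg_ref_scu_count):
    """Illustrative sketch of three weight-based pyramid scores (assumed forms).

    summary_scus: set of SCUs matched in the summary being evaluated.
    pyramid: Counter of SCU weights from build_pyramid.
    avg_ref_scu_count: average number of SCUs per reference summary.
    """
    raw = sum(pyramid.get(scu, 0) for scu in summary_scus)
    weights = sorted(pyramid.values(), reverse=True)

    # Quality: raw score over the best attainable score with the same number of SCUs.
    max_same_size = sum(weights[: len(summary_scus)])
    quality = raw / max_same_size if max_same_size else 0.0

    # Coverage: raw score over the best attainable score for an average-size reference.
    max_avg_size = sum(weights[: round(avg_ref_scu_count)])
    coverage = raw / max_avg_size if max_avg_size else 0.0

    # Comprehensive: mean of quality and coverage.
    return quality, coverage, (quality + coverage) / 2
```

For example, if an SCU appears in three of four reference summaries it receives weight 3, and a candidate summary that expresses it earns those 3 points toward its raw score.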

Citation (APA)

Gao, Y., Sun, C., & Passonneau, R. J. (2019). Automated pyramid summarization evaluation. In CoNLL 2019 - 23rd Conference on Computational Natural Language Learning, Proceedings of the Conference (pp. 404–418). Association for Computational Linguistics. https://doi.org/10.18653/v1/k19-1038
