Correlation between ROUGE and human evaluation of extractive meeting summaries


Abstract

Automatic summarization evaluation is critical to the development of summarization systems. While ROUGE has been shown to correlate well with human evaluation of content match in text summarization, the multiparty meeting domain has many characteristics that may pose problems for ROUGE. In this paper, we carefully examine how well ROUGE scores correlate with human evaluation for extractive meeting summarization. Our experiments show that the correlation is generally rather low, but that a significantly better correlation can be obtained by accounting for several unique meeting characteristics, such as disfluencies and speaker information, especially when evaluating system-generated summaries.
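The experiment the abstract describes boils down to scoring summaries with ROUGE and checking how those scores track human judgments. The sketch below is not the authors' evaluation code; it assumes a simple unigram-overlap ROUGE-1 recall, a toy list of (reference extract, system extract, human score) triples, and uses a rank correlation as the agreement measure.

```python
# Minimal sketch: correlate ROUGE-1 recall with human scores.
# Function names and the toy data are illustrative assumptions only.
from collections import Counter
from scipy.stats import spearmanr

def rouge1_recall(reference: str, candidate: str) -> float:
    """Unigram-overlap ROUGE-1 recall: matched reference tokens / reference length."""
    ref_counts = Counter(reference.lower().split())
    cand_counts = Counter(candidate.lower().split())
    overlap = sum(min(c, cand_counts[tok]) for tok, c in ref_counts.items())
    return overlap / max(sum(ref_counts.values()), 1)

# Hypothetical per-summary data: (reference extract, system extract, human score).
examples = [
    ("we should move the deadline to friday", "move the deadline to friday", 4.0),
    ("the budget covers um the new microphones", "the budget covers the new microphones", 4.5),
    ("uh let's assign the slides to john", "assign slides to john", 3.5),
]

rouge_scores = [rouge1_recall(ref, sys) for ref, sys, _ in examples]
human_scores = [h for _, _, h in examples]

# Rank correlation between ROUGE and human evaluation; the paper reports that
# this improves when meeting-specific characteristics such as disfluencies
# (fillers like "um", "uh") and speaker information are taken into account.
rho, p = spearmanr(rouge_scores, human_scores)
print(f"Spearman rho = {rho:.3f} (p = {p:.3f})")
```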

Citation (APA)

Liu, F., & Liu, Y. (2008). Correlation between ROUGE and human evaluation of extractive meeting summaries. In ACL-08: HLT - 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference (pp. 201–204). Association for Computational Linguistics (ACL). https://doi.org/10.3115/1557690.1557747
