GEMS: Generative modeling for evaluation of summaries


Abstract

Automated evaluation is crucial for automated text summarization, as it is for any language technology. In this paper we present a generative modeling framework for evaluating the content of summaries. We use two simple alternatives for identifying signature-terms in the reference summaries, based on model consistency and on part-of-speech (POS) features. Using a generative modeling approach, we capture the sentence-level presence of these signature-terms in peer summaries. We show that parts of speech such as nouns and verbs provide a simple and robust method of signature-term identification for the generative modeling approach. We also show that, for our approach, a large set of 'significant signature-terms' is better than a small set of 'strong signature-terms'. Our results show that the generative modeling approach is indeed promising - it provides high correlations with manual evaluations - and that further investigation of signature-term identification methods could yield even better results. The efficacy of the approach can be seen from its ability to capture 'overall responsiveness' much better than the state of the art in distinguishing a human from a system. © Springer-Verlag 2010.
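The abstract does not spell out how POS features select signature-terms; a minimal sketch of the general idea - keeping only nouns and verbs from a POS-tagged reference summary and ranking them by frequency - might look as follows. The function name, the frequency ranking, and the toy tagged sentence are illustrative assumptions, not the paper's actual procedure.

```python
from collections import Counter

def signature_terms(tagged_tokens, top_k=3):
    """Pick candidate signature terms from POS-tagged reference-summary
    tokens by keeping only nouns (NN*) and verbs (VB*), then ranking by
    frequency. Hypothetical helper: the paper's exact selection criteria
    (e.g. model consistency, significance thresholds) are not shown here.
    """
    content_words = [word.lower() for word, tag in tagged_tokens
                     if tag.startswith(("NN", "VB"))]
    return [word for word, _ in Counter(content_words).most_common(top_k)]

# Toy reference-summary sentence, hand-tagged with Penn Treebank tags.
tagged = [("The", "DT"), ("storm", "NN"), ("damaged", "VBD"),
          ("coastal", "JJ"), ("towns", "NNS"), ("and", "CC"),
          ("the", "DT"), ("storm", "NN"), ("disrupted", "VBD"),
          ("power", "NN"), ("supplies", "NNS")]

print(signature_terms(tagged))  # → ['storm', 'damaged', 'towns']
```

In practice the tagged input would come from an off-the-shelf POS tagger run over the reference summaries, and the resulting term set would feed the generative model that scores sentence-level term presence in peer summaries.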

Citation (APA)

Katragadda, R. (2010). GEMS: Generative modeling for evaluation of summaries. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6008 LNCS, pp. 724–735). https://doi.org/10.1007/978-3-642-12116-6_61
