A principled framework for evaluating summarizers: Comparing models of summary quality against human judgments

Abstract

We present a new framework for evaluating extractive summarizers, based on a principled representation of summarization as an optimization problem. We prove that every extractive summarizer can be decomposed into an objective function and an optimization technique. We perform a comparative analysis and evaluation of the objective functions embedded in several well-known summarizers with respect to their correlation with human judgments. Comparing these correlations across two datasets yields surprising insights into the role and performance of objective functions in the different summarizers.
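The decomposition described in the abstract can be made concrete with a minimal sketch: an extractive summarizer is an objective function that scores candidate summaries, paired with an optimization technique that searches for a high-scoring subset of sentences under a length budget. The toy coverage objective and greedy optimizer below are illustrative assumptions for exposition, not the objectives or solvers analyzed in the paper.

```python
# Sketch of the abstract's decomposition: summarizer = objective + optimizer.
# The coverage objective and greedy search here are assumed examples.

def objective(summary, doc_words):
    """Toy objective: number of distinct source words covered by the summary."""
    covered = set()
    for sentence in summary:
        covered.update(sentence.split())
    return len(covered & doc_words)

def greedy_summarizer(sentences, budget):
    """Optimization technique: greedily add the sentence with the best gain
    in the objective, subject to a word-count budget."""
    doc_words = {w for s in sentences for w in s.split()}
    summary, length = [], 0
    remaining = list(sentences)
    while True:
        feasible = [s for s in remaining if length + len(s.split()) <= budget]
        if not feasible:
            break
        best = max(feasible, key=lambda s: objective(summary + [s], doc_words))
        if objective(summary + [best], doc_words) <= objective(summary, doc_words):
            break  # no remaining sentence improves the objective
        summary.append(best)
        length += len(best.split())
        remaining.remove(best)
    return summary

# Example: redundant sentences contribute no gain and are skipped.
docs = ["the cat sat", "the cat", "dogs bark loudly"]
print(greedy_summarizer(docs, budget=6))
```

Swapping in a different objective (e.g. a ROUGE-based or learned scorer) or a different optimizer (ILP, metaheuristics) changes the summarizer while keeping the same decomposition, which is what lets the paper evaluate objective functions in isolation.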

Citation (APA)

Peyrard, M., & Eckle-Kohler, J. (2017). A principled framework for evaluating summarizers: Comparing models of summary quality against human judgments. In ACL 2017 - 55th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers) (Vol. 2, pp. 26–31). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/P17-2005

