We propose a novel metric for evaluating the content coverage of automatic summaries. The evaluation framework follows the Pyramid approach: it measures how many summarization content units (SCUs), judged important by human annotators, are contained in an automatic summary. Our method automates the evaluation so that no manual intervention is needed on the evaluated-summary side: it compares the abstract meaning representation (AMR) of each content unit mention with that of each summary sentence. We found that the proposed metric complements the widely used ROUGE metrics well.
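To make the described pipeline concrete, below is a minimal Python sketch of Pyramid-style coverage scoring over AMR-like structures. Everything here is an illustrative assumption rather than the paper's implementation: AMR graphs are hand-written sets of triples instead of parser output, the `SCU` container, the triple-overlap F1 (a simplified stand-in for Smatch-style graph matching, with no variable alignment), and the 0.5 match threshold are all hypothetical, and the score is normalized by total SCU weight rather than by the score of an ideally informative summary of the same size, as the original Pyramid method prescribes.

```python
from dataclasses import dataclass

# A toy stand-in for an AMR graph: a set of (source, relation, target)
# triples. A real system would obtain these from an AMR parser.


@dataclass
class SCU:
    """A summarization content unit: its weight (the number of model
    summaries expressing it) and the AMR triples of one mention."""
    weight: int
    triples: set


def triple_f1(scu_triples: set, sent_triples: set) -> float:
    """Simplified Smatch-style score: F1 over shared triples.
    (Real Smatch also searches for a variable alignment; omitted here.)"""
    overlap = len(scu_triples & sent_triples)
    if overlap == 0:
        return 0.0
    precision = overlap / len(sent_triples)
    recall = overlap / len(scu_triples)
    return 2 * precision * recall / (precision + recall)


def pyramid_score(scus, summary_sents, threshold=0.5):
    """Fraction of SCU weight covered by the summary. An SCU counts as
    covered if some summary sentence's AMR matches its mention above
    `threshold`. Normalization by total weight is a simplification."""
    covered = sum(
        scu.weight
        for scu in scus
        if any(triple_f1(scu.triples, s) >= threshold for s in summary_sents)
    )
    total = sum(scu.weight for scu in scus)
    return covered / total if total else 0.0
```

A small usage example with two SCUs of unequal weight and a one-sentence summary:

```python
scus = [
    SCU(4, {("j", "instance", "join"), ("j", "ARG0", "board")}),
    SCU(1, {("r", "instance", "resign")}),
]
summary = [{("j", "instance", "join"), ("j", "ARG0", "board")}]
print(pyramid_score(scus, summary))  # 0.8: only the weight-4 SCU is covered
```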
Citation
Steinberger, J., Krejzl, P., & Brychcín, T. (2017). Pyramid-based summary evaluation using abstract meaning representation. In Proceedings of Recent Advances in Natural Language Processing (RANLP 2017) (pp. 701–706). Incoma Ltd. https://doi.org/10.26615/978-954-452-049-6_090