A Turing test to evaluate a complex summarization task


Abstract

This paper presents a new strategy for evaluating a complex Natural Language Processing (NLP) task using the Turing test. Automatic summarization based on sentence compression requires assessing informativeness and modifying inner sentence structures. This is intrinsically much closer to real rephrasing than the plain sentence extraction and ranking paradigm, so new evaluation methods are needed. We propose a novel imitation game to evaluate Automatic Summarization by Compression (ASC). The rationale of this Turing-like evaluation could be applied to many other complex NLP tasks, such as Machine Translation or Text Generation. We show that a state-of-the-art ASC system can pass such a test and simulate a human summary in 60% of the cases. © 2013 Springer-Verlag.
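As an illustration only (the paper's actual protocol, judge counts, and data are not given here; everything below is a hypothetical sketch), an imitation-game evaluation of this kind can be scored by showing judges summaries that are either human-written or system-compressed and measuring how often the machine output is judged to be human:

```python
from dataclasses import dataclass

@dataclass
class Judgement:
    is_machine_summary: bool   # True if the shown summary came from the ASC system
    judged_as_human: bool      # True if the judge believed a human wrote it

def fooling_rate(judgements: list[Judgement]) -> float:
    """Fraction of machine summaries that judges mistook for human ones."""
    machine = [j for j in judgements if j.is_machine_summary]
    if not machine:
        return 0.0
    return sum(j.judged_as_human for j in machine) / len(machine)

# Hypothetical toy data: 3 of 5 machine summaries pass as human (60%).
trials = [
    Judgement(True, True), Judgement(True, False), Judgement(True, True),
    Judgement(True, True), Judgement(True, False),
    Judgement(False, True), Judgement(False, True),
]
print(f"Machine summaries judged human: {fooling_rate(trials):.0%}")
```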

Citation (APA)

Molina, A., SanJuan, E., & Torres-Moreno, J. M. (2013). A Turing test to evaluate a complex summarization task. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8138 LNCS, pp. 75–80). https://doi.org/10.1007/978-3-642-40802-1_9
