Deeper summarisation: The second time around: An overview and some practical suggestions

Abstract

This paper advocates deeper summarisation methods: methods that are closer to text understanding; methods that manipulate intermediate semantic representations. As a field, we are not yet in a position to create these representations perfectly, but I still believe that now is a good time to be a bit more ambitious again in our goals for summarisation. I think that a summariser should be able to provide some form of explanation for the summary it just created; and if we want those types of summarisers, we will have to start manipulating semantic representations. Considering the state of the art in NLP in 2016, I believe that the field is ready for a second attempt at going deeper in summarisation. We NLP folk have come a long way since the days of early AI research. Twenty-five years of statistical research in NLP have given us more robust, more informative processing of many aspects of semantics – such as semantic similarity and relatedness between words (and maybe larger units), semantic role labelling, co-reference resolution, and sentiment detection. Now, with these new tools under our belt, we can try again to create the right kind of intermediate representations for summarisation, and then do something exciting with them. Of course, exactly how is a very big question. In this opinion paper, I will bring forward some suggestions by taking a second look at historical summarisation models from the era of Strong AI. These may have been over-ambitious back then, but people still talk about them now because of their explanatory power: they make statements about which meaning units in a text are always important, and why. I will discuss two 1980s models for text understanding and summarisation (Wendy Lehnert’s Plot Units, and Kintsch and van Dijk’s (KvD) memory-restricted discourse structure), both of which have recently been revived by their first modern implementations. The implementation of Plot Unit-style affect analysis is by Goyal et al. (2013); the KvD implementation is by my student Yimai Fang, using a new corpus of language learner texts (Fang and Teufel 2014). Looking at those systems, I will argue that even an imperfect deeper summariser is exciting news.

Citation (APA)
Teufel, S. (2018). Deeper summarisation: The second time around: An overview and some practical suggestions. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9624 LNCS, pp. 581–598). Springer Verlag. https://doi.org/10.1007/978-3-319-75487-1_44
