Unsupervised text recap extraction for TV series


Abstract

Sequences shown at the beginning of TV episodes help the audience absorb the essence of previous episodes and capture their attention for the upcoming plot. In this paper, we propose a novel task, text recap extraction. Compared with conventional summarization, text recap extraction captures the duality of summarization and plot contingency between adjacent episodes. We present a new dataset, TVRecap, for text recap extraction on TV shows. We propose an unsupervised model that extracts text recaps based on plot descriptions. We introduce two contingency factors, concept coverage and sparse reconstruction, that encourage recaps to prompt the upcoming story development. We also propose a multi-view extension of our model that can incorporate dialogues and synopses. We conduct extensive experiments on TVRecap and conclude that our model outperforms summarization approaches.
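To make the "concept coverage" contingency factor concrete, here is a minimal, hypothetical sketch: candidate recap sentences are scored greedily by how many new concepts of the next episode's plot description they cover. The word-level notion of "concept", the function names, and the example sentences are all assumptions for illustration, not the authors' actual formulation (which is unsupervised and also uses sparse reconstruction).

```python
def tokenize(text):
    # Crude stand-in for concept extraction: lowercase content words.
    return {w.lower().strip(".,!?") for w in text.split() if len(w) > 3}

def extract_recap(candidates, plot_description, k=2):
    """Greedily select k sentences maximizing coverage of plot concepts."""
    target = tokenize(plot_description)
    chosen, covered = [], set()
    pool = list(candidates)
    for _ in range(k):
        best, best_gain = None, 0
        for sent in pool:
            # Marginal gain: plot concepts this sentence covers for the first time.
            gain = len((tokenize(sent) & target) - covered)
            if gain > best_gain:
                best, best_gain = sent, gain
        if best is None:
            break  # no sentence adds new coverage
        chosen.append(best)
        covered |= tokenize(best) & target
        pool.remove(best)
    return chosen

candidates = [
    "Walter finally confesses the truth to Skyler.",
    "The weather in Albuquerque stays sunny.",
    "Jesse plans to leave town after the confession.",
]
plot = "Skyler reacts to Walter's confession while Jesse prepares to leave town."
print(extract_recap(candidates, plot))
```

The greedy marginal-gain rule favors sentences that foreshadow the upcoming plot over merely salient ones, which is the intuition behind plot contingency in the abstract.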

Citation (APA)

Yu, H., Zhang, S., & Morency, L. P. (2016). Unsupervised text recap extraction for TV series. In EMNLP 2016 - Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 1797–1806). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/d16-1185
