Condensed Movies: Story Based Retrieval with Contextual Embeddings

Citations: 8 · Mendeley readers: 68

Abstract

Our objective in this work is long-range understanding of the narrative structure of movies. Instead of considering the entire movie, we propose to learn from its ‘key scenes’, which provide a condensed view of the full storyline. To this end, we make the following three contributions: (i) we create the Condensed Movies Dataset (CMD), consisting of the key scenes from over 3K movies; each key scene is accompanied by a high-level semantic description of the scene, character face-tracks, and metadata about the movie. The dataset is scalable, obtained automatically from YouTube, and freely available for anybody to download and use. It is also an order of magnitude larger than existing movie datasets in the number of movies; (ii) we provide a deep network baseline for text-to-video retrieval on our dataset, combining character, speech and visual cues into a single video embedding; and finally (iii) we demonstrate how the addition of context from other video clips improves retrieval performance.
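To make contribution (ii) concrete, the sketch below shows one common way such a baseline can be built: project per-modality ‘expert’ features (character/face, speech, visual) into a shared space, fuse them with learned weights into a single normalised video embedding, and score it against a text embedding by cosine similarity. This is a minimal PyTorch-style illustration under stated assumptions; the class name, feature dimensions, and softmax gating are hypothetical, not the authors' released model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiModalVideoEmbedding(nn.Module):
    """Illustrative fusion of per-modality 'expert' features
    (character/face, speech, visual) into one video embedding.
    Names, dimensions and the softmax gating are assumptions,
    not the paper's exact architecture."""

    def __init__(self, dims, embed_dim=256):
        super().__init__()
        # One linear projection ("expert") per modality into a shared space.
        self.experts = nn.ModuleDict(
            {name: nn.Linear(d, embed_dim) for name, d in dims.items()}
        )
        # Learned scalars controlling how much each cue contributes.
        self.gates = nn.ParameterDict(
            {name: nn.Parameter(torch.zeros(1)) for name in dims}
        )

    def forward(self, feats):
        # feats: {modality name: (batch, dim) feature tensor}
        names = list(self.experts.keys())
        weights = F.softmax(torch.cat([self.gates[n] for n in names]), dim=0)
        fused = sum(
            w * F.normalize(self.experts[n](feats[n]), dim=-1)
            for w, n in zip(weights, names)
        )
        return F.normalize(fused, dim=-1)

def retrieval_scores(text_embs, video_embs):
    # Cosine similarities between query and clip embeddings;
    # ranking each row gives text-to-video retrieval.
    return F.normalize(text_embs, dim=-1) @ video_embs.T

# Example with made-up feature dimensions:
model = MultiModalVideoEmbedding({"face": 512, "speech": 768, "visual": 2048})
feats = {m: torch.randn(4, d) for m, d in
         [("face", 512), ("speech", 768), ("visual", 2048)]}
clips = model(feats)  # (4, 256) video embeddings
```

Gating the experts with softmax-normalised weights lets such a model discount unreliable cues, e.g. speech features for clips with little dialogue. The contextual extension in contribution (iii) would, on top of this, inject information from other clips of the same movie into each clip embedding before retrieval.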

Citation (APA)

Bain, M., Nagrani, A., Brown, A., & Zisserman, A. (2021). Condensed Movies: Story Based Retrieval with Contextual Embeddings. In Lecture Notes in Computer Science (Vol. 12626, pp. 460–479). Springer. https://doi.org/10.1007/978-3-030-69541-5_28
