Salient montages from unconstrained videos


Abstract

We present a novel method to generate salient montages from unconstrained videos, by finding "montageable moments" and identifying the salient people and actions to depict in each montage. Our method addresses the need for generating concise visualizations from the increasingly large number of videos being captured from portable devices. Our main contributions are (1) the process of finding salient people and moments to form a montage, and (2) the application of this method to videos taken "in the wild" where the camera moves freely. As such, we demonstrate results on head-mounted cameras, where the camera moves constantly, as well as on videos downloaded from YouTube. Our approach can operate on videos of any length; some will contain many montageable moments, while others may have none. We demonstrate that a novel "montageability" score can be used to retrieve results with relatively high precision which allows us to present high quality montages to users. © 2014 Springer International Publishing.
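The abstract mentions ranking candidate moments by a "montageability" score and keeping only high-scoring results to achieve high precision. As a rough illustration only — the paper's actual scoring model is not described here, and the `score_fn` and `threshold` names below are hypothetical — such score-based retrieval can be sketched as:

```python
def top_montages(moments, score_fn, threshold=0.5):
    """Rank candidate moments by a (hypothetical) montageability score
    and keep only those scoring above the threshold, highest first."""
    scored = [(score_fn(m), m) for m in moments]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [m for score, m in scored if score >= threshold]
```

Raising the threshold trades recall for precision, which matches the abstract's point that some videos yield many montageable moments and others none at all.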

Citation (APA)

Sun, M., Farhadi, A., Taskar, B., & Seitz, S. (2014). Salient montages from unconstrained videos. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8695 LNCS, pp. 472–488). Springer Verlag. https://doi.org/10.1007/978-3-319-10584-0_31
