With the growth of media streaming and consumer-level video creation, there is high demand for automatic video summarization systems. This paper proposes a bottom-up approach for the automatic generation of dynamic video summaries. Our approach integrates motion and saliency analysis with temporal slicing to extract features from the video and to identify candidate shots. A shot similarity measure is proposed for constructing dynamic summaries from the candidate shots. From a practical perspective, our main contribution is the design of a video summarization system that is independent of the video domain. We show that the system performs equally well on domains at opposite ends of the domain spectrum, namely professionally edited videos and egocentric videos, without any prior information about the video content.
CITATION
Dash, A., & Albu, A. B. (2017). A domain independent approach to video summarization. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10617 LNCS, pp. 431–442). Springer Verlag. https://doi.org/10.1007/978-3-319-70353-4_37