Event Video Mashup: From Hundreds of Videos to Minutes of Skeleton


Abstract

The explosive growth of video content on the Web has been revolutionizing the way people share, exchange, and perceive information about events. While an individual video usually covers only a specific aspect of an event, videos uploaded by different users at different locations and times carry different emphases and complement one another in describing the event; combining videos from these different sources can therefore unveil a more complete picture of it. Simply concatenating the videos is an intuitive solution, but it degrades the user experience, since viewing such highly redundant, noisy, and disorganized content is time-consuming and tedious. We therefore develop a novel approach, termed event video mashup (EVM), to automatically generate a unified short video from a collection of Web videos that describes the storyline of an event. We propose a submodular content selection model that accounts for both importance and diversity, so as to depict the event from comprehensive aspects in an efficient way. Importantly, the selected content is organized temporally and semantically to conform to the event's evolution. We evaluate our approach on a real-world YouTube event dataset that we collected ourselves, and extensive experimental results demonstrate the effectiveness of the proposed framework.
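The abstract does not spell out the objective function, but a common way to realize a submodular importance-plus-diversity trade-off is greedy maximization of a monotone submodular score, for which the classic greedy algorithm enjoys a (1 − 1/e) approximation guarantee. The sketch below is a minimal illustration under that assumption, not the paper's actual method; `greedy_select` and its inputs (per-shot `importance` scores, a pairwise `similarity` matrix, and a shot `budget`) are hypothetical names introduced here for illustration.

```python
import numpy as np

def greedy_select(importance, similarity, budget, tradeoff=0.5):
    """Greedily build a shot subset that trades off per-shot importance
    against coverage of the whole collection. The coverage term is a
    facility-location function, which is monotone submodular, so the
    greedy choice is provably near-optimal."""
    n = len(importance)
    selected = []
    covered = np.zeros(n)  # how well each shot is represented so far
    for _ in range(budget):
        best_i, best_gain = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            # Adding shot i lifts the representation of every shot j
            # to at least similarity[i, j].
            new_covered = np.maximum(covered, similarity[i])
            coverage_gain = new_covered.sum() - covered.sum()
            gain = tradeoff * importance[i] + (1.0 - tradeoff) * coverage_gain
            if gain > best_gain:
                best_i, best_gain = i, gain
        selected.append(best_i)
        covered = np.maximum(covered, similarity[best_i])
    return selected

# Toy usage: 5 shots with random importance and a symmetric similarity matrix.
rng = np.random.default_rng(0)
importance = rng.random(5)
sim = rng.random((5, 5))
sim = (sim + sim.T) / 2
np.fill_diagonal(sim, 1.0)
print(greedy_select(importance, sim, budget=2))
```

The redundancy penalty emerges naturally: once a shot is well covered by the selection, similar shots add little coverage gain, so diverse content wins out. Temporal and semantic ordering of the selected shots, as the abstract describes, would be a separate step after selection.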

Cite

APA

Gao, L., Wang, P., Song, J., Huang, Z., Shao, J., & Shen, H. T. (2017). Event video mashup: From hundreds of videos to minutes of skeleton. In 31st AAAI Conference on Artificial Intelligence, AAAI 2017 (pp. 1323–1330). AAAI Press. https://doi.org/10.1609/aaai.v31i1.10725
