Adaptive video summarization via robust representation and structured sparsity


Abstract

To enable faster browsing and more efficient content indexing of large video collections, video summarization has emerged as an important research area for the multimedia community. One mechanism for generating video summaries is to extract keyframes that represent the most important content of the video. However, problems such as image imperfection and noise interference still seriously degrade the performance of keyframe selection. To address these problems, we propose a linear reconstruction framework for video summarization. The first model in our framework seeks the most informative keyframes (base vectors) using the structured sparsity of ℓ2,1-norm regularization, so that every frame of a video can be represented as a linear combination of the selected keyframes. We further propose a more robust model that uses an ℓ2,1-norm based loss function to suppress outliers, combined with ℓ2,1-norm regularization to form a jointly sparse objective. For the optimization, we design an efficient algorithm for each of the two proposed models. Finally, extensive experiments on real-world video datasets demonstrate the effectiveness of the proposed framework.
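The core idea described in the abstract — selecting keyframes so that all frames are linearly reconstructed from them under an ℓ2,1-norm (row-sparse) penalty — can be sketched with a generic proximal-gradient solver. This is a minimal illustration of the ℓ2,1-regularized self-reconstruction technique, not the authors' exact algorithm; the function name, step-size choice, and ranking heuristic are assumptions for illustration.

```python
import numpy as np

def l21_keyframes(X, lam=1.0, n_iter=200):
    """Sketch of keyframe selection via l2,1-regularized self-reconstruction.

    Approximately solves  min_W ||X - X W||_F^2 + lam * sum_i ||W[i, :]||_2
    by proximal gradient descent. Rows of W with large l2 norms correspond
    to frames that help reconstruct the others (keyframe candidates).

    X : (d, n) array whose columns are per-frame feature vectors.
    Returns the coefficient matrix W and frame indices ranked by importance.
    """
    n = X.shape[1]
    G = X.T @ X                          # Gram matrix, shape (n, n)
    L = 2.0 * np.linalg.norm(G, 2)       # Lipschitz constant of the gradient
    step = 1.0 / L
    W = np.zeros((n, n))
    for _ in range(n_iter):
        grad = 2.0 * (G @ W - G)         # gradient of ||X - X W||_F^2
        V = W - step * grad
        # Row-wise soft-thresholding: proximal operator of the l2,1 norm,
        # which zeroes out entire rows (frames) with small contributions.
        norms = np.linalg.norm(V, axis=1, keepdims=True)
        scale = np.maximum(0.0, 1.0 - step * lam / np.maximum(norms, 1e-12))
        W = scale * V
    row_norms = np.linalg.norm(W, axis=1)
    return W, np.argsort(-row_norms)     # frames ranked by importance
```

The top-ranked indices would serve as keyframe candidates; the robust variant in the paper additionally replaces the Frobenius-norm loss with an ℓ2,1-norm loss to down-weight outlier frames, which this sketch does not implement.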

Citation (APA)

Sheng, M., Shi, J., Sun, D., Ding, Z., & Luo, B. (2020). Adaptive video summarization via robust representation and structured sparsity. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11691 LNAI, pp. 201–210). Springer. https://doi.org/10.1007/978-3-030-39431-8_19
