Transforming Multi-concept Attention into Video Summarization

4 citations · 29 Mendeley readers

Abstract

Video summarization is a challenging task in computer vision that aims to identify highlight frames or shots in a lengthy input video. In this paper, we propose a novel attention-based framework for summarizing complex video data. Unlike previous works, which apply the attention mechanism only to the correspondence between frames, our multi-concept video self-attention (MC-VSA) model identifies informative regions across temporal and concept video features, jointly exploiting context diversity over time and space for summarization. Together with the consistency between video and summary enforced in our framework, our model can be applied to both labeled and unlabeled data, making it preferable for real-world applications. Extensive experiments on two benchmarks demonstrate the effectiveness of our model both quantitatively and qualitatively, and confirm its superiority over state-of-the-art methods.
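The core idea described above, self-attention applied across time where each attention head can be read as a learned "concept," can be illustrated with a minimal sketch. The PyTorch code below is an assumption-laden illustration built on the standard multi-head self-attention layer; the class name `FrameSelfAttention`, the feature dimension, and the per-frame scoring head are hypothetical and do not reproduce the authors' MC-VSA implementation.

```python
# Hypothetical sketch of multi-head self-attention over frame features,
# loosely inspired by the paper's high-level description.
# NOT the authors' MC-VSA code.
import torch
import torch.nn as nn

class FrameSelfAttention(nn.Module):
    """Scores each frame of a video by attending over all frames.

    Each attention head forms its own attention pattern over time and can
    be interpreted as one "concept"; the heads are fused before scoring.
    """
    def __init__(self, feat_dim: int = 1024, num_heads: int = 8):
        super().__init__()
        # batch_first=True: inputs are shaped (batch, time, feat_dim)
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(feat_dim)
        self.score = nn.Linear(feat_dim, 1)  # per-frame importance logit

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, num_frames, feat_dim), e.g. CNN features per frame
        attended, _ = self.attn(frames, frames, frames)
        fused = self.norm(frames + attended)   # residual connection
        return self.score(fused).squeeze(-1)   # (batch, num_frames)

# Usage: 2 videos, 120 frames each, 1024-d features per frame
model = FrameSelfAttention()
scores = model(torch.randn(2, 120, 1024))
print(scores.shape)  # torch.Size([2, 120])
```

A summary would then be formed by selecting the highest-scoring frames or shots, subject to a length budget; how MC-VSA fuses concepts and enforces video-summary consistency is detailed in the full paper.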

Citation (APA)

Liu, Y. T., Li, Y. J., & Wang, Y. C. F. (2021). Transforming Multi-concept Attention into Video Summarization. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12626 LNCS, pp. 498–513). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-69541-5_30
