Weakly-supervised video moment retrieval via semantic completion network

116 citations · 63 Mendeley readers

Abstract

Video moment retrieval aims to localize the moment in a video that is most relevant to a given natural language query. Existing methods are mostly trained in a fully-supervised setting, which requires full temporal boundary annotations for each query. However, manually labeling these annotations is time-consuming and expensive. In this paper, we propose a novel weakly-supervised moment retrieval framework that requires only coarse video-level annotations for training. Specifically, we devise a proposal generation module that aggregates context information to generate and score all candidate proposals in a single pass. We then devise an algorithm that considers both exploitation and exploration to select the top-K proposals. Next, we build a semantic completion module to measure the semantic similarity between the selected proposals and the query, compute rewards, and provide feedback to the proposal generation module for scoring refinement. Experiments on the ActivityCaptions and Charades-STA datasets demonstrate the effectiveness of our proposed method.
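The top-K selection step described above balances exploitation (keeping the highest-scored proposals) with exploration (sampling lower-scored ones so the scorer still receives reward signal for them). The abstract does not specify the exact selection rule, so the following is only a minimal, hypothetical sketch of one such exploit/explore split; the function name, `explore_ratio` parameter, and sampling scheme are assumptions, not the paper's algorithm:

```python
import random

def select_proposals(scores, k, explore_ratio=0.5, rng=None):
    """Select k proposal indices: the best-scored ones (exploitation)
    plus a random sample from the remainder (exploration).

    This is a generic sketch, not the selection rule from the paper.
    """
    rng = rng or random.Random(0)
    n_exploit = max(1, int(k * (1 - explore_ratio)))
    # Rank all candidate proposals by their predicted score.
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    chosen = ranked[:n_exploit]                      # exploitation
    rest = ranked[n_exploit:]
    n_explore = min(k - n_exploit, len(rest))
    chosen += rng.sample(rest, n_explore)            # exploration
    return chosen

# Example: 5 candidate proposals with predicted scores, pick top-3.
picked = select_proposals([0.9, 0.1, 0.8, 0.3, 0.5], k=3)
```

During training, rewards from the semantic completion module would then refine the scores that feed this selection, gradually shifting probability mass toward well-aligned proposals.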

Citation (APA)

Lin, Z., Zhao, Z., Zhang, Z., Wang, Q., & Liu, H. (2020). Weakly-supervised video moment retrieval via semantic completion network. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 11539–11546). AAAI Press. https://doi.org/10.1609/aaai.v34i07.6820
