Context modulated dynamic networks for actor and action video segmentation with language queries

51 citations · 20 Mendeley readers
Abstract

Actor and action video segmentation with language queries aims to segment the objects in a video that a natural-language expression refers to. This task requires comprehensive language reasoning and fine-grained video understanding. Previous methods mainly leverage dynamic convolutional networks to match visual and semantic representations. However, dynamic convolution neglects spatial context when processing each region of the frame and thus struggles to segment similar objects in complex scenes. To address this limitation, we construct a context modulated dynamic convolutional network. Specifically, we propose a context modulated dynamic convolutional operation in which the kernels for a specific region are generated from both the language sentence and the surrounding context features. Moreover, we devise a temporal encoder that incorporates motion into the visual features to better match the query descriptions. Extensive experiments on two benchmark datasets, Actor-Action Dataset Sentences (A2D Sentences) and J-HMDB Sentences, demonstrate that our approach notably outperforms state-of-the-art methods.
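The core operation the abstract describes, generating a region-specific kernel from the language sentence plus the surrounding context, can be sketched as follows. This is an illustrative toy in NumPy, not the authors' implementation: the feature shapes, the average-pooled context window, and the linear kernel generator `W_gen` are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

C, H, W, D = 8, 6, 6, 16                 # visual channels, spatial size, language dim
feat = rng.standard_normal((C, H, W))    # visual feature map for one frame
lang = rng.standard_normal(D)            # sentence embedding from a text encoder

# Hypothetical kernel generator: maps [language; context] -> a 1x1 dynamic kernel.
W_gen = rng.standard_normal((C, D + C)) * 0.1

def local_context(feat, y, x, r=1):
    """Average-pool the (2r+1)x(2r+1) neighbourhood around position (y, x)."""
    y0, y1 = max(0, y - r), min(feat.shape[1], y + r + 1)
    x0, x1 = max(0, x - r), min(feat.shape[2], x + r + 1)
    return feat[:, y0:y1, x0:x1].mean(axis=(1, 2))

def context_modulated_response(feat, lang):
    """Per-position response with a kernel conditioned on language AND local context.

    Plain dynamic convolution would use the same language-derived kernel at
    every position; here the kernel changes with each region's surroundings.
    """
    out = np.zeros((feat.shape[1], feat.shape[2]))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            ctx = local_context(feat, y, x)
            kernel = W_gen @ np.concatenate([lang, ctx])  # region-specific kernel
            out[y, x] = kernel @ feat[:, y, x]            # 1x1 dynamic convolution
    return out

resp = context_modulated_response(feat, lang)
mask = 1.0 / (1.0 + np.exp(-resp))   # sigmoid -> soft segmentation mask
```

Because the context vector differs from region to region, two visually similar objects in different surroundings receive different kernels, which is the property the paper argues plain dynamic convolution lacks.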

Citation (APA)

Wang, H., Deng, C., Ma, F., & Yang, Y. (2020). Context modulated dynamic networks for actor and action video segmentation with language queries. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 12152–12159). AAAI press. https://doi.org/10.1609/aaai.v34i07.6895
