CATR: Combinatorial-Dependence Audio-Queried Transformer for Audio-Visual Video Segmentation

Abstract

Audio-visual video segmentation (AVVS) aims to generate pixel-level maps of sound-producing objects within image frames and to ensure that the maps faithfully adhere to the given audio, such as identifying and segmenting a singing person in a video. However, existing methods exhibit two limitations: 1) they address video temporal features and audio-visual interactive features separately, disregarding the inherent spatial-temporal dependence of combined audio and video, and 2) they inadequately introduce audio constraints and object-level information during the decoding stage, resulting in segmentation outcomes that fail to comply with audio directives. To tackle these issues, we propose a decoupled audio-video transformer that combines audio and video features along their respective temporal and spatial dimensions, capturing their combined dependence. To optimize memory consumption, we design a block which, when stacked, captures audio-visual fine-grained combinatorial dependence in a memory-efficient manner. Additionally, we introduce audio-constrained queries during the decoding phase. These queries contain rich object-level information, ensuring that the decoded masks adhere to the audio. Experimental results confirm our approach's effectiveness, with our framework achieving new state-of-the-art performance on all three datasets with two backbones. The code is available at https://github.com/aspirinone/CATR.github.io.
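The abstract's central decoding idea — object-level queries conditioned on the audio signal before cross-attending to visual features — can be illustrated with a minimal sketch. This is a hypothetical simplification for intuition only, not the authors' implementation: the function name `audio_queried_decode`, the additive conditioning, and all shapes are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def audio_queried_decode(obj_queries, audio_emb, visual_feats):
    """Sketch of audio-constrained query decoding (illustrative only).

    obj_queries:  (Q, D)  learnable object-level queries
    audio_emb:    (D,)    pooled audio feature for the current frame
    visual_feats: (HW, D) flattened per-pixel visual features

    Each query is conditioned on the audio embedding, then cross-attends
    to the visual features; mask logits come from query-pixel similarity.
    """
    queries = obj_queries + audio_emb  # inject the audio constraint
    attn = softmax(queries @ visual_feats.T / np.sqrt(queries.shape[1]))
    out = attn @ visual_feats          # (Q, D) audio-aware query features
    masks = out @ visual_feats.T       # (Q, HW) per-query mask logits
    return out, masks

rng = np.random.default_rng(0)
Q, D, HW = 4, 8, 16
out, masks = audio_queried_decode(rng.normal(size=(Q, D)),
                                  rng.normal(size=D),
                                  rng.normal(size=(HW, D)))
print(out.shape, masks.shape)  # (4, 8) (4, 16)
```

In a real transformer decoder the additive conditioning would be replaced by learned projections and multi-head attention stacked over several layers, but the flow — audio constrains the queries, queries read out the pixels — is the same.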


Citation (APA)

Li, K., Yang, Z., Chen, L., Yang, Y., & Xiao, J. (2023). CATR: Combinatorial-Dependence Audio-Queried Transformer for Audio-Visual Video Segmentation. In MM 2023 - Proceedings of the 31st ACM International Conference on Multimedia (pp. 1485–1494). Association for Computing Machinery, Inc. https://doi.org/10.1145/3581783.3611724

