Comprehensive Multi-Modal Interactions for Referring Image Segmentation


Abstract

We investigate Referring Image Segmentation (RIS), which outputs a segmentation map corresponding to a natural language description. Addressing RIS effectively requires modeling both the interactions across the visual and linguistic modalities and the interactions within each modality. Existing methods are limited because they either compute the different forms of interaction sequentially (leading to error propagation) or ignore intra-modal interactions. We address this limitation by performing all three interactions simultaneously through a Synchronous Multi-Modal Fusion Module (SFM). Moreover, to produce refined segmentation masks, we propose a novel Hierarchical Cross-Modal Aggregation Module (HCAM), in which linguistic features facilitate the exchange of contextual information across the visual hierarchy. We present thorough ablation studies and validate our approach on four benchmark datasets, showing considerable gains over existing state-of-the-art (SOTA) methods.
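The core idea behind synchronous fusion can be illustrated with a minimal sketch: if visual and linguistic features are projected to a shared dimension and concatenated into one token sequence, a single joint self-attention pass computes intra-visual, intra-linguistic, and cross-modal interactions simultaneously, rather than in separate sequential stages. The module below is an illustrative assumption, not the paper's exact SFM; all names and dimensions are hypothetical.

```python
import torch
import torch.nn as nn

class SynchronousFusionSketch(nn.Module):
    """Hypothetical sketch of simultaneous multi-modal interaction.

    Joint self-attention over the concatenated visual and linguistic
    tokens covers all three interaction types in one step:
    visual-visual, word-word, and visual-word (both directions).
    """

    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, visual: torch.Tensor, linguistic: torch.Tensor):
        # visual:     (B, Nv, dim) flattened spatial image features
        # linguistic: (B, Nl, dim) word embeddings of the referring expression
        tokens = torch.cat([visual, linguistic], dim=1)  # (B, Nv + Nl, dim)
        fused, _ = self.attn(tokens, tokens, tokens)     # all pairwise interactions at once
        nv = visual.size(1)
        return fused[:, :nv], fused[:, nv:]              # split back per modality

# Toy usage with a 14x14 feature map and a 10-word expression
fusion = SynchronousFusionSketch()
v = torch.randn(2, 196, 64)
l = torch.randn(2, 10, 64)
fv, fl = fusion(v, l)
```

Because every token attends to every other token in one pass, no single interaction's errors can propagate into the next stage, which is the stated motivation for avoiding sequential fusion.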

Citation (APA)

Jain, K., & Gandhi, V. (2022). Comprehensive Multi-Modal Interactions for Referring Image Segmentation. In Findings of the Association for Computational Linguistics: ACL 2022 (pp. 3427–3435). Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.findings-acl.270
