Feature reintegration over differential treatment: A top-down and adaptive fusion network for RGB-D salient object detection


Abstract

Most methods for RGB-D salient object detection (SOD) apply the same fusion strategy at every level to explore cross-modal complementary information. However, this ignores the fact that the two modalities contribute differently to the prediction at different levels. In this paper, we propose a novel top-down multi-level fusion structure in which different fusion strategies handle low-level and high-level features: an interweave fusion module (IFM) effectively integrates global information, while a gated select fusion module (GSFM) discriminatively selects useful local information from the RGB and depth data and filters out what is unnecessary. Moreover, we propose an adaptive fusion module (AFM) that reintegrates the fused cross-modal features of each level to predict a more accurate result. Comprehensive experiments on 7 challenging benchmark datasets demonstrate that our method achieves competitive performance against 14 state-of-the-art RGB-D methods.
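The gating idea behind a module like the GSFM can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function name, feature shapes, and single-layer gate are illustrative assumptions. A sigmoid gate, computed from the concatenated RGB and depth features, weights each modality's contribution per channel, so the fused feature selectively keeps information from whichever modality the gate favors.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(f_rgb, f_depth, w, b):
    """Illustrative gated cross-modal fusion (not the paper's exact GSFM).

    A gate in (0, 1), predicted from the concatenated features, forms a
    per-channel convex combination of the RGB and depth features.
    """
    concat = np.concatenate([f_rgb, f_depth], axis=-1)  # (N, 2C)
    gate = sigmoid(concat @ w + b)                      # (N, C), values in (0, 1)
    return gate * f_rgb + (1.0 - gate) * f_depth        # (N, C)

# Toy example: C = 4 channels at N = 2 spatial positions.
rng = np.random.default_rng(0)
f_rgb = rng.standard_normal((2, 4))
f_depth = rng.standard_normal((2, 4))
w = rng.standard_normal((8, 4)) * 0.1  # hypothetical learned gate weights
b = np.zeros(4)

fused = gated_fusion(f_rgb, f_depth, w, b)
print(fused.shape)  # (2, 4)
```

Because the gate is a convex combination, each fused value lies between the corresponding RGB and depth feature values; in the full network such gates would be learned end-to-end at each low-level fusion stage.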

Citation (APA)

Zhang, M., Zhang, Y., Piao, Y., Hu, B., & Lu, H. (2020). Feature reintegration over differential treatment: A top-down and adaptive fusion network for RGB-D salient object detection. In MM 2020 - Proceedings of the 28th ACM International Conference on Multimedia (pp. 4107–4115). Association for Computing Machinery, Inc. https://doi.org/10.1145/3394171.3413969
