Adaptive Fusion for RGB-D Salient Object Detection


Abstract

RGB-D (red, green, blue, and depth) salient object detection aims to identify the most visually distinctive objects in a pair of color and depth images. Based on the observation that most salient objects stand out in at least one modality, this paper proposes an adaptive fusion scheme that fuses the saliency predictions generated from the two modalities. Specifically, we design a two-stream convolutional neural network (CNN) in which each stream extracts features and predicts a saliency map from either the RGB or the depth modality. A saliency fusion module then learns a switch map that is used to adaptively fuse the two predicted saliency maps. A loss function composed of saliency supervision, switch map supervision, and an edge-preserving constraint is designed to provide full supervision, and the entire network is trained in an end-to-end manner. Benefiting from the adaptive fusion strategy and the edge-preserving constraint, our approach outperforms state-of-the-art methods on three publicly available datasets.
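As a concrete illustration of the fusion rule the abstract describes, the sketch below implements per-pixel weighting of the two saliency maps by a learned switch map, fused = w * S_rgb + (1 - w) * S_depth. This is a minimal PyTorch sketch, not the authors' implementation: the small convolutional stacks stand in for the paper's full CNN streams, and all layer widths and module names are illustrative assumptions.

# Minimal sketch of adaptive fusion with a learned switch map
# (not the authors' code; architecture details are assumptions).
import torch
import torch.nn as nn

class AdaptiveFusionSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # One prediction stream per modality (RGB: 3 channels, depth: 1).
        self.rgb_stream = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1))
        self.depth_stream = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1))
        # Fusion module: predicts a per-pixel switch map from the two maps.
        self.switch = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, rgb, depth):
        s_rgb = torch.sigmoid(self.rgb_stream(rgb))      # RGB saliency map
        s_dep = torch.sigmoid(self.depth_stream(depth))  # depth saliency map
        # Switch map w in [0, 1]; fused = w * S_rgb + (1 - w) * S_depth,
        # letting the network favor whichever modality is reliable per pixel.
        w = torch.sigmoid(self.switch(torch.cat([s_rgb, s_dep], dim=1)))
        fused = w * s_rgb + (1.0 - w) * s_dep
        return fused, s_rgb, s_dep, w

# Example: one 224x224 RGB-D pair.
rgb = torch.randn(1, 3, 224, 224)
depth = torch.randn(1, 1, 224, 224)
fused, s_rgb, s_dep, w = AdaptiveFusionSketch()(rgb, depth)

In training, per the abstract, the loss would supervise s_rgb, s_dep, and fused against the ground-truth saliency map, supervise w with a switch-map target, and add an edge-preserving term; those losses are omitted from this sketch.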

Cite

APA

Wang, N., & Gong, X. (2019). Adaptive fusion for RGB-D salient object detection. IEEE Access, 7, 55277–55284. https://doi.org/10.1109/ACCESS.2019.2913107
