Visual saliency detection for RGB-D images under a Bayesian framework

Abstract

In this paper, we propose a saliency detection model for RGB-D images based on deep features of the RGB and depth images within a Bayesian framework. To analyse 3D saliency, the class-conditional mutual information is computed to measure the dependence between the deep features extracted from the RGB and depth images with a convolutional neural network; the posterior probability of RGB-D saliency is then formulated by applying Bayes’ theorem. Assuming the deep features follow Gaussian distributions, a discriminative mixed-membership naive Bayes (DMNB) model is used to compute the final saliency map, with the Gaussian distribution parameters estimated by a variational inference-based expectation maximization algorithm. Experimental results on RGB-D images from the NLPR and NJU-DS400 datasets show that the proposed model outperforms existing models.
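
As a rough illustration of the Bayesian fusion step described in the abstract, the sketch below computes a saliency posterior for one image region from its RGB and depth deep features under Gaussian class-conditional likelihoods. It is not the authors' DMNB model or their variational EM procedure: the conditional-independence (naive Bayes) simplification, the diagonal covariances, the 64-dimensional toy features, and all function names are assumptions made here for illustration only.

```python
# Minimal sketch (not the paper's DMNB implementation): Bayesian fusion of
# RGB and depth deep features under Gaussian class-conditional likelihoods,
# with a conditional-independence simplification between the two modalities.
import numpy as np
from scipy.stats import multivariate_normal
from scipy.special import logsumexp


def fit_gaussian(features):
    """Estimate mean and diagonal covariance from rows of feature vectors."""
    mean = features.mean(axis=0)
    var = features.var(axis=0) + 1e-6  # variance floor for numerical stability
    return mean, np.diag(var)


def posterior_saliency(f_rgb, f_depth, params, prior_salient=0.5):
    """P(salient | f_rgb, f_depth) via Bayes' rule, treating the RGB and depth
    deep features as conditionally independent given the class label."""
    log_s = (np.log(prior_salient)
             + multivariate_normal.logpdf(f_rgb, *params["rgb_salient"])
             + multivariate_normal.logpdf(f_depth, *params["depth_salient"]))
    log_b = (np.log(1.0 - prior_salient)
             + multivariate_normal.logpdf(f_rgb, *params["rgb_background"])
             + multivariate_normal.logpdf(f_depth, *params["depth_background"]))
    # Normalize in log space to avoid underflow for high-dimensional features.
    return np.exp(log_s - logsumexp([log_s, log_b]))


# Toy usage: random 64-D vectors stand in for CNN features of one region.
rng = np.random.default_rng(0)
params = {
    "rgb_salient": fit_gaussian(rng.normal(1.0, 1.0, (200, 64))),
    "rgb_background": fit_gaussian(rng.normal(0.0, 1.0, (200, 64))),
    "depth_salient": fit_gaussian(rng.normal(1.0, 1.0, (200, 64))),
    "depth_background": fit_gaussian(rng.normal(0.0, 1.0, (200, 64))),
}
print(posterior_saliency(rng.normal(1.0, 1.0, 64), rng.normal(1.0, 1.0, 64), params))
```

Applying this posterior to every pixel or region yields a saliency map; the paper replaces the naive Bayes assumption made here with the DMNB model and estimates the Gaussian parameters by variational EM.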

Citation (APA)

Wang, S., Zhou, Z., Jin, W., & Qu, H. (2018). Visual saliency detection for RGB-D images under a Bayesian framework. IPSJ Transactions on Computer Vision and Applications, 10(1). https://doi.org/10.1186/s41074-017-0037-0
