Visual saliency detection for RGB-D images with generative model


Abstract

In this paper, we propose a saliency detection model for RGB-D images based on the contrasting features of colour and depth with a generative mixture model. The depth feature map is extracted based on superpixel contrast computation with spatial priors. We model the depth saliency map by approximating the density of depth-based contrast features using a Gaussian distribution. Similar to the depth saliency computation, the colour saliency map is computed using a Gaussian distribution based on multi-scale contrasts in superpixels by exploiting low-level cues. By assuming that colour- and depth-based contrast features are conditionally independent given the classes, a discriminative mixed-membership naive Bayes (DMNB) model is used to calculate the final saliency map from the depth saliency and colour saliency probabilities by applying Bayes' theorem. The Gaussian distribution parameters can be estimated in the DMNB model by using a variational inference-based expectation maximization algorithm. The experimental results on a recent eye tracking database show that the proposed model performs better than other existing models.
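The conditional-independence assumption above means the two feature likelihoods can be combined multiplicatively under Bayes' theorem. The following is a minimal sketch of that naive-Bayes fusion step, not the paper's full DMNB model: the per-class Gaussian parameters here are hypothetical placeholders, whereas in the paper they would be estimated via variational-EM inference.

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    # Gaussian density used to model a contrast-feature likelihood
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def fuse_saliency(depth_contrast, color_contrast, params, prior_salient=0.5):
    """Naive-Bayes fusion of depth and colour contrast features.

    Assuming the features are conditionally independent given the class
    (salient vs. background), the class-conditional likelihood factorizes
    into a product of per-feature Gaussians. `params` holds hypothetical
    per-class (mu, sigma) pairs, standing in for DMNB-estimated values.
    """
    like_salient = (gaussian_pdf(depth_contrast, *params["salient_depth"]) *
                    gaussian_pdf(color_contrast, *params["salient_color"]))
    like_background = (gaussian_pdf(depth_contrast, *params["background_depth"]) *
                       gaussian_pdf(color_contrast, *params["background_color"]))
    # Posterior P(salient | depth, colour) by Bayes' theorem
    post = prior_salient * like_salient
    return post / (post + (1 - prior_salient) * like_background)

# Toy example with hypothetical parameters (not taken from the paper)
params = {
    "salient_depth": (0.8, 0.2), "salient_color": (0.7, 0.2),
    "background_depth": (0.2, 0.2), "background_color": (0.3, 0.2),
}
d = np.array([0.9, 0.1])   # superpixel depth-contrast features
c = np.array([0.8, 0.2])   # superpixel colour-contrast features
sal = fuse_saliency(d, c, params)  # per-superpixel saliency probability
```

A superpixel whose depth and colour contrasts both sit near the salient-class means receives a posterior close to 1, while one matching the background-class means is suppressed toward 0.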

Citation (APA)

Wang, S. T., Zhou, Z., Qu, H. B., & Li, B. (2017). Visual saliency detection for RGB-D images with generative model. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10115 LNCS, pp. 20–35). Springer Verlag. https://doi.org/10.1007/978-3-319-54193-8_2
