Cellular automata based on occlusion relationship for saliency detection


Abstract

Unlike traditional 2D images, 4D light field images capture scene structure information and have been shown to support more accurate saliency estimation. Instead of estimating depth or exploiting the light field's unique refocusing capability, we propose to derive the occlusion relationship directly from the raw image and use it for saliency detection. The occlusion relationship is computed from the Epipolar Plane Images (EPIs) of the raw light field image and indicates whether a region is more likely foreground or background. By analyzing occlusion relationships in the scene, true object boundaries can be distinguished from surface textures, which helps to segment objects completely. Moreover, we assume that non-occluded objects are more likely to be foreground, while objects occluded by many other objects are background. The occlusion relationship is then integrated into a modified saliency detection framework to obtain the salient regions. Experimental results demonstrate that the occlusion relationship improves saliency detection accuracy, and that the proposed method achieves significantly higher accuracy and robustness than state-of-the-art light field saliency detection methods.
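The occlusion reasoning the abstract describes rests on a standard light-field fact: in an EPI, each scene point traces a line whose slope is proportional to its disparity, and larger disparity means the point is nearer the camera, so where two EPI lines cross, the nearer line stays visible. The following is a minimal illustrative sketch of that ordering and of the abstract's foreground assumption; the `occludes` and `foreground_prior` helpers are our own simplifications, not the authors' implementation.

```python
import numpy as np

def occludes(slope_a, slope_b):
    """Return True if the region tracing EPI slope `slope_a` occludes the
    region tracing `slope_b`.

    Toy rule: EPI-line slope is proportional to disparity, and the
    larger-disparity (nearer) surface remains visible where the two
    lines intersect.  Sign conventions vary by parameterization, so we
    compare magnitudes here (an assumption of this sketch).
    """
    return abs(slope_a) > abs(slope_b)

def foreground_prior(occlusion_counts):
    """Map per-region 'times occluded' counts to a foreground likelihood.

    Following the abstract's assumption: a region occluded by few others
    scores high (likely foreground); a region occluded by many others
    scores low (likely background).  The 1/(1+c) mapping is illustrative.
    """
    c = np.asarray(occlusion_counts, dtype=float)
    return 1.0 / (1.0 + c)
```

For example, a region whose EPI lines have slope 2.0 would occlude one with slope 0.5, and a never-occluded region receives the maximum foreground prior of 1.0.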


APA

Sheng, H., Feng, W., & Zhang, S. (2016). Cellular automata based on occlusion relationship for saliency detection. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9983 LNAI, pp. 28–39). Springer Verlag. https://doi.org/10.1007/978-3-319-47650-6_3
