In this paper, we present a new, easy-to-generate method capable of precisely matting salient objects in a large-scale image set in an unsupervised way. Our method extracts only the salient object, without the user-specified constraints or manual thresholding of the saliency map that are essentially required by image matting and saliency-map-based segmentation, respectively. To obtain a visual saliency that responds in a more balanced way to both local features and global contrast, we propose a new coupled saliency map based on a linearly combined conspicuity map. We also introduce an adaptive trimap, a refined segmentation of the coupled saliency map, for more precise object extraction. The proposed method improves segmentation performance compared to image matting based on two existing saliency detection measures. Numerical experiments and visual comparisons on a large-scale real image set confirm the effectiveness of the proposed method.
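The pipeline the abstract describes, a linearly combined conspicuity map followed by an adaptive trimap, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the mixing weight `alpha` and the threshold offset `k * std` are hypothetical choices, and the function names are our own.

```python
import numpy as np

def coupled_saliency(local_map, global_map, alpha=0.5):
    """Linearly combine a local-feature conspicuity map and a
    global-contrast conspicuity map into one coupled saliency map.
    `alpha` is an illustrative mixing weight, not a value from the paper."""
    def normalize(m):
        m = m.astype(np.float64)
        rng = m.max() - m.min()
        return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)
    return alpha * normalize(local_map) + (1.0 - alpha) * normalize(global_map)

def adaptive_trimap(saliency, k=0.5):
    """Derive a trimap (0 = background, 128 = unknown, 255 = foreground)
    from the coupled saliency map using image-adaptive thresholds around
    the mean saliency; the `k * std` offset is purely illustrative."""
    mu, sigma = saliency.mean(), saliency.std()
    trimap = np.full(saliency.shape, 128, dtype=np.uint8)
    trimap[saliency >= mu + k * sigma] = 255  # confident foreground
    trimap[saliency <= mu - k * sigma] = 0    # confident background
    return trimap
```

The resulting trimap would then be handed to any alpha-matting solver in place of a user-drawn one, which is how the method avoids manual interaction.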
Kim, J., & Park, J. (2015). Unsupervised salient object matting. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9386, pp. 752–763). Springer Verlag. https://doi.org/10.1007/978-3-319-25903-1_65