Co-saliency detection aims at discovering the common and salient objects in multiple images. It exploits not only intra-image but also inter-image visual cues, and hence compensates for the shortcomings of single-image saliency detection. The performance of co-saliency detection substantially relies on the explored visual cues. However, the optimal cues typically vary from region to region. To address this issue, we develop an approach that detects co-salient objects by region-wise saliency map fusion. Specifically, our approach takes intra-image appearance, inter-image correspondence, and spatial consistency into account, and accomplishes saliency detection through locally adaptive saliency map fusion by solving an energy optimization problem over a graph. It is evaluated on a benchmark dataset and compared to state-of-the-art methods. The promising results demonstrate its effectiveness and superiority.
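To illustrate the general idea of locally adaptive fusion via graph-based energy minimization, the sketch below fuses several candidate saliency maps with region-wise weights. The specific energy, cue terms, and function names (e.g., `fuse_saliency_maps`) are assumptions for illustration, not the paper's actual formulation: a unary term encourages weights that agree with a per-region consensus, while a Laplacian pairwise term enforces spatial consistency between adjacent regions.

```python
import numpy as np

def fuse_saliency_maps(region_saliency, edges, lam=1.0):
    """Hypothetical sketch of locally adaptive saliency map fusion.

    region_saliency : (N, K) array, saliency of each of N regions under K
                      candidate saliency maps/cues.
    edges           : list of (i, j) pairs forming the region adjacency graph.
    lam             : weight of the pairwise (spatial consistency) term.

    Returns an (N,) array of fused region saliency values.
    """
    N, K = region_saliency.shape

    # Unary preference: favor maps that agree with the per-region consensus.
    consensus = region_saliency.mean(axis=1, keepdims=True)   # (N, 1)
    agreement = -np.abs(region_saliency - consensus)          # (N, K)
    U = np.exp(agreement)                                     # initial weights
    U /= U.sum(axis=1, keepdims=True)

    # Graph Laplacian encoding the spatial-consistency (pairwise) term.
    L = np.zeros((N, N))
    for i, j in edges:
        L[i, i] += 1.0
        L[j, j] += 1.0
        L[i, j] -= 1.0
        L[j, i] -= 1.0

    # Minimize sum_i ||w_i - u_i||^2 + lam * sum_{(i,j)} ||w_i - w_j||^2,
    # whose closed-form minimizer satisfies (I + lam * L) W = U.
    W = np.linalg.solve(np.eye(N) + lam * L, U)
    W = np.clip(W, 0.0, None)
    W /= W.sum(axis=1, keepdims=True)

    # Locally adaptive fusion: each region mixes the K maps with its own weights.
    return (W * region_saliency).sum(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sal = rng.random((5, 3))            # 5 regions, 3 candidate saliency cues
    adj = [(0, 1), (1, 2), (2, 3), (3, 4)]
    print(fuse_saliency_maps(sal, adj))
```

Because the relaxed energy is quadratic, the region-wise weights have a closed-form solution via a single linear solve; the actual method may instead use a discrete or more elaborate optimization over the graph.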
Tsai, C. C., Qian, X., & Lin, Y. Y. (2017). Image co-saliency detection via locally adaptive saliency map fusion. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings (pp. 1897–1901). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICASSP.2017.7952486