Top-down saliency with locality-constrained contextual sparse coding

Abstract

We propose a sparse-coding-based framework for top-down salient object detection that integrates three locality constraints. First is the spatial, or contextual, locality constraint, in which features from adjacent regions receive similar codes; second is the feature-domain locality constraint, in which similar features receive similar codes; and third is the category-domain locality constraint, in which features are coded using similar atoms from each partition of the dictionary, where each partition corresponds to an object category. This faster coding strategy produces better saliency maps than conventional sparse coding. The proposed codes are max-pooled over a spatial neighborhood to estimate saliency. Despite its simplicity, the proposed top-down saliency achieves state-of-the-art patch-level results on two challenging datasets, Graz-02 and PASCAL VOC-07. A novel Gaussian-weighted interpolation further improves the pixel-level saliency map derived from the patch-level map.
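To make the coding step concrete, the following is a minimal sketch of feature-domain locality-constrained coding followed by spatial max-pooling, in the spirit of the abstract. It is not the authors' exact formulation: the `llc_code` and `max_pool` helpers, the analytic local least-squares solution, and all parameter values are illustrative assumptions (the paper's contextual and category-domain constraints are omitted here).

```python
import numpy as np

def llc_code(x, D, k=5, beta=1e-4):
    """Hypothetical locality-constrained coder: approximate feature x
    using only its k nearest dictionary atoms (feature-domain locality),
    so similar features are coded with similar atoms.
    x: (dim,) feature; D: (n_atoms, dim) dictionary."""
    # Feature-domain locality: restrict coding to the k closest atoms.
    dists = np.linalg.norm(D - x, axis=1)
    idx = np.argsort(dists)[:k]
    B = D[idx]                              # (k, dim) local base
    # Analytic solution of min ||x - B.T @ w||^2 s.t. sum(w) = 1,
    # with a small ridge term beta for numerical stability.
    z = B - x                               # shift atoms to the feature
    C = z @ z.T + beta * np.eye(k)          # regularized local covariance
    w = np.linalg.solve(C, np.ones(k))
    w /= w.sum()                            # enforce the sum-to-one constraint
    code = np.zeros(D.shape[0])
    code[idx] = w                           # sparse: only k atoms are active
    return code

def max_pool(codes):
    """Max-pool a stack of patch codes over a spatial neighborhood,
    keeping the strongest response per dictionary atom."""
    return np.max(codes, axis=0)
```

Because each feature is coded over only its k nearest atoms, the per-feature cost is a small k-by-k solve rather than a full sparse-coding optimization, which is what makes this style of coding fast.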

Citation (APA)
Cholakkal, H., Rajan, D., & Johnson, J. (2015). Top-down saliency with locality-constrained contextual sparse coding. In 26th British Machine Vision Conference, BMVC 2015 (Vol. 2015-September, pp. 1–12). British Machine Vision Conference, BMVC. https://doi.org/10.5244/C.29.159
