Top down saliency estimation via superpixel-based discriminative dictionaries

Abstract

Predicting where humans look in images has gained significant popularity in recent years. In this work, we present a novel method for learning top-down visual saliency, which is well-suited to locating objects of interest in complex scenes. During training, we jointly learn a superpixel-based class-specific dictionary and a Conditional Random Field (CRF). While using such a discriminative dictionary helps to distinguish target objects from the background, performing the computations at the superpixel level improves the accuracy of object localization. Experimental results on the Graz-02 and PASCAL VOC 2007 datasets show that the proposed approach achieves state-of-the-art results and provides much better saliency maps.
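The abstract's core idea is that a class-specific (foreground) dictionary reconstructs target superpixels better than a background dictionary, so reconstruction error can drive a per-superpixel saliency score. The sketch below is not the paper's actual pipeline (which jointly learns the dictionary and a CRF); it is a minimal illustration of that reconstruction-error intuition, with hypothetical pre-learned dictionaries `D_fg` and `D_bg` (columns are unit-norm atoms) and a greedy matching-pursuit coder.

```python
import numpy as np

def sparse_code(x, D, k=3):
    """Greedy (orthogonal matching pursuit style) sparse coding of
    feature vector x over dictionary D with at most k atoms."""
    residual = x.copy()
    idx = []
    coef = np.zeros(0)
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in idx:
            idx.append(j)
        # Re-fit coefficients of all selected atoms jointly.
        coef, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)
        residual = x - D[:, idx] @ coef
    code = np.zeros(D.shape[1])
    code[idx] = coef
    return code

def saliency_score(x, D_fg, D_bg, k=3):
    """Saliency of one superpixel feature x: how much better the
    foreground dictionary reconstructs it than the background one."""
    e_fg = np.linalg.norm(x - D_fg @ sparse_code(x, D_fg, k))
    e_bg = np.linalg.norm(x - D_bg @ sparse_code(x, D_bg, k))
    return e_bg - e_fg  # positive => likely foreground
```

In the full method these raw scores would be unary potentials refined by a CRF over neighboring superpixels; here they are only the dictionary-competition part of that story.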

Citation (APA)

Kocak, A., Cizmeciler, K., Erdem, A., & Erdem, E. (2014). Top down saliency estimation via superpixel-based discriminative dictionaries. In BMVC 2014 - Proceedings of the British Machine Vision Conference 2014. British Machine Vision Association, BMVA. https://doi.org/10.5244/c.28.73
