Advances in learning visual saliency: From image primitives to semantic contents

Abstract

Humans and other primates shift their gaze to allocate processing resources to a subset of the visual input. Understanding and emulating how human observers free-view a natural scene has both scientific and economic impact. While previous research on saliency focused on low-level image features, the "semantic gap" problem has recently attracted attention from vision researchers, and higher-level features have been proposed to bridge the gap. Building on such features, machine learning has become a popular computational tool for mining human fixation data to explore how people direct their gaze when inspecting a visual scene. While learning saliency consistently boosts the performance of a saliency model, insight into what is learned inside the black box is also of great interest to both the human vision and computer vision communities. This chapter introduces recent advances in features that determine saliency, reviews related learning methods and the insights drawn from learning outcomes, and discusses resources and metrics for saliency prediction.
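The metrics the abstract closes on can be made concrete with a small sketch. One widely used score in saliency evaluation is AUC-Judd, which measures how well a saliency map's values separate human fixation locations from the rest of the image. The code below is an illustrative approximation, not code from the chapter; the function name, the random test data, and the simple tie handling are assumptions made for this sketch.

import numpy as np

def auc_judd(saliency_map, fixation_map):
    """Approximate AUC-Judd: fixated pixels are positives, all other
    pixels are negatives; sweep a threshold over the saliency values
    observed at the fixated locations and integrate the ROC curve.
    Assumes fixation_map contains at least one fixation."""
    s = saliency_map.ravel().astype(float)
    f = fixation_map.ravel().astype(bool)
    # Normalize saliency to [0, 1] so thresholds are comparable.
    s = (s - s.min()) / (s.max() - s.min() + 1e-12)
    thresholds = np.sort(s[f])[::-1]          # saliency at fixations, descending
    n_fix, n_pix = int(f.sum()), s.size
    tp, fp = [0.0], [0.0]
    for i, t in enumerate(thresholds, start=1):
        above = int((s >= t).sum())           # pixels at or above threshold
        tp.append(i / n_fix)                  # true-positive rate
        fp.append((above - i) / (n_pix - n_fix))  # false-positive rate
    tp.append(1.0)
    fp.append(1.0)
    return np.trapz(tp, fp)                   # area under the ROC curve

# Usage with synthetic data: a random map scores near chance (0.5);
# a map ranking every fixated pixel above every other pixel scores 1.0.
rng = np.random.default_rng(0)
sal = rng.random((48, 64))                    # hypothetical saliency map
fix = np.zeros((48, 64), dtype=bool)
fix[rng.integers(0, 48, size=20), rng.integers(0, 64, size=20)] = True
print(f"AUC-Judd for a random map: {auc_judd(sal, fix):.3f}")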

Citation (APA)

Zhao, Q., & Koch, C. (2014). Advances in learning visual saliency: From image primitives to semantic contents. In Neural Computation, Neural Devices, and Neural Prosthesis (pp. 335–360). Springer New York. https://doi.org/10.1007/978-1-4614-8151-5_14
