A Deeper Look at Human Visual Perception of Images


Abstract

How would one describe an image? Interesting? Pleasant? Aesthetic? A number of studies have classified images with respect to these attributes. A common approach is to link lower-level image features with higher-level properties and to train a computational model to perform classification using human-annotated ground truth. Although these studies produce algorithms with reasonable prediction performance, they provide few insights into why and how the algorithms work. The current study focuses on how multiple visual factors affect human perception of digital images. We extend an existing dataset with quantitative measures of human perception for 31 image attributes under 6 viewing conditions: images that are intact, inverted, grayscale, inverted and grayscale, and images showing mainly low- or high-spatial-frequency information. Statistical analyses indicate that holistic cues, color information, semantics, and saliency vary in importance across different types of attributes. Building on these insights, we construct an empirical model of human image perception. Motivated by the empirical model, we design computational models that predict high-level image attributes. Extensive experiments demonstrate that understanding human visual perception helps create better computational models.
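The six viewing conditions named in the abstract (intact, inverted, grayscale, inverted grayscale, low spatial frequency, high spatial frequency) can be approximated with standard image operations. The sketch below is a minimal illustration, not the authors' actual stimulus pipeline: the condition names, the frequency cutoff, and the FFT-based low-pass filter are all illustrative assumptions.

```python
import numpy as np

def lowpass(channel, cutoff):
    """Keep spatial frequencies below `cutoff` (fraction of the sampling rate)."""
    h, w = channel.shape
    spectrum = np.fft.fftshift(np.fft.fft2(channel))
    yy, xx = np.ogrid[:h, :w]
    radius = np.sqrt(((yy - h / 2) / h) ** 2 + ((xx - w / 2) / w) ** 2)
    return np.fft.ifft2(np.fft.ifftshift(spectrum * (radius <= cutoff))).real

def viewing_conditions(img, cutoff=0.1):
    """Six viewing conditions for an H x W x 3 image with values in [0, 1].

    The 0.1 cutoff is an illustrative choice, not taken from the paper.
    """
    gray = img.mean(axis=2)                          # simple luminance proxy
    low = np.dstack([lowpass(img[..., c], cutoff) for c in range(3)])
    return {
        "intact": img,
        "inverted": img[::-1, ::-1],                 # rotated 180 degrees
        "grayscale": gray,
        "inverted_grayscale": gray[::-1, ::-1],
        "low_sf": low.clip(0, 1),                    # blurred: low frequencies only
        "high_sf": (img - low + 0.5).clip(0, 1),     # edges: residual, re-centred
    }
```

Manipulations like these let the same image content be shown while selectively removing color, orientation, or spatial-frequency cues, which is what allows the statistical analyses to tease apart the contribution of each cue.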

Citation (APA)

Fan, S., Koenig, B. L., Zhao, Q., & Kankanhalli, M. S. (2020). A Deeper Look at Human Visual Perception of Images. SN Computer Science, 1(1). https://doi.org/10.1007/s42979-019-0061-5
