Global image properties do not guide visual search

Abstract

While basic visual features such as color, motion, and orientation can guide attention, it is likely that additional features guide search for objects in real-world scenes. Recent work has shown that human observers efficiently extract global scene properties such as mean depth or navigability from a brief glance at a single scene (M. R. Greene & A. Oliva, 2009a, 2009b). Can human observers also efficiently search for an image possessing a particular global scene property among other images lacking that property? Observers searched for scene image targets defined by the global properties of naturalness, transience, navigability, and mean depth. All four properties produced inefficient search. Search efficiency for a property was not correlated with its classification threshold time from M. R. Greene and A. Oliva (2009b). Differences in search efficiency between properties can be partially explained by low-level visual features that are correlated with the global property. Overall, while global scene properties can be rapidly classified from a single image, it does not appear to be possible to use those properties to guide attention to one of several images. © ARVO.

Citation (APA)

Greene, M. R., & Wolfe, J. M. (2011). Global image properties do not guide visual search. Journal of Vision, 11(6), 1–9. https://doi.org/10.1167/11.6.18
