Spatial context learning in visual search and change detection

23 citations · 66 Mendeley readers

Abstract

Humans conduct visual search more efficiently when the same display is presented for a second time, showing learning of repeated spatial contexts. In this study, we investigate spatial context learning in two tasks: visual search and change detection. In both tasks, we ask whether subjects learn to associate the target with the entire spatial layout of a repeated display (configural learning) or with individual distractor locations (nonconfigural learning). We show that nonconfigural learning results from visual search tasks, but not from change detection tasks. Furthermore, a spatial layout acquired in visual search tasks does not enhance change detection on the same display, whereas a spatial layout acquired in change detection tasks moderately enhances visual search. We suggest that although spatial context learning occurs in multiple tasks, the content of learning is, in part, task specific. Copyright 2005 Psychonomic Society, Inc.

Citation (APA)

Jiang, Y., & Song, J. H. (2005). Spatial context learning in visual search and change detection. Perception and Psychophysics, 67(7), 1128–1139. https://doi.org/10.3758/BF03193546
