What types of representations support our ability to integrate information acquired during one eye fixation with information acquired during the next fixation? In Experiment 1, transsaccadic integration was explored by manipulating whether or not the relative position of a picture of an object was maintained across a saccade. In Experiment 2, the degree to which visual details of a picture are coded in a position-specific representational system was explored by manipulating whether or not both the relative position and the left-right orientation of the picture were maintained across a saccade. Position-specific and nonspecific preview benefits were observed in both experiments. Only the position-specific benefits were influenced by the number of task-relevant pictures presented in the preview display (Experiment 1) and the left-right orientation of the picture presented in the preview display (Experiment 2). The results support a model of transsaccadic integration based on two independent representational systems. One system codes abstract, prestored object types, and the other codes episodic tokens consisting of stimulus properties linked to scene- or configuration-based position markers.
Henderson, J. M., & Siefert, A. B. C. (2001). Types and tokens in transsaccadic object identification: Effects of spatial position and left-right orientation. Psychonomic Bulletin & Review, 8(4), 753–760. https://doi.org/10.3758/BF03196214