Cross-modal object recognition is viewpoint-independent

65 citations · 85 Mendeley readers

Abstract

Background. Previous research suggests that visual and haptic object recognition are viewpoint-dependent both within- and cross-modally. However, this conclusion may not be generally valid, as it was reached using objects oriented along their extended y-axis, resulting in differential surface processing in vision and touch. In the present study, we removed this differential by presenting objects along the z-axis, thus making all object surfaces more equally available to vision and touch.

Methodology/Principal Findings. Participants studied previously unfamiliar objects, in groups of four, using either vision or touch. Subsequently, they performed a four-alternative forced-choice object identification task with the studied objects presented in both unrotated and rotated (180° about the x-, y-, and z-axes) orientations. Rotation impaired within-modal recognition accuracy in both vision and touch, but not cross-modal recognition accuracy. Within-modally, visual recognition accuracy was reduced more by rotation about the x- and y-axes than about the z-axis, whilst haptic recognition was equally affected by rotation about all three axes. Cross-modal (but not within-modal) accuracy correlated with spatial (but not object) imagery scores.

Conclusions/Significance. The viewpoint-independence of cross-modal object identification points to its mediation by a high-level abstract representation. The correlation between spatial imagery scores and cross-modal performance suggests that construction of this high-level representation is linked to the ability to perform spatial transformations. Within-modal viewpoint-dependence appears to have a different basis in vision than in touch, possibly due to surface occlusion being important in vision but not touch. © 2007 Lacey et al.

Figures

  • Figure 1. An example object used in the present study in the original orientation (A) and rotated 180° about the z-axis (B), x-axis (C) and y-axis (D). doi:10.1371/journal.pone.0000890.g001
  • Figure 2. The effect on recognition accuracy of rotating objects away from the learned orientation was confined to the within-modal conditions, with no effect in the cross-modal conditions. (Error bars = s.e.m.; asterisk = significant difference; horizontal line = chance performance at 25% in the four-alternative forced-choice task used). doi:10.1371/journal.pone.0000890.g002
  • Figure 3. Interaction between modality and rotation. Rotation away from the learned orientation only affected within-modal, not cross-modal, recognition accuracy. (Error bars = s.e.m.; asterisk = significant difference; horizontal line = chance performance at 25% in the four-alternative forced-choice task used). doi:10.1371/journal.pone.0000890.g003
  • Figure 4. Interaction between the within-modal conditions and the axis of rotation. Haptic within-modal recognition accuracy was equally disrupted by rotation about each axis whereas visual within-modal recognition was disrupted by the x- and y-rotations more than the z-rotation. The graph shows the percentage decrease in accuracy due to rotating the object away from the learned view. (Error bars = s.e.m.; asterisk = significant difference). doi:10.1371/journal.pone.0000890.g004
  • Figure 5. Scatterplots showing that OSIQ-spatial imagery scores correlate with cross-modal (A & B) but not within-modal object recognition accuracy (C & D).


Citation (APA)

Lacey, S., Peters, A., & Sathian, K. (2007). Cross-modal object recognition is viewpoint-independent. PLoS ONE, 2(9). https://doi.org/10.1371/journal.pone.0000890


Readers' Seniority

  • PhD / Postgrad / Masters / Doc: 32 (51%)
  • Researcher: 18 (29%)
  • Professor / Associate Prof.: 9 (14%)
  • Lecturer / Post doc: 4 (6%)

Readers' Discipline

  • Psychology: 31 (57%)
  • Neuroscience: 12 (22%)
  • Agricultural and Biological Sciences: 7 (13%)
  • Medicine and Dentistry: 4 (7%)
