A common representation of spatial features drives action and perception: Grasping and judging object features within trials

Abstract

Spatial features of an object can be specified using two different response types: either symbolically, by use of symbols, or motorically, by acting directly upon the object. Is this response dichotomy reflected in a dual representation of the visual world: one for perception and one for action? Symbolic and motoric responses specifying location have previously been shown to rely on a common representation. What about more elaborate features such as length and orientation? Here we show that when motoric and symbolic responses are made within the same trial, the probability of making the same symbolic and motoric response is well above chance for both length and orientation. This suggests that motoric and symbolic responses to length and orientation are driven by a common representation. We also show that, for both response types, the spatial features of an object are processed independently. This finding of matching object-processing characteristics is also in agreement with the idea of a common representation driving both response types. © 2014 Christiansen et al.

CITATION STYLE

APA

Christiansen, J. H., Christensen, J., Grünbaum, T., & Kyllingsbæk, S. (2014). A common representation of spatial features drives action and perception: Grasping and judging object features within trials. PLoS ONE, 9(5). https://doi.org/10.1371/journal.pone.0094744
