The Lazy Visual Word Form Area: Computational Insights into Location-Sensitivity

Abstract

In a recent study, Rauschecker et al. convincingly demonstrate that visual words evoke neural activation signals in the Visual Word Form Area (VWFA) that can be classified according to where the words were presented in the visual field. This result goes against the prevailing consensus and calls for an explanation. We show that one of the simplest possible models of word recognition, a multilayer feedforward network, exhibits precisely the same behavior when trained to recognize words at different locations. The model suggests that the VWFA starts out with information about location, which is suppressed during reading acquisition only as much as is needed to meet the requirements of location-invariant word recognition. Some new interpretations of Rauschecker et al.'s results are proposed, and three specific predictions are derived for testing in further studies. © 2013 Hannagan, Grainger.
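The abstract's core claim can be illustrated with a minimal sketch, under our own assumptions rather than the authors' exact architecture: if words are coded with location-specific input slots and fed through a feedforward hidden layer, the same word presented at different locations produces distinct hidden-layer patterns, so location remains decodable unless training actively suppresses it. All names, sizes, and the random untrained weights below are illustrative choices, not details from the paper.

```python
import numpy as np

# Illustrative sketch only (not the authors' model): words appear at one of
# several retinal locations via location-specific input slots, then pass
# through a single hidden layer of a feedforward network.
rng = np.random.default_rng(0)

n_letters = 26          # one-hot letter code
word_len = 4            # fixed word length, for simplicity
n_locations = 3         # e.g. left, center, right presentation
input_dim = n_letters * word_len * n_locations
hidden_dim = 50

# Untrained Gaussian weights; training for location-invariant recognition
# would reshape these, but need not erase location information entirely.
W = rng.normal(0.0, 0.1, (hidden_dim, input_dim))

def encode(word, location):
    """Place the word's one-hot letter code in the slot for its location."""
    x = np.zeros(input_dim)
    offset = location * n_letters * word_len
    for i, ch in enumerate(word):
        x[offset + i * n_letters + (ord(ch) - ord('a'))] = 1.0
    return x

def hidden(x):
    """Hidden-layer activity for one input pattern."""
    return np.tanh(W @ x)

# The same word at two locations yields clearly different hidden patterns,
# so a classifier reading the hidden layer could recover location.
h_left = hidden(encode("word", 0))
h_right = hidden(encode("word", 2))
print(np.allclose(h_left, h_right))  # prints False: location is decodable
```

The point of the sketch is that location-sensitivity is the default for such a network; location-invariance is something training has to build, and only to the degree the recognition task demands.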

Citation (APA)
Hannagan, T., & Grainger, J. (2013). The Lazy Visual Word Form Area: Computational Insights into Location-Sensitivity. PLoS Computational Biology, 9(10). https://doi.org/10.1371/journal.pcbi.1003250
