How are visual words represented? Insights from EEG-based visual word decoding, feature derivation and image reconstruction

Abstract

Investigations into the neural basis of reading have shed light on the cortical locus and the functional role of visual-orthographic processing. Yet, the fine-grained structure of the neural representations subserving reading remains to be clarified. Here, we capitalize on the spatiotemporal structure of electroencephalography (EEG) data to examine whether and how EEG patterns can serve to decode and reconstruct the internal representation of visually presented words in healthy adults. Our results show that word classification and image reconstruction were accurate well above chance, and that their temporal profiles exhibited an early onset, soon after 100 ms, and a peak around 170 ms. Further, reconstruction results were well explained by a combination of visual-orthographic word properties. Finally, systematic individual differences in orthographic representations were detected across participants. Collectively, our results establish the feasibility of EEG-based word decoding and image reconstruction. More generally, they help to elucidate the specific features, dynamics, and neurocomputational principles underlying word recognition.
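The abstract does not detail the authors' pipeline, but the time-resolved decoding it describes (classification accuracy emerging soon after 100 ms) is commonly implemented by training an independent classifier at each timepoint. The sketch below is a hypothetical illustration on synthetic data, not the paper's method; all dimensions, the injected signal onset, and the choice of logistic regression are assumptions for demonstration.

```python
# Hypothetical sketch of time-resolved EEG decoding (not the authors' exact pipeline).
# Synthetic data: trials x channels x timepoints, two word classes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 120, 32, 50   # assumed dimensions
y = rng.integers(0, 2, n_trials)              # two hypothetical word classes

# Simulated EEG: class-dependent signal injected from timepoint 15 onward,
# mimicking decodable information that emerges only after stimulus onset
X = rng.standard_normal((n_trials, n_channels, n_times))
X[:, :, 15:] += 0.5 * y[:, None, None]

# Fit a separate cross-validated classifier at each timepoint; the accuracy
# curve traces the temporal profile of decodable word information
accuracy = np.array([
    cross_val_score(LogisticRegression(max_iter=1000), X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
print(accuracy[:10].mean())   # near chance (~0.5) before the signal onset
print(accuracy[20:].mean())   # well above chance once the signal is present
```

On data like these, accuracy hovers near chance in the early windows and rises sharply after the injected onset, which is the kind of temporal profile the abstract reports.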

Citation (APA)

Ling, S., Lee, A. C. H., Armstrong, B. C., & Nestor, A. (2019). How are visual words represented? Insights from EEG-based visual word decoding, feature derivation and image reconstruction. Human Brain Mapping, 40(17), 5056–5068. https://doi.org/10.1002/hbm.24757
