Decoding face information in time, frequency and space from direct intracranial recordings of the human brain


Abstract

Faces are processed by a neural system with distributed anatomical components, but the roles of these components remain unclear. A dominant theory of face perception postulates independent representations of invariant aspects of faces (e.g., identity) in ventral temporal cortex, including the fusiform gyrus, and of changeable aspects of faces (e.g., emotion) in lateral temporal cortex, including the superior temporal sulcus. Here we recorded neuronal activity directly from the cortical surface in 9 neurosurgical subjects undergoing epilepsy monitoring while they viewed static and dynamic facial expressions. Applying novel decoding analyses to the power spectrogram of electrocorticograms (ECoG) from over 100 contacts in ventral and lateral temporal cortex, we found better representation of both invariant and changeable aspects of faces in ventral than in lateral temporal cortex. Critical information for discriminating faces from geometric patterns was carried by power modulations between 50 and 150 Hz. For both static and dynamic face stimuli, we obtained higher decoding performance in ventral than lateral temporal cortex. For discriminating fearful from happy expressions, critical information was carried by power modulations between 60 and 150 Hz and below 30 Hz, and was again better decoded in ventral than lateral temporal cortex. Task-relevant attention improved decoding accuracy by more than 10% across a wide frequency range in ventral, but not at all in lateral, temporal cortex. Spatial searchlight decoding showed that decoding performance was highest around the middle fusiform gyrus. Finally, we found that the right hemisphere, in general, showed superior decoding to the left hemisphere. Taken together, our results challenge the dominant model of independent representation of invariant and changeable aspects of faces: information about both face attributes was better decoded from a single region in the middle fusiform gyrus. © 2008 Tsuchiya et al.

Citation (APA)

Tsuchiya, N., Kawasaki, H., Oya, H., Howard, M. A., & Adolphs, R. (2008). Decoding face information in time, frequency and space from direct intracranial recordings of the human brain. PLoS ONE, 3(12). https://doi.org/10.1371/journal.pone.0003892
