Decoding visual perception from human brain activity


Abstract

Despite the widespread use of human neuroimaging, its potential to read out perceptual contents has not been fully explored. Animal neurophysiology has revealed the roles of the early visual cortex in representing visual features such as orientation and motion direction, but non-invasive neuroimaging methods have been thought to lack the resolution to probe these putative feature representations in the human brain. In this paper, we present methods for decoding early visual representations from fMRI activity patterns using machine learning techniques. We first show how early visual features represented in "subvoxel" neural structures can be decoded from ensemble fMRI responses. Decoding of stimulus features is then extended to a method for neural "mind-reading," which attempts to decode a person's subjective state using a decoder trained with responses to unambiguous stimuli. We next present a more general decoding approach, in which the subject's percept is identified among a large number of possibilities by combining the outputs of multiple decoding modules. Finally, we demonstrate how this technique can be used to reconstruct arbitrary small pixel images perceived by the subject from fMRI activity patterns.

Citation (APA)

Kamitani, Y. (2010). Decoding visual perception from human brain activity. In APSIPA ASC 2010 - Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (pp. 944–951). https://doi.org/10.3389/conf.fnins.2010.13.00007
