A brain-computer interface using motion-onset visual evoked potential


Abstract

This paper presents a novel brain-computer interface (BCI) based on motion-onset visual evoked potentials (mVEPs). The mVEP has been widely studied in basic research but had never been used in BCI research. For the BCI application, a brief motion of objects embedded in onscreen virtual buttons is used to evoke an mVEP time-locked to the onset of motion. EEG data recorded from 15 subjects are used to investigate the spatio-temporal pattern of the mVEP in this paradigm. The N2 and P2 components, with distinct temporo-occipital and parietal topographies, respectively, are selected as the salient features of the brain response to the attended target, which the subject selects by gazing at it. The computer determines the attended target by finding which button elicited prominent N2/P2 components. Besides a simple feature extraction based on N2/P2 area calculation, stepwise linear discriminant analysis is adopted to assess the target-detection accuracy of a five-class BCI. A mean accuracy of 98% is achieved when data from ten trials are averaged; even with only three trials, accuracy remains above 90%, suggesting that the proposed mVEP-based BCI could achieve a high information transfer rate in an online implementation. © 2008 IOP Publishing Ltd.
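To make the target-detection idea concrete, the sketch below illustrates the simple N2/P2 area approach described in the abstract: epochs are averaged per button, each button is scored by the rectified area of its averaged response inside assumed N2 and P2 latency windows, and the button with the largest score is taken as the attended target. This is a hypothetical illustration on synthetic data, not the authors' code; the sampling rate, window latencies, and the shape of the simulated N2/P2 deflection are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250            # assumed sampling rate (Hz)
n_samples = fs      # 1 s epoch after motion onset

# Synthetic epochs: 5 buttons x 10 trials x samples.
# Only the attended button (index 2 here) carries a simulated
# N2 (negative, ~200 ms) / P2 (positive, ~300 ms) deflection.
t = np.arange(n_samples) / fs
template = (-np.exp(-((t - 0.2) ** 2) / 0.002)
            + np.exp(-((t - 0.3) ** 2) / 0.002))
epochs = rng.normal(0.0, 1.0, size=(5, 10, n_samples))
epochs[2] += 3.0 * template

def detect_target(epochs, fs, n2_win=(0.15, 0.25), p2_win=(0.25, 0.35)):
    """Average trials per button, score each button by the rectified
    area of its N2 and P2 windows, and return the winning index."""
    avg = epochs.mean(axis=1)  # (buttons, samples)
    def area(win):
        a, b = (int(w * fs) for w in win)
        return np.abs(avg[:, a:b]).sum(axis=1)
    scores = area(n2_win) + area(p2_win)
    return int(np.argmax(scores))

print(detect_target(epochs, fs))  # index of the attended button
```

In the paper itself, stepwise linear discriminant analysis replaces this raw area score for the reported accuracies; the averaging-then-scoring structure, however, is the same.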

APA

Guo, F., Hong, B., Gao, X., & Gao, S. (2008). A brain-computer interface using motion-onset visual evoked potential. Journal of Neural Engineering, 5(4), 477–485. https://doi.org/10.1088/1741-2560/5/4/011
