Decoding Hand Motor Imagery Tasks within the Same Limb from EEG Signals Using Deep Learning

Abstract

Motor imagery (MI) tasks of different body parts have been successfully decoded by conventional classifiers such as LDA and SVM. In contrast, decoding MI tasks within the same limb remains challenging for these classifiers, yet it would provide more options for controlling robotic devices. This work proposes to improve the decoding of hand MI tasks within the same limb in a brain-computer interface (BCI) using convolutional neural networks (CNNs); the CNN EEGNet, LDA, and SVM classifiers were evaluated for two (flexion/extension) and three (flexion/extension/grasping) MI tasks. To the best of our knowledge, our approach is the first attempt to apply CNNs to this problem. In addition, visual and electrotactile stimulation were included as BCI training reinforcement after the MI task, similar to feedback sessions, and then compared. EEGNet achieved maximum mean accuracies of 78.46% (±12.50%) and 76.72% (±11.67%) for two and three classes, respectively, outperforming both the conventional classifiers, whose results were around 60% and 48%, and similar works, with results lower than 67% and 75%, respectively. Moreover, electrical stimulation did not show a significant advantage over the visual stimulus during the calibration session. The deep learning scheme enhanced the decoding of MI tasks within the same limb compared with the conventional framework.

Citation (APA)

Achanccaray, D., & Hayashibe, M. (2020). Decoding Hand Motor Imagery Tasks within the Same Limb from EEG Signals Using Deep Learning. IEEE Transactions on Medical Robotics and Bionics, 2(4), 692–699. https://doi.org/10.1109/TMRB.2020.3025364