Learning Spatiotemporal Graph Representations for Visual Perception Using EEG Signals

Abstract

Perceiving and recognizing objects enables interaction with the external environment. Recently, brain-computer interfaces (BCIs) that decode a user's intentions from brain signals evoked simply by looking at objects have attracted attention as a next-generation intuitive interface. However, classifying signals evoked by different objects is very challenging, and in practice, decoding performance for visual perception is not yet high enough for use in real environments. In this study, we aimed to classify single-trial electroencephalography (EEG) signals evoked by visual stimuli into their corresponding semantic categories. To improve classification performance, we propose a two-stream convolutional neural network consisting of a spatial stream and a temporal stream, which use a graph convolutional neural network and a channel-wise convolutional neural network, respectively. Two public datasets were used to evaluate the proposed model: (i) SU DB (a set of 72 photographs of objects belonging to six semantic categories) and (ii) MPI DB (8 exemplars belonging to two categories). The proposed model outperformed state-of-the-art methods, achieving accuracies of 54.28 ± 7.89% on SU DB (6-class) and 84.40 ± 8.03% on MPI DB (2-class). These results could facilitate the development of intuitive BCI systems based on visual perception.
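The abstract describes the architecture only at a high level, and no code accompanies this record, so the following is a minimal, hypothetical PyTorch sketch of what a two-stream design of this kind could look like: a spatial stream applying graph convolution over a learnable electrode adjacency, and a temporal stream applying a channel-wise (depthwise) convolution along time. The module names, layer sizes, the learnable adjacency, and the 62-channel, 256-sample input shape are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn as nn

class SpatialStream(nn.Module):
    # Graph convolution over the EEG electrode graph (sketch; the paper's
    # actual graph construction may differ).
    def __init__(self, n_channels, n_timepoints, hidden_dim=64):
        super().__init__()
        # Assumption: a dense, learnable adjacency initialized near identity.
        self.adj = nn.Parameter(torch.eye(n_channels) + 0.01 * torch.randn(n_channels, n_channels))
        self.gc = nn.Linear(n_timepoints, hidden_dim)  # per-node feature transform

    def forward(self, x):                      # x: (batch, channels, time)
        a = torch.softmax(self.adj, dim=-1)    # row-normalize the adjacency
        x = a @ x                              # propagate features along the channel graph
        return torch.relu(self.gc(x)).mean(dim=1)  # pool nodes -> (batch, hidden_dim)

class TemporalStream(nn.Module):
    # Channel-wise temporal convolution: groups=n_channels makes the
    # convolution depthwise, i.e., one temporal filter per EEG channel.
    def __init__(self, n_channels, n_timepoints, hidden_dim=64, k=7):
        super().__init__()
        self.conv = nn.Conv1d(n_channels, n_channels, kernel_size=k,
                              groups=n_channels, padding=k // 2)
        self.fc = nn.Linear(n_channels * n_timepoints, hidden_dim)

    def forward(self, x):                      # x: (batch, channels, time)
        x = torch.relu(self.conv(x))
        return torch.relu(self.fc(x.flatten(1)))

class TwoStreamEEG(nn.Module):
    # Concatenate the two stream embeddings and classify.
    def __init__(self, n_channels=62, n_timepoints=256, n_classes=6):
        super().__init__()
        self.spatial = SpatialStream(n_channels, n_timepoints)
        self.temporal = TemporalStream(n_channels, n_timepoints)
        self.head = nn.Linear(128, n_classes)  # 64 + 64 concatenated features

    def forward(self, x):
        return self.head(torch.cat([self.spatial(x), self.temporal(x)], dim=1))

model = TwoStreamEEG()
logits = model(torch.randn(8, 62, 256))  # 8 trials, 62 electrodes, 256 samples
print(logits.shape)                      # torch.Size([8, 6])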

Citation (APA)

Kalafatovich, J., Lee, M., & Lee, S. W. (2023). Learning Spatiotemporal Graph Representations for Visual Perception Using EEG Signals. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 31, 97–108. https://doi.org/10.1109/TNSRE.2022.3217344
