Recognition of EEG signals from imagined vowels using deep learning methods


Abstract

The use of imagined speech with electroencephalographic (EEG) signals is a promising field of brain-computer interfaces (BCI) that seeks to enable communication between the language-related areas of the cerebral cortex and external devices or machines. However, the complexity of this brain process makes the analysis and classification of this type of signal a relevant topic of research. The goals of this study were: to develop a new algorithm based on Deep Learning (DL), referred to as CNNeeg1-1, to recognize EEG signals in imagined vowel tasks; to create an imagined speech database with 50 subjects, focused on imagined vowels from the Spanish language (/a/, /e/, /i/, /o/, /u/); and to contrast the performance of the CNNeeg1-1 algorithm with the DL Shallow CNN and EEGNet benchmark algorithms using an open access database (BD1) and the newly developed database (BD2). In this study, a mixed analysis of variance (ANOVA) was conducted to assess the intra-subject and inter-subject training of the proposed algorithms. The results show that for intra-subject training analysis, the best performance among the Shallow CNN, EEGNet, and CNNeeg1-1 methods in classifying imagined vowels (/a/, /e/, /i/, /o/, /u/) was exhibited by CNNeeg1-1, with an accuracy of 65.62% for the BD1 database and 85.66% for the BD2 database.
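The abstract does not describe the CNNeeg1-1 architecture itself, but the task it reports, classifying multichannel EEG epochs into the five imagined vowel classes with a convolutional network, can be illustrated with a minimal PyTorch sketch. Everything below (the VowelEEGCNN name, channel count, epoch length, and layer sizes) is an illustrative assumption, not the architecture from the paper.

```python
# Hypothetical sketch of a compact CNN for 5-class imagined-vowel EEG classification.
# NOT the CNNeeg1-1 architecture (the abstract gives no layer details);
# channel count, window length, and filter sizes are illustrative assumptions.
import torch
import torch.nn as nn

class VowelEEGCNN(nn.Module):
    def __init__(self, n_channels=16, n_samples=512, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            # temporal convolution along the sample axis, shared across channels
            nn.Conv2d(1, 8, kernel_size=(1, 32), padding=(0, 16)),
            nn.BatchNorm2d(8),
            # spatial convolution across the EEG channel axis
            nn.Conv2d(8, 16, kernel_size=(n_channels, 1)),
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 4)),
            nn.Dropout(0.5),
        )
        # infer the flattened feature size from a dummy forward pass
        with torch.no_grad():
            n_feat = self.features(torch.zeros(1, 1, n_channels, n_samples)).numel()
        self.classifier = nn.Linear(n_feat, n_classes)

    def forward(self, x):
        # x: (batch, 1, channels, samples) -> (batch, n_classes) logits
        return self.classifier(self.features(x).flatten(1))

# Example: score a batch of 4 EEG epochs (16 channels x 512 samples each)
model = VowelEEGCNN()
logits = model(torch.randn(4, 1, 16, 512))   # one score per vowel /a/,/e/,/i/,/o/,/u/
pred = logits.argmax(dim=1)                  # predicted vowel class per epoch
```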

Citation (APA)

Sarmiento, L. C., Villamizar, S., López, O., Collazos, A. C., Sarmiento, J., & Rodríguez, J. B. (2021). Recognition of EEG signals from imagined vowels using deep learning methods. Sensors, 21(19). https://doi.org/10.3390/s21196503
