Deep Feature Learning and Visualization for EEG Recording Using Autoencoders


Abstract

In this era of deep learning and big data, the transformation of biomedical big data into recognizable patterns is an important research focus and a great challenge in bioinformatics. An important form of biomedical data is electroencephalography (EEG) signals, which are generally strongly affected by noise and exhibit notable individual, environmental, and device differences. In this paper, we focus on learning discriminative features from short-time EEG signals. Inspired by traditional image compression techniques that learn a robust representation of an image, we introduce and compare two strategies for learning features from EEG using two specifically designed autoencoders. Channel-wise autoencoders focus on features in each channel, while image-wise autoencoders instead learn features from the whole trial. Our results on a UCI EEG dataset show that both channel-wise and image-wise autoencoders achieve good performance on a classification problem, with state-of-the-art accuracy in both within-subject and cross-subject tests. A further experiment shows that sharing weights only slightly influenced learning but reduced training time significantly.
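The abstract does not give architectural details, but the two strategies differ mainly in what each autoencoder sees as its input. The following is a minimal illustrative sketch, assuming a 64-channel trial (the UCI EEG dataset uses 64 electrodes) and hypothetical window and code sizes; the linear encoder/decoder stands in for whatever architecture the paper actually uses:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions: 64 channels (as in the UCI EEG dataset), 256-sample window (hypothetical)
n_channels, n_samples = 64, 256
trial = rng.standard_normal((n_channels, n_samples))

def make_autoencoder(in_dim, code_dim, rng):
    """Return (encode, decode) closures for a one-layer linear autoencoder sketch."""
    w_enc = rng.standard_normal((in_dim, code_dim)) * 0.01
    w_dec = rng.standard_normal((code_dim, in_dim)) * 0.01
    encode = lambda x: np.tanh(x @ w_enc)
    decode = lambda h: h @ w_dec
    return encode, decode

# Channel-wise: one autoencoder sees a single channel's time series at a time,
# so features are learned per channel (here with weights shared across channels).
enc_ch, dec_ch = make_autoencoder(n_samples, 32, rng)
channel_codes = np.stack([enc_ch(trial[c]) for c in range(n_channels)])  # (64, 32)

# Image-wise: one autoencoder sees the whole trial flattened into a single vector,
# so features capture cross-channel structure.
enc_img, dec_img = make_autoencoder(n_channels * n_samples, 128, rng)
trial_code = enc_img(trial.reshape(-1))  # (128,)

print(channel_codes.shape, trial_code.shape)
```

The shared-weights experiment mentioned in the abstract corresponds to reusing one `(w_enc, w_dec)` pair across all channels, as above, rather than training 64 separate channel autoencoders.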

Citation (APA)

Yao, Y., Plested, J., & Gedeon, T. (2018). Deep Feature Learning and Visualization for EEG Recording Using Autoencoders. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11307 LNCS, pp. 554–566). Springer Verlag. https://doi.org/10.1007/978-3-030-04239-4_50
