Multi-Task Learning-Based Deep Neural Network for Steady-State Visual Evoked Potential-Based Brain–Computer Interfaces

Abstract

Amyotrophic lateral sclerosis (ALS) makes it difficult for people to communicate with others or with devices. In this paper, multi-task learning with denoising and classification tasks is used to develop a robust steady-state visual evoked potential-based brain–computer interface (SSVEP-based BCI) that can help such people communicate. To ease operation of the input interface, a single-channel SSVEP-based BCI is selected. To increase its practicality, multi-task learning is adopted to develop a neural network-based intelligent system that can suppress noise components and achieve high classification accuracy; accordingly, denoising and classification are chosen as the two learning tasks. The experimental results show that the proposed multi-task learning effectively integrates denoising and discriminative characteristics and outperforms other approaches. Multi-task learning with denoising and classification tasks is therefore well suited to developing an SSVEP-based BCI for practical applications. In the future, an augmentative and alternative communication interface can be implemented and evaluated to help people with ALS communicate in their daily lives.
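As a rough illustration of how a two-task network of this kind might be organized (a minimal sketch, not the architecture reported in the paper), the following PyTorch code assumes a shared 1-D convolutional encoder over a single-channel EEG segment, a denoising head that reconstructs a clean signal, and a classification head over the SSVEP target classes. The layer sizes, the signal_len and n_classes parameters, and the alpha loss weight are all hypothetical.

```python
# Hypothetical multi-task SSVEP sketch: layer sizes, signal length, class
# count, and loss weighting are illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskSSVEPNet(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        # Shared encoder over a single-channel EEG segment
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(),
        )
        # Denoising head: reconstructs a clean single-channel signal
        self.denoiser = nn.Conv1d(32, 1, kernel_size=9, padding=4)
        # Classification head: predicts the attended stimulation target
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, n_classes)
        )

    def forward(self, x):  # x: (batch, 1, signal_len)
        h = self.encoder(x)
        return self.denoiser(h), self.classifier(h)

def multitask_loss(denoised, clean, logits, labels, alpha=0.5):
    # Weighted sum of the reconstruction (denoising) loss and the
    # cross-entropy (classification) loss; alpha is an assumed trade-off.
    return alpha * F.mse_loss(denoised, clean) + \
           (1 - alpha) * F.cross_entropy(logits, labels)
```

In such a setup, the denoising target would typically be a reference "clean" segment (for example, a filtered version of the raw signal), so that the shared encoder is pushed to learn noise-robust features that also serve the classifier.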

Citation (APA)
Chuang, C. C., Lee, C. C., So, E. C., Yeng, C. H., & Chen, Y. J. (2022). Multi-Task Learning-Based Deep Neural Network for Steady-State Visual Evoked Potential-Based Brain–Computer Interfaces. Sensors, 22(21). https://doi.org/10.3390/s22218303
