Distributed Neural Network System for Multimodal Sleep Stage Detection


Abstract

Existing automatic sleep stage detection methods predominantly use convolutional neural network (CNN) classifiers trained on features extracted from single-modality signals such as electroencephalograms (EEG). Multimodal approaches, on the other hand, propose complex stacked network structures with multiple CNN branches merged by a fully connected layer, which leads to very high computational and training data requirements. This study proposes replacing a stacked network with a distributed neural network system for multimodal sleep stage detection, which has relatively low computational and training data requirements while providing highly competitive results. The proposed multimodal classification and decision-making system (MM-DMS) applies a fully connected shallow neural network that arbitrates between the classification outcomes of an assembly of independent CNNs, each using a different single-modality signal. Experiments conducted on the CAP Sleep Database, including the EEG, ECG, and EMG modalities representing six stages of sleep, show that the MM-DMS significantly outperforms each single-modality CNN. The fully connected shallow-network arbitration included in the MM-DMS outperforms the traditional majority-voting, average-probability, and maximum-probability decision-making methods.
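As a rough illustration of the three baseline decision-making methods the MM-DMS arbitration is compared against, the sketch below assumes each single-modality CNN outputs a class-probability vector over the six sleep stages; the probability values are hypothetical, not taken from the paper:

```python
import numpy as np

def majority_vote(probs):
    """Each modality votes for its argmax class; the most frequent class wins."""
    votes = probs.argmax(axis=1)
    counts = np.bincount(votes, minlength=probs.shape[1])
    return int(counts.argmax())

def average_probability(probs):
    """Average the class probabilities across modalities, then take the argmax."""
    return int(probs.mean(axis=0).argmax())

def maximum_probability(probs):
    """Take the per-class maximum across modalities, then the argmax."""
    return int(probs.max(axis=0).argmax())

# Hypothetical per-modality outputs (rows: EEG, ECG, EMG; columns: 6 sleep stages)
probs = np.array([
    [0.10, 0.60, 0.10, 0.05, 0.05, 0.10],  # EEG
    [0.05, 0.30, 0.40, 0.10, 0.10, 0.05],  # ECG
    [0.05, 0.35, 0.30, 0.10, 0.10, 0.10],  # EMG
])

print(majority_vote(probs))        # stage index chosen by majority voting
print(average_probability(probs))  # stage index chosen by averaged probabilities
print(maximum_probability(probs))  # stage index chosen by maximum probability
```

The MM-DMS replaces these fixed rules with a trained shallow fully connected network that takes the concatenated per-modality outputs as input and learns how to weigh each modality's evidence.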

Citation (APA)

Cheng, Y. H., Lech, M., & Wilkinson, R. H. (2023). Distributed Neural Network System for Multimodal Sleep Stage Detection. IEEE Access, 11, 29048–29061. https://doi.org/10.1109/ACCESS.2023.3260215
