Non-stationary neural signal to image conversion framework for image-based deep learning algorithms


Abstract

This paper presents a time-efficient preprocessing framework that converts 1D physiological signal recordings into 2D image representations for training image-based deep learning models. The non-stationary signal is rasterized into a 2D image using Bresenham’s line algorithm, which runs in O(n) time. The robustness of the proposed approach is evaluated on two publicly available datasets. Using a modified 2D Convolutional Neural Network (2D CNN), this study classified three types of neural spikes by shape (multi-class) and EEG epileptic seizure versus non-seizure activity (binary class). The multi-class dataset consists of artificially simulated neural recordings at different Signal-to-Noise Ratios (SNR). The 2D CNN achieved strong performance at every SNR level (SNR / accuracy %): 0.5 / 99.69, 0.75 / 99.69, 1.0 / 99.49, 1.25 / 98.85, 1.5 / 97.43, 1.75 / 95.20, and 2.0 / 91.98. On the binary-class dataset it achieved 97.52% accuracy, outperforming several previously proposed algorithms. The same approach could be applied to other biomedical signals such as the electrocardiogram (ECG/EKG) and electromyogram (EMG).
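The rasterization step can be sketched in a few lines of Python. The listing below is a minimal illustration only, not the authors' released code: the 64x64 image size, the min-max amplitude scaling, and the helper names bresenham and signal_to_image are assumptions made for demonstration. The abstract specifies only that the 1D signal is rasterized into a 2D image with Bresenham's line algorithm in O(n) time.

    import numpy as np

    def bresenham(x0, y0, x1, y1):
        # Integer-only Bresenham line: yields every pixel from (x0, y0) to (x1, y1).
        dx = abs(x1 - x0)
        dy = -abs(y1 - y0)
        sx = 1 if x0 < x1 else -1
        sy = 1 if y0 < y1 else -1
        err = dx + dy
        while True:
            yield x0, y0
            if x0 == x1 and y0 == y1:
                return
            e2 = 2 * err
            if e2 >= dy:
                err += dy
                x0 += sx
            if e2 <= dx:
                err += dx
                y0 += sy

    def signal_to_image(signal, height=64, width=64):
        # Rasterize one 1D signal window into a binary 2D image:
        # sample index -> pixel column, min-max scaled amplitude -> pixel row,
        # with consecutive samples joined by Bresenham line segments.
        signal = np.asarray(signal, dtype=float)
        span = signal.max() - signal.min()
        scaled = (signal - signal.min()) / span if span > 0 else np.zeros_like(signal)
        rows = np.round(scaled * (height - 1)).astype(int)
        cols = np.round(np.linspace(0, width - 1, num=len(signal))).astype(int)
        image = np.zeros((height, width), dtype=np.uint8)
        for i in range(len(signal) - 1):
            for x, y in bresenham(cols[i], rows[i], cols[i + 1], rows[i + 1]):
                # Flip vertically so larger amplitudes appear higher in the image.
                image[height - 1 - y, x] = 1
        return image

    # Example: convert a noisy spike-like waveform and inspect the image shape.
    t = np.linspace(-1, 1, 200)
    waveform = np.exp(-(t * 5) ** 2) + 0.05 * np.random.randn(t.size)
    img = signal_to_image(waveform)
    print(img.shape)  # (64, 64)

Images produced this way can be stacked into an array and fed to a 2D CNN like any other image dataset; each line segment touches a bounded number of pixels, so the overall cost stays linear in the number of samples.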

Citation (APA)

Patel, S. A., & Yildirim, A. (2023). Non-stationary neural signal to image conversion framework for image-based deep learning algorithms. Frontiers in Neuroinformatics, 17. https://doi.org/10.3389/fninf.2023.1081160
