Deep learning (DL) methods have been widely used for seizure prediction from electroencephalogram (EEG) signals in recent years. However, DL methods usually involve numerous multiplication operations, resulting in high computational complexity. In addition, most current approaches in this field focus on designing models with special architectures to learn representations, ignoring the intrinsic patterns in the data. In this study, we propose a simple and effective end-to-end adder network with supervised contrastive learning (AddNet-SCL). The method uses addition instead of the massive multiplication in the convolution process to reduce the computational cost. Besides, contrastive learning is employed to exploit label information effectively: points of the same class are clustered together in the projection space, while points of different classes are pushed apart. Moreover, the proposed model is trained by combining the supervised contrastive loss from the projection layer and the cross-entropy loss from the classification layer. Since the adder network uses the $\ell_1$-norm distance as the similarity measure between the input features and the filters, the gradient function of the network changes, so an adaptive learning rate strategy is employed to ensure the convergence of AddNet-SCL. Experimental results show that the proposed method achieves 94.9% sensitivity, an area under the curve (AUC) of 94.2%, and a false positive rate (FPR) of 0.077/h on 19 patients in the CHB-MIT database, and 89.1% sensitivity, an AUC of 83.1%, and an FPR of 0.120/h on the Kaggle database. These competitive results show that the proposed method has broad prospects in clinical practice.
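The two core ingredients described above, addition-only convolution and supervised contrastive training, can be illustrated with a short sketch. The code below is not the authors' implementation; it is a minimal PyTorch illustration, under assumed shapes and hyperparameters, of (i) an adder layer that scores each input patch by its negative $\ell_1$ distance to each filter, and (ii) a single-view supervised contrastive loss combined with cross-entropy. The names `AdderConv2d`, `supcon_loss`, `total_loss`, the temperature, and the loss weight `lam` are all hypothetical.

```python
# Minimal sketch (not the paper's code) of an adder "convolution" and a
# combined supervised-contrastive + cross-entropy objective.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdderConv2d(nn.Module):
    """Scores each patch with the negative L1 distance to each filter,
    replacing the multiply-accumulate of ordinary convolution with additions."""

    def __init__(self, in_ch, out_ch, kernel_size, stride=1, padding=0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, kernel_size, kernel_size) * 0.1)
        self.kernel_size, self.stride, self.padding = kernel_size, stride, padding

    def forward(self, x):
        n, _, h, w = x.shape
        # (N, in_ch*k*k, L): every sliding patch flattened into a column.
        patches = F.unfold(x, self.kernel_size, padding=self.padding, stride=self.stride)
        w_flat = self.weight.flatten(1)                       # (out_ch, in_ch*k*k)
        # Negative L1 distance between every patch and every filter (additions only).
        out = -(patches.unsqueeze(1) - w_flat[None, :, :, None]).abs().sum(dim=2)
        out_h = (h + 2 * self.padding - self.kernel_size) // self.stride + 1
        out_w = (w + 2 * self.padding - self.kernel_size) // self.stride + 1
        return out.view(n, w_flat.size(0), out_h, out_w)


def supcon_loss(z, labels, temperature=0.1):
    """Single-view supervised contrastive loss on projections z of shape (N, D)."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature                             # pairwise similarity logits
    eye = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    pos = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~eye  # same-class pairs, excluding self
    sim = sim.masked_fill(eye, float("-inf"))                 # drop self from the denominator
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    per_anchor = -(log_prob * pos).sum(dim=1) / pos.sum(dim=1).clamp(min=1)
    return per_anchor[pos.any(dim=1)].mean()


def total_loss(logits, proj, labels, lam=1.0):
    """Cross-entropy from the classifier head plus a weighted contrastive term."""
    return F.cross_entropy(logits, labels) + lam * supcon_loss(proj, labels)
```

A plain SGD step on `AdderConv2d` can behave differently from a multiplicative layer because the $\ell_1$-distance gradients have a different magnitude, which is the convergence issue the adaptive learning rate strategy in the abstract addresses; that strategy is not reproduced in this sketch.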
Zhao, Y., Li, C., Liu, X., Qian, R., Song, R., & Chen, X. (2022). Patient-Specific Seizure Prediction via Adder Network and Supervised Contrastive Learning. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 30, 1536–1547. https://doi.org/10.1109/TNSRE.2022.3180155