Functional near-infrared spectroscopy (fNIRS), a non-invasive optical technique, is widely used to monitor brain activity for disease diagnosis and brain-computer interfaces (BCIs). Deep learning-based fNIRS classification faces three major barriers: limited datasets, confusing evaluation criteria, and domain barriers. We address the first two by applying more appropriate evaluation methods to three open-access datasets. For the domain barrier, we propose a general and scalable vision framework for fNIRS that converts multi-channel fNIRS signals into multi-channel virtual images using the Gramian angular difference field (GADF). With this framework, state-of-the-art visual models from computer vision (CV) can be trained within a few minutes, and their classification performance is competitive with the latest fNIRS models. In cross-validation experiments, the visual models achieve the highest average classification accuracies of 78.68% and 73.92% on mental arithmetic and word generation tasks, respectively. Although the visual models score slightly lower than the fNIRS models on unilateral finger- and foot-tapping tasks, the F1-score and kappa coefficient indicate that these differences are insignificant in subject-independent experiments. Furthermore, we study fNIRS signal representations and the classification performance of sequence-to-image methods. We hope to bring the rich achievements of the CV domain to fNIRS classification research.
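For readers unfamiliar with the GADF transform mentioned above, the idea can be sketched as follows: each channel's time series is rescaled to [-1, 1], mapped to polar angles via arccos, and the pairwise angle differences form one image plane per channel. This is a minimal illustrative sketch with toy data, not the authors' implementation; the channel count and window length below are arbitrary assumptions.

```python
import numpy as np

def gadf(series):
    """Gramian angular difference field of a 1-D signal (illustrative sketch)."""
    x = np.asarray(series, dtype=float)
    # Rescale to [-1, 1] so arccos is well defined.
    x = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0
    phi = np.arccos(np.clip(x, -1.0, 1.0))
    # GADF[i, j] = sin(phi_i - phi_j); antisymmetric with a zero diagonal.
    return np.sin(phi[:, None] - phi[None, :])

# Stack one GADF plane per fNIRS channel into a multi-channel "virtual image"
# that a standard CNN / vision model can consume.
rng = np.random.default_rng(0)
signals = rng.standard_normal((4, 64))           # 4 channels, 64 time points (toy data)
image = np.stack([gadf(ch) for ch in signals])   # shape: (4, 64, 64)
```

A T-point window thus becomes a C×T×T tensor, which is why off-the-shelf visual backbones can be applied directly.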
Wang, Z., Zhang, J., Xia, Y., Chen, P., & Wang, B. (2022). A General and Scalable Vision Framework for Functional Near-Infrared Spectroscopy Classification. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 30, 1982–1991. https://doi.org/10.1109/TNSRE.2022.3190431