Conventional visual BCIs, in which control channels are tagged with stimulation patterns to elicit distinguishable brain responses, have made impressive progress in terms of information transfer rate (ITR). User experience and the complexity of the technical setup, however, have seen far less development. The requirement to tag each target with a unique stimulus substantially limits the flexibility of conventional visual BCI systems. The present study therefore proposes a method for flexibly decoding targets in the environment. A BCI speller with thirteen symbols drawn on paper was developed. The symbols were interspersed with four flickers of distinct frequencies, but users did not have to gaze at the flickers; instead, they could spell a sequence by looking at the symbols on the paper. In a cue-guided spelling task, the average offline and online accuracies for 13 subjects reached 89.3 ± 7.3% and 90.3 ± 6.9%, corresponding to ITRs of 43.0 ± 7.4 bit/min and 43.8 ± 6.8 bit/min. In an additional free-spelling task with seven of the thirteen subjects, an accuracy of 92.3 ± 3.1% and an ITR of 45.6 ± 3.3 bit/min were achieved. Analysis of a simulated online system showed that an average ITR of 105.8 bit/min could be reached by reducing the epoch duration from 4 s to 1 s. Reliable BCI control is thus possible by gazing at targets in the environment rather than at dedicated stimuli that encode control channels. The proposed method can drastically reduce the technical effort required for visual BCIs and thereby advance their application outside the laboratory.
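As a sanity check on the reported figures, the ITR of a speller like this one is conventionally computed with the Wolpaw formula from the number of selectable targets N, the selection accuracy P, and the time per selection T. The Python sketch below is illustrative only: it assumes the selection time equals the 4 s epoch duration (any gaze-shift interval is neglected), and the small gap to the reported 43.0 bit/min average is expected if per-subject ITRs were averaged rather than the formula applied to the mean accuracy.

import math

def wolpaw_itr(n_targets, accuracy, selection_time_s):
    # Bits per selection under the Wolpaw model:
    #   log2(N) + P*log2(P) + (1 - P)*log2((1 - P)/(N - 1)),
    # then scaled by 60/T to give bit/min.
    n, p, t = n_targets, accuracy, selection_time_s
    bits = math.log2(n)
    if 0.0 < p < 1.0:  # at p = 1 the extra terms vanish in the limit
        bits += p * math.log2(p) + (1.0 - p) * math.log2((1.0 - p) / (n - 1))
    return bits * 60.0 / t

# Offline cue-guided result: 13 symbols, 89.3% mean accuracy, 4 s epochs
print(round(wolpaw_itr(13, 0.893, 4.0), 1))  # ~42.4 bit/min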
Chen, J., Wang, Y., Maye, A., Hong, B., Gao, X., Engel, A. K., & Zhang, D. (2021). A spatially-coded visual brain-computer interface for flexible visual spatial information decoding. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 29, 926–933. https://doi.org/10.1109/TNSRE.2021.3080045