Cross-subject classification of speaking modes using fNIRS

Abstract

In Brain-Computer Interface (BCI) research, subject- and session-specific training data are usually required to achieve satisfactory classification results. In this paper, we show that neural responses to different speaking tasks recorded with functional Near-Infrared Spectroscopy (fNIRS) are consistent enough across speakers to robustly classify speaking modes with models trained exclusively on other subjects. Our study thereby suggests that future fNIRS-based BCIs can be designed without time-consuming training sessions, which, besides being cumbersome, might be impossible for users with disabilities. Accuracies of 71% and 61% were achieved in distinguishing segments containing overt speech and silent speech, respectively, from segments in which subjects were not speaking, without using any of the subject's own data for training. To rule out artifact contamination, we filtered the data rigorously. To the best of our knowledge, no previous studies have demonstrated the zero-training capability of fNIRS-based BCIs. © 2012 Springer-Verlag.
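
The abstract does not detail the feature extraction or classifier, but the cross-subject evaluation it describes corresponds to a leave-one-subject-out scheme: a model is trained on all other subjects' data and tested on the held-out subject, so the test subject contributes no training data. The sketch below illustrates this setup; the synthetic features, subject labels, and the choice of a linear discriminant classifier are assumptions for illustration, not the authors' actual pipeline.

```python
# Minimal sketch of cross-subject (zero-training) evaluation as described in the
# abstract. Features, labels and the LDA classifier are illustrative assumptions;
# the paper's actual preprocessing and classification pipeline may differ.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# Hypothetical data: one feature vector per fNIRS segment.
# X: (n_segments, n_features); y: speaking-mode label (e.g. 0 = not speaking,
# 1 = overt speech); groups: subject ID of each segment.
rng = np.random.default_rng(0)
n_subjects, segments_per_subject, n_features = 8, 40, 20
X = rng.normal(size=(n_subjects * segments_per_subject, n_features))
y = rng.integers(0, 2, size=n_subjects * segments_per_subject)
groups = np.repeat(np.arange(n_subjects), segments_per_subject)

# Leave-one-subject-out: each fold trains exclusively on the other subjects,
# so no data from the test subject is ever used for training.
logo = LeaveOneGroupOut()
scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, groups=groups, cv=logo)
print(f"Per-subject accuracies: {np.round(scores, 2)}")
print(f"Mean cross-subject accuracy: {scores.mean():.2f}")
```

On real data, per-subject scores from such a scheme would correspond to the cross-subject accuracies reported in the paper (71% for overt vs. not speaking, 61% for silent speech vs. not speaking).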

Citation (APA)

Herff, C., Heger, D., Putze, F., Guan, C., & Schultz, T. (2012). Cross-subject classification of speaking modes using fNIRS. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7664 LNCS, pp. 417–424). https://doi.org/10.1007/978-3-642-34481-7_51
