An Interpretable Deep Learning Model for Speech Activity Detection Using Electrocorticographic Signals

8 citations · 17 Mendeley readers

Abstract

Numerous state-of-the-art solutions for neural speech decoding and synthesis incorporate deep learning into the processing pipeline. These models are typically opaque and can require significant computational resources for training and execution. A deep learning architecture is presented that learns input bandpass filters capturing task-relevant spectral features directly from the data. Incorporating such explainable feature extraction into the model furthers the goal of creating end-to-end architectures that enable automated subject-specific parameter tuning while yielding an interpretable result. The model is demonstrated on intracranial (electrocorticographic) brain recordings collected during a speech task. Operating on raw, unprocessed time samples, the model detects the presence of speech at every time sample in a causal manner, making it suitable for online application. Performance is comparable or superior to that of existing approaches requiring substantial signal preprocessing, and the learned frequency bands converge to ranges supported by previous studies.
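As a rough illustration of the learned-filter idea described in the abstract, the sketch below implements a SincNet-style convolutional front end in PyTorch in which only each filter's cutoff frequencies are trainable. This is a minimal sketch under assumptions, not the paper's actual architecture: the class name `LearnableBandpass`, the sinc parameterization, and all hyperparameters (filter count, kernel length, 1 kHz sampling rate) are illustrative choices.

```python
# Hypothetical sketch of a learnable bandpass front end in the spirit of the
# abstract. The SincNet-style parameterization is an assumption; the paper's
# exact formulation may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableBandpass(nn.Module):
    """Conv1d front end whose kernels are windowed sinc bandpass filters.

    Only the low cutoff and bandwidth of each filter are trained, so the
    frequency bands the model settles on can be read off after training.
    """

    def __init__(self, n_filters=8, kernel_size=129, fs=1000.0):
        super().__init__()
        self.kernel_size = kernel_size
        self.fs = fs
        # Trainable cutoffs in Hz, initialized to spread across 1-150 Hz.
        self.low_hz = nn.Parameter(torch.linspace(1.0, 150.0, n_filters))
        self.band_hz = nn.Parameter(torch.full((n_filters,), 50.0))
        # Fixed Hamming window and time axis for building the sinc kernels.
        n = torch.arange(kernel_size) - (kernel_size - 1) / 2
        self.register_buffer("t", n / fs)
        self.register_buffer(
            "window", torch.hamming_window(kernel_size, periodic=False)
        )

    def forward(self, x):
        # x: (batch, 1, samples) raw single-channel signal.
        low = torch.abs(self.low_hz)
        high = torch.clamp(low + torch.abs(self.band_hz), max=self.fs / 2 - 1.0)
        # Ideal bandpass = difference of two sinc low-pass impulse responses.
        t = self.t.unsqueeze(0)  # (1, kernel_size)
        hi_sinc = 2 * high.unsqueeze(1) * torch.sinc(2 * high.unsqueeze(1) * t)
        lo_sinc = 2 * low.unsqueeze(1) * torch.sinc(2 * low.unsqueeze(1) * t)
        kernels = (hi_sinc - lo_sinc) * self.window
        kernels = kernels / kernels.norm(dim=1, keepdim=True).clamp(min=1e-8)
        kernels = kernels.unsqueeze(1)  # (n_filters, 1, kernel_size)
        # Left-pad so the convolution stays causal, as the abstract requires:
        # the output at each time sample depends only on past inputs.
        x = F.pad(x, (self.kernel_size - 1, 0))
        return F.conv1d(x, kernels)
```

Because the trainable parameters are cutoff frequencies in hertz, the bands a trained model has converged to can be read directly from `low_hz` and `band_hz`, which is what makes this style of front end interpretable.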

Citation (APA)

Stuart, M., Lesaja, S., Shih, J. J., Schultz, T., Manic, M., & Krusienski, D. J. (2022). An Interpretable Deep Learning Model for Speech Activity Detection Using Electrocorticographic Signals. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 30, 2783–2792. https://doi.org/10.1109/TNSRE.2022.3207624
