Synthesizing Speech from Intracranial Depth Electrodes using an Encoder-Decoder Framework

  • Kohler J
  • Ottenhoff M
  • Goulis S
  • et al.

Abstract

Speech neuroprostheses have the potential to enable communication for people with dysarthria or anarthria. Recent advances have demonstrated high-quality text decoding and speech synthesis from electrocorticographic grids placed on the cortical surface. Here, we investigate a less invasive measurement modality, stereotactic EEG (sEEG), in three participants; sEEG provides sparse sampling from multiple brain regions, including subcortical ones. To evaluate whether sEEG can also be used to synthesize high-quality audio from neural recordings, we employ a recurrent encoder-decoder model based on modern deep learning methods. We find that speech can indeed be reconstructed from these minimally invasive recordings, with correlations of up to 0.8, despite limited amounts of training data.
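
Although the full methods are not reproduced on this page, the abstract outlines the core approach: neural feature sequences recorded from sEEG depth electrodes are mapped to an acoustic representation of speech by a recurrent encoder-decoder network. The sketch below illustrates that general idea in PyTorch; the channel count, use of GRUs, mel-spectrogram targets, and layer sizes are illustrative assumptions, not the authors' published configuration.

import torch
import torch.nn as nn

class SeegToSpeech(nn.Module):
    # Toy recurrent encoder-decoder mapping sEEG feature sequences to spectrogram frames.
    def __init__(self, n_channels=64, n_mels=80, hidden=256):
        super().__init__()
        # Encoder: bidirectional GRU summarizing the neural feature sequence.
        self.encoder = nn.GRU(n_channels, hidden, batch_first=True, bidirectional=True)
        # Decoder: unidirectional GRU producing one acoustic frame per time step.
        self.decoder = nn.GRU(2 * hidden, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, n_mels)

    def forward(self, x):
        # x: (batch, time, n_channels) neural features
        enc_out, _ = self.encoder(x)        # (batch, time, 2 * hidden)
        dec_out, _ = self.decoder(enc_out)  # (batch, time, hidden)
        return self.proj(dec_out)           # (batch, time, n_mels)

if __name__ == "__main__":
    model = SeegToSpeech()
    neural = torch.randn(4, 200, 64)        # 4 trials, 200 time steps, 64 channels
    frames = model(neural)
    print(frames.shape)                     # torch.Size([4, 200, 80])

A model of this kind would typically be trained with a frame-wise regression loss (e.g. mean squared error) against spectral frames of the simultaneously recorded audio, and reconstruction quality reported as the correlation between predicted and reference frames, a measure along the lines of the "correlations up to 0.8" mentioned in the abstract.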

Citation (APA)

Kohler, J., Ottenhoff, M. C., Goulis, S., Angrick, M., Colon, A. J., Wagner, L., … Herff, C. (2022). Synthesizing Speech from Intracranial Depth Electrodes using an Encoder-Decoder Framework. Neurons, Behavior, Data Analysis, and Theory, 6(1). https://doi.org/10.51628/001c.57524
