Detecting overlapping speech with long short-term memory recurrent neural networks

Abstract

Detecting segments of overlapping speech (when two or more speakers are active at the same time) is a challenging problem. Previously, mostly HMM-based systems employing various audio features have been used for overlap detection. In this work, we propose a novel overlap detection system using Long Short-Term Memory (LSTM) recurrent neural networks. The LSTMs generate framewise overlap predictions, which are then used for overlap detection. Furthermore, a tandem HMM-LSTM system is obtained by adding the LSTM predictions to the HMM feature set. Experiments with the AMI corpus show that the overlap detection performance of LSTMs is comparable to that of HMMs, and that combining HMMs and LSTMs improves overlap detection by achieving higher recall. Copyright © 2013 ISCA.
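To make the framewise prediction idea concrete, the following is a minimal PyTorch sketch of an LSTM that maps a sequence of acoustic feature frames to per-frame overlap probabilities, and of how such predictions could be appended to an HMM feature set in a tandem setup. The layer sizes, feature dimensionality, threshold, and all names are illustrative assumptions, not details taken from the paper.

    # Illustrative sketch only: a framewise LSTM overlap classifier.
    # Layer sizes, feature dimensionality, and the 0.5 threshold are
    # assumptions for illustration, not the paper's configuration.
    import torch
    import torch.nn as nn

    class FramewiseOverlapLSTM(nn.Module):
        def __init__(self, num_features=40, hidden_size=128, num_layers=2):
            super().__init__()
            self.lstm = nn.LSTM(num_features, hidden_size, num_layers,
                                batch_first=True)
            # One score per frame: overlap vs. non-overlap
            self.classifier = nn.Linear(hidden_size, 1)

        def forward(self, features):
            # features: (batch, num_frames, num_features) acoustic features
            hidden, _ = self.lstm(features)
            # Framewise overlap probabilities in [0, 1]
            return torch.sigmoid(self.classifier(hidden)).squeeze(-1)

    model = FramewiseOverlapLSTM()
    frames = torch.randn(1, 500, 40)       # e.g. 5 s of 10 ms frames (dummy data)
    overlap_prob = model(frames)           # (1, 500) framewise predictions
    overlap_frames = overlap_prob > 0.5    # simple per-frame decision

    # Tandem idea: append the LSTM prediction as an extra feature dimension
    # before passing the frames to an HMM-based detector.
    tandem_features = torch.cat([frames, overlap_prob.unsqueeze(-1)], dim=-1)

In this sketch the tandem combination is simply feature concatenation; the paper's actual HMM front end, feature set, and decoding procedure are not reproduced here.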

Cite

APA

Geiger, J. T., Eyben, F., Schuller, B., & Rigoll, G. (2013). Detecting overlapping speech with long short-term memory recurrent neural networks. In Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH (pp. 1668–1672). International Speech Communication Association. https://doi.org/10.21437/interspeech.2013-27
