Automatic speech recognition and speech activity detection in the CHIL smart room

Abstract

An important step toward bringing speech technologies into wide deployment as a functional component of man-machine interfaces is to free users from close-talking or desktop microphones and enable far-field operation in a variety of natural communication environments. In this work, we consider far-field automatic speech recognition and speech activity detection in conference rooms. The experiments are conducted on the smart room platform provided by the CHIL project. The first half of the paper addresses the development of speech recognition systems for the seminar transcription task. In particular, we look into the effect of combining parallel recognizers in both single-channel and multi-channel settings. In the second half of the paper, we describe a novel algorithm for speech activity detection based on fusing phonetic likelihood scores and energy features. It is shown that the proposed technique is able to handle non-stationary noise events and achieves good performance on the CHIL seminar corpus. © Springer-Verlag Berlin Heidelberg 2006.
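The abstract names the ingredients of the proposed speech activity detector, per-frame phonetic likelihood scores fused with energy features, but not the exact fusion rule. The sketch below is one plausible reading, not the authors' actual algorithm: it assumes per-frame speech/non-speech log-likelihood ratios are already available from an acoustic model, and the frame sizes, z-normalization, fusion weight, decision threshold, and median smoothing are illustrative choices.

    import numpy as np

    def frame_log_energy(signal, frame_len=400, hop=160):
        """Per-frame log-energy of a 1-D waveform (e.g., 25 ms frames, 10 ms hop at 16 kHz)."""
        n_frames = 1 + max(0, (len(signal) - frame_len) // hop)
        frames = np.stack([signal[i * hop: i * hop + frame_len] for i in range(n_frames)])
        return np.log(np.sum(frames ** 2, axis=1) + 1e-10)

    def fuse_sad_scores(phonetic_llr, log_energy, weight=0.7, threshold=0.0, smooth=11):
        """Fuse per-frame phonetic log-likelihood ratios (speech vs. non-speech)
        with log-energy, then threshold and median-smooth the frame decisions."""
        z = lambda x: (x - x.mean()) / (x.std() + 1e-10)   # z-normalize each score stream
        fused = weight * z(phonetic_llr) + (1.0 - weight) * z(log_energy)
        decisions = (fused > threshold).astype(int)
        # median filter to suppress spurious single-frame speech/non-speech flips
        pad = smooth // 2
        padded = np.pad(decisions, pad, mode='edge')
        smoothed = np.array([np.median(padded[i:i + smooth]) for i in range(len(decisions))])
        return smoothed.astype(int)

In such a scheme, the phonetic_llr stream would come from evaluating speech and non-speech acoustic models (e.g., GMMs over cepstral features) on each frame, so the two streams must share the same framing before fusion.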

Citation (APA)

Chu, S. M., Marcheret, E., & Potamianos, G. (2006). Automatic speech recognition and speech activity detection in the CHIL smart room. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 3869 LNCS, pp. 332–343). Springer Verlag. https://doi.org/10.1007/11677482_29
