Speech corpus recycling for acoustic cross-domain environments for automatic speech recognition


Abstract

In recent years, server-based automatic speech recognition (ASR) systems have become ubiquitous, and unprecedented amounts of speech data are now available for system training. The availability of such training data has greatly improved ASR accuracy, but how to maximize ASR performance in new domains, or in domains where ASR systems currently fail (and where data is therefore scarce), remains an important open question. In this paper, we propose a framework for mapping large speech corpora to different acoustic environments, so that existing data can be transformed to build high-quality acoustic models for other acoustic domains. In experiments on a large corpus, the proposed method reduced errors by 18.6%.
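The abstract does not spell out the mapping itself. One common way to realize such a corpus transformation is to reverberate clean source-domain speech with a room impulse response (RIR) representative of the target environment and mix in environment noise at a controlled SNR. The sketch below illustrates that generic idea only; the function name `transform_utterance`, the toy exponentially decaying RIR, and the SNR handling are illustrative assumptions, not the specific method proposed in the paper.

```python
# Hypothetical sketch: map a clean utterance toward a target acoustic domain
# by convolving it with a room impulse response and adding environment noise
# at a chosen SNR. Generic illustration, not the paper's algorithm.
import numpy as np


def transform_utterance(clean, rir, noise, snr_db):
    """Simulate a target acoustic environment for one utterance.

    clean, rir, noise: 1-D float arrays at the same sample rate (assumed).
    snr_db: desired signal-to-noise ratio of the output mixture.
    """
    # Reverberate: convolve the dry speech with the target-room RIR.
    reverberant = np.convolve(clean, rir)[: len(clean)]

    # Loop or trim the noise to match the utterance length.
    reps = int(np.ceil(len(reverberant) / len(noise)))
    noise_seg = np.tile(noise, reps)[: len(reverberant)]

    # Scale the noise so the mixture reaches the requested SNR.
    speech_power = np.mean(reverberant ** 2) + 1e-12
    noise_power = np.mean(noise_seg ** 2) + 1e-12
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10.0)))

    return reverberant + scale * noise_seg


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.standard_normal(16000)        # 1 s of placeholder "speech"
    rir = np.exp(-np.arange(2000) / 300.0)    # toy decaying impulse response
    noise = rng.standard_normal(32000)        # placeholder environment noise
    noisy = transform_utterance(clean, rir, noise, snr_db=10.0)
    print(noisy.shape)
```

In a data-recycling setting of this kind, the transformed utterances would then be pooled with (or replace) the original corpus when training acoustic models for the target domain.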

Citation (APA)

Ichikawa, O., Rennie, S. J., Fukuda, T., & Willett, D. (2016). Speech corpus recycling for acoustic cross-domain environments for automatic speech recognition. Acoustical Science and Technology, 37(2), 55–65. https://doi.org/10.1250/ast.37.55
