ASR hypothesis reranking using prior-informed restricted Boltzmann machine


Abstract

Discriminative language models (DLMs) have been widely used for reranking competing hypotheses produced by an Automatic Speech Recognition (ASR) system. Because existing DLMs suffer from limited generalization power, we propose a novel DLM based on a discriminatively trained Restricted Boltzmann Machine (RBM). The hidden layer of the RBM improves generalization and allows additional prior knowledge to be incorporated, including pre-trained parameters and entity-related priors. Our approach outperforms the single-layer-perceptron (SLP) reranking model, and fusing our approach with the SLP achieves up to a 1.3% absolute Word Error Rate (WER) reduction, a 180% relative improvement in WER reduction over the SLP reranker. In particular, the proposed prior-informed RBM reranker achieves its largest ASR error reduction (3.1% absolute WER) on content words.
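The reranking idea sketched in the abstract can be illustrated with a minimal toy example: score each n-best hypothesis with an RBM's (negative) free energy and fuse that score with the ASR score. This is only an illustrative sketch; the vocabulary size, hidden-layer size, bag-of-words features, random parameters, and interpolation weight `alpha` below are all assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB = 50    # toy vocabulary size (assumption)
HIDDEN = 16   # toy number of hidden units (assumption)

# Randomly initialized parameters stand in for a discriminatively
# trained (and possibly pre-trained, prior-informed) RBM.
W = rng.normal(scale=0.1, size=(HIDDEN, VOCAB))
b_vis = np.zeros(VOCAB)   # visible biases
b_hid = np.zeros(HIDDEN)  # hidden biases

def free_energy(v):
    """RBM free energy of a bag-of-words hypothesis vector v.

    F(v) = -b_vis . v - sum_j log(1 + exp(b_hid_j + W_j . v));
    lower free energy means higher unnormalized probability.
    """
    pre = b_hid + W @ v
    return -b_vis @ v - np.logaddexp(0.0, pre).sum()

def rerank(hypotheses, asr_scores, alpha=1.0):
    """Return the index of the best hypothesis under a fused score.

    Fuses each ASR score with the RBM's negative free energy;
    alpha is an interpolation weight (assumed tuned on held-out data).
    """
    fused = [s - alpha * free_energy(v)
             for v, s in zip(hypotheses, asr_scores)]
    return int(np.argmax(fused))

# Toy 3-best list: binary bag-of-words vectors plus toy ASR scores.
hyps = [rng.integers(0, 2, VOCAB).astype(float) for _ in range(3)]
asr = [-12.3, -11.8, -12.0]  # toy acoustic+LM scores (assumptions)
best = rerank(hyps, asr)
print("best hypothesis index:", best)
```

In practice the RBM score would come from a trained model and the fusion weight would be tuned on development data; the point here is only the scoring-and-argmax structure of hypothesis reranking.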


CITATION STYLE

APA

Ma, Y., Cambria, E., & Bigot, B. (2018). ASR hypothesis reranking using prior-informed restricted Boltzmann machine. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10761 LNCS, pp. 503–514). Springer Verlag. https://doi.org/10.1007/978-3-319-77113-7_39
