Masked Audio Text Encoders are Effective Multi-Modal Rescorers

Abstract

Masked Language Models (MLMs) have proven effective for second-pass rescoring in Automatic Speech Recognition (ASR) systems. In this work, we propose the Masked Audio Text Encoder (MATE), a multi-modal masked language model rescorer that incorporates acoustic representations into the input space of the MLM. We adopt contrastive learning to align the modalities effectively by learning shared representations. We show that using a multi-modal rescorer benefits domain generalization of the ASR system when target-domain data is unavailable. MATE reduces word error rate (WER) by 4%-16% on in-domain and 3%-7% on out-of-domain datasets over the text-only baseline. Additionally, with a very limited amount of training data (0.8 hours), MATE achieves a WER reduction of 8%-23% over the first-pass baseline.
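The contrastive alignment the abstract mentions can be illustrated with a symmetric InfoNCE-style objective over paired audio/text embeddings: matched pairs sit on the diagonal of a similarity matrix and are pulled together, while mismatched pairs are pushed apart. The sketch below is a hypothetical, dependency-free illustration of this general technique, not the paper's actual loss or code; the function name, temperature value, and plain-list embeddings are all assumptions for the example.

```python
import math

def info_nce(audio_embs, text_embs, temperature=0.1):
    """Symmetric InfoNCE-style contrastive loss for aligned
    audio/text embedding pairs (illustrative sketch only).

    audio_embs[i] and text_embs[i] are assumed to come from the
    same utterance, so index i is the positive pair.
    """
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def norm(u):
        return math.sqrt(dot(u, u)) or 1.0

    # Temperature-scaled cosine similarity between every audio/text pair.
    sims = [[dot(a, t) / (norm(a) * norm(t)) / temperature
             for t in text_embs] for a in audio_embs]

    def cross_entropy(row, target):
        # -log softmax(row)[target], computed stably via the max trick.
        m = max(row)
        log_z = m + math.log(sum(math.exp(s - m) for s in row))
        return log_z - row[target]

    n = len(audio_embs)
    # Audio-to-text direction: each audio picks its own text.
    a2t = sum(cross_entropy(sims[i], i) for i in range(n)) / n
    # Text-to-audio direction: each text picks its own audio.
    t2a = sum(cross_entropy([sims[j][i] for j in range(n)], i)
              for i in range(n)) / n
    return (a2t + t2a) / 2
```

Minimizing this loss drives the two encoders toward a shared representation space, which is the stated purpose of the contrastive step: correctly paired embeddings score lower loss than shuffled ones.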

Citation (APA)

Cai, J., Sunkara, M., Li, X., Bhatia, A., Pan, X., & Bodapati, S. (2023). Masked Audio Text Encoders are Effective Multi-Modal Rescorers. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 10718–10730). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-acl.682
