We introduce a new cross-modal fusion technique for generative error correction in automatic speech recognition (ASR). Our method leverages both acoustic information and external linguistic representations to generate accurate transcriptions from speech context, marking a step toward a new paradigm for generative error correction over n-best hypotheses. Unlike existing ranking-based rescoring methods, our approach uses distinct initialization techniques and parameter-efficient algorithms to boost ASR performance from pre-trained speech and text models. Evaluating our fusion technique across diverse ASR datasets, we demonstrate a 37.66% relative word error rate (WER) improvement over the n-best Oracle. To encourage future research, we have open-sourced our code and pre-trained models at https://github.com/Srijith-rkr/Whispering-LLaMA.
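The reported figure is a *relative* WER improvement, i.e. the reduction in WER expressed as a fraction of the baseline WER. A minimal sketch of both computations follows; the concrete numbers in the comments are illustrative, not taken from the paper's result tables.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Rolling-row dynamic program for Levenshtein distance over words.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i] + [0] * len(hyp)
        for j, h in enumerate(hyp, 1):
            cur[j] = min(prev[j] + 1,            # deletion
                         cur[j - 1] + 1,         # insertion
                         prev[j - 1] + (r != h)) # substitution (0 if match)
        prev = cur
    return prev[-1] / max(len(ref), 1)

def relative_improvement(baseline_wer: float, system_wer: float) -> float:
    """Relative WER improvement (%) of a system over a baseline."""
    return (baseline_wer - system_wer) / baseline_wer * 100.0

# One substitution out of four reference words -> WER = 0.25.
print(wer("a b c d", "a x c d"))
# E.g. a drop from 10.0% to 6.234% WER is a 37.66% relative improvement.
print(relative_improvement(10.0, 6.234))
```

Relative improvement is the standard way ASR papers report error-correction gains, since it normalizes away differences in baseline difficulty across datasets.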
CITATION STYLE
Radhakrishnan, S., Yang, C. H. H., Khan, S. A., Kumar, R., Kiani, N. A., Gomez-Cabrero, D., & Tegner, J. N. (2023). Whispering LLaMA: A Cross-Modal Generative Error Correction Framework for Speech Recognition. In EMNLP 2023 - 2023 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 10007–10016). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.emnlp-main.618