Multiresolution and multimodal speech recognition with transformers

ISSN: 0736-587X

Abstract

This paper presents an audio-visual automatic speech recognition (AV-ASR) system built on a Transformer-based architecture. We focus in particular on the scene context provided by the visual information to ground the ASR. We extract representations for audio features in the encoder layers of the transformer and fuse video features using an additional cross-modal multi-head attention layer. Additionally, we incorporate a multitask training criterion for multiresolution ASR, where we train the model to generate transcriptions at both the character and subword level. Experimental results on the How2 dataset indicate that multiresolution training can speed up convergence by around 50% and relatively improves word error rate (WER) by up to 18% over subword prediction models. Furthermore, incorporating visual information improves performance, with relative gains of up to 3.76% over audio-only models. Our results are comparable to state-of-the-art Listen, Attend and Spell-based architectures.
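
The abstract describes the architecture only at a high level. The PyTorch sketch below is a minimal illustration of the two ideas it names: an encoder layer in which audio representations attend to video features through an additional cross-modal multi-head attention block, and a multitask loss that combines character-level and subword-level objectives. All module names, dimensions, and the 0.5 loss weight are illustrative assumptions, not the authors' actual implementation.

    import torch
    import torch.nn as nn

    class CrossModalFusionLayer(nn.Module):
        """Audio self-attention followed by cross-modal attention over video."""

        def __init__(self, d_model=256, n_heads=4):
            super().__init__()
            self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.ff = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                                    nn.Linear(4 * d_model, d_model))
            self.norm1 = nn.LayerNorm(d_model)
            self.norm2 = nn.LayerNorm(d_model)
            self.norm3 = nn.LayerNorm(d_model)

        def forward(self, audio, video):
            # Standard encoder self-attention over the audio sequence.
            a, _ = self.self_attn(audio, audio, audio)
            audio = self.norm1(audio + a)
            # Cross-modal fusion: audio queries attend to video keys/values.
            v, _ = self.cross_attn(audio, video, video)
            audio = self.norm2(audio + v)
            return self.norm3(audio + self.ff(audio))

    def multiresolution_loss(char_logits, char_targets, sub_logits, sub_targets,
                             char_weight=0.5):
        """Joint character- and subword-level cross-entropy (weight is assumed)."""
        ce = nn.CrossEntropyLoss(ignore_index=0)  # 0 = padding index (assumption)
        char_loss = ce(char_logits.transpose(1, 2), char_targets)
        sub_loss = ce(sub_logits.transpose(1, 2), sub_targets)
        return char_weight * char_loss + (1 - char_weight) * sub_loss

    # Example usage with toy shapes (batch=2, 50 audio frames, 10 video frames):
    layer = CrossModalFusionLayer()
    fused = layer(torch.randn(2, 50, 256), torch.randn(2, 10, 256))

The sketch fuses the modalities inside the encoder rather than by simple feature concatenation, mirroring the "additional cross-modal multi-head attention layer" the abstract describes; the weighted sum of the two losses is one straightforward way to realize the multitask criterion.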

Citation (APA)

Paraskevopoulos, G., Parthasarathy, S., Khare, A., & Sundaram, S. (2020). Multiresolution and multimodal speech recognition with transformers. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 2381–2387). Association for Computational Linguistics (ACL).
