CMU’s IWSLT 2022 Dialect Speech Translation System


Abstract

This paper describes CMU’s submissions to the IWSLT 2022 dialect speech translation (ST) shared task for translating Tunisian-Arabic speech to English text. We use additional paired Modern Standard Arabic (MSA) data to directly improve the speech recognition (ASR) and machine translation (MT) components of our cascaded systems. We also augment the paired ASR data with pseudo translations via sequence-level knowledge distillation from an MT model and use these artificial triplet ST data to improve our end-to-end (E2E) systems. Our E2E models are based on the Multi-Decoder architecture with searchable hidden intermediates. We extend the Multi-Decoder by orienting the speech encoder towards the target language, applying ST supervision as a hierarchical connectionist temporal classification (CTC) multi-task objective. During inference, we apply joint decoding of the ST CTC and ST autoregressive decoder branches of our modified Multi-Decoder. Finally, we apply ROVER voting, posterior combination, and minimum Bayes-risk decoding with combined N-best lists to ensemble our various cascaded and E2E systems. Our best systems reached 20.8 and 19.5 BLEU on test2 (blind) and test1, respectively. Without any additional MSA data, we reached 20.4 and 19.2 on the same test sets.
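The final ensembling step includes minimum Bayes-risk (MBR) decoding over the combined N-best lists of the different systems: each hypothesis is scored by its expected similarity to all other hypotheses, and the hypothesis with the highest expected utility is selected. The sketch below illustrates the idea; the unigram-F1 similarity is a stand-in for the paper's actual utility metric (typically sentence-level BLEU), and uniform weighting of hypotheses is an assumption.

```python
# Minimal MBR-decoding sketch over a combined N-best list.
# Assumption: unigram F1 as a cheap proxy for the sentence-level
# similarity metric; no posterior weighting of hypotheses.
from collections import Counter

def unigram_f1(hyp: str, ref: str) -> float:
    """Unigram-overlap F1 between two whitespace-tokenized sentences."""
    h, r = Counter(hyp.split()), Counter(ref.split())
    overlap = sum((h & r).values())  # multiset intersection size
    if overlap == 0:
        return 0.0
    precision = overlap / sum(h.values())
    recall = overlap / sum(r.values())
    return 2 * precision * recall / (precision + recall)

def mbr_decode(nbest: list[str]) -> str:
    """Return the hypothesis with maximal expected utility
    (summed similarity to every other hypothesis in the list)."""
    def expected_utility(h: str) -> float:
        return sum(unigram_f1(h, other) for other in nbest if other is not h)
    return max(nbest, key=expected_utility)

if __name__ == "__main__":
    nbest = [
        "the cat sat on the mat",
        "the cat sat on a mat",
        "a dog sat on the mat",
    ]
    # The "consensus" hypothesis wins, even if no system ranked it first.
    print(mbr_decode(nbest))  # -> "the cat sat on a mat"
```

In practice the N-best lists of all cascaded and E2E systems are concatenated before this selection, so MBR acts as a consensus vote across heterogeneous models.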

Citation (APA)

Yan, B., Fernandes, P., Dalmia, S., Shi, J., Peng, Y., Berrebbi, D., … Watanabe, S. (2022). CMU’s IWSLT 2022 Dialect Speech Translation System. In IWSLT 2022 - 19th International Conference on Spoken Language Translation, Proceedings of the Conference (pp. 298–307). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.iwslt-1.27
