FST: the FAIR Speech Translation System for the IWSLT21 Multilingual Shared Task


Abstract

In this paper, we describe our end-to-end multilingual speech translation system submitted to the IWSLT 2021 evaluation campaign on the Multilingual Speech Translation shared task. Our system is built by leveraging transfer learning across modalities, tasks and languages. First, we leverage general-purpose multilingual modules pretrained with large amounts of unlabelled and labelled data. We further enable knowledge transfer from the text task to the speech task by training the two tasks jointly. Finally, our multilingual model is finetuned on speech translation task-specific data to achieve the best translation results. Experimental results show that our system outperforms the reported systems, including both end-to-end and cascade-based approaches, by a large margin. In some translation directions, our speech translation results on the public Multilingual TEDx test set are even comparable to those from a strong text-to-text translation system that uses the oracle speech transcripts as input.
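
To make the joint-training step described in the abstract more concrete, below is a minimal conceptual sketch (not the authors' implementation) of training a shared decoder on a speech-to-text translation task and a text-to-text translation task at the same time, so that knowledge can transfer from the text task to the speech task. All module names, layer sizes, and the dummy data are assumptions made for this illustration; the actual system instead builds on large pretrained multilingual speech and text modules and is finetuned on the shared-task data.

# Conceptual sketch of joint speech/text multitask training (PyTorch).
# Encoders are small stand-ins for the pretrained multilingual modules;
# the decoder is shared across the two tasks and both losses are summed.
import torch
import torch.nn as nn

class JointSpeechTextModel(nn.Module):
    def __init__(self, vocab_size=1000, d_model=256):
        super().__init__()
        # Speech encoder: stand-in for a pretrained speech representation module.
        self.speech_encoder = nn.Sequential(
            nn.Linear(80, d_model), nn.ReLU(), nn.Linear(d_model, d_model)
        )
        # Text encoder: stand-in for a pretrained multilingual text encoder.
        self.text_embed = nn.Embedding(vocab_size, d_model)
        self.text_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2,
        )
        # Shared decoder: the component through which text-task knowledge
        # transfers to the speech task.
        self.tgt_embed = nn.Embedding(vocab_size, d_model)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.out_proj = nn.Linear(d_model, vocab_size)

    def forward(self, src, tgt_in, modality):
        if modality == "speech":              # src: (batch, frames, 80) filterbank features
            memory = self.speech_encoder(src)
        else:                                 # src: (batch, src_len) token ids
            memory = self.text_encoder(self.text_embed(src))
        dec = self.decoder(self.tgt_embed(tgt_in), memory)
        return self.out_proj(dec)             # (batch, tgt_len, vocab_size)

model = JointSpeechTextModel()
loss_fn = nn.CrossEntropyLoss()
optim = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative joint step on dummy data.
speech = torch.randn(2, 50, 80)               # speech features
text_src = torch.randint(0, 1000, (2, 12))    # source transcript tokens
tgt_in = torch.randint(0, 1000, (2, 10))      # shifted target tokens
tgt_out = torch.randint(0, 1000, (2, 10))     # gold target tokens

optim.zero_grad()
logits_st = model(speech, tgt_in, "speech")   # speech translation task
logits_mt = model(text_src, tgt_in, "text")   # text translation task
loss = (loss_fn(logits_st.transpose(1, 2), tgt_out)
        + loss_fn(logits_mt.transpose(1, 2), tgt_out))
loss.backward()
optim.step()

In this sketch the two tasks share the decoder parameters, so gradients from the (typically more abundant) text translation data regularize and improve the speech translation direction; the abstract's final stage then corresponds to continuing training on speech translation data only.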

Cite

APA

Tang, Y., Gong, H., Li, X., Wang, C., Pino, J., Schwenk, H., & Goyal, N. (2021). FST: the FAIR Speech Translation System for the IWSLT21 Multilingual Shared Task. In IWSLT 2021 - 18th International Conference on Spoken Language Translation, Proceedings (pp. 131–137). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.iwslt-1.14
