VUS at IWSLT 2021: A Finetuned Pipeline for Offline Speech Translation


Abstract

In this technical report, we describe the fine-tuned ASR-MT pipeline used for the IWSLT 2021 offline speech translation shared task. We filter out less useful speech samples by checking their WER with an ASR model, and further train a wav2vec- and Transformer-based ASR module on the filtered data. In addition, we cleanse errors that could interfere with the machine translation process and use the cleaned data to train a Transformer-based MT module. In the inference phase, we use a sentence boundary detection model trained on constrained data to merge fragmentary ASR outputs into full sentences; the merged sentences are then post-processed using part-of-speech information, and the final translation is produced by the trained MT module. The pipeline achieves BLEU 20.37 on the dev set and BLEU 20.9 on the test set.
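The WER-based data filtering step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each training sample is a (reference transcript, ASR hypothesis) pair, and the 0.5 WER threshold is a hypothetical value chosen for the example.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance divided by
    the number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

def filter_samples(samples, max_wer=0.5):
    """Keep only (reference, hypothesis) pairs whose WER with the
    baseline ASR model falls below the (assumed) threshold."""
    return [(ref, hyp) for ref, hyp in samples if wer(ref, hyp) < max_wer]
```

Samples whose baseline-ASR hypothesis diverges too far from the reference (high WER) are dropped before the ASR module is further trained on the remaining data.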

Citation (APA)
Jo, Y. R., Moon, Y. K., Jung, M., Choi, J., Moon, J., & Cho, W. I. (2021). VUS at IWSLT 2021: A Finetuned Pipeline for Offline Speech Translation. In IWSLT 2021 - 18th International Conference on Spoken Language Translation, Proceedings (pp. 120–124). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.iwslt-1.12
