Improving Speech Translation by Fusing Speech and Text

Citations: 1
Readers: 6 (Mendeley users who have this article in their library)

Abstract

In speech translation, leveraging multimodal data to improve model performance and address the limitations of individual modalities has proven highly effective. In this paper, we harness the complementary strengths of speech and text to improve speech translation. However, because speech and text are disparate modalities, we observe three aspects of the modality gap that impede their integration in a speech translation model. To bridge these gaps, we propose Fuse-Speech-Text (FuseST), a cross-modal model that supports three distinct input modalities for translation: speech, text, and fused speech-text. We leverage multiple techniques for cross-modal alignment and conduct a comprehensive analysis of their impact on speech translation, machine translation, and fused speech-text translation. We evaluate FuseST on the MuST-C, GigaST, and newstest benchmarks. Experiments show that the proposed FuseST achieves an average of 34.0 BLEU on MuST-C En→De/Es/Fr, surpassing the previous state of the art by 1.1 BLEU. Further experiments demonstrate that FuseST does not degrade on the MT task, as was observed in previous works; instead, it yields an average improvement of 3.2 BLEU over the pre-trained MT model. Code is available at https://github.com/WenbiaoYin/FuseST.
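The abstract describes a single model that accepts speech, text, or a fused speech-text sequence. As a rough illustration only, and not the authors' implementation (see the linked repository for that), the sketch below shows one plausible way such a three-modality interface could be wired up in PyTorch; all module choices, names, and dimensions here are assumptions:

import torch
import torch.nn as nn

class FuseSTSketch(nn.Module):
    """Illustrative sketch of a cross-modal translation model that accepts
    speech features, text token ids, or a fused sequence of both.
    Shapes and module choices are assumptions, not the paper's design."""

    def __init__(self, d_model=256, vocab_size=1000, speech_dim=80):
        super().__init__()
        # Project filterbank frames (assumed 80-dim) into the shared space.
        self.speech_proj = nn.Linear(speech_dim, d_model)
        # Embed source-text tokens into the same shared space.
        self.text_embed = nn.Embedding(vocab_size, d_model)
        # A shared Transformer encodes whichever modality (or fusion) arrives.
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2,
        )
        # Stand-in for a full autoregressive decoder, omitted for brevity.
        self.decoder_head = nn.Linear(d_model, vocab_size)

    def forward(self, speech=None, text=None):
        parts = []
        if speech is not None:                 # (batch, frames, speech_dim)
            parts.append(self.speech_proj(speech))
        if text is not None:                   # (batch, tokens) of token ids
            parts.append(self.text_embed(text))
        if not parts:
            raise ValueError("need speech, text, or both")
        # Fused speech-text mode: concatenate along the time axis so the
        # encoder can attend across both modalities jointly.
        fused = torch.cat(parts, dim=1)
        hidden = self.encoder(fused)
        return self.decoder_head(hidden)       # per-position vocab logits

model = FuseSTSketch()
speech = torch.randn(2, 50, 80)                # 2 utterances, 50 frames each
text = torch.randint(0, 1000, (2, 12))         # 2 transcripts, 12 tokens each
logits = model(speech=speech, text=text)       # fused speech-text translation

Passing only speech or only text exercises the other two input modalities the abstract mentions; the key design point is that both modalities are projected into one shared encoder space before fusion.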

Cite (APA)

Yin, W., Liu, Z., Zhao, C., Wang, T., Tong, J., & Ye, R. (2023). Improving Speech Translation by Fusing Speech and Text. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 6262–6273). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-emnlp.414
