Learning Shared Semantic Space for Speech-to-Text Translation


Abstract

Despite its numerous potential applications and great impact, end-to-end speech translation (ST) has long been treated as an independent task, failing to fully draw strength from the rapid advances of its sibling, text machine translation (MT). Because text and audio inputs are represented differently, the modality gap has rendered MT data and its end-to-end models incompatible with their ST counterparts. To overcome this obstacle, we propose to bridge the representation gap with Chimera. By projecting audio and text features into a common semantic representation, Chimera unifies the MT and ST tasks and boosts performance on the ST benchmarks MuST-C and Augmented LibriSpeech to a new state of the art. Specifically, Chimera obtains 27.1 BLEU on MuST-C EN-DE, improving the SOTA by a +1.9 BLEU margin. Further experimental analyses demonstrate that the shared semantic space indeed conveys common knowledge between these two tasks and thus paves a new way for augmenting training resources across modalities.
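The abstract only sketches the architecture, but the central idea of projecting variable-length audio and text encoder states into a fixed-size shared semantic space can be illustrated with a minimal PyTorch sketch. This is not the authors' released implementation; the module name, hidden size, and number of learned queries below are assumptions chosen for illustration.

```python
import torch
import torch.nn as nn

class SharedSemanticProjection(nn.Module):
    """Pools variable-length encoder states (audio or text) into a fixed
    number of shared semantic vectors via a set of learned queries.
    A sketch of the shared-space idea, not the paper's exact module."""

    def __init__(self, d_model: int = 512, n_queries: int = 64, n_heads: int = 8):
        super().__init__()
        # Learned "semantic" queries shared by both modalities (assumed count).
        self.queries = nn.Parameter(torch.randn(n_queries, d_model))
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, encoder_states: torch.Tensor) -> torch.Tensor:
        # encoder_states: (batch, seq_len, d_model), from either the speech
        # encoder or the text encoder; seq_len may differ across modalities.
        batch = encoder_states.size(0)
        q = self.queries.unsqueeze(0).expand(batch, -1, -1)
        shared, _ = self.attn(q, encoder_states, encoder_states)
        return shared  # (batch, n_queries, d_model): fixed-size representation

# Usage: audio and text features of different lengths map to the same
# fixed-size space, so one translation decoder can consume either modality.
proj = SharedSemanticProjection()
audio_feats = torch.randn(2, 400, 512)  # e.g. acoustic frames
text_feats = torch.randn(2, 30, 512)    # e.g. source-token embeddings
shared_audio = proj(audio_feats)        # (2, 64, 512)
shared_text = proj(text_feats)          # (2, 64, 512)
```

Because both modalities end up in the same fixed-size space, MT data can supervise the same downstream decoder that ST uses, which is what allows training resources to be shared across modalities.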

Cite (APA)

Han, C., Wang, M., Ji, H., & Li, L. (2021). Learning Shared Semantic Space for Speech-to-Text Translation. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (pp. 2214–2225). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-acl.195
