AdaST: Dynamically Adapting Encoder States in the Decoder for End-to-End Speech-to-Text Translation


Abstract

In end-to-end speech translation, the acoustic representations learned by the encoder are usually fixed and static from the perspective of the decoder, which is undesirable for dealing with the cross-modal and cross-lingual challenges in speech translation. In this paper, we show the benefits of varying acoustic states according to decoder hidden states and propose an adaptive speech-to-text translation model that dynamically adapts acoustic states in the decoder. We concatenate the acoustic state sequence with the target word embedding sequence and feed the concatenated sequence into the subsequent blocks of the decoder. To model the deep interaction between acoustic states and target hidden states, a speech-text mixed attention sublayer is introduced to replace the conventional cross-attention network. Experimental results on two widely used datasets show that the proposed method significantly outperforms state-of-the-art neural speech translation models.
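
To make the mixed-attention idea concrete, below is a minimal sketch, not the authors' implementation, of a decoder block that runs joint self-attention over the concatenated acoustic-and-target sequence in place of the usual cross-attention. It assumes PyTorch; the module names, dimensions, and in particular the masking scheme (target positions attend causally to earlier targets and to all acoustic states, while acoustic positions attend only to each other to keep teacher-forced training leak-free) are assumptions, and the paper's exact mask and block layout may differ.

```python
# Sketch of a speech-text mixed attention sublayer (assumptions noted above).
import torch
import torch.nn as nn


def mixed_attention_mask(src_len: int, tgt_len: int) -> torch.Tensor:
    """Boolean mask for the concatenated [acoustic; target] sequence.

    True marks positions that may NOT be attended to (the convention
    used by nn.MultiheadAttention's attn_mask).
    """
    total = src_len + tgt_len
    mask = torch.zeros(total, total, dtype=torch.bool)  # False = may attend
    # Target positions attend causally to earlier targets
    # (and to all acoustic states, which stay unmasked).
    mask[src_len:, src_len:] = torch.triu(
        torch.ones(tgt_len, tgt_len, dtype=torch.bool), diagonal=1
    )
    # Assumption: acoustic positions attend only to other acoustic
    # positions, so teacher-forced training sees no future targets;
    # the paper may adapt acoustic states to decoder states differently.
    mask[:src_len, src_len:] = True
    return mask


class MixedAttentionBlock(nn.Module):
    """One decoder block operating on the concatenated sequence."""

    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.ReLU(),
            nn.Linear(4 * d_model, d_model),
        )
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, acoustic: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        # Concatenate acoustic states and target embeddings along time,
        # so both are updated by the same (masked) self-attention.
        x = torch.cat([acoustic, targets], dim=1)
        mask = mixed_attention_mask(acoustic.size(1), targets.size(1)).to(x.device)
        h, _ = self.attn(x, x, x, attn_mask=mask)
        x = self.norm1(x + h)
        return self.norm2(x + self.ffn(x))


# Usage: batch of 2, 100 acoustic frames, 20 target tokens, d_model = 512.
block = MixedAttentionBlock()
out = block(torch.randn(2, 100, 512), torch.randn(2, 20, 512))
print(out.shape)  # torch.Size([2, 120, 512])
```

Because acoustic states pass through every decoder block alongside the target states, they are re-encoded layer by layer rather than frozen at the encoder output, which is the "dynamically adapting encoder states" behavior the abstract describes.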

Citation (APA)

Huang, W., Wang, D., & Xiong, D. (2021). AdaST: Dynamically Adapting Encoder States in the Decoder for End-to-End Speech-to-Text Translation. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (pp. 2539–2545). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-acl.224
