Bridging the gap between pre-training and fine-tuning for end-to-end speech translation

Abstract

End-to-end speech translation, a hot topic in recent years, aims to translate a segment of audio into a specific language with a single end-to-end model. Conventional approaches employ multi-task learning and pre-training for this task, but they suffer from a large gap between pre-training and fine-tuning. To address this issue, we propose a Tandem Connectionist Encoding Network (TCEN), which bridges the gap by reusing all subnets during fine-tuning, keeping the roles of the subnets consistent, and pre-training the attention module. Furthermore, we propose two simple but effective methods to guarantee that the speech encoder outputs and the MT encoder inputs are consistent in both semantic representation and sequence length. Experimental results show that our model yields significant improvements on En-De and En-Fr translation regardless of the backbone.
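The abstract chains a pre-trained speech encoder directly into a pre-trained MT encoder, which requires reconciling their sequence lengths (audio frames are far longer than token sequences). The abstract does not spell out the mechanism, so the sketch below is a minimal, assumption-laden illustration in PyTorch: it assumes a CTC-style argmax rule (drop blanks, merge consecutive repeats) to shrink frame-level states to roughly token-level length before the MT encoder. All names (`TandemEncoder`, `shrink`, `BLANK`), module sizes, and the shrinking rule itself are illustrative choices, not the paper's exact implementation.

```python
# Minimal sketch of a tandem encoder with CTC-style length shrinking.
# Assumption: a linear CTC head (trained during ASR pre-training) lets us
# pick one representative frame per predicted token, so the reused MT
# encoder sees an input whose length resembles the text it was pre-trained on.
import torch
import torch.nn as nn

BLANK = 0  # assumed CTC blank index

class TandemEncoder(nn.Module):
    def __init__(self, feat_dim=80, hidden=256, vocab=1000):
        super().__init__()
        self.speech_enc = nn.LSTM(feat_dim, hidden, num_layers=2, batch_first=True)
        self.ctc_head = nn.Linear(hidden, vocab)   # hypothetical CTC projection
        self.mt_enc = nn.LSTM(hidden, hidden, num_layers=2, batch_first=True)

    def shrink(self, frames, logits):
        # Keep one frame per CTC segment: drop blanks, merge consecutive repeats.
        ids = logits.argmax(-1)                    # (T,) greedy CTC labels
        keep, prev = [], BLANK
        for t, i in enumerate(ids.tolist()):
            if i != BLANK and i != prev:
                keep.append(t)
            prev = i
        if not keep:                               # degenerate all-blank case
            keep = [0]
        return frames[keep]                        # (T', hidden), T' << T

    def forward(self, feats):                      # feats: (T, feat_dim), one utterance
        h, _ = self.speech_enc(feats.unsqueeze(0))
        h = h.squeeze(0)                           # (T, hidden)
        shrunk = self.shrink(h, self.ctc_head(h))  # token-like length
        out, _ = self.mt_enc(shrunk.unsqueeze(0))  # reused MT encoder, roles unchanged
        return out.squeeze(0)

enc = TandemEncoder()
print(enc(torch.randn(200, 80)).shape)             # e.g. torch.Size([T', 256])
```

Shrinking before the MT encoder keeps that subnet's input length and role close to what it saw during MT pre-training, which is the kind of consistency the abstract refers to; the semantic-consistency method is a separate constraint not modeled in this sketch.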

Citation (APA)

Wang, C., Wu, Y., Liu, S., Yang, Z., & Zhou, M. (2020). Bridging the gap between pre-training and fine-tuning for end-to-end speech translation. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 9161–9168). AAAI Press. https://doi.org/10.1609/aaai.v34i05.6452
