Listen, Understand and Translate: Triple Supervision Decouples End-to-end Speech-to-text Translation


Abstract

An end-to-end speech-to-text translation (ST) system takes audio in a source language and outputs text in a target language. Existing methods are limited by the amount of parallel corpora available. Can we build a system that fully utilizes the signals in a parallel ST corpus? We are inspired by the human understanding system, which is composed of auditory perception and cognitive processing. In this paper, we propose Listen-Understand-Translate (LUT), a unified framework with triple supervision signals that decouples the end-to-end speech-to-text translation task. LUT guides the acoustic encoder to extract as much information as possible from the auditory input. In addition, LUT utilizes a pre-trained BERT model to enforce the upper encoder to produce as much semantic information as possible, without requiring extra data. We perform experiments on a diverse set of speech translation benchmarks, including Librispeech English-French, IWSLT English-German and TED English-Chinese. Our results demonstrate that LUT achieves state-of-the-art performance, outperforming previous methods. The code is available at https://github.com/dqqcasia/st.
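To make the decoupling concrete, below is a minimal PyTorch sketch of a triple-supervision objective in the spirit of the abstract. Every specific choice here is an illustrative assumption rather than the paper's exact design: the LSTM encoders, the CTC loss on source transcripts for the "listen" stage, the MSE pull toward frozen BERT sentence vectors for the "understand" stage, the toy teacher-forced decoder for the "translate" stage, and the loss weights.

import torch
import torch.nn as nn
import torch.nn.functional as F

D, BERT_D, SRC_V, TGT_V = 256, 768, 1000, 1000  # illustrative sizes

class LUTSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # "Listen": acoustic encoder over 80-dim filterbank frames.
        self.acoustic = nn.LSTM(80, D, num_layers=2, batch_first=True)
        self.ctc_head = nn.Linear(D, SRC_V)            # CTC over source tokens
        # "Understand": semantic encoder whose states are pulled toward BERT.
        self.semantic = nn.LSTM(D, D, num_layers=2, batch_first=True)
        self.to_bert = nn.Linear(D, BERT_D)            # project to BERT dim
        # "Translate": toy teacher-forced decoder over target tokens.
        self.embed = nn.Embedding(TGT_V, D)
        self.decoder = nn.LSTM(D, D, batch_first=True)
        self.out = nn.Linear(D, TGT_V)

    def forward(self, feats, tgt_in):
        h_a, _ = self.acoustic(feats)                  # (B, T, D) acoustic states
        h_s, (h_n, c_n) = self.semantic(h_a)           # (B, T, D) semantic states
        # Initialize the decoder from the semantic encoder's final layer state;
        # a real model would attend over h_s instead of pooling it away.
        dec, _ = self.decoder(self.embed(tgt_in), (h_n[-1:], c_n[-1:]))
        return h_a, h_s, self.out(dec)                 # logits: (B, S, TGT_V)

def triple_loss(model, feats, feat_lens, src_ids, src_lens,
                bert_vec, tgt_in, tgt_out):
    h_a, h_s, logits = model(feats, tgt_in)
    # 1) Acoustic supervision: CTC against the source transcription.
    log_p = F.log_softmax(model.ctc_head(h_a), dim=-1).transpose(0, 1)  # (T, B, V)
    l_asr = F.ctc_loss(log_p, src_ids, feat_lens, src_lens, blank=0)
    # 2) Semantic supervision: match pooled semantic states to frozen BERT
    #    sentence vectors (MSE distance is an assumption).
    l_sem = F.mse_loss(model.to_bert(h_s).mean(dim=1), bert_vec)
    # 3) Translation supervision: cross-entropy on target tokens.
    l_st = F.cross_entropy(logits.reshape(-1, TGT_V), tgt_out.reshape(-1))
    return l_st + 0.5 * l_asr + 0.5 * l_sem            # weights are illustrative

The point of the sketch is the shape of the objective: each of the three stages receives its own supervision signal from the same parallel ST triple (audio, transcript, translation), so no data beyond the ST corpus itself is needed.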

Cite

APA

Dong, Q., Ye, R., Wang, M., Zhou, H., Xu, S., Xu, B., & Li, L. (2021). Listen, Understand and Translate: Triple Supervision Decouples End-to-end Speech-to-text Translation. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021 (Vol. 14B, pp. 12749–12759). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v35i14.17509
