Abstract
This paper describes AISP-SJTU’s submissions to the IWSLT 2022 Simultaneous Translation task. We participate in the text-to-text and speech-to-text simultaneous translation tracks from English to Mandarin Chinese. We improve the training of the CAAT by training across multiple right-context window sizes, which achieves good online performance without committing to a single right-context window size before training. For the speech-to-text task, the best model we submitted achieves 25.87, 26.21 and 26.45 BLEU in the low, medium and high latency regimes on tst-COMMON, corresponding to 27.94, 28.31 and 28.43 BLEU in the text-to-text task.
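The abstract's core idea, training one model across several right-context window sizes, can be illustrated with a short sketch. The code below is not the authors' released implementation; the candidate window sizes, the model's call signature, and the attention-mask construction are all illustrative assumptions. It merely shows one plausible way to sample a right-context size per training step so a single model is exposed to multiple latency conditions.

```python
# Hedged sketch, assuming a Transformer-style encoder whose forward pass accepts
# an attention mask; model, batch layout and window sizes are hypothetical.
import random
import torch


def right_context_mask(seq_len: int, right_context: int) -> torch.Tensor:
    """Boolean mask where query position i may attend to key positions j <= i + right_context."""
    idx = torch.arange(seq_len)
    # mask[i, j] is True when attention from position i to position j is allowed
    return (idx.unsqueeze(1) + right_context) >= idx.unsqueeze(0)


def training_step(model, batch, optimizer, window_sizes=(0, 2, 4, 8)):
    # Sample a right-context window size for this step so the model learns to
    # perform well under several look-ahead (latency) conditions at inference time.
    r = random.choice(window_sizes)
    mask = right_context_mask(batch["src"].size(1), r)
    loss = model(batch["src"], batch["tgt"], encoder_attn_mask=mask)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item(), r
```

At inference, the same model can then be run with whichever right-context size matches the desired latency regime, rather than retraining a separate model per setting.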
Citation
Zhu, Q., Wu, R., Liu, G., Zhu, X., Chen, X., Zhou, Y., … Yu, K. (2022). The AISP-SJTU Simultaneous Translation System for IWSLT 2022. In IWSLT 2022 - 19th International Conference on Spoken Language Translation, Proceedings of the Conference (pp. 208–215). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.iwslt-1.16