Abstract
Simultaneous translation requires beginning translation before the speaker has finished speaking, which creates a trade-off between latency and accuracy. In this work, we focus on prefix-to-prefix translation and propose a method to extract alignments between bilingual prefix pairs. We use these alignments to segment a streaming input and to fine-tune a translation model. In our experiments on the IWSLT simultaneous translation benchmark, the proposed method achieved higher BLEU scores than the baselines in low-latency ranges.
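The segmentation step described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name `segment_stream` and the example boundary indices are assumptions, standing in for boundaries that would in practice be derived from the extracted prefix alignments.

```python
# Illustrative sketch: splitting a streaming source-token sequence at
# segment boundaries (assumed here to come from prefix alignments).

def segment_stream(tokens, boundaries):
    """Split a token stream at the given boundary indices.

    tokens:     list of source tokens received so far
    boundaries: sorted indices where a translatable prefix ends
    """
    segments, start = [], 0
    for b in boundaries:
        segments.append(tokens[start:b])
        start = b
    if start < len(tokens):  # trailing tokens after the last boundary
        segments.append(tokens[start:])
    return segments

# Hypothetical German input with assumed boundaries after tokens 2 and 5.
src = "ich habe heute einen Apfel gegessen".split()
print(segment_stream(src, [2, 5]))
```

Each resulting segment would be passed to the translation model as soon as it is complete, so the granularity of the boundaries directly controls the latency–accuracy trade-off.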
Citation
Kano, Y., Sudoh, K., & Nakamura, S. (2022). Simultaneous Neural Machine Translation with Prefix Alignment. In IWSLT 2022 - 19th International Conference on Spoken Language Translation, Proceedings of the Conference (pp. 22–31). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.iwslt-1.3