In-Image Machine Translation (IIMT) aims to convert images containing text from one language to another. Traditional approaches to this task are cascade methods, which apply optical character recognition (OCR) followed by neural machine translation (NMT) and text rendering. However, cascade methods suffer from the compounding errors of OCR and NMT, which degrade translation quality. In this paper, we propose an end-to-end model in place of the OCR, NMT, and text rendering pipeline. Our neural architecture adopts an encoder-decoder paradigm with segmented pixel sequences as inputs and outputs. Through end-to-end training, our model yields improvements along several dimensions: (i) it achieves higher translation quality by avoiding error propagation, (ii) it is robust to out-of-domain data, and (iii) it is insensitive to incomplete words. To validate the effectiveness of our method and to support future research, we construct a dataset containing 4M pairs of De-En images and train our end-to-end model on it. The experimental results show that our approach outperforms both the cascade method and the current end-to-end model.
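To make the architecture concrete, below is a minimal sketch in PyTorch (our illustration, not the authors' released code) of a segmented pixel sequence-to-sequence model: the source image is cut into fixed-width vertical pixel segments, each flattened segment is linearly embedded, and a Transformer encoder-decoder predicts the pixel segments of the translated image. The image height, segment width, model dimension, layer counts, and the MSE regression objective in the usage snippet are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class SegmentedPixelSeq2Seq(nn.Module):
    """Sketch: Transformer encoder-decoder over flattened pixel segments."""

    def __init__(self, img_height=32, seg_width=8, d_model=256):
        super().__init__()
        self.seg_width = seg_width
        seg_dim = img_height * seg_width          # pixels per segment (grayscale assumed)
        self.embed = nn.Linear(seg_dim, d_model)  # pixel segment -> embedding
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=8,
            num_encoder_layers=4, num_decoder_layers=4,
            batch_first=True,
        )
        self.unembed = nn.Linear(d_model, seg_dim)  # embedding -> pixel segment

    def segment(self, img):
        # img: (batch, height, width) -> (batch, n_segments, height * seg_width)
        b, h, w = img.shape
        segs = img.unfold(2, self.seg_width, self.seg_width)  # (b, h, n, seg_w)
        return segs.permute(0, 2, 1, 3).reshape(b, -1, h * self.seg_width)

    def forward(self, src_img, tgt_img):
        # Teacher forcing: the decoder attends to the target segments
        # behind a causal mask and predicts each pixel segment.
        src = self.embed(self.segment(src_img))
        tgt = self.embed(self.segment(tgt_img))
        mask = self.transformer.generate_square_subsequent_mask(tgt.size(1))
        out = self.transformer(src, tgt, tgt_mask=mask)
        return self.unembed(out)
```

A toy forward pass under these assumptions, regressing predicted segments onto the target's pixel segments (a real training loop would shift the decoder input by one segment):

```python
model = SegmentedPixelSeq2Seq()
src = torch.rand(2, 32, 128)   # two source-language text images
tgt = torch.rand(2, 32, 128)   # corresponding target-language images
pred = model(src, tgt)         # (2, 16, 256) predicted pixel segments
loss = nn.functional.mse_loss(pred, model.segment(tgt))
```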
Tian, Y., Li, X., Liu, Z., Guo, Y., & Wang, B. (2023). In-Image Neural Machine Translation with Segmented Pixel Sequence-to-Sequence Model. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 15046–15057). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-emnlp.1004