For many languages, machine translation progress is hindered by the lack of reliable training data. Models are trained on whatever pre-existing datasets are available and then augmented with synthetic data, because it is often not economical to pay for the creation of large-scale datasets. But for low-resource languages, would the creation of a few thousand professionally translated sentence pairs give any benefit? In this paper, we show that it does. We describe a broad data collection effort involving around 6k professionally translated sentence pairs for each of 39 low-resource languages, which we make publicly available. We analyse the gains of models trained on this small but high-quality data, showing that it has a significant impact even when larger but lower-quality pre-existing corpora are used, or when the data is augmented with millions of sentences through back-translation.
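The back-translation augmentation the abstract refers to can be sketched as follows. This is a minimal illustration, not the paper's implementation: `reverse_translate` stands in for a trained target-to-source MT model, and all function names here are assumptions for the sketch.

```python
# Minimal sketch of back-translation data augmentation: monolingual
# target-side text is machine-translated back into the source language,
# yielding synthetic (source, target) pairs that supplement a small
# professionally translated seed corpus.
from typing import Callable, List, Tuple

def backtranslate(
    monolingual_target: List[str],
    reverse_translate: Callable[[str], str],
) -> List[Tuple[str, str]]:
    """Turn monolingual target-side sentences into synthetic pairs."""
    return [(reverse_translate(t), t) for t in monolingual_target]

def build_training_set(
    gold_pairs: List[Tuple[str, str]],
    monolingual_target: List[str],
    reverse_translate: Callable[[str], str],
) -> List[Tuple[str, str]]:
    """Combine the small high-quality seed with synthetic pairs."""
    return gold_pairs + backtranslate(monolingual_target, reverse_translate)

# Stand-in for a real reverse-direction model (a placeholder, not the
# paper's system); a real setup would call an actual MT model here.
stub_reverse = lambda sentence: "<synthetic-source> " + sentence

gold = [("bonjour le monde", "hello world")]
mono = ["good morning", "good night"]
train = build_training_set(gold, mono, stub_reverse)
print(len(train))  # 3: one gold pair plus two synthetic pairs
```

In practice the synthetic side is much larger (millions of sentences in the paper's experiments), and the point of the study is that the small gold seed still measurably improves the resulting models.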
Maillard, J., Gao, C., Kalbassi, E., Sadagopan, K. R., Goswami, V., Koehn, P., … Guzmán, F. (2023). Small Data, Big Impact: Leveraging Minimal Data for Effective Machine Translation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 2740–2756). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-long.154