Machine translation quality estimation (QE) predicts human judgements of a translation hypothesis without seeing the reference. State-of-the-art QE systems based on pretrained language models achieve remarkable correlations with human judgements, yet they are computationally heavy and require human annotations, which are slow and expensive to create. To address these limitations, we define the problem of metric estimation (ME), in which one predicts automated metric scores, also without the reference. We show that even without access to the reference, our model can estimate automated metrics (ρ=60% for BLEU, ρ=51% for other metrics) at the sentence level. Because automated metrics correlate with human judgements, we can leverage the ME task for pre-training a QE model. For the QE task, we find that pre-training on TER is better (ρ=23%) than training from scratch (ρ=20%). Code: github.com/zouharvi/mt-metric-estimation.
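To make the ME setup concrete, below is a minimal, self-contained sketch, not the paper's model (which fine-tunes a pretrained language model): sacrebleu computes sentence-level BLEU and TER targets from references, and a simple TF-IDF + ridge regressor is then fit on the source and hypothesis alone. The example sentences, the "[SEP]" joining convention, and the regressor choice are illustrative assumptions.

```python
"""Toy metric-estimation (ME) sketch: predict sentence-level BLEU of a
hypothesis from (source, hypothesis) only; the reference is used solely
to compute training targets. Illustrative baseline, not the paper's model."""
import sacrebleu
from scipy.stats import pearsonr
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

# Placeholder (source, hypothesis, reference) triples.
train = [
    ("Der Hund bellt.", "The dog barks.", "The dog is barking."),
    ("Es regnet stark.", "It rains strongly.", "It is raining heavily."),
]
test = [
    ("Ich mag Kaffee.", "I like coffee.", "I like coffee."),
    ("Wo ist der Bahnhof?", "Where is train station?", "Where is the train station?"),
    ("Gute Nacht.", "Good knight.", "Good night."),
]

def metric_targets(triples):
    """Reference-based sentence-level scores used as regression targets."""
    bleu = [sacrebleu.sentence_bleu(hyp, [ref]).score for _, hyp, ref in triples]
    ter = [sacrebleu.sentence_ter(hyp, [ref]).score for _, hyp, ref in triples]
    return bleu, ter

def features(triples, vectorizer, fit=False):
    """Encode source and hypothesis jointly; the reference is NOT used here."""
    texts = [f"{src} [SEP] {hyp}" for src, hyp, _ in triples]
    return vectorizer.fit_transform(texts) if fit else vectorizer.transform(texts)

vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(1, 3))
X_train = features(train, vectorizer, fit=True)
X_test = features(test, vectorizer)

y_train_bleu, y_train_ter = metric_targets(train)
y_test_bleu, _ = metric_targets(test)

# Regress sentence-level BLEU from reference-free features.
model = Ridge().fit(X_train, y_train_bleu)
pred = model.predict(X_test)

# The paper evaluates ME with Pearson correlation (rho) between predicted
# and true metric scores; with real data one would do the same here.
rho, _ = pearsonr(pred, y_test_bleu)
print("Pearson rho:", rho)
```

The same targets computed by `metric_targets` (e.g. TER) could serve as a pre-training signal before fine-tuning on human QE judgements, mirroring the transfer described in the abstract.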
CITATION STYLE
Zouhar, V., Dhuliawala, S., Zhou, W., Daheim, N., Kocmi, T., Jiang, Y. E., & Sachan, M. (2023). Poor Man’s Quality Estimation: Predicting Reference-Based MT Metrics Without the Reference. In EACL 2023 - 17th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference (pp. 1303–1317). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.eacl-main.95