Neural metrics have recently received considerable attention from the research community for the automatic evaluation of machine translation. Unlike text-based metrics, which have interpretable and consistent evaluation mechanisms across data sources, the reliability of neural metrics on out-of-distribution data remains a concern due to the disparity between training data and real-world data. This paper addresses the inference bias of neural metrics through uncertainty minimization at test time, without requiring additional data. Our proposed method comprises three steps: uncertainty estimation, test-time adaptation, and inference. Specifically, the model uses the prediction uncertainty of the current data as a signal to update a small fraction of its parameters at test time and subsequently refines the prediction through this optimization. To validate our approach, we apply the proposed method to three representative models and conduct experiments on the WMT21 benchmarks. Results from both in-domain and out-of-distribution evaluations consistently show improvements in correlation performance across the different models. Furthermore, we provide evidence that the proposed method effectively reduces model uncertainty. The code is publicly available at https://github.com/NLP2CT/TaU.
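The three-step loop described above can be illustrated with a minimal sketch. This is a hypothetical PyTorch example, not the released TaU implementation: it assumes a regression-style metric model (here called `metric_model`) that maps a batch of inputs to scalar quality scores, uses Monte Carlo dropout variance as the uncertainty estimate, and treats LayerNorm affine weights as the "small fraction of parameters" updated at test time.

```python
import torch


def mc_dropout_uncertainty(model, batch, n_samples=8):
    """Estimate predictive uncertainty as the variance over stochastic (dropout) forward passes."""
    model.train()  # keep dropout active for Monte Carlo sampling
    preds = torch.stack([model(**batch) for _ in range(n_samples)])  # (n_samples, batch_size)
    return preds.var(dim=0).mean()


def adaptable_parameters(model):
    """Freeze the model and expose only LayerNorm affine parameters for test-time updates."""
    for p in model.parameters():
        p.requires_grad_(False)
    params = []
    for module in model.modules():
        if isinstance(module, torch.nn.LayerNorm):
            for p in module.parameters():
                p.requires_grad_(True)
                params.append(p)
    return params


def adapt_and_predict(model, batch, steps=1, lr=1e-4):
    """Uncertainty estimation -> test-time adaptation -> inference, on a single test batch."""
    optimizer = torch.optim.Adam(adaptable_parameters(model), lr=lr)
    for _ in range(steps):
        loss = mc_dropout_uncertainty(model, batch)  # uncertainty of the current batch
        optimizer.zero_grad()
        loss.backward()  # minimize predictive variance w.r.t. the adaptable parameters
        optimizer.step()
    model.eval()  # deterministic final prediction with the adapted weights
    with torch.no_grad():
        return model(**batch)
```

Under these assumptions, a call such as `adapt_and_predict(metric_model, batch)` would adapt on each incoming test batch before producing its final scores.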
Zhan, R., Liu, X., Wong, D. F., Zhang, C., Chao, L. S., & Zhang, M. (2023). Test-time Adaptation for Machine Translation Evaluation by Uncertainty Minimization. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 807–820). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-long.47