Explainable Quality Estimation: CUNI Eval4NLP Submission

Abstract

This paper describes the CUNI submission to the Explainable Quality Estimation shared task of the 2nd Workshop on Evaluation & Comparison of NLP Systems (Eval4NLP 2021). Quality estimation (QE, also known as reference-free evaluation) predicts the quality of MT output at inference time without access to reference translations. We first build a word-level quality estimation model and then fine-tune it for sentence-level QE. Our models achieve near state-of-the-art results: in word-level QE, we place 2nd and 3rd on the supervised Ro-En and Et-En test sets, respectively, and in sentence-level QE, we improve over the baseline by 8.86% (Ro-En) and 10.6% (Et-En) relative in terms of the Pearson correlation coefficient.
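The abstract only outlines the two-stage recipe, so the sketch below is one plausible realisation rather than the authors' implementation: a pretrained multilingual encoder (XLM-R here, an assumption) with a token-level OK/BAD head for word-level QE, whose encoder is then reused under a pooled regression head for sentence-level QE. The class names, backbone choice, and mean pooling are all hypothetical.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class WordLevelQE(nn.Module):
    """Stage 1: token-level quality tagger that predicts an OK/BAD label
    per MT token. Backbone and head are illustrative assumptions."""
    def __init__(self, backbone: str = "xlm-roberta-base"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(backbone)
        self.word_head = nn.Linear(self.encoder.config.hidden_size, 2)  # OK vs. BAD

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        return self.word_head(hidden)  # (batch, seq_len, 2) per-token logits

class SentenceLevelQE(nn.Module):
    """Stage 2: sentence-level regressor initialised from the word-level
    model, mirroring the fine-tuning step described in the abstract."""
    def __init__(self, word_model: WordLevelQE):
        super().__init__()
        self.encoder = word_model.encoder  # reuse the fine-tuned encoder weights
        self.score_head = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        # Mean-pool token states into one sentence vector (pooling strategy
        # is an assumption, not stated in the abstract).
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (hidden * mask).sum(1) / mask.sum(1).clamp(min=1e-9)
        return self.score_head(pooled).squeeze(-1)  # one quality score per sentence
```

The shared task ranks sentence-level systems by Pearson correlation between predicted and gold quality scores, which can be computed with `scipy.stats.pearsonr(predicted, gold)`.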

Citation

Polák, P., Singh, M., & Bojar, O. (2021). Explainable Quality Estimation: CUNI Eval4NLP Submission. In Eval4NLP 2021 - Evaluation and Comparison of NLP Systems, Proceedings of the 2nd Workshop (pp. 250–255). Association for Computational Linguistics (ACL). https://doi.org/10.26615/978-954-452-056-4_024
