Global Explainability of BERT-Based Evaluation Metrics by Disentangling along Linguistic Factors

Abstract

Evaluation metrics are a key ingredient for progress in text generation systems. In recent years, several BERT-based evaluation metrics have been proposed (including BERTScore, MoverScore, and BLEURT) which correlate much better with human assessments of text generation quality than BLEU or ROUGE, invented two decades ago. However, little is known about what these metrics, which are based on black-box language model representations, actually capture (it is typically assumed they model semantic similarity). In this work, we use a simple regression-based global explainability technique to disentangle metric scores along linguistic factors, including semantics, syntax, morphology, and lexical overlap. We show that the different metrics capture all aspects to some degree, but that they are all substantially sensitive to lexical overlap, just like BLEU and ROUGE. This exposes limitations of these newly proposed metrics, which we also highlight in an adversarial test scenario.
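
To make the approach concrete, the following is a minimal sketch of the regression-based disentangling idea described in the abstract: a black-box metric's scores on hypothesis–reference pairs are regressed on per-pair linguistic factor scores, and the standardized coefficients (together with R²) are read as the metric's global sensitivity to each factor. The factor values, metric scores, and variable names below are placeholders for illustration only; the paper's actual factor measures and data are not reproduced here.

```python
# Hedged sketch: regress a metric's scores on linguistic factor scores and
# interpret the standardized coefficients as global importance weights.
# All inputs here are synthetic placeholders, not the paper's data.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_pairs = 500

# Hypothetical per-pair factor scores in [0, 1]
# (columns: semantics, syntax, morphology, lexical overlap).
factors = rng.uniform(0.0, 1.0, size=(n_pairs, 4))

# Hypothetical scores from a black-box metric (e.g., a BERT-based metric) on the same pairs.
metric_scores = rng.uniform(0.0, 1.0, size=n_pairs)

# Standardize factors so their coefficients are comparable in magnitude.
X = StandardScaler().fit_transform(factors)
reg = LinearRegression().fit(X, metric_scores)

# R² indicates how much of the metric's variance the factors jointly explain.
r2 = reg.score(X, metric_scores)
for name, coef in zip(["semantics", "syntax", "morphology", "lexical overlap"], reg.coef_):
    print(f"{name:>15}: {coef:+.3f}")
print(f"R^2 = {r2:.3f}")
```

With real factor measures, a large coefficient on lexical overlap relative to the other factors would indicate the kind of overlap sensitivity the abstract reports.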

Citation (APA)

Kaster, M., Zhao, W., & Eger, S. (2021). Global explainability of BERT-based evaluation metrics by disentangling along linguistic factors. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021) (pp. 8912–8925). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.emnlp-main.701
