Spurious Correlations in Reference-Free Evaluation of Text Generation


Abstract

Model-based, reference-free evaluation metrics have been proposed as a fast and cost-effective approach to evaluate Natural Language Generation (NLG) systems. Despite promising recent results, we find evidence that reference-free evaluation metrics for summarization and dialog generation may be relying on spurious correlations with measures such as word overlap, perplexity, and length. We further observe that for text summarization, these metrics have high error rates when ranking current state-of-the-art abstractive summarization systems. We demonstrate that these errors can be mitigated by explicitly designing evaluation metrics to avoid spurious features in reference-free evaluation.

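To make the idea of a spurious-correlation check concrete, the sketch below shows one way such an analysis could look. It is only an illustration, not the authors' implementation: the documents, system outputs, metric scores, and the word_overlap helper are all hypothetical, and it assumes a Python environment with SciPy available. It computes the Spearman correlation between a metric's scores and two simple confounds mentioned in the abstract, word overlap with the source and output length.

# A minimal, illustrative sketch (not the paper's exact protocol): check how
# strongly a reference-free metric's scores correlate with simple confounds
# such as output length and word overlap with the source document.
from scipy.stats import spearmanr


def word_overlap(source: str, output: str) -> float:
    """Fraction of output tokens that also appear in the source (whitespace tokenization)."""
    src_tokens = set(source.lower().split())
    out_tokens = output.lower().split()
    if not out_tokens:
        return 0.0
    return sum(t in src_tokens for t in out_tokens) / len(out_tokens)


# Hypothetical data: source documents, system outputs, and placeholder scores
# from the reference-free metric being audited.
sources = [
    "the cat sat on the mat near the kitchen door",
    "stocks fell sharply after the surprise announcement on tuesday",
    "the committee approved the new budget after a long debate",
    "heavy rain caused flooding across several coastal towns",
]
outputs = [
    "the cat sat on the mat",
    "markets dropped following the unexpected news",
    "the committee approved the budget",
    "severe weather led to flooding in coastal areas",
]
metric_scores = [0.91, 0.62, 0.88, 0.70]  # hypothetical scores from the metric under study

overlaps = [word_overlap(s, o) for s, o in zip(sources, outputs)]
lengths = [len(o.split()) for o in outputs]

# A high correlation with these confounds is a warning sign that the metric may
# be rewarding spurious features (copying, verbosity) rather than output quality.
rho_overlap, _ = spearmanr(metric_scores, overlaps)
rho_length, _ = spearmanr(metric_scores, lengths)
print(f"Spearman(metric, word overlap): {rho_overlap:.2f}")
print(f"Spearman(metric, length):       {rho_length:.2f}")

In practice one would run this over a full system-output dataset; if the metric's rankings can largely be reproduced from overlap or length alone, that supports the paper's concern about spurious correlations.
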
Citation (APA)

Durmus, E., Ladhak, F., & Hashimoto, T. (2022). Spurious Correlations in Reference-Free Evaluation of Text Generation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 1443–1454). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.acl-long.102
