Model-based, reference-free evaluation metrics have been proposed as a fast and cost-effective way to evaluate Natural Language Generation (NLG) systems. Despite promising recent results, we find evidence that reference-free evaluation metrics for summarization and dialog generation may rely on spurious correlations with measures such as word overlap, perplexity, and length. We further observe that, for text summarization, these metrics have high error rates when ranking current state-of-the-art abstractive summarization systems. We demonstrate that these errors can be mitigated by explicitly designing reference-free evaluation metrics to avoid spurious features.
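As a concrete illustration (not the authors' implementation), one way to probe for such spurious correlations is to measure the rank correlation between a metric's scores and simple surface features of the outputs. The sketch below, assuming a SciPy environment, computes Spearman correlations with output length and source-output word overlap; the helper names and feature definitions are illustrative choices, and a perplexity feature would additionally require a language model.

```python
# A minimal sketch, not the paper's code: checking whether a reference-free
# metric's scores track spurious surface features such as length and overlap.
# `metric_scores` is assumed to come from whatever metric is under study.
from scipy.stats import spearmanr


def word_overlap(source: str, output: str) -> float:
    """Fraction of output tokens that also appear in the source text."""
    source_tokens = set(source.lower().split())
    output_tokens = output.lower().split()
    if not output_tokens:
        return 0.0
    return sum(t in source_tokens for t in output_tokens) / len(output_tokens)


def spurious_correlations(sources, outputs, metric_scores):
    """Spearman rank correlation between metric scores and two surface features."""
    overlaps = [word_overlap(s, o) for s, o in zip(sources, outputs)]
    lengths = [len(o.split()) for o in outputs]
    overlap_rho, _ = spearmanr(metric_scores, overlaps)
    length_rho, _ = spearmanr(metric_scores, lengths)
    return {"word_overlap": overlap_rho, "length": length_rho}


# Toy usage: high correlations here would suggest the metric partly rewards
# surface features rather than output quality.
sources = ["the cat sat on the mat", "stocks rose sharply on friday", "it rained all day"]
outputs = ["a cat sat on the mat", "markets gained friday", "rain fell all day long"]
scores = [0.9, 0.4, 0.7]  # hypothetical reference-free metric scores
print(spurious_correlations(sources, outputs, scores))
```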
Durmus, E., Ladhak, F., & Hashimoto, T. (2022). Spurious Correlations in Reference-Free Evaluation of Text Generation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 1443–1454). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.acl-long.102