FrugalScore: Learning Cheaper, Lighter and Faster Evaluation Metrics for Automatic Text Generation

Citations: 9 · Mendeley readers: 63

Abstract

Fast and reliable evaluation metrics are key to R&D progress. While traditional natural language generation metrics are fast, they are not very reliable. Conversely, new metrics based on large pretrained language models are much more reliable, but require significant computational resources. In this paper, we propose FrugalScore, an approach to learn a fixed, low-cost version of any expensive NLG metric, while retaining most of its original performance. Experiments with BERTScore and MoverScore on summarization and translation show that FrugalScore is on par with the original metrics (and sometimes better), while having several orders of magnitude fewer parameters and running several times faster. On average over all learned metrics, tasks, and variants, FrugalScore retains 96.8% of the performance, runs 24 times faster, and has 35 times fewer parameters than the original metrics. We make our trained metrics publicly available and easily accessible via Hugging Face, to benefit the entire NLP community and in particular researchers and practitioners with limited resources.
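
Since the trained metrics are distributed through Hugging Face, a FrugalScore model (a small pretrained encoder fine-tuned to regress the expensive teacher metric's score for each candidate–reference pair) can be called like any other evaluation metric. The following is a minimal sketch, assuming the metric is exposed in the Python `evaluate` library under the id "frugalscore" and that compute() returns a "scores" list; the exact identifier, variants, and keyword arguments may differ in the released version.

import evaluate

# Load the distilled metric (metric id is an assumption; the released
# checkpoints may use per-variant identifiers instead).
frugalscore = evaluate.load("frugalscore")

predictions = ["The cat sat on the mat."]
references = ["A cat was sitting on the mat."]

# Each (prediction, reference) pair gets one regressed score that
# approximates the expensive teacher metric (e.g., BERTScore).
results = frugalscore.compute(predictions=predictions, references=references)
print(results["scores"])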

Citation (APA)

Eddine, M. K., Shang, G., Tixier, A. J. P., & Vazirgiannis, M. (2022). FrugalScore: Learning Cheaper, Lighter and Faster Evaluation Metrics for Automatic Text Generation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 1305–1318). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.acl-long.93
