G-EVAL: NLG Evaluation using GPT-4 with Better Human Alignment

233 citations of this article
322 Mendeley readers have this article in their library

Abstract

The quality of texts generated by natural language generation (NLG) systems is hard to measure automatically. Conventional reference-based metrics, such as BLEU and ROUGE, have been shown to correlate relatively poorly with human judgments, especially on tasks that require creativity and diversity. Recent studies suggest using large language models (LLMs) as reference-free metrics for NLG evaluation, which have the benefit of being applicable to new tasks that lack human references. However, these LLM-based evaluators still show lower human correspondence than medium-size neural evaluators. In this work, we present G-EVAL, a framework for using large language models with chain-of-thought (CoT) reasoning and a form-filling paradigm to assess the quality of NLG outputs. We experiment with two generation tasks, text summarization and dialogue generation. We show that G-EVAL with GPT-4 as the backbone model achieves a Spearman correlation of 0.514 with human judgments on the summarization task, outperforming all previous methods by a large margin. We also analyze the behavior of LLM-based evaluators and highlight a potential concern: LLM-based evaluators may be biased toward LLM-generated texts.
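To make the CoT-plus-form-filling recipe concrete, the sketch below shows what a G-EVAL-style coherence evaluator for summarization could look like. The prompt wording, the hardcoded evaluation steps (which the paper generates automatically via chain-of-thought), and the sample-and-average scoring loop (a cheap stand-in for the paper's probability-weighted scoring over the score tokens) are illustrative assumptions, not the authors' exact artifacts.

```python
# Minimal G-EVAL-style sketch: a chain-of-thought evaluation prompt with a
# form-filling score field, sent to an LLM backbone. Prompt text, criterion
# wording, and the scoring loop are assumptions for illustration only.
import re
from openai import OpenAI  # assumes the official openai>=1.0 client

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT_TEMPLATE = """You will be given one summary written for a news article.

Evaluation Criteria:
Coherence (1-5): the collective quality of all sentences in the summary.

Evaluation Steps (the paper auto-generates these with CoT; fixed here for brevity):
1. Read the news article carefully and identify its main topic and key points.
2. Read the summary and check whether it presents them in a clear, logical order.
3. Assign a coherence score from 1 to 5.

Source Article:
{article}

Summary:
{summary}

Evaluation Form (scores ONLY):
- Coherence:"""


def g_eval_coherence(article: str, summary: str, n_samples: int = 20) -> float:
    """Sample the score field several times and average, a simple proxy for
    the paper's probability-weighted summation over score tokens."""
    scores = []
    for _ in range(n_samples):
        resp = client.chat.completions.create(
            model="gpt-4",    # backbone model reported in the paper
            messages=[{"role": "user",
                       "content": PROMPT_TEMPLATE.format(article=article,
                                                         summary=summary)}],
            temperature=1.0,  # diversity across samples so the average is informative
            max_tokens=5,     # the form asks for the score only
        )
        match = re.search(r"\d+(?:\.\d+)?", resp.choices[0].message.content or "")
        if match:
            scores.append(float(match.group()))
    return sum(scores) / len(scores) if scores else float("nan")
```

Averaging many sampled scores approximates the expected score under the model's output distribution; the paper instead weights each candidate score by its token probability, yielding finer-grained, less tie-prone values.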


This article is free to access.



Citation (APA)

Liu, Y., Iter, D., Xu, Y., Wang, S., Xu, R., & Zhu, C. (2023). G-EVAL: NLG Evaluation using GPT-4 with Better Human Alignment. In EMNLP 2023 - 2023 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 2511–2522). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.emnlp-main.153

Readers over time
[chart: Mendeley reader counts, 2023–2025]

Readers' Seniority

PhD / Post grad / Masters / Doc: 70 (67%)
Researcher: 26 (25%)
Lecturer / Post doc: 5 (5%)
Professor / Associate Prof.: 3 (3%)

Readers' Discipline

Computer Science: 96 (86%)
Engineering: 10 (9%)
Medicine and Dentistry: 3 (3%)
Mathematics: 3 (3%)
