Despite recent advances, evaluating how well large language models (LLMs) follow user instructions remains an open problem. While prompt-based approaches to evaluating language models have become increasingly common, little work has examined how correct these evaluation methods actually are. In this work, we perform a meta-evaluation of a variety of metrics to quantify how accurately they measure the instruction-following abilities of LLMs. Our investigation focuses on grounded, query-based summarization: we collect riSum, a new short-form, real-world dataset containing 300 document-instruction pairs with 3 answers each. All 900 answers are rated by 3 human annotators. Using riSum, we analyze the agreement between evaluation methods and human judgment. Finally, we propose new LLM-based, reference-free evaluation methods that improve upon established baselines and perform on par with costly reference-based metrics that require high-quality summaries.
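A minimal sketch of the kind of meta-evaluation step the abstract describes: measuring how well an automatic metric's scores agree with human ratings. The numbers and variable names below are hypothetical illustrations, not riSum data or the paper's actual protocol; rank correlation (Kendall's tau, Spearman's rho) is simply one common way to quantify such agreement.

```python
# Hypothetical meta-evaluation sketch: how strongly does an automatic
# evaluation metric agree with human judgments of instruction following?
# All scores below are made-up examples, not data from riSum.
from scipy.stats import kendalltau, spearmanr

# Human ratings (e.g., the mean of 3 annotators) for a handful of answers.
human_ratings = [4.0, 2.3, 5.0, 1.7, 3.3, 4.7]

# Scores assigned to the same answers by some automatic evaluation method
# (reference-based or reference-free).
metric_scores = [0.82, 0.41, 0.95, 0.30, 0.58, 0.77]

# Rank correlation quantifies agreement between the metric and humans;
# higher values mean the metric orders answers more like the annotators do.
tau, tau_p = kendalltau(human_ratings, metric_scores)
rho, rho_p = spearmanr(human_ratings, metric_scores)

print(f"Kendall tau  = {tau:.3f} (p = {tau_p:.3f})")
print(f"Spearman rho = {rho:.3f} (p = {rho_p:.3f})")
```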
Citation
Skopek, O., Aralikatte, R., Gooding, S., & Cărbune, V. (2023). Towards Better Evaluation of Instruction-Following: A Case-Study in Summarization. In CoNLL 2023 - 27th Conference on Computational Natural Language Learning, Proceedings (pp. 221–237). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.conll-1.16