Consultation Checklists: Standardising the Human Evaluation of Medical Note Generation

Abstract

Evaluating automatically generated text is generally hard due to the inherently subjective nature of many aspects of the output quality. This difficulty is compounded in automatic consultation note generation by differing opinions between medical experts both about which patient statements should be included in generated notes and about their respective importance in arriving at a diagnosis. Previous real-world evaluations of note-generation systems saw substantial disagreement between expert evaluators. In this paper we propose a protocol that aims to increase objectivity by grounding evaluations in Consultation Checklists, which are created in a preliminary step and then used as a common point of reference during quality assessment. We observed good levels of inter-annotator agreement in a first evaluation study using the protocol; further, using Consultation Checklists produced in the study as reference for automatic metrics such as ROUGE or BERTScore improves their correlation with human judgements compared to using the original human note.
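The abstract's final claim is about swapping the reference text used by automatic metrics: scoring the generated note against the Consultation Checklist rather than against the original human note. Below is a minimal sketch of that reference swap, assuming the `rouge-score` and `bert-score` Python packages and hypothetical placeholder texts; it illustrates the idea only and is not the authors' evaluation code.

```python
# Sketch: score a generated note against a Consultation Checklist
# instead of the original human note. Assumes the rouge-score and
# bert-score packages; the example texts are hypothetical.
from rouge_score import rouge_scorer
from bert_score import score as bert_score

generated_note = "Patient reports a dry cough for two weeks. No fever."
checklist_reference = (
    "Dry cough, two weeks duration. Denies fever. "
    "No shortness of breath."
)

# ROUGE with the checklist as the reference text
scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
rouge = scorer.score(checklist_reference, generated_note)
print({name: round(s.fmeasure, 3) for name, s in rouge.items()})

# BERTScore with the same checklist as reference
P, R, F1 = bert_score([generated_note], [checklist_reference], lang="en")
print("BERTScore F1:", round(F1.item(), 3))
```

To reproduce the reported correlation result, one would compute such scores over a set of consultations and correlate them with the corresponding human quality judgements (e.g. with Spearman's rank correlation).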

Cite (APA)
Savkov, A., Moramarco, F., Korfiatis, A. P., Perera, M., Belz, A., & Reiter, E. (2022). Consultation Checklists: Standardising the Human Evaluation of Medical Note Generation. In EMNLP 2022 - Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track (pp. 121–130). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.emnlp-industry.10
