A Pilot Study on Annotation Interfaces for Summary Comparisons


Abstract

The task of summarisation is notoriously difficult to evaluate, with agreement even between expert raters unlikely to be perfect. One technique for summary evaluation relies on collecting comparison data by presenting annotators with generated summaries and tasking them with selecting the best one. This paradigm is currently being exploited in reinforcement learning from human feedback, whereby a reward function is trained on pairwise choice data. Comparisons are an easier way to elicit human feedback for summarisation; however, such judgements can be bottlenecked by the usability of the annotation interface. In this paper, we present the results of a pilot study exploring how the user interface impacts annotator agreement when judging summary quality.
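The paper itself studies annotation interfaces rather than reward modelling, but for context, pairwise choice data of the kind described above is typically turned into a reward model with a Bradley-Terry style objective. The sketch below is a minimal, hypothetical illustration of that objective (the function name and tensor shapes are assumptions, not from the paper):

```python
import torch
import torch.nn.functional as F

# Minimal sketch (assumption, not from the paper): the Bradley-Terry style
# loss commonly used in RLHF to fit a reward model on pairwise comparisons.
# score_chosen / score_rejected are hypothetical scalar scores a reward
# model assigns to the preferred and non-preferred summary in each pair.

def pairwise_reward_loss(score_chosen: torch.Tensor,
                         score_rejected: torch.Tensor) -> torch.Tensor:
    """Negative log-likelihood that the chosen summary outranks the rejected one."""
    return -F.logsigmoid(score_chosen - score_rejected).mean()

# Toy usage with random scores standing in for reward-model outputs.
chosen = torch.randn(8, requires_grad=True)
rejected = torch.randn(8)
loss = pairwise_reward_loss(chosen, rejected)
loss.backward()  # gradients push chosen scores above rejected ones
print(float(loss))
```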

Citation (APA)

Gooding, S., Werner, L., & Cărbune, V. (2023). A Pilot Study on Annotation Interfaces for Summary Comparisons. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 179–187). Association for Computational Linguistics (ACL).
