A Review of Human Evaluation for Style Transfer


Abstract

This paper reviews and summarizes human evaluation practices described in 97 style transfer papers with respect to three main evaluation aspects: style transfer, meaning preservation, and fluency. In principle, evaluations by human raters should be the most reliable. However, in style transfer papers, we find that protocols for human evaluations are often underspecified and not standardized, which hampers the reproducibility of research in this field and progress toward better human and automatic evaluation methods.

Citation (APA)

Briakou, E., Agrawal, S., Zhang, K., Tetreault, J., & Carpuat, M. (2021). A Review of Human Evaluation for Style Transfer. In GEM 2021 - 1st Workshop on Natural Language Generation, Evaluation, and Metrics, Proceedings (pp. 58–67). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.gem-1.6
