Are Experts Needed? On Human Evaluation of Counselling Reflection Generation


Abstract

Reflection is a crucial counselling skill in which the therapist conveys to the client their interpretation of what the client said. Language models have recently been used to generate reflections automatically, but human evaluation is challenging, particularly due to the cost of hiring experts. Laypeople-based evaluation is less expensive and easier to scale, but its reliability for reflections is unknown. Therefore, we explore whether laypeople can be an alternative to experts in evaluating a fundamental quality aspect: coherence and context-consistency. We do so by asking a group of laypeople and a group of experts to annotate both synthetic reflections and human reflections from actual therapists. We find that both laypeople and experts are reliable annotators and that they have moderate-to-strong inter-group correlation, which shows that laypeople can be trusted for such evaluations. We also discover that GPT-3 mostly produces coherent and consistent reflections, and we explore how evaluation results change when the source of synthetic reflections shifts from the less powerful GPT-2 to GPT-3.

Citation (APA)

Wu, Z., Balloccu, S., Reiter, E., Helaoui, R., Recupero, D. R., & Riboni, D. (2023). Are Experts Needed? On Human Evaluation of Counselling Reflection Generation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 6906–6930). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-long.382
