Achieving Reliable Human Assessment of Open-Domain Dialogue Systems

Abstract

Evaluation of open-domain dialogue systems is highly challenging, and the development of better techniques is highlighted time and again as desperately needed. Despite substantial efforts to carry out reliable live evaluation of systems in recent competitions, annotations have been abandoned and reported as too unreliable to yield sensible results. This is a serious problem, since automatic metrics are not known to provide a good indication of what may or may not be a high-quality conversation. Answering the distress call of competitions that have emphasized the urgent need for better evaluation techniques in dialogue, we present the successful development of a human evaluation method that is highly reliable while remaining feasible and low cost. Self-replication experiments reveal almost perfectly repeatable results, with a correlation of r = 0.969. Furthermore, due to the lack of appropriate methods of statistical significance testing, the likelihood that potential improvements to systems occur merely by chance is rarely taken into account in dialogue evaluation; the evaluation we propose facilitates the application of standard tests. Since the evaluation method is highly reliable, it can reveal new insights into system performance. We therefore include a comparison of state-of-the-art models (i) with and without personas, to measure the contribution of personas to conversation quality, and (ii) with prescribed versus freely chosen topics. Interestingly, and contrary to expectations, results indicate that personas do not positively contribute to conversation quality.
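The abstract alludes to two pieces of statistical machinery without naming them: a correlation between self-replication runs (r = 0.969) to quantify repeatability, and standard significance tests applied to system comparisons. A minimal sketch of both, assuming per-system mean ratings from two hypothetical evaluation runs and per-conversation ratings for two hypothetical systems (all numbers and names illustrative, not taken from the paper), could look like this:

import numpy as np
from scipy import stats

# Hypothetical per-system mean quality scores from two independent runs
# of the same human evaluation (one value per dialogue system).
run_a = np.array([71.2, 64.5, 58.9, 52.3, 47.8])
run_b = np.array([70.8, 65.1, 57.6, 53.0, 46.9])

# Repeatability: Pearson correlation between the system scores
# produced by the two self-replication runs.
r, _ = stats.pearsonr(run_a, run_b)
print(f"self-replication correlation r = {r:.3f}")

# Hypothetical per-conversation scores for two competing systems, used to
# check whether an apparent improvement could plausibly be due to chance.
sys_x = np.array([68, 72, 55, 80, 63, 77, 59, 70])
sys_y = np.array([61, 66, 52, 74, 60, 69, 58, 65])
t, p = stats.ttest_ind(sys_x, sys_y)
print(f"two-sided t-test: t = {t:.2f}, p = {p:.3f}")

Pearson's r is used here because the abstract reports a system-level correlation; the unpaired t-test for the system comparison is an assumption, as the paper may apply a different standard significance test.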

Cite (APA)

Ji, T., Graham, Y., Jones, G., Lyu, C., & Liu, Q. (2022). Achieving Reliable Human Assessment of Open-Domain Dialogue Systems. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 6416–6437). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.acl-long.445
