Towards an Automatic Turing Test: Learning to Evaluate Dialogue Responses

150 citations · 486 Mendeley readers

Abstract

Automatically evaluating the quality of dialogue responses in unstructured domains is a challenging problem. Unfortunately, existing automatic evaluation metrics are biased and correlate very poorly with human judgements of response quality. Yet an accurate automatic evaluation procedure is crucial for dialogue research, as it allows rapid prototyping and testing of new models with fewer expensive human evaluations. In response to this challenge, we formulate automatic dialogue evaluation as a learning problem. We present an evaluation model (ADEM) that learns to predict human-like scores for input responses, using a new dataset of human response scores. We show that ADEM's predictions correlate significantly with human judgements at both the utterance and system level, and at a level much higher than word-overlap metrics such as BLEU. We also show that ADEM can generalize to evaluating dialogue models unseen during training, an important step for automatic dialogue evaluation.
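The "learning problem" framing above can be sketched concretely. The paper scores a model response against both the dialogue context and a reference response via learned bilinear terms, roughly score(c, r, r̂) = (cᵀM r̂ + rᵀN r̂ − α)/β, where c, r, r̂ are encodings of the context, reference, and model response. The sketch below is illustrative only: it replaces the paper's hierarchical RNN encoders with a toy mean-of-word-vectors encoder, and the class name, vocabulary, and initial weights are all assumptions, not the authors' implementation.

```python
import numpy as np

# Hedged sketch of an ADEM-style learned scorer:
#   score(c, r, r_hat) = (c^T M r_hat + r^T N r_hat - alpha) / beta
# M, N would be learned by regressing on human scores; alpha, beta rescale
# the output toward the human rating range. Encoders here are a toy
# mean-pooled bag-of-vectors stand-in for the paper's RNN encoders.

rng = np.random.default_rng(0)
DIM = 8
# Toy vocabulary of random word vectors (illustrative, not from the paper).
VOCAB = {w: rng.standard_normal(DIM) for w in
         "how are you i am fine thanks good morning".split()}

def encode(utterance: str) -> np.ndarray:
    """Stand-in encoder: mean of known word vectors, zeros if none match."""
    vecs = [VOCAB[w] for w in utterance.lower().split() if w in VOCAB]
    return np.mean(vecs, axis=0) if vecs else np.zeros(DIM)

class AdemStyleScorer:
    def __init__(self, dim: int, alpha: float = 0.0, beta: float = 1.0):
        # Identity init reduces the score to plain dot-product similarity;
        # training would fit M, N to human-annotated response scores.
        self.M = np.eye(dim)
        self.N = np.eye(dim)
        self.alpha, self.beta = alpha, beta

    def score(self, context: str, reference: str, model_response: str) -> float:
        c, r, r_hat = encode(context), encode(reference), encode(model_response)
        return float((c @ self.M @ r_hat + r @ self.N @ r_hat
                      - self.alpha) / self.beta)

scorer = AdemStyleScorer(DIM)
# A response overlapping the reference will typically outscore an unrelated one.
on_topic = scorer.score("how are you", "i am fine thanks", "i am fine")
off_topic = scorer.score("how are you", "i am fine thanks", "good morning")
```

The two bilinear terms mirror the abstract's point that a good response should be judged against both the context and the reference, rather than by word overlap with the reference alone.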

Citation (APA)

Lowe, R., Gontier, N. A., Noseworthy, M., Bengio, Y., Serban, I. V., & Pineau, J. (2017). Towards an automatic turing test: Learning to evaluate dialogue responses. In ACL 2017 - 55th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers) (Vol. 1, pp. 1116–1126). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/P17-1103
