RADE: Reference-Assisted Dialogue Evaluation for Open-Domain Dialogue

Abstract

Evaluating open-domain dialogue systems is challenging, in part because of the one-to-many problem: many responses other than the golden response can be appropriate. Existing automatic evaluation methods still correlate poorly with human judgments, while reliable human evaluation is time- and cost-intensive. To this end, we propose the Reference-Assisted Dialogue Evaluation (RADE) approach under a multi-task learning framework, which leverages a pre-created utterance as a reference, in addition to the gold response, to relieve the one-to-many problem. Specifically, RADE explicitly compares the reference and the candidate response to predict their overall scores. Moreover, an auxiliary response-generation task enhances score prediction via a shared encoder. To support RADE, we extend three datasets with additional human-annotated and rated responses beyond the single golden response. Experiments on our three datasets and two existing benchmarks demonstrate the effectiveness of our method: its Pearson, Spearman, and Kendall correlations with human evaluation outperform state-of-the-art baselines.
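
The multi-task design described in the abstract (a shared encoder feeding both a score-prediction head and an auxiliary response-generation head) might be prototyped as in the following PyTorch sketch. This is a minimal illustration under stated assumptions, not the authors' released code: the encoder choice (roberta-base), the class and variable names, and the input packing are all hypothetical, and RADE's actual generation component is likely a full decoder rather than the simple language-modeling projection used here.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class ReferenceAssistedScorer(nn.Module):
    """Sketch of a multi-task evaluator: one shared encoder feeds a
    score-regression head and an auxiliary language-modeling head.
    All names here are illustrative, not from the RADE release."""

    def __init__(self, encoder_name: str = "roberta-base"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # Head 1: regress an overall quality score from the pooled state.
        self.score_head = nn.Linear(hidden, 1)
        # Head 2: auxiliary LM head standing in for the generation task.
        self.lm_head = nn.Linear(hidden, self.encoder.config.vocab_size)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        pooled = out.last_hidden_state[:, 0]             # first-token state
        score = self.score_head(pooled).squeeze(-1)      # predicted overall score
        lm_logits = self.lm_head(out.last_hidden_state)  # per-token logits
        return score, lm_logits


tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = ReferenceAssistedScorer()

# The reference and candidate are encoded together so the encoder can
# compare them directly; the separator handling is simplified here.
batch = tokenizer(
    "Dialogue context goes here.",
    "Reference response goes here. Candidate response goes here.",
    return_tensors="pt",
)
score, _ = model(batch["input_ids"], batch["attention_mask"])
```

The agreement with human judgments reported in the abstract (Pearson, Spearman, and Kendall correlations) can be computed with scipy.stats; the score lists below are hypothetical:

```python
from scipy.stats import pearsonr, spearmanr, kendalltau

human_scores = [4.0, 2.5, 3.0, 5.0, 1.5]   # hypothetical human ratings
metric_scores = [3.8, 2.9, 2.7, 4.6, 1.9]  # hypothetical model scores

print("Pearson: ", pearsonr(human_scores, metric_scores)[0])
print("Spearman:", spearmanr(human_scores, metric_scores)[0])
print("Kendall: ", kendalltau(human_scores, metric_scores)[0])
```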

Cite

APA

Shi, Z., Sun, W., Zhang, S., Zhang, Z., Ren, P., & Ren, Z. (2023). RADE: Reference-Assisted Dialogue Evaluation for Open-Domain Dialogue. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 12856–12875). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-long.719
