Evaluating Conversational Recommender Systems via User Simulation

Abstract

Conversational information access is an emerging research area. Currently, human evaluation is used for end-to-end system evaluation, which is both time- and resource-intensive at scale, and thus becomes a bottleneck to progress. As an alternative, we propose automated evaluation by means of simulating users. Our user simulator aims to generate the responses a real human would give, by considering both individual preferences and the general flow of interaction with the system. We evaluate our simulation approach on an item recommendation task by comparing three existing conversational recommender systems. We show that preference modeling and task-specific interaction models both contribute to more realistic simulations, and can help achieve high correlation between automatic evaluation measures and manual human assessments.
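To make the two-component idea concrete, below is a minimal sketch in Python of a simulator that combines a preference model with a task-specific interaction model. All names here (PreferenceModel, InteractionModel, SimulatedUser, the dialogue-action labels, and the hard-coded transition probabilities) are hypothetical illustrations, not the paper's implementation; the actual simulator additionally handles natural-language understanding and generation, whereas this sketch operates directly on dialogue actions.

```python
import random
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PreferenceModel:
    """The simulated user's item preferences.

    Hypothetical structure: a simple liked/disliked partition standing in
    for preferences derived from a user's historical ratings.
    """
    liked: set = field(default_factory=set)
    disliked: set = field(default_factory=set)

    def rate(self, item: str) -> int:
        """Return +1 for a liked item, -1 for a disliked one, 0 if unknown."""
        if item in self.liked:
            return 1
        if item in self.disliked:
            return -1
        return 0

class InteractionModel:
    """Task-specific dialogue-flow model.

    Samples the next user action given the agent's action, using transition
    probabilities that would in practice be estimated from annotated
    human-system dialogues.
    """
    def __init__(self, transitions: dict):
        # transitions: {agent_action: [(user_action, probability), ...]}
        self.transitions = transitions

    def next_action(self, agent_action: str) -> str:
        options = self.transitions.get(agent_action, [("REJECT", 1.0)])
        actions, weights = zip(*options)
        return random.choices(actions, weights=weights, k=1)[0]

class SimulatedUser:
    """Combines both models to produce a response to each agent turn."""
    def __init__(self, prefs: PreferenceModel, flow: InteractionModel):
        self.prefs = prefs
        self.flow = flow

    def respond(self, agent_action: str, item: Optional[str] = None) -> str:
        # When the agent recommends a concrete item, the preference model
        # decides; otherwise the interaction model drives the dialogue flow.
        if agent_action == "RECOMMEND" and item is not None:
            return "ACCEPT" if self.prefs.rate(item) > 0 else "REJECT"
        return self.flow.next_action(agent_action)

# Example: simulate user turns against an elicitation and a recommendation.
user = SimulatedUser(
    PreferenceModel(liked={"Inception"}, disliked={"Cats"}),
    InteractionModel({"ELICIT": [("DISCLOSE", 0.8), ("INQUIRE", 0.2)]}),
)
print(user.respond("ELICIT"))                       # DISCLOSE or INQUIRE
print(user.respond("RECOMMEND", item="Inception"))  # ACCEPT
```

Running many such simulated dialogues against each candidate system would yield automatic measures (e.g., acceptance rates) that can then be correlated with human assessments, which is the evaluation setup the abstract describes.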

Cite

APA

Zhang, S., & Balog, K. (2020). Evaluating Conversational Recommender Systems via User Simulation. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1512–1520). Association for Computing Machinery. https://doi.org/10.1145/3394486.3403202
