Addressing Hiccups in Conversations with Recommender Systems


Abstract

Conversational Agents (CAs) employing voice as their main interaction mode produce natural language utterances with the aim of mimicking human conversations. To unveil hiccups in conversations with recommender systems, we observed users interacting with CAs. Our findings suggest that these hiccups occur when users struggle to start the session, when CAs do not appear exploratory, and when CAs remain silent after offering recommendations or reporting errors. Users enacted mental models derived from years of experience with Graphical User Interfaces (GUIs), but also expected human-like characteristics such as explanations and proactivity. Building on these findings, we designed a dialogue model for a multimodal Conversational Recommender System (CRS) that mimics both humans and GUIs. We further probed these hiccups with a Wizard-of-Oz prototype implementing this dialogue model. Our findings suggest that participants rapidly adopted GUI mimicries, cooperated in error resolution, appreciated explainable recommendations, and provided insights for improving persisting hiccups in proactivity and navigation. Based on these results, we provide design implications for addressing hiccups in CRSs.

Citation (APA)

Viswanathan, S., Guillot, F., Chang, M., Grasso, A. M., & Renders, J. M. (2022). Addressing Hiccups in Conversations with Recommender Systems. In DIS 2022 - Proceedings of the 2022 ACM Designing Interactive Systems Conference: Digital Wellbeing (pp. 1243–1259). Association for Computing Machinery, Inc. https://doi.org/10.1145/3532106.3533491
