Due to the complexity of natural language, chatbots are prone to misinterpreting user requests. Such misinterpretations may lead the chatbot to provide answers that are not adequate responses to the user's request – so-called false positives – potentially leading to conversational breakdown. A promising repair strategy in such cases is for the chatbot to express uncertainty and suggest likely alternatives when prediction confidence falls below a threshold. However, little is known about how such repair affects chatbot dialogues. We present findings from a study in which a solution for expressing uncertainty and suggesting likely alternatives was implemented in a live chatbot for customer service. Chatbot dialogues (N = 700) were sampled at two points in time – immediately before and after implementation – and compared on conversational quality. Preliminary analyses suggest that introducing such a solution for conversational repair may substantially reduce the proportion of false positives in chatbot dialogues. At the same time, expressing uncertainty and suggesting likely alternatives does not seem to strongly affect the dialogue process or the likelihood of reaching a successful outcome. Based on the findings, we discuss theoretical and practical implications and suggest directions for future research.
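The repair strategy described above can be sketched in code. The following is an illustrative sketch only, not the authors' implementation: it assumes an intent classifier that returns (intent, confidence) pairs, and the threshold of 0.8, the function name `respond`, and the message wording are all assumptions chosen for demonstration.

```python
def respond(predictions, threshold=0.8, k=3):
    """Confidence-threshold repair policy for an intent-based chatbot.

    predictions: list of (intent, confidence) pairs from a classifier.
    Returns either a direct answer or a clarification turn that
    expresses uncertainty and suggests likely alternatives.
    (Hypothetical sketch; names and threshold are assumptions.)
    """
    ranked = sorted(predictions, key=lambda p: p[1], reverse=True)
    top_intent, top_conf = ranked[0]
    if top_conf >= threshold:
        # Confident enough: answer directly (risking a false positive
        # if the top prediction is wrong).
        return {"type": "answer", "intent": top_intent}
    # Below threshold: express uncertainty and suggest alternatives.
    return {
        "type": "clarify",
        "message": "I'm not sure I understood. Did you mean one of these?",
        "alternatives": [intent for intent, _ in ranked[:k]],
    }
```

For example, a low-confidence prediction set would trigger the clarification turn, letting the user pick the intended topic instead of receiving a likely-wrong answer.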
Følstad, A., & Taylor, C. (2020). Conversational Repair in Chatbots for Customer Service: The Effect of Expressing Uncertainty and Suggesting Alternatives. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11970 LNCS, pp. 201–214). Springer. https://doi.org/10.1007/978-3-030-39540-7_14