Chatbots are becoming increasingly popular in business, particularly in customer service, and there is also growing interest in developing artificial conversational agents able to coach people for medical or social purposes. Yet in too many cases they remain frustrating to use in actual conversations that go beyond simple question-answer interactions. In this paper, we show that this inability to sustain conversation is mostly caused by the chatbot's failure to take into account the user's expectations, intentions, and current knowledge: a lack of a Theory of Mind. We investigated this hypothesis by designing an experiment using five chatbots that have won the Loebner Prize, in two kinds of interaction: one relying heavily on implicit information, and the other not. As expected, no chatbot was able to keep conversing in the implicit condition.
Citation: Jacquet, B., & Baratgin, J. (2021). Mind-reading chatbots: We are not there yet. In Advances in Intelligent Systems and Computing (Vol. 1253 AISC, pp. 266–271). Springer. https://doi.org/10.1007/978-3-030-55307-4_40