Investigating person-specific errors in chat-oriented dialogue systems


Abstract

Creating chatbots that behave like real people is important for believability. Errors in general-purpose chatbots and in chatbots that follow a rough persona have been studied, but errors in chatbots that behave like real people have not been thoroughly investigated. We collected a large number of user interactions with a generation-based chatbot trained on large-scale dialogue data of a specific character, i.e., a “target person”, and analyzed errors related to that person. We found that person-specific errors can be divided into two types, errors in attributes and errors in relations, each of which can further be divided into two levels: self and other. We also investigated the correspondence with an existing taxonomy of errors and clarified the person-specific errors that should be addressed in future work.

Citation

Mitsuda, K., Higashinaka, R., Li, T., & Yoshida, S. (2022). Investigating person-specific errors in chat-oriented dialogue systems. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 2, pp. 464–469). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.acl-short.50
