Automated scoring of chatbot responses in conversational dialogue

Abstract

Rapid advances in natural language processing (NLP) and machine learning have led to the recent development of many chatbot systems based on various algorithms. However, in a conversational dialogue setting, building a system that communicates with humans in a meaningful and coherent manner remains challenging. Moreover, evaluating a chatbot's responses given the context of the conversation is difficult even for humans. In this paper, we focus on the problem of automatically evaluating and scoring the quality of chatbot responses in human-chatbot dialogue settings. We propose a novel approach that combines the word representations of human and chatbot responses and uses machine learning algorithms, such as support vector machines (SVM), random forests (RF), and neural networks (NN), to learn the quality of the chatbot responses. Our experimental results show that the proposed approach performs well on this task.
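The pipeline the abstract describes can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the embedding table, the averaging-and-concatenation feature scheme, the toy dialogue data, and the quality labels are all assumptions made here for demonstration, with scikit-learn's SVR standing in for the SVM scorer.

```python
# Hedged sketch of the described approach: combine word representations
# of the human turn and the chatbot turn, then train an SVM to predict
# a response-quality score. Everything below is synthetic/illustrative.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
dim = 50
# Toy embedding table: word -> random 50-d vector (a stand-in for
# pretrained word embeddings, which the paper would use in practice).
vocab = {w: rng.normal(size=dim) for w in
         "how are you i am fine thanks what is the weather".split()}

def embed(utterance):
    """Average the word vectors of an utterance (zeros if no known words)."""
    vecs = [vocab[w] for w in utterance.lower().split() if w in vocab]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def features(human_turn, bot_turn):
    """Concatenate the two utterance embeddings into one feature vector."""
    return np.concatenate([embed(human_turn), embed(bot_turn)])

# Tiny synthetic training set: (human turn, chatbot turn, quality score).
data = [
    ("how are you", "i am fine thanks", 5.0),
    ("how are you", "the weather", 1.0),
    ("what is the weather", "i am fine", 1.0),
    ("what is the weather", "the weather is fine", 4.0),
]
X = np.stack([features(h, b) for h, b, _ in data])
y = np.array([s for _, _, s in data])

# Learn a mapping from combined representations to quality scores.
model = SVR(kernel="rbf").fit(X, y)
pred = model.predict(features("how are you", "i am fine thanks")[None, :])
```

In the same spirit, the SVR could be swapped for a random forest or a small neural network regressor, matching the three model families named in the abstract.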

Citation (APA)

Yuwono, S. K., Wu, B., & D’Haro, L. F. (2019). Automated scoring of chatbot responses in conversational dialogue. In Lecture Notes in Electrical Engineering (Vol. 579, pp. 357–369). Springer. https://doi.org/10.1007/978-981-13-9443-0_31
