Explainable Machine Learning for Chatbots

  • Galitsky B
  • Goldberg S

Abstract

Machine learning (ML) has been successfully applied to a wide variety of fields, ranging from information retrieval, data mining, and speech recognition to computer graphics, visualization, and human-computer interaction. However, most users treat a machine learning model as a black box because of its incomprehensible functions and opaque working mechanism (Liu et al. 2017). Without a clear understanding of how and why a model works, the development of high-performance models for chatbots typically relies on a time-consuming trial-and-error process. As a result, academic and industrial ML chatbot developers face challenges that demand more transparent and explainable systems for better understanding and analysis of ML models, especially their inner working mechanisms. In this chapter we focus on explainability. We first discuss what explainable ML is and why users desire its features. We then introduce an example chatbot-related classification problem and show how it is solved by a transparent rule-based or ML method. After that we present a decision support-enabled chatbot that shares explanations to back up its decisions and challenges those of a human peer. We conclude the chapter with a learning framework representing a deterministic inductive approach with complete explainability.
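To make the abstract's notion of a transparent classifier concrete, here is a minimal sketch in Python of a rule-based intent classifier that returns a human-readable explanation alongside each prediction. The intents, keyword rules, and utterances are hypothetical illustrations, not taken from the chapter; the point is only that, unlike a black-box model, every decision can be traced to the rule that fired.

```python
# Minimal sketch of a transparent, rule-based intent classifier.
# The intents and keyword rules below are hypothetical examples,
# not the chapter's actual rules.

from dataclasses import dataclass


@dataclass
class Rule:
    intent: str
    keywords: tuple  # every keyword must appear in the utterance


RULES = [
    Rule("check_balance", ("balance",)),
    Rule("transfer_money", ("transfer",)),
    Rule("block_card", ("block", "card")),
]


def classify(utterance: str):
    """Return (intent, explanation); the explanation names the rule that fired."""
    tokens = set(utterance.lower().split())
    for rule in RULES:
        if all(kw in tokens for kw in rule.keywords):
            explanation = (f"matched rule for '{rule.intent}': "
                           f"keywords {rule.keywords} all present in the utterance")
            return rule.intent, explanation
    return "unknown", "no rule matched; falling back to the default intent"


if __name__ == "__main__":
    for query in ("what is my balance", "please block my card now"):
        intent, why = classify(query)
        print(f"{query!r} -> {intent} ({why})")
```

A trained model (e.g., a decision tree) could replace the hand-written rules while preserving the same property: each prediction comes with a decision path a user can inspect, which is the kind of explainability the chapter contrasts with black-box ML.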

Cite

APA

Galitsky, B., & Goldberg, S. (2019). Explainable Machine Learning for Chatbots. In Developing Enterprise Chatbots (pp. 53–83). Springer International Publishing. https://doi.org/10.1007/978-3-030-04299-8_3
