Probing neural dialog models for conversational understanding


Abstract

The predominant approach to open-domain dialog generation relies on end-to-end training of neural models on chat datasets. However, this approach provides little insight as to what these models learn (or do not learn) about engaging in dialog. In this study, we analyze the internal representations learned by neural open-domain dialog systems and evaluate the quality of these representations for learning basic conversational skills. Our results suggest that standard open-domain dialog systems struggle with answering questions, inferring contradiction, and determining the topic of conversation, among other tasks. We also find that the dyadic, turn-taking nature of dialog is not fully leveraged by these models. By exploring these limitations, we highlight the need for additional research into architectures and training methods that can better capture high-level information about dialog.
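The probing methodology the abstract describes can be illustrated with a minimal sketch: freeze a trained dialog model, extract its hidden representations for each utterance, and fit a lightweight classifier on a probing task such as topic classification or contradiction detection. The probe's test accuracy then serves as a measure of how much task-relevant information the representations encode. The encoder interface, mean-pooling strategy, and logistic-regression probe below are illustrative assumptions, not the authors' exact setup.

```python
import torch
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def encode_utterances(encoder, utterances):
    """Return one fixed-size vector per utterance from a frozen encoder.

    `encoder` is a hypothetical callable mapping an utterance to a
    (num_tokens, hidden_dim) tensor; it stands in for a trained
    open-domain dialog model, not the paper's actual code.
    """
    with torch.no_grad():
        reps = [encoder(u).mean(dim=0) for u in utterances]  # mean-pool over tokens
    return torch.stack(reps).cpu().numpy()

def probe_accuracy(encoder, train_texts, train_labels, test_texts, test_labels):
    """Train a linear probe on frozen representations and report accuracy.

    High accuracy suggests the encoder's representations capture the
    probed conversational skill (e.g., topic of conversation).
    """
    X_train = encode_utterances(encoder, train_texts)
    X_test = encode_utterances(encoder, test_texts)
    clf = LogisticRegression(max_iter=1000).fit(X_train, train_labels)
    return accuracy_score(test_labels, clf.predict(X_test))
```

A linear probe is a common design choice here: its limited capacity means good performance can be attributed to the representations themselves rather than to the probe learning the task on its own.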

Citation (APA)

Saleh, A., Deutsch, T., Casper, S., Belinkov, Y., & Shieber, S. (2020). Probing neural dialog models for conversational understanding. In Proceedings of the 2nd Workshop on Natural Language Processing for Conversational AI (pp. 132–143). Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.nlp4convai-1.15
