Deep reinforcement learning for modeling chit-chat dialog with discrete attributes

17 citations · 112 Mendeley readers

Abstract

Open-domain dialog systems face the challenge of being repetitive and producing generic responses. In this paper, we demonstrate that conditioning response generation on interpretable discrete dialog attributes, and on compositions of these attributes, improves model perplexity and yields diverse, interesting, non-redundant responses. We propose to formulate dialog attribute prediction as a reinforcement learning (RL) problem and use policy gradient methods to optimize utterance generation for long-term rewards. Unlike existing RL approaches, which formulate token prediction as the policy, our method reduces the complexity of policy optimization by limiting the action space to dialog attributes, making policy optimization more practical and sample efficient. We demonstrate this with experiments and human evaluations.
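
To make the formulation concrete, the sketch below shows the kind of policy-gradient (REINFORCE) update the abstract describes, written in PyTorch. The attribute set, the attribute-conditioned decoder, and the reward function are illustrative assumptions rather than the authors' implementation; the point is only that the RL action space is a small set of discrete dialog attributes instead of output tokens.

import torch
import torch.nn as nn

# Hypothetical attribute inventory; the paper's actual attributes may differ.
ATTRIBUTES = ["question", "statement", "positive", "negative"]

class AttributePolicy(nn.Module):
    """Maps a dialog-context encoding to a distribution over discrete attributes."""
    def __init__(self, context_dim: int, num_attrs: int):
        super().__init__()
        self.head = nn.Linear(context_dim, num_attrs)

    def forward(self, context: torch.Tensor) -> torch.distributions.Categorical:
        return torch.distributions.Categorical(logits=self.head(context))

def reinforce_step(policy, optimizer, context, decoder, reward_fn):
    """One REINFORCE update: sample an attribute (the action), generate a response
    conditioned on it, score the response with a long-term reward, and scale the
    log-probability of the sampled attribute by that reward."""
    dist = policy(context)
    attr = dist.sample()                           # action = index into ATTRIBUTES
    response = decoder(context, attr)              # attribute-conditioned generator (assumed given)
    reward = reward_fn(response)                   # e.g. a non-redundancy/engagement score (assumed)
    loss = -(dist.log_prob(attr) * reward).mean()  # policy-gradient objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return response, reward

# Hypothetical usage:
#   policy = AttributePolicy(context_dim=512, num_attrs=len(ATTRIBUTES))
#   optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
#   reinforce_step(policy, optimizer, context_vec, my_decoder, my_reward_fn)

Because the policy chooses among a handful of attributes rather than tens of thousands of vocabulary tokens, each update searches a far smaller action space, which is where the practicality and sample efficiency claimed in the abstract come from.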

Citation (APA)

Sankar, C., & Ravi, S. (2019). Deep reinforcement learning for modeling chit-chat dialog with discrete attributes. In Proceedings of the 20th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL 2019) (pp. 1–10). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w19-5901

Readers over time: 2019–2025 (chart not reproduced)

Readers' Seniority

PhD / Post grad / Masters / Doc: 45 (79%)
Researcher: 7 (12%)
Lecturer / Post doc: 4 (7%)
Professor / Associate Prof.: 1 (2%)

Readers' Discipline

Computer Science: 53 (84%)
Linguistics: 5 (8%)
Engineering: 3 (5%)
Social Sciences: 2 (3%)
