Interactive Question Clarification in Dialogue via Reinforcement Learning

Abstract

Coping with ambiguous questions has been a perennial problem in real-world dialogue systems. Although clarification by asking questions is a common form of human interaction, it is hard to define appropriate questions to elicit more specific intents from a user. In this work, we propose a reinforcement model to clarify ambiguous questions by suggesting refinements of the original query. We first formulate a collection partitioning problem to select a set of labels enabling us to distinguish potential unambiguous intents. We list the chosen labels as intent phrases to the user for further confirmation. The selected label along with the original user query then serves as a refined query, for which a suitable response can more easily be identified. The model is trained using reinforcement learning with a deep policy network. We evaluate our model based on real-world user clicks and demonstrate significant improvements across several different experiments.
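The clarification loop described in the abstract can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the function names (`score_label`, `select_labels`, `refine`) and the overlap-based scoring are stand-in assumptions for the learned deep policy network and the collection partitioning step.

```python
# Hypothetical sketch of the interactive clarification loop: score candidate
# intent labels for an ambiguous query, present the top ones to the user,
# and combine the confirmed label with the original query into a refined query.
import math

def score_label(query, label):
    # Stand-in for the learned policy network: score a candidate intent
    # label by simple word overlap with the ambiguous query.
    q, l = set(query.lower().split()), set(label.lower().split())
    return len(q & l) / max(len(l), 1)

def select_labels(query, candidate_labels, k=3):
    # Normalize scores with a softmax and present the top-k labels as
    # clarification options (the paper's collection partitioning would
    # instead choose labels that separate the unambiguous intents).
    scores = [score_label(query, lab) for lab in candidate_labels]
    exp = [math.exp(s) for s in scores]
    total = sum(exp)
    probs = [e / total for e in exp]
    ranked = sorted(zip(candidate_labels, probs), key=lambda x: -x[1])
    return [lab for lab, _ in ranked[:k]]

def refine(query, chosen_label):
    # The user-confirmed label together with the original query forms the
    # refined query, for which a response is more easily identified.
    return f"{query} {chosen_label}"

options = select_labels("apple price", ["apple stock", "apple fruit", "banana"])
refined = refine("apple price", options[0])
```

In the paper, the label selection policy is trained with reinforcement learning against real-world user clicks rather than a fixed heuristic score.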

Citation (APA)

Hu, X., Wen, Z., Wang, Y., Li, X., & de Melo, G. (2020). Interactive Question Clarification in Dialogue via Reinforcement Learning. In COLING 2020 - 28th International Conference on Computational Linguistics, Proceedings of the Industry Track (pp. 78–89). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.coling-industry.8
