A Controllable Model of Grounded Response Generation


Abstract

Current end-to-end neural conversation models inherently lack the flexibility to impose semantic control on the response generation process, often resulting in uninteresting responses. Attempts to boost informativeness alone come at the expense of factual accuracy, as attested by pretrained language models' propensity to "hallucinate" facts. While this may be mitigated by access to background knowledge, there is scant guarantee of relevance and informativeness in generated responses. We propose a framework that we call controllable grounded response generation (CGRG), in which lexical control phrases are either provided by a user or automatically extracted by a control phrase predictor from dialogue context and grounding knowledge. Quantitative and qualitative results show that, using this framework, a transformer-based model with a novel inductive attention mechanism, trained on a conversation-like Reddit dataset, outperforms strong generation baselines.
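As a rough illustration only, the sketch below shows one way the inputs described above (dialogue context, control phrases, grounding) could be flattened into a single sequence with an inductive-style attention mask for a transformer decoder. The segment layout, masking rule, and function names are assumptions made for exposition, not the authors' implementation.

# Minimal sketch (not the paper's code): assemble CGRG-style inputs and an
# illustrative attention mask. Segment ids and the masking rule are assumptions.
import numpy as np

def build_segments(context, control_phrases, grounding, sep="<sep>"):
    """Flatten the three input sources into one token list plus segment ids.

    Segment ids: 0 = dialogue context, 1 = control phrases, 2 = grounding.
    """
    tokens, segments = [], []
    for seg_id, texts in enumerate([context, control_phrases, grounding]):
        for text in texts:
            for tok in text.split() + [sep]:
                tokens.append(tok)
                segments.append(seg_id)
    return tokens, np.array(segments)

def inductive_attention_mask(segments):
    """Build a (len, len) boolean mask where True means attention is allowed.

    Assumed rule for illustration: context and control-phrase tokens attend to
    everything, while grounding tokens attend only to control phrases and to
    other grounding tokens, keeping generation anchored to the control phrases.
    """
    n = len(segments)
    mask = np.ones((n, n), dtype=bool)
    grounding_rows = segments == 2
    context_cols = segments == 0
    mask[np.ix_(grounding_rows, context_cols)] = False
    return mask

if __name__ == "__main__":
    ctx = ["do you like the new star wars movie"]
    phrases = ["visual effects"]  # user-provided or predicted control phrase
    ground = ["the film was praised for its visual effects"]
    toks, segs = build_segments(ctx, phrases, ground)
    mask = inductive_attention_mask(segs)
    print(len(toks), "tokens; mask shape", mask.shape)

In practice the mask would be fed to the transformer's self-attention layers alongside the token sequence; the specific routing of attention between grounding and control-phrase tokens is what the paper's inductive attention mechanism governs.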

Citation (APA)

Wu, Z., Galley, M., Brockett, C., Zhang, Y., Gao, X., Quirk, C., … Dolan, B. (2021). A Controllable Model of Grounded Response Generation. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021 (Vol. 16, pp. 14085–14093). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v35i16.17658
