Achieving Conversational Goals with Unsupervised Post-hoc Knowledge Injection

Abstract

A limitation of current neural dialog models is that they tend to suffer from a lack of specificity and informativeness in generated responses, primarily due to dependence on training data that covers a limited variety of scenarios and conveys limited knowledge. One way to alleviate this issue is to extract relevant knowledge from external sources at decoding time and incorporate it into the dialog response. In this paper, we propose a post-hoc knowledge-injection technique in which we first retrieve a diverse set of relevant knowledge snippets conditioned on both the dialog history and an initial response from an existing dialog model. We construct multiple candidate responses, individually injecting each retrieved snippet into the initial response using a gradient-based decoding method, and then select the final response with an unsupervised ranking step. Our experiments in goal-oriented and knowledge-grounded dialog settings demonstrate that human annotators judge the outputs from the proposed method to be more engaging and informative compared to responses from prior dialog systems. We further show that knowledge augmentation promotes success in achieving conversational goals in both experimental settings.
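The pipeline the abstract describes (retrieve snippets conditioned on history and an initial response, inject each snippet to form candidates, then rank candidates without supervision) can be sketched as follows. This is a toy illustration under loud assumptions: the paper performs injection with gradient-based decoding over a language model, whereas this sketch simply appends each snippet and ranks candidates by lexical overlap with the dialog history; all function names and the scoring heuristic are hypothetical stand-ins, not the authors' implementation.

```python
def tokenize(text):
    """Lowercased bag-of-words; a crude stand-in for a real encoder."""
    return set(text.lower().split())

def retrieve_snippets(history, initial_response, knowledge_pool, k=2):
    """Rank knowledge snippets by word overlap with the dialog history
    plus the initial response (the paper conditions retrieval on both)."""
    query = tokenize(history) | tokenize(initial_response)
    scored = sorted(knowledge_pool,
                    key=lambda s: len(tokenize(s) & query),
                    reverse=True)
    return scored[:k]

def inject(initial_response, snippet):
    """Naive injection: append the snippet to the initial response.
    (The paper instead rewrites the response via gradient-based decoding.)"""
    return f"{initial_response} By the way, {snippet}"

def select_response(history, candidates):
    """Unsupervised ranking step: prefer the candidate most
    lexically relevant to the dialog history."""
    return max(candidates, key=lambda c: len(tokenize(c) & tokenize(history)))

def post_hoc_injection(history, initial_response, knowledge_pool):
    """Full post-hoc pipeline: retrieve -> inject per snippet -> rank."""
    snippets = retrieve_snippets(history, initial_response, knowledge_pool)
    candidates = [inject(initial_response, s) for s in snippets]
    return select_response(history, candidates)

# Example: the selected response carries the relevant snippet.
reply = post_hoc_injection(
    "I heard the Mona Lisa is in Paris",
    "Sure, Paris is a great city.",
    ["the Louvre museum in Paris holds the Mona Lisa",
     "Tokyo has excellent sushi restaurants"],
)
```

The key design point mirrored here is that injection happens *after* an initial response already exists, so any off-the-shelf dialog model can supply `initial_response` without retraining.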

Citation (APA)

Majumder, B. P., Jhamtani, H., Berg-Kirkpatrick, T., & McAuley, J. (2022). Achieving Conversational Goals with Unsupervised Post-hoc Knowledge Injection. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 3140–3153). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.acl-long.224
