Multi-level Adaptive Contrastive Learning for Knowledge Internalization in Dialogue Generation


Abstract

Knowledge-grounded dialogue generation aims to mitigate text degeneration by incorporating external knowledge to supplement the context. However, models often fail to internalize this information into responses in a human-like manner; instead, they simply insert snippets of the provided knowledge into generic responses. As a result, the generated responses tend to be tedious, incoherent, and lacking in interactivity, meaning the degeneration problem remains unsolved. In this work, we find that such copying-style degeneration is primarily due to the weak likelihood objective, which allows the model to "cheat" the objective by merely duplicating knowledge snippets in a superficial, overlap-based pattern-matching manner. To overcome this challenge, we propose a Multi-level Adaptive Contrastive Learning (MACL) framework that dynamically samples negative examples and penalizes degeneration behaviors at both the token level and the sequence level. Extensive experiments on the WoW dataset demonstrate the effectiveness of our approach across various pre-trained models and decoding strategies.
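The abstract describes penalizing copying-style degeneration with contrastive objectives at two granularities. The sketch below is an illustrative, generic formulation of these two ideas, not the paper's exact MACL objective: a margin-based sequence-level contrastive loss that pushes the human response to score above sampled degenerate (knowledge-copying) candidates, and a token-level unlikelihood-style penalty on tokens merely copied from the knowledge snippet. All function names and the `margin` parameter are assumptions for illustration.

```python
import math

def sequence_contrastive_loss(pos_score, neg_scores, margin=1.0):
    """Sequence-level penalty (illustrative): hinge loss that requires the
    model's score for the human response (pos_score) to exceed each sampled
    degenerate candidate's score by at least `margin`."""
    hinges = [max(0.0, margin - (pos_score - n)) for n in neg_scores]
    return sum(hinges) / len(hinges)

def token_unlikelihood_loss(token_probs, copied_mask):
    """Token-level penalty (illustrative, unlikelihood-style): discourage
    probability mass on response tokens flagged as copied verbatim from
    the knowledge snippet (copied_mask[i] == 1)."""
    losses = [-math.log(max(1e-9, 1.0 - p))
              for p, m in zip(token_probs, copied_mask) if m]
    return sum(losses) / max(1, len(losses))

# Example: the human response scores 2.0, two copying candidates score 0.5
# and 1.5; only the second violates the margin of 1.0.
seq_loss = sequence_contrastive_loss(2.0, [0.5, 1.5])
# Example: the first token (prob 0.9) is a verbatim copy, so it is penalized.
tok_loss = token_unlikelihood_loss([0.9, 0.1], [1, 0])
```

In practice both terms would be combined with the standard likelihood objective; the adaptive negative sampling that gives MACL its name is beyond the scope of this sketch.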

Citation (APA)
Yang, C., Lin, Z., Wang, L., Tian, C., Pang, L., Li, J., … Wang, W. (2023). Multi-level Adaptive Contrastive Learning for Knowledge Internalization in Dialogue Generation. In EMNLP 2023 - 2023 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 8002–8015). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.emnlp-main.497
