Position Matters! Empirical Study of Order Effect in Knowledge-grounded Dialogue


Abstract

With the power of large pretrained language models, various research works have integrated knowledge into dialogue systems. Traditional techniques treat knowledge as part of the input sequence of the dialogue system, prepending a set of knowledge statements to the dialogue history. However, such a mechanism forces the knowledge statements to be concatenated in a fixed order, so models implicitly pay imbalanced attention to them during training. In this paper, we first investigate how the order of the knowledge set influences the responses of autoregressive dialogue systems. We conduct experiments on two commonly used dialogue datasets with two types of transformer-based models and find that the models treat the input knowledge unequally. To this end, we propose a simple and novel technique that alleviates the order effect by modifying the position embeddings of the knowledge input in these models. With the proposed position embedding method, the experimental results show that each knowledge statement is considered uniformly when generating responses.
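
The abstract does not spell out how the position embeddings are modified. The sketch below illustrates one plausible realization of the idea, assuming the method assigns every knowledge statement the same positional range so the model cannot distinguish statements by their concatenation order; the function name build_position_ids and the offset scheme for the dialogue history are illustrative assumptions, not the paper's exact recipe.

```python
# Minimal sketch of order-invariant position ids for knowledge-grounded input.
# Each knowledge statement restarts from position 0, so the positional signal
# no longer encodes the order in which statements were concatenated.
from typing import List


def build_position_ids(knowledge_token_lens: List[int],
                       history_token_len: int) -> List[int]:
    """Assign identical position ranges to each knowledge statement.

    knowledge_token_lens: number of tokens in each knowledge statement
    history_token_len:    number of tokens in the dialogue history
    """
    position_ids: List[int] = []

    # Every knowledge statement gets positions 0..len-1, regardless of where
    # it appears in the concatenated input sequence.
    for length in knowledge_token_lens:
        position_ids.extend(range(length))

    # The dialogue history starts after the longest knowledge span so its
    # positions do not collide with the shared knowledge positions.
    history_start = max(knowledge_token_lens, default=0)
    position_ids.extend(range(history_start, history_start + history_token_len))

    return position_ids


if __name__ == "__main__":
    # Three knowledge statements of 4, 6, and 5 tokens, followed by a
    # 10-token dialogue history.
    print(build_position_ids([4, 6, 5], 10))
```

In a transformer implementation, this list would be passed as the model's position ids (in place of the default sequential positions) so that attention over the knowledge statements is not biased by their order in the input.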

Cite (APA)

Su, H., Kumar, S. H., Mazumder, S., Chen, W., Manuvinakurike, R., Okur, E., … Lee, H. Y. (2023). Position Matters! Empirical Study of Order Effect in Knowledge-grounded Dialogue. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 36–43). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.dialdoc-1.4
