Improving Multi-turn Response Selection Models with Complementary Last-Utterance Selection by Instance Weighting


Abstract

Open-domain retrieval-based dialogue systems require a considerable amount of training data to learn their parameters. In practice, however, the negative samples in the training data are usually drawn at random from an unannotated conversation corpus, so the generated training data is likely to contain noise that degrades the performance of response selection models. To address this difficulty, we exploit the underlying correlations in the data resource itself to derive additional supervision signals and reduce the influence of noisy data. More specifically, we consider a main-complementary task pair: the main task (i.e., our focus) selects the correct response given the last utterance and context, while the complementary task selects the last utterance given the response and context. The key point is that the output of the complementary task is used to set instance weights for the main task. We conduct extensive experiments on two public datasets and obtain significant improvements on both. We also investigate variants of our approach from multiple aspects, and the results verify its effectiveness.
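The instance-weighting idea the abstract describes can be sketched roughly as follows. This is a hypothetical illustration, not the authors' implementation: the score shapes, the sigmoid confidence mapping, and the agreement-based weighting rule are all assumptions; the paper should be consulted for the actual weighting function. The weights are detached here so that the main-task loss does not backpropagate into the complementary model, which is likewise a design assumption.

```python
import torch
import torch.nn.functional as F

def weighted_main_task_loss(main_logits, comp_logits, labels):
    """Instance-weighted response-selection loss (illustrative sketch).

    main_logits: main-task matching scores for (context, response) pairs.
    comp_logits: complementary-task scores for the same instances, i.e.
                 how well the response plus context identify the true
                 last utterance. All tensors have shape (batch,).
    labels:      1 for the ground-truth response, 0 for random negatives.
    """
    # Map the complementary score to a confidence in [0, 1]; detach so
    # the complementary task is not trained through this loss.
    conf = torch.sigmoid(comp_logits).detach()
    # Weight each instance by how much the complementary task agrees
    # with its label: confident positives and confidently-mismatched
    # negatives keep high weight, while a randomly sampled "negative"
    # that the complementary task finds plausible (a likely false
    # negative) is down-weighted.
    weights = torch.where(labels.bool(), conf, 1.0 - conf)
    per_instance = F.binary_cross_entropy_with_logits(
        main_logits, labels.float(), reduction="none")
    return (weights * per_instance).mean()
```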

Citation (APA)

Zhou, K., Zhao, W. X., Zhu, Y., Wen, J. R., & Yu, J. (2020). Improving Multi-turn Response Selection Models with Complementary Last-Utterance Selection by Instance Weighting. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12085 LNAI, pp. 475–486). Springer. https://doi.org/10.1007/978-3-030-47436-2_36
