Learning to customize model structures for few-shot dialogue generation tasks

25 citations · 153 Mendeley readers

Abstract

Training generative models on a minimal corpus is one of the critical challenges in building open-domain dialogue systems. Existing methods tend to use a meta-learning framework that pre-trains the parameters on all non-target tasks and then fine-tunes on the target task. However, fine-tuning distinguishes tasks from the parameter perspective but ignores the model-structure perspective, resulting in similar dialogue models for different tasks. In this paper, we propose an algorithm that can customize a unique dialogue model for each task in the few-shot setting. In our approach, each dialogue model consists of a shared module, a gating module, and a private module. The first two modules are shared among all tasks, while the third differentiates into different network structures to better capture the characteristics of the corresponding task. Extensive experiments on two datasets show that our method outperforms all baselines in terms of task consistency, response quality, and diversity.
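The abstract only names the three modules, so the sketch below illustrates one plausible way such a design could be wired up in PyTorch. The layer sizes, the sigmoid gate, and the magnitude-based pruning used to give the private path a task-specific structure are all assumptions for illustration; they are not the paper's exact architecture or training procedure, which customizes the private module within a meta-learning loop.

```python
import torch
import torch.nn as nn

class CustomizedDialogueModel(nn.Module):
    """Minimal sketch of the three-module design described in the abstract.

    The shared and gating modules are tied across all tasks; the private
    module starts from a common over-parameterized layer and is pruned per
    task, so each task ends up with its own effective structure. All sizes
    and the gating formulation here are illustrative assumptions.
    """

    def __init__(self, hidden=256):
        super().__init__()
        self.shared = nn.GRU(hidden, hidden, batch_first=True)  # shared across tasks
        self.gate = nn.Sequential(                              # shared gating module
            nn.Linear(2 * hidden, hidden), nn.Sigmoid()
        )
        self.private = nn.Linear(hidden, hidden)                # differentiates per task
        # Binary mask over the private weights; pruning it per task yields a
        # task-specific structure while the tensor shape stays shared.
        self.register_buffer("mask", torch.ones_like(self.private.weight))

    def forward(self, x):
        s, _ = self.shared(x)                                   # shared representation
        p = nn.functional.linear(                               # task-customized path
            s, self.private.weight * self.mask, self.private.bias
        )
        g = self.gate(torch.cat([s, p], dim=-1))                # fuse the two paths
        return g * p + (1 - g) * s

    @torch.no_grad()
    def prune_private(self, keep_ratio=0.5):
        """Keep only the largest-magnitude private weights for this task."""
        w = self.private.weight.abs().flatten()
        k = max(1, int(keep_ratio * w.numel()))
        threshold = w.topk(k).values.min()
        self.mask.copy_((self.private.weight.abs() >= threshold).float())
```

In this reading, each few-shot task would clone the model, call a routine like the hypothetical `prune_private` on its own data statistics, and then fine-tune only the surviving private weights, so that different tasks end up with genuinely different network structures rather than just different parameter values.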

Citation (APA)

Song, Y., Liu, Z., Bi, W., Yan, R., & Zhang, M. (2020). Learning to customize model structures for few-shot dialogue generation tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (pp. 5832–5841). Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.acl-main.517
