Generating responses in a desired style has great potential to extend the applications of open-domain dialogue systems, yet progress is hindered by the lack of parallel data for training. In this work, we explore this challenging task with pre-trained language models, which have brought breakthroughs to a variety of natural language tasks. To this end, we introduce a KL loss and a style classifier into the fine-tuning step in order to steer response generation toward the target style at both the word level and the sentence level. Comprehensive empirical studies on two public datasets indicate that our model significantly outperforms state-of-the-art methods in terms of both style consistency and contextual coherence.
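For intuition, here is a minimal PyTorch sketch of how word-level and sentence-level style signals of this kind might be combined during fine-tuning. The function names, loss weights, and KL direction are illustrative assumptions rather than the paper's exact formulation, and back-propagating the sentence-level loss through generated text would in practice require a relaxation (e.g., soft token embeddings), which this sketch omits.

```python
import torch
import torch.nn.functional as F

def word_level_kl_loss(dialogue_logits: torch.Tensor,
                       style_lm_logits: torch.Tensor) -> torch.Tensor:
    """Word-level steering (sketch): KL divergence pulling the dialogue
    model's next-token distribution toward that of a frozen style LM."""
    log_p = F.log_softmax(dialogue_logits, dim=-1)  # dialogue model, log-probs
    q = F.softmax(style_lm_logits, dim=-1)          # style LM, probs
    # F.kl_div(input, target) computes KL(target || input) for log-space input,
    # i.e. it pushes the dialogue model toward the style LM's distribution.
    return F.kl_div(log_p, q, reduction="batchmean")

def sentence_level_style_loss(style_classifier: torch.nn.Module,
                              response_repr: torch.Tensor,
                              target_style: int) -> torch.Tensor:
    """Sentence-level steering (sketch): a style classifier scores whole
    responses; the loss is its cross-entropy against the target style."""
    logits = style_classifier(response_repr)
    labels = torch.full((logits.size(0),), target_style,
                        dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)

# During fine-tuning, these terms would be added to the usual
# maximum-likelihood response loss; alpha and beta are assumed weights:
#   loss = nll_loss + alpha * word_level_kl_loss(p_logits, q_logits) \
#                   + beta * sentence_level_style_loss(clf, repr, style_id)
```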
Yang, Z., Wu, W., Xu, C., Liang, X., Bai, J., Wang, L., … Li, Z. (2020). STYLEDGPT: Stylized response generation with pre-trained language models. In Findings of the Association for Computational Linguistics: EMNLP 2020 (pp. 1548–1559). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.findings-emnlp.140