Mixed-initiative dialogue tasks involve repeated exchanges of information and conversational control. Conversational agents gain control by generating responses that follow particular dialogue intents or strategies, prescribed by a policy planner. The standard approach has been fine-tuning pre-trained language models to perform generation conditioned on these intents. However, these supervised generation models are limited by the cost and quality of data annotation. We instead prompt large language models as a drop-in replacement for fine-tuned conditional generation. We formalize prompt construction for controllable mixed-initiative dialogue. Our findings show improvements over fine-tuning and ground-truth responses according to human evaluation and automatic metrics for two tasks: PersuasionForGood and Emotional Support Conversations.
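To make the idea of intent-conditioned prompting concrete, below is a minimal sketch of how a prompt might be assembled from a dialogue history and a target strategy. The template, the strategy definitions, and the function name `build_prompt` are illustrative assumptions, not the paper's exact prompt format.

```python
# Hypothetical sketch of intent-conditioned prompt construction for an
# emotional-support-style dialogue. The strategy names and instructions
# below are illustrative, not the paper's exact taxonomy or wording.

STRATEGY_DEFINITIONS = {
    "Question": "Ask the seeker to elaborate on their situation.",
    "Reflection of feelings": "Acknowledge and mirror the seeker's feelings.",
}

def build_prompt(dialogue_history, intent):
    """Assemble a prompt asking the model to respond with a given dialogue intent.

    dialogue_history: list of (speaker, utterance) pairs.
    intent: key into STRATEGY_DEFINITIONS chosen by a policy planner.
    """
    history = "\n".join(f"{speaker}: {utterance}" for speaker, utterance in dialogue_history)
    instruction = STRATEGY_DEFINITIONS[intent]
    return (
        "The following is a conversation between a support system and a help seeker.\n"
        f"{history}\n"
        f"The system responds using the strategy '{intent}': {instruction}\n"
        "System:"
    )

if __name__ == "__main__":
    history = [
        ("Seeker", "I've been really stressed about my exams lately."),
        ("System", "I'm sorry to hear that. Exams can be overwhelming."),
    ]
    # The resulting string would be sent to a large language model in place
    # of running a fine-tuned conditional generation model.
    print(build_prompt(history, "Question"))
```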
Chen, M., Yu, X., Shi, W., Awasthi, U., & Yu, Z. (2023). Controllable Mixed-Initiative Dialogue Generation through Prompting. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 2, pp. 951–966). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-short.82