Enhancing Multilingual Document-Grounded Dialogue Using Cascaded Prompt-Based Post-Training Models


Abstract

The Dialdoc23 shared task presents a Multilingual Document-Grounded Dialogue Systems (MDGDS) challenge, in which system responses are generated in multiple languages from user queries, dialogue history, and relevant passages. A major challenge for this task is the limited training data available in low-resource languages such as French and Vietnamese. In this paper, we propose Cascaded Prompt-based Post-training Models, dividing the task into three subtasks: Retrieval, Reranking, and Generation. We conduct post-training on high-resource languages such as English and Chinese to enhance performance on low-resource languages by exploiting similarities between languages. Additionally, we use prompting to activate the model's ability across diverse languages within the dialogue domain and explore which prompts work well. Our comprehensive experiments demonstrate the effectiveness of the proposed methods, which achieved first place on the leaderboard with a total score of 215.40 across token-level F1, SacreBLEU, and Rouge-L metrics.
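The three-stage cascade named in the abstract (Retrieval, then Reranking, then Generation) can be sketched as follows. This is a minimal illustrative pipeline, not the authors' implementation: the token-overlap scoring functions and the `retrieve`/`rerank`/`generate` names are hypothetical stand-ins for the trained retriever, reranker, and prompt-based generator described in the paper.

```python
# Hypothetical sketch of a cascaded Retrieval -> Reranking -> Generation
# pipeline. Each stage here uses a toy heuristic in place of a trained model.

def retrieve(query, passages, k=3):
    """Stage 1: coarse retrieval by token overlap (stand-in for a real retriever)."""
    q = set(query.lower().split())
    scored = [(len(q & set(p.lower().split())), p) for p in passages]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [p for _, p in scored[:k]]

def rerank(query, candidates):
    """Stage 2: rerank candidates by overlap density (stand-in for a cross-encoder)."""
    q = set(query.lower().split())
    return sorted(
        candidates,
        key=lambda p: len(q & set(p.lower().split())) / max(len(p.split()), 1),
        reverse=True,
    )

def generate(query, passage, prompt="Answer using the passage:"):
    """Stage 3: prompt-based generation (stand-in for the fine-tuned generator)."""
    return f"{prompt} [{passage}] -> response to '{query}'"

passages = [
    "The museum opens at nine in the morning.",
    "Tickets can be booked online.",
    "The cafe serves lunch from noon.",
]
query = "when does the museum open"
top = rerank(query, retrieve(query, passages))
print(generate(query, top[0]))
```

The cascade narrows the candidate set at each stage: cheap retrieval over many passages, a more precise reranker over the shortlist, and generation grounded in the single best passage.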

Citation (APA)
Liu, J., Cheng, S., Zhou, Z., Gu, Y., Ye, J., & Luo, H. (2023). Enhancing Multilingual Document-Grounded Dialogue Using Cascaded Prompt-Based Post-Training Models. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 44–51). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.dialdoc-1.5
