Abstract
Providing conversation models with background knowledge has been shown to make open-domain dialogues more informative and engaging. Existing models treat knowledge selection as a sentence-ranking or classification problem in which each sentence is handled individually, ignoring the internal semantic connections among sentences in the background document. In this work, we propose to automatically convert background knowledge documents into document semantic graphs and then perform knowledge selection over such graphs. Our document semantic graphs preserve sentence-level information through the use of sentence nodes and provide concept connections between sentences. We apply multitask learning to perform sentence-level and concept-level knowledge selection jointly, and show that this improves sentence-level selection. Our experiments show that our semantic-graph-based knowledge selection improves over sentence-selection baselines on both the knowledge selection task and the end-to-end response generation task on HollE (Moghe et al., 2018), and improves generalization to unseen topics in WoW (Dinan et al., 2019).
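The core idea — sentence nodes linked through shared concepts — can be illustrated with a minimal sketch. This is not the paper's construction pipeline (which the abstract does not detail); the `build_semantic_graph` helper and the hand-supplied concept lists below are hypothetical, standing in for whatever extraction step a real system would use.

```python
from collections import defaultdict

def build_semantic_graph(concepts_per_sentence):
    """Toy document semantic graph: one node per sentence, one node per
    concept, with edges linking each sentence to the concepts it mentions.
    Sentences that share a concept become connected via that concept node."""
    graph = defaultdict(set)  # node -> set of neighboring nodes
    for i, concepts in enumerate(concepts_per_sentence):
        s_node = f"sent:{i}"
        graph[s_node]  # ensure every sentence node exists, even with no concepts
        for c in concepts:
            c_node = f"concept:{c}"
            graph[s_node].add(c_node)
            graph[c_node].add(s_node)
    return dict(graph)

# Hypothetical per-sentence concepts for a two-sentence background document;
# a real system would extract these automatically.
concepts = [
    ["Keanu Reeves", "The Matrix"],   # sentence 0
    ["Keanu Reeves", "John Wick"],    # sentence 1
]
graph = build_semantic_graph(concepts)
# Both sentence nodes are now reachable from each other through the
# shared "concept:Keanu Reeves" node.
```

Knowledge selection can then score sentence nodes while propagating evidence along these concept edges, rather than ranking each sentence in isolation.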
Citation
Li, S., Namazifar, M., Jin, D., Bansal, M., Ji, H., Liu, Y., & Hakkani-Tur, D. (2022). Enhanced Knowledge Selection for Grounded Dialogues via Document Semantic Graphs. In NAACL 2022 - 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference (pp. 2810–2823). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.naacl-main.202