A Knowledge Storage and Semantic Space Alignment Method for Multi-documents Dialogue Generation

Abstract

Question Answering (QA) is a Natural Language Processing (NLP) task that measures language and semantic understanding: a system must not only retrieve relevant documents from a large collection of articles but also answer the corresponding questions based on those documents. However, the varied language styles and sources of human questions and evidence documents produce different embedding semantic spaces, which can introduce errors into the downstream QA task. To alleviate these problems, we propose a framework that enhances downstream evidence retrieval by generating evidence, with the aim of improving the performance of response generation. Specifically, we treat a pre-trained language model as a knowledge base, storing document information and knowledge in the model's parameters. By applying the Child-Tuning approach, the knowledge storage and evidence generation stages avoid catastrophic forgetting for response generation. Extensive experiments on a multi-document dataset show that the proposed method improves final performance, demonstrating the effectiveness of the framework.
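The Child-Tuning idea referenced in the abstract is to update only a task-relevant subset of a pre-trained model's parameters (the "child network") and zero the gradients of the rest, so knowledge stored in the frozen parameters survives fine-tuning. The sketch below is a minimal, framework-free illustration of that gradient-masking principle, not the authors' implementation; the importance scores, `keep_ratio`, and function names are illustrative assumptions.

```python
def child_tuning_mask(importance, keep_ratio):
    """Select the 'child network': keep the top keep_ratio fraction of
    parameters by importance; all other gradients will be zeroed.
    (Illustrative: real Child-Tuning uses Fisher information or random
    task-free masks over the full parameter tensors.)"""
    k = max(1, int(len(importance) * keep_ratio))
    threshold = sorted(importance, reverse=True)[k - 1]
    return [1.0 if s >= threshold else 0.0 for s in importance]

def masked_sgd_step(params, grads, mask, lr=0.1):
    """SGD step that only touches the child network, leaving the
    remaining pre-trained parameters (and the knowledge stored in
    them) unchanged."""
    return [p - lr * g * m for p, g, m in zip(params, grads, mask)]

# Toy 4-parameter "model": squared gradients as an importance proxy.
params = [0.5, -1.2, 0.3, 2.0]
grads = [0.4, 0.1, -0.6, 0.05]
importance = [g * g for g in grads]
mask = child_tuning_mask(importance, keep_ratio=0.5)
new_params = masked_sgd_step(params, grads, mask)
# Only the two most important parameters move; the rest are preserved.
```

In a real setup the same masking is applied per tensor inside the optimizer (e.g. via a gradient hook), so the frozen coordinates act as the "knowledge base" while the child subset adapts to evidence generation.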

Citation (APA)

Zhu, M., Li, B., Xia, F., & Weng, Y. (2022). A Knowledge Storage and Semantic Space Alignment Method for Multi-documents Dialogue Generation. In DialDoc 2022 - Proceedings of the 2nd DialDoc Workshop on Document-Grounded Dialogue and Conversational Question Answering, Proceedings of the Workshop (pp. 130–135). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.dialdoc-1.14
