Improving Multiple Documents Grounded Goal-Oriented Dialog Systems via Diverse Knowledge Enhanced Pretrained Language Model

Abstract

In this paper, we describe our submission to the MultiDoc2Dial task, which aims to model goal-oriented dialogues grounded in multiple documents. The task is split into grounding span prediction and agent response generation. The baseline for the task is a retrieval-augmented generation (RAG) model, which pairs a dense passage retrieval (DPR) model for retrieval with a BART model for generation. The main challenge of the task is that the system needs a broad base of pretrained knowledge to generate answers grounded in multiple documents. To address this challenge, we adopt multitask learning, data augmentation, model pretraining, and contrastive learning to broaden the knowledge our model covers. We experiment with various settings of our method to show the effectiveness of each approach. Our final model achieves 37.78 F1, 22.94 SacreBLEU, 36.97 Meteor, and 35.46 RougeL, for a total of 133.15, on the released test set of the DialDoc Shared Task at ACL 2022.
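
The RAG baseline described above is, in outline, the DPR-plus-BART pipeline available in the HuggingFace transformers library. Below is a minimal sketch of that pipeline; the checkpoint name, dummy index, and sample query are illustrative placeholders, not the authors' MultiDoc2Dial configuration.

```python
# Minimal sketch of a RAG pipeline: a DPR retriever fetches passages,
# and a BART generator conditions on them to produce the response.
# The checkpoint and dummy index are placeholders, not the authors' setup.
from transformers import RagTokenizer, RagRetriever, RagSequenceForGeneration

tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
retriever = RagRetriever.from_pretrained(
    "facebook/rag-sequence-nq",
    index_name="exact",
    use_dummy_dataset=True,  # stand-in passage index for demonstration
)
model = RagSequenceForGeneration.from_pretrained(
    "facebook/rag-sequence-nq", retriever=retriever
)

# Treat the latest user turn as the retrieval query; generation is
# conditioned on the retrieved passages.
inputs = tokenizer("How do I renew my driver's license?", return_tensors="pt")
generated = model.generate(input_ids=inputs["input_ids"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```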
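Of the techniques the abstract lists, contrastive learning is the most self-contained to illustrate. The sketch below shows a generic in-batch InfoNCE-style objective for a dense retriever, one common way contrastive learning is applied in this setting; the function name, temperature, and tensor shapes are assumptions for illustration, not the authors' exact objective.

```python
# Generic in-batch contrastive (InfoNCE-style) loss for dense retrieval:
# passage i is the positive for query i, all other passages in the batch
# serve as negatives. Illustrative only; not the paper's exact objective.
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(query_emb: torch.Tensor,
                              passage_emb: torch.Tensor,
                              temperature: float = 0.05) -> torch.Tensor:
    q = F.normalize(query_emb, dim=-1)
    p = F.normalize(passage_emb, dim=-1)
    logits = q @ p.T / temperature          # (B, B) similarity matrix
    targets = torch.arange(q.size(0), device=q.device)  # diagonal positives
    return F.cross_entropy(logits, targets)

# Example: a batch of 4 query/passage embedding pairs of dimension 768.
loss = in_batch_contrastive_loss(torch.randn(4, 768), torch.randn(4, 768))
```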

Citation

Jang, Y., Lee, D., Park, H., Kang, T., Lee, H., Bae, H., & Jung, K. (2022). Improving Multiple Documents Grounded Goal-Oriented Dialog Systems via Diverse Knowledge Enhanced Pretrained Language Model. In Proceedings of the Second DialDoc Workshop on Document-Grounded Dialogue and Conversational Question Answering (pp. 136–141). Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.dialdoc-1.15
