Abstract
The task of dialogue rewriting aims to reconstruct the latest dialogue utterance by copying the missing content from the dialogue context. Existing models for this task suffer from a robustness issue: their performance drops dramatically when tested on a different dataset. We address this issue by proposing a novel sequence-tagging-based model, which significantly reduces the search space while still covering the core of the task. As with most tagging models for text generation, however, the model's outputs may lack fluency. To alleviate this, we inject a loss signal from BLEU or GPT-2 under a REINFORCE framework. Experiments show large improvements over current state-of-the-art systems when transferring to another dataset.
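To make the REINFORCE component concrete, the sketch below is a minimal illustration, not the authors' code: it assumes a hypothetical `tagger` that emits per-token tag logits and a hypothetical `decode_tags` helper that applies sampled tags to produce a rewrite. A tag sequence is sampled, the decoded rewrite is scored with BLEU against the reference, and the sampled sequence's log-likelihood is scaled by the reward minus a baseline.

```python
# Minimal sketch of injecting a BLEU reward into a sequence tagger via
# REINFORCE. `tagger` and `decode_tags` are hypothetical stand-ins for
# the paper's tagging model and its tag-to-text decoding step.
import torch
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def reinforce_loss(tagger, tokens, context, reference, baseline=0.0):
    """Sample a tag sequence, decode it to a rewrite, and weight the
    sampled sequence's log-likelihood by its BLEU reward."""
    logits = tagger(tokens, context)              # (seq_len, num_tags)
    dist = torch.distributions.Categorical(logits=logits)
    tags = dist.sample()                          # one tag per source token
    rewrite = decode_tags(tokens, tags, context)  # apply tags -> token list
    # Reward: smoothed sentence-level BLEU of the rewrite vs. the reference
    reward = sentence_bleu([reference], rewrite,
                           smoothing_function=SmoothingFunction().method1)
    # REINFORCE: scale the negative log-prob by (reward - baseline)
    log_prob = dist.log_prob(tags).sum()
    return -(reward - baseline) * log_prob
```

A GPT-2-based reward would follow the same pattern, with the BLEU score replaced by a fluency score derived from language-model likelihood of the decoded rewrite.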
Hao, J., Song, L., Wang, L., Xu, K., Tu, Z., & Yu, D. (2021). RAST: Domain-Robust Dialogue Rewriting as Sequence Tagging. In EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 4913–4924). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.emnlp-main.402