ConQuest: Contextual Question Paraphrasing through Answer-Aware Synthetic Question Generation

Abstract

Despite excellent performance on tasks such as question answering, Transformer-based architectures remain sensitive to syntactic and contextual ambiguities. Question Paraphrasing (QP) offers a promising remedy as a means of augmenting existing datasets. The main challenges for current QP models are a lack of training data and difficulty in generating diverse, natural questions. In this paper, we present ConQuest, a framework for generating synthetic datasets for contextual question paraphrasing. To this end, ConQuest first employs an answer-aware question generation (QG) model to create a question-pair dataset, then uses this data to train a contextualized question paraphrasing model. We evaluate ConQuest extensively and show that it produces more diverse and fluent question pairs than existing approaches. Our contextual paraphrase model also establishes a strong baseline for end-to-end contextual paraphrasing. Further, we find that context improves BLEU-1 on contextual compression and expansion by 4.3 and 11.2 points, respectively, compared to a non-contextual model.
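The abstract describes a two-stage pipeline: an answer-aware QG model first produces synthetic question pairs, which then serve as training data for a contextual paraphrase model. The sketch below illustrates only the general idea of stage one, not the authors' implementation; the model name ("t5-small" is a placeholder for a QG-fine-tuned checkpoint), the "<hl>" highlight convention, and the prompt format are all assumptions made for illustration.

```python
# Minimal sketch of the first stage of a ConQuest-style pipeline (illustrative,
# not the authors' code). Highlighting an answer span and generating several
# questions conditioned on it yields questions that share an answer -- each
# such pair is a candidate synthetic paraphrase pair.
# Stage 2 (not shown) would fine-tune a seq2seq model on these pairs, with the
# source context prepended, to obtain a contextual question-paraphrasing model.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder checkpoint: a model actually fine-tuned for answer-aware QG,
# with its own highlight tokens, would be needed for sensible output.
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

def synthetic_question_pair(context: str, answer: str) -> tuple[str, str]:
    """Generate two distinct questions about the same answer span."""
    # Mark the answer span so generation is conditioned on it (answer-aware QG).
    highlighted = context.replace(answer, f"<hl> {answer} <hl>", 1)
    inputs = tokenizer(f"generate question: {highlighted}",
                       return_tensors="pt", truncation=True)
    # Beam search with multiple return sequences gives surface-level variety.
    output_ids = model.generate(**inputs, max_new_tokens=48,
                                num_beams=8, num_return_sequences=2)
    q1, q2 = (tokenizer.decode(ids, skip_special_tokens=True)
              for ids in output_ids)
    return q1, q2

pair = synthetic_question_pair(
    "Marie Curie won the Nobel Prize in Physics in 1903.", "1903")
print(pair)  # two questions answerable by "1903" -> one training pair
```

Under this reading, pairing questions by their shared answer span is what lets the framework manufacture paraphrase supervision without human-written question pairs, addressing the training-data shortage the abstract points to.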

Cite (APA)

Mirshekari, M., Gu, J., & Sisto, A. (2021). ConQuest: Contextual Question Paraphrasing through Answer-Aware Synthetic Question Generation. In W-NUT 2021: Proceedings of the 7th Workshop on Noisy User-Generated Text (pp. 222–229). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.wnut-1.25
