Capturing greater context for question generation


Abstract

Automatic question generation can benefit many applications ranging from dialogue systems to reading comprehension. While questions are often asked with respect to long documents, there are many challenges with modeling such long documents. Many existing techniques generate questions by effectively looking at one sentence at a time, leading to questions that are easy and not reflective of the human process of question generation. Our goal is to incorporate interactions across multiple sentences to generate realistic questions for long documents. In order to link a broad document context to the target answer, we represent the relevant context via a multi-stage attention mechanism, which forms the foundation of a sequence-to-sequence model. We outperform state-of-the-art question generation methods on three question-answering datasets: SQuAD, MS MARCO, and NewsQA.
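The abstract describes a multi-stage attention mechanism that ties broad document context to the target answer inside a sequence-to-sequence generator, but does not spell out the formulation. The sketch below is a minimal illustration of one plausible two-stage design in PyTorch, not the authors' exact architecture: stage one lets the answer tokens attend over the full document to build an answer-aware context, and stage two lets the decoder attend over that representation at each generation step. All names (MultiStageAttention, stage1, stage2, fuse) and dimensions are hypothetical.

```python
# Illustrative two-stage attention for answer-aware question generation.
# This is a sketch under assumed shapes, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiStageAttention(nn.Module):
    """Stage 1: answer tokens attend over the whole document to gather
    broad context. Stage 2: the decoder state attends over the
    answer-aware representation at each decoding step."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.stage1 = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.stage2 = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.fuse = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, doc: torch.Tensor, answer: torch.Tensor,
                dec_state: torch.Tensor) -> torch.Tensor:
        # doc:       (batch, doc_len, hidden)  encoded document tokens
        # answer:    (batch, ans_len, hidden)  encoded answer tokens
        # dec_state: (batch, hidden)           current decoder state
        # Stage 1: answer-to-document attention pulls in document
        # context relevant to the answer span.
        scores1 = answer @ self.stage1(doc).transpose(1, 2)   # (b, ans, doc)
        ctx1 = F.softmax(scores1, dim=-1) @ doc               # (b, ans, hidden)
        ans_aware = self.fuse(torch.cat([answer, ctx1], dim=-1))
        # Stage 2: decoder-to-context attention over the answer-aware
        # representation yields the context vector for this step.
        scores2 = (self.stage2(ans_aware)
                   @ dec_state.unsqueeze(-1)).squeeze(-1)     # (b, ans)
        ctx2 = (F.softmax(scores2, dim=-1).unsqueeze(1)
                @ ans_aware).squeeze(1)                       # (b, hidden)
        return ctx2  # fed to the decoder to predict the next question word

# Usage with random tensors standing in for encoder/decoder outputs:
attn = MultiStageAttention(hidden_dim=256)
doc = torch.randn(2, 120, 256)   # encoded document
ans = torch.randn(2, 6, 256)     # encoded answer span
dec = torch.randn(2, 256)        # decoder hidden state
ctx = attn(doc, ans, dec)        # -> shape (2, 256)
```

The two-stage split is the key idea the abstract points at: conditioning attention on the answer first means the decoder sees document context already filtered for answer relevance, rather than attending over one sentence at a time.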

Citation (APA)

Tuan, L. A., Shah, D. J., & Barzilay, R. (2020). Capturing greater context for question generation. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 9065–9072). AAAI Press. https://doi.org/10.1609/aaai.v34i05.6440
