Capturing Conversational Interaction for Question Answering via Global History Reasoning


Abstract

Conversational Question Answering (ConvQA) requires answering the current question conditioned on the observable paragraph-level context and the conversation history. Previous works have intensively studied history-dependent reasoning, perceiving and absorbing topic-related information from prior utterances in the interactive encoding stage, which yields significant improvement over history-independent reasoning. This paper further strengthens the ConvQA encoder by establishing long-distance dependencies among global utterances in multi-turn conversation. We use multi-layer transformers to resolve long-distance relationships, which potentially contribute to the reweighting of attentive information in historical utterances. Experiments on QuAC show that our method obtains a substantial improvement (1%), yielding an F1 score of 73.7%. All source code is available at https://github.com/jaytsien/GHR.
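The abstract does not spell out the encoder internals, but the core idea it describes is letting every turn in the conversation attend to every other turn in one pass, so attention can reweight distant history. As a rough illustration only (not the authors' implementation, which uses learned multi-layer transformers), a single self-attention layer over per-turn representations sketches this global reweighting; the function name and the identity query/key/value projections are simplifying assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_history_attention(utterances):
    """Self-attention over ALL conversation turns at once, so each turn
    can attend to every other turn (long-distance dependency).
    utterances: (num_turns, dim) array of turn representations.
    Projections are identity here for brevity; a real transformer
    layer learns separate W_q, W_k, W_v matrices."""
    Q = K = V = utterances
    d = utterances.shape[-1]
    scores = Q @ K.T / np.sqrt(d)        # (turns, turns): all pairs
    weights = softmax(scores, axis=-1)   # reweighting of history turns
    return weights @ V, weights

rng = np.random.default_rng(0)
turns = rng.normal(size=(5, 8))          # 5 turns, 8-dim embeddings
out, w = global_history_attention(turns)
print(out.shape, w.shape)                # (5, 8) (5, 5)
```

Each row of the attention matrix sums to 1 and spans all turns, which is what distinguishes this global scheme from encoders that only see a fixed window of recent history.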

Citation (APA)

Qian, J., Zou, B., Dong, M., Li, X., Aw, A. T., & Hong, Y. (2022). Capturing Conversational Interaction for Question Answering via Global History Reasoning. In Findings of the Association for Computational Linguistics: NAACL 2022 (pp. 2071–2078). Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.findings-naacl.159
