Natural Questions is a challenging machine reading comprehension benchmark with two-grained answers: a long answer (typically a paragraph) and a short answer (one or more entities inside the long answer). Despite the effectiveness of existing methods on this benchmark, they treat these two sub-tasks separately during training, ignoring their dependencies. To address this issue, we present a novel multi-grained machine reading comprehension framework that models documents at their natural levels of granularity: documents, paragraphs, sentences, and tokens. We utilize graph attention networks to obtain representations at all of these levels so that they can be learned simultaneously. The long and short answers are then extracted from the paragraph-level and token-level representations, respectively. In this way, we can model the dependencies between the two-grained answers so that they provide evidence for each other. We jointly train the two sub-tasks, and our experiments show that our approach significantly outperforms previous systems on both the long-answer and short-answer criteria.
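To make the graph-attention idea concrete, the following is a minimal, self-contained sketch of a single-head graph attention update over a toy hierarchical graph (one paragraph node connected to its token nodes). It is not the paper's implementation: the graph, the weight matrix `W`, and the attention vector `a` are illustrative placeholders, and a real system would use a deep-learning framework with learned parameters and multiple heads.

```python
import math

def leaky_relu(x, slope=0.2):
    # LeakyReLU nonlinearity used on raw attention scores.
    return x if x > 0 else slope * x

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gat_layer(features, neighbors, W, a):
    """One graph-attention update: each node aggregates its neighbors'
    linearly transformed features, weighted by softmax-normalized
    scores e_ij = LeakyReLU(a . [W h_i ; W h_j])."""
    # Linear transform W h for every node.
    h = [[dot(row, f) for row in W] for f in features]
    out = []
    for i, nbrs in enumerate(neighbors):
        # h[i] + h[j] is list concatenation, i.e. [W h_i ; W h_j].
        scores = [leaky_relu(dot(a, h[i] + h[j])) for j in nbrs]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]  # stable softmax
        z = sum(exps)
        alphas = [e / z for e in exps]
        # Weighted sum of neighbor representations.
        out.append([sum(al * h[j][k] for al, j in zip(alphas, nbrs))
                    for k in range(len(h[0]))])
    return out

# Toy example: token nodes 0-2 and one paragraph node 3, with
# bidirectional paragraph<->token edges (hypothetical values).
features = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]]
neighbors = [[3], [3], [3], [0, 1, 2]]
W = [[1.0, 0.0], [0.0, 1.0]]  # identity transform, for clarity
a = [0.1, 0.2, 0.3, 0.4]      # attention vector over [W h_i ; W h_j]
updated = gat_layer(features, neighbors, W, a)
```

After one such update, the paragraph node's representation is a convex combination of its tokens' representations, which is the mechanism by which paragraph-level (long-answer) and token-level (short-answer) representations can inform each other.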
Zheng, B., Wen, H., Liang, Y., Duan, N., Che, W., Jiang, D., … Liu, T. (2020). Document modeling with graph attention networks for multi-grained machine reading comprehension. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 6708–6718). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.acl-main.599