Does Structure Matter? Encoding Documents for Machine Reading Comprehension


Abstract

Machine reading comprehension is a challenging task, especially when querying documents with deep and interconnected contexts. Transformer-based methods have shown strong performance on this task; however, most of them still treat documents as flat sequences of tokens. This work proposes a new Transformer-based method that reads a document as tree slices. It contains two modules, one for identifying the most relevant text passage and one for selecting the best answer span, which are not only jointly trained but also jointly consulted at inference time. Our evaluation results show that the proposed method outperforms several competitive baseline approaches on two datasets from varied domains.
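The abstract describes two modules, a passage-relevance scorer and an answer-span scorer, that are jointly consulted at inference time. A minimal sketch of what such joint inference could look like is given below; the function names, the additive combination of log-probabilities, and the toy scores are all illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of joint inference over passage and span scores.
# Assumption: each module emits logits, and the best (passage, span) pair
# maximizes log P(passage) + log P(span | passage).
import math


def joint_best_answer(passage_scores, span_scores):
    """Return (score, passage_id, span_text) maximizing the joint log-score.

    passage_scores: dict mapping passage id -> relevance logit
    span_scores: dict mapping passage id -> list of (span_text, logit)
    """
    # Softmax-normalize passage logits so scores across passages are comparable.
    z = sum(math.exp(s) for s in passage_scores.values())
    log_p_passage = {p: s - math.log(z) for p, s in passage_scores.items()}

    best = None
    for pid, spans in span_scores.items():
        # Softmax-normalize span logits within each passage.
        zs = sum(math.exp(s) for _, s in spans)
        for text, s in spans:
            score = log_p_passage[pid] + (s - math.log(zs))
            if best is None or score > best[0]:
                best = (score, pid, text)
    return best


# Toy example: a confidently relevant passage can outweigh a slightly
# better span found in a less relevant passage.
passages = {"p1": 0.2, "p2": 1.5}
spans = {
    "p1": [("in 1991", 2.0)],
    "p2": [("Tim Berners-Lee", 1.0), ("CERN", 0.5)],
}
print(joint_best_answer(passages, spans))
```

Consulting both modules at inference, rather than using the span scorer alone, keeps the model from committing to a high-scoring span drawn from a passage the relevance module considers unlikely.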

Citation (APA)

Wan, H., Feng, S., Gunasekara, C., Patel, S. S., Joshi, S., & Lastras, L. A. (2021). Does Structure Matter? Encoding Documents for Machine Reading Comprehension. In NAACL-HLT 2021 - 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference (pp. 4626–4634). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.naacl-main.367
