Rception: Wide and Deep Interaction Networks for Machine Reading Comprehension

Abstract

Most models for machine reading comprehension (MRC) focus on recurrent neural networks (RNNs) and attention mechanisms, though convolutional neural networks (CNNs) are also used for time efficiency. However, little attention has been paid to leveraging CNNs and RNNs together in MRC. For a deeper understanding, humans sometimes need local information about short phrases and sometimes need global context over long passages. In this paper, we propose a novel architecture, Rception, to capture and leverage both local deep information and global wide context. It fuses different kinds of networks and hyper-parameters horizontally rather than simply stacking them layer by layer vertically. Experiments on the Stanford Question Answering Dataset (SQuAD) show that our proposed architecture achieves good performance.
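To make the idea of "horizontal fusion" concrete, the sketch below runs CNN branches with different kernel widths (local phrase features) in parallel with a simple recurrent branch (global context) and concatenates their outputs feature-wise, Inception-style. This is an illustrative numpy toy, not the authors' actual architecture: the function names (`rception_block`, `conv1d`, `simple_rnn`), kernel widths, and dimensions are all assumptions made for the example.

```python
import numpy as np

def conv1d(x, w):
    # x: (seq_len, d_in), w: (k, d_in, d_out); "same"-padded 1-D convolution
    k, d_in, d_out = w.shape
    pad = k // 2
    xp = np.pad(x, ((pad, k - 1 - pad), (0, 0)))
    return np.stack([np.tensordot(xp[t:t + k], w, axes=([0, 1], [0, 1]))
                     for t in range(x.shape[0])])

def simple_rnn(x, wx, wh):
    # Elementary recurrent pass over the whole sequence (global context)
    h = np.zeros(wh.shape[0])
    out = []
    for t in range(x.shape[0]):
        h = np.tanh(x[t] @ wx + h @ wh)
        out.append(h)
    return np.stack(out)

def rception_block(x, rng):
    # Horizontal fusion: parallel CNN branches (local) + RNN branch
    # (global), concatenated along the feature axis rather than stacked
    # vertically. Kernel widths (3, 5) are hypothetical choices.
    seq_len, d = x.shape
    branches = []
    for k in (3, 5):
        w = rng.standard_normal((k, d, d)) * 0.1
        branches.append(np.maximum(conv1d(x, w), 0))  # ReLU
    wx = rng.standard_normal((d, d)) * 0.1
    wh = rng.standard_normal((d, d)) * 0.1
    branches.append(simple_rnn(x, wx, wh))
    return np.concatenate(branches, axis=-1)  # (seq_len, 3 * d)

rng = np.random.default_rng(0)
x = rng.standard_normal((7, 4))        # toy passage: 7 tokens, dim 4
out = rception_block(x, rng)           # fused representation: (7, 12)
```

The key design point the abstract emphasizes is that the branches see the same input side by side, so the fused representation carries both short-range and long-range features at every position.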

APA

Zhang, X., & Wang, Z. (2020). Rception: Wide and Deep Interaction Networks for Machine Reading Comprehension. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 13987–13988). AAAI Press.
