Comparing Attention-based Convolutional and Recurrent Neural Networks: Success and limitations in machine reading comprehension


Abstract

We propose a machine reading comprehension model based on the compare-aggregate framework with two-staged attention that achieves state-of-the-art results on the MovieQA question answering dataset. To investigate the limitations of our model as well as the behavioral differences between convolutional and recurrent neural networks, we generate adversarial examples to confuse the model and compare its performance to that of humans. Furthermore, we assess the generalizability of our model by analyzing how its inference differs from human inference, drawing upon insights from cognitive science.

Citation (APA)

Blohm, M., Jagfeld, G., Sood, E., Yu, X., & Vu, N. T. (2018). Comparing Attention-based Convolutional and Recurrent Neural Networks: Success and limitations in machine reading comprehension. In CoNLL 2018 - 22nd Conference on Computational Natural Language Learning, Proceedings (pp. 108–118). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/k18-1011
